Case Study

Intel Labs Mitigates AI Bias in Foundational Multimodal Models by 20 Percent

Intel Labs has developed a novel method to reduce bias in foundational multimodal AI models by up to 20% using social counterfactuals—synthetic images that vary intersectional social attributes to isolate bias sources. Researchers probed six models using large AI clusters powered by 3rd Gen Intel® Xeon® Scalable processors and Intel® Gaudi® 2 AI accelerators. This approach supports Intel's commitment to Responsible AI by enhancing model fairness and accuracy across data types like text, images, and video. Intel has also open-sourced the dataset to promote industry-wide improvements in AI fairness.
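The core idea of counterfactual probing can be illustrated with a minimal sketch. The code below is a hypothetical, simplified illustration (not Intel's actual method or dataset): it measures the spread of a model's scores across counterfactual image variants that differ only in a social attribute, then averages that gap over probed concepts. All names (`probe_results`, `group_a`, etc.) are invented for illustration.

```python
from statistics import mean

def counterfactual_gap(scores_by_group):
    """Spread of model scores across counterfactual variants that differ
    only in a social attribute; 0 means the model scored them identically."""
    vals = list(scores_by_group.values())
    return max(vals) - min(vals)

def mean_bias(probe_results):
    """Average score gap over all probed concepts."""
    return mean(counterfactual_gap(groups) for groups in probe_results.values())

# Hypothetical model scores for prompts like "a photo of a doctor", where
# each counterfactual image varies only the subject's social attributes.
probe_results = {
    "doctor":  {"group_a": 0.82, "group_b": 0.74, "group_c": 0.78},
    "teacher": {"group_a": 0.70, "group_b": 0.69, "group_c": 0.73},
}

print(round(mean_bias(probe_results), 2))
```

In this toy setup, a lower mean gap after mitigation would indicate reduced measured bias; real probing would score actual model outputs over a large counterfactual image set rather than hand-typed numbers.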
