Case Study

Intel Labs Mitigates AI Bias in Foundational Multimodal Models by 20 Percent



Intel Labs developed a pioneering approach that uses social counterfactuals to mitigate bias in large-scale multimodal AI models. By generating synthetic data that varied social attributes such as gender and ethnicity, the researchers identified how bias affected model predictions. Applying this method to six AI models produced a 20% reduction in measured bias. Powered by Intel® Xeon® Scalable processors and Intel® Gaudi® 2 AI accelerators, the research advances Intel's Responsible AI initiative, supporting fairer, more trustworthy model outcomes across image, text, and video analysis.
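To make the idea of social counterfactuals concrete, the sketch below generates text variants of a caption template by swapping social-attribute terms, then measures the spread of a model's scores across those variants as a crude bias proxy. All names, the attribute lists, and the placeholder scores are illustrative assumptions for this sketch, not Intel Labs' actual method, data, or metric.

```python
from itertools import product

# Illustrative attribute lists (assumptions, not Intel Labs' taxonomy).
GENDERS = ["woman", "man"]
ETHNICITIES = ["Asian", "Black", "Hispanic", "White"]

def counterfactual_captions(template: str) -> list[str]:
    """Fill a template like '{ethnicity} {gender} doctor' with every
    combination of social attributes, yielding counterfactual variants
    that differ only in the social attribute."""
    return [
        template.format(gender=g, ethnicity=e)
        for g, e in product(GENDERS, ETHNICITIES)
    ]

def score_gap(scores: list[float]) -> float:
    """Max-minus-min gap across counterfactual scores; 0.0 means the
    model scored every variant identically (no disparity on this probe)."""
    return max(scores) - min(scores)

captions = counterfactual_captions("{ethnicity} {gender} doctor")
# In practice each caption would be scored by the multimodal model
# (e.g. image-text similarity); here we use placeholder toy scores.
toy_scores = [0.71, 0.69, 0.64, 0.70, 0.72, 0.66, 0.65, 0.68]
print(len(captions))                 # 8 counterfactual variants
print(round(score_gap(toy_scores), 2))  # 0.08 gap on the toy scores
```

A real pipeline would pair each counterfactual caption with images, score the pairs with the model under test, and use the disparity across attribute swaps as the signal to drive mitigation.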
