AI models remain racially biased despite more varied training data

According to new research, AI algorithms can exhibit racial bias even when trained on data that is more representative of different ethnic groups.

An international team of researchers analyzed the accuracy of algorithms in predicting various cognitive and health measures from fMRI brain scans. The scientists wanted to determine whether appropriate representation of different population groups could reduce the models' bias against African Americans. Experiments were performed on two datasets comprising tens of thousands of fMRI scans of human brains. To isolate how racial differences affect the performance of predictive models, the researchers minimized the impact of other variables such as age and gender. The results have been published in Science Advances.
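The paper's exact matching procedure isn't detailed here, but a common way to limit the influence of confounds such as age and sex is to regress them out of the target measure before training. A minimal sketch of that idea, with all data and variable names hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def residualize(y, confounds):
    """Remove the linear effect of confounds (e.g., age, sex)
    from a behavioral score before it is used as a prediction target."""
    reg = LinearRegression().fit(confounds, y)
    return y - reg.predict(confounds)

# Hypothetical data: 1000 subjects, a behavioral score, and two confounds.
rng = np.random.default_rng(0)
confounds = np.column_stack([rng.uniform(20, 60, 1000),   # age
                             rng.integers(0, 2, 1000)])   # sex (binary-coded)
score = rng.normal(size=1000) + 0.05 * confounds[:, 0]    # score with an age effect
score_clean = residualize(score, confounds)
```

This is only one of several standard approaches; studies may instead match samples on confounds or include them as covariates in the model itself.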
“When predictive models were trained on data dominated by White Americans (WA), the out-of-sample prediction errors were generally higher for African Americans (AA) than for WAs,” the researchers report. That result may not be surprising, but the differences did not disappear even when the algorithms were trained on sets composed of both WA and AA subjects, or on AA subjects alone. The algorithms performed worse on African American samples: prediction errors remained higher for AA than for WA. Assuming that neurobiological and psychometric measures do not differ by ethnicity, there should be no WA-AA difference in prediction accuracy even when the model is trained only on AA data.
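The comparison behind that finding can be reproduced schematically: train a regression model on one group's fMRI-derived features and measure out-of-sample error separately for each group. A minimal sketch with synthetic data, assuming ridge regression on flattened connectivity features (the study's actual models and features may differ):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def make_group(n, n_features=400):
    """Synthetic stand-in for fMRI-derived features and a behavioral score."""
    X = rng.normal(size=(n, n_features))
    w = rng.normal(size=n_features)
    y = X @ w + rng.normal(scale=5.0, size=n)
    return X, y

X_wa, y_wa = make_group(800)   # hypothetical White American sample
X_aa, y_aa = make_group(200)   # hypothetical African American sample

# Train on WA data only, then compare held-out error per group.
X_tr, X_te_wa, y_tr, y_te_wa = train_test_split(X_wa, y_wa, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

print("WA test MSE:", mean_squared_error(y_te_wa, model.predict(X_te_wa)))
print("AA test MSE:", mean_squared_error(y_aa, model.predict(X_aa)))
```

With both synthetic groups drawn from the same distribution, the two error figures come out similar. The study's point is that on real data they do not, even when the training set is AA-only.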


“The result may have been influenced by several steps of the neuroimaging preprocessing. For example, in preprocessing it is usual to align individual brains to a standard template so that they can be compared. But these brain templates were usually created from populations of White people. The same goes for predefined functional atlases, where voxels in brain images are grouped into regions based on their functional homogeneity. The mapping of these functional atlases was still often based on datasets dominated by White or European populations in terms of sample size, so the data collected from minority participants may not be represented entirely accurately. You also need to consider whether the psychometric tests we use today actually reflect the underlying psychological concepts correctly for minority groups,” Jingwei Li, a researcher at the Institute of Neuroscience and Medicine at the Jülich Research Center in Germany, told The Register.
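The atlas step Li describes sits early in a typical prediction pipeline: voxel time series are averaged within the regions of a predefined parcellation, and region-to-region correlations become the model's input features. A sketch with nilearn, using the Schaefer atlas as a stand-in for whichever parcellation a given study uses (not necessarily the one in this paper; the input file path is hypothetical):

```python
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure

# Fetch a predefined functional atlas. As Li notes, many such atlases
# were derived from datasets dominated by White/European participants.
atlas = datasets.fetch_atlas_schaefer_2018(n_rois=100)

# Average voxel time series within each atlas region.
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)
time_series = masker.fit_transform("subject_func.nii.gz")  # hypothetical scan

# Region-to-region correlations become the features fed to the predictor.
conn = ConnectivityMeasure(kind="correlation")
features = conn.fit_transform([time_series])[0]
```

Because both the registration template and the atlas enter the pipeline before any model is trained, a mismatch between them and a subject's brain can degrade the features themselves, regardless of how diverse the training labels are.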

The algorithms were also applied to the Human Connectome Project dataset, and again they were found to predict traits such as proneness to anger and aggression, or reading ability, more accurately for WA individuals than for AA individuals. Algorithmic bias is a problem the US government is trying to address: the National Institute of Standards and Technology released a report this week that comes to similar conclusions. “Having a more diverse dataset is not enough to make AI algorithms less biased and fairer,” Li adds.

“I would be very careful to state that WA and AA differ in these neurobiological or psychometric measures solely because of their ethnicity. As we mentioned in the article, ethnicity or race is a very complex concept that encompasses historical, social, and educational aspects. We do not want to reinforce [racial] stereotypes nor increase structural racism. Rather, the purpose of this study is to make the case for greater ethnic justice in the specific context of neuroimaging analysis,” Li told The Register.
