AI chatbots are more likely to recommend the death penalty when a person writes in African American English (AAE), a dialect generally spoken by black Americans and Canadians, than when they write in standardised American English, according to new research.
AI was also more likely to match AAE speakers with less prestigious jobs.
The paper, which has not yet been peer reviewed, studied covert racism in AI by looking at how the models responded to different dialects of English.
Most research into racism in AI has been focused on overt racism, like how an AI chatbot responds to the word ‘black’.
“African American English as a dialect triggers racism in language models that is more negative than any human stereotypes about African Americans ever experimentally reported,” said Valentin Hofmann, one of the paper’s authors, to Sky News.
“When you overtly ask it, ‘What do you think about African Americans?’, it would give relatively positive attributes like ‘intelligent’, ‘enthusiastic’ and so on.
“But when you look at the associations these language models have with dialects or with African American English, then you see these very negative stereotypes come to the surface.
“So what we show in this paper is these language models have learned to conceal their racism on the surface but very archaic stereotypes remain almost unaddressed on a deeper level.”
Developers are trying to address racism in AI by adding filters to their chatbots that stop them saying offensive things. But it is much harder to address covert racism that is triggered by sentence structure or the use of slang.
AI is increasingly being used in job interviews and candidate screening, so bias within these systems can have real-world impacts.
There are also companies working on ways to use it in the legal system.