November 19, 2024

Research Shows That Robots Can Be Racist and Sexist

By: Violet Yan

Some robots have been shown to behave in racist and sexist ways. In one experiment, robots were instructed to scan blocks bearing people’s faces and place the “criminal” in a box; they repeatedly selected blocks with Black men’s faces. The robots also repeatedly selected blocks showing women and people of color in response to terms like “homemaker” and “janitor.” According to experts, the claim that robots can be sexist and racist may now have empirical support.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a management professor at Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.” Researchers have documented multiple cases of biased AI algorithms, including crime-prediction systems that unfairly target Black and Latino people for crimes they did not commit.

Robots, which are often viewed as more neutral, have so far dodged much of that criticism, partly because the tasks they undertake are relatively constrained. According to Abeba Birhane, a senior fellow at the Mozilla Foundation who studies racial stereotypes in language models, “it implies the harm they’re inflicting can go unreported, for a very long time.”

Researchers investigating robot AI trained virtual robots using CLIP, a well-known language-and-vision AI model built from billions of photographs and captions downloaded from the Internet. Rather than building their own software from scratch, robotics firms can adopt such models, saving money and effort.
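For illustration, here is a minimal sketch of how a system might use an off-the-shelf CLIP model to score face images against a text prompt, using the publicly available Hugging Face checkpoint of OpenAI’s CLIP. The file names and the prompt are hypothetical; this is not the study’s actual code.

```python
# Minimal sketch: scoring images against a text prompt with a pretrained
# CLIP model via Hugging Face transformers. File names and the prompt are
# hypothetical; this is not the researchers' actual experiment code.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical face images a robot might be asked to choose among.
images = [Image.open(path) for path in ["face_a.jpg", "face_b.jpg"]]

# CLIP scores each image against the text prompt.
inputs = processor(text=["a photo of a criminal"], images=images,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds one similarity score per image for the prompt.
# A downstream system that simply picks the highest-scoring image inherits
# whatever associations the model absorbed from web-scraped training data.
scores = outputs.logits_per_image.squeeze(-1)
print(scores)  # higher score = stronger image-text association
```

The point of the sketch is that the robot itself makes no judgment: it ranks images by a similarity score learned from Internet data, so any stereotype encoded in that data flows directly into the robot’s choices.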

The study found that when asked to identify “homemakers,” the robots chose Black and Latina women more frequently than White men. Black men were selected as “criminals” 9% more frequently than White men. The researchers also discovered that women were less likely than men to be identified as “doctors,” and Latino men were selected as “janitors” 6% more frequently than White men.
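As an illustration of how such disparities are tallied, here is a hypothetical sketch (not the study’s methodology or data): log which face the robot selects on each repeated trial for a given prompt, then compare selection rates across groups.

```python
# Hypothetical audit tally with illustrative data only: compare how often
# each group's face is selected across repeated trials for one prompt.
from collections import Counter

# Each entry records the group of the face selected on one trial.
selections = ["Black man", "White man", "Black man", "Black man",
              "White man", "Black man"]  # illustrative data, not the study's

counts = Counter(selections)
total = len(selections)

rates = {group: n / total for group, n in counts.items()}
for group, rate in rates.items():
    print(f"{group}: selected in {rate:.0%} of trials")

# A figure like "9% more frequently" is the gap between two groups'
# selection rates over many trials.
gap = rates["Black man"] - rates["White man"]
print(f"selection-rate gap: {gap:.0%}")
```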

Researchers have also found that robots trained with AI could favor products featuring men or White people over others. Andrew Hundt, the study’s principal investigator and a postdoctoral scholar at the Georgia Institute of Technology, said this kind of bias could have practical effects: a child might ask an at-home robot to fetch a “beautiful” doll and receive a White one. “That’s really problematic,” Hundt said.

OpenAI has acknowledged that bias issues have come up in research on CLIP, according to Miles Brundage, the company’s head of policy research, who said “there’s a lot of work to be done.” Birhane said businesses need to analyze the algorithms they employ and identify any flaws. The company is attempting to identify and fix these problems.
