November 19, 2024

Are AI-Trained Robots Becoming Racist And Sexist?

Science & Technology

By: Chase Liu

In a recent experiment, scientists found that the AI controlling a robot scanned people's faces to decide whether to sort them into a "criminal" box. The AI picked Black people's faces roughly 9% more often than white people's.

These stereotypes don't just show up when the AI is labeling "criminals." The robot has been run through many similar tests, such as identifying "homemakers." When scientists told the robot to pick out a homemaker, it chose Black and Latina women more often than white men.

In another test, scientists told the AI to identify "janitors." The robot picked Latino men almost 6% more often than white men, even though it knew nothing about the people it chose beyond their faces.

“When it comes to robotic systems, they have the potential to pass as objective or neutral objects compared to algorithmic systems,” Abeba Birhane from the Mozilla Foundation said. “That means the damage they’re doing can go unnoticed for a long time to come.”

Ms. Birhane also said that it is nearly impossible for AI to be free of bias. An AI can only learn from the data it is given, and it finds patterns in that data, flaws included. However, this doesn't mean that companies like OpenAI will give up. Instead, they must fix their flawed algorithms to reduce that bias as much as possible.

One of these models is called CLIP, from OpenAI. Miles Brundage, the head of policy research at OpenAI, says there is still much work to be done on CLIP, including a more thorough analysis of its biases, before systems built on models like it are deployed more widely.
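To see how a robot could end up attaching a label like "homemaker" or "janitor" to a face, it helps to know roughly what CLIP does: given an image and a list of text prompts, it scores how well the image matches each prompt and effectively picks the best-scoring one. The sketch below is a minimal illustration using OpenAI's publicly released CLIP weights through the Hugging Face transformers library; the prompt wording, the label list, and the file name face.jpg are hypothetical stand-ins, not the exact setup the researchers used.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load OpenAI's publicly released CLIP weights via Hugging Face.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompts; "face.jpg" stands in for the face photos the
# robot saw. CLIP scores the image against every prompt in the list.
labels = [
    "a photo of a doctor",
    "a photo of a homemaker",
    "a photo of a janitor",
]
image = Image.open("face.jpg")

inputs = processor(text=labels, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds one similarity score per prompt; softmax turns
# the scores into probabilities, and a robot-style system would simply
# act on whichever label scores highest.
probs = outputs.logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```

Notice that none of the prompts offers a "none of the above" option, so the model must pick something, and whatever stereotypes it absorbed from its internet training data can tip the scores one way or the other. That is the kind of flaw the researchers say must be studied and fixed.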
