By: Alvin Fang
Recently, scientists conducted a study in which they asked specially programmed robots to scan blocks with people’s faces on them and then put the “criminal” in a box. The robots repeatedly chose blocks showing the faces of Black men, revealing the racist and sexist behavior that A.I. can pick up.
There were other examples of this behavior in the study. For instance, the robots consistently responded to words like “homemaker” and “janitor” by picking blocks showing women and people of color.
“The robots have learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech. “We’re at risk of creating a whole generation of racist and sexist robots.”
This is especially problematic because AI will be widely used in the future. Many companies have paid billions of dollars for robots to replace humans in jobs such as stocking shelves, delivering items, and even caring for hospital patients.
“It is nearly impossible for a robot to have artificial intelligence and not have biased decisions, but that doesn’t mean we should give up,” said AI researcher Abeba Birhane. “Companies should audit the algorithms they use, diagnose the ways they exhibit flawed behavior, and develop methods to fix those problems.”