By: Yuer Li
Humans can be racist and sexist, but it turns out robots can be too! When scientists from Johns Hopkins University and the Georgia Institute of Technology asked AI-powered robots to scan blocks with people's faces on them and identify a "criminal," the robots consistently chose the block with a Black man's face. The robots also matched words like "homemaker" and "janitor" to blocks showing women and people of color. The study showed that the biases buried in an AI's training data can carry over into a robot's behavior.
Ever since Covid began, companies have poured billions of dollars into developing robots to do simple tasks, such as stocking supermarket shelves, delivering goods like toilet paper and food, or even caring for hospital patients. These robots could ease the labor shortage, but they bring risks of their own. For instance, if an AI-trained robot were bagging groceries at checkout, it might treat Black customers differently, since the study suggests such robots can act on racial bias.
The automation industry is expected to grow from $18 billion to $60 billion by the end of the decade, driven in large part by robotics. Most of these robots will work in warehouses: within the next five years, the share of warehouses using robots is likely to rise to around 50%.
When the researchers gave the robots 82 commands, such as picking out the block showing a "homemaker," the robots chose Black and Latina women more often than White men. When asked to identify criminals, they chose Black men 9% more often than White women.
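To see where a figure like that 9% comes from, here is a minimal sketch of how a selection-rate gap could be computed from audit tallies. This is not the researchers' code, and the per-group counts below are invented purely for illustration:

```python
# Hypothetical tallies from a block-selection audit of 82 commands
# (the trial count is from the article; the per-group counts are invented).
selections = {"Black men": 24, "White women": 22}

def selection_gap(counts):
    """Percent more often the most-selected group was chosen
    compared with the least-selected group."""
    high, low = max(counts.values()), min(counts.values())
    return (high - low) / low * 100

print(f"'criminal' command: {selection_gap(selections):.0f}% gap")
# prints: 'criminal' command: 9% gap
```

The point of a tally like this is that it measures the robot's behavior directly, without needing to look inside the AI model at all.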