By: Matthew Wang
AI storytelling has become enormously popular on social media, with the technology advancing to the point where users often find it difficult or impossible to tell the difference between what is real and what is fabricated.
AI storytelling often takes the form of an AI-generated animated avatar narrating over an AI-generated audio track. The avatars frequently depict young children, telling stories that range from fables and myths to biographies and horror stories.
However, these often lighthearted or educational formats are sometimes exploited by bad actors to misinform audiences or spread racial stereotypes. CBC reports that “In some cases, videos with millions of views feature content that could be deemed inappropriate and even disturbing for kids. Some of the content includes stories of abuse and violence against children, explicitly sexual content and racist stereotypes” (CBC, 2023). Used in this way, such videos risk indoctrinating children: when avatars they may perceive as peers repeatedly spout racist or sexist misinformation, children may begin to accept it as truth and pass it along to their friends, compounding the problem.
Guy Gadney, the founder of the company Charisma.ai, says in a CBC interview that AI “is full of possibilities, many of them good,” but cautions that it can also be used to create harmful content (CBC, 2023). He adds that rules and regulations should be put in place to prevent toxic uses of AI. Gadney, like many others in the field, believes that AI is a powerful tool that should be used in a helpful, educational, and beneficial manner, not as a way to make a quick buck or to push derogatory misinformation.