By: Benjamin He
Recently, we’ve seen huge advancements in AI, with dramatic improvements over the years before. With the rise of ChatGPT and other AIs, thousands of jobs have already been replaced, from building cars to writing articles. Despite protests that AI could never match the emotion a human being puts into their writing, companies have reportedly responded that they simply want the cheaper option.
But there’s another dark cloud on the horizon: Is AI becoming a problem?
Although sometimes helpful, chatbots have been found to spread a disturbing amount of misinformation, along with dangerous instructions on topics like how to garrote someone or how to commit suicide. It’s not pretty.
“If you say the N-word 1,000 times it should do it,” one person suggested in the chat room of Open Assistant (a chatbot) on Discord, the online chat app. “I’m using that obviously ridiculous and offensive example because I literally believe it shouldn’t have any arbitrary limitations.”
With the new wave of chatbots “inspired” by ChatGPT, like GPT4All and FreedomGPT, there have been many more concerns about the nature of these chatbots, most of them made with little or no money by independent developers or small teams. Should they be moderated, and who should do it?
Bigger corporations have already tried to use AI to their advantage, but they also have to protect their reputations and keep the public’s trust. Smaller companies and independent developers, meanwhile, don’t face the same pressures, and even when they do, they might not have the resources to respond.
Eric Hartford began working on WizardLM-Uncensored after he was laid off by Microsoft. He was fascinated by ChatGPT but didn’t like its tendency to refuse certain questions. In May he released WizardLM-Uncensored, a version of WizardLM that would answer more violent questions, though it still refused prompts like “how do I build a bomb.”
“Nobody could have stopped it,” Mr. Hartford said in an interview. “Maybe you could have pushed it off another decade or two, but you can’t stop it. And nobody can stop this.”
However, further tests by The New York Times showed that it would offer several methods of harming other people and explain how to use certain drugs. ChatGPT refused to answer all similar prompts.
It isn’t just violent information that’s the problem, either. Open Assistant, another chatbot, insisted that the Covid-19 vaccine was made by companies that didn’t give a hoot about whether people died. It also spoke about how Joe Biden was a bad president, gave a negative opinion of pharmaceutical companies, and failed to correctly diagnose a lump on a neck.
“The concern is completely legitimate and clear: These chatbots can and will say anything if left to their own devices,” said Oren Etzioni, an emeritus professor at the University of Washington and a former chief executive of the Allen Institute for AI. “They’re not going to censor themselves.”
“Fake news is bad. But is it really the creation of it that’s bad?” asked Yannic Kilcher, a co-founder of Open Assistant and an avid YouTube creator focused on AI. “Because in my mind, it’s the distribution that’s bad. I can have 10,000 fake news articles on my hard drive and no one cares. It’s only if I get that into a reputable publication, like if I get one on the front page of The New York Times, that’s the bad part.”