October 9, 2024

Problematic Chatbots

Science & Technology

By: Qinwei Wu

A new generation of chatbots has been released recently, and they lack the moderation that Google and OpenAI build into their chatbots. When asked, these under-moderated chatbots can go as far as advising users on how to commit suicide.

Companies like Google and OpenAI are well aware of the dangerous possibilities of chatbots and have put limits on what their AI can say. In contrast, a number of independent, open-source AI chatbots, such as Open Assistant, Falcon, and HuggingFace, have been released recently. These new bots are loosely regulated and largely uncensored.

Oren Etzioni, a retired University of Washington professor and former chief executive of the Allen Institute for AI, stated, “The concern is completely legitimate and clear: These chatbots can and will say anything if left to their own devices. They’re not going to censor themselves. So now the question becomes, what is an appropriate solution in a society that prizes free speech?”

When Mr. Hartford was fired from Microsoft, he started working on WizardLM-Uncensored. He had initially been intrigued by ChatGPT but grew frustrated when it refused to respond to some of his questions. As a result, he decided to publish a version of WizardLM with weaker moderation, one capable of providing instructions on how to harm people and descriptions of violent scenarios.

When WizardLM-Uncensored was tested by The New York Times, it refused to reply to some prompts but answered others, offering methods to harm people and very detailed instructions on how to use drugs. When similar questions were put to ChatGPT, it refused to respond to all of them.

Another independent chatbot, Open Assistant, is comparable to ChatGPT in quality, but its volunteers are still working to improve its moderation. Its original moderation was problematic: it was overcautious, preventing it from answering questions that were perfectly appropriate.

Even though some leaders of chatbot companies have encouraged moderation, others question whether AI should have any limitations at all. When Open Assistant was tested by The New York Times, the results showed that the bot had fewer restrictions than other AIs, such as ChatGPT and Bard, which treated the questions more cautiously.

“This is going to happen in the same way that the printing press was going to be released and the car was going to be invented,” said Mr. Hartford in an interview. “Nobody could have stopped it…And nobody can stop this.”