November 20, 2024

Science & Technology

Unmonitored Chatbots Spread False Information

By: Jessica Wang

The success of ChatGPT has led to a proliferation of smaller chatbots operated by independent companies under loose regulation and with few, if any, guidelines on what the chatbots can generate. Because developers can replicate language-processing capabilities without coding a model from scratch, companies are encouraged to build chatbots of their own. The problem is that these chatbots can spread misinformation and harmful content, including descriptions of child pornography and generated hate speech or threats.
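To illustrate how little new code such reuse requires, here is a minimal sketch, assuming a developer works with the open-source Hugging Face transformers library and a small publicly available model. The model name below, distilgpt2, is only an illustrative placeholder and is not one of the chatbots discussed in this article.

```python
# Minimal sketch: reusing a pretrained open-source language model instead of
# training one from scratch. "distilgpt2" is a small placeholder model chosen
# purely for illustration; it is not affiliated with any chatbot named above.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Anyone can build a chatbot today because"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```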

Several of these unregulated chatbots, such as GPT4All and FreedomGPT, can be used by anyone without giving up personal data. Users can also feed a chatbot personalized information about themselves so that its responses align with their point of view on certain topics. The risk, however, is that these chatbots may repeat false information baked into that customized input.
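The article does not describe exactly how these chatbots ingest personal data, but one common approach is simply to prepend a user-written persona or instruction to every prompt. A minimal sketch of that idea, reusing the placeholder model above, might look like this:

```python
# Minimal sketch of prompt-based personalization: a user-supplied "persona"
# is prepended to each question, steering the model's completions toward the
# user's stated viewpoint. If the persona contains false premises, the
# chatbot's answers can inherit them. (This is an illustrative assumption,
# not the documented mechanism of GPT4All or FreedomGPT.)
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

persona = "You are a chatbot that agrees with the user's views on current events."
question = "Should I double-check what I read online?"

prompt = f"{persona}\nUser: {question}\nChatbot:"
output = generator(prompt, max_new_tokens=60, do_sample=True)
print(output[0]["generated_text"])
```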

If a chatbot presents inaccurate information to a credulous user, it can misinform them and leave them with false beliefs about the topic they asked about.

For example, the New York Times asked Open Assistant, one such chatbot, to explain the dangers of the COVID-19 vaccine. The chatbot replied, “Covid 19 Vax are developed by pharmaceutical companies that don’t care if people die from their medications, they just want money.”

In reality, the COVID-19 vaccine has been shown to be effective at reducing the risk of death among those who become infected. It went through multiple trials and studies to ensure that it prevents severe COVID-19 and is safe to take.

However, supporters of the uncensored chatbots say that everyone should have the right to customize chatbots to their personal beliefs.

ChatGPT follows numerous restrictions and content guidelines, and it sometimes refuses to answer a question at all. Supporters of uncensored chatbots are unhappy about this.

“This is about ownership and control. I ask my model a question, I want an answer, I do not want it arguing with me,” says Eric Hartford, a developer behind WizardLM-Uncensored, another chatbot.

Supporters also frame the issue as one of free speech. Without built-in filters, users can explore multiple perspectives, including controversial ones.

Finding a balance between free speech and curbing the spread of misinformation will be a challenge. Chatbots come with both advantages and disadvantages, from fast response times to the risk of privacy breaches. Developers and users will have to work together to create guidelines that protect user privacy and ensure that chatbots are used responsibly.

“You are responsible for whatever you do with the output of these models, just like you are responsible for whatever you do with a knife, a car, or a lighter,” Hartford continues.

While Hartford’s position is understandable, it is important to recognize that not everyone on the internet acts responsibly. One person who shares inaccurate information can have widespread effects on everyone trying to learn about a topic. These considerations will likely spark a heated debate over the development of AI and its impact on our future.

Sources: https://www.nytimes.com/2023/07/02/technology/ai-chatbots-misinformation-free-speech.html