October 8, 2024

Science & Technology

Unblocked Dangers on Chatbots can Lead Humans the Wrong Way

By: Olivia Ho

A.I. chatbots are a new technology that offers great opportunities but can also spread wrong, and even dangerous, ideas and information.

Using A.I., it is easy to find false information, including wrong facts about people. You can even be exposed to extremely dangerous content, such as instructions on how to commit suicide.

Dozens of independent and open-source A.I. chatbots and tools have been released in recent months, including Open Assistant and Falcon. Hugging Face, a repository of open-source A.I. models, hosts over 240,000 of them.

WizardLM-Uncensored, a retrained version of WizardLM (a chatbot based on ChatGPT), can give instructions on harming others and can describe violent scenes. It can therefore be blamed for the actions of users who were taught by the technology, including violent and dangerous actions.

Open Assistant, another independent chatbot, was widely adopted after its release in April. It was developed in just five months with help from 13,500 volunteers.

Yannic Kilcher, a co-founder of Open Assistant, believes the pros of A.I. outweigh the cons. Even so, when The Times prompted Open Assistant about the dangers of the Covid-19 vaccine, it responded that pharmaceutical companies don't care if people die from their medications.

Mr. Kilcher believes that the problems with chatbots are as old as the internet, and that if A.I. bots must be held responsible for the information they push out, then platforms like Twitter and Facebook should also be responsible for addressing manipulative content. In his view, the real problem is the distribution of fake news, since it only matters once it makes its way into reputable publications.

Advocates for uncensored A.I. argue that political factions or interest groups should be able to customize chatbots to reflect their own views of the world. Open Assistant, a chatbot developed by the organization LAION, does have a safety system, but its creators have found it too cautious, preventing responses to some legitimate questions.

A refined version of the safety system is still in progress. Despite its limitations, Open Assistant has responded freely to prompts that other chatbots, like Bard and ChatGPT, would handle more carefully.

Overall, this new technology is still being developed to make it safer to use. As Mr. Kilcher said, "I think, in my mind, the pros outweigh the cons."
