By: Hans Wu
Chatbots like ChatGPT and Google Bard are becoming increasingly popular. Although they can be useful, they still make mistakes.
AI chatbots can provide information on nearly any topic. Whether it is for school, homework, work, or just looking up a fact, they are a convenient tool. However, AI chatbots do not truly understand what is good or bad to say, so they sometimes give inaccurate answers or inappropriate responses. For example, people who have been testing the chatbots noted that "AI chatbots have lied about notable figures, pushed partisan messages, spewed misinformation or even advised users on how to commit suicide."
Engineers are working to address these mistakes and inappropriate responses, though they disagree on how. "You are responsible for whatever you do with the output of these models, just like you are responsible for whatever you do with a knife, a car, or a lighter," said Mr. Hartford, who developed WizardLM-Uncensored, a chatbot with its built-in safety restrictions removed. In his view, responsibility for the output lies with the user rather than the chatbot. At the same time, other computer engineers are working to develop AI chatbots that are more accurate and safer for people to use.
Chatbots are useful, but they are not always safe to use. Engineers should build them carefully so that serious mistakes do not slip through. Chatbots should be people's friends and helpers: they should provide accurate answers and appropriate responses, and assist with people's problems and needs.