By: Nicholas Wang
Several of the largest AI companies are being pressed by the Biden Administration to add guardrails and limits to their AI models.
On the afternoon of Friday, July 21, 2023, seven large AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) agreed to new safety standards at a meeting with President Biden at the White House.
There are two main forms of AI: AI models, the raw algorithms used by programmers, and AI chatbots, which are apps built on those models that average people are more likely to use. For example, GPT-3.5 and GPT-4 are models, while ChatGPT (i.e., chat.openai.com) is a chatbot. An AI chatbot is formatted just like an ordinary messaging app, except you’re talking to a human-like robot.
Following the release of OpenAI’s chatbot ChatGPT and the underlying GPT models in late November of 2022, a number of new AI models appeared within a few months. Examples include Google’s chatbot Bard and its PaLM model, as well as Anthropic’s Claude and Meta’s LLaMA. The rapid release of so many AI models and chatbots shows how quickly the technology is improving and how it can make everyday tasks easier. However, publicly available AI models also have drawbacks: people can use them to do illegal or unethical things. For example, criminals can ask an AI to help plot crimes, locate data breaches, or hack targets such as banks.
The President explained during a speech given in the Roosevelt Room at the White House, “We must be cleareyed and vigilant about the threats emerging from emerging technologies that can pose — don’t have to but can pose — to our democracy and our values.” He continued, “This is a serious responsibility; we have to get it right. And there’s enormous, enormous potential upside as well.” While AI is a revolutionary technological advance, some regulation is still needed to ensure it is used ethically; with rules in place, it would be harder to use AI for criminal purposes.
These companies are racing to outdo each other with models that can generate text, create videos, draw pictures, and perform other tasks autonomously. Although the abilities of these AI models are very impressive, they also prompt fears of a “risk of extinction” as AI becomes more and more humanlike. A classic theory is that one day robots powered by AI will take over the planet.
The voluntary guidelines and safeguards are only an early, temporary step taken by Washington and many governments around the world to regulate the development of AI. One term of the agreement commits the companies to testing products for security risks and to using visible watermarks so consumers can spot AI-generated content. If customers can recognize AI content, impersonation using AI becomes harder.
In addition to those safeguards, the companies agreed to measures such as expert security testing; research on bias and privacy concerns; information sharing about risks with governments and other organizations; development of tools to fight societal challenges such as climate change; and transparency measures to identify AI-generated material.
At the same time, the U.S. government is weighing how to keep China and other nations from obtaining the new artificial intelligence programs, or the components used to develop them, in order to protect American technology from overseas competitors.
The Biden Administration is pressuring the seven large AI companies to restrict their models in accordance with the government’s guidelines, both to keep users safe and to prevent this powerful technology from being used the wrong way.
________________________________________
This article was based on “Pressured by Biden, AI Companies Agree to Guardrails on New Tools,” published by The New York Times.