By: Andrew Yao
Last Friday, seven leading A.I. companies agreed to voluntary safeguards on their technology’s development, aimed at minimizing the potential risks of artificial intelligence. The commitments were discussed with President Joe Biden at the White House.
The voluntary safeguards are only an early, tentative step as Washington and governments across the world seek to put in place legal and regulatory frameworks for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to make sure consumers can spot A.I.-generated material.
As part of the safeguards, the companies agreed to security testing, in part by independent experts; research on bias and privacy concerns; information sharing about risks with governments and other organizations; development of tools to fight societal challenges like climate change; and transparency measures to identify A.I.-generated material.
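The agreement does not prescribe a particular watermarking method. As a purely illustrative sketch, one published approach ("green list" token biasing, proposed by Kirchenbauer et al. in 2023) nudges a language model toward a secretly chosen subset of words at each step, so a detector can later test whether a text uses those words far more often than chance would allow. The toy Python below, with an invented vocabulary and made-up parameters, shows only the statistical idea, not any company's actual system:

```python
import hashlib
import math
import random

# Toy sketch of one published watermarking idea ("green list" token
# biasing, Kirchenbauer et al., 2023). The vocabulary, GAMMA, and the
# fully biased generator below are illustrative assumptions, not part
# of the companies' agreement.

VOCAB = [f"tok{i}" for i in range(1000)]  # hypothetical vocabulary
GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step


def green_list(prev_token: str) -> set:
    """Deterministically split the vocabulary based on the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(GAMMA * len(VOCAB))])


def detect(tokens: list) -> float:
    """Z-score for how far the green-token count exceeds chance."""
    n = len(tokens) - 1
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))


rng = random.Random(0)

# A watermarked generator prefers green tokens; we simulate the
# strongest possible bias by always choosing one.
marked = ["tok0"]
for _ in range(200):
    marked.append(rng.choice(sorted(green_list(marked[-1]))))

# Unmarked text picks tokens uniformly, so about GAMMA of them land green.
unmarked = ["tok0"] + [rng.choice(VOCAB) for _ in range(200)]

print(f"watermarked z-score: {detect(marked):.1f}")   # large (about 14)
print(f"unmarked z-score:    {detect(unmarked):.1f}")  # near zero
```

In practice such marks can be weakened by paraphrasing, and images, audio, and video call for different watermarking techniques, which is part of why the commitments stop at transparency measures rather than a single standard.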
The announcement comes as the companies continue their race to outdo each other with new versions of A.I. that offer powerful new ways to create text, photos, music, and video with little human input. But the technological leaps have prompted fears about the spread of disinformation and dire warnings of a “risk of extinction” as artificial intelligence becomes more sophisticated and humanlike. For example, artificial intelligence could design more effective chemical weapons, supercharge disinformation campaigns, and exacerbate social inequality.
A planned executive order is expected to involve new restrictions on advanced semiconductors (materials whose conductivity falls between that of conductors and insulators) and on the export of large language models. Those models are hard to secure: much of the software can fit, compressed, on a thumb drive (a data storage device that combines flash memory with an integrated USB interface and is typically removable, rewritable, and much smaller than an optical disc).
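A rough back-of-envelope calculation suggests why. The 70-billion-parameter model and 64 GB drive below are assumptions chosen for illustration, not figures from the article, but they show how compression (quantization, which stores each parameter in fewer bits) shrinks a large model toward thumb-drive scale:

```python
# Back-of-envelope arithmetic for the export-control point above.
# The parameter count and drive size are illustrative assumptions.

PARAMS = 70e9   # assumed parameter count of a large language model
DRIVE_GB = 64   # assumed thumb-drive capacity

for label, bytes_per_param in [
    ("16-bit weights (uncompressed)", 2.0),
    ("8-bit quantized", 1.0),
    ("4-bit quantized", 0.5),
]:
    size_gb = PARAMS * bytes_per_param / 1e9
    verdict = "fits" if size_gb <= DRIVE_GB else "does not fit"
    print(f"{label}: {size_gb:.0f} GB -> {verdict} on a {DRIVE_GB} GB drive")
```

Under these assumptions, the uncompressed weights (140 GB) do not fit, but the 4-bit version (35 GB) does, which is why restricting the export of model weights is far harder than restricting physical shipments of chips.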