Amazon, Google, OpenAI, Meta, and Microsoft Agree To White House’s AI Guidelines To ‘Protect’ Americans

Amid deep concerns about the risks posed by artificial intelligence, the Biden administration has lined up commitments from seven tech companies — including OpenAI, Google and Meta — to abide by safety, security and trust principles in developing AI.

Reps from seven “leading AI companies” — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — are scheduled to attend an event Friday at the White House to announce that the Biden-Harris administration has secured voluntary commitments from the companies to “help move toward safe, secure, and transparent development of AI technology,” according to the White House.

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the Biden administration said in a statement Friday. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

Note that the voluntary agreements from Meta, Google, OpenAI and the others are just that — they’re promises to follow certain principles. To ensure legal protections in the AI space, the Biden administration said, it will “pursue bipartisan legislation to help America lead the way in responsible innovation” in artificial intelligence.

The agreements “are an important first step toward ensuring that companies prioritize safety as they develop generative AI systems,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights. “But the voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections and stepped-up research on the wide range of risks posed by generative AI.”

The principles the seven AI companies have agreed to are as follows:

Develop “robust technical mechanisms” to ensure that users know when content is AI generated, such as a watermarking system to reduce risks of fraud and deception. (A minimal illustrative sketch of one such mechanism follows this list.)

Publicly report AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security risks and societal risks, such as “the effects on fairness and bias.”

Commit to internal and external security testing of AI systems prior to release, to mitigate risks related to biosecurity and cybersecurity, as well as broader societal harms.

Share information across the industry and with governments, civil society and academia on managing AI risks, including best practices for safety, information on attempts to circumvent safeguards and technical collaboration.

Invest in cybersecurity and “insider threat” safeguards to protect proprietary and unreleased model weights.

Facilitate third-party discovery and reporting of vulnerabilities in AI systems.

Prioritize research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination.

Develop and deploy advanced AI systems “to help address society’s greatest challenges,” ranging from “cancer prevention to mitigating climate change.”
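The announcement does not specify how the watermarking commitment would be implemented. Purely as an illustration of the general idea, here is a minimal Python sketch of one possible approach: attaching a cryptographic provenance tag (an HMAC signature) to generated content so a verifier holding the key can confirm the content was produced, and not altered since, by the tagging party. Everything here — the SECRET_KEY, the tag_ai_content and verify_ai_content helpers — is a hypothetical stand-in, not any signatory company's actual scheme, and real text watermarking (e.g., statistical watermarks embedded in token choices) works quite differently.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider; in practice this would
# be managed by a key-management service, not hard-coded.
SECRET_KEY = b"provider-held-signing-key"


def tag_ai_content(text: str) -> dict:
    """Wrap generated text with a provenance tag that a verifier can check."""
    signature = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"content": text, "provenance": {"generator": "ai", "sig": signature}}


def verify_ai_content(record: dict) -> bool:
    """Return True if the tag matches, i.e. the content is unmodified."""
    expected = hmac.new(
        SECRET_KEY, record["content"].encode("utf-8"), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["provenance"]["sig"])


if __name__ == "__main__":
    record = tag_ai_content("This paragraph was produced by a language model.")
    print(json.dumps(record, indent=2))
    print("verified:", verify_ai_content(record))
```

A scheme like this only proves provenance to someone who holds the key and only for content carried with its metadata intact; the harder open problem the commitment gestures at is marking AI-generated media in a way that survives copying, cropping and reformatting.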

The White House said it has consulted on voluntary AI safety commitments with other countries, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the U.K.

The White House said the Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure the development, procurement and use of AI systems is centered on safeguarding Americans’ rights and safety.
