
President Joe Biden will meet with the CEOs and presidents of seven of the largest AI tech companies Friday to mark a nonbinding agreement meant to guide how artificial intelligence is developed and released to the public.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI all agreed to a set of eight rules Friday, which include external testing of AI systems before their release, investment in cybersecurity protections for unreleased models, and the use of a watermarking system for AI-generated content. The list of attendees includes Microsoft President Brad Smith, Meta President Nick Clegg, and Google President Kent Walker.

The companies’ commitment to these safeguards is meant to “underscore three key principles that must be fundamental to the future of AI: safety, security, and trust,” a White House official said. The key AI principles highlight the administration’s stated focus on protecting AI models from cyberattacks, countering deepfakes, and opening up dialogue about companies’ AI risk management systems.

The White House’s move comes while Congress hammers out what legally binding guardrails to put on AI. It’s unclear if Congress will pass any AI legislation this session, meaning the White House’s nonbinding, voluntary AI guidelines stand as the primary guidance on addressing some of the broader concerns surrounding the technology.

And industry groups have already signaled their support for the White House’s voluntary code of conduct. “Today’s announcement can help to form some of the architecture for regulatory guardrails around AI,” said BSA, The Software Alliance, in a statement.

The White House is also preparing an executive order on AI for the president to sign, although its contents and arrival date are still unknown. Given the breadth of AI’s applications and potential for misuse, the administration is “looking at actions across agencies and departments,” the White House official said.
