US President Joe Biden announced Friday that seven of America’s largest companies working on artificial intelligence (AI) have voluntarily committed to a series of rules designed to ensure the safety of their products. The companies, including Microsoft, Google and Meta, have also promised to allow external experts to test their programs before releasing them.
During a speech at the White House, attended by the CEOs and representatives of the companies, Biden said that under the initiative he proposed, the firms have committed to action in four areas.
These include, among other things, testing the capabilities of AI systems by internal and external experts and publishing the results of those tests. In addition, the companies are to prioritize the security of their systems against cyberattacks, clearly label AI-generated content, and invest in AI-based solutions to society’s biggest challenges, from curing cancer and combating climate change to creating jobs.
New laws and regulations are needed
“These commitments are real and concrete,” the US president noted. At the same time, he said that new laws, regulations and oversight are needed in addition to the companies’ voluntary commitments. Biden said he intends to “take executive action soon to help America lead the way to responsible innovation.” The initiative and the meeting with the president at the White House, already the second in recent months, involved the heads of the seven largest companies developing AI technology: Microsoft, Google, Meta, OpenAI, Anthropic, Inflection AI and Amazon.
Main photo source: EPA/Yuri Gripas / POOL Supplier: PAP/EPA.