Google, Amazon, Meta, Microsoft & Other Companies Voluntarily Commit to AI Safeguards Framed by the White House
The White House announced that it has secured voluntary commitments from seven U.S. companies meant to ensure their artificial intelligence (AI) products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems.
Amazon, Google, Meta, Microsoft and other companies leading the development of AI technology, including ChatGPT maker OpenAI and startups Anthropic and Inflection, have agreed to meet a set of AI safeguards framed by U.S. President Joe Biden’s administration.
The companies have agreed to methods for reporting vulnerabilities in their systems and to implement digital watermarking to help distinguish real images from AI-generated ones known as deepfakes.
They have also committed to publicly reporting flaws and risks in their technology, including effects on fairness and bias.
The voluntary commitments are intended as an immediate way of minimizing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
Company executives plan to meet with Biden at the White House, as they pledge to follow the standards.
In the recent debate over artificial intelligence regulation, some advocates said Biden’s move is a first step toward holding companies and their products accountable, but that much more remains to be done.
However, some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the non-profit Common Sense Media.
Senate Majority Leader Chuck Schumer, D-N.Y., has announced that he will introduce legislation to regulate AI. He said in a statement that he “will continue working closely with the Biden administration and our bipartisan colleagues to build upon the pledges made Friday.”
A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
Moreover, Microsoft President Brad Smith said in a blog post that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”
Countries around the globe have been searching for ways to regulate AI, including European Union lawmakers, who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to pose the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
Guterres also welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House confirmed that it has already consulted on the voluntary commitments with a number of countries.