Leading AI Companies Will Implement Safeguards in AI Tools

Seven leading Artificial Intelligence (AI) companies based in the United States have agreed to voluntarily implement safeguards in AI development. The companies are Amazon, Anthropic, Google, Meta (formerly Facebook), Inflection, Microsoft, and OpenAI. Their announcement came after a meeting with President Biden at the White House, where the companies pledged to implement new measures for safety, security, and trust in AI technologies.

President Biden emphasized the need to be wary of the potential threats emerging technologies pose to democracy and its values, while recognizing the significant benefits AI offers. The voluntary safeguards include security testing by independent experts, research into bias and privacy concerns, sharing information about risks with governments and other organizations, developing tools to address societal challenges such as climate change, and flagging AI-generated material as such.

Lawmakers around the globe continue to face the challenge of regulating the rapidly evolving AI industry. In Europe, laws are expected to be enacted later this year, while in the United States, Congress has not yet reached an agreement on legislative measures.
