Big Tech Companies Pledge to Improve Safety and Trust in AI, says White House
In a recent announcement, the White House revealed that seven leading AI development companies have made voluntary commitments to enhance the safety and fairness of artificial intelligence. The companies, including Amazon, Google, Microsoft, and OpenAI, have pledged to prioritize safety, security, and trust in AI technology.
The Biden-Harris administration acknowledged both the immense promise and the potential risks of artificial intelligence. To maximize the benefits of AI while safeguarding society, the administration argued, the companies developing these technologies have an obligation to act responsibly and to ensure their products are safe.
The organizations involved have agreed to conduct internal and external security testing of their AI systems before releasing them to the public, with particular attention to high-risk areas such as cybersecurity and biosecurity; the external testing will be carried out by independent experts. Additionally, they will share best practices for safety and collaborate with other developers on technical solutions to existing challenges.
To address concerns surrounding privacy and security, the companies pledged to protect proprietary and unreleased model weights, the core intellectual property of an AI system. Where these weights are kept confidential for safety or commercial reasons, the companies will implement safeguards against unauthorized access or theft. Furthermore, the companies will provide users with a mechanism for reporting vulnerabilities, clearly state the capabilities and limitations of their models, and specify their appropriate use cases.
In an effort to combat disinformation and deepfake technology, the group of companies also committed to developing techniques like digital watermarking systems to label AI-generated content. Moreover, they expressed their intention to focus on safety research addressing bias, discrimination, and privacy issues. They aim to leverage AI technology for positive purposes such as advancing cancer research and addressing climate change.
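The announcement does not specify how such labeling would work, and production watermarking schemes (statistical token-level watermarks for text, or C2PA-style signed metadata for media) are considerably more involved. As a rough illustration of the underlying idea, a provider could attach a cryptographic tag to generated content so that the label can later be verified and tampering detected. The key name and helper functions below are hypothetical, a minimal sketch using Python's standard library:

```python
import hmac
import hashlib

# Assumption: the provider holds a secret signing key; in practice key
# management and the watermark scheme itself would be far more complex.
SECRET_KEY = b"provider-held-signing-key"

def label_content(text: str) -> tuple[str, str]:
    """Attach a provenance tag to AI-generated text."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return text, tag

def verify_label(text: str, tag: str) -> bool:
    """Check whether a tag matches the text, detecting tampering."""
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

text, tag = label_content("This paragraph was generated by an AI model.")
print(verify_label(text, tag))          # intact content verifies: True
print(verify_label(text + "!", tag))    # edited content fails: False
```

Unlike this sketch, a true digital watermark must survive inside the content itself (e.g., biased token sampling in text or imperceptible pixel perturbations in images) rather than travel alongside it, since detached metadata is trivially stripped.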
While these voluntary commitments are a step in the right direction, some experts argue that they lack true enforceability. The White House acknowledged the potential need for stricter regulations to govern machine learning systems in the future. In contrast to Europe’s AI Act, the US and UK have not yet implemented legislation specifically targeting the development and deployment of AI. However, authorities in the US, like the Justice Department and the Federal Trade Commission, have emphasized the importance of adhering to existing laws protecting civil rights, fair competition, consumer protection, and more.
As the field of artificial intelligence continues to evolve, the responsibility falls on both developers and policymakers to balance fostering innovation with ensuring that AI technology is safe and ethical. The voluntary commitments made by these prominent tech companies reflect a growing recognition of the importance of addressing potential risks and building trust in AI systems.