Following the success of ChatGPT, the AI chatbot developed by Microsoft-backed OpenAI, Microsoft has recently published a set of governance principles for the development and use of AI. Brad Smith, Microsoft’s Vice Chair and President, recently devoted a post to discussing the most appropriate ways of taking advantage of this new AI technology.
The post posits that instead of asking what computers are capable of doing, it is more important to ask what they should do. Smith then lays out five key points for governing AI and keeping it under human control. The first calls for government-led frameworks and regulations focused on the development and use of “highly capable” AI systems. The second encourages operators to build safety features, or “safety brakes,” into AI systems that control critical infrastructure such as electrical grids, water systems, and traffic networks.
The blueprint also promotes transparency and broaches the idea of public-private partnerships. Such partnerships could lay a strong foundation for the use of AI, one that protects people’s rights and ensures that AI remains democratic and sustainable. Finally, the post calls for a new governing agency to manage and license AI models and infrastructure.
Microsoft is leading the way in developing ethical principles to ensure the safety of AI. Its research and development capabilities are unparalleled, and its commitment to delivering top-notch technology deserves to be commended. Brad Smith’s insight and leadership will help create the guardrails needed as AI grows in popularity. He and Microsoft have made it clear that they are dedicated to ensuring AI is used safely, in ways that uphold the rights of individuals and preserve the foundations of democracy.