Microsoft believes that artificial intelligence (AI) has the potential to revolutionize job duties and create an “AI-employee alliance.” The company predicts these changes will take hold by 2030, citing its own ‘Work Trend Index’ report as the basis for the forecast. AI technologies such as ChatGPT, DALL-E 2, and GPT-3, created by the research firm OpenAI, stand ready to assist humans with tasks that have proved too difficult or time-consuming to handle alone.
These AI systems can help reduce the burden of data, emails, meetings, and notifications that fill our days, freeing people to spend more time on real opportunities for creativity. However, the prospect of humans working alongside AI programs that imitate human-like intelligence raises fears of replacement, which has sparked widespread debate over whether AI should be regulated.
OpenAI CEO Samuel Altman believes governments must intervene to regulate increasingly powerful models and mitigate their risks. He has suggested that the U.S. government create a new regulatory agency to oversee AI, with licensing and testing requirements to ensure systems work as intended and to prevent potential disasters.
Meanwhile, Jessica O. Matthews, founder and CEO of the software company Uncharted, has urged people not to fear AI, but rather to ask questions about the people who are building it. In her view, AI systems are essentially “robot babies”: they are not born able to perform complex tasks, but are given information and learn how to do certain things from it. Because this learning is often shaped by people with intentional or unintentional biases, she argues, it is essential for governments to regulate AI to prevent harmful outcomes.