OpenAI, the research lab behind a series of AI tools, recently hired a team of 50 experts to examine the risks of its latest model, GPT-4. One member of the team, Andrew White, an associate professor of chemical engineering at the University of Rochester in New York state, said there is a ‘significant risk’ that individuals could use the tool to do ‘dangerous chemistry’, because it can give users access to chemical information drawn from scientific papers and directories.
OpenAI released GPT-4 in March, touting it as its most advanced artificial intelligence chatbot. According to the team’s technical paper on the new model, the tool could also help users write hate speech or find unlicensed guns.
These risks have caught the attention of prominent figures such as Twitter CEO Elon Musk, who, along with hundreds of AI experts, signed an open letter last month calling for a six-month pause on the development of any tools more powerful than GPT-4. The letter argues that such systems should be released to the public only once their risks can be managed and their effects are clearly positive.
OpenAI was founded by tech entrepreneurs, including Elon Musk and Sam Altman, with a mission to create AI that has a positive impact on society. It is backed by prominent investors, including Microsoft, Accel, and Khosla Ventures. The lab is still at a relatively early stage and continues to look for experts to join its team and carry on the research that could ensure its AI tools are safe for the public.