The development of Generative Pre-trained Transformer 4 (GPT-4) by OpenAI, the company co-founded and led as president by Greg Brockman, has sparked a technological revolution in the world of artificial intelligence. The autoregressive language model’s ability to produce human-like text in mere seconds has astounded many business leaders, prompting a race to embrace its possibilities. David Shrier, professor of practice (AI and innovation) at Imperial College Business School in London, has also remarked on GPT-4’s spellbinding capabilities, noting that it can quickly build websites, invent games and help create pioneering drugs.
While the evidence of GPT-4’s capabilities is undeniable, many business leaders are blindly ignoring the dangers of “confidently incorrect” AI. Shrier attributes this to the race to take advantage of ChatGPT, with little consideration for the numerous pitfalls that remain unknown. This has become a serious problem: companies risk re-orienting themselves around ChatGPT without understanding the potential implications, a mistake that could cost their businesses dearly.
OpenAI and Greg Brockman stand to benefit from these impressive advances in AI technology, from cutting-edge research to a prestigious position at the helm of the industry. Based in San Francisco, the company’s mission is to foster safe, reliable, and trustworthy artificial intelligence applications that support the greater social good. Brockman has worked to make AI development accessible to all, believing it is OpenAI’s job to bridge the gap between research and productization.
David Shrier is a celebrated futurist, writer, speaker, and professor of practice (AI and innovation) at Imperial College Business School in London, and the author of three books on emerging technologies. He frequently advises banks, venture capital firms, and government agencies in the US, India, and the UK on artificial intelligence, quantum computing, biotechnology, and digital health. He has recently warned business leaders to be more aware of the dangers of “confidently incorrect” AI, rather than merely embracing its potential, in order to avoid significant risks.