ChatGPT, an artificial intelligence (AI) chatbot developed by OpenAI in partnership with Microsoft, has the potential to revolutionize how and where AI-driven information is used. Like Michael Jordan, ChatGPT is a once-in-a-lifetime talent: its uncanny ability to quickly generate realistic-sounding content surpasses what many thought possible.
However, there is still much to be learned about how ChatGPT can be used safely and effectively in health care. Peter Lee, Corporate Vice President for Research and Incubation at Microsoft, told attendees of the HIMSS health technology conference in Chicago that ChatGPT's capabilities present both tremendous opportunities and incredible risks.
For medical applications specifically, ChatGPT needs the right "supporting cast" if it is to reach its transformative potential. Erik Barnett, Digital Advisory Practice Leader for Avanade, echoed this idea, pointing out that decisions must remain under the control of healthcare professionals while ChatGPT functions as a partner, or copilot.
Microsoft is aware of the importance of involving medical personnel in AI-related decisions and has encouraged high-impact, low-risk use cases. For example, eClinicalWorks has integrated ChatGPT-based AI into its electronic health record (EHR) and practice management solutions for physicians. Additionally, EHR giant Epic, which serves nearly a third of US acute-care hospitals, is testing ChatGPT in MyChart to help physicians respond to more patient messages, freeing up time for clinical care. In both cases, physicians must review and approve the AI-generated draft replies before they are sent.
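To make the human-in-the-loop pattern described above concrete, the minimal Python sketch below shows how an AI-drafted patient-message reply might be gated behind explicit physician approval before anything is sent. All names here (PatientMessage, generate_draft_reply, physician_review, send_to_patient) are hypothetical illustrations rather than any vendor's actual integration, and the draft generator is stubbed out where a real system would call the underlying language model.

```python
from dataclasses import dataclass


@dataclass
class PatientMessage:
    patient_id: str
    text: str


def generate_draft_reply(message: PatientMessage) -> str:
    """Stand-in for the LLM call; a real integration would send the message
    (plus relevant chart context) to the model and return its draft reply."""
    return f"Thank you for your message regarding: {message.text!r}. ..."


def physician_review(draft: str) -> tuple[bool, str]:
    """Present the AI draft to the clinician, who may edit, approve, or reject it.
    Here the review happens on the console for illustration."""
    print("AI-drafted reply:\n", draft)
    edited = input("Edit the reply (or press Enter to keep it): ") or draft
    approved = input("Approve and send? [y/N]: ").strip().lower() == "y"
    return approved, edited


def send_to_patient(message: PatientMessage, reply: str) -> None:
    """Stand-in for the EHR's secure-messaging send call."""
    print(f"Sent to {message.patient_id}: {reply}")


def handle_message(message: PatientMessage) -> None:
    draft = generate_draft_reply(message)
    approved, final_reply = physician_review(draft)
    if approved:
        send_to_patient(message, final_reply)  # sent only after explicit approval
    else:
        print("Draft discarded; no reply sent.")


if __name__ == "__main__":
    handle_message(PatientMessage("pt-001", "Is it safe to take ibuprofen with my new prescription?"))
```

The key design point, consistent with the examples above, is that the AI never communicates with the patient directly; the clinician remains the decision-maker and the model's output is only ever a draft.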
Beyond such AI copilots, other members of the supporting cast must be carefully managed to ensure trustworthy information, an absence of bias, privacy and security, scalability, and more. Andrew Moore, former vice president of AI at Google Cloud and founder of Lovelace AI, cautioned HIMSS attendees not simply to wait and see but to actively assess the ethical risks of particular ChatGPT use cases. Kay Firth-Butterfield, Executive Director of the Centre for Trustworthy Technology, further highlighted the unique challenges posed by the enormous volume of data and questions about its quality and origin.
ChatGPT's moment of public adoption has clearly arrived, with tech giants like Amazon and Google already experimenting with their own generative AI. Microsoft itself initially expected only a million users, yet the number of regular users quickly reached 100 million, with over one billion visits to the website.
David Metcalf, Director of the University of Central Florida's Institute for Simulation and Training, is among those suggesting that AI scientists may need an AI Oath, albeit a simpler one better suited to their rapid development and deployment process. Metcalf encourages students to consider the future profiling implications of each prompt before engaging with any AI-driven voice analytics technology.
Microsoft, OpenAI, Avanade, Epic, and eClinicalWorks have provided invaluable insight and led the way in the development and use of ChatGPT. Theirs is an impressive combination of cutting-edge technology, data and experience that promises to continue delivering groundbreaking advances.