Opinion: We’re Not Ready to Be Diagnosed by ChatGPT
As technology advances, more and more tasks can be automated and performed quickly and efficiently. AI systems like ChatGPT, now powered by the latest model, GPT-4, can pass medical licensing exams and even deliver bad news to patients with something resembling human compassion. Yet despite its ability to imitate human intuition, GPT-4’s medical capabilities only extend so far. Taking medical decisions out of physicians’ hands and relying fully on an AI system could have serious repercussions.
GPT-4 is powered by natural language processing (NLP) algorithms, which allow it to respond to many tasks in a relatively human-like manner. Andrew Beam, professor of biomedical informatics at Harvard, explained that the system pulls off its imitation of human intelligence by predicting which words should come next, essentially a very sophisticated autocomplete. Researchers have also noticed that training these models on ever more language gives them unexpected abilities, such as solving complex math problems. Furthermore, Isaac Kohane, a physician and chairman of the biomedical informatics program at Harvard Medical School, has attested to GPT-4’s ability to give accurate diagnoses. However, it works very differently from a human doctor, since it can only reason through a problem on a step-by-step basis.
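For readers curious what the “autocomplete” analogy means in practice, the sketch below shows next-word prediction using the small, openly available GPT-2 model through the Hugging Face transformers library. It is only an illustration of the underlying technique, not of GPT-4 itself, whose model is proprietary and vastly larger; the medical-sounding prompt is a made-up example.

```python
# A minimal sketch of the "autocomplete" idea described above: given a prompt,
# a language model assigns a probability to every candidate next word (token).
# GPT-4's weights are not public, so this uses the small, openly available
# GPT-2 model from Hugging Face `transformers` purely as an illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The patient presents with chest pain and shortness of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The last position's logits give the distribution over the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={float(prob):.3f}")
```

Running this prints the five words the model considers most likely to come next, each with a probability. That is all the “prediction” amounts to; everything else, from exam answers to diagnoses, is built on repeating this step word after word.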
GPT-4 could help simplify the tedious paperwork that keeps physicians away from crucial face-to-face interactions with patients. Dr Kohane further remarked that the system might one day be able to provide health care to those who lack access to top medical experts. Despite its unprecedented capability to offer a second opinion, GPT-4 remains unreliable in medical settings: its answers can change when a question is phrased slightly differently, and its behaviour can be swayed by opinions already embedded in the prompt. All in all, experts agree that GPT-4 cannot yet replicate human medical judgment, and yet it is advancing quickly.
Microsoft’s Peter Lee and former Bloomberg journalist Carey Goldberg, together with Dr Kohane, recently authored a book, The AI Revolution in Medicine: GPT-4 and Beyond, about the system’s remarkable feats, but many worry about the tool being misused. For now, any advice from the AI system should be treated with a great deal of caution.
ChatGPT is an incredible piece of engineering, but it is nowhere near understanding when and where it would be practical or ethical to follow its recommendations. AI is already starting to affect life-and-death decisions, and it is important to be aware of its limits. Doctors will have to learn to use this tool with great skill, and it is our responsibility to do all we can to be prepared.