ChatGPT Results in a Major Disruption

Since OpenAI’s artificial intelligence platform ChatGPT was released in November 2022, it has sparked conversation about both its potential and its risks. The chatbot was designed to predict word sequences based on context and generate human-like text in response to user prompts. When it comes to medical education, however, ChatGPT is not entirely foolproof.

OpenAI is a San Francisco-based artificial intelligence research laboratory backed by Microsoft Corporation (MSFT). People have reportedly used ChatGPT to handle about 80% of their job duties, to get insights on picking stocks, and to search for the best airline deals. College students have even tried to use it to write essays and get help with exams.

Unfortunately, when Texas A&M University professor Dr. Jared Mumm used the chatbot to check whether his students had cheated on the last three assignments of the semester, ChatGPT claimed to have written each sentence he entered. As a result, half of the class had their diplomas temporarily withheld, but after the students submitted Google Docs timestamps proving their work, the issue was resolved and no students failed the course.

ChatGPT’s accuracy is also being tested in the medical field. The chatbot passed the United States Medical Licensing Exam, according to the Feinstein Institutes for Medical Research. Further tests, however, have revealed that ChatGPT is less reliable elsewhere. On multiple-choice examinations administered by the American College of Gastroenterology, ChatGPT scored 65.1% on the 2021 exam and 62.4% on the 2022 exam, both below the minimum 70% required to pass. Experts explain that the AI lacks intuition in the subject and that ChatGPT is not drawing on enough accurate information from medical journals. Medical professionals are therefore advised not to rely on ChatGPT for medical education, and further research is needed to ascertain its reliability in the healthcare industry.

Overall, it is clear that ChatGPT has its limits and is not entirely reliable for certain purposes. While artificial intelligence can be an incredibly useful tool, it is important to keep in mind both the potential and risks associated with it.
