ChatGPT has recently taken the industry by storm, stirring both intrigue and unease over its potential implications. James Collier, co-founder of Huel, recently asked whether the tool is any more sinister than the false nutrition information already spread across social networks by quacks. ChatGPT is a language-processing tool from OpenAI capable of tasks such as writing emails, code, and many other kinds of text.
The GPT-4 version was launched last month, adding the ability to accept image inputs and process more than 25,000 words. A recent LinkedIn discussion gave people a platform to brainstorm the impact of ChatGPT on the nutrition industry, with Collier asking whether experts can distinguish text written by an AI from text written by a human being. Nature published a research paper concluding that scientists could not tell whether abstracts had been written by a bot, raising concerns about how the public is supposed to tell the difference.
In an interview with NutraIngredients, Collier spoke about his worries. He highlighted the problem posed by influencers who think they know it all after reading half a book: ChatGPT can conveniently help them produce content that seems authentic and scholarly, adding to the falsehoods already circulating on the internet. On the flip side, the software can also make life easier for nutritionists and writers. Collier tested the tool by pitting his own text on serotonin and mental health against ChatGPT's version, then asked his LinkedIn followers to guess which was which. Only fifty percent got it right, no better than chance, which shows how convincing the bot's output can be.
Collier's perspective is that the tool's limitations, namely its inability to appraise the strengths of studies or to challenge its own output, mean it is not a great threat. He also believes ChatGPT will not fall prey to the Dunning-Kruger effect, since it does not believe it knows everything when it doesn't.
Meanwhile, researchers are working to make use of the tool more effective through a set of guidelines they have created, and Elsevier has introduced a policy applicable to manuscripts written with its help.
AI technology is certainly here to stay, and its form and function can be explored and put to convenient use. As long as we ensure its application does not extend beyond its actual capabilities, it need not be considered particularly scary.