A group of scientists, clinicians, researchers, statisticians, computer scientists, engineers, methodologists, and journal editors has teamed up to create ChatGPT and Artificial Intelligence Natural Large Language Models for Accountable Reporting and Use (CANGARU). The initiative aims to promote responsible use of Large Language Model (LLM) artificial intelligence systems such as ChatGPT, which hold tremendous potential but must be used with caution.
The team is developing standard reporting guidelines for scholarly use, motivated by the need to avoid a Tower of Babel effect in which different parties create their own bespoke guidance and regulations. The cross-discipline initiative seeks a comprehensive, inclusive, and globally relevant set of recommendations, heading off conflicts between competing guidelines produced by groups working independently on the same task.
To achieve this objective, the initiative follows the standardized methodology for developing reporting guidelines, working collaboratively with academic and publishing regulatory organizations.
CANGARU aims to build consensus on disclosure requirements and guidance for reporting LLM use in academic research and scientific writing. The result will be a clear framework for the scholarly community that encourages responsible use of artificial intelligence systems.
By unifying these efforts under a single framework, the initiative will also mitigate conflicts between divergent guidelines. Its importance is considerable, as it will be essential for preventing abuses in the future deployment of LLMs.