Peer reviewers of research proposals have been accused of misusing OpenAI’s language model, ChatGPT, in assessing grant applications for the Australian Research Council (ARC). Researchers have reported receiving feedback that appeared to be written by artificial intelligence (AI), containing generic wording and phrases resembling elements of the ChatGPT interface. The discovery prompted concerns about the time pressures faced by academics and a potential lack of quality control within the council. Detailed assessor reports, typically written by experts in the relevant fields, are crucial in deciding which proposals receive government funding. The use of AI in this manner has been deemed unacceptable by Australian Minister for Education Jason Clare, who has directed the council to implement measures to prevent such incidents from occurring in the future.
The allegations emerged in relation to the latest round of grant funding for Discovery Projects. Some researchers received assessor reports containing phrases such as “regenerate response”, reminiscent of the button of that name in the ChatGPT interface. Although the reports were generally positive, they lacked specificity and simply echoed the content of the proposals. Concerned by this, one researcher submitted a complaint to the council, which subsequently removed the report. The researcher suggested that the use of AI might reflect the pressure on Australian academics to manage their workloads, and possibly a lack of internal quality control at the council. They also highlighted the potential damage to academic discourse when AI-generated responses replace the critical engagement expected in the review process.
Assessor reports play a crucial role in evaluating grant proposals and determining the allocation of government funds. Typically, these reports are prepared by experts in closely related fields. However, the suspected use of ChatGPT has raised concerns about applicants’ ability to question or challenge the comments made in the reports: without concrete evidence to back up their suspicions, researchers may find it difficult to respond effectively. The researcher who initially raised the issue emphasized the importance of being able to identify inconsistencies and engage in robust academic discussion.
Reacting to the allegations, Minister Clare stated that the use of AI in this manner is unacceptable and instructed the council to take immediate action and implement measures to prevent the misuse of AI tools in the assessment process. While AI can provide valuable support in various domains, its use in peer review, particularly without transparency or appropriate control mechanisms, raises concerns about the integrity and quality of the evaluation process.
In conclusion, peer reviewers assessing research proposals for the Australian Research Council have been accused of misusing ChatGPT. The generic wording and interface phrases found in assessor reports have raised questions about the role of AI in the review process and its potential impact on quality and integrity. The Australian government has condemned the practice and directed the council to implement preventive measures. However, concerns remain about the ability to challenge, or engage in meaningful academic discourse with, AI-generated responses offered in place of carefully considered evaluations. The incident prompts a broader discussion about the appropriate use of AI in peer review and the need for transparency and accountability in the grant funding process.