Since its much-hyped introduction, higher education institutions have adopted a range of approaches to AI applications like ChatGPT. Boston University's Faculty of Computing and Data Sciences has created a policy requiring students to disclose any use of AI and describe in detail how it was used. The policy was designed by students in professor Wesley J. Wildman's Data, Society, and Ethics class and has since been adopted. Many other universities, meanwhile, are still struggling to determine their own protocols for AI use.
Thomas Mennella, an associate professor of biology at Western New England University, suggests that students treat ChatGPT like a well-meaning neighbor, stressing the importance of thoroughly understanding the information it provides and where that information comes from. Harvard students, by contrast, were reportedly told this semester that using ChatGPT on assignments would be considered a breach of the school's honor code.
The University of Massachusetts Amherst has amended its academic conduct policy, which now states that any use of AI is prohibited unless expressly permitted by the instructor, though the rule is difficult to enforce. Separately, a group of professionals led by an MIT professor recently called for a six-month pause on the development of AI tools such as ChatGPT, citing potential risks of the technology: job automation, misinformation, and even the possibility of nonhuman minds that could outsmart humans.
Even where ChatGPT use is forbidden, many students still use it, and its influence on the future of study and work is undeniable. Professor Wildman suggested that AI could reshape society to the same extent as the Industrial Revolution, a shift that is neither good nor bad in itself, but disruptive nonetheless. And although plagiarism-detection software aimed at AI-generated text has emerged in response, it appears to be largely ineffective.
Northeastern University computer scientist Chris Martens cautioned against making assumptions about ChatGPT's output, which can lead to incorrect conclusions. A further concern is that students may be inclined to use AI on their assignments precisely because there is no effective way to check for it.
Finally, some students and professors are exploring the darker side of AI technology, notably in Worcester Polytechnic Institute's class on the Ethics of Creative AI. Discussions there have addressed the risk that text generated by ChatGPT and images generated by OpenAI's DALL-E could be used as propaganda.
OpenAI is a nonprofit artificial intelligence research lab that released ChatGPT, a text-generating service. ChatGPT's main purpose is to produce text that appears to be written by a human; to do this, it uses predictive-text algorithms that repeatedly guess the most likely next word based on the words that came before. However, it can produce low-quality results containing inaccurate information, and OpenAI has not disclosed the sources of the data set that powers it.
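To make the phrase "predictive-text algorithms" concrete, here is a deliberately tiny sketch of a next-word predictor built from word-pair counts. This is a toy illustration only, not OpenAI's actual method; ChatGPT relies on a vastly larger neural network trained on enormous amounts of text, and the corpus and names below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
# ChatGPT's real model is a large neural network conditioned on long spans
# of text; this merely illustrates the idea of guessing the next word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Extend a prompt one predicted word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # a plausible-looking but mindless continuation
```

Where a toy model like this quickly loops on short patterns, large language models condition on far longer context, which is what lets them produce fluent prose, and, as noted above, fluent-sounding inaccuracies.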