Harvard Braces for AI in Higher Education: New Faculty Guidance Raises Questions
As artificial intelligence (AI) permeates more and more fields, Harvard University is preparing for a world in which AI tools are commonplace in higher education. The Faculty of Arts and Sciences (FAS), Harvard's largest academic division, recently released its first public guidance for professors on the use of generative AI in their courses.
The guidance, issued by the Office of Undergraduate Education, gives professors general information on how generative AI works and on its potential academic applications. Rather than enforcing a uniform AI policy, it outlines three approaches professors can take toward AI in their courses: a maximally restrictive policy, a fully encouraging policy, or a mixed approach.
According to Dean of Science Christopher W. Stubbs, the guidance is grounded in the principle that faculty members own their courses. He emphasizes that faculty should become informed about AI's impact on their course objectives and communicate their course policies clearly to students.
The FAS guidance builds on University-wide AI guidelines issued in July, which focused primarily on protecting non-public data. In line with those guidelines, the FAS instructs faculty not to input student work into AI systems, since third-party AI platforms own the prompts entered and the responses generated. To enable AI experimentation while mitigating security and privacy risks, Harvard University Information Technology is collaborating with third-party AI companies on an AI Sandbox tool, a secure environment in which Harvard affiliates can experiment with generative AI.
To further educate faculty on the implications of generative AI in STEM and writing courses, the FAS hosted informational sessions, now publicly available as recordings. The sessions explore potential uses of AI as a learning tool, such as real-time information synthesis, code generation, and argument evaluation, and offer strategies for AI-proofing coursework, including written exams and multi-step writing processes.
The FAS discourages the use of AI detection tools, however, deeming them unreliable. And despite the guidance's emphasis on clear AI policies, many courses across Harvard still lack them: numerous fall semester syllabi in departments such as Government, English, Molecular and Cellular Biology, and Computer Science contain no AI policy at all.
Where syllabi do address AI, the policies vary widely: some courses fully restrict tools like ChatGPT, others permit them under specific circumstances, many explicitly outline unacceptable uses such as answering homework questions or writing code, and some forbid AI entirely except for designated assignments.
As Harvard navigates the growing role of AI in higher education, it faces a pressing need to ensure that course syllabi set clear expectations for the use of generative AI. The university aims to balance AI's potential benefits against concerns about privacy and reliability.
In the coming months, Harvard will continue refining its approach to AI in education, equipping faculty with the tools and guidelines needed to navigate a rapidly evolving landscape. By fostering informed decision-making and clear communication, the university seeks to harness AI's potential while preserving the integrity of its academic programs.