Queen Mary University of London's Professor Maria Liakata has secured a £4.38 million grant to tackle a critical challenge in Artificial Intelligence (AI). A distinguished expert in Natural Language Processing (NLP) and a Turing AI Fellow, Professor Liakata will lead a prestigious RAI UK Keystone project addressing the pressing issue of sociotechnical limitations in Large Language Models (LLMs). The initiative forms part of a £31 million strategic investment from the UK Government aimed at advancing responsible AI research and innovation.
Large Language Models (LLMs), such as those behind ChatGPT and virtual assistants, are state-of-the-art AI models trained on extensive textual data. They can produce human-like text, generate creative content, translate between languages, and answer questions informatively. Despite these impressive capabilities, however, the rapid integration of LLMs into safety-critical sectors like healthcare and law has prompted serious concerns.
Professor Liakata underscored the significance of the project, stating: "We have a unique opportunity to leverage the potential of LLMs to enhance services and operational efficiencies in healthcare and law, all while mitigating the inherent risks associated with deploying inadequately understood systems."
Despite well-documented limitations, including biases, privacy risks, and a lack of explainability, LLMs are increasingly being deployed in sensitive domains. Judges, for example, are using ChatGPT to summarize courtroom proceedings, raising questions about the accuracy of the resulting timelines and the risk of perpetuating racial bias in parole determinations. Likewise, public medical Q&A platforms powered by LLMs may disseminate incorrect or biased information because of these underlying shortcomings.
Acknowledging the significant potential for harm, Professor Liakata emphasized the project's objective of ensuring that society reaps the benefits of LLMs while averting adverse consequences. By prioritizing the healthcare and legal sectors, which play pivotal roles in the UK economy and present both substantial risks and opportunities for groundbreaking advances, the project will concentrate on two primary goals:
1. Enhancing the responsible development and deployment of AI technologies, particularly LLMs, to build public trust and maximize their benefits across diverse sectors.
2. Leveraging interdisciplinary collaboration to foresee and address the challenges posed by AI advances, working in tandem with policymakers and stakeholders to amplify the beneficial impacts of AI on society.
Overall, the project spearheaded by Professor Maria Liakata represents a critical endeavor in promoting responsible AI innovation and ensuring the ethical use of cutting-edge technologies like LLMs. The initiative aligns closely with Queen Mary University of London's commitment to responsible AI research and practice.