Queen Mary University Professor Secures £4.38M Grant to Address AI Challenges

Professor Maria Liakata of Queen Mary University of London has secured a substantial £4.38 million grant to tackle a critical challenge in Artificial Intelligence (AI). A distinguished expert in Natural Language Processing (NLP) and a Turing AI Fellow, Professor Liakata will lead a prestigious RAI UK Keystone project addressing the pressing issue of sociotechnical limitations in Large Language Models (LLMs). The initiative is funded as part of a £31 million strategic investment by the UK Government to advance responsible AI research and innovation.

Large Language Models (LLMs), such as those powering ChatGPT and virtual assistants, are state-of-the-art AI models trained on extensive textual data. They can produce human-like text, generate creative content, translate between languages, and respond to inquiries informatively. Despite these impressive capabilities, however, the rapid integration of LLMs into safety-critical sectors like healthcare and law has prompted serious concerns.

Professor Liakata underscored the significance of this project, stating, "We have a unique opportunity to leverage the potential of LLMs to enhance services and operational efficiencies in healthcare and law, all while mitigating the inherent risks associated with deploying inadequately understood systems."

Despite known limitations such as bias, privacy breaches, and a lack of explainability, LLMs are increasingly being deployed in sensitive domains. For example, judges are using ChatGPT to summarize courtroom proceedings, raising questions about the accuracy of chronological sequences and the perpetuation of racial biases in parole determinations. Likewise, public medical Q&A platforms powered by LLMs may disseminate incorrect or biased information owing to these underlying deficiencies.

Acknowledging the significant potential for harm, Professor Liakata emphasized the project’s objective of ensuring that society reaps the benefits of LLMs while averting adverse consequences. By prioritizing the healthcare and legal sectors, which play pivotal roles in the UK economy and harbor both substantial risks and groundbreaking advancements, the project will concentrate on two primary goals:

1. Enhancing responsible development and deployment of AI technologies, particularly LLMs, to engender public trust and optimize their advantages across diverse sectors.
2. Leveraging interdisciplinary collaboration to foresee and address the challenges posed by AI advances, working in tandem with policymakers and stakeholders to amplify the beneficial impacts of AI on society.

Overall, the project spearheaded by Professor Maria Liakata represents a critical endeavor in promoting responsible AI innovation and ensuring the ethical use of cutting-edge technologies like LLMs. The initiative aligns with Queen Mary University of London's commitment to responsible AI practices and underscores the institution's dedication to advancing research in this field.

Frequently Asked Questions (FAQs) Related to the Above News

What is the focus of Professor Maria Liakata's £4.38 million grant project?

The project aims to address sociotechnical constraints in Large Language Models (LLMs) and advance responsible AI research and innovation.

Why are Large Language Models (LLMs) a cause for concern in sectors like healthcare and law?

LLMs, despite their impressive capabilities, may exhibit biases, privacy breaches, and a lack of explainability, leading to potential risks in critical sectors.

How does the project plan to mitigate the risks associated with deploying LLMs in sensitive areas?

By enhancing responsible development and deployment of AI technologies, fostering public trust, and leveraging interdisciplinary collaboration to foresee and address challenges posed by AI advances.

What are the primary goals of the project led by Professor Liakata?

The goals are to enhance the responsible development and deployment of AI technologies, particularly LLMs, in order to engender public trust and optimize their advantages across diverse sectors, and to leverage interdisciplinary collaboration to anticipate and address the challenges posed by AI advances.

How does the project plan to ensure the ethical utilization of cutting-edge technologies like LLMs?

By working with policymakers, stakeholders, and experts in the healthcare and legal sectors to amplify the beneficial impacts of AI on society, while prioritizing responsible AI practices and fostering ethical AI research.

