DeepMind, Google's AI research division, has unveiled an AI system that it says surpasses human fact-checkers at evaluating the accuracy of claims generated by large language models. The new technique, called Search-Augmented Factuality Evaluator (SAFE), uses a large language model to break generated text down into individual facts and then checks each claim against Google Search results.
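As a rough illustration of the pipeline described above, here is a minimal Python sketch of a SAFE-style evaluator. This is not DeepMind's implementation: `call_llm` and `web_search` are hypothetical stand-ins for a real LLM API and a Google Search client, and the prompts are illustrative only.

```python
# A minimal sketch of a SAFE-style fact-checking pipeline (not DeepMind's actual code).
# `call_llm` and `web_search` are hypothetical placeholders for real API clients.

def call_llm(prompt: str) -> str:
    """Placeholder for a large-language-model API call."""
    raise NotImplementedError("plug in a real LLM client here")

def web_search(query: str, num_results: int = 3) -> list[str]:
    """Placeholder for a Google Search call returning result snippets."""
    raise NotImplementedError("plug in a real search client here")

def split_into_facts(response: str) -> list[str]:
    """Ask the LLM to decompose a long-form answer into atomic factual claims."""
    prompt = (
        "Break the following text into a list of short, self-contained "
        "factual claims, one per line:\n\n" + response
    )
    lines = call_llm(prompt).splitlines()
    return [line.strip("- ").strip() for line in lines if line.strip()]

def rate_fact(fact: str) -> str:
    """Retrieve search evidence and ask the LLM whether the claim is supported."""
    snippets = web_search(fact)
    prompt = (
        f"Claim: {fact}\n\nSearch results:\n" + "\n".join(snippets) +
        "\n\nAnswer with exactly one word: Supported, NotSupported, or Irrelevant."
    )
    return call_llm(prompt).strip()

def evaluate(response: str) -> dict[str, str]:
    """Return a verdict for every atomic fact extracted from the response."""
    return {fact: rate_fact(fact) for fact in split_into_facts(response)}
```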
In a study published on the arXiv preprint server, the researchers report that SAFE matched human annotators' ratings on a dataset of roughly 16,000 individual facts 72% of the time, and that in a sample of cases where the two disagreed, SAFE's judgment proved correct 76% of the time. That performance has sparked debate among experts, some of whom question what "superhuman" means in this context.
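To make those two figures concrete, the sketch below shows how an overall agreement rate and an accuracy-on-disagreements figure could be computed from paired labels. The inputs and the adjudicated ground truth here are assumptions of the example, not the paper's data.

```python
# Illustrative only: how the two headline metrics could be computed from
# per-fact labels. The label lists are hypothetical, not the study's data.

def agreement_rate(safe_labels, human_labels):
    """Fraction of facts on which SAFE and the human annotator give the same verdict."""
    matches = sum(s == h for s, h in zip(safe_labels, human_labels))
    return matches / len(safe_labels)

def disagreement_accuracy(safe_labels, human_labels, ground_truth):
    """Among facts where SAFE and humans disagree, how often SAFE matches an adjudicated ground truth."""
    disputed = [(s, g) for s, h, g in zip(safe_labels, human_labels, ground_truth) if s != h]
    return sum(s == g for s, g in disputed) / len(disputed)
```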
Despite SAFE's impressive results, some experts, including AI researcher Gary Marcus, caution against calling the system superhuman until it has been benchmarked against expert human fact-checkers rather than crowdsourced annotators. Marcus emphasizes the importance of transparency and proper contextualization when assessing the performance of AI systems like SAFE.
One of SAFE's key advantages is cost: the researchers estimate that using the AI system is roughly 20 times cheaper than employing human fact-checkers. The team also used SAFE to evaluate the factual accuracy of several leading language models, an exercise that underscores the importance of benchmarking top models against human baselines to ensure accountability and reduce the risk of misinformation.
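The 20x figure is essentially a back-of-envelope ratio of per-response evaluation costs. The snippet below shows that arithmetic with placeholder prices; the dollar amounts are assumptions for illustration, not figures from the paper.

```python
# Back-of-envelope cost comparison behind the "roughly 20 times cheaper" claim.
# Both prices below are assumed placeholders, not numbers reported by the researchers.

HUMAN_COST_PER_RESPONSE = 4.00   # assumed cost of one human-annotated response, in USD
SAFE_COST_PER_RESPONSE = 0.20    # assumed cost of LLM calls plus search queries per response

ratio = HUMAN_COST_PER_RESPONSE / SAFE_COST_PER_RESPONSE
print(f"Under these assumptions, SAFE is about {ratio:.0f}x cheaper per response.")
```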
As language models are relied on for an ever wider range of applications, tools like SAFE will play a crucial role in automatically fact-checking the information these systems generate. Such technologies must, however, be developed transparently and with input from multiple stakeholders to build trust and accountability in AI-assisted fact-checking. Ultimately, rigorous and transparent benchmarking against human experts is essential to measure the true impact of automated fact-checking on the fight against misinformation.