Google’s DeepMind Unveils JEST Training Method for AI Efficiency
Google’s AI research lab, DeepMind, recently introduced a new AI training technique known as JEST (joint example selection) that it says could reshape the industry. The method is claimed to be up to 13 times faster and 10 times more power-efficient than existing approaches.
Unlike traditional methods that score individual data points, JEST selects entire batches jointly, identifying the most learnable data subsets for training. It uses a smaller pretrained model to assess data quality and steer the training of a larger model, which is where the claimed efficiency gains come from.
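The core idea can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not DeepMind’s implementation: it assumes a “learnability” score defined as the learner model’s loss minus a small reference model’s loss on each example, then picks the top-scoring subset. (All names, sizes, and the simple top-k selection standing in for JEST’s batch-level joint sampling are illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses over a candidate pool of 8 examples.
# learner_loss:   loss under the large model being trained.
# reference_loss: loss under a small, pretrained data-quality model.
learner_loss = rng.uniform(0.5, 3.0, size=8)
reference_loss = rng.uniform(0.2, 2.5, size=8)

# Learnability score: examples the learner still finds hard but the
# reference model finds easy are the most informative to train on next.
learnability = learner_loss - reference_loss

# Select the k most learnable examples as the next training sub-batch
# (a plain top-k here, in place of JEST's joint batch sampling).
k = 4
selected = np.argsort(learnability)[-k:]
print(sorted(selected.tolist()))
```

Training then proceeds on only the selected sub-batch, so compute is spent on the examples expected to teach the larger model the most.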
The key to JEST’s success lies in the quality of the training data it utilizes. DeepMind researchers stress the importance of steering the data selection process towards well-curated datasets, as this greatly impacts the method’s effectiveness. Without high-quality, human-curated datasets, amateur AI developers may encounter challenges implementing JEST’s bootstrapping technique.
This development comes at a crucial time, as discussions around the immense power consumption of artificial intelligence gain traction across the tech industry and among governments. In 2023, AI workloads reportedly consumed 4.3 GW, nearly equivalent to Cyprus’s annual power usage. Notably, a single ChatGPT request is reported to consume about 10 times more power than a typical Google search, underscoring the urgent need for energy-efficient approaches like JEST.