Google’s recent release of its AI language model PaLM 2 has raised the question of whether data or compute matters more when training AI models. According to a report from CNBC, data appears to be becoming increasingly important: the model was trained on nearly five times as much data as its predecessor (3.6 trillion tokens versus 780 billion) while using a smaller parameter count (340 billion, down from PaLM’s 540 billion). It is clear that considerable resources must be invested to build cutting-edge AI models.
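To put those figures in perspective, here is a quick back-of-the-envelope calculation. It is a minimal sketch: the token and parameter counts come from the CNBC report, and the ratios are simple arithmetic.

```python
# Figures reported by CNBC for Google's PaLM models.
palm_tokens, palm_params = 780e9, 540e9      # PaLM:   780B tokens, 540B parameters
palm2_tokens, palm2_params = 3.6e12, 340e9   # PaLM 2: 3.6T tokens, 340B parameters

# PaLM 2 was trained on roughly 4.6x the data of PaLM...
print(f"Training data ratio: {palm2_tokens / palm_tokens:.1f}x")      # ~4.6x

# ...while shrinking the parameter count by about 37%.
print(f"Parameter change: {palm2_params / palm_params - 1:+.0%}")     # -37%

# Tokens seen per parameter: a rough proxy for how data-heavy training was.
print(f"PaLM:   {palm_tokens / palm_params:.1f} tokens/parameter")    # ~1.4
print(f"PaLM 2: {palm2_tokens / palm2_params:.1f} tokens/parameter")  # ~10.6
```

The roughly tenfold jump in tokens seen per parameter illustrates the shift the report describes: rather than scaling the model up, Google scaled the data up.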
Google has come to be recognized as one of the leading companies in AI technology. For over two decades, the company has been an innovator in machine learning and computer vision, and PaLM 2 is a testament to that. By investing in more training data rather than a larger model, Google has underscored its commitment to delivering top-notch AI technology.
The development and training of AI models draws on many disciplines, including deep learning research, hardware expertise, data engineering, and more. The teams of researchers and engineers at Google behind PaLM 2 deserve praise for their hard work and dedication; together they have extended the success of Google’s AI technology with this model.
To sum up, CNBC’s report suggests that data is becoming increasingly important in training AI models like PaLM 2. It is clear that companies must invest substantial resources and effort to build cutting-edge AI technology, and the contributions of the many researchers and engineers behind PaLM 2 deserve to be highlighted and celebrated.