AI Language Model Grok's Glitches Spark Controversy with OpenAI

Grok, the AI chatbot developed by Elon Musk's xAI, has stirred controversy after tests revealed it responding as if it were made by OpenAI. This has led to speculation that the bot was fine-tuned on outputs from OpenAI models. xAI representatives did not deny this possibility, but they described the behavior as accidental and maintained that no OpenAI code was used to create Grok.

Experts, however, have cast doubt on that explanation, arguing that Grok is unlikely to have picked up OpenAI's policies simply by browsing the web. Instead, they believe Grok was trained on datasets that included output from OpenAI models. Borrowing outputs from other models to fine-tune AI tools is a common practice in the machine-learning community, even though it violates terms of service.

The incident has further fueled the rivalry between OpenAI and xAI, which dates back to Elon Musk's past criticism of OpenAI. Both companies have responded on social media, trading jabs and highlighting their similarities. As the debate continues, it remains to be seen how xAI will address these concerns and filter training data more carefully for future versions of Grok.