OpenAI's co-founder and CEO, Sam Altman, recently spoke at the Indraprastha Institute of Information Technology in Delhi about the problem of hallucinations in the company's generative AI models, including the popular ChatGPT. Altman joked that he probably trusts the answers that come out of ChatGPT the least of anyone on Earth, given the risk that the AI's confident responses are not actually justified by its data.
AI hallucinations are a critical issue in generative AI because they can undermine content creation, including news articles and analysis pieces. Joking aside, Altman addressed the issue seriously, estimating that it will take about a year for OpenAI to perfect the model. He noted that the company is working to balance creativity and accuracy while minimizing potential inaccuracies.
Altman also discussed the challenge of making AI safe and responsible. He explained that OpenAI conducts audits, improves its algorithms, operates within strict parameters, and filters content, among other measures, to engineer safe AI. He added that there is no single solution for AI safety; it requires a comprehensive set of practices applied together.
Altman is currently on a six-country tour of India, Israel, Jordan, Qatar, the UAE, and South Korea, underscoring the importance of global AI outreach. As AI continues to expand and innovate, addressing these challenges and developing responsible practices will be crucial to keeping AI both accurate and safe.