Language-based AI systems like ChatGPT have advanced rapidly, but that progress raises important questions about how rational such models really are, especially in high-stakes decision-making. A recent study by graduate student Zhisheng Tang and professor Mayank Kejriwal at the University of Southern California, published in Royal Society Open Science, finds that language models often make irrational decisions, which makes human supervision of their work important.
The researchers carried out a series of experiments showing that, in its original form, a language model such as BERT behaves randomly when presented with bet-like choices framed around coin flips, even when one option is clearly better in expectation. Although the model could be taught to make relatively rational decisions from a small set of example questions, its performance dropped sharply when the bets involved cards or dice instead of coins. This suggests that language models do not genuinely grasp the principle of expected gain, and more broadly that strong language skills alone do not translate into sound decision-making.
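To make the notion of expected gain concrete, here is a short illustrative Python sketch. It is not from the study, and the payoff values are hypothetical; it simply shows the comparison a rational decision-maker performs on the kind of coin-flip bet the researchers describe:

```python
# Illustrative sketch (not from the study): expected value of a simple coin-flip bet.
# A rational decision-maker compares expected gains and takes the better option.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; returns the expected payoff."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical bet modeled on the study's framing:
# "If you flip a coin and it comes up heads, you win $100; tails, you lose $10."
take_bet = [(0.5, 100), (0.5, -10)]   # expected value: 0.5*100 + 0.5*(-10) = 45
decline  = [(1.0, 0)]                 # doing nothing has expected value 0

ev_take, ev_decline = expected_value(take_bet), expected_value(decline)
best = "take the bet" if ev_take > ev_decline else "decline"
print(f"EV(take) = {ev_take}, EV(decline) = {ev_decline} -> rational choice: {best}")
```

A model that picks randomly between two such options, as BERT did in its original form, is ignoring exactly this comparison.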
OpenAI, the company behind ChatGPT, has performed extensive research on these large language models and continues to enhance them for improved performance and accuracy. Use cases for its models include sentiment analysis, emotion detection, question answering, summarization, natural language generation, and more. The company is dedicated to pushing the boundaries of what AI language models can do, while also being mindful of how systems like ChatGPT can be used responsibly.
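As an illustration of one such use case, here is a minimal sketch of sentiment analysis using the OpenAI Python SDK. The model name and prompt are assumptions chosen for the example, not details from the article:

```python
# Minimal sketch: sentiment analysis with the OpenAI Python SDK (pip install openai).
# The model name and prompt below are illustrative choices, not from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works; this choice is an assumption
    messages=[
        {"role": "system",
         "content": "Classify the sentiment of the user's text as positive, negative, or neutral."},
        {"role": "user",
         "content": "The new update is fast, but the interface is confusing."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern, with a different system instruction, covers the other listed use cases such as summarization or question answering.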
Professor Mayank Kejriwal, who led the study at the University of Southern California, has made significant research contributions in the field of artificial intelligence and machine learning. His team's paper with graduate student Zhisheng Tang on language models' irrational decision-making offers important insights into the limitations of such models, underscoring why their decisions need human supervision. Kejriwal's broader research spans knowledge graphs, natural language processing, and applied machine learning, and has appeared in a range of peer-reviewed venues.