GPT-4, the advanced large language model developed by OpenAI, has demonstrated an impressive ability to reason, surprising many skeptics who believed such models were incapable of that kind of cognitive process. Back in 2017, DeepMind’s AlphaZero algorithm astounded researchers by teaching itself to play chess, shogi, and Go at superhuman levels purely through self-play, given nothing but the rules of each game. The program made moves that seemed alien to human players, prompting the realization that these AI systems were capable of a genuinely different approach to problem-solving.
Fast forward to the present, where OpenAI’s GPT-4 is making waves. A friend of mine recently conducted an experiment with this powerful AI model, prompting it to consider a scenario involving Bob, Alice, and a parrot in a cage. Bob enters a room, covers the cage with an opaque cloth, and leaves. Alice then enters, takes the covered cage, and places it in a closed cupboard before departing. Curious about the beliefs of the characters in the story, my friend asks GPT-4 where each would think the parrot is located.
GPT-4 responds thoughtfully, stating that, based on the information provided, Bob would believe the parrot is still in the covered cage somewhere in the room; since he left before Alice moved the cage into the cupboard, he remains unaware of that action. Alice, on the other hand, would believe the parrot to be inside the covered cage in the cupboard. GPT-4 even addresses the parrot’s own limited understanding, suggesting that the bird would sense only its immediate surroundings and would have no clear concept of being inside a cupboard.
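For readers who want to poke at the same kind of false-belief question themselves, here is a minimal sketch using OpenAI’s Python client. The model name, the prompt wording, and the client setup are illustrative assumptions; my friend’s exact phrasing is not reproduced here, and the response you get may well differ from the one above.

```python
# Minimal sketch of a false-belief prompt sent to GPT-4 via the OpenAI Python SDK (v1.x).
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set in the environment;
# the scenario below paraphrases the experiment rather than quoting the original prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = (
    "Bob enters a room containing a parrot in a cage, covers the cage with an "
    "opaque cloth, and leaves. Alice then enters, takes the covered cage, places it "
    "in a closed cupboard, and departs. Where does Bob think the parrot is? "
    "Where does Alice think it is? And what does the parrot itself know about where it is?"
)

response = client.chat.completions.create(
    model="gpt-4",  # model name is an assumption; substitute whichever GPT-4 variant you have access to
    messages=[{"role": "user", "content": scenario}],
)

# Print the model's answer about each character's beliefs.
print(response.choices[0].message.content)
```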
This interaction undermines the assumption that large language models like GPT-4 cannot reason. Though often dismissed as “stochastic parrots” that merely make statistical guesses based on their training data, GPT-4 proves capable of reasoning, at least to some extent. Researchers have since set about assessing the logic and reasoning capabilities of GPT-4 and similar models, finding that while they perform relatively well on established tests, they still stumble on certain tasks. Further improvement is expected, given the extraordinary pace of progress in the field.
The ultimate question arises: are these models stepping stones toward artificial general intelligence (AGI), or even superintelligent machines? Conventional wisdom says no, since they lack a comprehensive understanding of our complex world. Yet their growing capabilities are hard to deny. GPT-4, for example, can solve novel and challenging tasks across a wide range of domains without any special prompting, performing strikingly close to human-level expertise. As such models continue to advance, it is essential to closely monitor their progress and implications.
In the realm of media evolution, Renée DiResta shines a light on the considerable changes in our media ecosystem in her insightful essay “The New Media Goliaths.” Additionally, legal scholar Mark Elliott debunks the notion that Boris Johnson was undemocratically removed from parliament in a well-articulated blog post, settling the matter once and for all. Furthermore, Emily Bender emphasizes the importance of discussing AI’s existential risks without evading the significant questions in her thought-provoking essay “Talking About a ‘Schism’ Is Ahistorical.”
As we navigate these exciting yet transformative times, it becomes increasingly evident that AI models are steadily pushing the boundaries of what was once thought impossible. While challenges remain, their potential for growth and their ability to bridge gaps across fields showcase just how far they have come. It is crucial to stay vigilant and engaged, for this remarkable journey is only just beginning.