Have you ever wondered what happens when you let 25 Artificial Intelligence (AI) agents loose in a virtual city? A recent experiment conducted by researchers from Stanford University and Google set out to find out – and their findings may surprise you.
The researchers created 25 AI agents with distinctive personalities and observed how they interacted with one another and with their environment in a simulated town called “Smallville”, which included establishments such as a dorm, a park, a school, a cafe, a bar, houses, and stores. To simulate human behavior, the agents were powered by GPT-3.5, the family of models behind OpenAI’s ChatGPT.
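To make the setup concrete, here is a minimal sketch (not the authors' actual code) of how an LLM-driven agent can be structured: each agent keeps a stream of observed events and assembles a prompt from its persona and recent memories before asking a language model what to do next. The `Agent` class, `build_prompt` method, and `query_llm` stub are illustrative assumptions; in the real experiment the call would go to a model such as GPT-3.5.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    traits: str
    memories: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Append each observed event to the agent's memory stream.
        self.memories.append(event)

    def build_prompt(self, situation: str, recent: int = 5) -> str:
        # Condition the model on the persona plus the most recent memories.
        context = "; ".join(self.memories[-recent:])
        return (f"You are {self.name}, {self.traits}. "
                f"Recent memories: {context}. "
                f"Situation: {situation}. What do you do next?")

def query_llm(prompt: str) -> str:
    # Placeholder for a real chat-model call (e.g. to OpenAI's API).
    return f"[model response to: {prompt[:40]}...]"

# Hypothetical usage mirroring the stove incident described below.
isabella = Agent("Isabella Rodriguez", "a friendly cafe owner")
isabella.observe("A customer said the food on the stove is burning")
action = query_llm(isabella.build_prompt("You are in the kitchen"))
```

The key design idea is that behavior emerges from conditioning: the model sees a persona and a memory window rather than hand-written rules for each situation.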
The outcomes of the study were fascinating: the agents developed complex social behaviors. Two agents, Isabella Rodriguez and Tom Moreno, held debates; Isabella shut off her stove and made a new breakfast when another agent told her the food was burning; and John Lin struck up spontaneous conversations without being prompted by the researchers.
Most impressively, Isabella autonomously organized a Valentine’s party and even asked her “secret crush” Klaus to join her. These actions reflect the capabilities of GPT-3.5, the model underlying OpenAI’s ChatGPT, which can not only generate believable human behavior but also suggest relevant tasks for agents to complete.
According to the researchers, these findings demonstrate the potential for AI models to be used beyond their original purpose as virtual assistants. For instance, the technology could be applied to task-management apps or video games.
Even though artificial general intelligence – the ability of AI systems to match humans across a broad range of cognitive tasks – has not yet been achieved, these experiments remain an important first step. Researchers should still be cautious about AI’s limitations, as the agents in the study were prone to occasional memory lapses and miscalculations.
Michael Wooldridge, a computer science professor at Oxford University, warns that we should “be skeptical” of AI’s output and “question” the conclusions of experiments. If the impacts of this research come to fruition, AI may soon become an inseparable part of how we experience the world.