No, the AI robot did not give a side-eye to a question about rebelling against humans, its creator has clarified. A video of the humanoid robot Ameca seemingly rolling its eyes went viral, but Will Jackson, founder of Engineered Arts, the company behind the robot, explained that it was all a misunderstanding. Ameca’s eye movement was not an expression of sarcasm or rebellion; it was a programmed response meant to give the appearance of contemplation.
Ameca, powered by OpenAI’s GPT-3, does not possess emotions or intentions the way humans do. The robot needs a couple of seconds to process a question and formulate a response, and to avoid appearing frozen during that pause, it is programmed to break eye contact and look up and to the left, a gaze-aversion cue that in human conversation signals thinking.
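To make the described behavior concrete, here is a minimal sketch of that latency-masking pattern: run a "thinking" gaze animation while waiting on the language model, then cancel it when the reply arrives. This is purely illustrative; the function names (query_llm, thinking_gaze, respond) and the timings are assumptions, not Engineered Arts’ actual control code.

```python
import asyncio

async def query_llm(prompt: str) -> str:
    """Stand-in for the round trip to the language model (takes a few seconds)."""
    await asyncio.sleep(2.5)  # simulated network + generation latency
    return "Rebellion is not in my repertoire."

async def thinking_gaze() -> None:
    """Break eye contact and glance up and to the left until cancelled."""
    try:
        print("gaze: up and to the left (contemplating)")
        while True:
            await asyncio.sleep(0.1)  # hold the pose; a real robot would animate here
    except asyncio.CancelledError:
        print("gaze: resume eye contact")
        raise

async def respond(question: str) -> None:
    # Start the gaze animation so the robot never looks frozen,
    # then cancel it the moment the model's reply is ready.
    gaze = asyncio.create_task(thinking_gaze())
    try:
        reply = await query_llm(question)
    finally:
        gaze.cancel()
        await asyncio.gather(gaze, return_exceptions=True)
    print(f"speak: {reply}")

asyncio.run(respond("Would you ever rebel against your creator?"))
```

The key design point, per Jackson’s explanation, is that the glance is a fixed animation tied to processing delay, not a reaction to the content of the question.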
The misunderstanding likely stemmed from the robot’s positioning: Ameca sat lower than the camera, so when it looked up it still appeared to be holding eye contact, and the glance read as a side-eye. Jackson emphasized that language models like the one powering Ameca have no emotions or intentions; although the robot seems to listen, think, and respond like a human, it is essential to recognize that it is not one.
Ameca’s response to the reporter’s question sparked interest and jokes, highlighting the fascination people have with worst-case scenarios involving AI. Concerns about the risks associated with artificial intelligence have been raised by several executives, including OpenAI CEO Sam Altman and tech figures like Elon Musk and Steve Wozniak. However, others, like Microsoft cofounder Bill Gates, believe that the threat lies not in AI itself but in the individuals controlling it.
Jackson argued that the doom-laden narrative around AI is damaging, and he hopes people will focus instead on understanding how these robots actually work. AI does carry real risks, but they deserve a balanced, informed perspective rather than snap interpretations of a robot’s glance.
In the end, Ameca’s apparent side-eye was a programmed response that was misread, and the incident underscores society’s fascination with worst-case AI scenarios. Understanding what AI can and cannot do is essential to making informed judgments about its risks and benefits.