OpenAI Deliberately Restricts GPT-4’s Facial Recognition Abilities
OpenAI, the renowned artificial intelligence (AI) research lab, has deliberately limited the facial recognition capabilities of its AI model, GPT-4. In addition to its proficiency in text-based conversation, GPT-4 can describe images, making it an exceptional tool for visually impaired users like Jonathan Mosen. Mosen recently discovered, however, that the app he was using no longer gave him detailed information about people’s faces. The change was intentional: OpenAI restricted GPT-4 so that it identifies only well-known public figures.
For Mosen, GPT-4’s ability to interrogate images has been nothing short of extraordinary. He recounted an incident in which an image on social media had been described to him simply as a woman with blond hair looking happy. When he ran it through ChatGPT, the AI added that the woman was wearing a dark blue shirt and taking a selfie in front of a full-length mirror, which allowed Mosen to ask follow-up questions about her shoes and other details visible in the image.
OpenAI’s decision to restrict how the visual analysis feature handles people’s faces is a direct response to privacy concerns. GPT-4’s facial identification is largely limited to public figures, such as people with extensive Wikipedia pages, and its capabilities fall short of controversial tools like Clearview AI and PimEyes, which are built specifically for large-scale facial recognition. Making facial recognition publicly available would break with the established practice of U.S. technology companies and could create legal exposure in jurisdictions that require consent before biometric information is collected. OpenAI and its major investor, Microsoft, are already facing a lawsuit under Illinois’s Biometric Information Privacy Act (BIPA) over their collection of biometric data from the internet for AI training.
Another significant factor behind OpenAI’s decision is the potential for the tool to make inaccurate or inappropriate assessments of people’s faces, such as misjudging a person’s gender or emotional state. OpenAI acknowledges these safety concerns and actively seeks public input on how to deploy its AI technology responsibly.
OpenAI recognizes that visual analysis was an expected outcome of training its model on images and text gathered from the internet. It also acknowledges existing facial recognition software, such as Google’s tool, which lets well-known individuals opt out, and is considering similar approaches to protect privacy and ensure responsible use of its AI tools.
Beyond privacy and misclassification concerns, OpenAI must also address hallucinations, the confident but false assertions that AI tools like ChatGPT occasionally make. Hallucinations have surfaced in GPT-4’s visual analysis as well, for example misidentifying a famous tech CEO or telling a visually impaired user that a remote control had buttons it did not actually have.
Microsoft, OpenAI’s major investor, has implemented a face-blurring tool for its Bing chatbot, which is built on OpenAI’s technology. The feature blurs faces in photos users upload so the chatbot cannot identify the individuals in them.
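Microsoft has not published the details of that pipeline, but the general idea of obscuring faces before an image reaches a vision model can be sketched with off-the-shelf tools. The snippet below is a hypothetical illustration using OpenCV’s bundled Haar cascade detector; the function name and parameters are assumptions for the example, not Bing’s actual implementation.

```python
# Illustrative sketch only: detect faces and blur them before an image is
# passed to any downstream analysis. Assumes opencv-python is installed.
import cv2

def blur_faces(input_path: str, output_path: str) -> None:
    """Blur every detected face in an image and save the result."""
    image = cv2.imread(input_path)
    if image is None:
        raise ValueError(f"Could not read image: {input_path}")

    # Haar cascade shipped with OpenCV; a production system would likely use
    # a stronger detector, but the pre-processing idea is the same.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        # A heavy Gaussian blur makes the face unrecognizable to later models.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

    cv2.imwrite(output_path, image)
```

In such a setup, the chatbot would only ever see the blurred copy of the upload, so it has nothing to identify even if asked.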
In conclusion, OpenAI’s decision to restrict GPT-4’s facial recognition capabilities is a conscientious response to privacy and safety concerns. By limiting whom the model will identify and acknowledging its potential for error, OpenAI aims to deploy its visual analysis technology responsibly.