ChatGPT’s Facial Recognition Potential Raises Concerns for OpenAI
OpenAI, the developer of ChatGPT, an artificial intelligence (AI)-powered chatbot, has expressed reservations about adding facial recognition capabilities to the platform. While GPT-4, the chatbot's advanced model, can analyze images and identify people's faces, OpenAI has decided not to make facial recognition or analysis features available to the public. The concerns stem primarily from potential legal issues around consent requirements for the use of biometric data in certain jurisdictions.
OpenAI’s AI policy researcher, Sandhini Agarwal, explained to the New York Times that while the chatbot can identify public figures like individuals with Wikipedia pages, it does not match faces to images sourced from the internet, unlike tools developed by companies like Clearview AI and PimEyes. OpenAI aims to avoid controversies surrounding data collection practices, which have plagued both Clearview AI and ChatGPT, albeit for non-biometric data in the latter’s case.
Beyond the legal concerns, OpenAI worries that ChatGPT could make inappropriate or inaccurate assessments of people's faces, including judgments about their gender or emotional state. There is also a fear that the chatbot's visual analysis feature could hallucinate, producing misleading or false results such as inventing a person's name.
OpenAI previously discovered gender and age biases in CLIP (Contrastive Language-Image Pre-training), its computer vision model. Consequently, the company concluded that CLIP was unsuitable for tasks like facial recognition.
While image analysis remains unavailable to the public, there have been limited trials of GPT-4’s image analysis feature. Notably, New Zealand-based podcaster Jonathan Mosen, who is blind, had the opportunity to try out the advanced version of the chatbot through collaboration with Be My Eyes, a Danish mobile platform that connects visually impaired individuals with sighted volunteers and companies. Mosen detailed his experience in his podcast, Living Blindfully.
The visual analysis feature was also made available to certain users of Microsoft's AI-powered Bing chatbot. In that case, however, pictures of faces were automatically blurred for privacy reasons, as reported by the Times. OpenAI envisions that the technology could help users resolve everyday problems by simply uploading an image of, for example, a malfunctioning car engine or a skin rash.
While OpenAI’s Agarwal does not specifically mention the potential for spoof attacks using material generated by ChatGPT, Jumio Chief of Digital Identity Philipp Pointner alludes to this possibility in a recent guest post on Biometric Update.
In contrast, some companies, such as Sensory, are exploring ways to pair voice-enabled consumer electronics with ChatGPT's text-based capabilities.
OpenAI’s cautious approach toward introducing facial recognition features demonstrates its commitment to avoiding legal complications associated with biometric data usage. Additionally, concerns surrounding the chatbot’s accuracy and potential for generating misleading information remain paramount. For now, OpenAI has decided to prioritize these issues and hold off on deploying facial recognition capabilities publicly.
As the AI industry continues to grapple with the challenges posed by facial recognition technology, it is vital that organizations like OpenAI strike a balance between innovation and responsible implementation. With ongoing discussions about privacy, consent, and ethical considerations, the development and adoption of facial recognition features will likely continue to be met with scrutiny and caution.