LinkedIn’s ability to detect and remove fake profiles has come under scrutiny after an AI-generated consultant exposed flaws in the system. The profile of Ada Richard, an AI-generated persona claiming to be a passionate, results-oriented consultant at Boston Venture Studio, went undetected until someone reported it to LinkedIn. The only telltale sign that Richard wasn’t real was the claim of having been home-schooled by Sam Altman, the CEO of OpenAI, the company behind ChatGPT and DALL-E.
The incident raises concerns about a coming flood of AI-generated content and personas like Ada Richard. Will AI eventually dominate the digital landscape, pushing human creators into the background? And how will we distinguish human-made content from AI-produced content?
Joshua English, the person who reported Ada Richard’s profile, questioned LinkedIn’s ability to identify fake accounts. He notified LinkedIn that the account was not a real person, prompting the platform to ask “Richard” to upload a copy of a government ID. English’s report ultimately led to the removal of the profile.
The episode shines a light on the growing presence of AI in daily life, from emails and written content to AI-composed music. English, a content creator who donated $5 million to establish an Applied AI Institute, is invested in understanding both the potential of AI and its drawbacks.
English also addressed LinkedIn’s role as the go-to platform for employers seeking to validate degrees and work experience, suggesting that LinkedIn should build a better system for universities and employers to verify credentials. He even proposed a provocative experiment: companies could deploy AI “employees,” clearly labeled as such, to answer candidates’ questions about workplace culture and benefits.
LinkedIn, owned by Microsoft, says it blocks 99.7% of fake profiles before they are ever reported and is working with academic researchers on identifying AI-generated profile photos. The ease with which the Ada Richard profile went undetected, however, raises questions about how effective those detection methods really are.
To test LinkedIn’s detection capabilities himself, English created a new profile with a fictional name and an AI-generated photo of a handsome man. Despite fabricated work experience and AI-written answers to a software development skills test, LinkedIn never flagged the profile.
The incident highlights the need for stronger fake-profile detection and verification systems on platforms like LinkedIn. As AI continues to advance, transparency and accountability in distinguishing human-generated from AI-generated content and profiles will only grow more important.