OpenAI, the renowned artificial intelligence research organization, has recently announced that it will not permit the use of its AI technology for political campaigning or lobbying as the world approaches important elections. The company, led by Sam Altman, is committed to preventing the dissemination of misleading ‘deepfakes’ and the creation of chatbots impersonating political candidates.
In a blog post, OpenAI emphasized its dedication to safeguarding democratic processes during the elections taking place in 2024, particularly in the United States, India, and the United Kingdom. The organization aims to ensure the responsible development, deployment, and use of its AI systems, with a focus on platform safety, accurate voting information, measured policies, and improved transparency.
To anticipate and prevent misuse, OpenAI outlined several measures targeting abuse such as misleading ‘deepfakes’, scaled influence operations, and chatbots posing as candidates. The company red-teams new systems and actively seeks feedback from users and external partners before releasing them. It also builds in safety mitigations, works to improve factual accuracy, reduces bias, and declines certain categories of requests. For example, OpenAI’s image generation system DALL·E has guardrails that reject requests to generate images of real individuals, including political candidates.
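OpenAI has not published the internals of these guardrails, but at the developer level a declined image request surfaces as an API error. The sketch below, written against the official openai Python SDK, shows how an application might handle such a refusal; the example prompt and the assumption that the refusal arrives as a BadRequestError are illustrative rather than drawn from OpenAI’s announcement.

```python
from openai import OpenAI, BadRequestError

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_poster(prompt: str) -> str | None:
    """Request an image and handle a content-policy refusal gracefully."""
    try:
        result = client.images.generate(
            model="dall-e-3",
            prompt=prompt,
            n=1,
            size="1024x1024",
        )
        return result.data[0].url
    except BadRequestError as err:
        # Prompts naming real people (e.g. political candidates) are the kind
        # of request the guardrails described above are expected to decline.
        print(f"Request declined: {err}")
        return None


# A prompt naming a real candidate would be expected to trigger the guardrail.
generate_poster("A campaign poster featuring a real presidential candidate")
```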
OpenAI strongly emphasizes the importance of authentic interaction and trust, stating that chatbots must not pretend to be real people or institutions, and it disallows applications that deter people from participating in democratic processes. OpenAI believes that better transparency about the origin of images can empower voters to assess their reliability, so it is experimenting with a provenance classifier for images generated by DALL·E. This new tool has shown promising early results and will be made available to an initial group of testers, including journalists, platforms, and researchers, for feedback.
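OpenAI has described the provenance classifier only at a high level and has not released a public interface for it. Purely to illustrate the kind of workflow a newsroom or platform tester might build around such a tool, the hypothetical sketch below invents a classify_provenance helper; the function, its fields, and the threshold are assumptions, not part of any announced API.

```python
from dataclasses import dataclass


@dataclass
class ProvenanceResult:
    dalle_probability: float  # hypothetical score: how likely the image is DALL·E output
    label: str                # human-readable summary derived from the score


def call_classifier_stub(image_bytes: bytes) -> float:
    # Stand-in for the real (not yet public) provenance classifier.
    # A fixed score is returned so the sketch runs end to end.
    return 0.97


def classify_provenance(image_bytes: bytes, threshold: float = 0.9) -> ProvenanceResult:
    """Hypothetical wrapper a platform or newsroom tester might write."""
    score = call_classifier_stub(image_bytes)
    label = "likely DALL·E-generated" if score >= threshold else "inconclusive"
    return ProvenanceResult(dalle_probability=score, label=label)


if __name__ == "__main__":
    with open("suspect_image.png", "rb") as f:  # any local image file
        print(classify_provenance(f.read()))
```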
Additionally, ChatGPT’s integration with existing sources of information will give users access to real-time news reporting from around the world, complete with attribution and links. OpenAI believes that transparency and balanced news sources are crucial for enabling voters to make informed decisions based on trustworthy information.
OpenAI is committed to collaborating with partners to prevent potential abuse of its technology in the lead-up to this year’s global elections.
In summary, OpenAI’s decision to prohibit the use of its AI technology for political campaigning and lobbying aims to protect the integrity of elections and ensure the responsible use of AI systems. The organization is actively working to prevent misleading ‘deepfakes’ and chatbots impersonating candidates, while simultaneously enhancing transparency and enabling voters to access reliable information.