OpenAI, the San Francisco-based research laboratory co-founded by Elon Musk, Sam Altman, and others, and backed by investors including Microsoft, Sequoia Capital, and Peter Thiel, has revealed the findings from testing GPT-4, the AI model that powers the latest version of ChatGPT, to determine its capabilities and potential for harm. Some 50 experts from the US and Europe probed the system for vulnerabilities, bias, and its potential to aid plagiarism, and OpenAI has since put safeguards in place to prevent misuse. Read on to learn about the potential of this technology and the precautions OpenAI is taking.
OpenAI's GPT-4 was tested by around 50 experts and academics to uncover safety and security risks. Their findings showed the system could aid in plagiarism, financial crime, cyber attacks, and more. OpenAI has since taken steps to ensure such outputs do not appear in public use, yet the technology still raises alarm. ChatGPT plug-ins have extended GPT-4's capabilities, allowing it to book services and order items online. Despite OpenAI's safety protocols, risks remain, highlighting the importance of continual monitoring.