A group of leading AI researchers is advocating for the European Union to broaden its approach to regulating artificial intelligence, including tools like OpenAI's ChatGPT, in order to better preserve safety and accountability. The proposal, submitted by Timnit Gebru, Mozilla Foundation President Mark Surman, and the AI Now Institute's Amba Kak and Sarah Myers West, is gaining traction as governments address the impact of AI on everyday life. Companies such as Tesla, led by Elon Musk, are also beginning to take action. The future of AI regulation is uncertain, but these stakeholders are making strides toward finding solutions.
This article highlights the potential threats that ChatGPT, an AI chatbot known for hallucinating, poses to political campaigns and the public. Its applications range from writing political speeches to creating social media content and research material. Because the tool is prone to generating false and biased information, its use by political parties could be a recipe for disaster. OpenAI is working to ensure AI safety, but the issue of bias still needs to be addressed. This is essential to guard against misinformation, fake news, and manipulation.
Recent research has uncovered startling changes in ChatGPT's output when the model's assigned persona is changed: in some cases, the output becomes up to six times more toxic. This could be problematic for businesses using ChatGPT for marketing, since the persona affects everything from writing style to content, and misjudged, unintentional, or malicious language could be produced. Users must configure the system with caution to avoid these issues. Elon Musk recently expressed concern about the implications of this research.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?