A Georgia radio host is suing OpenAI for libel after ChatGPT generated false claims about him. OpenAI concedes that fabricated output is a significant problem, and lawyers warn that more lawsuits may follow. A cautionary tale about the power and reliability of AI.
Hackers have found a way to turn ChatGPT's habit of suggesting non-existent code packages into a malware channel: the model recommends a library that does not exist, attackers register that name and fill it with malicious code, and developers who trust the suggestion install it. Cybersecurity researchers say the attack is hard to spot precisely because the recommendation comes from a trusted tool. There are defenses, however: confirm that a library is what it claims to be, and check its download numbers and release dates before installing. Developers need to be aware of the danger to avoid spreading malware through software supply chains.
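The checks described above (does the package exist at all, how old is it, how many releases does it have) can be automated against PyPI's public JSON metadata endpoint. The sketch below is a minimal illustration, not a vetted security tool: the `fetch_metadata` and `assess_metadata` helper names and the age/release-count thresholds are assumptions chosen for demonstration.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

# PyPI's public per-package metadata endpoint.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def fetch_metadata(name):
    """Return the PyPI JSON metadata for a package, or None if it does not exist."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name)) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # no such package: a strong sign the name was hallucinated
        raise

def assess_metadata(meta, min_age_days=90, min_releases=3):
    """Flag metadata that looks like a freshly registered squatting package.

    Thresholds are illustrative, not an established standard.
    Returns a list of warning strings; an empty list means no red flags.
    """
    warnings = []
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in meta.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        return ["no uploaded files at all"]
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        warnings.append(f"first upload only {age_days} days ago")
    if len(meta["releases"]) < min_releases:
        warnings.append(f"only {len(meta['releases'])} release(s)")
    return warnings
```

A package that returns `None` from `fetch_metadata` was never published, so an AI-suggested install command naming it should be treated as a hallucination; a package that exists but trips several warnings deserves manual review before it goes anywhere near a build.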
A San Francisco copywriter says she was replaced by an AI chatbot, stoking fears of widespread job loss. AI can produce written work at impressive scale, but there is concern it could spread misinformation. CEOs see AI as a way to cut costs, but at what cost to human workers?
Gartner identifies potential legal risks in using large language models like ChatGPT; companies need to establish safeguards to avoid reputational and financial consequences.
Google's chatbot Bard keeps pace with GPT-4 in language tasks and stands out in image recognition and integration with other Google services. However, users must verify its output for accuracy.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?