OpenAI's new GPT marketplace raises concerns about quality standards, but its no-code approach promises to democratize AI development. Find out more in this article.
A Northwestern University study reveals a security vulnerability in custom GPTs built on OpenAI's platform that could lead to data leaks. The researchers demonstrated prompt extraction and file leakage attacks, exploiting the vulnerability with a high success rate, and noted that prompt injection attacks are a growing concern as well. The authors hope the findings will prompt the AI community to develop stronger safeguards that balance innovation and security.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What comes next for Democrats on tech?