Discover how the Comptroller and Auditor General of India emphasizes the role of audit institutions in ensuring the responsible use of AI in good governance, and how the institution plans to keep pace with technological advancements.
EU lawmakers have approved the Artificial Intelligence Act, which categorizes AI systems by level of risk and sets rules for developers. The objective is to address the ethical and societal concerns raised by AI while still driving innovation and competitiveness in the technology sector.
Discover the potential of AI technology in United Nations operations, with insights from OpenAI's chatbot, ChatGPT. Benefits and challenges explored.
Nigeria's National Information Technology Development Agency (NITDA) is developing a code of practice for AI, including ChatGPT, to ensure responsible and ethical deployment. The policies will address transparency, data privacy, bias, accountability, and fake news. The move follows Italy's temporary ban on ChatGPT and the EU's draft legislation to regulate AI. Collaboration between policymakers and AI developers is key to preventing the spread of misinformation and addressing ethical concerns. The NITDA's initiative highlights the need for effective regulation that balances innovation with public safety.
American radio host Mark Walters has filed a defamation lawsuit against OpenAI, claiming that its AI chatbot, ChatGPT, invented false legal accusations about him. The case raises significant questions about AI accountability and the need for stronger regulation.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?