Google Refuses Shareholder Request for More Transparency on AI Algorithms

Google’s parent company, Alphabet, has opposed a shareholder proposal calling for increased transparency around its algorithms. Trillium Asset Management put forward the proposal at the company’s annual stockholder meeting, raising concerns about how artificial intelligence (AI) can produce harmful outcomes in fields such as criminal justice and medicine, and citing the role of algorithms in radicalising the gunman behind New Zealand’s 2019 Christchurch shooting. Google argued that it already provides meaningful disclosures about its algorithms, and that the algorithms themselves are foundational to its business and could be misused in the wrong hands. Trillium asked for more information on error rates, targeting, and impacts on user speech and experience. The investor also urged the company to consider standards for algorithm and ad transparency proposed by the Mozilla Foundation and research groups.

Trillium Asset Management is one of the leading socially responsible investment firms in North America, managing more than $4 billion in assets. It focuses on impact investing and on corporate responsibility for environmental, social, and governance considerations. The firm holds shares in Alphabet and has been a vocal shareholder, having previously called for greater transparency and shareholder participation in company decisions.

Geoffrey Hinton is a prominent computer scientist whose research focuses on machine learning, deep learning, neural networks, and artificial intelligence. He worked for various tech companies over the years, including Google’s parent company Alphabet, where he was part of the Google Brain team. In 2023, he warned of the potential harm of new AI-based chatbots, saying the prospect was quite scary. His remarks followed the release of impressive language models by OpenAI, which could write paragraphs that were difficult to distinguish from a human’s. He left Google in 2023 to focus on his work at the University of Toronto and to speak more freely about the risks of AI. His warnings have proved prescient, as language models continue to produce inappropriate and even harmful responses.
