Experts emphasize the need to regulate AI technology to prevent bias from being embedded in it, a risk highlighted by recent studies. For instance, African Americans were more likely to see ads referencing criminal records when googling their own names, and women were less likely than men to be shown ads for executive jobs.
The Australian government has taken a proactive stance by developing broad guidelines to govern AI technology’s use in the country. Melbourne University law professor Jeannie Marie Paterson, a member of the government’s expert panel on AI, stresses the importance of responsible AI in innovation, advocating for a comprehensive regulatory framework covering technology, training, social inclusion, and law to address the challenges posed by generative AI.
Following a thorough AI review, the government plans to adopt a risk-based approach, similar to the EU's regulation of high-risk AI systems. Toby Walsh, an AI expert with UNSW's AI Institute, highlights the necessity of enforcing existing regulations and developing new ones to mitigate emerging risks like misinformation and deepfakes.
Experts caution against advancing with AI technology without adequate regulation, warning of unintended consequences like perpetuating racism or sexism. To address this, the government aims to repurpose existing laws and implement new rules under them, rather than creating entirely new legislation. Various Australian laws, including those concerning privacy, copyright, online competition, anti-misinformation, and cybersecurity, will be examined to regulate AI effectively.
In addition to regulatory measures, tech companies must adopt a responsible approach to AI. Petar Bielovich from Atturra emphasizes the importance of critical thinking in software development, urging human intervention in assessing data sets used to train machine learning models. Salesforce, for example, has an Office of Ethical Use overseeing the development of AI products to ensure ethical and unbiased technology deployment.
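The kind of human assessment Bielovich describes can begin with something as simple as measuring how groups are represented in a training set. The sketch below is a minimal, hypothetical illustration of such a check (the function name, threshold, and sample data are all assumptions for the example, not anything attributed to Atturra or Salesforce):

```python
from collections import Counter

def audit_representation(records, attribute, threshold=0.8):
    """Flag groups whose share of the data falls below `threshold`
    times the count of the best-represented group -- a rough
    first-pass check for skewed representation."""
    counts = Counter(r[attribute] for r in records)
    top = max(counts.values())
    return {group: n / top for group, n in counts.items()
            if n / top < threshold}

# Hypothetical sample: women make up only 20% of the records.
records = [{"gender": "m"}] * 80 + [{"gender": "f"}] * 20
print(audit_representation(records, "gender"))  # → {'f': 0.25}
```

A real review process would go much further, looking at outcomes per group rather than raw counts, but a check like this shows why human scrutiny of data sets is feasible rather than hypothetical.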
One of the primary challenges with AI technology is training algorithms on flawed or incomplete data sets, potentially reinforcing bias and discrimination. Uri Gal from the University of Sydney Business School suggests creating synthetic data to train models, mitigating the risk of skewed representations.
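To make Gal's suggestion concrete, one very simple form of it is rebalancing a data set by generating additional records for under-represented groups. The sketch below naively resamples existing records; production synthetic-data techniques (such as SMOTE-style interpolation or generative models) are far more sophisticated, and the function and data here are illustrative assumptions only:

```python
import random

def balance_with_synthetic(records, attribute):
    """Naively balance groups by resampling records from
    under-represented groups until every group matches the
    largest one -- a stand-in for real synthetic-data generation."""
    groups = {}
    for r in records:
        groups.setdefault(r[attribute], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Add synthetic (here: duplicated) records to close the gap.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

data = ([{"gender": "m", "score": i} for i in range(80)]
        + [{"gender": "f", "score": i} for i in range(20)])
balanced = balance_with_synthetic(data, "gender")
# Both groups now contribute 80 records each.
```

Duplication alone can overfit a model to a few repeated examples, which is precisely why researchers like Gal point to genuinely synthetic data rather than simple oversampling.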
The key takeaway is the necessity of swift yet responsible regulatory action to govern AI technology effectively. By moving promptly but conservatively, regulators can address concerns surrounding biased AI deployment and work towards a more inclusive and ethical approach to technological innovation.