Tech Companies Grapple with High-Risk AI as Australian Government Considers Watermarking and Labeling
The Australian government is exploring whether to require tech companies to watermark or label content generated by artificial intelligence (AI) systems, a response to high-risk AI products evolving faster than legislation. The government's aim is to balance support for the growth of AI technology with public concern over its risks.
The industry and science minister, Ed Husic, is due to release the government's response to a consultation process on safe and responsible AI in Australia. According to McKinsey research cited in the response, adopting AI and automation could boost Australia's GDP by up to $600 billion a year.
While the response recognizes the benefits of AI, it acknowledges public unease and the need for stronger regulation, particularly for high-risk applications such as self-driving cars and AI programs used to assess job candidates. Husic said the Australian public wants the risks associated with AI identified and addressed, and that there is a lack of trust in the safe and responsible design, development, deployment, and use of AI systems.
In its interim response, the government plans to establish an expert advisory group on AI policy development, including options for stricter guardrails. It also intends to create a voluntary AI Safety Standard to guide businesses integrating AI into their systems, and will consult industry stakeholders on new transparency measures.
To enhance transparency, the government proposes public reporting on the data on which AI models are trained. It is also considering watermarks or labels on AI-generated content, subject to discussions with industry. These measures, alongside existing initiatives on online safety and the use of AI in schools, reflect the government's intent to harness AI's potential while ensuring accountability and guarding against harm.
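In practice, "labeling" proposals range from statistical watermarks embedded in a model's output to cryptographically signed provenance metadata attached alongside the content, in the spirit of content-credential schemes such as C2PA. The sketch below is a minimal, illustrative take on the metadata approach only; the field names, key handling, and model_id parameter are assumptions for demonstration, not part of any scheme the Australian government has proposed.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key for illustration only; a real producer would
# manage keys via an HSM or key-management service, not a constant.
SECRET_KEY = b"example-provenance-key"

def label_content(content: bytes, model_id: str) -> dict:
    """Attach a tamper-evident 'AI-generated' label to a piece of content.

    The label binds a hash of the content to provenance metadata and signs
    the pair, so a downstream platform can check that the label actually
    describes the content it travels with.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_id,
        "ai_generated": True,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check that the label's signature and content hash both hold."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    text = b"A paragraph produced by a generative model."
    label = label_content(text, model_id="example-model-v1")
    print(verify_label(text, label))                 # True
    print(verify_label(b"tampered content", label))  # False
```

A deployable scheme would differ in at least two ways: it would use asymmetric signatures so platforms can verify labels without holding the signing secret, and it would need watermarks robust to re-encoding or cropping of the media, which a simple content hash is not.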
The consultation process surfaced concerns about AI models being used to generate deepfakes and the implications under consumer law. Healthcare-related AI models raised privacy concerns and a need for risk assessment, and the practice of training generative AI models on existing copyrighted content sparked debate over ownership rights and compensation for creators.
The Australian government's response emphasizes building safety and responsibility into the design and deployment of AI. By establishing an expert advisory group and exploring new regulations, it aims to foster innovation while addressing public concern, part of a global trend of governments grappling with the rapid advancement of AI and the need for governance frameworks to match.