The Australian Federal Police (AFP) and Monash University are teaming up to develop artificial intelligence (AI) technology that can identify images of children on the dark web. In an appeal to the public, the AFP is asking adults to donate their own childhood photos to train an AI system through a campaign called My Pictures Matter. The goal is to combat the growing prevalence of child sexual abuse material, including abusive material generated with AI.
AI advances have given rise to doctored images that make it harder for authorities to locate and rescue exploited children. The project needs at least 10,000 images of children to train the system to flag potential child-related images on the dark web and on devices seized in criminal investigations. The donated images will be used alongside purpose-built algorithms that detect sexual and violent content.
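The AFP has not published the system's technical details, but training a model on labelled example images is standard supervised learning. As a minimal, hypothetical sketch, the idea can be shown with logistic regression on synthetic feature vectors; a real system would run a deep neural network over actual photos, and every number here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image feature vectors (a real system would use
# deep-network embeddings of actual photos). The two classes are drawn
# from different distributions so a linear model can separate them.
X0 = rng.normal(loc=-1.0, size=(100, 8))
X1 = rng.normal(loc=+1.0, size=(100, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Logistic regression trained by gradient descent on the log loss.
w = np.zeros(8)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is only the shape of the pipeline: labelled examples in, a decision boundary out, after which new images can be scored automatically.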
To address privacy concerns, the AFP assures the public that the dataset of images will be stored and managed by Monash University, not the police. Once the project is complete, the dataset will not be used for any other purpose. Furthermore, those who donate their childhood photos will have the option to withdraw consent at any time.
While it is logical for police to enhance their intelligence capabilities with AI, questions arise about how the technology might be misused once in the hands of law enforcement. Such concerns are fuelled by the AFP's past dealings with facial recognition company Clearview AI, which was found to have breached Australian privacy law.
Facial recognition technology is already utilized by Australian authorities for intelligence purposes, often scanning faces from CCTV and matching them to a database of images. Critics worry about potential misidentification and infringements on the presumption of innocence. Additionally, there is a lack of adequate protections within existing privacy laws regarding facial recognition technology.
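The CCTV-matching workflow described above typically works by converting each face into a numeric embedding and comparing it against the embeddings of enrolled identities. Here is a hedged, toy illustration with random vectors standing in for real embeddings; the dimensions, similarity threshold, and `match` helper are assumptions for illustration, not any agency's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 128-dimensional embeddings for ten enrolled identities,
# normalised to unit length so dot products are cosine similarities.
# A real system would compute these with a trained face-recognition network.
database = rng.normal(size=(10, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def match(probe, db, threshold=0.6):
    """Return (best_index, score) if the probe's cosine similarity to some
    enrolled embedding exceeds the threshold, else (None, score)."""
    probe = probe / np.linalg.norm(probe)
    scores = db @ probe                      # cosine similarity to each identity
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# A lightly perturbed copy of identity 3 should match identity 3.
probe = database[3] + 0.02 * rng.normal(size=128)
idx, score = match(probe, database)
```

The threshold is where the misidentification worries enter: set it too low and unrelated faces match; set it too high and genuine matches are missed.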
The Australian Human Rights Commission (AHRC) has called for a suspension of all facial recognition technology usage until appropriate oversight bodies and technical standards are established. However, progress in protecting individuals’ rights has been slow despite the rapid advancement and widespread use of the technology.
Privacy concerns have led retailers such as Kmart and Bunnings to pause their use of facial recognition technology, even as major stadiums in Australia have deployed it without patrons' knowledge or consent.
To ensure the responsible and ethical use of the technology, it is crucial for robust privacy laws and oversight bodies to be established. Only through transparent regulations and comprehensive safeguards can the potential benefits of AI and facial recognition technology be reaped while protecting individual rights and privacy.