The Department of Home Affairs is taking a firm stance on the use of the chatbot technology ChatGPT. Secretary Mike Pezzullo told Senate estimates yesterday that he was deeply concerned about the potential for artificial intelligence to be used outside any corporate governance structure. As a result, he has issued an internal directive that blocks individual use of ChatGPT and sets a high bar for experimentation with the technology.
Justine Saunders, the department's Chief Operating Officer, clarified that Home Affairs has not banned experimentation with ChatGPT, but said any proposed use would need to be approved through a business case. She also stressed that neither ChatGPT nor any other AI technology should be used to make actual decisions, and that no questions put to ChatGPT may involve information pertaining to the department.
Mr Pezzullo was clear that he did not want use of ChatGPT left to individual discretion, and that a request citing productivity alone would not meet the standard of a business case. He also noted that while large language models and machine learning may have production use cases in the department, these would typically be accessed through proprietary arrangements the department can control.
The Department of Home Affairs is looking to establish a whole-of-government position on the use of ChatGPT, but in the meantime it is taking a cautious approach and blocking individual use. Although large language models and machine learning are seen as having potential in the department, the first step is evidently to create a safe and secure system for their use.