Recently, a group of influential industry figures signed an open letter calling for a six-month pause on the development of AI systems that could pose serious risks. The document, published by the Future of Life Institute, emphasizes the potential harms of artificial intelligence rather than its benefits. Despite the good intentions behind the proposal, a moratorium on AI research is not the solution: we need to increase the transparency and accountability of AI systems while developing guidelines for how they are used and deployed.
The Future of Life Institute is a non-profit organization that advocates for the responsible use of artificial intelligence. It was co-founded in 2014 by Jaan Tallinn, a computer programmer and founding engineer of Skype, and MIT physicist Max Tegmark, among others. While its mission is laudable, a moratorium on AI research is simply not feasible. Many organizations, from private companies to universities to Kaggle competitions, are researching AI across a wide range of topics. AI innovation carries tremendous promise as well as real risks, and slowing progress across the board would also hold back important advances.
Equally significant are the risks that AI already presents. AI systems already draw criticism for algorithmic discrimination, predictive policing and other practices that disproportionately affect minority communities. Yet these broad, present-day harms do not attract the same dramatic rhetoric as speculative longer-term risks such as robot uprisings and other AI-related catastrophes.
Rather than enforcing a moratorium, we must hold developers and users accountable through clear guidelines for the ethical use and deployment of AI. To this end, members of the US Congress have introduced the Algorithmic Accountability Act, which aims to address the fairness and transparency of AI systems. Similar efforts are under way in Europe and Canada, but more work is needed to ensure these systems are safe and fair, especially if we are to entrust AI with more responsibility and more sensitive data.
Moreover, AI designers should embrace a “slow AI” philosophy, a term coined by AI ethics researcher Timnit Gebru, founder of the Distributed AI Research Institute and former co-lead of Google’s Ethical AI team, which prioritizes ethical considerations in the design of AI. It takes a systemic view of AI development and calls for closer collaboration and scrutiny among all of the actors involved. Community-driven guidelines, consensus and regulation support this approach as well, such as the NeurIPS Code of Ethics, an effort I co-chair, which was developed as a commitment to ethical AI practice.
While the open letter is a step in the right direction for raising AI safety concerns, it does not account for the realities of how AI research is conducted or for how quickly the technology evolves. To ensure these systems are responsible and ethical, we must prioritize transparency and accountability, create concrete and implementable guidelines, and listen to the many researchers and practitioners who already advocate for ethical AI. Only then can we move toward the sustainable and responsible development of artificial intelligence.