The discourse around artificial intelligence has hit a fever pitch recently, with leading technology figures, including Elon Musk and Steve Wozniak, warning of a future where machines make all the decisions. This has raised the issue of responsible AI and led figures such as Musk to call for a temporary pause on the development of new AI models.
However, the key to responsible AI is no secret; experts have known it for years. To ensure that AI systems behave as expected, humans must be part of the process. Oded Netzer, a professor at Columbia Business School, puts it this way: “We need to have humans involved in the training, and even code ahead of time, how machines should act in times of a moral quandary.” To avoid systemic discrimination or other harmful behavior, industry professionals must build the desired behavior into AI systems from the start and continue to monitor it as those systems develop.
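To make the idea concrete, here is a minimal sketch of what keeping a human in the process can look like in practice: predictions the model is unsure about are routed to a human reviewer rather than acted on automatically. All names and thresholds here (classify, route, CONFIDENCE_THRESHOLD) are hypothetical illustrations, not taken from any particular company's system.

```python
# A minimal human-in-the-loop sketch: low-confidence predictions are
# deferred to a human reviewer instead of being applied automatically.
# The classifier below is a trivial stub so the example runs end to end;
# all names and the threshold are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for automatic action

def classify(text: str) -> tuple[str, float]:
    """Stand-in for a real model; returns a label and a confidence score."""
    if "refund" in text.lower():
        return "billing", 0.95
    return "other", 0.55

def route(text: str, review_queue: list) -> str:
    label, confidence = classify(text)
    if confidence < CONFIDENCE_THRESHOLD:
        # The machine is unsure: defer to human judgment.
        review_queue.append((text, label, confidence))
        return "needs_human_review"
    return label

queue: list = []
print(route("Please process my refund", queue))    # billing (acted on)
print(route("Something ambiguous happened", queue))  # needs_human_review
print(queue)  # items now awaiting a human decision
```

The design choice is the point: rather than letting the model decide everything, the system pre-codes where machine judgment ends and human judgment begins.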
Unfortunately, as the Washington Post has reported, many big tech companies such as Microsoft, Google, and Meta have been cutting their AI ethics staff, at precisely the moment those teams are needed most. AI ethicists do not just sit around and talk about philosophy; they work to ensure that AI does not have a negative impact on the world. They give the algorithm context and can spot inaccuracies or biased results.
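As one illustration of the kind of check such a reviewer might run, the sketch below compares a model's positive-outcome rates across demographic groups and flags gaps using the widely cited four-fifths rule of thumb. The data and group labels are invented for illustration; this is one common audit, not the only one ethicists use.

```python
# A minimal bias-audit sketch: compare positive-outcome rates per group
# and flag possible disparate impact under the "four-fifths" heuristic.
# The decisions list is made-up example data.

from collections import defaultdict

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, outcome in decisions:
    counts[group][0] += outcome
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "FLAG: possible disparate impact"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

A human still has to interpret the flag, which is exactly why these teams matter: the number only raises the question; context answers it.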
Kriti Sharma, chief product officer of legal tech at Thomson Reuters, makes a similar point: building a successful digital world requires bringing together people from different backgrounds. Having AI experts involved throughout development also means the process is fully understood and the model can be held to a clear social standard.
Overall, responsible AI is achievable, but it requires humans to be involved in the process. With AI ethicists on board, we can help ensure AI systems act ethically, fairly, and without discrimination. Cutting back on AI ethics teams, as many tech companies have done, is a mistake that could have serious consequences in the future. The truth is that keeping an expert in the loop is the key to making sure AI operates safely and within the limits of expected behavior.