UN Secretary-General António Guterres has expressed serious concern over reports that Israel is using artificial intelligence (AI) to identify targets in Gaza. The use of AI in targeting decisions has raised alarms over the potential for high civilian casualties, particularly in densely populated residential areas.
According to a report in the Israeli magazine +972, the Israeli military has employed AI technology to pinpoint targets in Gaza with minimal human oversight, at times approving strikes in as little as 20 seconds. The approach has drawn criticism for delegating life-and-death decisions to algorithms, with little consideration for the impact on families and civilians.
The report alleges that the Israeli army used an AI targeting system to mark tens of thousands of Gazans as suspects for assassination, and that limited human review combined with a permissive policy on collateral damage led to civilian casualties.
In a rare admission of fault, Israel acknowledged mistakes and violations in the deaths of seven aid workers in Gaza, saying its forces had misidentified the individuals as armed Hamas operatives.
The revelations have sparked debate over the ethical implications of AI-driven targeting and its consequences for civilian lives. The UN chief's concerns reflect broader unease over the growing reliance on technology in military operations and the need for greater accountability and safeguards to protect innocent lives in conflict zones.