New technology advancements continue to push the boundaries of what is possible with AI-powered drones. Recently, entrepreneurs and engineers Luis Wenus and Robert Lukoszko demonstrated how easy it is to create AI drones capable of hunting and potentially harming individuals.
Wenus and Lukoszko configured a small drone to track people using facial recognition and artificial intelligence. They built what they described as an AI-steered homing/killer drone in just a few hours, highlighting how quickly such technology can be assembled.
The potential implications of this technology are unsettling, especially given the lack of anti-drone systems in place to prevent misuse. Wenus cautioned that these drones could be weaponized by attaching explosives to them, making them a potential threat in public spaces and at large events.
While Ukraine has already begun using similar technology for military purposes, the ease of building such drones raises concerns about future terror attacks using this type of tech. Wenus emphasized that while building these drones currently requires some technical knowledge, the capability is becoming increasingly accessible to individuals with malicious intent.
Despite advocating for open-source sharing of code, Wenus chose not to release the code for this particular drone because of its potential dangers. He noted that while the technology is straightforward to code, distributing it could pose significant risks.
As AI drone technology becomes more accessible, it is crucial for policymakers and authorities to address the security threats these developments pose. Drones that can easily track and target individuals represent a new challenge that must be met with appropriate regulations and safeguards to ensure public safety.