The European Union (EU) has taken a significant step toward regulating advanced AI models such as the one powering ChatGPT, marking a major milestone for artificial intelligence governance. According to a document seen by Bloomberg, the EU has reached a preliminary deal that imposes certain limitations on how such models may operate. To ensure transparency and accountability, the EU would require all developers of general-purpose AI systems to meet specific requirements: maintaining an acceptable-use policy, keeping training information up to date, providing a detailed summary of the data used for training, and adopting a policy to respect copyright law. Additionally, models deemed to pose a systemic risk would be subject to further rules. That determination would be based on the computing power used during training, with the threshold set at more than 10 trillion trillion (10^25) operations. Experts suggest that, at present, OpenAI’s GPT-4 would be the only model meeting this threshold.
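The systemic-risk rule described above amounts to a simple threshold test on cumulative training compute. A minimal sketch of that classification is shown below; the function name and the example compute figures are illustrative assumptions, not values from the EU text or from OpenAI:

```python
# Illustrative sketch of the EU's reported systemic-risk compute threshold.
# The figure of 10 trillion trillion operations corresponds to 1e25.
SYSTEMIC_RISK_THRESHOLD_OPS = 1e25

def poses_systemic_risk(training_compute_ops: float) -> bool:
    """Return True if a model's training compute exceeds the reported threshold.

    `training_compute_ops` is the total number of operations used during
    training (a hypothetical input for illustration).
    """
    return training_compute_ops > SYSTEMIC_RISK_THRESHOLD_OPS

# Hypothetical examples: a model trained above the threshold and one below it.
print(poses_systemic_risk(2e25))  # above the threshold -> True
print(poses_systemic_risk(1e24))  # well below it -> False
```

Under this reading, only models whose training compute strictly exceeds 10^25 operations would trigger the additional obligations.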
In a separate development, recent reports indicate that the Israel Defense Forces (IDF) have been employing AI-based targeting systems during the conflict with Hamas in Gaza. Allegedly, the IDF’s AI system has been used to identify potential targets for bombing, establish connections between locations and Hamas operatives, and estimate potential civilian casualties. This raises important questions about the implications of using AI targeting systems in warfare. Military forces around the world utilize remote and autonomous systems to amplify their impact and safeguard the lives of their soldiers. AI systems have the potential to enhance efficiency, speed, and lethality while reducing the direct involvement of humans on the battlefield. However, such advancements also raise ethical concerns. Will the application of AI in warfare reinforce existing ethical thinking, or will it contribute to the dehumanization of adversaries and further disconnect war from the societies in whose name it is fought?
Meanwhile, the Indian Army has begun testing a groundbreaking AI-enabled smart scope for guns that could turn any soldier into a highly accurate marksman. Lieutenant Colonel Nipun Sirohi, an official from the Indian Army, said the tool significantly improves shooting accuracy by indicating precisely when a shot should be fired. The smart scope has been tested at various distances, yielding a success rate of 80-90 percent. This technology has the potential to transform combat scenarios, delivering better performance and superior marksmanship for soldiers.
For in-depth insights and discussions on artificial intelligence from policymakers and AI experts, make sure to tune into the forthcoming ‘CNBC-TV18 & MONEYCONTROL GLOBAL AI CONCLAVE’ on December 16. This event promises engaging and enlightening conversations on the current and future impact of AI across various domains.
As the EU takes regulatory steps, Israel deploys AI in warfare, and India pioneers AI-enabled smart scopes, the world continues to witness the growing influence and significance of artificial intelligence. These developments hold both promises and challenges, necessitating careful consideration of ethical implications, accountability, and the potential long-term consequences of integrating AI into various aspects of human life.