Microsoft has recently partnered with Tech Against Terrorism to develop an artificial intelligence (AI) tool that aims to detect and combat extremist content online. Tech Against Terrorism, a non-profit organization launched by the United Nations in 2016, will collaborate with Microsoft to create an AI-powered tool that can identify potentially harmful content, which will then be reviewed by humans.
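The flag-then-review workflow described above could be sketched roughly as follows. The threshold, scores, and queue here are hypothetical illustrations of the general pattern, not details of the actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Items the AI flags land here for human review; nothing is removed automatically."""
    items: list = field(default_factory=list)

    def submit(self, content_id: str, score: float) -> None:
        self.items.append((content_id, score))

def triage(content_id: str, model_score: float, queue: ReviewQueue,
           threshold: float = 0.8) -> bool:
    """Route content whose model score exceeds the threshold to human reviewers.

    model_score: a hypothetical classifier's probability that the content is harmful.
    Returns True if the item was flagged for review.
    """
    if model_score >= threshold:
        queue.submit(content_id, model_score)
        return True
    return False

queue = ReviewQueue()
triage("post-001", 0.93, queue)   # above threshold -> queued for human review
triage("post-002", 0.12, queue)   # below threshold -> not queued
```

The key design point the article describes is that the AI only surfaces candidates; the final judgment stays with human reviewers.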
The tool will first be used to enhance Tech Against Terrorism’s Terrorist Content Analytics Platform (TCAP), a repository of verified terrorist content from designated extremist organizations. Drawing on TCAP’s data, Microsoft’s Azure AI Content Safety (ACS) service will improve its ability to flag potential terrorist content across media formats, including text, images, and video.
During the pilot phase, Tech Against Terrorism and Microsoft will focus on establishing an evaluation framework for the accuracy of the tool’s detections. They will assess whether the tool identifies flagged content correctly and without bias, and measure how often it misses harmful material (false negatives) or flags benign material (false positives). If the pilot succeeds, the aim is to make the tool available to smaller platforms and non-profit organizations.
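An evaluation framework like the one described typically reduces to comparing the tool's flags against human review verdicts and computing precision (how many flags were correct) and recall (how much harmful content was caught). The sample data below is hypothetical, not from the pilot:

```python
def evaluate_flags(predictions, labels):
    """Compare tool flags against human review labels.

    predictions, labels: lists of booleans (True = harmful).
    Low recall signals under-detection (false negatives);
    low precision signals over-flagging (false positives).
    """
    tp = sum(p and l for p, l in zip(predictions, labels))       # correct flags
    fp = sum(p and not l for p, l in zip(predictions, labels))   # false positives
    fn = sum(not p and l for p, l in zip(predictions, labels))   # missed content
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical pilot batch: tool flags vs. human review verdicts.
tool_flags   = [True, True, False, True, False, False]
human_labels = [True, False, True, True, False, False]
precision, recall = evaluate_flags(tool_flags, human_labels)
```

Bias analysis would add a further step the sketch omits: computing these same metrics per language or per community and checking that they do not diverge.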
Adam Hadley, the Executive Director of Tech Against Terrorism, stated that this project intends to explore how AI technologies can revolutionize the approach to digital safety risks while respecting human rights. The use of reliable and trustworthy AI systems could significantly improve the detection of harmful content, including content generated by AI itself. This nuanced and globally scalable approach would then enable more effective human review processes for such content.
Tech Against Terrorism has already archived over 5,000 instances of AI-generated extremist content shared in terrorist and violent extremist spaces, and it continues to uncover more each year. With the rise of generative AI, there is concern that such content will become a substantial medium- to long-term threat. Tech Against Terrorism has documented extremist groups exploiting AI tools to amplify their propaganda, including translating leadership messages and creating propaganda posters.
One emerging challenge for Tech Against Terrorism is that AI-generated content can evade hash-based detection tools. Hash matching recognizes only exact or near-exact copies of known material, so as AI-generated variations of existing and new content become more prevalent, these detection methods risk becoming obsolete.
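A minimal sketch of why exact hash matching is brittle against variants, using SHA-256 as a stand-in (real shared hash databases often use perceptual hashes, which tolerate small edits but can still be defeated by larger AI-driven rewrites):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Exact cryptographic hash, as used to fingerprint known content."""
    return hashlib.sha256(data).hexdigest()

known_propaganda = b"known extremist propaganda text"

# A hash database matches verbatim reshares of known content...
assert content_hash(known_propaganda) == content_hash(known_propaganda)

# ...but even a one-byte change (let alone an AI-generated paraphrase)
# yields a completely different hash, so the variant slips past matching.
variant = known_propaganda + b"."
assert content_hash(variant) != content_hash(known_propaganda)
```

This avalanche property is exactly what makes cryptographic hashes good fingerprints for exact copies and poor detectors for near-duplicates, which is why AI-generated variation is a structural problem for this class of tooling.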
Brad Smith, Vice Chair and President at Microsoft, emphasized the urgency of combating the spread of extremist content on digital platforms. He believes that by combining Tech Against Terrorism’s expertise with AI capabilities, it will be possible to create a safer online environment.
This collaboration between Microsoft and Tech Against Terrorism represents a significant step forward in the ongoing battle against online extremism. By leveraging AI technology and human review processes, the ultimate goal is to mitigate the dissemination of harmful content and protect individuals from its influence.