The Next Step for Military AI: Integration into Work Computers


The next frontier for military AI is spying on employees, using surveillance techniques familiar from authoritarian dictatorships. Over the past decade, various companies have emerged selling employers subscriptions to services such as “open source intelligence,” “reputation management,” and “insider threat assessment.” These tools, originally developed by defense contractors for intelligence purposes, have become dramatically more powerful as deep learning and new data sources have matured. Employers can now use advanced data analytics to identify labor organizing, internal leakers, and critics of the company. The expansion and normalization of worker-tracking tools has attracted little comment, despite their ominous origins. Military-grade AI was intended to target national enemies, nominally under the control of elected democratic governments. We should all be concerned that the same systems can now be deployed by anyone able to pay.

For example, FiveCast began as an anti-terrorism startup selling to the military, but it now offers its tools to corporations and law enforcement as well. These customers can use them to collect and analyze all kinds of publicly available data, including social media posts. FiveCast has boasted that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts inside multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, such as strikes, for corporations. Network analysis tools developed to identify terrorist cells can now be used to identify key labor organizers, whom employers can then illegally fire before a union is formed. The same tools may prompt employers to screen out likely organizers during recruitment. And quantitative risk assessment strategies conceived to warn of impending attacks can inform investment decisions, such as whether to divest from regions and suppliers estimated to have a high capacity for labor organizing.
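FiveCast’s actual methods are proprietary, so the following is only a minimal, hypothetical Python sketch of the general technique the paragraph describes. All names and interaction data are invented; the point is that a standard centrality measure from off-the-shelf graph software is enough to single out the people who bridge otherwise separate groups, which is precisely what makes an organizer visible to this kind of analysis.

```python
# Illustrative sketch only -- not any vendor's actual algorithm.
# All accounts and interactions below are invented for the example.
import networkx as nx

# Hypothetical edges: pairs of accounts that frequently interact
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("dave", "erin"), ("erin", "frank"),
    ("alice", "erin"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# Betweenness centrality scores nodes by how often they sit on the
# shortest paths between others -- a standard way to find "brokers"
# who connect otherwise separate clusters. The same math that flags
# a cell's coordinator would flag a workplace organizer.
centrality = nx.betweenness_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

In principle the same few lines scale to graphs with thousands of accounts; the hard part for a vendor is data collection, not the analysis itself.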


The capabilities of these tools are growing rapidly. Vendors are advertising that next-generation AI will soon be built into their surveillance products. New features promise to make exploring varied data sources easier through prompting, but the ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system. It is not clear, however, that these tools can live up to their hype. Network analysis methods assign risk by association, which means individuals can be flagged simply for following a particular page or account. The systems can also be fooled by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, such as deep learning, to identify content that appears angry, on the assumption that anger signals complaints that could lead to unionization. Yet emotion detection has repeatedly been shown to be biased and built on faulty assumptions.
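To make the “risk by association” weakness concrete, here is a toy Python sketch (entirely invented data and scoring, not any vendor’s algorithm). A user inherits a risk score from anything flagged that they follow, with no reference to what they actually said or did:

```python
# Toy illustration of "risk by association." All names, follows,
# and scores are invented; no real system is being reproduced here.
follows = {
    "alice": ["union_news_page", "cat_pictures"],
    "bob":   ["cat_pictures"],
    "carol": ["union_news_page"],
}
seed_risk = {"union_news_page": 1.0}  # a page the system has flagged

def associative_risk(user, decay=0.5):
    """Score a user purely by the flagged accounts they follow (one hop)."""
    return max((decay * seed_risk.get(f, 0.0) for f in follows[user]),
               default=0.0)

for user in follows:
    print(user, associative_risk(user))
# alice and carol score 0.5 merely for following one page; bob scores
# 0.0. Nothing here measures behavior, intent, or even authorship --
# which is exactly why such flags are easy to game with fake content.
```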

Moreover, these subscription services work even when they do not work. It may not matter whether an employee tarred as a troublemaker is truly disgruntled; executives and corporate security could still act on the accusation and unfairly retaliate. Vague aggregate judgments of a workforce’s “emotions” or of a company’s public image are presently impossible to verify as accurate. And the mere presence of these systems likely has a chilling effect on legally protected behaviors, including labor organizing. Although unionization drives are already monitored by some big companies such as Amazon, the transfer of these informational munitions into private hands should prompt a public conversation about whether that is wise and about the need to protect workers’ privacy.


Frequently Asked Questions (FAQs)

What are some of the tools and services that companies are using to spy on their employees?

Companies are using services such as “open source intelligence,” “reputation management,” and “insider threat assessment,” which were originally developed by defense contractors for intelligence purposes.

What kind of data can these tools collect and analyze?

These tools can collect and analyze all kinds of publicly available data, including social media posts. They can identify networks of people, read text inside images, and detect objects, images, logos, emotions, and concepts inside multimedia content.

How are these tools being used to monitor employees and what are some of the potential consequences?

Employers are using these tools to identify labor organizing, internal leakers, and critics of the company. They can use them to fire key labor organizers, to screen such organizers out during hiring, or to make investment decisions based on a region’s or supplier’s estimated capacity for labor organizing. These tools may also have a chilling effect on legally protected behaviors, including labor organizing, and can lead to unfair retaliation against employees.

Is there a risk that these tools could be used to violate workers' privacy?

Yes, the use of these tools can raise concerns about workers' privacy, as well as potential bias and faulty assumptions in their analysis methods. The mere presence of these systems could have a chilling effect on legally protected behaviors, such as labor organizing, or lead to unfair retaliation against employees.

What is the next generation of surveillance tools expected to include?

Vendors are expected to add next-generation AI technologies that make exploring varied data sources easier through prompting. The ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system.

What should be done to protect workers' privacy and prevent the misuse of these tools?

The transfer of these informational tools into private hands should prompt a public conversation about whether it is wise and about the need to protect workers’ privacy. Their use should be subject to regulation and oversight to prevent misuse and to protect workers’ rights.


Advait Gupta
