The next frontier for military AI is spying on employees, using surveillance techniques familiar to authoritarian dictatorships. Various companies have emerged in the past decade selling employers subscriptions to services such as “open source intelligence,” “reputation management,” and “insider threat assessment.” These tools, originally developed by defense contractors for intelligence purposes, have become dramatically more sophisticated as deep learning and new data sources have made them more powerful. Employers can now use advanced data analytics to identify labor organizing, internal leakers, and critics of the company. The expansion and normalization of tools used to track workers has attracted little comment, despite their ominous origins. Military-grade AI was intended to target national enemies, nominally under the control of elected democratic governments. We should all be concerned that the same systems can now be deployed by anyone able to pay.
For example, FiveCast began as an anti-terrorism startup selling to the military, but it now offers its tools to corporations and law enforcement, which can use them to collect and analyze all kinds of publicly available data, including social media posts. FiveCast boasts that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts inside multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, such as strikes, for corporations. Network analysis tools developed to identify terrorist cells can now be used to identify key labor organizers, whether so that employers can illegally fire them before a union is formed or so that suspected organizers can be screened out during recruitment and never hired at all. Additionally, quantitative risk assessment strategies conceived to warn against impending attacks can inform investment decisions, such as whether to divest from regions and suppliers estimated to have a high capacity for labor organizing.
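To make the mechanism concrete, here is a minimal sketch of how network analysis can surface “key” individuals in a social graph. FiveCast’s actual methods are proprietary and undisclosed; the follower graph, the use of the networkx library, the choice of betweenness centrality, and the flagging threshold below are all illustrative assumptions.

```python
# A minimal sketch of flagging "key" accounts in a social graph.
# The graph, account names, and threshold are hypothetical.
import networkx as nx

# Hypothetical follower/interaction graph among employee accounts
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("dave", "erin"), ("erin", "frank"),
])

# Betweenness centrality measures how often an account sits on the
# shortest paths between others; high scores are read (rightly or
# wrongly) as signs of a broker or organizer.
centrality = nx.betweenness_centrality(G)
flagged = [name for name, score in centrality.items() if score > 0.3]
print(flagged)  # ['alice', 'dave', 'erin'] for this toy graph
```

The point of the sketch is that the math is indifferent to intent: a well-connected account scores high whether its owner is organizing a union, recruiting for a terrorist cell, or simply popular.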
The capabilities of these tools are growing rapidly. Companies advertise that they will soon include next-generation AI in their surveillance products, with new features that promise to make exploring varied data sources easier through prompting; the ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system. It is not clear, however, that these tools can live up to their hype. Network analysis methods assign risk by association, which means that individuals could be flagged simply for following a particular page or account, as the sketch below illustrates. These systems can also be tricked by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, such as deep learning, to identify content that appears angry, on the assumption that angry posts signal complaints that could result in unionization. Emotion detection of this kind has been shown to be biased and built on faulty assumptions.
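A toy example shows how blunt risk by association can be. Everything here is hypothetical, including the watchlisted page, the accounts, and the follow data, but the logic mirrors the flaw described above: a single follow is enough to get flagged.

```python
# A toy illustration of "risk by association." Every account and
# follow relationship here is hypothetical.
watchlist = {"union_news_page"}  # pages whose followers get flagged

# Hypothetical mapping of employees to the accounts they follow
follows = {
    "alice": {"union_news_page", "local_sports"},
    "bob":   {"local_sports"},
    "carol": {"union_news_page"},
}

# One overlapping follow is enough: no intent, context, or content
# is ever examined before an employee is marked as a risk.
flagged = {user for user, pages in follows.items() if pages & watchlist}
print(sorted(flagged))  # ['alice', 'carol']
```

Nothing in this logic distinguishes an organizer from a curious reader, which is precisely the problem. And because the inputs are scraped posts and follows, fabricated accounts generated at scale would be scored in exactly the same way.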
Moreover, these subscription services work even if they do not work. It may not matter whether an employee tarred as a troublemaker is truly disgruntled; executives and corporate security could still act on the accusation and unfairly retaliate. Vague aggregate judgments of a workforce’s “emotions” or a company’s public image are presently impossible to verify as accurate, and the mere presence of these systems likely has a chilling effect on legally protected behaviors, including labor organizing. Although big companies such as Amazon already monitor unionization efforts, the transfer of these informational munitions into private hands should prompt a public conversation about their wisdom and about the need to protect workers’ privacy.