INTELBRIEF
November 5, 2025
As Data Centers Proliferate, Anti-AI Resistance Has the Potential to Turn Violent
Bottom Line Up Front
- Online threats to physically sabotage AI data centers, which house the servers necessary to train, deploy, and deliver AI services, have proliferated over the past year, according to surveying by The Soufan Center.
- Anti-AI resistance is not ideologically uniform, and instead has been driven by ethical, environmental, economic, and religious concerns.
- Contributing factors to potential future violent anti-AI acts and physical sabotage include concerns about AI's effects on employment and quality of life.
- Anti-AI resistance should be considered alongside heightened anti-corporate sentiment among younger generations and the politicization of the major AI companies.
Resistance to new technologies has a long history of turning extreme or violent, and negative public attitudes toward rapid AI adoption across sectors, and toward the boom in infrastructure construction that facilitates it, could give rise to a new violent strand of extremism. According to Gallup data, 61 percent of Americans believe that AI technology will destroy more jobs than it creates. At the same time, Pew Research indicates that 57 percent of Americans rate the societal risks of AI as high. Across the United States, community groups in areas where large data centers are being built to house servers for training, deploying, and delivering AI services have organized campaigns to stop construction, citing noise, environmental degradation, and concerns about electricity rates.
Online surveying by The Soufan Center of commentary related to new data center construction shows that threats to physically sabotage these critical nodes of AI infrastructure have proliferated over the past year. While resistance to rapid AI adoption need not manifest violently, it is prudent to consider what violent resistance might look like as the AI boom reconfigures labor markets and collides with the environmental and quality-of-life concerns that animate citizens globally. Potential violent anti-AI acts should also be considered in the context of heightened anti-corporate sentiment among younger generations and the intense politicization of some major AI companies, products, and executives. The current threat landscape is characterized by converging ideological, political, and socio-economic dynamics that may motivate individuals to target the AI sector in the near future.
Discussions about national security and AI have been anchored in the ongoing strategic competition between the U.S. and the People’s Republic of China (PRC), as well as in the vulnerability of data centers to sabotage by hostile nation-states. An additional threat vector that deserves further attention is the growing wave of anti-AI resistance and the potential for violent splinters in the near future. Mauro Lubrano, author of Stop the Machines: The Rise of Anti-Technology Extremism, identifies three core themes in the modern history of anti-tech resistance: first, concerns about its impact on material security including employment and livelihoods; second, threats to ontological security, as technologies are seen to erode social cohesion and our connection to the natural environment; and third, fears for existence itself, rooted in the belief that certain technologies are fundamentally incompatible with humanity.
The current skepticism surrounding widespread AI adoption is not ideologically homogeneous, and opposition has been political, religious, environmental, and economic. In commentary surveyed online calling for violent action to resist AI and the construction of AI infrastructure, there is often overlap with existing forms of extremism, such as accelerationism and eco-extremism, as well as with personal grievances. Ethical concerns about AI misuse — including surveillance, deepfakes, and the production of terrorist propaganda — have also featured in resistance against AI adoption. Potential future violent manifestations of anti-AI resistance will thus likely not be ideologically uniform or clear-cut — much like the current landscape of political violence and terrorism — and will instead be driven by a range of personal grievances, socioeconomic dynamics, and ideological persuasions that may seem incongruous.
While there is disagreement over the full impact of AI on the labor market, certain occupations are already being partly automated. Researchers at the Stanford Digital Economy Lab find that since the widespread adoption of generative AI, workers aged 22 to 25 in the most AI-exposed occupations, such as software development and customer service, have experienced a 13 percent relative decline in employment, even after controlling for firm-level shocks. How this decline may ripple out into future economic opportunities for this generation is disquieting, as graduating into a weak labor market has been correlated with lower earnings and worse employment outcomes for some time afterward. Already, opposition to the construction of data centers in the United States, including threats of sabotage, has been framed through the lens of the effects of widespread automation on material security.
Employment changes driven by AI adoption should be weighed against a host of other factors preoccupying Generation Z, such as socio-economic disparities, corruption, and high unemployment rates, which have led to recent protests in Kenya, Madagascar, and Nepal. In Nepal, demonstrators set fire to the Satungal data center in Kathmandu, disrupting internet access nationwide. While not an AI data center, such infrastructure could become a symbolic target in the near future if AI is perceived as exacerbating unemployment and socioeconomic inequality. The perceived impact of AI on employment may prove more consequential than its actual effects and could become a driver of violence. Among the most common narratives identified by The Soufan Center in anti-data center discussions were calls to destroy the centers before they destroyed people’s livelihoods.
Environmental concerns have also mobilized citizens to oppose the construction of AI data centers in their communities. These facilities are often built in areas where land is inexpensive or where companies benefit from tax incentives offered by counties seeking to attract jobs to economically disadvantaged regions. Local residents from Georgia to Michigan, however, have voiced growing alarm over the substantial water consumption required to cool data center infrastructure and remain unconvinced that the benefits of the projects outweigh their quality-of-life and environmental costs, particularly since data centers often require only limited labor once fully operational. At county town halls across the country where data centers have been proposed, opposition from residents and local environmental groups has centered on the impact of data centers' water and energy needs on local communities, often framed as an existential issue. In Virginia, which hosts more data centers than anywhere else in the world, these facilities consume up to 26 percent of the state's electricity. Analysis by Bloomberg News found that electricity rates in areas near data centers have risen by as much as 267 percent over the past five years. According to The Economist, data centers in the United States now account for five percent of all electricity used, up from two percent a decade ago. Noise from cooling systems has also sparked widespread protests by residents living near data centers, giving rise to the latest iteration of the “not in my backyard” phenomenon. That local opposition sits in tension with broader public opinion: while most Americans view AI's effects negatively across a range of factors, roughly 79 percent say it is important for the U.S. to have more advanced AI than other countries, according to Gallup.
The economic, environmental, and quality-of-life concerns about AI are colliding amid eroding trust in big corporations and tech companies among the American public, particularly younger generations. After the highly publicized arrest of Luigi Mangione, the man accused of assassinating UnitedHealthcare CEO Brian Thompson, an Emerson College poll found that 41 percent of respondents between 18 and 29 years of age said the killing was acceptable or somewhat acceptable. As of 2025, according to Gallup data, large technology companies and big businesses are among the institutions Americans trust least. Online, this has been accompanied by a growing volume of commentary condoning violence against executives at large companies. Online commentary surveyed by The Soufan Center on the construction of new data centers shows the extent to which threats of physical sabotage and resistance are anti-corporate in nature, often citing CEOs by name and referencing their relative wealth. Tech titans Mark Zuckerberg and Elon Musk are frequently the focus of online vitriol, and the real or perceived unethical conduct of companies has often been cited to condone the assassination of executives.
Low trust in tech companies cannot be disentangled from the broader political landscape. The companies driving data center expansion, including Amazon, Microsoft, Google, and Meta, have become increasingly politicized in recent years. Chatbots are accused of ideological bias, executives maintain close relations with people in political office, and the techno-libertarian politics long associated with Silicon Valley appear increasingly absent as leaders align with political parties. Unsurprisingly, anti-AI resistance has already taken on a political framing. In one such instance, protests against a new data center in Memphis built by xAI, a company founded by Elon Musk, were motivated not only by pollution concerns but also by the perceived political leanings of its founder after he became more actively involved in donating to and shaping policy in the Trump administration. One community organization, Tigers Against Pollution, for example, called the Memphis Chamber of Commerce and other institutions involved in greenlighting the project “fascist sympathizers” on Instagram. AI infrastructure will continue to be viewed through the lens of the perceived politics of its executives.
With threats of sabotage against data centers and AI companies increasingly common online, we may soon see them materialize offline. As we safeguard our AI infrastructure and tradecraft against hostile nation-states and strategic competitors, we must also recognize that these assets could become targets for domestic actors and that possible synergies between extremists and foreign adversaries could amplify the threat.