
ChatGPT app deletions jump 295% in wake of U.S. Defense Department deal

by Jon Weatherhead | 1 month ago | 13 min read

ChatGPT is facing its fiercest consumer backlash yet after news of a high‑stakes deal with the U.S. Department of Defense, now rebranded as the Department of War, triggered a 295% spike in app uninstalls in the United States over a single day. The sharp reaction has ignited a global debate over how far big AI labs should go in working with the military and whether users are willing to stay on board when those red lines are crossed.

A 295% uninstall spike: what the data shows

According to mobile intelligence firm Sensor Tower, U.S. uninstalls of the ChatGPT app on Saturday, February 28, jumped 295% day‑over‑day, compared with an average daily uninstall rate of around 9% over the previous month. This means that, relative to a typical day, nearly four times as many users removed the app from their phones in the immediate aftermath of the defense deal becoming public.

The backlash also hit new installations. After a 14% day‑over‑day rise in U.S. downloads on Friday, February 27, ChatGPT’s download growth flipped into reverse as the deal made headlines. On Saturday, U.S. downloads fell 13% day‑over‑day, followed by another 5% drop on Sunday, indicating that not only were existing users leaving, but new users were hesitating to come on board.

Ratings data underscores the shift in sentiment. One‑star reviews for the ChatGPT app surged 775% on Saturday and then doubled again on Sunday, while five‑star reviews dropped by around 50% over the same period, according to Sensor Tower figures cited by multiple outlets. The pattern suggests that uninstall behavior was driven less by product performance and more by a moral and political reaction to OpenAI’s decision to formally align part of its technology stack with the U.S. military apparatus.
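The day‑over‑day figures above compound in a way that is easy to misread, so here is a quick sketch of the arithmetic. The baseline values (1,000 uninstalls, 100 one‑star reviews) are purely hypothetical and chosen for readability; only the percentage changes come from the reported Sensor Tower data.

```python
# Illustrative arithmetic for day-over-day percentage changes.
# Baseline values are hypothetical; only the growth rates are from the article.

def apply_change(value: float, pct_change: float) -> float:
    """Apply a day-over-day percentage change (e.g. +295 or -13) to a value."""
    return value * (1 + pct_change / 100)

baseline_uninstalls = 1000                  # hypothetical daily uninstalls
saturday_uninstalls = apply_change(baseline_uninstalls, 295)
print(saturday_uninstalls)                  # 3950.0 -> roughly 4x the baseline

baseline_reviews = 100                      # hypothetical daily one-star reviews
sat_reviews = apply_change(baseline_reviews, 775)   # +775% on Saturday
sun_reviews = apply_change(sat_reviews, 100)        # doubled again on Sunday
print(sun_reviews)                          # 1750.0 -> ~17.5x baseline in two days
```

This is why a "+295% jump" translates to "nearly four times as many" uninstalls: a 295% increase means the new value is 100% + 295% = 395% of the old one.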

Key numbers at a glance

| Metric (U.S.) | Pre‑deal trend (Feb 27) | After deal (Feb 28–Mar 1) |
| --- | --- | --- |
| Uninstall rate (day‑over‑day) | ~9% baseline | +295% on Feb 28 |
| Downloads (Feb 27 → Feb 28) | +14% | −13% (Feb 28) |
| Downloads (Mar 1) | — | −5% day‑over‑day |
| One‑star reviews (Feb 28) | Baseline | +775% |
| One‑star reviews (Mar 1) | — | +100% vs previous day |
| Five‑star reviews (weekend) | Baseline | −50% |

Inside the OpenAI–Pentagon deal

The uninstall wave followed OpenAI’s confirmation that it had reached a long‑anticipated agreement with the Pentagon, formally referred to by the Trump administration as the Department of War, to deploy its models inside classified government networks.

In a widely shared post on X, OpenAI CEO Sam Altman framed the deal as both a security and safety milestone. “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote. “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”

Altman stressed that two of OpenAI’s “most important safety principles” formed the core of the agreement:

● “Prohibitions on domestic mass surveillance.”

● “Human responsibility for the use of force, including for autonomous weapon systems.”

“We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” he added, positioning the deal as a template rather than an exception.

The New York Times reported that after a weekend of criticism, OpenAI amended the agreement to include “additional protections to prevent the use of the company’s technology for the mass surveillance of Americans,” a concession clearly aimed at addressing one of the most explosive concerns raised by civil society groups and rival Anthropic.

What the Pentagon gets

While key technical details remain classified, reporting from U.S. media suggests the Defense Department will gain access to OpenAI’s frontier models within secure, classified environments for tasks ranging from intelligence analysis to logistics planning and war‑gaming. Officials have argued that AI is essential to maintaining a strategic edge against adversaries and that refusing to work with leading labs would amount to unilateral disarmament in a fast‑moving technological arms race.

OpenAI has emphasized that its systems will not be used for fully autonomous weapons that operate without human oversight, nor for domestic dragnet surveillance of U.S. citizens, pointing to both legal constraints and its own internal policies. However, critics note that both categories are notoriously hard to define in practice, especially once AI systems are integrated into sprawling, classified defense workflows.

Anthropic’s refusal and Claude’s surge

The fallout around ChatGPT cannot be understood without looking at Anthropic, the startup behind Claude, which has taken a very different public line on the Pentagon’s demands. In the week leading up to OpenAI’s deal, Anthropic CEO Dario Amodei said his company “cannot in good conscience accede” to a revised Pentagon framework that, in his view, would have enabled mass surveillance and fully autonomous weapons despite ostensibly narrow safeguards.

Anthropic has consistently pushed two bright‑line limits in its talks with defense officials:

● No use of its AI systems for mass surveillance of Americans.

● No deployment in fully autonomous weapons that operate without human involvement.

In a detailed statement, Amodei argued that while AI can “play a critical role in defending democratic nations and countering authoritarian threats,” today’s frontier systems are “simply not reliable enough to power fully autonomous weapons” and could undermine democratic principles if used for indiscriminate surveillance. He warned that newly proposed Pentagon language, “framed as compromise,” contained legal carve‑outs that would allow those safeguards “to be disregarded at will.”

That stance resonated with many consumers. After Anthropic publicly confirmed that it had walked away from the Pentagon’s latest terms, Claude’s U.S. downloads jumped sharply. Sensor Tower data, cited by TechCrunch and others, shows:

● Claude downloads in the U.S. rose 37% day‑over‑day on Friday, February 27.

● They climbed another 51% on Saturday, February 28, as the ChatGPT–DoW deal went public.
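Those two back‑to‑back gains compound: a +37% day followed by a +51% day more than doubles the starting figure. A minimal back‑of‑envelope sketch, using a hypothetical Thursday download count (only the growth rates come from the reporting):

```python
# Back-of-envelope compounding of Claude's reported day-over-day gains.
# The starting count is hypothetical; the +37% and +51% rates are as reported.

start = 10_000                  # hypothetical Thursday (Feb 26) download count
friday = start * 1.37           # +37% day-over-day (Feb 27)
saturday = friday * 1.51        # +51% day-over-day (Feb 28)

cumulative_gain = saturday / start - 1
print(f"{cumulative_gain:.0%}")  # the two jumps compound to roughly +107%
```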

By March 2, Claude had reached the No. 1 spot on the U.S. App Store, leaping more than 20 places in about a week, and had become the top free iPhone app in several other countries, including Canada, Germany, Norway and Switzerland, according to third‑party estimates. Appfigures and Similarweb both recorded that Claude’s U.S. daily downloads surpassed ChatGPT’s for the first time, with Claude’s February figures running around 20 times higher than in January.

Parallel trajectories in the app stores

| App | Indicator (U.S.) | Change after DoW news |
| --- | --- | --- |
| ChatGPT | Uninstalls | +295% (Feb 28 vs prev. day) |
| ChatGPT | Downloads | −13% (Sat), −5% (Sun) |
| ChatGPT | One‑star reviews | +775% Sat, +100% Sun |
| Claude | Downloads (Feb 27) | +37% |
| Claude | Downloads (Feb 28) | +51% |
| Claude | U.S. App Store rank | Climbs to No. 1 by Mar 2 |
| Claude | Global iOS free‑app ranking | No. 1 in multiple countries |

The combined effect is a real‑time case study in how ethics positioning, as much as feature sets, can move the needle in the consumer AI market. Users did not just leave ChatGPT; many appear to have actively chosen Claude as the “ethical alternative,” even though the underlying functionality overlaps.

Consumer backlash and online protest

On social platforms, users documented their decisions to delete ChatGPT and switch to rival apps, often framing the move as a protest against the normalization of AI‑driven warfare and potential domestic surveillance. Many critics argued that a company that once cast itself as a cautious steward of “AI for all humanity” appeared increasingly comfortable trading that image for access to lucrative, secretive defense contracts.

App store reviews reflected this anger. New one‑star reviews mentioned terms like “war machine,” “surveillance,” and “military AI,” according to snippets collated by app‑analytics firms. Some users wrote that while they accepted limited cooperation between AI firms and governments, the explicit embrace of a “Department of War” brand, a rebranding favored by the current administration, had crossed an emotional line.

Civil liberties advocates also weighed in. Digital rights groups raised alarms that even with written safeguards, the combination of classified deployments and broad “lawful purpose” language in defense contracts could, over time, erode the very protections that OpenAI and Anthropic say they are trying to enshrine. They warned that because many of the key systems are opaque, independent verification of how these models are used in practice will be extremely difficult.

At the same time, a smaller but vocal camp defended OpenAI’s decision, arguing that advanced AI will inevitably play a central role in national security and that it is better for relatively safety‑focused firms to be at the table than to leave the field to less constrained actors. This group often cited the risk of authoritarian regimes deploying similar or more powerful models without any of the public debate currently roiling the U.S. tech sector.

OpenAI under pressure to revise terms

The weekend’s uninstall numbers and ratings plunge added commercial urgency to what had already been a high‑stakes political and ethical fight. Within days, OpenAI quietly moved to adjust its agreement with the Pentagon. The New York Times reported that the company amended the pact to include “enhanced safeguards” explicitly aimed at preventing misuse of its AI for mass surveillance of Americans, tightening language that critics said was previously too permissive.

OpenAI has not published the full text of the revised agreement, but Altman reiterated in public comments that the company’s red lines include:

● No use of its systems for domestic mass surveillance.

● Human decision‑making remaining central in any use of force.

“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome,” Altman said, presenting the Pentagon as willing to align with these boundaries. He argued that the company remains committed to “AI safety and wide distribution of benefits” and is seeking to de‑escalate tensions “away from legal and governmental actions and towards reasonable agreements.”

However, the revisions have not fully quelled skepticism. Critics ask how enforceable such clauses are in the context of classified operations, and whether they can realistically be monitored by external stakeholders, including OpenAI’s own employees. The episode has intensified calls for more transparent, democratically accountable guardrails around how cutting‑edge AI is integrated into military and intelligence workflows.

What this means for the AI industry

The ChatGPT uninstall surge is more than a short‑term reputational hit; it may be an early signal of how public opinion will shape the next phase of the AI race. For years, ethical AI debates have unfolded largely in policy circles and research labs. Now, app store rankings, uninstall charts and ratings spikes show that ordinary consumers are voting with their thumbs.

Several trends stand out. Ethics is emerging as a real market differentiator: Anthropic’s refusal to accept Pentagon terms it viewed as inconsistent with its safety principles appears to have given Claude a short-term growth boost. This suggests that clearly demonstrated safety commitments can translate into competitive advantage.

At the same time, the balance between national security partnerships and user trust is proving delicate. OpenAI’s closer alignment with defense institutions reflects a belief that such engagement is inevitable for frontier labs, yet recent numbers show how quickly trust can erode if safeguards are seen as weak. Efforts to standardize safety terms across AI contracts may shape future deals, but if those standards are viewed as insufficient, rivals can continue positioning themselves as the more principled alternative.

Meanwhile, the Pentagon now finds itself in the crosshairs of a public relations battle it did not fully control. Its clash with Anthropic, detailed in reporting that described ultimatums and threats to invoke the Defense Production Act, painted a picture of an institution eager to expand AI access even at the cost of alienating some of its most safety‑conscious suppliers. That could complicate future efforts to present U.S. defense AI policy as both robust and rights‑respecting.

The road ahead

For OpenAI, the immediate question is whether the uninstall spike represents a short‑lived flare‑up or the start of a sustained erosion in consumer loyalty. Historical experience from other tech controversies suggests that some portion of users will quietly return if the product remains indispensable, but others may permanently migrate to alternatives, especially if those alternatives are able to match or surpass ChatGPT on quality and ease of use.

More broadly, the episode may force all major AI labs, including OpenAI, Anthropic and rivals like xAI, to clarify where they draw the line on military, intelligence and law‑enforcement work, and how those boundaries are communicated to the public. As AI systems become more capable and more tightly woven into state power, the costs of ambiguity will only grow.

The surge in ChatGPT uninstalls after the DoW deal is thus both a statistic and a warning: in the new era of consumer AI, ethical red lines are no longer an abstract talking point; they may be a core driver of who wins and who loses in the marketplace.