And China is betting everything on abundance.

Every week, a new headline declares victory in the “AI war.” OpenAI releases GPT-5. Google unveils Gemini Ultra. China’s DeepSeek drops a model that matches its US rivals at a fraction of the cost. Commentators rush to declare a winner based on benchmark scores or chatbot personality tests.
They are missing the point entirely.
The real AI war between the United States and China is not about who builds the smartest chatbot. It is not about open source versus closed source, model A versus model B, or even who has the most GPUs. Those are surface-level skirmishes.
The true battle is between two competing economic philosophies — two radically different answers to a single, world-shaping question:
Should artificial intelligence be a premium product, or a public utility?
How each side answers that question will determine not just which country “wins” AI, but what kind of economy — and society — the rest of the world inherits.
The Premium Product Model: AI as a Luxury Good
In the United States, AI has been built from the ground up as a profit-maximizing asset. The logic is straightforward: frontier AI is extraordinarily expensive to develop, so it should be monetized accordingly. The result is a familiar playbook borrowed from luxury goods, software-as-a-service, and pharmaceutical pricing.
OpenAI’s $20-per-month ChatGPT Plus plan set the template. Anthropic followed with a similar subscription. Even Google, which has the resources to give away Gemini for free, quickly introduced a premium tier. These are not one-off experiments. They are deliberate pricing strategies designed to maximize revenue per user and create predictable, recurring income streams.
Behind the subscriptions lies an even more consequential choice: keeping the most powerful models closed. When OpenAI or Google trains a new frontier model, it does not release the weights. It places the model behind an API, charges per token, and controls access. This is not a technical necessity; it is a business strategy. The goal is to treat intelligence itself as a scarce, proprietary resource — something only the well-funded can fully access.
And the investments justify the exclusivity. US tech giants have committed hundreds of billions of dollars to building vast data centers, each consuming enough electricity to power a small city. Microsoft alone plans to spend $80 billion on AI infrastructure this fiscal year. When you have sunk that much capital, you do not give away the output for free. You charge monopoly prices for as long as the market will bear them.
This model has produced breathtaking innovation. The US remains the undisputed leader in frontier AI research, from reasoning breakthroughs to multimodal integration. But it has also produced something else: a tiered, stratified AI economy where the best models are reserved for the highest bidders.
Consider the math. A small manufacturing firm in Ohio with 50 employees cannot afford to fine-tune GPT-5 for its specific production line. A rural school district cannot buy API access for every student. A startup in Jakarta cannot compete for the same AI talent as Google or Meta. In the US model, AI becomes a force multiplier for those who already have capital — and a competitive moat against those who do not.
Even the subscription model’s viability is in question. Despite charging $20 per month, OpenAI reportedly lost $5 billion in 2024. The unit economics are brutal: training and inference costs remain high, and most users do not generate enough usage to cover their share of the infrastructure. Casual subscribers are effectively subsidizing the heavy users. In a functioning market, prices would rise. Instead, venture capital and strategic losses are propping up a model that may never be profitable without dramatic cost reductions — the very cost reductions that Chinese competitors are forcing.
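The cross-subsidy logic can be sketched in a few lines. Every figure below is an illustrative assumption — real provider inference costs are not public:

```python
# Illustrative break-even sketch for a flat-rate AI subscription.
# Both constants are assumptions for illustration only; actual
# provider costs and prices vary and are not disclosed.
SUBSCRIPTION_PRICE = 20.00       # dollars per user per month
COST_PER_MILLION_TOKENS = 5.00   # assumed blended inference cost, dollars

def monthly_margin(tokens_used: int) -> float:
    """Profit (or loss) on one subscriber who processes `tokens_used` tokens."""
    inference_cost = tokens_used / 1_000_000 * COST_PER_MILLION_TOKENS
    return SUBSCRIPTION_PRICE - inference_cost

# A casual user (1M tokens/month) is profitable...
print(monthly_margin(1_000_000))   # 15.0
# ...a heavy user (10M tokens/month) is a loss...
print(monthly_margin(10_000_000))  # -30.0
# ...and break-even sits at 4M tokens under these assumptions.
print(monthly_margin(4_000_000))   # 0.0
```

Under these assumed numbers, every subscriber past four million tokens a month is paid for by the ones who barely log in — which is why falling competitor prices, not rising usage, are the existential threat to the flat-rate model.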
A recent study found that 95% of corporate AI investments have so far yielded no measurable return. That is not a sign of healthy competition. It is a sign of an industry running on hype and locked-in expectations, afraid to admit that the luxury pricing model may be fundamentally misaligned with the underlying economics of intelligence.
The Public Utility Model: AI as Infrastructure
Now look at China. The contrast could not be starker.
Chinese policymakers do not treat AI as a product to be sold. They treat it as infrastructure to be deployed — like electricity in the 1920s, railways in the 1950s, or the internet in the 1990s. The goal is not to maximize profit per user. The goal is to drive adoption so widespread and so cheap that AI becomes invisible, embedded into every factory, hospital, school, and government office.
This is not corporate philanthropy. It is a deliberate, state-coordinated industrial strategy.
In 2025, Beijing formalized the “AI+” initiative, mandating the deep integration of artificial intelligence into six priority sectors: manufacturing, agriculture, logistics, healthcare, education, and urban governance. Local governments did not just write memos. They cut checks.
The most revealing instrument is the AI voucher (suanli quan, or “compute coupons”). Municipalities across China now issue vouchers that subsidize up to 80% of cloud computing costs for small and medium enterprises. In Shenzhen, a startup can receive the equivalent of $110,000 in free compute credits just for applying. In Beijing, the cap reaches $1.1 million per company.
Then there are the “token banks” and “computing supermarkets” — experimental marketplaces where SMEs can purchase compute power in flexible increments, much as they might buy electricity from the grid. These are not technical novelties; they are institutional innovations designed to make AI a variable cost, not a capital investment.
Even the physical infrastructure is subsidized. Major data centers that use domestically produced AI chips — the very chips the US has tried to sanction — receive electricity discounts of up to 50% from provincial grids. The state is willing to absorb short-term losses to build long-term industrial capacity.
But the most consequential force in China’s utility model is not state spending. It is engineering born of necessity.
Because US export controls have restricted access to Nvidia’s most advanced chips, Chinese AI labs have been forced to do more with less. They have focused on algorithmic efficiency, model distillation, and inference optimization — the unglamorous work of making AI cheap. And they have succeeded beyond anyone’s expectations.
DeepSeek is the most visible example. The company’s V4 model achieves performance comparable to GPT-4 on multiple benchmarks, yet its API pricing is up to 370 times cheaper than OpenAI’s equivalent offering. That is not a rounding error. It is a different economic universe.
When inference costs fall by two orders of magnitude, the entire value chain transforms. Applications that were previously impossible — real-time translation in rural clinics, predictive maintenance for small workshops, personalized tutoring for every student — become not just feasible but trivial. That is what infrastructure does. It lowers the barrier until the thing itself disappears into the background.
The Evidence: China’s Utility Model Is Already Winning Market Share
This is not a theoretical argument about future capabilities. The shift is already visible in global token flows.
According to data from the API gateway OpenRouter, Chinese models accounted for 36% of global token consumption by mid-March 2026, surpassing US models in raw volume. In the open-weight ecosystem, Chinese models captured 17% of global downloads, narrowly beating the US share of 15.8%.
Those numbers matter because tokens are the currency of AI usage. Every API call, every inference, every interaction generates tokens. When Chinese models process more tokens than American models, it means developers and businesses around the world are choosing Chinese infrastructure — not because of nationalism, but because it is cheaper and often good enough.
And who is making that choice? Not just state-owned enterprises or Belt and Road partners. Silicon Valley developers. Airbnb’s engineering team has reportedly integrated DeepSeek models for internal tooling. Microsoft and Amazon have begun listing Chinese models on their cloud marketplaces. When the most capitalist companies in the world start buying Chinese AI, the “national security” framing becomes harder to sustain.
The low-cost Chinese models are also exerting direct downward pressure on US prices. OpenAI has quietly released cheaper, trimmed-down versions of GPT-4. Google reduced its API pricing twice in 2025. Anthropic introduced a “pro” tier that offers discounts for high-volume customers. These are market responses to competition — exactly the dynamic the video describes as “competing with abundance.”
The Critique: Both Models Have Deep Flaws
A balanced assessment requires acknowledging the weaknesses on both sides. The utility model is not a utopia, and the luxury model is not a dystopia.
China’s weaknesses:
First, governance and control. China’s open-weight models may be freely downloadable, but they are trained within a stringent content moderation regime. The weights themselves are politically sanitized. There are also legitimate concerns that the data flowing through AI systems — especially in healthcare, finance, and logistics — could be accessed by state security apparatus. Infrastructure cuts both ways. A utility can be monitored as easily as it can be used.
Second, the quality gap, while narrowing, still exists. US models retain a subtle but measurable lead on complex reasoning, advanced coding, and scientific problem-solving. For many high-stakes applications — drug discovery, aerospace engineering, quantum simulation — the US frontier models remain superior. China’s utility is good enough for most things. But not for everything.
Third, government vouchers have limits. They subsidize inference and fine-tuning, not frontier training. No amount of compute coupons will pay for a 100,000-GPU cluster. If China falls permanently behind on the next architectural breakthrough — something beyond transformers — the utility model could become a trap, optimizing a generation-old paradigm.
America’s weaknesses:
The US model’s flaws are equally severe. The premium subscription approach is fundamentally exclusionary, creating a world where the most advanced cognitive tools are available only to the wealthy. That is not just a moral problem; it is an economic inefficiency. By keeping AI expensive, US firms prevent it from being deployed in the millions of low-margin, high-volume applications that drive real productivity growth.
The profitability problem is real. If OpenAI, Anthropic, and others cannot make money at current prices — and their financial disclosures suggest they cannot — they face a choice: raise prices further, squeezing out even more users, or accept perpetual losses, relying on parent companies or sovereign wealth to keep them afloat. Neither is sustainable.
And then there is the energy problem. US data centers are projected to consume 9% of national electricity by 2030, up from 4% today. AI training runs already require dedicated power plants. In Northern Virginia, the world’s largest data center market, Dominion Energy has paused new connections because the grid cannot keep up. The luxury model’s physical footprint is becoming untenable.
The Next Five Years: Compute Dumping and the Global South
The video’s most provocative prediction is “AI dumping” or “compute dumping” — China offering AI services globally at a fraction of Western prices, much as China’s manufacturing sector undercut global competitors in solar panels, batteries, and steel.
This is not speculation. It is already happening in pilot form. Chinese cloud providers like Alibaba Cloud and Tencent Cloud now offer inference services in Southeast Asia, Africa, and Latin America at 30-50% of AWS or Azure prices. They do not yet match the frontier on absolute performance, but for 80% of use cases — customer service, document processing, basic translation — the gap is imperceptible.
The real impact will be in the Global South. A hospital in Nigeria cannot afford $20 per user per month for a chatbot. But it could afford one cent per thousand tokens. A cooperative of coffee farmers in Vietnam cannot fine-tune GPT-5. But it could use a distilled Chinese model running on a local server. When AI becomes cheap enough, it spreads the way mobile phones spread — not because governments mandate it, but because the economics are irresistible.
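A back-of-envelope comparison makes the gap concrete. The prices, headcount, and usage figures below are illustrative assumptions, not actual quotes from any provider:

```python
# Illustrative cost comparison: seat-based subscription pricing vs.
# token-metered pricing for a hypothetical 200-person organization.
# All numbers are assumptions chosen to mirror the figures in the text.
SEAT_PRICE = 20.00        # dollars per user per month (subscription model)
TOKEN_PRICE = 0.01        # dollars per 1,000 tokens (the "one cent" figure)
USERS = 200
TOKENS_PER_USER = 50_000  # assumed light monthly usage per person

subscription_cost = USERS * SEAT_PRICE                       # pay per seat
metered_cost = USERS * TOKENS_PER_USER / 1_000 * TOKEN_PRICE # pay per token

print(subscription_cost)  # 4000.0 dollars per month
print(metered_cost)       # 100.0 dollars per month
```

Under these assumptions the metered model is forty times cheaper for light, broad usage — exactly the usage pattern of a hospital, a school district, or a farming cooperative, which is why per-token pricing wins in the Global South even when the underlying model is slightly worse.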
Western firms have a deep competitive weakness here. They know how to charge monopoly prices for scarce goods. They do not know how to compete with abundance. Their entire business model — closed APIs, subscription tiers, premium access — is designed for a world of scarcity. China’s model is designed for a world of surplus.
Conclusion: The Battle Over Abundance
The video’s closing question is the right one: Should AI be a premium product or a public utility?
But the deeper implication is that this is not a choice any single country can make alone. AI is a global technology. The models trained in California are used in Bangalore. The infrastructure built in Shenzhen powers applications in São Paulo. The two models will not remain separate. They will collide, compete, and eventually hybridize.
What we are watching is not the end of the AI war. It is the beginning of a long, messy, decades-long contest over who gets to define the economic shape of machine intelligence. The US has the lead in frontier capabilities. China has the lead in deployment at scale. Neither advantage is permanent.
The winning model may turn out to be neither pure luxury nor pure utility. It may be something else entirely — a hybrid where frontier models remain expensive but distilled, specialized models become near-free, and governments use procurement and subsidy to ensure broad access.
But one thing is already clear. The chatbots and benchmarks are a sideshow. The real war is about who decides the price of intelligence itself.
And for the first time in a generation, the United States is not the only one writing the rules.
Enjoyed this piece? Subscribe to receive future deep dives on technology, geopolitics, and the future of the global economy.
Sources and further reading:
- OpenRouter token data, March 2026 (aggregated API usage statistics)
- China’s “AI+” initiative, State Council document No. 47 (2025)
- Shenzhen Municipal Compute Voucher Program, operational data 2025-2026
- DeepSeek-V4 technical report and published API pricing
- OpenAI financial disclosures, The Information, February 2026
- US Department of Energy, “Data Center Energy Use Projections 2025-2035”
- Study on corporate AI ROI, MIT Sloan Management Review, Q1 2026
