The OpenAI–Anthropic rivalry is increasingly defined by revenue density, token usage, and enterprise demand rather than raw user counts. Its new scoreboard is revenue run-rate.
Reuters reports that Anthropic’s annualized revenue has surpassed $30 billion. OpenAI, meanwhile, said in its recent fundraising materials that it is generating roughly $2 billion a month, or at least $24 billion annualized. On the surface, that looks like a lead change. The more useful read is what those numbers reveal about the shape of AI demand in 2026.
Revenue Is the New Scoreboard
Start with Anthropic’s own framing. In its April 6 announcement about expanding TPU compute capacity with Google and Broadcom, the company said run-rate revenue is now above $30 billion, up from roughly $9 billion at the end of 2025. It also said the number of business customers spending more than $1 million annually has risen above 1,000, double the figure it cited in February.
That kind of growth is hard to explain with casual consumer usage alone. It points to something more valuable: concentrated enterprise demand.
Why Tokens Matter More Than User Counts
Reuters puts a name on the real mechanism behind that growth: token-intensive coding workloads.
The report points to the popularity of Anthropic’s coding agents, including Claude Code, and notes that the key revenue metric is not the number of users, where ChatGPT still dwarfs Claude, but the volume of tokens consumed. A developer asking a model to absorb a large codebase, run tests, and iterate through a refactor can generate far more revenue than thousands of lightweight consumer prompts.
That is what makes this race more interesting than a simple user-growth contest. In frontier AI, workload mix is becoming strategy.
The model with the biggest consumer footprint may not be the one with the strongest enterprise margins. A smaller customer base can produce larger numbers if those customers are paying to burn tokens all day.
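A toy calculation makes the point concrete. The prices, user counts, and token volumes below are entirely hypothetical assumptions for illustration, not actual OpenAI or Anthropic figures; the only claim they encode is the mechanism described above, that revenue scales with tokens consumed rather than headcount.

```python
# Hypothetical illustration: token volume, not user count, drives revenue.
# All numbers here are made-up assumptions, not real pricing or usage data.

def revenue_usd(users: int, tokens_per_user: int, price_per_million: float) -> float:
    """Monthly revenue for a cohort, given average token consumption per user."""
    total_tokens = users * tokens_per_user
    return total_tokens / 1_000_000 * price_per_million

# Large consumer base, light usage: 10M users at ~5K tokens/month each.
consumer = revenue_usd(10_000_000, 5_000, price_per_million=3.0)

# Small base of coding-agent seats: 50K customers at ~500M tokens/month each
# (large-codebase ingestion, test runs, iterative refactors).
enterprise = revenue_usd(50_000, 500_000_000, price_per_million=3.0)

print(f"consumer cohort:   ${consumer:,.0f}/month")    # $150,000/month
print(f"enterprise cohort: ${enterprise:,.0f}/month")  # $75,000,000/month
```

Under these assumed numbers, a customer base 200 times smaller produces 500 times the revenue, which is the "paying to burn tokens all day" dynamic in miniature.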
The Numbers Are Not Fully Comparable
That also explains why the revenue comparison is messier than it first appears.
Reuters quotes Khosla Ventures partner Ethan Choi saying that comparing the companies’ self-reported figures is “apples to oranges.” The report says Anthropic may be counting revenue on a gross basis, without subtracting the share paid to third-party platforms, in a way OpenAI does not.
In other words, run-rate is not a clean accounting line. It is a narrative metric. It can be useful, but only if you understand what sits underneath it.
That does not make the numbers meaningless. It makes them strategic.
Compute Still Sits Under Everything
OpenAI’s own March 31 disclosure reinforces the same basic story from the other side. The company describes a flywheel in which better compute enables better products, which drives broader consumer and enterprise usage, which in turn funds more compute. It also states plainly: “We are now generating $2B in revenue per month.”
That is OpenAI’s version of the same argument. Scale matters. Monetization matters. But compute remains the underlying moat.
Zoom out, and the competitive landscape looks more constrained, not less. Reuters reports that at least 110 gigawatts of AI data center capacity is now in the planning stage through 2030. It also cites Nvidia CEO Jensen Huang’s estimate that costs could range from $60 billion to $80 billion per gigawatt. That implies roughly $6.6 trillion to $8.8 trillion in required outlay, even before additional projects are added.
Reuters also cites an estimate that available funding totals roughly $7.5 trillion when combining projected operating cash flows for Alphabet, Amazon, Meta, Microsoft, and Oracle with estimates of available debt and investments. In plain terms, the buildout math is tight.
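The back-of-the-envelope math above can be checked directly. The sketch below uses only the figures quoted in the text: 110 gigawatts of planned capacity, Huang's $60–$80 billion per gigawatt, and the roughly $7.5 trillion funding estimate.

```python
# Back-of-the-envelope check of the buildout figures quoted in the text.

capacity_gw = 110                   # planned AI data center capacity through 2030
cost_low, cost_high = 60e9, 80e9    # Huang's estimated cost per gigawatt (USD)
available_funding = 7.5e12          # cited estimate of available funding (USD)

outlay_low = capacity_gw * cost_low     # $6.6 trillion
outlay_high = capacity_gw * cost_high   # $8.8 trillion

print(f"required outlay: ${outlay_low / 1e12:.1f}T to ${outlay_high / 1e12:.1f}T")
print(f"headroom at the high end: ${(available_funding - outlay_high) / 1e12:.1f}T")
```

At the low end of Huang's estimate the cited funding covers the buildout with about $0.9 trillion to spare; at the high end it falls roughly $1.3 trillion short, which is what "the buildout math is tight" means in numbers.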
IPO Timing Makes Revenue Density More Important
That funding pressure matters for IPO narratives.
Reuters notes that SpaceX is expected to lead IPO season and cites PitchBook analyst Kyle Stanford’s warning that demand for a mega-offering could push the broader IPO reopening into 2027. In that environment, both OpenAI and Anthropic have a strong incentive to show revenue density, not just growth.
Public markets will be far less forgiving if that growth appears to be purchased entirely with capex. Investors will want evidence that frontier AI companies can turn massive compute bills into durable, high-value demand.
That is why these run-rate numbers matter. They are not just signals of growth. They are signals of whether the underlying economics can hold.
The Real Meaning of the Run-Rate Race
This is the deeper lesson inside the OpenAI–Anthropic revenue story: frontier AI is turning into a business where workload mix determines business quality.
Consumer scale still matters. Brand still matters. Distribution still matters. But the market is increasingly rewarding something more specific: enterprise demand, coding relevance, and the ability to scale compute without letting the economics collapse under the weight of success.
None of this guarantees a stable ranking. But it does clarify what matters now. The next phase of the AI race will not be decided by who has the most users alone. It will be shaped by who can convert the most valuable workloads into revenue while surviving the capital intensity required to serve them.
Sources: Reuters · OpenAI Funding Update · Anthropic Compute Partnership
