Meta, Google TPUs, and What Actually Changed for Nvidia
Sponsored by edgeful

Happy Saturday, Crew!
Hope you all are having a nice long holiday weekend. Today I will cover what Meta’s reported TPU talks with Google actually mean for Nvidia, Google, and the broader AI infra trade.
Meta’s reported discussions with Google over TPUs are basically a snapshot of where AI infrastructure sits right now. Nvidia is still at the center of the stack, but the biggest buyers are clearly testing alternatives and using that in negotiations.
According to The Information and follow-up coverage, Meta is in talks to spend billions of dollars on Google’s tensor processing units for its own data centers starting around 2027, and could begin renting TPUs through Google Cloud as soon as next year. Alphabet ripped higher on the headlines, while Nvidia gave back a few percent as investors tried to reprice the risk that one of Nvidia’s largest customers might diversify its accelerator mix.

Sponsored
Sometimes a setup looks great, but you want to know if the odds are actually on your side. Edgeful helps you check the history behind the move without overcomplicating your process.
You can quickly see how similar price patterns played out in the past, how often breakouts held, or whether volume and trend behavior support the idea. It works across stocks, futures, forex, and crypto.
It is not about guessing the future. It is about using simple stats to decide if a trade makes sense or if patience is better.
Heads up for my readers: Edgeful doesn't usually offer free trials, but they've hooked us up with a 7-day one. If it sounds useful, check it out via the link below—no strings attached, and sometimes passing is the best call.
Once the Meta and TPU headlines hit, Nvidia put out a carefully worded public note saying it is “delighted” with Google’s success and reminded everyone that Nvidia is still the only platform that can run essentially every major AI model. The timing, right after the Meta coverage, made it obvious that management understood why the stock moved.
This sits on top of a second trend that has been building all year. Google’s TPU stack is moving from an internal advantage to a real external product. Anthropic has already signed a multibillion dollar deal that gives it access to more than one gigawatt of TPU compute, on the order of a million TPU devices, starting in 2026. Anthropic has been clear that it is doing this for price performance and efficiency, not as a side science project.
Google also formally launched its seventh generation TPU, Ironwood, and is pitching it as its most scalable and efficient accelerator so far. It is already used internally for models like Gemini and is aimed at large scale training and inference. On the sell side, the Anthropic contract is being treated as the template for how Google could onboard more big external customers over the next few years.
A lot of people still assume Google TPUs are miles behind Nvidia’s Blackwell stack. SemiAnalysis basically argues that is not the case once you look at real numbers. On paper, Nvidia still shows higher peak FLOPs, especially for GB200 and GB300. But once you haircut those headline numbers for what you can actually sustain under real power and DVFS limits, the gap shrinks a lot. In stress tests, Hopper only reached around eighty percent of peak, Blackwell landed in the seventies, and AMD’s MI300 series in the fifties to sixties. That is why spec sheets overstate the distance.
TPUv7 closes most of the throughput and memory gap to GB200, then leans on total cost of ownership. From Google’s side, the all in cost per chip at rack scale is meaningfully lower than a GB200 or GB300 system because they are not paying Nvidia’s full system margin. SemiAnalysis models that for a large, sophisticated user like Anthropic, TPUs can deliver roughly thirty to fifty percent lower cost per effective PFLOP than GB300 if you put in the compiler work and reach solid model FLOP utilization. Even if you only get around half the utilization of a GB300, you can still end up roughly even on cost.
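The comparison above boils down to simple arithmetic: divide all-in system cost by sustained (utilization-adjusted) compute. Here is a minimal sketch of that math, with purely illustrative placeholder figures, not actual vendor pricing or chip specs:

```python
# Illustrative sketch of the cost-per-effective-compute comparison.
# All figures are hypothetical placeholders, not real vendor pricing or specs.

def cost_per_effective_pflop(system_cost, peak_pflops, utilization):
    """Dollars per sustained PFLOP: haircut peak compute by achieved utilization."""
    return system_cost / (peak_pflops * utilization)

# Hypothetical GB300-class rack: higher peak, full Nvidia system margin priced in.
gb300 = cost_per_effective_pflop(system_cost=100.0, peak_pflops=10.0, utilization=0.6)

# Hypothetical TPU rack: somewhat lower peak, meaningfully lower all-in cost.
tpu_tuned = cost_per_effective_pflop(system_cost=45.0, peak_pflops=9.0, utilization=0.6)
tpu_untuned = cost_per_effective_pflop(system_cost=45.0, peak_pflops=9.0, utilization=0.3)

print(gb300)        # ~16.7 per sustained PFLOP
print(tpu_tuned)    # ~8.3, roughly 50% lower when utilization is matched
print(tpu_untuned)  # ~16.7, roughly even at half the utilization
```

With these toy numbers, a well-tuned TPU fleet lands at the favorable end of the thirty to fifty percent range, and dropping to half the utilization brings it back to parity with the GPU baseline, which is the shape of the argument SemiAnalysis is making.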
The Anthropic deal is the proof point. Roughly one million TPUv7s tied to a one gigawatt plus buildout is not a side bet. It is a top lab deciding that TPUs are good enough on performance and better on economics for frontier training and high volume inference. SemiAnalysis also lays out how that credible alternative has already fed back into Nvidia pricing. OpenAI has not even deployed TPUs yet, but just having Google in the mix has helped them negotiate a lower effective rate on their Nvidia fleet.
On Nvidia’s side, the other big story is how it is using its balance sheet. In September, Nvidia outlined a framework to invest up to roughly one hundred billion dollars in OpenAI, with the size of the check linked to how many Nvidia systems OpenAI commits to for future data centers. Separately, Nvidia and Microsoft agreed to provide up to fifteen billion dollars in funding to Anthropic, alongside a commitment by Anthropic to spend about thirty billion dollars on Azure compute that is powered by Nvidia hardware.
Those structures have triggered a steady run of “circular financing” criticism. Skeptics argue Nvidia is effectively funding orders for its own chips through equity checks and creative partnerships. Nvidia has pushed back in memos and public comments, saying it is not running vendor financing schemes and that these deals are about long term alignment with key customers rather than artificially inflating demand.
Put together, you have three things happening at once. Nvidia is still the default accelerator vendor for most AI workloads and is tying itself even closer to the big labs through investment plus supply deals. Google is scaling TPUs from an internal tool to a merchant product, with Anthropic already on board and Meta now openly evaluating a similar path. At the same time, investors and regulators are paying more attention to how tightly the money flows are looped between chipmakers, hyperscalers, and model labs.
Inside that setup, the Meta and Google leak looks less like random noise and more like a clean way to shift the bargaining landscape.
A reasonable read is that making the TPU talks public gives Meta extra leverage in its next round of GPU negotiations. If Nvidia believes Meta is prepared to run a meaningful slice of future models on Google silicon, that strengthens Meta’s hand when it asks for better economics, better allocation, or even more direct capital support. Meta can try to move part of its AI infrastructure bill off its own cash flow and onto a mix of Nvidia investment, improved pricing, and Google capacity, while also covering some of its long term compute gap if the TPU rollout hits the timelines being discussed.
It is also a low risk move for Meta. Even if it stays mostly on Nvidia for core training, having a credible alternative that has already been validated at Anthropic gives Meta options on timing, architecture, and cost. And if a TPU deployment does show up at scale in Meta data centers around 2027, that would confirm TPUs as a true second source for other large buyers, not just an internal Google project plus one Anthropic outlier.
Wall Street Isn’t Warning You, But This Chart Might
Vanguard just projected public markets may return only 5% annually over the next decade. In a 2024 report, Goldman Sachs forecast that the S&P 500 may return just 3% annually over the same time frame—stats that put current valuations in the 7th percentile of history.
Translation? The gains we’ve seen over the past few years might not continue for quite a while.
Meanwhile, another asset class—almost entirely uncorrelated to the S&P 500 historically—has outpaced it overall for decades (1995-2024), according to Masterworks data.
Masterworks lets everyday investors invest in shares of multimillion-dollar artworks by legends like Banksy, Basquiat, and Picasso.
And they’re not just buying. They’re exiting—with net annualized returns like 17.6%, 17.8%, and 21.5% among their 23 sales.*
Wall Street won’t talk about this. But the wealthy already are. Shares in new offerings can sell quickly but…
*Past performance is not indicative of future returns. Important Reg A disclosures: masterworks.com/cd.
For Nvidia, the near term impact is more about perception and pricing than about immediate unit loss. The company still dominates AI accelerator shipments, still has the most mature software ecosystem, and still sits at the center of most public AI infra roadmaps. That does not change off one potential Meta contract. What does change is that at least two large customers, Anthropic and now potentially Meta, have shown they are willing to institutionalize a non Nvidia rail in exchange for better economics and diversification.
For Google and its partners, the Meta discussions land on top of the Anthropic contract and signal that TPUs are now a strategic product line in their own right. Broadcom, which co designs and manufactures the chips, is already being flagged as a beneficiary of rising TPU orders into 2026 and beyond.
So the takeaway for this weekend is simple. This is not a clean winner versus loser story. Nvidia remains the primary supplier of AI compute. Google is turning TPUs into a serious alternative for a small group of very large, engineering heavy customers. Meta is using that alternative to improve its own position and potentially reduce how much of the AI bill it has to pay purely from its own cash.

