Sponsored by edgeful

Happy Wednesday!
Nvidia held its annual GTC developers' conference in San Jose this week with more than 30,000 attendees. CEO Jensen Huang delivered a keynote on Monday, followed by technical sessions and a financial analyst Q&A on Tuesday. Here's a breakdown of what was announced, what it means, and what's worth paying attention to.
Groq Integration Is Moving Fast
The biggest substantive news was the progress on Groq, the AI inference startup Nvidia effectively acquired in late December for roughly $20 billion through a combination of key hires and technology licensing.
Nvidia has already designed a new server rack called Groq 3 LPX, which integrates Groq's inference chips with Nvidia's Vera Rubin AI servers. It's expected to ship in the second half of this year, likely by Q3. That's a three-month turnaround from deal close to product design — notably fast for hardware.
The technical rationale is straightforward. Groq's chips were designed from the ground up for inference using a compiler-scheduled architecture, meaning compute and data arrive simultaneously. That makes them well-suited for workloads where low latency matters — coding assistants, voice interfaces, real-time applications. Vera Rubin handles broader, heavier compute. The two are complementary.
The numbers Jensen shared: Vera Rubin alone maxes out around 400 tokens per second. With Groq integrated, the system reaches up to 1,000 tokens per second. That speed difference enables customers to charge more for latency-sensitive applications, which Jensen said roughly doubles the revenue potential on those racks. He estimated Groq would make up about 25% of a typical data center deployment, with Vera Rubin remaining the core.
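To make the rack economics concrete, here's a minimal sketch of the underlying arithmetic. The 400 and 1,000 tokens-per-second figures come from the keynote; the per-million-token prices are hypothetical placeholders, used only to show how throughput and pricing multiply into revenue.

```python
# Back-of-envelope on the keynote throughput figures. Token prices here
# are hypothetical assumptions, not disclosed economics.

def rack_revenue_per_hour(tokens_per_sec: float, price_per_million_tokens: float) -> float:
    """Revenue a rack earns per hour at a given throughput and token price."""
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / 1_000_000 * price_per_million_tokens

# Vera Rubin alone: 400 tok/s at an assumed $2.00 per million tokens.
baseline = rack_revenue_per_hour(400, 2.00)
# Integrated system: 2.5x the throughput, plus an assumed latency premium
# on price for real-time workloads.
with_groq = rack_revenue_per_hour(1000, 2.50)

print(f"baseline:  ${baseline:.2f}/hr")
print(f"with Groq: ${with_groq:.2f}/hr ({with_groq / baseline:.1f}x)")
```

The point of the sketch is just that throughput and price are multiplicative levers: faster racks serve more tokens, and latency-sensitive tokens can be priced higher, which is the mechanism behind Jensen's revenue claim.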
Groq founder Jonathan Ross, now at Nvidia as Chief Software Architect, described the origin of the deal. Groq's COO Sunny Madra reached out to Jensen early in 2025 about connecting to NVLink. They got a cross-chip prototype working. Jensen saw it, called three days later proposing deeper collaboration, and the deal closed within three weeks. Ross started at Nvidia on Christmas Day.
During the analyst Q&A, management indicated Groq could be a material revenue contributor — with the dollar value of a Groq rack potentially matching the Vera Rubin rack it accompanies. Stifel estimated the Groq opportunity could add up to $150 billion to Nvidia's cumulative order pipeline over the relevant period.
Sponsored
Sometimes a setup looks clean, but you still want to know if the odds have actually been there before. Edgeful lets you sanity-check the history behind a move without turning your process into a science project.
You can see how similar price patterns played out in the past, how often breakouts held, and whether volume and trend behavior line up with the idea. It works across stocks, futures, forex, and crypto.
It’s not about predicting the future. It’s about using simple stats to decide if a trade makes sense or if waiting is the smarter move.

The $1 Trillion Order Visibility Figure
Jensen updated last year's order visibility disclosure. At GTC 2025, he said Nvidia had $500 billion in order visibility for Blackwell and Rubin across 2025–2026. This year, the number is over $1 trillion for Blackwell and Rubin across 2025–2027.
The stock initially spiked, then reversed. Some investors argued the number wasn't far above existing consensus estimates for data center revenue over the same period. But there are a few details worth noting.
First, Nvidia's CFO Colette Kress confirmed to Bernstein analyst Stacy Rasgon that the $1 trillion includes only Blackwell and Rubin — and associated networking. It does not include Groq LPUs, CPX products, standalone Vera CPUs, or Rubin Ultra. Those are all separate and additive.
Second, the figure is a current snapshot with seven quarters remaining before the end of calendar year 2027. Last year's $500 billion figure was similarly a point-in-time number that continued growing. Jensen's slide noted it was "still growing."
Third, Jensen indicated during the Q&A that the Vera Rubin generation represents a roughly 50% larger opportunity than Grace Blackwell on a generation-over-generation basis, given significant architectural changes.
Rosenblatt raised its price target to $325, modeling $550 billion in fiscal year 2028 revenue. Wolfe Research noted that even under conservative interpretations, the math suggests meaningful upside to calendar year 2027 consensus.
Inference Demand
A consistent theme across the conference — in keynotes, technical sessions, and informal conversations — was that inference compute demand is growing rapidly.
Jensen described it as "off the charts," driven by applications like coding assistants that consume tokens at much higher rates than simple chat interfaces. The pattern makes sense: agentic AI tools don't just respond to a single prompt. They execute multi-step tasks, iterating through chains of model calls, each one consuming tokens.
Ross offered a practical example from inside Nvidia. Some engineers on the LPU team have connected coding assistants (either Codex or Claude Code) to their phones and now direct the AI entirely by voice. They're no longer typing code — they speak instructions and the AI handles implementation. If that workflow scales, token consumption per developer goes up substantially.
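The gap between a single chat turn and an agentic workflow can be illustrated with a toy model. Every number below is a made-up assumption, chosen only to show the shape of the multiplier, not to estimate any real workload.

```python
# Toy model of token consumption: one chat turn vs. a multi-step agent.
# All figures are hypothetical; only the structure of the math matters.

def chat_tokens(prompt: int = 200, response: int = 500) -> int:
    """A single prompt and a single response."""
    return prompt + response

def agent_tokens(steps: int = 20, context: int = 4_000, output: int = 800) -> int:
    """An agent re-reads a growing context and emits output at every step."""
    total = 0
    for step in range(steps):
        total += context + step * output  # context grows as prior outputs accumulate
        total += output                   # the step's own output
    return total

print(chat_tokens())   # 700 tokens
print(agent_tokens())  # 248,000 tokens -- hundreds of chat turns' worth
```

The key structural point: because each step re-reads the accumulated context, token use grows faster than the step count alone, which is why agentic tools consume tokens at much higher rates than chat.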
Jensen also spoke positively about OpenClaw, an open-source tool for autonomous computer use, calling it "the next ChatGPT" in a CNBC interview. Nvidia launched its own tooling for OpenClaw and autonomous agents during the conference. Chinese AI stocks rallied on the comments, with MiniMax and Zhipu surging 16% and 10% respectively.
A Shift in How Nvidia Talks About Its Business
One of the more notable things from the analyst sessions was Nvidia's push to reframe how its business is understood. Management's message, as Stifel's Ruben Roy described it, was a shift from "chips and sockets" to "manufacturing and tokenomics."
The framing: a data center is a factory. Its product is tokens. Nvidia's value proposition is improving the efficiency of that factory — more tokens per second per watt. If the company keeps delivering on that metric, average selling prices can continue rising because customers are earning more from each unit of compute.
It's a framework designed to address margin and pricing sustainability concerns. Whether it holds up depends on whether inference demand continues growing and whether Nvidia maintains its performance lead. But it's a deliberate repositioning worth tracking.
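The factory metric itself is easy to state precisely: tokens per second per watt. A minimal sketch of it, using placeholder throughput and power figures rather than Nvidia disclosures:

```python
# "Token factory" efficiency: tokens per second delivered per watt consumed.
# The throughput and power numbers below are placeholders, not disclosures.

def factory_efficiency(tokens_per_sec: float, watts: float) -> float:
    """Tokens per second per watt -- the metric behind the factory framing."""
    return tokens_per_sec / watts

gen_a = factory_efficiency(400, 120_000)    # hypothetical rack: 400 tok/s at 120 kW
gen_b = factory_efficiency(1_000, 150_000)  # next gen: more tokens for modestly more power

print(f"gen A: {gen_a * 1000:.2f} tok/s per kW")
print(f"gen B: {gen_b * 1000:.2f} tok/s per kW ({gen_b / gen_a:.1f}x better)")
```

If each generation raises this ratio, customers earn more per unit of electricity, which is the argument for why average selling prices can keep rising.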
China Reopens
Nvidia announced it's restarting H200 chip manufacturing for the Chinese market. Beijing has approved sales to multiple Chinese customers, and Nvidia has received purchase orders. Jensen said this changed materially in the last two to three weeks, and the supply chain is ramping back up.
China once accounted for 13% of Nvidia's total revenue before export restrictions created a prolonged disruption. The company is also reportedly preparing a China-compliant version of the Groq chip, suggesting this is intended as a sustained commercial reopening rather than a one-off.
Jensen did note at a press conference that moving 40% of Taiwan's chip production to the U.S. would be "very difficult" — a grounded acknowledgment given Nvidia's prior $500 billion U.S. manufacturing commitment.
Copper and Optical Interconnects
Jensen confirmed that both copper and optical cabling will be supported for current-generation and next-generation (Feynman) AI servers. Feynman will also include co-packaged optics.
The takeaway from Barclays' Tom O'Malley: copper remains the in-rack scale-up connection through Feynman, with co-packaged optics handling switch-to-switch connections across racks. Copper has a longer runway than many in the market expected.
Despite this being broadly neutral-to-positive news for copper interconnect companies, Credo Technology still fell 11% on the day. Astera Labs and Amphenol each declined about 1%. Among optical names, Lumentum rose 3%, Coherent fell 1%, and Corning declined 2%.
Other Notes
Jensen said Nvidia plans to return 50% of its free cash flow through buybacks and dividends in 2026.
On risk, he offered a simple framework rather than naming a specific threat: "Don't get fired by your customers. Don't get bored so the company stops performing. Don't go out of business." On the culture front, multiple sources confirmed Nvidia employees are still working long hours, and Jensen continues reading escalation emails from lower-level staff — a practice he's maintained for years.
He pushed back on the idea that AI will displace software tool companies like Synopsys and Cadence, arguing that agentic AI engineers will use the same design tools humans use. Chip design requires certainty — you can't ship chips that "probably work" — and agent proliferation is more likely to expand tool usage than replace it.

