Marvell Eyeing Connectivity as the Next Big Thing in AI

At this year’s Marvell Industry Analyst Day, held on Dec. 9, Marvell Technology president and chief operating officer Chris Koopmans called the company an end-to-end connectivity company. That underscores how connectivity is rapidly emerging as the next big thing in AI.

In his opening address, Koopmans said that the industry has entered an era where the primary bottleneck is no longer compute capacity but connectivity. Why? Because AI compute is now abundant; moreover, modern AI workloads must be partitioned across many processors. As a result, high-speed I/O, SerDes, die-to-die links, memory attach and optical interconnects become essential to overall system performance.

That, in turn, makes advanced packaging, chiplets, XPU attach silicon, retimers, active electrical cables and long-haul optical engines fundamental to scaling AI clusters. Koopmans added that the Celestial AI acquisition bolsters Marvell’s ability to deliver dense optical chiplets optimized for XPU fabrics.

Marvell is positioning itself to support connectivity from chip-level communication to data-center-scale optical transport, according to Koopmans. (Source: Marvell)

He also talked about AI systems moving beyond today’s directly connected GPU/XPU clusters; here, Marvell would support switching over both Ethernet and UALink. However, as Ron Westfall, VP and practice lead for networking and infrastructure at HyperFRAME Research, noted, CXL was one of the day’s major highlights.

CXL compute accelerators

CXL, and Marvell’s commitment to this cache-coherent interconnect standard, was a prominent feature of the company’s Industry Analyst Day. CXL addresses bandwidth bottlenecks in data centers by enabling memory pooling and sharing.

Marvell’s Will Chu outlined CXL-driven performance gains for AI and data-center workloads. (Source: Marvell)

CXL is backed by hyperscalers like Google, Meta and Microsoft; processor vendors like Arm, AMD, Intel and Nvidia; and memory suppliers, such as SK hynix and Samsung. The adoption of CXL is rapidly accelerating because it directly addresses the critical bottleneck of memory bandwidth and capacity for data-intensive AI workloads. Its ability to scale compute and memory independently via disaggregation maximizes performance in data center environments.

Earlier, at OCP 2025, Marvell demonstrated substantial performance gains achievable with its CXL compute accelerators in collaboration with Liqid and Samsung Electronics. The Liqid EX5410C, a CXL memory pooling and sharing appliance, scaled up to 20 TB of additional memory using Liqid’s software to manage the CXL fabric.

The system, built around a Marvell Structera A board, was jointly designed by Marvell and Samsung. Marvell’s Structera A memory accelerator features 16 Arm Neoverse cores and can support up to 4 TB of additional DDR5 memory. Marvell claims it achieves a 5.3× performance increase in vector search queries over a standard CXL memory pooling device.
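As a rough sanity check, the quoted capacities line up if the appliance aggregates several Structera A devices; the board count below is our illustrative assumption, not a figure Marvell or Liqid disclosed:

```python
# Back-of-envelope check: how many 4-TB Structera A accelerators would be
# needed to reach the EX5410C's quoted 20 TB of pooled CXL memory.
# (The board count is an inference for illustration, not a published spec.)
TB_PER_STRUCTERA_A = 4   # max additional DDR5 per accelerator, per Marvell
POOL_CAPACITY_TB = 20    # Liqid EX5410C pooled capacity, per the demo

boards_needed = POOL_CAPACITY_TB // TB_PER_STRUCTERA_A
print(boards_needed)  # → 5
```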

Celestial AI synergy

Earlier this month, Marvell announced it would acquire Celestial AI in a deal worth nearly $3.25 billion. Celestial AI, featured in this year’s Silicon 100 report published by EE Times, develops AI computing platforms using its proprietary Photonic Fabric technology to integrate photonics directly onto silicon chips.

The deal allows Marvell to extend its connectivity portfolio to the short-reach optical domain that AI systems increasingly require as copper reaches its limits in bandwidth, latency, power and reach. Celestial AI’s Photonic Fabric chiplets integrate electrical and optical components in a compact form factor and can be co-packaged directly with XPUs and scale-up switches.

Its co-founder and CEO, Dave Lazovsky, made a surprise appearance at Marvell’s Industry Analyst Day to announce the Santa Clara, California-based startup’s design win from a top-tier hyperscaler. He told the attendees that a major hyperscaler is employing his company’s photonic connectivity in its AI processors. According to Lazovsky, this marks a large-scale transition from copper to optical interconnect inside AI designs.

Dave Lazovsky claimed engagements with hyperscalers for transitioning high-speed XPU links from copper to optics. (Source: Marvell)

He also echoed the connectivity premise advocated by Marvell executives: while the AI era is fundamentally changing data-center architecture, connectivity, not compute capacity, is the primary bottleneck. “AI workloads are evolving faster than the data-center infrastructure designed to support them,” he said.

Lazovsky listed bandwidth, latency, power and cost as the primary challenges faced by hyperscalers. He added that AI data-center connectivity fabrics demand sub-200-ns link latency and at least a 4–5× reduction in power consumption. Lazovsky then claimed that Celestial AI’s Photonic Fabric meets these requirements by delivering switch-class bandwidth of 16 Tbps per direction per chiplet, or 64 Tbps bidirectional in a single package.
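The per-chiplet and per-package figures are consistent if a package carries two chiplets; that chiplet count is our assumption for illustration, as the presentation did not spell out the package configuration:

```python
# Reconciling Celestial AI's quoted Photonic Fabric figures:
# 16 Tbps per direction per chiplet vs. 64 Tbps bidirectional per package.
# The two-chiplets-per-package count is an assumption, not a stated spec.
PER_DIRECTION_TBPS = 16                       # per chiplet, per Lazovsky
bidir_per_chiplet = PER_DIRECTION_TBPS * 2    # both directions
chiplets_assumed = 2                          # assumed chiplets per package
bidir_per_package = bidir_per_chiplet * chiplets_assumed
print(bidir_per_chiplet, bidir_per_package)   # → 32 64
```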

Besides the shift from electrical traces to optical connectivity, he also talked about Celestial AI’s optical HBM-pooling technology, which extends memory to tens of meters from compute while maintaining the latency budgets required for large-scale training workloads. This capability enables optical links to originate from the center of the die while freeing die-edge area for additional HBM capacity.

Connectivity is a strategic AI focus

While AI workloads are evolving faster than the overall data-center infrastructure buildout, compute resources like GPUs and XPUs are no longer the chokepoint. Instead, hyperscalers are struggling with bandwidth density, latency and energy efficiency. AI system connectivity, spanning bandwidth and latency, thus emerged as a clear focus at Marvell’s Industry Analyst Day.

Marvell’s planned acquisition of Celestial AI is a testament to this strategic focus in the fast-moving AI world. The startup’s platform facilitates early migration from copper to optical connectivity in AI systems. Add Marvell’s CXL near-memory accelerator to this, and the picture becomes clear.

Besides custom silicon for AI applications, Marvell is thinking big about reshaping connectivity in the AI design landscape. CXL, chiplets and optical connectivity are major parts of this AI-centric design blueprint.

From EETimes
