By Ashkan Seyedi
As artificial intelligence redefines the computing landscape, the network has become the critical backbone shaping the data center of the future. Large language model training performance is determined not only by compute resources but by the agility, capacity, and intelligence of the underlying network. The industry is witnessing the evolution from traditional, CPU-centric infrastructures toward tightly-coupled, GPU-driven, network-defined AI factories.
NVIDIA has built a comprehensive suite of networking solutions to handle the quick-burst, high-bandwidth, low-latency demands of modern AI training and inference at scale, including Spectrum-X Ethernet, NVIDIA Quantum InfiniBand, and BlueField platforms. By orchestrating compute and communication together, the NVIDIA networking portfolio lays the foundation for scalable, efficient, and resilient AI data centers, where the network is the central nervous system empowering the future of AI innovation.
In this blog, we’ll explore how NVIDIA networking innovations have enabled co-packaged optics to deliver massive power efficiency and resiliency improvements for large-scale AI factories.
In traditional enterprise data centers, Tier 1 switches are integrated within each server’s rack, allowing direct copper connections to servers and minimizing both power and component complexity. This architecture sufficed for CPU-centric workloads with modest networking demands.
In contrast, modern AI factories pioneered by NVIDIA feature ultra-dense compute racks and thousands of GPUs architected to work together on a single job. These require maximum bandwidth and minimum latency across the entire data center, leading to new topologies in which the Tier 1 switch is relocated to the end of the row. This configuration dramatically increases the distance between servers and switches, making optical networking essential. As a result, power consumption and the number of optical components rise significantly, with optics now required for both NIC-to-switch and switch-to-switch connections.
This evolution, illustrated in Figure 1 below, reflects the substantial shift in topology and technology needed to meet the high-bandwidth, low-latency requirements of large-scale AI workloads. It fundamentally reshapes the physical and energy profile of the data center.
Traditional network switches that utilize pluggable transceivers rely on multiple electrical interfaces. In these architectures, the data signal must traverse a long electrical path from the switch ASIC through the PCB and connectors, and finally into the external transceiver, before being converted to an optical signal. This segmented journey incurs substantial electrical loss, up to 22 dB for 200 gigabit-per-second channels, as illustrated in Figure 2 below, which amplifies the need for complex digital signal processing and multiple active components.
The result is a higher power draw (often 30W per interface), increased heat output, and a proliferation of potential failure points. The abundance of discrete modules and connections not only drives up system power and component count but directly undermines link reliability, creating ongoing operational challenges as AI deployments scale. Typical power consumption of components is shown below in Figure 3.
In contrast, switches with co-packaged optics (CPO) integrate the electro-optical conversion directly onto the switch package. Fiber connects directly with the optical engine that sits beside the ASIC, reducing electrical loss to only ~4 dB and slashing power use to as low as 9W. By streamlining the signal path and eliminating unnecessary interfaces, this design dramatically improves signal integrity, reliability, and energy efficiency. This is precisely what’s required for high-density, high-performance AI factories.
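To put the quoted figures in perspective, the sketch below converts the article's loss budgets (22 dB for the pluggable path, ~4 dB for CPO) into linear power ratios and computes the per-interface power reduction (30 W vs. 9 W). The numbers come from the article; the code itself is just illustrative arithmetic, not an NVIDIA tool.

```python
def db_to_linear(db: float) -> float:
    """Convert a loss in decibels to a linear power ratio (P_out / P_in)."""
    return 10 ** (-db / 10)

# Loss budgets quoted in the article for a 200 Gb/s channel:
pluggable_loss_db = 22.0   # ASIC -> PCB -> connector -> external transceiver
cpo_loss_db = 4.0          # ASIC -> adjacent co-packaged optical engine

print(f"Pluggable path: {db_to_linear(pluggable_loss_db):.4f} of signal power survives")
print(f"CPO path:       {db_to_linear(cpo_loss_db):.4f} of signal power survives")

# Per-interface power draw quoted in the article:
pluggable_watts, cpo_watts = 30.0, 9.0
print(f"Per-interface power reduction: {pluggable_watts / cpo_watts:.1f}x")
```

A 22 dB budget means less than 1% of the launched electrical signal power survives the trip, which is why pluggable designs lean on heavy DSP; at ~4 dB, roughly 40% survives, so the CPO path needs far less active compensation.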
NVIDIA has designed CPO-based systems to meet unprecedented AI factory demands. By integrating optical engines directly onto the switch ASIC, the new NVIDIA Quantum-X Photonics and Spectrum-X Photonics (shown in Figure 4 below) will replace legacy pluggable transceivers. The new offerings streamline the signal path for enhanced performance, efficiency, and reliability. These innovations not only set new records in bandwidth and port density but also fundamentally alter the economics and physical design of AI data centers.
With the introduction of NVIDIA Quantum-X InfiniBand Photonics, NVIDIA propels InfiniBand switch technology to new heights.
NVIDIA Quantum-X leverages integrated silicon photonics to achieve unmatched bandwidth, ultra-low latency, and operational resilience. The co-packaged optical design reduces power consumption, improves reliability, enables rapid deployment, and supports the massive interconnect requirements of agentic AI workloads.
Expanding the CPO revolution into Ethernet, NVIDIA Spectrum-X Photonics switches are specifically designed for generative AI and large-scale LLM training and inference tasks. The new Spectrum-X Photonics offerings include two liquid-cooled chassis based on the Spectrum-6 ASIC.
Both platforms are powered by NVIDIA silicon photonics, drastically reducing the number of discrete components and electrical interfaces. The result is a 3.5x leap in power efficiency compared to previous architectures, and a 10x improvement in resiliency achieved by reducing the overall number of optical components that can fail. Technicians benefit from improved serviceability, while AI operators see 1.3x faster time-to-turn-on and improved time-to-first-token.
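The resiliency claim follows from series reliability: a link works only if every component along it works, so fewer components means a higher probability the link stays up. The sketch below models this under the simplifying assumption of independent failures; the component counts and failure probability are hypothetical, chosen only to illustrate the trend, not taken from NVIDIA data.

```python
def link_availability(per_component_fail_prob: float, n_components: int) -> float:
    """Probability a series link works, assuming each component fails
    independently with the given probability (a simplifying assumption)."""
    return (1 - per_component_fail_prob) ** n_components

# Hypothetical counts: a pluggable-based link traversing 12 discrete
# electrical/optical components vs. a CPO link traversing 4.
p_fail = 0.001
for name, n in [("pluggable (12 components)", 12), ("CPO (4 components)", 4)]:
    avail = link_availability(p_fail, n)
    print(f"{name}: availability = {avail:.5f}, failure prob = {1 - avail:.5f}")
```

For small per-component failure probabilities, the link failure probability scales roughly linearly with component count, so cutting the count by a given factor cuts failures by about the same factor.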
NVIDIA’s co-packaged optics are enabled by a robust ecosystem of partners. This cross-industry collaboration ensures not only technical performance but also manufacturing scalability and reliability needed for large-scale global AI infrastructure deployments.
The advantages of co-packaged optics are clear.
The switch systems achieve industry-leading bandwidth (up to 409.6 Tb/s and 512 ports at 800 Gb/s), all supported by efficient liquid cooling to handle dense, high-wattage environments. Figure 5 (below) shows NVIDIA Quantum-X Photonics Q3450, and two variants of Spectrum-X Photonics—single-ASIC SN6810 and quad-ASIC SN6800 with integrated fiber shuffle.
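The headline bandwidth figure is simply the port count times the per-port rate; the quick check below confirms the article's numbers are self-consistent. This is plain arithmetic, not a vendor calculation.

```python
ports = 512            # ports per switch system, as quoted in the article
gbps_per_port = 800    # Gb/s per port

total_tbps = ports * gbps_per_port / 1000  # Gb/s -> Tb/s
print(f"{ports} ports x {gbps_per_port} Gb/s = {total_tbps} Tb/s")  # 409.6 Tb/s
```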
Together, these products underpin a transformation in network architecture, meeting the insatiable bandwidth and ultra-low latency requirements posed by AI workloads. The combination of cutting-edge optical components and robust system-integration partners creates a fabric optimized for present and future scaling needs. As hyperscale data centers demand ever-faster deployment and bulletproof reliability, CPO moves from innovation to necessity.
NVIDIA Quantum-X and Spectrum-X Photonics switches signal a shift to networks purpose-built for the relentless demands of AI at scale. By eliminating bottlenecks of traditional electrical and pluggable architectures, these co-packaged optics systems deliver the performance, power efficiency, and reliability required by modern AI factories. With commercial availability for NVIDIA Quantum-X InfiniBand switches set for early 2026 and Spectrum-X Ethernet switches in the second half of 2026, NVIDIA is setting the standard for optimized networking in the age of agentic AI.
Stay tuned for the second part of this blog, where we take a look under the hood of these groundbreaking platforms. We’ll dive into the architecture and operation of the silicon photonics engines powering NVIDIA Quantum-X Photonics and Spectrum-X Photonics, shedding light on the core innovations and engineering breakthroughs that make next-generation optical connectivity possible. From advances in on-chip integration to novel modulation schemes, the next installment will unravel the technologies that set these photonics engines apart in the world of AI networking.
To learn more about NVIDIA Photonics, visit this page.