Tomahawk 6 delivers 102.4 Tbps of switching capacity in a single chip, doubling the bandwidth of any Ethernet switch currently available.
Broadcom has announced the general availability of its Tomahawk 6 switch series, marking a significant milestone in networking technology. The Tomahawk 6 delivers an unprecedented 102.4 Terabits per second (Tbps) of switching capacity within a single chip. This effectively doubles the bandwidth of existing Ethernet switches currently available, positioning it as a key enabler for next-generation AI infrastructure.
The Tomahawk 6 is engineered for robust scaling and energy efficiency, incorporating AI-optimized features. It targets the evolving demands of scale-up and scale-out AI networks, offering flexibility through support for both 100G/200G SerDes and co-packaged optics (CPO). The switch integrates comprehensive AI routing capabilities and interconnect options, specifically designed to support AI clusters with more than one million XPUs.
Key Benefits of the Tomahawk 6 Series:
- 102.4 Tbps of Ethernet switching capacity in a single chip
- Scale-up cluster size of 512 XPUs
- 100,000+ XPUs in a two-tier scale-out network at 200 Gbps/link (sized in the sketch after this list)
- 200G or 100G PAM4 SerDes with support for long-reach passive copper
- Option for co-packaged optics
- Cognitive Routing 2.0
- Optimized power and system efficiency for AI training and inference
- Compatibility with any NIC or XPU Ethernet endpoint
- Support for arbitrary topologies, including scale-up, Clos, rail-only, rail-optimized, and torus
- Compliance with Ultra Ethernet Consortium specifications
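The headline figures above follow directly from port-count arithmetic. The short sketch below (plain Python; the helper names are illustrative, not Broadcom tooling) shows how 102.4 Tbps maps to 512 x 200 Gbps or 1,024 x 100 Gbps ports, and how a non-blocking two-tier leaf/spine fabric built from switches of that radix reaches the advertised "100,000+ XPUs" at 200 Gbps per link.

```python
ASIC_CAPACITY_GBPS = 102_400   # 102.4 Tbps per Tomahawk 6 ASIC

def radix(port_gbps: int) -> int:
    """Ports a single ASIC can expose at a given per-port speed."""
    return ASIC_CAPACITY_GBPS // port_gbps

def two_tier_endpoints(port_gbps: int) -> int:
    """Maximum endpoints in a non-blocking two-tier (leaf/spine) fabric:
    each leaf splits its radix evenly between endpoint-facing ports and
    spine uplinks, and each spine port connects to one leaf, so the
    fabric supports up to radix leaves."""
    r = radix(port_gbps)
    return r * (r // 2)

print(radix(200))               # 512  ports at 200 Gbps
print(radix(100))               # 1024 ports at 100 Gbps
print(two_tier_endpoints(200))  # 131072 endpoints -> the "100,000+ XPUs" figure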
Flexible Connectivity and Co-Packaged Optics
The Tomahawk 6 extends beyond raw chip performance, delivering system-level power efficiency and cost savings through Broadcom’s SerDes and optics ecosystem. Its 200G SerDes provides extended reach for passive copper interconnects, facilitating efficient, low-latency system designs with high reliability and reduced total cost of ownership (TCO). The Tomahawk 6 family offers an option for 1,024 100G SerDes on a single chip, enabling the deployment of AI clusters with extended copper reach and efficient utilization of XPUs and optics via native 100G interfaces.
For systems requiring optical connectivity, the Tomahawk 6 will also be available with co-packaged optics (CPO). CPO solutions offer reduced power consumption and latency, while also minimizing link flaps and improving long-term reliability. This CPO implementation builds upon Broadcom’s experience with CPO versions of the Tomahawk 4 and 5.
AI-Optimized Routing for Scalable AI Networks
The Tomahawk 6 architecture supports unified networks designed for large-scale AI training and inference. Its Cognitive Routing 2.0 incorporates advanced telemetry, dynamic congestion control, rapid failure detection, and packet trimming. These features facilitate global load balancing and adaptive flow control, specifically tailored for AI workloads such as mixture-of-experts, fine-tuning, reinforcement learning, and reasoning models.
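Broadcom has not published the internals of Cognitive Routing 2.0, but the basic idea behind telemetry-driven adaptive load balancing can be illustrated with a short conceptual sketch. The Python below is not Broadcom's algorithm; the names and fields are hypothetical. It simply picks the least-congested member of an equal-cost path group by combining local queue occupancy with end-to-end congestion feedback, which is the general mechanism such features build on.

```python
# Conceptual illustration of congestion-aware path selection across an
# equal-cost path group. NOT Broadcom's Cognitive Routing 2.0; names,
# fields, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class PathTelemetry:
    path_id: int
    queue_depth_cells: int      # local egress queue occupancy
    remote_congestion: float    # normalized end-to-end feedback (0..1)

def pick_path(paths: list[PathTelemetry]) -> int:
    """Choose the least-congested path for the next flowlet.

    Real switch pipelines make this decision in hardware per flowlet or
    per packet; this sketch just scores each path and takes the minimum."""
    def cost(p: PathTelemetry) -> float:
        return p.queue_depth_cells + 1000.0 * p.remote_congestion
    return min(paths, key=cost).path_id

telemetry = [
    PathTelemetry(0, queue_depth_cells=120, remote_congestion=0.10),
    PathTelemetry(1, queue_depth_cells=40,  remote_congestion=0.65),
    PathTelemetry(2, queue_depth_cells=15,  remote_congestion=0.05),
]
print(pick_path(telemetry))  # -> 2, the path with the lowest combined congestion
```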
The switch supports both scale-out and scale-up networking topologies for XPU clusters ranging from 100,000 to one million devices. By leveraging Ethernet for all interfaces, network operators benefit from a unified technology stack and consistent tools across the AI fabric. This allows for dynamic allocation of XPU resources, optimizing configurations for diverse customer workloads.
Broadcom emphasizes the growing momentum behind the Tomahawk 6 and the adoption of Ethernet for backend networking. Multiple deployments are planned, involving over 100,000 XPUs utilizing the Tomahawk 6 for both scale-out and scale-up interconnects.
Open Scale-Up Innovation with the Scale Up Ethernet (SUE) Framework
The Tomahawk 6 integrates into an open scale-up Ethernet ecosystem, with Broadcom providing open specifications for efficient XPU and NIC interfaces. The Scale Up Ethernet (SUE) Framework, announced at OCP Dublin in April 2025, is publicly accessible and will be shared with standards organizations, such as OCP.
Comprehensive Platform for AI Infrastructure
Broadcom’s end-to-end Ethernet AI platform includes the Tomahawk and Jericho switch families, Thor NICs, Agera retimers, Sian optical DSPs, co-packaged optics, and software development kits. This provides a comprehensive solution for building next-generation AI infrastructure.
The Tomahawk 6 complies with the Ultra Ethernet Consortium specifications and supports modern AI transports, congestion signaling, and telemetry for large, distributed training environments. It also supports various network topologies, including scale-up, Clos, rail-only, rail-optimized, and torus.
Broadcom Tomahawk 6 vs. Nvidia Spectrum Photonic ASICs
Product Comparison
The Tomahawk 6 has NVIDIA squarely in its sights, answering NVIDIA's push into switch silicon, long Broadcom's home turf.
Bandwidth and Architecture
Broadcom’s Tomahawk 6 provides an impressive 102.4 Tbps of switching capacity per chip. It supports configurations of up to 512 ports at 200 Gbps or 1,024 ports at 100 Gbps, with options for both copper and co-packaged optics (CPO). The chip utilizes a chiplet architecture, which separates the SerDes (Serializer/Deserializer) from the main processing die. This design is tailored for both scale-up and scale-out AI clusters, enabling support for up to one million XPUs within a unified Ethernet fabric.
On the other hand, Nvidia’s upcoming Spectrum Photonic ASICs, set to launch in 2026, will also deliver 102.4 Tbps per ASIC. However, Nvidia plans to innovate further with next-generation silicon photonics platforms. Their roadmap anticipates switches capable of achieving up to 400 Tbps of aggregate bandwidth, with 1.6 Tbps per port. Configurations will support 128 ports at 800 Gbps or 512 ports at 200 Gbps. Higher-end models are expected to handle 512 ports at 800 Gbps or a staggering 2,048 ports at 200 Gbps, specifically designed for exascale AI clusters that will use millions of GPUs.
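For readers checking the math, the per-ASIC and aggregate figures quoted for both vendors fall straight out of ports times per-port speed. A quick sanity check in plain Python, using only the numbers from the paragraphs above:

```python
# Ports x per-port speed -> aggregate switching bandwidth in Tbps.
def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    return ports * gbps_per_port / 1000

print(aggregate_tbps(512, 200))    # 102.4 -> Tomahawk 6 and Spectrum Photonic per-ASIC figure
print(aggregate_tbps(1024, 100))   # 102.4 -> Tomahawk 6 in 1,024 x 100G mode
print(aggregate_tbps(128, 800))    # 102.4 -> 128 x 800G Spectrum Photonic configuration
print(aggregate_tbps(512, 800))    # 409.6 -> the roughly 400 Tbps higher-end models
print(aggregate_tbps(2048, 200))   # 409.6 -> equivalent 2,048 x 200G configuration
```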
Optics and Power Efficiency
The Tomahawk 6 switch is available in both traditional pluggable-optics and co-packaged optics (CPO) variants. The CPO variant reduces power consumption and latency, increases port density, and lowers the total cost of ownership. Broadcom's CPO implementation is well-developed, building on the technology used in Tomahawk 4 and 5, and is already shipping to partners.
Meanwhile, Nvidia’s Spectrum Photonic switches are designed specifically for silicon photonics, utilizing TSMC’s COUPE platform. These switches offer improved energy efficiency—up to 3.5 times greater—along with enhanced reliability and signal integrity. They also feature liquid cooling for high-density deployments. Nvidia’s approach is more vertically integrated, centered around a proprietary photonics ecosystem and focused on eliminating copper bottlenecks at scale.
Ecosystem and Compatibility
Broadcom is promoting Tomahawk 6 as an open, standards-based solution that complies with the Ultra Ethernet Consortium and supports any modern network interface card (NIC) or XPU endpoint. It is designed for both hyperscale and enterprise AI data centers, offering flexible topologies such as Clos, torus, and scale-up. Additionally, it features open specifications for easy integration.
Nvidia is utilizing its comprehensive stack, which includes ConnectX superNICs, BlueField data processing units (DPUs), and proprietary software, to deliver InfiniBand-like performance over Ethernet. The Spectrum-X technology is closely integrated with Nvidia’s AI platforms, and the company is establishing a strong photonics supply chain in collaboration with partners like Coherent, Corning, and Lumentum.
Deployment and Maturity
Tomahawk 6 is currently being shipped, with CPO-based switches anticipated to be available in significant volume in the first half of 2026. Initial deployments are already scheduled for clusters containing over 100,000 XPUs.
Nvidia Spectrum Photonic switches are expected to launch in 2026, while Quantum-X InfiniBand photonic switches are set to arrive in late 2025. Nvidia’s higher-end appliances, such as SN6800, will provide up to 409.6 Tbps in a single system, though these are multi-ASIC solutions.
Key Differentiators
Broadcom Tomahawk 6 features an open ecosystem, standards-based design, flexible topologies, a mature CPO implementation, and compatibility with any NIC/XPU.
Nvidia Spectrum Photonic boasts deep vertical integration, proprietary photonics, higher aggregate bandwidth in future models, and close integration with Nvidia’s AI and DPU stack.
Feature | Broadcom Tomahawk 6 | NVIDIA Spectrum Photonic ASICs
------- | ------------------- | ------------------------------
Max Bandwidth (per ASIC) | 102.4 Tbps | 102.4 Tbps (SN6810), up to 400 Tbps (future)
Ports | 512 x 200 Gbps, 1,024 x 100 Gbps | 128 x 800 Gbps, 512 x 200 Gbps, up to 2,048 x 200 Gbps
Optics | Copper, CPO (shipping 2025/26) | Silicon photonics, CPO (2026)
Ecosystem | Open, standards-based, any NIC/XPU | NVIDIA-centric, ConnectX/BlueField DPUs
Topologies | Clos, torus, scale-up, scale-out | Clos, torus, scale-out
Availability | Shipping now, CPO in 2026 | 2026 (Ethernet), late 2025 (InfiniBand)
Power Efficiency | High; CPO reduces TCO | 3.5x better (claimed), liquid cooling
Broadcom's Tomahawk 6 is currently the most advanced open Ethernet switch ASIC available, with a mature CPO implementation and broad endpoint compatibility. NVIDIA's Spectrum Photonic ASICs, expected in 2026, will push bandwidth and photonic integration further, but they are more vertically integrated and closely tied to NVIDIA's AI ecosystem. Both options are poised to shape the future of AI networking, with the choice likely depending on ecosystem preferences, deployment timelines, and integration needs.