Semiconductor company Nvidia has unveiled Spectrum-XGS Ethernet, a networking technology designed to connect distributed data centers and support large-scale artificial intelligence operations.
As demand for AI grows, data centers are running up against power and capacity limits within a single facility. Expanding beyond one site has been difficult because traditional Ethernet infrastructure often introduces high latency, jitter, and inconsistent performance over long-distance links.
“The AI industrial revolution is here, and giant-scale AI factories are the essential infrastructure,” said Jensen Huang, founder and CEO of Nvidia, in a media release. “With Nvidia Spectrum-XGS Ethernet, we add scale-across to scale-up and scale-out capabilities to link data centers across cities, nations and continents into vast, giga-scale AI super-factories.”
Spectrum-XGS Ethernet builds on the company’s Spectrum-X platform by adding scale-across infrastructure. It is designed to extend the platform’s performance to interconnect multiple data centers, allowing them to function as a single AI facility.
Nvidia said the new system uses algorithms that automatically adjust to the distance between sites. It also features distance-based congestion control, latency management, and telemetry to improve reliability. According to the company, the technology nearly doubles the performance of the Nvidia Collective Communications Library (NCCL), which supports multi-GPU and multi-node communication across distributed AI clusters.
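The core idea behind distance-adaptive networking can be sketched simply: a sender that sizes its congestion window to the link’s bandwidth-delay product keeps more data in flight on longer paths, so throughput does not collapse as round-trip time grows. The snippet below is a purely illustrative sketch of that principle, not Nvidia’s implementation; the function name and figures are hypothetical.

```python
# Illustrative sketch of distance-adaptive congestion control.
# The congestion window is sized to the bandwidth-delay product (BDP),
# so a cross-continent path keeps far more data in flight than an
# in-building path on the same link speed. Not Nvidia's actual API.

def window_bytes(link_gbps: float, rtt_ms: float) -> int:
    """Return a congestion window equal to the link's BDP in bytes."""
    bits_per_ms = link_gbps * 1e9 / 1000   # bits the link carries per ms
    bdp_bits = bits_per_ms * rtt_ms        # bits in flight over one RTT
    return int(bdp_bits / 8)               # convert bits to bytes

# Hypothetical 400 Gb/s link: an intra-site hop (0.01 ms RTT) versus
# a cross-continent path (70 ms RTT) need very different windows.
print(window_bytes(400, 0.01))
print(window_bytes(400, 70))
```

The takeaway is that a fixed, LAN-tuned window starves long-haul links, which is why a scale-across fabric must adapt its congestion control to inter-site distance.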
By integrating these features, Spectrum-XGS Ethernet enables data centers in different locations to work together as one AI super-factory optimized for long-distance connections.
The Spectrum-X networking platform also provides higher bandwidth density than standard Ethernet, using Nvidia Spectrum-X switches and ConnectX-8 SuperNICs. Nvidia said this supports scalability and low latency for hyperscale AI operations, including some of the largest AI supercomputers in use today.