Rongchai Wang
Aug 22, 2025 05:13
NVIDIA’s NVLink and NVLink Fusion technologies scale up AI inference performance with higher GPU-to-GPU bandwidth and the flexibility to integrate custom silicon, addressing the rapid growth in AI model complexity.
The rapid growth of artificial intelligence (AI) models has pushed parameter counts from millions to trillions, demanding unprecedented computational resources. Serving these models requires clusters of GPUs working in concert, as Joe DeLaere notes in a recent NVIDIA blog post.
NVLink’s Evolution and Impact
NVIDIA introduced NVLink in 2016 to overcome the bandwidth limitations of PCIe in high-performance computing and AI workloads, enabling faster GPU-to-GPU communication and a unified memory space. The technology has evolved steadily since: the NVLink Switch, introduced in 2018, achieved 300 GB/s all-to-all bandwidth in an 8-GPU topology, paving the way for scale-up compute fabrics.
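At the software level, the direct GPU-to-GPU path that NVLink provides is exposed through CUDA’s standard peer-to-peer APIs. The sketch below is a minimal illustration, not taken from the NVIDIA post: it assumes two visible devices (IDs 0 and 1), enables peer access between them, and issues a direct device-to-device copy of the kind NVLink accelerates (error checking omitted for brevity).

```cpp
// Minimal sketch of direct GPU-to-GPU copies; device IDs 0 and 1 are assumptions.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess = 0;
    // Ask the runtime whether GPU 0 can address GPU 1's memory directly
    // (true when the two GPUs share an NVLink or PCIe peer-to-peer path).
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { std::printf("No P2P path between GPU 0 and GPU 1\n"); return 0; }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // map GPU 1's memory into GPU 0's address space

    const size_t bytes = 256 << 20;    // 256 MiB test buffer
    void *src = nullptr, *dst = nullptr;
    cudaMalloc(&src, bytes);           // allocated on GPU 0 (current device)
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);           // allocated on GPU 1

    // With peer access enabled, the copy travels GPU-to-GPU over the
    // interconnect instead of staging through host memory.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    std::printf("Peer copy complete\n");
    return 0;
}
```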
The fifth-generation NVLink, released in 2024, supports 72 GPUs with all-to-all communication at 1,800 GB/s per GPU, for an aggregate bandwidth of 130 TB/s, 800 times that of the first generation. This continuous advancement tracks the growing complexity of AI models and their computational demands.
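The aggregate figure follows directly from the per-GPU number: 72 GPUs × 1,800 GB/s each comes to 129,600 GB/s, or roughly 130 TB/s.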
NVLink Fusion: Customization and Flexibility
NVLink Fusion gives hyperscalers access to NVLink’s scale-up technologies, letting them integrate custom silicon with NVIDIA’s architecture to deploy semi-custom AI infrastructure. The offering spans NVLink SERDES, chiplets, switches, and rack-scale architecture, delivered as a modular Open Compute Project (OCP) MGX rack solution for integration flexibility.
NVLink Fusion supports both custom CPU and custom XPU configurations. For custom XPUs, it provides Universal Chiplet Interconnect Express (UCIe) IP and interfaces, giving customers flexibility in how they integrate their XPUs across platforms. For custom CPU setups, integrating NVIDIA NVLink-C2C IP is recommended for optimal GPU connectivity and performance.
Maximizing AI Factory Revenue
The NVLink scale-up fabric raises AI factory productivity by improving the balance between throughput per watt and latency. NVIDIA’s 72-GPU rack architecture is central to meeting these compute needs, enabling strong inference performance across a range of use cases. Notably, enlarging the scale-up domain improves achievable throughput, and with it revenue potential, even when per-GPU NVLink speed is held constant.
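To see why the size of the scale-up domain matters even at constant link speed, consider a toy cost model; all of its constants (per-peer costs, shard width, compute time) are hypothetical illustrations, not NVIDIA figures. A model replica sharded across 32 GPUs pays a steep per-peer penalty for every shard that falls outside the all-to-all NVLink domain:

```cpp
#include <cstdio>

// Hypothetical cost model: every peer reached inside the all-to-all NVLink
// domain costs 1 ms per token; every peer beyond it costs 10 ms over a
// slower fabric. Illustrative constants only, not measurements.
static double comm_ms(int shard_gpus, int nvlink_domain) {
    const double intra_ms = 1.0;
    const double inter_ms = 10.0;
    int inside  = shard_gpus < nvlink_domain ? shard_gpus : nvlink_domain;
    int outside = shard_gpus - inside;
    return inside * intra_ms + outside * inter_ms;
}

int main() {
    const int shard_gpus  = 32;    // GPUs one model replica is sharded across (assumed)
    const double compute_ms = 5.0; // per-token compute time (assumed)
    const int domains[2] = {8, 72};
    for (int domain : domains) {
        double step_ms = compute_ms + comm_ms(shard_gpus, domain);
        std::printf("%2d-GPU NVLink domain: %5.0f ms/token, ~%4.0f tokens/s per replica\n",
                    domain, step_ms, 1000.0 / step_ms);
    }
    return 0;
}
```

Under these assumed constants, the same per-link speed yields roughly seven times more tokens per second once the whole shard fits inside one NVLink domain, the qualitative effect the 72-GPU rack is built to exploit.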
A Robust Partner Ecosystem
NVLink Fusion draws on an extensive silicon ecosystem, with partners supplying custom silicon, CPUs, and IP, giving adopters broad support and rapid design-in. System partners and data center infrastructure component providers are already building NVIDIA GB200 NVL72 and GB300 NVL72 systems, shortening adopters’ time to market.
Advancements in AI Reasoning
NVLink represents a significant leap in addressing compute demand in the era of AI reasoning. By leveraging a decade of expertise in NVLink technologies and the open standards of the OCP MGX rack architecture, NVLink Fusion empowers hyperscalers with exceptional performance and customization options.
Source: https://blockchain.news/news/nvidia-nvlink-fusion-ai-inference-performance