NVIDIA’s dUPF Paves the Way for 6G and AI-Native Networks



Terrill Dicki
Oct 16, 2025 00:57

NVIDIA introduces distributed User Plane Function (dUPF) to enhance 6G networks with AI capabilities, offering ultra-low latency and energy efficiency.




The telecommunications industry is on the brink of a significant transformation as it moves towards 6G networks, with NVIDIA playing a crucial role in this evolution. The company has introduced an accelerated and distributed User Plane Function (dUPF) that is set to enhance AI-native Radio Access Networks (AI-RAN) and AI-Core, according to NVIDIA.

Understanding dUPF and Its Importance

dUPF is a vital component of the 5G core network that is now being adapted for 6G. It handles user plane packet processing at distributed locations, bringing computation closer to the network edge. This reduces latency and optimizes network resources, making it essential for real-time applications and AI traffic management. By moving data processing closer to users and radio nodes, dUPF enables ultra-low-latency operation, a critical requirement for next-generation applications such as autonomous vehicles and remote surgery.
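A back-of-the-envelope propagation calculation shows why placing the user plane at the edge matters. The sketch below is illustrative only: the distances are hypothetical, and it counts fiber propagation delay alone, ignoring queuing and processing time.

```python
# Why edge placement cuts latency: fiber propagation delay alone.
# Distances are hypothetical illustrations, not measured deployments.
SPEED_IN_FIBER_M_S = 2.0e8  # light in fiber travels at roughly 2/3 c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km * 1000 / SPEED_IN_FIBER_M_S * 1000

centralized = round_trip_ms(1000)  # UPF in a distant national data center
edge = round_trip_ms(10)           # dUPF hosted near the radio site

print(f"centralized core: {centralized:.2f} ms round trip")  # 10.00 ms
print(f"edge dUPF:        {edge:.2f} ms round trip")         # 0.10 ms
```

Even before queuing and processing overheads, a 1,000 km backhaul adds roughly 10 ms of round-trip delay that an edge-hosted dUPF avoids.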

Architectural Advantages of dUPF

NVIDIA’s implementation of dUPF leverages its DOCA Flow technology to enable hardware-accelerated packet steering and processing. This results in energy-efficient, low-latency operation, reinforcing the role of dUPF in the 6G AI-Native Wireless Networks Initiative (AI-WIN). The AI-WIN initiative, a collaboration between industry leaders including T-Mobile and Cisco, aims to build AI-native network stacks for 6G.

Benefits of dUPF on NVIDIA’s Platform

The NVIDIA AI Aerial platform, a suite of accelerated computing platforms and services, supports dUPF deployment. Key benefits include:

  • Ultra-low latency with zero packet loss, enhancing user experience for edge AI inferencing.
  • Cost reduction through distributed processing, lowering transport costs.
  • Energy efficiency via hardware acceleration, reducing CPU usage and power consumption.
  • New revenue models from AI-native services requiring real-time edge data processing.
  • Improved network performance and scalability for AI and RAN traffic.

Real-World Use Cases and Implementation

dUPF’s capabilities are particularly beneficial for applications demanding immediate responsiveness, such as AR/VR, gaming, and industrial automation. By hosting dUPF functions at the network edge, data can be processed locally, eliminating backhaul delays. This localized processing also enhances data privacy and security.

In practical terms, NVIDIA’s reference implementation of dUPF has been validated in lab settings, demonstrating 100 Gbps throughput with zero packet loss. This showcases the potential of dUPF in handling AI traffic efficiently, using only minimal CPU resources.
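To put the 100 Gbps figure in perspective, a quick calculation shows the packet rates involved. The frame sizes below are common illustrative values, not NVIDIA's published test parameters, and the math ignores Ethernet preamble and inter-frame gap overhead.

```python
# Rough packet-rate arithmetic for a fully loaded 100 Gbps link.
# Frame sizes are illustrative assumptions, not the lab test's parameters.
LINK_GBPS = 100

def packets_per_second(frame_bytes: int) -> float:
    """Packets/s needed to saturate the link at a given frame size."""
    return LINK_GBPS * 1e9 / (frame_bytes * 8)

for size in (64, 512, 1500):
    print(f"{size:>5}-byte frames: {packets_per_second(size) / 1e6:.1f} Mpps")
```

At small frame sizes this works out to well over 100 million packets per second, which is why hardware-accelerated packet steering, rather than CPU-based forwarding, is central to sustaining zero packet loss at this rate.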

Industry Adoption and Future Prospects

Cisco has embraced the dUPF architecture, accelerated by NVIDIA’s platform, as a cornerstone for AI-centric networks. This collaboration aims to enable telecom operators to deploy high-performance, energy-efficient dUPF solutions, paving the way for applications such as video search, agentic AI, and ultra-responsive services.

As the telecommunications sector continues to evolve, NVIDIA’s dUPF stands out as a pivotal technology in the transition towards 6G networks, promising to deliver the necessary infrastructure for future AI-centric applications.

Image source: Shutterstock


Source: https://blockchain.news/news/nvidia-dupfs-6g-ai-native-networks