F5 has announced the integration and validation of its BIG-IP Next for Kubernetes, a component of the F5 Application Delivery and Security Platform, with NVIDIA RTX PRO 6000 Blackwell Server Edition and BlueField-4 data processing units (DPUs). According to F5, this combination aims to optimize AI workloads in enterprise data centers by increasing performance, efficiency, scalability, and security for large-scale AI applications and AI factories.
The solution combines F5’s traffic management and application security technologies with NVIDIA’s AI infrastructure. Reported technical benefits include an improvement of at least 30 percent in token generation rate and in Time To First Token (TTFT), along with load-balancing enhancements for large language model (LLM) operations. The technology stack targets high-performance, AI-accelerated applications that require low network latency and robust security in enterprise environments.
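The two metrics cited here can be measured generically on any streaming inference endpoint. The sketch below is illustrative only: the token stream is a stand-in generator, not an F5 or NVIDIA API.

```python
import time

def measure_streaming_metrics(token_stream):
    """Measure Time To First Token (TTFT) and overall token rate
    for any iterable that yields tokens. Returns (ttft_seconds,
    tokens_per_second); ttft is None if the stream is empty."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        if ttft is None:
            # Latency from request start until the first token arrives.
            ttft = time.perf_counter() - start
        count += 1
    elapsed = time.perf_counter() - start
    rate = count / elapsed if elapsed > 0 else 0.0
    return ttft, rate

def fake_stream(n=5, delay=0.01):
    """Hypothetical stand-in for an LLM's streamed response."""
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

ttft, rate = measure_streaming_metrics(fake_stream())
```

In this framing, the vendor claim amounts to a lower measured TTFT and a higher measured token rate under the same workload.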
F5 notes that BIG-IP Next for Kubernetes includes layer 4 through layer 7 (L4–L7) firewall capabilities, distributed denial-of-service (DDoS) protection, intrusion prevention, and programmable Model Context Protocol (MCP) security. Integration with NVIDIA’s DOCA Argus framework enables real-time threat detection designed for AI-specific workloads.
The combined platform targets energy-conscious data centers by maximizing AI throughput per unit of power, for example tokens generated per watt, supporting greater scale without requiring infrastructure replacement. F5 states that the offering gives enterprises a path toward large-scale AI adoption while minimizing operational costs.
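The tokens-per-watt efficiency figure is simple arithmetic: throughput divided by average power draw. A minimal sketch, with purely illustrative numbers (not vendor benchmarks):

```python
def tokens_per_watt(tokens_per_second: float, avg_power_watts: float) -> float:
    """Energy efficiency of an inference deployment:
    token throughput divided by average power draw."""
    if avg_power_watts <= 0:
        raise ValueError("power must be positive")
    return tokens_per_second / avg_power_watts

# Assumed figures for illustration only: a 30% higher token rate
# at unchanged power yields 30% more tokens per watt.
baseline = tokens_per_watt(1000.0, 700.0)
improved = tokens_per_watt(1300.0, 700.0)
```

This is the sense in which a throughput gain at constant power translates directly into the energy-efficiency improvement the announcement emphasizes.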
F5 BIG-IP Next for Kubernetes on NVIDIA RTX PRO 6000 Blackwell Server Edition is now available for order through NVIDIA’s channel and original equipment manufacturer partners.
Source: F5