Check Point has published an AI Factory Security Architecture Blueprint, a reference architecture aimed at securing private AI infrastructure end to end, from the GPU server network fabric up through application-layer controls and LLM-facing endpoints. The company frames it as a “vendor-tested” blueprint that combines Check Point firewall and AI security products with NVIDIA BlueField DPU capabilities to push security controls closer to the AI fabric.
The blueprint lays out a layered model across four areas: perimeter, application/LLM, AI infrastructure, and workload/container. For data center teams building private AI environments, the practical value is the attempt to connect familiar controls—segmentation, policy enforcement, and inspection—to AI-specific traffic paths like inference APIs and east-west Kubernetes flows. But blueprints only help if they’re implementable in your architecture and don’t turn into a new performance bottleneck in front of GPU clusters.
Perimeter and fabric entry controls
At the perimeter layer, the design calls for Check Point Maestro Hyperscale Firewall to handle north-south traffic into the AI environment. The blueprint describes capabilities including Zero Trust Network Access (ZTNA), virtual security group segmentation, and scalable policy enforcement at the entry point to the AI fabric for traffic coming from external users, internet sites, and enterprise networks.
Application and LLM endpoint protections
For the application and LLM layer, the blueprint centers on Check Point AI Agent Security, described as protecting inference APIs and LLM endpoints against prompt injection, data exfiltration, adversarial queries, and API abuse. Check Point states this capability integrates into Check Point Firewalls (cloud, virtual, and appliance form factors), Check Point WAF, and Check Point AI Factory Firewall.
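Check Point has not published the detection logic behind AI Agent Security, but the category of protection described above can be illustrated with a minimal sketch: a pre-inference gateway filter that screens incoming prompts for known injection phrasings and enforces a size limit against API abuse. The function name, patterns, and limit below are all hypothetical, chosen only to show the shape of such a check.

```python
import re

# Hypothetical pattern-based pre-filter for an inference API gateway.
# Illustrates the *kind* of checks an LLM-endpoint protection layer might
# apply; it is not Check Point's actual (unpublished) detection logic.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

MAX_PROMPT_CHARS = 8_000  # illustrative API-abuse limit, not a vendor default


def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming inference request."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds size limit"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, "matched injection pattern"
    return True, "ok"
```

Real products layer far more on top of this (semantic classifiers, response-side exfiltration checks, per-key rate limits), but the placement is the point: the check sits in front of the LLM endpoint, on the same path as the inference API traffic.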
Inline security on NVIDIA BlueField DPUs
At the AI infrastructure layer, Check Point describes an integration that embeds firewall and threat prevention functions directly into NVIDIA BlueField DPUs using the NVIDIA DOCA software platform. The stated goal is hardware-accelerated, inline inspection of ingress and egress traffic without consuming CPU or GPU cycles, along with tenant segmentation and runtime threat detection through DOCA Argus on BlueField.
Kubernetes east-west segmentation
For the workload and container layer, the blueprint describes integration with third-party micro-segmentation tools to control east-west traffic inside Kubernetes clusters. The intent is to limit lateral movement between inference namespaces and contain compromised containers.
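The blueprint leans on third-party tooling for this layer, but the intent it describes can be expressed with native Kubernetes primitives. The fragment below is an illustrative NetworkPolicy, not part of the blueprint: it default-denies traffic in a (hypothetical) inference namespace and admits ingress only from gateway pods, which is one way to cut off lateral movement between workloads.

```yaml
# Illustrative only -- namespace and label names are hypothetical, and the
# blueprint itself relies on third-party micro-segmentation tooling.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: inference-default-deny
  namespace: inference-a
spec:
  podSelector: {}                      # applies to every pod in the namespace
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: inference-gateway   # only the gateway may reach model pods
```

Dedicated micro-segmentation products add identity-aware policy, visibility, and enforcement beyond what NetworkPolicy offers, but the containment goal is the same: a compromised container in one inference namespace should have no network path to its neighbors.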
“AI infrastructure has become one of the most valuable and vulnerable assets in the enterprise,” said Nataly Kremer, Chief Product Officer at Check Point. “The AI Factory Security Blueprint is how we help organizations protect those investments—not as an afterthought, but from the ground up, through every layer of the stack.”
Source: Check Point