Q.ANT deploys Gen 2 photonic AI accelerators at LRZ HPC for evaluation

Q.ANT has deployed its second-generation photonic processors—Gen 2 Native Processing Units (NPUs)—into a high-performance computing (HPC) environment at the Leibniz Supercomputing Center (LRZ). The company says the installation moves photonic co-processing into production evaluation, with a focus on energy and performance constraints in AI and scientific simulation workloads.

Q.ANT says its Gen 2 NPUs are installed via standard PCIe interfaces and run alongside CPUs and GPUs in existing heterogeneous HPC systems at LRZ. The company positions the hardware as a photonic AI accelerator, aiming for higher computational throughput and improved energy efficiency versus its first-generation system previously deployed at LRZ.

In benchmark evaluations at LRZ, Q.ANT reports the following results for its Gen 2 architecture compared with its first-generation NPUs:

- More than 50x higher throughput for matrix multiplications
- 25x faster inference on a ResNet-18 convolutional neural network
- 6x lower energy consumption for typical workloads

Q.ANT also cites “enhanced analog units optimized for nonlinear functions,” which it says reduce parameter counts and training depth, and states that accuracy is sufficient to support state-of-the-art AI applications.

At the architecture level, Q.ANT says its photonic NPUs execute mathematical operations directly in the optical domain using Thin-Film Lithium Niobate (TFLN) photonic integrated circuits, rather than relying on transistor switching. The company claims this approach eliminates on-chip heat generation and cooling requirements.
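Q.ANT has not published a programming interface for its NPUs, but the co-processor pattern described above — a host CPU dispatching dense matrix products to a PCIe-attached accelerator while lighter work stays on the host — can be sketched in Python. Everything here is illustrative: the `PhotonicNPU` class and its `matmul` method are hypothetical stand-ins (NumPy emulates the optical result), not a real driver API.

```python
import numpy as np

class PhotonicNPU:
    """Hypothetical stand-in for a PCIe-attached photonic accelerator.

    A real driver would move the operands over PCIe and perform the
    multiply in the optical domain; here NumPy emulates the result.
    """

    def matmul(self, a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Placeholder for an optical matrix-matrix multiplication.
        return a @ b

def linear_layer(x: np.ndarray, weights: np.ndarray, npu: PhotonicNPU) -> np.ndarray:
    # Offload the expensive dense matrix product to the accelerator;
    # keep the cheap elementwise activation on the host CPU.
    y = npu.matmul(x, weights)
    return np.maximum(y, 0.0)  # ReLU applied on the host

npu = PhotonicNPU()
x = np.random.rand(4, 8)   # batch of 4 input vectors
w = np.random.rand(8, 3)   # layer weights
out = linear_layer(x, w, npu)
print(out.shape)  # (4, 3)
```

The split mirrors how such accelerators are typically integrated into heterogeneous systems: matrix multiplications, which dominate neural-network inference cost, go to the co-processor, while control flow and small elementwise operations remain on the CPU.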

“What makes this deployment significant is that it moves photonic co-processing beyond proof-of-concept and into production HPC environments,” said Bob Sorensen, Senior Vice President for Research at Hyperion Research. “Demonstrating measurable energy reduction and performance gains under real-world workloads signals that alternative architectures like photonics are becoming a practical path forward for scaling AI infrastructure.”

Q.ANT CEO Dr. Michael Förtsch framed the deployment as an integration test under real workloads: “At LRZ, we’re proving that light-based co-processing can integrate with today’s infrastructure and deliver measurable efficiency gains under real workloads.”

LRZ says it will evaluate performance, precision, and energy efficiency under production workloads and operational requirements within heterogeneous HPC architectures. “Our evaluation is conducted under real production workloads and operational requirements,” said Prof. Dr. Dieter Kranzlmüller, Chairman of the Board of Directors of LRZ.

Source: Q.ANT
