3 E Network details modular AI data center architecture for high-density racks and liquid cooling

3 E Network Technology Group has announced the core technical architecture for a high-performance AI data center project in Mikkeli, Finland, following a land lease agreement signed with the City of Mikkeli in December 2025. The company says the design targets large-scale AI model training, inference, and other high-intensity workloads that can exceed the limits of traditional data center architectures.

The project is based on a modular, prefabricated design. 3 E Network says the data center is decomposed into interchangeable standard units including power distribution, IT equipment, cooling, and networking modules. These modules are prefabricated and tested off-site, then shipped for on-site assembly. The company says individual modules can be updated or replaced without impacting overall system operation, enabling “zero-downtime” upgrades and expansion.

For compute density, 3 E Network says it is designing for rack power densities of 20 kW or higher, compared with “typically 3-5kW” in traditional data centers, to support AI cluster power requirements. On the interconnect side, it says the architecture uses Remote Direct Memory Access (RDMA) over either InfiniBand or RDMA over Converged Ethernet (RoCE) to interconnect GPU memories directly and bypass CPU-processing bottlenecks. The company also specifies a “dual-device hot backup design for core equipment” to support high availability.
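To put the density figures in perspective, a quick back-of-the-envelope calculation shows how rack count shrinks as per-rack density rises. The 1 MW IT load below is a hypothetical figure for illustration, not from 3 E Network's announcement:

```python
import math

def racks_needed(it_load_kw: float, rack_density_kw: float) -> int:
    """Racks required to house a given IT load at a given per-rack density."""
    return math.ceil(it_load_kw / rack_density_kw)

# Hypothetical 1 MW (1,000 kW) IT load at traditional vs. AI-era densities:
for density in (3, 5, 20):
    print(f"{density} kW/rack -> {racks_needed(1000, density)} racks")
```

At the cited densities, the same 1 MW load needs 334 racks at 3 kW, 200 at 5 kW, but only 50 at 20 kW, which is the footprint argument behind high-density designs.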

On thermal management, 3 E Network says it is using an “Air-Liquid Hybrid” approach intended to support both general-purpose servers and 20 kW-plus liquid-cooled racks. It links this to higher chip power draw, citing NVIDIA H100/H200 and Blackwell-architecture products exceeding 700 W per chip. The company claims the hybrid approach lowers Power Usage Effectiveness (PUE), reduces noise, and avoids redundant infrastructure investment during rapid hardware iteration.
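PUE is defined as total facility power divided by IT equipment power, so a value of 1.0 would mean zero cooling and distribution overhead. A minimal illustrative calculation (the load and overhead figures below are hypothetical, not company data):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical: a 1,000 kW IT load with 500 kW of cooling/distribution
# overhead, versus the same load with 250 kW overhead under liquid cooling.
air_cooled = pue(1500, 1000)  # 1.5
hybrid = pue(1250, 1000)      # 1.25
print(f"air-cooled PUE: {air_cooled:.2f}, hybrid PUE: {hybrid:.2f}")
```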

For operations, 3 E Network says it has a self-developed AI smart cooling system that collects and analyzes IT load, cooling system status, and external environmental parameters in real time to optimize cooling strategies; it expects this to reduce annual PUE by 8% to 15%. It also describes using Internet of Things (IoT) sensors with Artificial Intelligence for IT Operations (AIOps) models for predictive maintenance, plus remote centralized control intended to reduce unplanned downtime and lower operations and maintenance costs. Finally, the company describes an AI-based security capability that analyzes logs and network traffic to detect anomalous behavior such as abnormal logins, data exfiltration, and potential insider threats.
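Taken at face value, an 8% to 15% PUE reduction implies a much larger cut in the non-IT overhead, since PUE cannot fall below 1.0. The sketch below works through the arithmetic with a hypothetical baseline of 1.5 (not a figure 3 E Network has published):

```python
def reduced_pue(baseline_pue: float, reduction_pct: float) -> float:
    """Apply a percentage reduction to a baseline PUE value."""
    return baseline_pue * (1 - reduction_pct / 100)

baseline = 1.5  # hypothetical baseline PUE for illustration
for pct in (8, 15):
    new = reduced_pue(baseline, pct)
    # Overhead (non-IT) power is PUE - 1; show how much of it disappears.
    overhead_cut = (baseline - new) / (baseline - 1) * 100
    print(f"{pct}% PUE reduction: PUE {new:.3f}, "
          f"overhead power cut by {overhead_cut:.0f}%")
```

With these assumed numbers, an 8% PUE reduction (1.5 to 1.38) trims cooling and distribution overhead by 24%, and a 15% reduction (1.5 to 1.275) trims it by 45%, which is why the claimed range only makes sense against a baseline comfortably above 1.0.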

Source: 3 E Network Technology Group
