How to burn-in test megawatt power equipment without drawing a megawatt from the grid

Burn-in testing exists because electronics fail early.

Many failures concentrate in the first hours of operation, according to Mike Nolan of PPST Solutions; one rule of thumb holds that most defects surface within the first 12 hours. That’s why powered equipment gets run at full load for extended periods before it ships: catch the infant mortality at the factory, not inside a live data center.

The problem is what “full load” means at today’s power levels. If you’re validating a one-megawatt UPS, AC-DC converter, or solid-state transformer, you’re drawing a megawatt from the utility for hours at a time. That’s expensive. In places like California, facilities on interruptible or demand-response tariffs may be incentivized or contractually required to curtail load during peak-demand or grid-emergency events, which means you can’t even run the test when you need to.

You can’t just plug it in

Before you get to burn-in, there’s a more basic question: why not just test with utility power?


Nolan described the standard approach as testing the operating corners: high and low input voltage against high and low load, then adding dynamic tests for fast changes, faults, and abnormal line conditions.

You need to prove the product works correctly at every combination, and you need to know what happens when conditions change fast. “To have repeatability and tests to ensure you understand where your product is and how your product performs under those conditions needs to be done in a very precise and accurate manner,” said Nolan in a recent interview with The Data Center Engineer.
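The corner-testing idea can be sketched as a simple test matrix. The limits below are hypothetical placeholders; real values come from the product specification, not from Nolan or PPST:

```python
from itertools import product

# Hypothetical operating limits for an AC-input DUT (illustrative only;
# substitute the limits from the actual product specification).
input_voltages = (180.0, 264.0)   # V, low line / high line
load_levels = (0.1, 1.0)          # fraction of rated load

# The four static "corners": every combination of line and load extremes.
corners = list(product(input_voltages, load_levels))
for v_in, load in corners:
    print(f"corner: {v_in:.0f} V input at {load:.0%} load")
```

Dynamic tests (fast line changes, faults, abnormal conditions) then get layered on top of each corner, which is where programmable sources earn their keep.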


Utility power can’t do that. Grid voltage drifts. Frequency varies. You can’t command a fault condition from the wall outlet. Programmable AC and DC sources let engineers set exact conditions, repeat them, and simulate faults on purpose.

For example, a feeder fault, switching event, lightning strike, reclose operation, or damaged utility pole can create sags, swells, interruptions, or transients at the AC input. Programmable sources and grid simulators let engineers intentionally reproduce those conditions. Can your equipment survive that without shorting out? You need to know before it’s installed.

Recirculate, don’t dissipate

The regenerative approach solves the burn-in cost problem by sending the energy in a loop. Power flows from a source through the unit under test, then into a regenerative load that converts it and feeds it back to the utility. In steady state, the net utility draw can fall to roughly the combined losses of the device under test (DUT) and test equipment, plus auxiliaries, assuming the regenerated energy can be reused by the facility or accepted by the utility connection.

“Out of a one megawatt, we’re only utilizing a hundred kilowatts of energy,” Nolan said. “The rest of the energy is just circulating. So our energy consumption drops down by 90%.” The exact number depends on the DUT efficiency, source/load efficiency, and facility interconnection, but the point is that the test bay no longer has to buy and reject the full megawatt as heat.
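The energy balance behind Nolan’s 90% figure is easy to sketch. The efficiencies below are illustrative assumptions, not PPST’s published numbers; the utility only has to supply the losses in the loop:

```python
# Rough energy balance for a regenerative burn-in loop.
# All efficiencies are illustrative assumptions, not measured data.
dut_power_kw = 1000.0   # power circulating through the unit under test
dut_eff = 0.96          # device-under-test efficiency (assumption)
source_eff = 0.97       # programmable source efficiency (assumption)
load_eff = 0.97         # regenerative load efficiency (assumption)

loop_eff = dut_eff * source_eff * load_eff
net_draw_kw = dut_power_kw * (1 - loop_eff)  # utility supplies only the losses
savings = 1 - net_draw_kw / dut_power_kw
print(f"net utility draw ≈ {net_draw_kw:.0f} kW ({savings:.0%} below dissipative testing)")
```

With these assumed efficiencies the net draw lands near 100 kW on a megawatt of circulating power, consistent with the roughly 90% reduction Nolan describes.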

That’s not just an electricity savings. A megawatt of heat dumped into a test facility requires serious cooling infrastructure, and the workspace gets uncomfortable fast. Reducing the thermal load by 90% shrinks the HVAC requirement and keeps the environment within occupational safety limits.

Scaling by module

Not every test is a megawatt test. Many conventional data-center racks still sit below 10 kW, while current AI/HPC rack-scale systems are pushing into the 100–150 kW range. Component suppliers and system vendors are already designing power architectures for future racks approaching or exceeding 1 MW.

PPST’s approach is modular: AC and DC sources that can be paralleled from 5 kW up to megawatt-plus on the AC side, and from 5 kW to over 3 MW on the DC side. All of those modules can operate in regenerative mode.
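Sizing a modular test setup reduces to a ceiling division. The 250 kW building block below is a hypothetical example; actual module ratings vary by product line:

```python
import math

# Sketch: how many parallel modules cover a given test power.
# The 250 kW module size is a hypothetical example, not a PPST rating.
def modules_needed(target_kw: float, module_kw: float = 250.0) -> int:
    return math.ceil(target_kw / module_kw)

print(modules_needed(1000))   # a 1 MW AC burn-in
print(modules_needed(3000))   # a 3 MW DC test
```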

The GPU side adds another layer. At the rack input, AI workloads can create millisecond-scale swings, while the voltage regulators feeding accelerators must handle much faster microsecond-scale load steps at very low voltage and very high current. That is a different test problem from megawatt burn-in. Regenerative loads are useful for energy recovery, but very fast VRM and point-of-load transient validation often requires linear loads, FET-slammers, onboard transient generators, or socket-level test fixtures to avoid cabling and control-loop limitations. At the PDN level, engineers may also care about nanosecond-scale edge behavior, which is a signal-integrity and decoupling problem.

Getting the test equipment response time wrong means you’re not actually validating the control loop under realistic conditions.
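A quick slew-rate calculation shows why the rack-input and VRM-output cases are different test problems. The power steps and voltages below are illustrative assumptions, not measured GPU data:

```python
# Average current slew (A/µs) a test load must reproduce for a power step.
# All step sizes and voltages below are illustrative assumptions.
def slew_a_per_us(delta_power_w: float, voltage_v: float, step_time_s: float) -> float:
    delta_i = delta_power_w / voltage_v
    return delta_i / (step_time_s * 1e6)

# Rack input: a 50 kW swing at 480 V over 1 ms is a modest slew.
print(f"rack: {slew_a_per_us(50_000, 480, 1e-3):.2f} A/µs")
# VRM output: a 500 W step at 0.8 V over 1 µs demands hundreds of A/µs.
print(f"vrm:  {slew_a_per_us(500, 0.8, 1e-6):.0f} A/µs")
```

A load whose response time can’t match the required slew simply filters the transient, and the control loop never sees the stress it will see in the field.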

For more info on testing solutions, visit ppstsolutions.com.
