“The appetite to have the success of a project be fundamentally tied to how well the utility treats you is untenable.”
That line, from XENDEE CEO Adib Naslé, captures a mood we’re hearing more often from data center planners: the grid is no longer the default assumption. Not because engineers suddenly want to play utility—but because project timelines and power availability are increasingly misaligned.
In a recent interview with The Data Center Engineer, Naslé described a recurring planning disconnect: data center customers may be thinking in “12, 18 months,” while “the utilities are thinking 7 to 12 years.” When those schedules collide, “bring your own power” stops sounding like a slogan and starts reading like a requirements document.
This article is a field guide to Naslé’s point of view: what’s forcing microgrid decisions, what engineers must model first, and where projects get burned when they stop at design and never close the loop into operations.
Watch the full interview
The timeline mismatch that forces the architecture
If you want to predict whether a project ends up with onsite generation and storage, Naslé says start with the simplest question: How quickly can you get power to that geography?
In his models, “the first… most important piece is the utility piece,” and it’s often constrained: the grid is “tapped out,” “weak,” or the upgrade timeline is incompatible with what the customer needs.
The number he uses is blunt enough to repeat in a meeting: data center planners may be thinking 12–18 months; utilities may be thinking 7–12 years.
That mismatch drives a design fork early. If there’s capacity available, “that’s the quickest way… you can just plug into it.” If it isn’t, “the decision then is to… augment the onsite power system.”
What “data center microgrids” mean in this conversation
Naslé’s definition is refreshingly direct. Microgrids are onsite power systems—and at data center scale, they “get fairly complex fairly quickly.”
XENDEE’s public positioning for data centers is aligned with that framing: grid constraints and multi‑year upgrade timelines push operators toward co-located distributed energy and microgrids as part of a multi‑year strategy. Read XENDEE’s data center overview.
The part engineers should focus on isn’t the buzzword. It’s the system problem behind it: design a mix of technologies that can deliver power on your timeline, prove the numbers are credible enough for decision-makers, and operate the system in a way that matches the assumptions that justified the investment.
How it works: turning constraints + objectives into a buildable system
A common failure in microgrid conversations is jumping straight to hardware (“Should we do gas? Batteries? Fuel cells?”) without first defining the math the system has to satisfy.
Naslé frames the problem the way an engineer sets up an optimization: you give the algorithm constraints and objectives, including cost, resilience, emissions, and reliability targets like N+1/N+2 postures depending on Tier requirements.
Then the model searches the design space to find the right mix of technologies, their sizes, and how they should be orchestrated together along with the grid.
The important nuance is what he doesn’t want engineers doing: treating economics like a spreadsheet you fill once and forget. He insists on transparent lifecycle cost modeling and “dispatch-aware economics, not just static spreadsheets.”
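To make the "constraints and objectives" framing concrete, here is a minimal sizing sketch, with entirely illustrative numbers and technology choices (the unit sizes, costs, and ride-through window are assumptions, not XENDEE's model): enumerate candidate designs, keep only those that satisfy peak load with N+1 redundancy plus a battery bridge, and pick the lowest lifecycle cost.

```python
from itertools import product

# Toy sizing sketch (hypothetical numbers): pick a genset count and battery
# size that satisfy peak load with N+1 redundancy at minimum lifecycle cost.
PEAK_MW = 20.0
GENSET_MW = 5.0            # assumed unit size
GENSET_CAPEX = 4.0e6       # $ per unit (illustrative)
GENSET_OPEX_YR = 1.2e6     # $ per unit-year, fuel + maintenance (illustrative)
BATT_CAPEX_PER_MWH = 0.4e6
HORIZON_YEARS = 10

def lifecycle_cost(n_gensets, batt_mwh):
    """Capex plus horizon opex -- a stand-in for real lifecycle modeling."""
    return (n_gensets * (GENSET_CAPEX + GENSET_OPEX_YR * HORIZON_YEARS)
            + batt_mwh * BATT_CAPEX_PER_MWH)

def feasible(n_gensets, batt_mwh, ride_through_h=0.25):
    # N+1: peak load must still be covered with the largest unit out.
    firm_mw = (n_gensets - 1) * GENSET_MW
    # Battery must bridge a genset start/transfer window at peak load.
    return firm_mw >= PEAK_MW and batt_mwh >= PEAK_MW * ride_through_h

candidates = [(n, b) for n, b in product(range(1, 10), range(0, 41))
              if feasible(n, b)]
best = min(candidates, key=lambda nb: lifecycle_cost(*nb))
print(best, f"${lifecycle_cost(*best):,.0f}")
```

A production tool searches a far larger design space (technology mix, dispatch schedules, tariff interactions), but the shape of the problem is the same: constraints define feasibility, objectives rank the survivors.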
“Backup posture” is the old mindset
Data centers already have onsite power assets: generators, UPS systems, and the infrastructure to ride through outages. Naslé’s point is that these systems usually exist in a “backup posture.”
But when the grid can’t deliver, the same assets become the starting point for a different architecture: a microgrid that can provide resilience and capacity.
“If you’re already building backup systems… to ride through… not just hours but days,” he argues, then with optimized approaches to configuration and dispatch, “you can achieve… the resilience that you need and fill in that power gap that the utility has left open.”
Why AI loads raise the stakes (12 kW vs 1 kW)
Naslé draws a line between past workloads (traditional enterprise, streaming, communications) and newer AI workloads where power density and duty cycle push the infrastructure harder.
For “newer computational platforms,” he cited “12 kilowatts” versus “maybe… one kilowatt before.” Then he adds the operational constraint that engineers feel immediately: “you can’t just stop it… you gotta let it finish learning.”
That combination—higher power density plus workloads you can’t casually pause—changes the economics of downtime and the design requirements for ride-through. It also pushes planners toward architectures that can scale capacity faster than the utility queue.
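The arithmetic behind that shift is worth doing once. A quick back-of-envelope, with assumed hall size and checkpoint window (neither comes from the interview), shows how the 12 kW vs 1 kW figure compounds into both total load and ride-through energy:

```python
# Back-of-envelope (illustrative numbers): the same 1,000-rack hall at
# 12 kW/rack vs 1 kW/rack, and the ride-through energy needed to let a
# training job reach a safe stopping point instead of being killed.
racks = 1000
legacy_kw, ai_kw = 1.0, 12.0

legacy_mw = racks * legacy_kw / 1000   # 1.0 MW
ai_mw = racks * ai_kw / 1000           # 12.0 MW

checkpoint_minutes = 30                # assumed time to the next checkpoint
ride_through_mwh = ai_mw * checkpoint_minutes / 60

print(f"hall load: {legacy_mw:.0f} MW -> {ai_mw:.0f} MW")
print(f"energy to bridge one checkpoint window: {ride_through_mwh:.0f} MWh")
```

Twelve times the load means every minute of ride-through costs twelve times the stored energy, which is why "you gotta let it finish learning" is a design constraint, not a preference.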
The tradeoffs engineers can’t ignore
Naslé’s strongest points are tradeoffs—places where optimizing one variable quietly breaks another.
Grid interaction can matter, but he downplays classic demand response for data centers because they “need to be running most of the time all the time.” Instead, he emphasizes peak shaving and selling back to the grid when you have overcapacity you can monetize.
Supply chain and equipment lead times are now part of the design space. He says teams may pay more if a technology cuts lead time from 18 months to eight months, because schedule value can dominate the delta.
If the grid doesn’t have anything available and you need the site running, he argues the system can be designed off-grid now and connected later if the opportunity appears.
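The lead-time tradeoff is easy to quantify. A toy comparison, with all figures assumed for illustration, shows why a costlier technology can still win on schedule value:

```python
# Illustrative schedule-value comparison: a pricier technology that cuts
# lead time from 18 to 8 months can win if the margin from energizing
# 10 months sooner exceeds the capex premium. All figures are assumptions.
capex_cheap, lead_cheap = 50e6, 18     # $, months
capex_fast, lead_fast = 58e6, 8
revenue_per_month = 1.5e6              # margin from serving load once live

premium = capex_fast - capex_cheap
schedule_value = (lead_cheap - lead_fast) * revenue_per_month
print(f"premium ${premium/1e6:.0f}M vs schedule value ${schedule_value/1e6:.1f}M")
```

Here ten months of earlier operation is worth $15M against an $8M premium, so schedule dominates the delta, exactly the calculus Naslé describes.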
Tariffs are a risk story, not just a price story
Engineers tend to treat tariffs as inputs. Naslé treats them as volatility: when you’re using a lot of power, small numbers become big numbers quickly—and utilities can seek rate increases on a regular cycle.
His argument for onsite generation is cost certainty: with your own power system, “you know exactly how much it’s gonna cost you” for “the next 10, 15 years.”
He also flags a political constraint: ratepayers don’t want to subsidize data center load, and rules are evolving. His conclusion is practical: microgrids can reduce exposure to the politics and the whims of utilities and regulators, and you control the timeline.
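"Small numbers become big numbers" is easy to verify. A toy calculation, with assumed load, rate, and escalation (none quoted in the interview), shows how modest annual rate increases compound at data center scale:

```python
# Toy tariff-volatility math (assumed figures): at data center scale a few
# cents per kWh compounds into large exposure over a decade.
load_mw = 100
hours_yr = 8760
rate = 0.06            # $/kWh today
escalation = 0.04      # assumed annual utility rate increase

total = sum(load_mw * 1000 * hours_yr * rate * (1 + escalation) ** yr
            for yr in range(10))
flat = load_mw * 1000 * hours_yr * rate * 10
print(f"10-yr cost with 4%/yr escalation: ${total/1e6:.0f}M "
      f"(vs ${flat/1e6:.0f}M if rates held flat)")
```

A 4% annual escalation adds roughly $100M over the decade in this sketch; onsite generation converts that open-ended exposure into a known cost curve.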
Planning is not enough: closing the loop into operations
Many teams can produce a plausible feasibility study. Naslé is pushing a stricter standard: the system has to deliver in real operations, in a real-time environment.
His framing is design intent vs operational reality: “what should be built” is one product; “how it should run” is another.
In the interview, he describes an “intelligence layer” that integrates with a site’s energy management system or SCADA to operate assets and verify that promised benefits show up in production. XENDEE positions its OPERATE product as an AI-powered supervisory controller that follows the IEEE 2030.7 microgrid controller standard, delivered as a cloud-based service.
What to bring to your next microgrid modeling session
If you’re the engineer asked to make the microgrid plan real, Naslé’s interview suggests a simple checklist of inputs that decide whether the model is useful or fantasy:

- utility availability and upgrade timeline
- load now vs load growth
- reliability targets
- tariff structure and volatility
- permitted grid interactions
- equipment lead times
- an operations integration and validation plan
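One way to enforce that checklist is to make every input an explicit, required field before any optimization runs. This container is hypothetical (the field names and example values are mine, not XENDEE's), but it captures the discipline: a study that can't fill in these fields isn't ready to produce numbers.

```python
from dataclasses import dataclass, field

# Hypothetical input container for a microgrid feasibility study: forcing
# each checklist item to be stated explicitly is what separates a useful
# model from a fantasy one.
@dataclass
class MicrogridStudyInputs:
    utility_capacity_mw: float        # what the grid can deliver today
    utility_upgrade_months: int       # quoted interconnection/upgrade timeline
    load_now_mw: float
    load_growth_mw_per_yr: float
    reliability_target: str           # e.g. "N+1", "N+2"
    tariff_usd_per_kwh: float
    tariff_escalation_pct: float      # assumed volatility, not a quoted rate
    grid_interactions: list = field(default_factory=list)   # e.g. ["peak shaving"]
    lead_times_months: dict = field(default_factory=dict)   # technology -> months
    ops_validation_plan: bool = False  # is there a closed-loop operations plan?

# Example site: no grid capacity available, 7-year upgrade quote.
site = MicrogridStudyInputs(
    utility_capacity_mw=0.0, utility_upgrade_months=84,
    load_now_mw=30.0, load_growth_mw_per_yr=15.0,
    reliability_target="N+1", tariff_usd_per_kwh=0.06,
    tariff_escalation_pct=4.0,
)
print(site.reliability_target, site.utility_upgrade_months)
```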
What’s next: multi-year strategies and future firm power
Naslé repeatedly comes back to multi-year planning: build what you need to energize now, then phase what you add as load grows and constraints shift.
XENDEE has also published work on phased approaches for powering data centers that begin with distributed energy resources and preserve flexibility to add emerging technologies such as small modular reactors when regulators approve deployments.
Whether or not you buy that roadmap, the near-term engineering reality is already here: power constraints are forcing architecture decisions earlier, and microgrids are increasingly being evaluated not as backup—but as the mechanism to control schedule, cost exposure, and operational risk.