Energy Future: Powering Tomorrow’s Cleaner World

Data Centers, Nukes, And The Grid

Peter Kelly-Detwiler

AI is hungry for power, but the grid’s most precious resource isn’t just generation—it’s time. We dig into a headline‑grabbing plan to build a 1,500‑megawatt nuclear‑powered data campus in South Texas and unpack why the near‑term reality starts with on‑site gas, not fission. From regulatory approvals and factory build‑outs to fuel and financing, modular nuclear still has miles to go. Meanwhile, data center builders face a thicket of interconnection queues, uneven utility processes, and hardware lead times that stretch to 2030.

We trace the practical playbook emerging across the industry: bridge with co‑located turbines or fuel cells, pursue grid interconnection in parallel, and design for redundancy because machines fail and maintenance windows are inevitable. Even markets famous for speed are hitting constraints. Transmission megaprojects take decades, demand requests are swallowing remaining capacity, and rate pressures are pulling energy costs into the political spotlight. Against that backdrop, the smartest lever may be operational, not infrastructural.

Here’s the pivot: flexible data center loads. With better workload orchestration, curtailment commitments, and virtual power plant contracts, large campuses can shed up to 25% for multi‑hour windows, buy capacity from aggregators, or island temporarily using their own generators. Grid operators are responding in kind, offering accelerated interconnection paths for customers willing to flex, while policy signals in places like Texas clarify that curtailment is part of the deal. The payoff is systemwide—higher load factors, more megawatt hours across the same wires, and a faster route to growth than waiting for the next 765‑kV line.

If you care about how AI, cloud infrastructure, and energy policy collide, this conversation connects the dots between nuclear timelines, gas‑first strategies, interconnection reform, and the rise of demand flexibility and VPPs. Subscribe, share with a colleague who builds data infrastructure, and leave a review with your take on the best path: build more generation, or bend the load?

Support the show

🎙️ About Energy Future: Powering Tomorrow’s Cleaner World

Hosted by Peter Kelly-Detwiler, Energy Future explores the trends, technologies, and policies driving the global clean-energy transition — from the U.S. grid and renewable markets to advanced nuclear, fusion, and EV innovation.

💡 Stay Connected
Subscribe wherever you listen — including Spotify, Apple Podcasts, Amazon Music, and YouTube.

🌎 Learn More
Visit peterkellydetwiler.com for weekly market insights, in-depth articles, and energy analysis.

Peter Kelly-Detwiler:

Hi, I'm going to try something new with my weekly video. Rather than providing you with a light touch on a number of stories, I will instead dig in a little deeper on a particular topic, providing you with a bit more context and analysis. I'll be curious to see what you think, and we'll see where that takes us.

So for the first one: last week, AI infrastructure provider Crusoe and modular nuclear company Blue Energy announced a new partnership focused on developing and operating a 1,500-megawatt nuclear-powered data center campus at the Port of Victoria in South Texas. As observers in the space know, it'll be a while before we see the first modular nukes commercialized, as there are still numerous hurdles to overcome. First, designs must be approved and tested. Then the factories that make these modular facilities must be built, and they'll have to achieve economies of scale, producing significant volumes to drive marginal unit costs down. That in turn implies that of the dozen-plus companies in the space right now, most will have to fail. Then, of course, there are the financing challenges, fuel supply issues, associated security concerns, and the NIMBY issue. If people are fighting wind, solar, and storage in their backyards, just imagine how they will respond in many parts of this country to nuclear.

But let's take the leap of faith and assume that Crusoe and Blue Energy can get to yes on this. What will they do to power the data facilities in the interim? Well, they'll look to on-site gas generation, of course. The two companies position their approach as the world's first gas-to-nuclear conversion, with the transition to nuclear generation anticipated by 2031.

The concept is an interesting variation on another approach that's increasingly being embraced as it becomes clear that, A, many data centers are chasing the same megawatts of grid power, and B, the process of grid interconnection is taking a long time. Today, the interconnection process also differs across the country and may vary widely from one utility to the next. I had a recent conversation with an unnamed developer, and he commented that from what he sees out there in the landscape, there is no one-size-fits-all approach, and some utilities are remarkably ill-prepared for the requests they are fielding. This individual was thus looking forward to seeing the ultimate outcome of the October 23rd letter from Secretary of Energy Chris Wright to the Federal Energy Regulatory Commission, directing FERC to develop an interconnection rulemaking for data centers and large loads. A standardized process, if it's a reasonable one that generally protects all actors, including ratepayers, couldn't hurt, though it's a big lift.

Faced with these difficulties and the enormous value to data centers of quickly accessing electrons, other developers are also turning to bridging strategies. But rather than looking to nukes as their end game, they're eyeing the idea of co-located fuel cells or turbines to spin up their facilities even as they pursue longer-term interconnection to the power grid. That raises the question: why not just skip the grid entirely and go with on-site co-located power for the long run? Well, the answer is that plants break down on occasion. And even when those generators don't fail, they still need to go out for maintenance. GE Vernova, for example, notes that its turbines typically require maintenance every one to three years, depending upon how they're operated.
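To put rough numbers on that maintenance point, here's a back-of-envelope sketch in Python. The outage duration and the forced-outage rate are illustrative assumptions, not figures from the episode or from GE Vernova:

```python
# Back-of-envelope availability math for a single on-site turbine.
# The outage figures below are illustrative assumptions, not numbers
# from the episode or from GE Vernova.

HOURS_PER_YEAR = 8760

# Assume a three-week planned maintenance outage every two years,
# consistent with a one-to-three-year maintenance interval.
planned_outage_hrs_per_year = (3 * 7 * 24) / 2  # 252 h/yr

# Assume a ~2% forced-outage rate for unplanned failures.
forced_outage_hrs_per_year = 0.02 * HOURS_PER_YEAR  # ~175 h/yr

downtime = planned_outage_hrs_per_year + forced_outage_hrs_per_year
availability = 1 - downtime / HOURS_PER_YEAR

print(f"Estimated downtime: {downtime:.0f} hours/year")
print(f"Single-unit availability: {availability:.1%}")
# Roughly 430 hours/year of downtime (about 95% availability): weeks
# of dark servers annually, which is why off-grid sites carry fleets
# of smaller units rather than relying on one big machine.
```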
So if you're a data center owner or operator, what do you do when your generation plant is down? If you're not grid-connected, you do nothing, unless you have other backup generation. Data facilities not tied to the grid may have to carry considerable reserve margins of their own. Take, for example, the Meta subsidiary that received approval in June for a 200-megawatt data center in Ohio. It's not being supported by a single generator, but rather by a fleet of about 30 machines, including smaller combustion turbines and quick-start reciprocating engines. The total capacity is about 320 megawatts of generation, presumably to address that outage and maintenance issue.

The other challenge for direct supply is generator lead times. If you haven't ordered generators yet, get in line. And you can expect to both wait and pay, as the turbine manufacturers now require deposits for deliveries as far out as 2030. So for many, direct supply won't be easy. Thus, most developers in the end will likely try to connect to the grid for years to come.

That process will start with those already connected or well into the interconnection queues, but those applicants will rapidly fill up the existing transmission capabilities of today's grid. Then we'll hit a roadblock, since new transmission projects take longer than designing a new rocket and getting it into space. Take, for example, the 3.5-gigawatt SunZia project from New Mexico to Arizona. That endeavor took 17 years to permit and complete, with review by 10 federal agencies. The Grain Belt Express from Kansas to Indiana, the one that just lost its federal loan guarantee, has been in the works for 15 years. By contrast, SpaceX started on the design of its Falcon 1 rocket right after its founding in 2002, with the first launch attempt in 2006, successfully reaching orbit in 2008. That's less than half the time it takes to get these transmission projects built.

Texas can build faster than most, since most of its lines can and will be built without crossing state jurisdictions. But even in Texas, there are limits. That grid peaked at 85,500 megawatts of demand in the summer of 2023, but it now has over 200,000 megawatts of large loads, mostly data centers, seeking interconnection. The grid is eyeing a transmission upgrade, including nearly 2,500 miles of 765-kV lines, the highest-voltage lines currently employed in the U.S., at a cost of over $32 billion. Those lines are intended to meet a 2030 demand forecast of 150,000 megawatts, so perhaps 65,000 megawatts above recent peaks. If large loads need much new capacity beyond that threshold, they won't get it from the grid right away, at least if they expect to have their demand met during system peaks. And that transmission project won't be built out for years.

But here's where today's inefficient grid works in everybody's favor. Right now, the grid operates at roughly a 53% load factor, which means it's overbuilt to serve system peak plus reserve margins, and it's highly underutilized. But what if new data loads could exhibit more flexibility and reduce consumption during those periods of peak demand? A Duke University Nicholas Institute study suggests that with flexibility, one could interconnect tens of thousands of additional megawatts. This could be done by changing the way data centers operate. And chipmaker NVIDIA just announced that with software vendor Emerald AI, its data centers may be able to demonstrate more flexibility than has previously been supposed.
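For a quick sense of what that 53% load factor implies, here's a minimal arithmetic sketch; the load-factor figure and the ERCOT numbers come from the discussion above, and the rest is simple arithmetic:

```python
# What a ~53% load factor means for the existing grid.
# Load factor = average load / peak load over a period.

load_factor = 0.53

# At a 53% load factor, the wires carry only 53% of the energy they
# could move if loaded at their peak around the clock. Flattening
# the load curve toward 100% would nearly double throughput on the
# same infrastructure.
potential_energy_multiple = 1 / load_factor  # ~1.9x

# ERCOT arithmetic from the episode: 2030 forecast vs. 2023 peak.
forecast_2030_mw = 150_000
peak_2023_mw = 85_500
planned_headroom_mw = forecast_2030_mw - peak_2023_mw  # ~64,500 MW

print(f"Same wires could carry ~{potential_energy_multiple:.1f}x "
      f"the energy at a 100% load factor")
print(f"Planned headroom above the 2023 peak: ~{planned_headroom_mw:,} MW")
```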
A new project going into Virginia called Aurora will be programmed to do just that, and tests have shown the ability to reduce demand by up to 25% for as long as three hours.

Or data center companies could bring their own generation, so if the grid operator calls on the data center, they can disconnect from the grid and temporarily self-generate. Southwest Power Pool is planning to accelerate its interconnection process to just 90 days if data centers can commit to being curtailed. And Texas Senate Bill 6 also looks at curtailment, explicitly telling data centers that ERCOT has the right to a kill switch it can employ if it needs to, and those data centers will have to accept that fact if they want to be interconnected.

Or perhaps data centers could buy capacity from somebody else who can provide it more cost-effectively. That's what demand response provider Voltus is offering: a virtual power plant arbitrage opportunity. If a data center needs 500 megawatts of capacity, it could buy that from somebody else who could shed that load when it's required. Other virtual power plant developers are nurturing the same concept.

So flexibility, whether directly in the data center load or bought from other customers, can help. It can push down the peaks and push up the shoulders. That will increase load factors so we can flow more megawatt-hours across the same infrastructure. And with transmission and distribution costs increasing even faster in recent years than generation costs, this approach seems logical, necessary, and urgent, especially since utility rates are increasingly in the crosshairs of the political conversation. Bridging strategies and nukes may help some actors such as Crusoe, but for many others, flexible load may finally have its day in the sun.
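As a closing illustration, here's a minimal peak-shaving simulation of the flexibility idea. The 25%-for-three-hours capability is from the Aurora tests mentioned above; the 500-megawatt campus size and the hourly load shape are hypothetical:

```python
# Peak-shaving sketch: curtail a campus 25% for a three-hour peak
# window and observe the effect on peak demand and load factor.
# The 25%/three-hour capability is from the Aurora tests described
# above; the 500 MW campus and hourly shape are hypothetical.

BASE_MW = 500
PEAK_HOURS = (15, 16, 17)  # assumed three-hour afternoon peak

# Flat compute load plus a 20% bump during the peak window.
profile = [BASE_MW * 1.2 if h in PEAK_HOURS else BASE_MW
           for h in range(24)]

def load_factor(series):
    """Average load divided by peak load."""
    return sum(series) / (len(series) * max(series))

# Apply the 25% curtailment during the peak window.
shaved = [mw * 0.75 if h in PEAK_HOURS else mw
          for h, mw in enumerate(profile)]

print(f"Peak: {max(profile):.0f} MW -> {max(shaved):.0f} MW")
print(f"Load factor: {load_factor(profile):.2f} -> {load_factor(shaved):.2f}")
# The 100 MW of freed peak capacity is room the grid can allocate to
# new load on the same wires: pushing down the peaks and pushing up
# the shoulders, exactly the load-factor play described above.
```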