Energy Future: Powering Tomorrow’s Cleaner World
"Energy Future: Powering Tomorrow's Cleaner World" invites listeners on a journey through the dynamic realm of energy transformation and sustainability. Delve into the latest innovations, trends, and challenges reshaping the global energy landscape as we strive for a cleaner, more sustainable tomorrow. From renewable energy sources like solar and wind to cutting-edge technologies such as energy storage and smart grids, this podcast explores the diverse pathways toward a greener future. Join industry experts, thought leaders, and advocates as they share insights, perspectives, and strategies driving the transition to a more sustainable energy paradigm. Whether discussing policy initiatives, technological advancements, or community-driven efforts, this podcast illuminates the opportunities and complexities of powering a cleaner, brighter world for future generations. Tune in to discover how we can collectively shape the energy future and pave the way for a cleaner, more sustainable world.
Nvidia's 100 GW Promise: Can Flexible AI Data Centers Fix the Grid?
In this week's energy market update, we explore a major announcement from leading AI chipmaker NVIDIA, software company Emerald AI, and major energy players like Constellation to power a new class of "flexible AI factories." By utilizing NVIDIA's latest Vera Rubin chip and Emerald AI's Conductor platform to modulate compute demand in real time, NVIDIA claims this approach could unlock up to 100 gigawatts of capacity across the US power system.
With the US grid staring at expected peak demands that existing infrastructure simply cannot accommodate in the next three to five years, flexibility is becoming critical. For energy professionals tracking massive load growth, this video unpacks what this flexible architecture actually means for the grid:
The Grid Bottleneck & Soaring Costs: Why adding inflexible data centers pushes up peak demand and exacerbates supply scarcity. We look at PJM's capacity market, where prices have soared seven- to eightfold, costing ratepayers an estimated $23 billion over the last three auctions.
The Economic Power of Flexibility: How modulating compute loads during grid scarcity could allow massive new demand to connect without requiring billions in new infrastructure. We highlight recent Duke University studies suggesting that avoiding just 1% to 2% of peak hours could reduce utilities' new natural gas construction costs by 10 to 15%.
Real-World Testing: A look at the limited empirical data we have so far, including a peer-reviewed test at an Emerald AI data center in Arizona that successfully reduced power consumption by 25% during peak hours. We also discuss Google's recent milestone of surpassing 1 gigawatt of data center demand response.
Regulatory Skepticism & Risk: Why PJM's Independent Market Monitor (IMM) is pushing back hard against treating data centers as paid demand response assets. We discuss the immense financial risk to ratepayers if a data center fails to curtail power during an emergency, and the argument that flexibility should simply be a mandatory precondition for interconnection.
While the economic incentives and technical concepts are promising, the industry still needs to prove that this combination of silicon and electrons can be predictably and repeatedly flexible at scale. Join us as we unpack the 100 GW claim and discuss why significant caution is still warranted.
🎙️ About Energy Future: Powering Tomorrow’s Cleaner World
Hosted by Peter Kelly-Detwiler, Energy Future explores the trends, technologies, and policies driving the global clean-energy transition — from the U.S. grid and renewable markets to advanced nuclear, fusion, and EV innovation.
💡 Stay Connected
Subscribe wherever you listen — including Spotify, Apple Podcasts, Amazon Music, and YouTube.
🌎 Learn More
Visit peterkellydetwiler.com for weekly market insights, in-depth articles, and energy analysis.
The 100 Gigawatt Claim Explained
Why Grid Flexibility Matters Now
Research And Early Flex Results
Incentives, Risks, And Market Rules
What We Still Do Not Know
This week, leading AI chipmaker NVIDIA and software company Emerald AI announced they'd be working with AES, Constellation, Invenergy, NextEra Energy, Nscale Energy and Power, and Vistra to, quote, "power and advance a new class of AI factories," unquote. These data centers are intended to connect to the grid faster and operate as flexible energy assets that can support the grid. A press release from NVIDIA indicated that the parties would use a new reference design utilizing NVIDIA's latest Vera Rubin chip and its DSX software to help manage power use in real time, modulating demand and coordinating flexible load.

Where speed to market is a must, NVIDIA envisions factories using on-site, co-located generation and storage as a bridge before they eventually connect to the grid. At that point, the on-site assets would be used to flexibly support the grid. The company also notes that its DSX reference architecture can support flexible AI data centers and, by virtue of that flexibility, help them connect more quickly. So where does Emerald AI come into play? Its Conductor platform is intended to orchestrate computational flexibility, which would be combined with on-site resources to deliver that power flexibility to the grid. As NVIDIA notes, in addition to facilitating faster grid interconnections, the approach could reduce the need for infrastructure sized to meet demand peaks. The other companies listed, AES, Constellation, Invenergy, and so on, will build the related energy infrastructure.

Power-flexible AI factories, NVIDIA claims, can help unlock up to 100 gigawatts of capacity across the U.S. power system. That's a big claim, so let's unpack this thing and put it in perspective. Per the Energy Information Administration, the U.S. hit an all-time peak of 759,000 megawatts (759 gigawatts) last July, and it has about 1,300 gigawatts of installed generating capacity. In that context, a 100-gigawatt claim is a pretty heady number.
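To put those numbers side by side, here's a quick back-of-envelope sketch using only the figures cited above (the 759 GW peak, the roughly 1,300 GW of installed capacity, and NVIDIA's 100 GW claim):

```python
# Back-of-envelope context for NVIDIA's 100 GW claim,
# using the EIA figures cited above (all values in gigawatts).
US_PEAK_GW = 759        # all-time US peak demand, last July (per EIA)
US_INSTALLED_GW = 1300  # approximate installed US generating capacity
CLAIM_GW = 100          # capacity NVIDIA says flexible AI factories could unlock

print(f"Claim as share of peak demand:        {CLAIM_GW / US_PEAK_GW:.1%}")
print(f"Claim as share of installed capacity: {CLAIM_GW / US_INSTALLED_GW:.1%}")
```

In other words, the claim amounts to roughly 13% of the all-time US peak, which is why it deserves close scrutiny.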
Why is data center flexibility so important? First, there's limited transmission and supply available on the grid. In our house, when someone is overcommitted, we say they're trying to put six pounds of potatoes in a five-pound bag. In the case of the power grid, it's more like seven or eight pounds. Some utilities are staring at multiples of existing load, and there's no way the existing grid's infrastructure, or what can be built in the next three to five years, will accommodate the expected peak demands.

Then there's the fact that the grid is poorly utilized. Its estimated load factor, the percentage of energy we actually use versus what we could use if we ran the grid at 100%, 24-7, is only about 60%. In the power grid until now, everybody has gotten the capacity they needed at a relatively affordable price, and nobody gets turned away. If we ran an airline that way, anybody who wanted a flight could show up at the ticket counter whenever they wanted, even three days before Thanksgiving, and hop on a plane to anywhere in the US, probably paying less than $300. The planes would be full for a few days a year, but much of the time we'd be able to stretch out across three seats and sleep, until the airline went out of business. You can't run many businesses the way we run the grid, not with those inefficiencies.

So if you build new data centers and they're not flexible, they push up that peak demand and you have to keep building new infrastructure, until you run into the fact that you simply can't build power plants and transmission lines fast enough, and supply capacity gets really expensive. Exhibit A is PJM's capacity market, where prices have soared seven- to eightfold over average historical numbers, with existing and forecasted data loads costing ratepayers an estimated $23 billion over the past three auctions. The amount of new demand that can be connected is limited.
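The load-factor idea above reduces to a one-line calculation. A minimal sketch, using an illustrative average load chosen so the result lands near the ~60% utilization mentioned (the specific average figure is an assumption for illustration, not from the episode):

```python
# Load factor: energy actually delivered divided by the energy that would
# flow if the system ran at its peak 24/7. Over any fixed period this
# simplifies to average load divided by peak load.
def load_factor(avg_load_gw: float, peak_load_gw: float) -> float:
    """Return grid utilization as a fraction between 0 and 1."""
    return avg_load_gw / peak_load_gw

# Hypothetical: a system averaging ~455 GW against a 759 GW peak
# comes out at roughly the ~60% utilization discussed above.
print(f"Load factor: {load_factor(455, 759):.0%}")
```

The airline analogy is the same arithmetic: seats flown full divided by seats flown total.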
But if you create more flexibility during these periods of grid scarcity, both by modulating compute loads and tapping on-site supply assets, you can add more demand to the grid without creating the need to build all that new infrastructure. That approach also flows more gigawatt-hours across the same grid, creating economic efficiencies and lowering per-unit delivery prices.

Two recent Duke University Nicholas Institute studies on flexibility suggest that these types of operations can greatly increase the ability to add load and yield economic efficiencies. The first study, from 2025, calculated that 76 gigawatts of new load could be integrated with an average annual curtailment rate of 0.25%, that is, reducing demand for 0.25% of maximum uptime, and that 98 gigawatts of new load could be integrated at a curtailment rate of 0.5%. A second, 2026 study commented that flexible operation can feasibly reduce system costs by tens of billions of dollars over the next decade and lower electricity prices for all participants. Avoiding just one or two percent of peak hours could reduce utilities' new natural gas combined-cycle construction costs by 10 to 15%.

So conceptually, there's a there there, but the available information doesn't really tell us all that much yet. We don't, for example, know how flexible the various compute functions NVIDIA talks about will be. We don't know it for the large language model training runs that often go on for weeks and months as they build out their neural networks and virtual understanding of our world. Nor do we know the inherent flexibility of the growing inference function, where these models perform on demand to undertake the tasks we put in front of them. We have limited empirical data with which to make our assumptions.
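Those curtailment rates sound abstract until you convert them into hours. A quick sketch, using the rates and load figures from the Duke studies as cited above:

```python
# Translate the Duke Nicholas Institute curtailment rates into hours per
# year, to show how little downtime the cited load additions require.
HOURS_PER_YEAR = 8760

# (annual curtailment rate, new load integrable at that rate, in GW)
scenarios = [(0.0025, 76), (0.005, 98)]

for rate, new_load_gw in scenarios:
    hours = rate * HOURS_PER_YEAR
    print(f"{rate:.2%} curtailment ≈ {hours:.0f} hours/year "
          f"to integrate ~{new_load_gw} GW of new load")
```

A 0.25% curtailment rate works out to roughly 22 hours a year, which is why the studies' numbers are so striking.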
For example, we know about an Emerald AI data center in Arizona that is part of the Electric Power Research Institute's (EPRI's) DCFlex program, which aims to understand these issues and impart that knowledge to grid operators and utilities. In a peer-reviewed test scenario last July, that data center reduced power consumption by 25% during three hours of peak grid demand while maintaining AI compute quality, orchestrating clusters of NVIDIA GPUs in real time on a job-by-job basis to achieve those results. And as of late March 2026, Emerald AI confirmed it had successfully demonstrated power flexibility capabilities at five different commercial data centers around the world. The company also announced an initiative with PJM Interconnection, Digital Realty, NVIDIA, and EPRI, in which it will deploy its software at NVIDIA's 96-megawatt Aurora AI factory in Virginia, to be commissioned by mid-2026. But again, the actual performance numbers are limited, so we don't really know how much flexibility grid planners can count on.

Google announced just this past week that it has surpassed one gigawatt of data center demand response with its utility partners. But again, the press release is maddeningly scarce on the details that matter, such as what percentage of load that constitutes. If it's 25 or 30%, it's a big deal. If it's 5%, please excuse me if I yawn.

One other important issue: Bitcoin miners in ERCOT, for example, got paid millions for participating in demand response programs. During an August 2023 heat wave, ERCOT paid Riot almost $32 million for cutting demand, but that was something it probably would have done anyway in response to high prices. That kind of giveaway costs other ratepayers and benefits nobody but the participating load. If the AI data centers are so anxious to connect to the grid with a goal of making billions, then flexibility and the ability to curtail should be a sine qua non. There's also planning risk.
PJM's Independent Market Monitor, the IMM, has thus far pushed back hard on the idea of data centers being treated as reliable demand response assets, particularly since PJM doesn't have the capability to enforce precise real-time load curtailments for individual data centers, and in its view the risk is too large. If a small percentage, say 10%, of forecasted data center loads don't curtail power during a grid emergency, the resulting capacity shortfall could cost other ratepayers billions. In its recent annual State of the Market report, the IMM commented that the market solution is to make data center load bring its own new generation, which could then speed up interconnection. Absent that, it says, PJM should require new data centers that don't bring their own generation to be curtailable before other existing demand-side customers, and they shouldn't be paid for doing so. This should simply be a precondition for the ability to interconnect.

So to summarize: the NVIDIA-Emerald AI energy supplier announcement concerning flexible load is interesting, and so is the one from Google. The economic incentive and technical potential may be there, but we don't know how it will work, at what scale, or for how long. And until that combination of silicon and electrons proves to be predictably and repeatedly flexible, significant skepticism and caution are warranted. Well, thanks for watching, and we'll see you again soon.