
If you believe everything you read on LinkedIn, AI is about to replace half your workforce, reinvent your go-to-market strategy, cure indecision, and do it all while writing perfect emails with zero typos. And don’t get me wrong, I’m an optimist when it comes to AI. I’ve seen firsthand how it can cut noise, speed up decisions, and automate the things we all hate doing.
But there’s one tiny, inconvenient problem no one seems to be talking about. The power gap. As in, literal electricity.
The AI Hype Train Has a Flat Tire (I know, trains don’t have tires. Just go with it.)
We’ve been here before. New tech shows up, the early adopters build cool stuff, the consultants descend with acronyms and bold predictions, and someone announces that “the spreadsheet is dead.” But this time, there’s a twist. AI isn’t just software. It’s software that eats hardware for breakfast. And that hardware eats a lot of electricity.
Let’s math!
In 2023, U.S. data centers consumed roughly 176 terawatt-hours (TWh) of electricity. For perspective, that’s about 4.4% of the entire U.S. power grid, just for data centers.
Now fast forward to projections for 2028: that number could triple to between 325 and 580 TWh, or up to 12% of the grid. Much of that spike is coming from AI workloads.
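The arithmetic above can be sanity-checked in a few lines. A minimal sketch, using only the figures already quoted: if 176 TWh is 4.4% of the grid, the 2023 grid is about 4,000 TWh, and for 580 TWh to be “only” 12% in 2028, the grid itself has to grow.

```python
# Back-of-envelope check of the grid-share figures quoted above.
# No outside data: everything here is derived from the article's own numbers.

total_2023 = 176 / 0.044   # implied total U.S. grid in 2023, ~4,000 TWh
implied_2028 = 580 / 0.12  # grid size implied by 580 TWh being 12%, ~4,833 TWh
growth = implied_2028 / total_2023 - 1

print(f"Implied 2023 grid: {total_2023:.0f} TWh")
print(f"Implied 2028 grid: {implied_2028:.0f} TWh ({growth:.0%} larger)")
```

In other words, even the “12%” framing quietly assumes the whole grid expands by roughly a fifth in five years. That expansion is itself the bottleneck this piece is about.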
This isn’t theoretical. This is what happens when you train and run massive language models, fine-tune vertical instances, and let them spin 24/7 across global operations. Every time you ask an AI to “summarize this document,” or “generate 20 variations of ad copy,” it hits a GPU in a warehouse that’s pulling more juice than your average Walmart.
And that, folks, is where the revolution slows down.
Why Power Is the New Bottleneck
We’ve been trained to think about AI limitations in terms of accuracy, bias, explainability, or hallucinations. Fair points, all of them. But the real choke point? It’s much less philosophical and way more operational. It’s the infrastructure.
Let me give you a field-level view.
Utility companies across the U.S. are now getting so many data center requests, each demanding energy on par with a small city, that they’re overwhelmed. The largest power grid in the country, PJM Interconnection, has seen data center demand spike so fast that they’re having to deny or delay new connections. Not because they don’t want to help tech companies, but because there’s no room on the grid to do it.
PJM Interconnection has raised its annual load-growth forecast from 0.8% to 2.4%, driven largely by the rapid expansion of data centers and electrification. A study by Synapse Energy Economics projects that data center electricity consumption within PJM’s territory will climb from 50 TWh in 2023 to 350 TWh by 2040, growing from 6% to 24% of PJM’s total load.
The rapid build-out of AI data centers is intensifying concerns about the U.S. electrical grid’s capacity. PJM, which covers 13 states and the District of Columbia, is under particular pressure in Virginia, home to a dense concentration of data centers. Its most recent capacity auction saw prices jump more than 800%, reflecting rising demand and shrinking supply.
So while tech leaders are busy talking about how AI will “run the company of the future,” they might want to talk to their facilities team. You can’t run LLMs without juice. And the juice is running dry.
This Isn’t Just a Tech Problem
If you’re reading this from HR, Ops, or Talent, you might be thinking, “That’s interesting, but that’s not my problem.” But yeah, it is.
If you’re responsible for implementing AI into business workflows, like recruiting, onboarding, workforce planning, scheduling, or training, then this limitation is yours, too. You’re betting on systems that assume infinite scale, when the back-end infrastructure is telling a different story.
Take employer branding or recruiting automation. AI makes it easy to analyze a hundred thousand résumés, generate job posts, screen candidates, and schedule interviews. But multiply that across industries, companies, and geographies, and the load isn’t “automated,” it’s just outsourced to a server that’s burning fossil fuel at an alarming rate.
If your vendor says their product is AI-powered, ask them two things:
- Where’s the model running?
- What’s the compute cost per transaction?
Because if they’re scaling without energy awareness, you may be setting your systems up for a very slow fail.
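If a vendor can’t answer the second question, you can still rough it out yourself. A minimal sketch, where the query volume and the watt-hours-per-query figure are illustrative placeholders (not measurements), to be replaced with whatever your vendor or cloud bill actually shows:

```python
# Rough annual energy estimate for an AI workload.
# Both inputs below are hypothetical examples, not benchmarks.

def annual_energy_kwh(queries_per_day: float, wh_per_query: float) -> float:
    """Yearly energy use in kWh, given daily volume and a per-query estimate in Wh."""
    return queries_per_day * wh_per_query * 365 / 1000

# Example: 50,000 resume screens a day at an assumed 3 Wh per query.
print(round(annual_energy_kwh(50_000, 3)))  # ~54,750 kWh/year
```

Even crude numbers like these turn “AI-powered” from a slogan into a line item you can compare against alternatives.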
The False Promise of “Fully Automated”
Here’s the other issue no one wants to say out loud.
The vision of “AI doing everything for you” isn’t just flawed because of computational limits. It’s flawed because AI systems need human input to stay useful. They drift. They degrade. They hallucinate. And they do all this faster as you scale them.
Combine that with the power demands, and you’ve got a recipe for disaster if you try to fully automate complex operations without a plan for monitoring, validation, or resource availability.
In other words:
- AI won’t run ops without power.
- Ops can’t run AI without oversight.
- No one seems to be budgeting for either one.
So, What Do You Do?
I’m not here to tell you to stop using AI. It has transformed how we research and problem-solve, and it’s too useful a tool to shed. But I am suggesting that it’s time for operational leaders to:
1. Rethink “Scale”
Ask whether your AI projects need to run everywhere all the time. Some use cases don’t need real-time answers. Some tasks don’t need to be run through a 70-billion parameter model. Simpler models, on-device compute, and scheduled batch jobs are all ways to reduce your footprint and your costs.
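The right-sizing idea above can be sketched as a simple router: default every task to a cheap model and escalate only when the task genuinely needs the big one. The task types, model names, and the set of “heavy” tasks below are all illustrative assumptions, not a real product’s API:

```python
# Minimal sketch of model right-sizing: route to a small model by default,
# escalate to the large one only for task types flagged as genuinely hard.
# Model names and task categories are hypothetical examples.

SMALL, LARGE = "small-7b", "large-70b"
NEEDS_LARGE = {"long_document_summary", "multi_step_reasoning"}

def pick_model(task_type: str) -> str:
    """Return the cheapest model adequate for the given task type."""
    return LARGE if task_type in NEEDS_LARGE else SMALL

print(pick_model("email_draft"))           # small-7b
print(pick_model("multi_step_reasoning"))  # large-70b
```

The design point is that escalation is the exception, not the default: most routine HR and ops tasks never touch the expensive model, which is exactly where the power savings come from.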
2. Audit Your AI Workflows
Just like data hygiene matters in your ATS, AI hygiene matters too. Start tracking where AI is being used, how often, and what it costs in terms of compute. Ask your vendors about sustainability practices. If they look at you like you’re speaking Dutch, start looking for new vendors.
3. Partner with IT and Facilities
This isn’t just a tech issue. This is an operational risk. Talk to your infrastructure team about what’s possible (and what’s not) in the next 2–5 years. Power, cooling, latency, and reliability all matter more than whatever the marketing deck says.
4. Build with Redundancy
What happens when the model is slow, the API limit is hit, or the power flickers? If the answer is “everything breaks,” you don’t have a resilient system. You have a house of cards with a nice interface.
5. Push for Smart Regulation
The conversation around AI regulation usually gets stuck in debates about ethics or jobs. But the infrastructure side needs just as much attention. Energy planning, clean grid investment, and better reporting requirements for cloud providers should all be on the radar of any company betting big on AI.
Gen AI isn’t a magic trick. It’s a system of pipes, wires, chips, and models that consume real energy and generate real waste. The more we ask it to do, the more we have to be honest about what it costs, and whether we’re building sustainable systems to feed the beast.
The future of work might be powered by AI. But the future of AI is going to depend on who pays the power bill. And if we’re not careful, we’ll build faster than we can sustain, and the lights might flicker out before the revolution can be televised.