Data Center Operators Focus on Meeting Huge Power Needs for AI
- Pete Harris, Principal, Lighthouse Partners, Inc.
- Apr 4
- 6 min read

The past few months have seen a steady stream of news from big tech companies, consortia and even nation states related to planned investments in massive data center projects to support artificial intelligence (AI) workloads. And while these announcements have focused on the business potential and magnitude of these projects, increasingly they have also noted how they would be powered.
That’s because the energy to run these ever-larger AI data centers – for both compute servers and the cooling systems needed to keep them running – is significant, costly, and often a blocker to opening a data center in the desired location. It makes sense, then, that when data center operators evaluate existing and planned facilities, their RFPs lead with power requirements – stated in hundreds of megawatts or even multiple gigawatts – rather than with square footage to accommodate servers.
The search for power for the world’s AI future – preferably carbon free but definitely cheap and easy to access – is driving much innovation in approaches to addressing the power challenge, which is the focus of this blog post.
Let’s start with a project called Stargate. It’s an initiative – announced by President Trump on only his second day in office – to develop data centers designed to make the U.S. a world leader in AI. Involving OpenAI, Oracle and Japan’s Softbank, it will require as much as $500 billion over the next four years, and a good chunk of that investment could end up in the bank account of San Francisco-based Crusoe, a little-known, one-time bitcoin miner that transformed itself into an AI-focused data center builder and cloud startup.
Crusoe’s first data center for Stargate got started near Abilene, TX in June 2024 when the project was purely an OpenAI/Microsoft affair. Since then, OpenAI’s relationship with Microsoft has significantly cooled, not least because the latter was not able to scale up its Azure cloud infrastructure fast enough for OpenAI’s needs. That resulted in OpenAI downsizing Microsoft’s role as it looked to new partners for what became something of a rebirth of Stargate.
Crusoe has proven to be a nimble company to work with. The first phase of the Abilene data center – within the new Lancium Clean Campus – is set to open in the middle of this year, featuring 980,000 square feet of server space in two buildings and providing 200+ megawatts of power (more than double what the entire city of Abilene currently uses). Each building can accommodate up to 50,000 Nvidia Blackwell GPUs, housed in liquid-cooled GB200 NVL72 racks; each rack holds 72 Blackwell GPUs and includes a 130 terabyte/second NVLink interconnect that allows the rack to act as a single, massive GPU.
The second phase of the project, expected to be completed in mid-2026, includes six additional buildings, for a total of approximately 4 million square feet, and a total power capacity of 1.2 gigawatts (a gigawatt can power around 750,000 homes).
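The figures above invite some back-of-the-envelope checking. A minimal sketch in Python, using the article's power totals plus one outside assumption (a GB200 NVL72 rack drawing roughly 120 kW, a commonly cited ballpark that is not from this post, and which ignores cooling and networking overhead):

```python
# Rough capacity arithmetic for the Abilene build-out described above.

PHASE1_MW = 200          # phase 1: two buildings, 200+ MW (article figure)
FULL_BUILD_MW = 1200     # full build-out: 1.2 GW across eight buildings
HOMES_PER_GW = 750_000   # the article's rule of thumb for a gigawatt

GPUS_PER_RACK = 72       # GB200 NVL72: 72 Blackwell GPUs per rack
RACK_KW = 120            # assumed per-rack draw (ballpark, cooling excluded)

# How many NVL72 racks could phase 1's 200 MW feed, ignoring overhead?
racks = (PHASE1_MW * 1000) // RACK_KW
gpus = racks * GPUS_PER_RACK

homes_equivalent = FULL_BUILD_MW / 1000 * HOMES_PER_GW

print(f"Phase 1 could feed ~{racks} racks (~{gpus} GPUs) at {RACK_KW} kW/rack")
print(f"Full 1.2 GW build-out ≈ power for {homes_equivalent:,.0f} homes")
```

Even under these generous assumptions, 200 MW feeds on the order of 1,600 racks, which is why power, not floor space, is the binding constraint.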

Lancium, headquartered in The Woodlands, TX, is an energy technology and infrastructure company – and Crusoe partner – that specializes in green energy provisioning services for hyperscale AI data centers. While the Abilene data center will draw power from the main Texas energy grid, there are plans to also tap into power from renewable sources, including surrounding wind farms, and potentially to build out a large-scale onsite solar installation. Lancium brings the expertise to combine and optimize these different power sources, as well as energy storage, and to manage the AI demand across them.
For future AI data centers, whether they be for Stargate or other projects, Crusoe is looking to tap into power generated by potentially green natural gas turbines that would be located next to the data center facilities. The company has formed a joint venture with an investment firm called Engine No. 1, which in partnership with Chevron has secured seven turbines from GE Vernova. Combined, they can provide 4.5 gigawatts of power. Yet to be decided is who the buyers will be, and whether the turbines will end up split across customers or powering a single massive data center. Stargate alone might need eight gigawatts to run its data centers by 2030.
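To put the turbine deal in perspective, a quick sketch dividing the quoted totals (all figures are from this post; the turbine count per site is illustrative arithmetic, not a reported plan):

```python
# Rough arithmetic on the GE Vernova turbine deal described above.
import math

TURBINES = 7
TOTAL_GW = 4.5
avg_gw = TOTAL_GW / TURBINES  # average output per turbine

# If Stargate really needs 8 GW by 2030, how many similar turbines
# would that imply?
STARGATE_GW = 8
needed = math.ceil(STARGATE_GW / avg_gw)

print(f"~{avg_gw:.2f} GW per turbine; ~{needed} turbines to cover 8 GW")
```

In other words, the seven turbines already secured would cover only a little over half of Stargate's projected 2030 demand.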
Crusoe won’t be the first AI data center builder to make use of gas turbines. Around 20 from Siemens Energy are already powering Elon Musk’s xAI data center in Memphis, TN, where the company’s Colossus supercomputer powers the Grok LLM chatbot. Word is that more will be added in coming months because the Memphis grid cannot supply enough power for the data center’s expansion plans.
Another company that is looking to build data centers for its new AI services is Meta. One report suggests that the company was surprised by the speed at which xAI built its Memphis facility and is accelerating its plans, even considering spending up to $200 billion on a huge data center to be built in Louisiana, Wyoming or Texas that might consume up to seven gigawatts by 2030. In the meantime, Meta is planning smaller green data centers in other regions, including one in northern Singapore that will have a 150 megawatt capacity. It will be powered by a solar array that – given Singapore’s land limitations – is floating on the Kranji Reservoir. Floating solar is a popular approach in Asia and is likely to be used to support future data center builds there.

An emerging green (though more controversial) energy route for data centers is the nuclear option. While some data center operators are considering tapping into existing power plants for near-term needs, most are also looking forward to commercial availability of Small Modular Reactor (SMR) technology, which is probably about five years off in the West (China and possibly Russia are close to commercial availability with their projects). When available, SMRs will be small enough to truck to data center sites, where they can be installed next to the server suites. Each SMR should produce up to 80 megawatts of power, and multiple units can be installed to scale up to the required output.
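The scale-out model described above is simple unit arithmetic. A minimal sketch using the ~80 MW-per-unit figure from this post (the example site demands reuse the Abilene numbers from earlier; they are illustrative, not a reported SMR plan):

```python
# Sketch of SMR scale-out arithmetic, per the ~80 MW-per-unit figure above.
import math

SMR_MW = 80  # assumed output of a single Small Modular Reactor

def smrs_needed(site_demand_mw: float) -> int:
    """Whole SMR units required to cover a site's power demand."""
    return math.ceil(site_demand_mw / SMR_MW)

# Examples using the Abilene figures quoted earlier in the post:
print(smrs_needed(200))   # 200 MW phase-1 site
print(smrs_needed(1200))  # 1.2 GW full campus
```

Three units would cover a 200 MW site, and fifteen a 1.2 GW campus, which is why operators find the modular, incremental deployment model attractive.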
Amazon, Google, Microsoft, and Meta have all made early positioning moves regarding nuclear power. Amazon led a $500 million financing round for X-energy, which is developing an SMR. Likewise, Google is backing SMR developer Kairos Power to supply several reactors producing 500 megawatts in total.

Meanwhile, Microsoft is working with Constellation Energy to restart the Unit 1 nuclear reactor at Three Mile Island in Pennsylvania. It was closed in 2019 for economic reasons – a far better outcome than befell Unit 2, which suffered a partial meltdown in 1979 that is still considered the worst nuclear accident in U.S. history. When Unit 1 does come back online in 2028, it is expected to produce 835 megawatts of power for a local grid operator, from which Microsoft will buy enough power for its needs.
Meta is playing catch-up to its rivals in that it has only recently issued an RFP to selected parties for up to four gigawatts of nuclear power – either from SMRs or larger reactors. The company advertised the RFP alongside several sustainability initiatives that it is already taking for its facilities, such as sourcing green energy and implementing carbon capture mechanisms.
As AI developers continue to invest in compute-intensive training to improve model accuracy, and as an increasing number of users adopt AI services – OpenAI recently added a million users in a single hour when it released image generation features via ChatGPT – delivery of those services will continue to require more data center infrastructure, and the power to run it.
It’s yet to be determined whether a recent focus on greater efficiency of AI software and systems design – as highlighted by projects like DeepSeek – will dampen the demand for raw compute cycles to fuel these AI trends. Much might depend on whether increasingly popular AI coding assistants are developed to address efficiency and runtime optimization when creating code.
It’s also too early to tell whether decentralized AI initiatives, and Decentralized Physical Infrastructure Network (DePIN) approaches to power them, will have any significant impact on the demand for services from the IT heavyweights and the centralized data center installations that they are deploying. While DePIN is already being pitched at AI use cases by the likes of Akash, PAI3 and Sahara AI, it will likely be some time before implementations scale sufficiently to garner the interest of AI enterprises.
For the next few years then, the cutting edge for enterprise innovation in AI is likely to be characterized by an increasing number of ever bigger nuclear powered hyperscale GPU farms. What could possibly go wrong?