
The AI Revolution Is Thirsty — How Much Water Does ChatGPT Really Use?
- Vikram Singh Shahi
- Technology, Environment, AI
- February 13, 2026
One prompt. One tiny drop of water. At least, that’s how Sam Altman has framed it: a single ChatGPT interaction might use about 1/15 of a teaspoon of water. Multiply that by a billion messages a day to ChatGPT alone — plus Gemini, Claude, DeepSeek, and the rest — and the picture changes fast.
The AI revolution is thirsty. And the real cost isn’t just in cooling servers; it’s in electricity generation, chip manufacturing, and local water supplies that communities depend on. Here’s how much water AI actually uses, why it matters, and what’s at stake for society.
How Much Water Does AI Really Use?
As reported by BBC News, Sam Altman’s “1/15 of a teaspoon per interaction” figure is striking, but experts are cautious: without more detail from OpenAI, it’s hard to verify. Some researchers put the number much higher: for a medium-sized large language model, 10 to 50 queries could consume roughly 500 ml of water once you include cooling and the water needed to generate the electricity that runs the servers.
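To see how far apart these estimates really are, here is a rough back-of-envelope sketch. The billion-messages-a-day figure and both per-query estimates come from the article itself; the teaspoon-to-millilitre conversion is standard, and the exact daily query volume is, of course, an assumption.

```python
# Back-of-envelope comparison of the two per-query water estimates.
# Query volume and per-query figures are taken from the article;
# the teaspoon conversion is the standard US value.

TEASPOON_ML = 4.93          # one US teaspoon in millilitres
QUERIES_PER_DAY = 1e9       # ~1 billion ChatGPT messages per day (assumed)

# Altman's framing: 1/15 of a teaspoon per interaction
altman_ml_per_query = TEASPOON_ML / 15                       # ~0.33 ml
altman_litres_per_day = altman_ml_per_query * QUERIES_PER_DAY / 1000

# Researchers' estimate: ~500 ml per 10-50 queries (cooling + power)
low_ml_per_query = 500 / 50                                  # 10 ml
high_ml_per_query = 500 / 10                                 # 50 ml
research_low = low_ml_per_query * QUERIES_PER_DAY / 1000
research_high = high_ml_per_query * QUERIES_PER_DAY / 1000

print(f"Altman estimate:   ~{altman_litres_per_day:,.0f} litres/day")
print(f"Research estimate: ~{research_low:,.0f} to {research_high:,.0f} litres/day")
```

Even under these crude assumptions, the researchers’ figure is roughly 30 to 150 times Altman’s — hundreds of thousands of litres a day versus tens of millions.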
Either way, the trend is clear: AI uses a lot of water, and usage is rising as models get bigger and traffic grows.
Why Do Data Centres Need So Much Water?
Every time you send a prompt to an AI, powerful chips in data centres run huge numbers of calculations. Training the models is even more intensive. All that work generates heat. If the hardware isn’t cooled properly, it overheats and fails.
Air cooling used to be enough. But the newest AI infrastructure is so energy-dense that many operators are turning to liquid cooling: piping coolant over the chips, then using water to cool that coolant in heat exchangers. The water has to be clean — often drinking-quality — to avoid bacteria, clogs, and corrosion. In the most common setup:
- Coolant flows over the chips and carries heat away.
- Water cools the coolant; the coolant goes back to the servers.
- Hot water is sent to cooling towers, where fans and evaporation dissipate the heat.
- A large share of the water evaporates in the process (in some systems, up to 80%). That water is gone from the local water cycle: it’s no longer available for farming, drinking, or hygiene.
So the water isn’t just “used”; in many cases it’s consumed and not returned to the same source.
It’s Not Just the Data Center — It’s the Whole Chain
Water use doesn’t stop at the server room:
- Electricity: Power plants — coal, gas, nuclear — heat water to create steam that drives turbines. More AI means more data centers, which means more electricity, and thus more water used in power generation.
- Chip manufacturing: Making the semiconductor chips that run AI requires water. So does refining the raw materials that go into the hardware. The full supply chain is water-intensive from mine to data center.
The International Energy Agency projects that electricity demand from AI-optimised data centres could more than quadruple by 2030, reaching hundreds of terawatt-hours a year, comparable to a large country’s annual consumption. That implies a lot more water for both cooling and power generation.
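As a rough illustration of what that extra electricity demand could mean for water, the sketch below converts a hundreds-of-terawatt-hours figure into litres. The water intensity of thermal power generation used here is an assumed ballpark range for illustration only; it is not a figure from the article or the IEA, and real plants vary widely by fuel and cooling design.

```python
# Illustrative only: the TWh figure reflects the article's
# "hundreds of terawatt-hours"; the litres-per-kWh range is an
# ASSUMED ballpark for thermal plants, not a sourced number.

EXTRA_DEMAND_TWH = 200                 # illustrative extra demand
WATER_PER_KWH_LITRES = (0.5, 2.0)      # assumed consumption range

kwh = EXTRA_DEMAND_TWH * 1e9           # 1 TWh = 1 billion kWh
low, high = (kwh * w for w in WATER_PER_KWH_LITRES)
print(f"~{low / 1e9:,.0f} to {high / 1e9:,.0f} billion litres/year")
```

Under these assumptions, the power-generation side alone lands in the range of tens to hundreds of billions of litres a year — on top of the water evaporated in data-centre cooling towers.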
Communities Are Pushing Back
Around the world, communities are worried about data centres straining water supplies and electricity grids. Protests and opposition have emerged in Spain, India, Chile, Uruguay, and parts of the United States. When a data centre competes for the same water that irrigates crops or supplies taps, the trade-off becomes very real.
Getting precise numbers is difficult. Google, Meta, and Microsoft report that their data centres use billions of litres of water each year, but they generally don’t break out how much is due to AI specifically. So the full picture of AI’s water footprint is still incomplete.
Pledges, Innovation, and a Long Road Ahead
Many big tech firms have pledged to become water neutral or to sharply cut water use by 2030. Reaching those goals will require real change: less evaporation, better reuse, and smarter cooling.
Some companies are already testing:
- Cooling with little or no evaporation — keeping water in closed loops or using alternative methods.
- Using data centre waste heat to warm homes or buildings.
- Unconventional locations — data centres under the sea, in the Arctic, or even early experiments in space for backup or niche workloads.
These ideas are still early. Generative AI is young; so is the industry’s understanding of how to scale it without overwhelming water and energy systems. The hope is that, as a global society, we learn to minimise water and energy use while still benefiting from the technology — because both are shared, finite resources.
The Bottom Line: Growth at What Cost?
AI is changing how we work, create, and solve problems. But that growth has a physical cost: water for cooling, water for power, and water embedded in the supply chain. The exact number per ChatGPT query may be debated, but the direction is not: AI is thirsty, and scaling it without a plan risks deepening strain on water-stressed regions and conflict with local needs.
The challenge is to make AI more efficient and more sustainable so that the benefits don’t come at the expense of communities and the environment. That will take transparency from tech companies, smarter design, and policy that treats water and energy as part of the real cost of AI.
Editorial Note
This article is an independent summary and analysis inspired by reporting from BBC News. All original reporting credit belongs to BBC. Readers are encouraged to view the original report for full context.
Watch the Full Report
For a deeper look at how AI uses water — from cooling towers to chip fabs to Sam Altman’s estimate — watch the full BBC report here:
How Much Water Does AI Use? The Thirsty Truth Behind ChatGPT and Data Centres (BBC)