AI emissions: What we know so far – and more importantly, what we don’t know

Speculation about the energy use and carbon footprint of artificial intelligence (AI) is rife – but hard facts are scarce. Here’s what we know – and more importantly, what we don’t know – about AI emissions.
The debate over how much energy an AI query uses was revived in June 2025, when OpenAI co-founder and CEO Sam Altman stated in a blog post that the average ChatGPT query consumes about 0.34 Wh of electricity – close to the figure suggested by research firm Epoch AI earlier this year.
The number challenged another widely cited estimate, made by the International Energy Agency (IEA) in 2024: that a ChatGPT request used 10 times as much electricity as a standard Google search (2.9 Wh vs 0.3 Wh).
But Altman’s number was strongly criticised by GreenOps specialists – IT experts focused on reducing the environmental impact of the sector.
Mark Butcher, a globally recognised digital sustainability leader, quickly countered with a calculation of his own. Based on the stated 1 billion queries ChatGPT handles every day and the estimated energy use of a graphics processing unit (GPU), he found that a 0.34 Wh-per-query load would imply the company is serving the world’s queries with only around 3,200 physical GPU nodes, each made of eight GPUs.
“So… all of ChatGPT, all users, all tiers, all models, is being served by just 3,200 nodes on a platform running in multiple locations globally including HA, business continuity etc? That feels wildly wrong and is precisely why we need numbers that we can verify and trust,” he added.
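A back-of-envelope version of this check can be sketched as follows. The queries-per-day, Wh-per-query and GPUs-per-node figures come from the article; the average per-GPU power draw is not stated anywhere, so the ~550 W used here is our own assumption, chosen to show how a count near 3,200 nodes falls out:

```python
# Back-of-envelope sketch of the implied GPU fleet size.
# AVG_GPU_DRAW_W is a hypothetical assumption of ours.

QUERIES_PER_DAY = 1e9     # stated daily ChatGPT query volume
WH_PER_QUERY = 0.34       # Altman's per-query estimate
GPUS_PER_NODE = 8         # node size cited by Butcher
AVG_GPU_DRAW_W = 550      # assumed average draw per GPU (hypothetical)

daily_energy_wh = QUERIES_PER_DAY * WH_PER_QUERY   # 340 MWh per day
avg_power_w = daily_energy_wh / 24                 # ~14.2 MW continuous
node_power_w = GPUS_PER_NODE * AVG_GPU_DRAW_W      # 4.4 kW per node
implied_nodes = avg_power_w / node_power_w

print(f"Implied fleet: ~{implied_nodes:,.0f} eight-GPU nodes")
# → Implied fleet: ~3,220 eight-GPU nodes
```

Whatever the exact power assumption, the shape of the argument holds: a fleet this small looks implausible for a global, redundant service – which is precisely Butcher’s point about verifiable numbers.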
Butcher’s comments were met very positively by other digital sustainability experts – most of whom expressed distrust of Altman’s estimate.
Lack of common framework to calculate AI energy use
So how can there be two wildly different figures for the energy consumption of an AI query? The answer to that question also explains why it is so difficult to calculate AI emissions.
There is no common framework for how AI energy consumption – and therefore emissions – should be measured: “Is it carbon per request? Is it carbon per task completed? Is it carbon per unit? Time? Is it something related to token generation?” asked Joseph Cook, Head of R&D at the Green Software Foundation, in a recent CSO Futures webinar on AI and sustainability.
Some experts, such as large language model (LLM) engineer and data scientist Kasper Groes Albin Ludvigsen, suspect that Altman is only including GPU server consumption in his estimate, which “seems to be a common practice among the companies that do report on their AI consumption”.
But the IEA estimates that servers – made of GPUs or central processing units (CPUs) – account for around 60% of data centres’ energy use on average, with large differences depending on how modern they are. Other sources of electricity demand include storage units, networking equipment, and cooling and environmental controls.
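If that suspicion is right, the IEA’s 60% average suggests how much a server-only figure might understate the facility-level total. The adjustment below is our own illustration under that assumption, not a figure from the IEA or OpenAI:

```python
# Hypothetical adjustment: scale a server-only per-query figure up to
# the whole facility, assuming servers account for ~60% of the load.

SERVER_SHARE = 0.60       # IEA average share of data centre energy use
server_only_wh = 0.34     # Altman's per-query estimate

facility_wh = server_only_wh / SERVER_SHARE
print(f"Facility-level estimate: ~{facility_wh:.2f} Wh per query")
# → Facility-level estimate: ~0.57 Wh per query
```

On that reading, cooling, storage and networking alone would push the per-query figure roughly two-thirds higher than the headline number.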
Additionally, before ChatGPT or any other AI tool can answer a query, it needs to be trained – which also requires large amounts of energy (the energy consumed to train the GPT-3 model is estimated at around 1,064 MWh).
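For a sense of scale, that training estimate can be set against the inference figures quoted above. The comparison is ours and purely illustrative, since it mixes numbers for different models:

```python
# Illustrative only: amortising a training-energy estimate over queries,
# using figures quoted in the article (GPT-3 training ~1,064 MWh;
# 0.34 Wh per query; ~1 billion queries per day).

TRAINING_MWH = 1_064
WH_PER_QUERY = 0.34
QUERIES_PER_DAY = 1e9

training_wh = TRAINING_MWH * 1e6
equivalent_queries = training_wh / WH_PER_QUERY
days_of_inference = equivalent_queries / QUERIES_PER_DAY

print(f"Training ~= {equivalent_queries:.2e} queries "
      f"({days_of_inference:.1f} days of inference)")
# → Training ~= 3.13e+09 queries (3.1 days of inference)
```

At today’s query volumes, in other words, a GPT-3-scale training run is comparable to only a few days of serving traffic – inference, not just training, now dominates the energy picture.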
The main problem is that AI and cloud companies are not being particularly transparent about how they calculate their data centres’ energy use, or about how much of it comes from AI queries.
AI emissions assessments and methodologies
When looking specifically at emissions, the assessment will also depend on factors such as the data centre’s location and how green the grid is there, and whether the operator is sourcing renewable electricity.
Some argue that calculations should consider the lifecycle (‘cradle to grave’) emissions of the hardware used to power AI. One 2025 study of Google’s Tensor Processing Units (TPUs) found that when including upstream (extraction, parts production, server manufacturing, and transportation) and downstream (end-of-life and recycling) stages of the data centre hardware lifecycle as well as operational emissions, each TPU had a carbon footprint of between 386 and 1,101 kilograms of CO2 equivalent over six years of use.
In another recent estimation developed in partnership with consultancy Carbone 4 and the French ecological transition agency (ADEME) and peer-reviewed by digital sustainability consultancies Resilio and Hubblo, Mistral AI calculated the lifecycle emissions of its own AI model at 20,400 tonnes of CO2 equivalent after 18 months of use.
Estimating that model training and inference represent more than 85% of AI’s lifecycle emissions, Mistral AI also stated that one query to its AI assistant generates an average of 1.14 grams of CO2 equivalent.
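To put a per-query figure of that size in context, it can be scaled to hypothetical query volumes. The daily volumes below are our own illustrative choices, not numbers disclosed by Mistral AI:

```python
# Illustrative scaling of Mistral AI's disclosed per-query figure
# (1.14 gCO2e) to hypothetical daily query volumes.

G_PER_QUERY = 1.14  # grams CO2e per assistant query (Mistral AI)

def annual_tonnes(queries_per_day: float) -> float:
    """Annual emissions in tonnes CO2e at a given daily query volume."""
    return queries_per_day * G_PER_QUERY * 365 / 1e6

for q in (1e6, 1e9):
    print(f"{q:.0e} queries/day -> {annual_tonnes(q):,.0f} tCO2e/year")
```

A gram per query sounds negligible, but at ChatGPT-scale volumes of a billion queries a day it would compound to hundreds of thousands of tonnes a year – which is why the per-query methodology matters.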
Studies like these are bringing crucial data to the fore and sparking important discussions about methodologies – but they are still rare. It might take some time before we have a universally accepted, GHG Protocol-like carbon calculation methodology for AI – but it’s worth keeping an eye on the work of the Green Software Foundation, which is developing a carbon intensity measurement framework.
What we do know about AI emissions
While there is some debate over how to calculate the environmental impact of AI, one thing is clear: its widespread use is dramatically increasing tech companies’ carbon footprint. According to a June 2025 report by the International Telecommunication Union (ITU) and World Benchmarking Alliance (WBA), which used data reported publicly, the four top AI-focused firms experienced a whopping 150% increase in operational emissions on average since 2020.
Microsoft reported a 30% emissions rise in 2023 alone, due to the construction of new data centres to meet growing cloud and AI demand. Google’s emissions grew by 13% the same year, and 48% since 2019, prompting CSO Kate Brandt to warn stakeholders that achieving Google’s target of net zero emissions by 2030 will require navigating a lot of uncertainty, including “around the future of AI’s environmental impacts”.
Concerns over environmental impacts have dampened businesses’ enthusiasm for adopting AI – and for good reason.
“I think the right way to think about AI sustainability is that it’s like a set of scales that has to be balanced. On one side, we’ve got all the negative externalities, including for the environment, and on the other side, we’ve got the positive externalities, including societal benefits. And the challenge that we’ve got today is that it’s very, very hard to accurately estimate how much weight is sitting on either side of the scale,” summarised Cook at the CSO Futures webinar.
Read also: Generative AI – sustainability game changer or ticking time bomb?