Why Some AI Models Spew 50 Times More Greenhouse Gas to Answer the Same Question

by admin June 19, 2025


Like it or not, large language models have quickly become embedded into our lives. And due to their intense energy and water needs, they might also be causing us to spiral even faster into climate chaos. Some LLMs, though, might be releasing more planet-warming pollution than others, a new study finds.

Queries made to some models generate up to 50 times more carbon emissions than others, according to a new study published in Frontiers in Communication. Unfortunately, and perhaps unsurprisingly, models that are more accurate tend to have the biggest energy costs.

It’s hard to estimate just how bad LLMs are for the environment, but some studies have suggested that training ChatGPT used up to 30 times more energy than the average American uses in a year. What isn’t known is whether some models have steeper energy costs than their peers as they’re answering questions.

Researchers from the Hochschule München University of Applied Sciences in Germany evaluated 14 LLMs ranging from 7 to 72 billion parameters—the levers and dials that fine-tune a model’s understanding and language generation—on 1,000 benchmark questions about various subjects.

LLMs convert the words, or parts of words, in a prompt into a sequence of numbers called tokens. Some LLMs, particularly reasoning LLMs, also insert special “thinking tokens” into the input sequence to allow for additional internal computation and reasoning before generating output. This conversion and the subsequent computations that the LLM performs on the tokens use energy and release CO2.
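As a rough illustration of what that conversion looks like in practice, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. The library choice and the sample prompt are assumptions for illustration only; the study does not say which tokenizers its 14 models use.

```python
# Minimal tokenization sketch (illustrative only; not the study's setup).
import tiktoken

# Load a general-purpose BPE encoding.
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Why do some AI models emit more CO2 than others?"
token_ids = enc.encode(prompt)

print(token_ids)              # the prompt as a sequence of numeric token IDs
print(len(token_ids))         # how many tokens the model must process
print(enc.decode(token_ids))  # decoding recovers the original text
```

Every token a model reads or generates costs additional computation, which is why token counts are a useful proxy for energy use.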

The scientists compared the number of tokens generated by each of the models they tested. Reasoning models, on average, created 543.5 thinking tokens per question, whereas concise models required just 37.7 tokens per question, the study found. In the ChatGPT world, for example, GPT-3.5 is a concise model, whereas o1 is a reasoning model.
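The gap in token counts alone gives a sense of the scale involved. A quick back-of-the-envelope calculation with the study's averages, sketched below, shows reasoning models generating roughly 14 times more tokens per question:

```python
# Back-of-the-envelope ratio using the per-question averages reported in the study.
reasoning_tokens_per_question = 543.5  # average thinking tokens, reasoning models
concise_tokens_per_question = 37.7     # average tokens, concise models

ratio = reasoning_tokens_per_question / concise_tokens_per_question
print(f"Reasoning models generate ~{ratio:.1f}x more tokens per question")  # ~14.4x
```

Since each generated token requires another round of computation, that extra output is a major driver of the emissions gap the study reports.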

This reasoning process drives up energy needs, the authors found. “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach,” study author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a statement. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”

The more accurate the models were, the more carbon emissions they produced, the study found. The reasoning model Cogito, which has 70 billion parameters, reached up to 84.9% accuracy—but it also produced three times more CO2 emissions than similarly sized models that generate more concise answers.

“Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Dauner. “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.” CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

Another factor was subject matter. Questions that required detailed or complex reasoning, such as abstract algebra or philosophy, led to up to six times higher emissions than questions on more straightforward subjects, according to the study.

There are some caveats, though. Emissions are highly dependent on how local energy grids are structured and on which models are examined, so it’s unclear how generalizable these findings are. Still, the study authors said they hope the work will encourage people to be “selective and thoughtful” about their LLM use.
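To see why the grid matters so much, a simple estimate multiplies the energy a model consumes by the carbon intensity of the electricity powering it. The sketch below is a minimal illustration of that relationship; the energy figure and grid intensities are placeholder values, not numbers from the study.

```python
# Illustrative sketch: emissions = energy consumed x grid carbon intensity.
# All numbers are placeholders, not figures from the study.

def co2e_grams(energy_kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions in grams."""
    return energy_kwh * grid_intensity_g_per_kwh

energy_kwh = 1.0  # hypothetical energy to answer a batch of questions

for grid_name, intensity in [("coal-heavy grid", 800),
                             ("mixed grid", 400),
                             ("hydro-heavy grid", 30)]:
    print(f"{grid_name}: {co2e_grams(energy_kwh, intensity):.0f} g CO2e")
```

The same workload can thus carry a very different footprint depending on where, and when, it runs.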

“Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner said in a statement.
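In practice, that advice maps onto two simple levers: instruct the model to be brief and cap how many tokens it may generate. The sketch below uses the OpenAI Python SDK purely as an example interface; the model name and token limit are illustrative choices, not recommendations from the study, and the same idea applies to any chat API.

```python
# Illustrative sketch: request a concise answer and cap output length.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # a smaller, non-reasoning model (illustrative choice)
    messages=[
        {"role": "system", "content": "Answer in at most two sentences."},
        {"role": "user", "content": "What is CO2 equivalent?"},
    ],
    max_tokens=80,  # hard cap on the number of generated tokens
)

print(response.choices[0].message.content)
```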



Source link
