Garbage in, Agentic out: why data and document quality is critical to autonomous AI’s success

by admin October 1, 2025



There is a lot of optimism about the future of agentic AI because it promises to drive higher levels of digital transformation by autonomously handling complex, multi-step tasks with accuracy, speed, and scalability.

Much of the buzz around AI agents stems from their ability to make decisions without human intervention, free up skilled talent for strategic work, and scale decision-making without adding headcount.

That said, how can companies go beyond the hype to gain a better understanding of how agentic AI can drive higher efficiency and return on investment?



According to PwC, growing interest in agentic AI is translating into IT investments. In its May 2025 survey, 88% of respondents said their team or line of business plans to increase AI-related budgets in the next year because of agentic AI.

Scott Francis, Technology Evangelist at PFU America, Inc.

And 79% reported AI agents are already being adopted in their companies, and of those that have adopted agents, two-thirds (66%) claim they’re delivering measurable value through increased productivity.

But there are some clouds on the horizon: Gartner predicts that more than 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.

However, done right, and with proper preparation, agentic AI has the potential to be far more disruptive than generative AI because of its direct impact on business KPIs such as cost reduction, faster decision-making, and task completion.


Agents driving healthcare transformation

Early-adopter use cases are already showing promise. Take Nvidia for example. The AI innovator is developing an enterprise AI platform to create task-specific AI agents, including one for The Ottawa Hospital that will handle patients’ pre-operative questions 24/7.

This includes providing details on how to get ready for surgery, and on post-surgery recovery and rehabilitation. According to Kimberly Powell, vice president and general manager of healthcare at Nvidia, AI agents can save providers time and money, while also enhancing the patient experience.

However, the investment in agentic AI is a waste of time, money, and resources if the underlying models are fed outdated, poor-quality, or inaccurate data. The Ottawa Hospital agent now in development, for instance, relies on well-organized, accurate, up-to-date patient information to drive decision-making and automate tasks.



Healthcare is just one potential use case for agentic AI. Businesses in almost any industry stand to benefit from improved efficiency through task automation, reduced human error, and scaled decision-making in applications such as customer support, procurement, and IT operations.

Data and document quality determine agentic AI effectiveness

Unlike GenAI — a very useful content creation tool — agentic AI acts autonomously, which is why data and document quality are even more imperative. The LLMs at the core of agents require clean, validated, and secure data because agents’ actions and decisions are only as good as the data and rules they’re given.

Agentic AI relies on structured data and digitized documents to make decisions, trigger workflows, or generate outputs. Bottom line: inaccurate, outdated, or incomplete data directly skews the logic the AI uses to act.

One scenario illustrating how agents can go terribly wrong is in bank loan applications. If the financial data from scanned forms or other inputs is outdated, it could lead the AI to approve a high-risk applicant, increasing the potential for bank losses.

For non-digital documentation, hard copies that have been scanned using old equipment with low resolution and poor image quality can confuse optical character recognition (OCR) and natural language processing (NLP) systems, leading the agent to misinterpret content.

Advanced, high-speed imaging scanners that rotate skewed documents, offer 300 DPI resolution, and use adaptive thresholding to enhance characters and remove stains, watermarks, and background noise are ideal for accurate OCR and, in turn, more accurate results.
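For teams cleaning scans in software rather than relying on scanner hardware, the same two steps can be approximated in code. The snippet below is a minimal sketch, not a production pipeline: it assumes OpenCV (cv2) is installed, uses a hypothetical file name, and deskews a grayscale scan before applying adaptive thresholding and handing the page to an OCR engine.

```python
# Minimal sketch of pre-OCR cleanup: deskew, then adaptive thresholding.
# Assumes OpenCV is installed; "scanned_form.png" is a placeholder file name.
import cv2
import numpy as np

def preprocess_for_ocr(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Estimate skew from the minimum-area rectangle around dark (inked) pixels,
    # then rotate so text lines run horizontally. A rough heuristic, but enough
    # to illustrate the idea.
    coords = np.column_stack(np.where(gray < 200)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]
    if angle > 45:
        angle -= 90
    h, w = gray.shape
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    deskewed = cv2.warpAffine(gray, rotation, (w, h),
                              flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)

    # Adaptive thresholding compares each pixel to its local neighborhood,
    # suppressing stains, watermarks, and uneven lighting.
    return cv2.adaptiveThreshold(deskewed, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 15)

clean_page = preprocess_for_ocr("scanned_form.png")
```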

Data preparation makes all the difference

To stop autonomous agents from “hallucinating” or delivering poor decisions that could impact operational efficiency, organizations should follow industry-leading data management and retention best practices to prepare data sets prior to introducing them to an LLM, including:

  • Preprocess and clean data – Without consistent data “spring cleaning,” even the most advanced AI will struggle. It’s critical to remove duplicate documents and data, outdated versions, and corrupt files. Using AI for document classification, summarization, and cleanup dramatically speeds up the process while reducing manual effort. Even fixing typos, formatting issues, and inconsistent structures in scanned documents and PDFs will improve AI inputs, minimizing the potential for “garbage in, garbage out.”
  • Classify and tag documents – Once the data has been cleansed and processed, apply metadata labels — such as “sales presentation” or “HR training manual” — to documents for easier identification, then organize content into semantic categories relevant to business processes. Giving documents structure enables agents to better understand context and relevance (a minimal code sketch of these first two steps follows this list).
  • Preserve data confidentiality – It’s critical that all AI systems only have access to the data and documents they need, and nothing more. This also applies to the use of external APIs or tools. Sensitive, personal data that’s no longer needed should be sanitized and erased permanently to minimize risks related to data privacy, leakage, or compliance violations.
  • Test and analyze – Finally, run tests on sample prompts using smaller document sets and analyze the outputs. Use feedback loops to refine data sources and formatting before scaling up. This important step enables IT teams to catch formatting issues, hallucinations, or data misinterpretations early.
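As referenced above, here is a minimal sketch of the first two steps (de-duplication and metadata tagging), assuming documents have already been digitized to plain text. The folder name, category labels, and keyword rules are illustrative placeholders, not a prescribed implementation.

```python
# Minimal sketch of de-duplication and metadata tagging for OCR'd text files.
# Paths, categories, and keyword rules are illustrative placeholders.
import hashlib
import json
from pathlib import Path

CATEGORIES = {
    "sales presentation": ["pipeline", "quota", "prospect"],
    "HR training manual": ["onboarding", "benefits", "policy"],
}

def prepare_corpus(src_dir: str, out_file: str) -> None:
    seen_hashes = set()
    records = []
    for path in Path(src_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").strip()

        # Drop exact duplicates by hashing whitespace-normalized content.
        digest = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if not text or digest in seen_hashes:
            continue
        seen_hashes.add(digest)

        # Naive keyword-based classification; in practice an LLM or a trained
        # classifier would assign these labels.
        lowered = text.lower()
        labels = [cat for cat, kws in CATEGORIES.items()
                  if any(kw in lowered for kw in kws)]

        records.append({"source": path.name,
                        "labels": labels or ["uncategorized"],
                        "text": text})

    Path(out_file).write_text(json.dumps(records, indent=2), encoding="utf-8")

prepare_corpus("cleaned_docs/", "corpus.json")  # hypothetical locations
```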

The quality imperative and the future of agentic AI

There’s a lot riding on the promises of autonomous AI, with spending projected to reach $155 billion by 2030. However, for agentic AI to be accurate, reliable, and able to support compliance, organizations must prioritize data and document quality.

By adopting best practices that prioritize clean, well-governed data and clear documentation, organizations can ensure the AI agents they employ operate with precision and integrity. In a future shaped by autonomous systems, high-quality information isn’t just an asset; it’s a prerequisite for trusted and effective agentic output.


This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro





Americans Want AI’s Benefits But Fear Losing What Makes Them Human: Survey

by admin September 20, 2025



In brief

  • A new Pew survey shows Americans want AI’s help with chores but fear it will erode their minds and relationships.
  • Most Americans say AI strips away creativity and human connection, with only 10% feeling more excited than worried.
  • The majority admit they have little or no control over AI in their lives—just a digital tide they can’t stop.

Americans are growing increasingly uneasy about artificial intelligence infiltrating their daily lives, with half now saying they’re more concerned than excited about the technology—a sharp jump from 37% just four years ago, according to a new Pew Research Center survey.

The study of 5,023 U.S. adults, conducted in June 2025 and published this week, reveals a nation grappling with a fundamental paradox: While 73% say they’d let AI assist with day-to-day tasks, 61% simultaneously want more control over how it’s used in their lives.

Half of U.S. adults say the increased use of AI in daily life makes them feel more concerned than excited, compared with 10% who are more excited than concerned.

More broadly, Americans see AI as both inevitable and threatening to core human capabilities. Some 53% of respondents said AI will worsen people’s ability to think creatively, compared with 16% who said it will improve it. Half believe AI will damage people’s ability to form meaningful relationships, with only 5% expecting improvement in human connections.

“I think a sizable portion of humanity is inclined to seek the path of least resistance,” one woman participating in the study told the researchers. “As annoying and troublesome as hardships and obstacles can be, I believe the experience of encountering these things and overcoming them is essential to forming our character.”

The generational divide shows that the younger the generation, the greater its exposure to AI in day-to-day life. According to the study, 62% of those under 30 say they have heard or read a lot about AI, compared with 32% of those ages 65 and older.



Yet these younger Americans, despite their greater familiarity with the technology, express deeper pessimism about its effects: 61% of adults under 30 think the increased use of AI in society will make people worse at thinking creatively, compared with 42% of those ages 65 and older.

The American unease mirrors global trends. Stanford’s HAI AI Index Report 2025 confirms that worldwide, ambivalence and worry are increasing even as people appreciate AI’s efficiency gains. The tension is particularly acute in developed nations: In 2022, countries like Great Britain (38%), Germany (37%), and the United States (35%) were among the least likely globally to view AI as having more benefits than drawbacks.

Trust emerges as another critical fault line. While 76% say it’s extremely or very important to be able to tell if pictures, videos, or text were made by AI or people, more than half admit they lack confidence in their ability to actually make that distinction. This trust deficit extends beyond content detection: KPMG’s 2025 Global Trust Report found that confidence in AI companies has been falling steadily since 2022.

Another notable Pew finding is that 57% of Americans rate the risks of AI for society as high or very high, while only about a quarter express real enthusiasm for the technology.

When asked to explain their concerns, respondents most frequently cited the erosion of human abilities and connections—people becoming lazy, losing critical thinking skills, or depending too heavily on machines for basic tasks.

This growing wariness contrasts sharply with AI experts surveyed by Pew earlier this year. AI experts are far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years (56% vs. 17%), according to a previous study.

The divide between experts and the public reflects deeper tensions about who benefits from AI advancement. Academic studies show marginalized groups—minorities and people with disabilities—express even more negative views about AI than the general population, suggesting the technology’s benefits aren’t reaching everyone equally.

In other words, the negative effects of AI technologies are perceived more by groups that are affected by biases or stereotypes—which generative AI models tend to amplify.

Americans do see limited roles for AI in specific contexts—weather forecasting, detecting financial crimes, or developing medicines. But they draw firm boundaries around personal matters. Some 73% of respondents said that AI should play no role in advising people about their faith in God, and two-thirds reject AI involvement in judging romantic compatibility.

The regulatory landscape reflects these concerns. A Gallup-SCSP 2025 study found overwhelming support for stricter oversight, with 72% backing greater government efforts to control the AI industry.

Ultimately, 57% of respondents said they have not too much or no control at all over whether AI is used in their lives, which shows that many Americans already feel the technology’s advance is beyond their influence—a digital tide they can neither stop nor fully embrace.





Why Bitcoin Miners Are Powering AI’s Expansion

by admin September 14, 2025



When Core Scientific signed a $3.5 billion deal to host artificial intelligence (AI) data centers earlier this year, it wasn’t chasing the next crypto token — it was chasing a steadier paycheck. Once known for its vast fleets of bitcoin mining rigs, the company is now part of a growing trend: converting energy-intensive mining operations into high-performance AI facilities.

Bitcoin miners like Core, Hut 8 (HUT) and TeraWulf (WULF) are swapping ASIC machines — the dedicated computers built for bitcoin mining — for GPU clusters, driven by the lure of AI’s explosive growth and the harsh economics of crypto mining.

Power play

It’s no secret that bitcoin mining requires an extensive amount of energy, which is the biggest cost of minting a new digital asset.

Back in the 2021 bull run, when the Bitcoin network’s hashrate and difficulty were low, miners were making out like bandits with margins of as much as 90%. Then came the brutal crypto winter and the halving event, which cut the mining reward in half. In 2025, with surging hashrate and energy prices, miners are struggling to survive on razor-thin margins.

However, the need for power—the biggest input cost—became a blessing in disguise for these miners, who needed a different strategy to diversify their revenue sources.

Due to rising competition, miners kept procuring more machines to stay afloat, and with that came the need for more megawatts of electricity at a cheaper price. Miners invested heavily in securing low-cost energy sources, such as hydroelectric or stranded natural gas sites, and developed expertise in managing high-density cooling and electrical systems—skills honed during the crypto boom of the early 2020s.

This is what captured the attention of AI and cloud computing firms. While bitcoin relies on specialized ASICs, AI thrives on versatile GPUs like Nvidia’s H100 series, which require similar high-power environments but for parallel processing tasks in machine learning. Instead of building data centers from scratch, taking over mining infrastructure that already has power in place became a faster way to meet the growing appetite for AI-related capacity.

Essentially, these miners aren’t just pivoting—they’re retrofitting.

The cooling systems, low-cost energy contracts, and power-dense infrastructure they built during the crypto boom now serve a new purpose: feeding the AI models of companies like OpenAI and Google.

Firms like Crusoe Energy sold off mining assets to focus solely on AI, deploying GPU clusters in remote, energy-rich locations that mirror the decentralized ethos of crypto but now fuel centralized AI hyperscalers.

Terraforming AI

Bitcoin mining has effectively “terraformed” the terrain for AI compute by building out scalable, power-efficient infrastructure that AI desperately needs.

As Nicholas Gregory, Board Director at Fragrant Prosperity, noted, “It can be argued bitcoin paved the way for digital dollar payments as can be seen with USDT/Tether. It also looks like bitcoin terraformed data centres for AI/GPU compute.”


Higher returns

In practice, it means miners can flip a facility in less than a year—far faster than the multi-year timeline of a new data center.

But AI isn’t a cheap upgrade.

Bitcoin mining setups are relatively modest, with costs ranging from $300,000 to $800,000 per megawatt (MW) excluding ASICs, allowing for quick scalability in response to market cycles. Meanwhile, AI infrastructure demands significantly higher capex due to the need for advanced liquid cooling, redundant power systems, and the GPUs themselves, which can cost tens of thousands per unit and face global supply shortages. Despite the steeper upfront costs, AI offers miners up to 25 times more revenue per kilowatt-hour than bitcoin mining, making the pivot economically compelling amid rising energy prices and declining crypto profitability.
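A back-of-envelope calculation makes that gap concrete. The figures below are assumptions for illustration only (the baseline revenue per kWh is not a quoted market rate); the 25x multiplier is the article's upper bound.

```python
# Illustrative comparison of annual revenue per megawatt. The baseline
# revenue-per-kWh figure is an assumption, not a quoted market rate; the 25x
# multiplier is the article's upper bound for AI hosting vs. bitcoin mining.
HOURS_PER_YEAR = 24 * 365
ASSUMED_BTC_REVENUE_PER_KWH = 0.08   # USD, illustrative assumption
AI_MULTIPLIER = 25                   # "up to 25 times more revenue per kWh"

def annual_revenue_per_mw(revenue_per_kwh: float) -> float:
    # One megawatt drawn continuously is 1,000 kWh every hour.
    return revenue_per_kwh * 1_000 * HOURS_PER_YEAR

btc = annual_revenue_per_mw(ASSUMED_BTC_REVENUE_PER_KWH)
ai = annual_revenue_per_mw(ASSUMED_BTC_REVENUE_PER_KWH * AI_MULTIPLIER)
print(f"Bitcoin mining: ~${btc:,.0f} per MW-year")
print(f"AI hosting:     ~${ai:,.0f} per MW-year")
```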

A niche industry worth billions

As AI continues to surge and crypto profits tighten, bitcoin mining could become a niche game—one reserved for energy-rich regions or highly efficient players, especially as the next halving in 2028 could render many operations unprofitable without breakthroughs in efficiency or energy costs.

While projections show the global crypto mining market growing to $3.3 billion by 2030, at a modest 6.9% CAGR, those billions would be overshadowed by AI’s exponential expansion. According to KBV Research, the global AI in mining market is projected to reach $435.94 billion by 2032, expanding at a compound annual growth rate (CAGR) of 40.6%.

With investors already seeing dollar signs in this shift, the broader trend suggests the future is either a hybrid or a full conversion to AI, where stable contracts with hyperscalers promise longevity over crypto’s boom-bust cycles.

This evolution not only repurposes idle assets but also underscores how yesterday’s crypto frontiers are forging tomorrow’s AI empires.




AI’s ‘Zoom and Enhance’ Is Bullshit

by admin September 2, 2025


Speculation about Donald Trump’s declining health was running rampant on social media over the holiday weekend as Americans wondered whether the president was actually on his deathbed. Some people even turned to AI tools in an effort to get a better look at grainy photos of Trump, who seemed to only be captured in blurry images like Bigfoot in the wild.

But using AI like that is a complete waste of time. The AI only confused people because these “enhancements” can’t actually give you a more honest picture of a grainy photo. AI just adds bullshit that wasn’t there to begin with.

Patient zero for the AI-induced nonsense about Trump this weekend appeared to be a Bluesky user named Christopher Webb. The online sleuth wrote, “Today’s press pool photo shows Trump leaving the White House this morning,” while sharing a grainy photo from Getty Images.

“I enhanced the image. And seriously, can an expert explain what the hell is happening to his forehead? Right above his nose and right brow,” Webb continued, watermarking the image in the lower-right corner.


And that AI-augmented image rippled around the entire English-speaking internet.

Grok told me it’s real

It did look weird. The AI-altered photo showed what appeared to be a large mass on Trump’s forehead, causing people to insist that he had a stroke. Others came up with even more specific diagnoses, calling it a “decompressive hemicraniectomy,” in a screenshot on X that also appeared to rely on generative AI for information.

Two tweets speculating about the mass on Trump’s head, which was added by AI enhancement tools. © X

The AI image jumped from Bluesky to just about every other social media platform on the internet, including X, Threads, Instagram, and TikTok.

Users on X asked Grok, as they so often do, to check if the photo was legit. Grok insisted the photo was real “with no obvious signs of photoshopping.”


Grok, of course, often gets things wrong, even when it’s not being intentionally manipulated by its billionaire owner, Elon Musk, to spit out far-right talking points. Musk, as you may recall, gave two Nazi-style salutes back in January and tinkered with Grok in ways that made it praise Hitler and spread conspiracy theories about white farmers being killed in South Africa.

But that massive lump on Trump’s forehead isn’t real. It was added by AI because applying tools that “upscale” an image doesn’t actually bring you closer to the truth. It simply brings you closer to an image that looks sharper and less pixelated. And these tools will take things like shadows or innocuous creases in a person’s face and make it appear as if there’s something there when there isn’t.
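A simple experiment shows why. The sketch below (assuming Pillow and NumPy are installed, with a hypothetical file name) shrinks a photo, scales it back up, and measures how far the result drifts from the original; any detail an "enhancer" adds beyond that blur is invented, not recovered.

```python
# Downscaling destroys information that no upscaler can recover; "photo.jpg"
# is a placeholder. The printed error is what honest upscaling leaves behind;
# anything sharper than that is the model guessing.
from PIL import Image
import numpy as np

original = Image.open("photo.jpg").convert("L")
small = original.resize((original.width // 8, original.height // 8))
restored = small.resize(original.size, resample=Image.Resampling.BICUBIC)

a = np.asarray(original, dtype=np.float32)
b = np.asarray(restored, dtype=np.float32)
print(f"Mean per-pixel error after down/upscaling: {np.abs(a - b).mean():.1f} / 255")
```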

Check the flag

Maybe the best way to understand what’s happening is to look at the flags on Trump’s hat in the original photo and the AI-enhanced image. What do you notice?

Images: Getty / Bluesky

Yep, the AI image didn’t create a better version of reality. It made an educated guess about what the flag on the hat was supposed to look like and made it sharper. It managed to get rid of the stars in the process, only producing stripes. Sharper stripes, sure. But just stripes.

Zoom and enhance

This is far from the first time we’ve seen the rise of conspiracy theories aided by AI upscalers.

Do you remember when Will Smith slapped Chris Rock on stage at the Oscars in 2022? The video circulated widely, given how shocked everyone was at the time, and people started applying AI tools to get what they assumed would be a better view. But it just added shit that wasn’t there, leading to conspiracy theories.

As I wrote at the time, images were going viral purporting to show some kind of weird prosthetic on Rock’s face. The conspiracy theory was that Smith’s slap had been planned in advance by the producers of the Academy Awards and that we were seeing some kind of pad that would cushion the blow of any physical contact.

Image likely enhanced by AI of Chris Rock from March 2022 that added a weird fold to his skin. © X

It was an odd theory, given the fact that a tiny protective cushion wouldn’t do much if you slapped someone. But that’s what the internet ran with. And it appeared that people were just uploading the more pixelated images of Rock to AI upscaler tools in an attempt to get a better look.

The “pad” on Rock’s face was created by the AI tools because it took the natural lines on Rock’s face and made them even sharper, highlighting shadows and creating new information that wasn’t in the original image.

Older than our current “AI”

These “enhancement” tools are even older than generative AI tools like ChatGPT, which was launched in late 2022. Back in 2020, a tool called Face Depixelizer was being promoted as a way to turn pixelated photos into sharp images. But people quickly figured out just how silly it was by using images of famous people like President Barack Obama, who was transformed into a white guy.


The focus on social media at the time was about how these tools were racially biased. But biased or not, the tool simply didn’t work as advertised. You were supposed to be getting a better look at the world. And it gave you a more inaccurate one, regardless of race.

People shared some of the most humorous examples on X, which was then known as Twitter before Musk bought the site.

An image of @BarackObama getting upsampled into a white guy is floating around because it illustrates racial bias in #MachineLearning. Just in case you think it isn’t real, it is, I got the code working locally. Here is me, and here is @AOC. pic.twitter.com/kvL3pwwWe1

— 🔥囧Robert Osazuwa Ness囧🔥 (@osazuwa) June 20, 2020

As you can see, the AI tool is just doing its best to add information, often missing things that we can even see with the naked eye. And there is that bias, which may have previously been unintentional but is certainly much more intentional when guys like Musk are tinkering with the weights.

Should we write off upscaling completely?

When it comes to AI upscaling, not all applications are bad. If you’re trying to get a sharper image for a fictional scenario, like in a video game, there are use cases that make sense. But the confusion around what AI upscaling actually does can lead people to believe the computer program is giving them a better look at reality. And that’s just not true.

There was a trend a few years back to upscale some of the earliest short films, including clips from the 19th century, and it gave people the impression that they were seeing something more honest. And while the technique was interesting to see, it gave people a false impression.

The faces that viewers could suddenly make out (which had previously been too grainy and pixelated to see clearly) were just rendered by a computer as it tried to make something look better.

It wasn’t a more accurate version of a snowball fight, for instance, after upscaling. It was just a clearer one. And just like Chris Rock’s newly invented cheek pad, the faces on the people in that video include details that simply didn’t exist in the real world. They’re the AI program’s best guesses at what it should look like.

Trump’s health mystery

It’s not so weird to wonder about Trump’s health right now. He’s been seen with strange black marks on the back of his right hand for multiple weeks. But many social media users were asking conspiratorial questions that had reasonable answers over the weekend.

There were the viral videos that showed an ambulance riding along with the presidential motorcade. But there’s always a medical team traveling with the president in case of an emergency.

There were posts pointing out the fact that flags at the White House were lowered. But that was done to honor the victims of the shooting at a Catholic school in Minneapolis. One popular video on TikTok even claimed flags are never lowered to half-mast except when the president dies, which isn’t true.

Then there were the claims that Walter Reed hospital, where Trump was treated when he got covid-19 during his first term, had closed off several roads on Monday. But, again, that had a reasonable explanation. Those roads are often closed during holiday weekends. Monday was Labor Day.

There were countless versions of this speculation, but the truth is we simply don’t know whether Trump’s health is seriously deteriorating. The fact that he’s shied away from cameras in recent days makes people think he’s on his last legs.

But AI upscaler tools aren’t going to give us any answers. In fact, they’re just going to make all of us less informed.






39% of gamers in China are interested in generative AI’s application in games

by admin September 1, 2025


According to a new report, gamers in China are expressing a “high interest” in generative AI.

Niko Partners’ China Gamer Behaviour and Market Insights report, published on August 27, 2025, surveyed 1,058 gamers in China to provide insights on player demographics, behaviour, and engagement in 2025.

The report found that 39.4% of gamers in China are interested in generative AI’s application in games, while 85.2% are aware of the technology.

Niko Partners noted that Chinese studios are “ahead of the game” with 60% already using generative AI in their development pipelines.

The analysis firm, in particular, highlighted NetEase, which has “integrated multiple generative AI features into Justice Online Mobile, allowing players to interact with smart NPCs or create custom videos with their in-game character.”

Elsewhere, the report found that esports and game livestreaming remain “key verticals” in China in 2025.

46.5% of respondents said they have engaged with esports, defined as having played or watched an esports game, or competed in a tournament themselves, while one in four respondents said they regularly watch game livestreamers, with “influencers and streamers playing a key role in game marketing and promotion.”

According to Niko Partners’ report, short video apps, like Douyin (TikTok) and Kuaishou, have continued to grow in importance and are “key marketing channels for game developers looking to target players in the country.”

The survey found that these short video apps are one of the leading sources of video game information, with 41% of gamers in China getting new game information from these platforms, compared to game platforms and stores (38.7%), social media (33.9%), messaging apps (33.4%), and influencers and streamers (31.6%).

Niko Partners noted that while “short video is the leading source for both mobile and PC gamers,” game platform recommendations are the leading source on console.

The report also found that mini games have “emerged as a notable segment” in China.

The firm defines mini games as those that “can be played instantly, without download, and are typically available within platforms and apps such as WeChat and Douyin.”

The report found that 84.4% of gamers in China have played a mini game, with 17.7% of women playing these games daily, compared to 10.3% of men.

“Mini games are reaching a broader and more casual demographic in China, already accounting for nearly 10% of total player spending on video games,” Niko Partners stated.

In June, Niko Partners released its Market Model Reports, which found that the Asia and Middle East and North Africa (MENA) games markets generated $86.6 billion from player spending on software and services in 2024, marking a 1.4% increase.

The report also projected that “the regions’ development will continue reliably through 2029” with revenue estimated to reach $96 billion in Asia and MENA by 2029.




Anthropic forms new security council to help secure AI’s place in government

by admin August 27, 2025



On Aug. 27, Anthropic, the company behind Claude, unveiled what it calls its “National Security and Public Sector Advisory Council” — an 11-member body that includes a former U.S. senator and intelligence chief — to guide how its models are deployed in U.S. defense and government applications.

Partnering with the Pentagon

This might look like yet another Beltway advisory board, but it appears to be Anthropic’s way of locking in its place in the compute-hungry, deep-pocketed U.S. national security sector.

Anthropic has already launched Claude Gov, a tuned-down version of its AI that “refuses less” when handling sensitive or classified queries. It has also secured a $200 million prototype contract with the Pentagon’s Chief Digital and Artificial Intelligence Office alongside Google, OpenAI, and xAI. Claude Gov is live in the Lawrence Livermore National Laboratory, and is being offered to federal agencies for a symbolic $1 price tag to spur adoption.

This push toward the public sector matters because training frontier models is now all about infrastructure. Anthropic’s next-gen Claude models will run on “Rainier,” a monster AWS supercluster powered by hundreds of thousands of Trainium 2 chips. Amazon has poured $8 billion into Anthropic and has positioned it as the flagship tenant for its custom silicon. Meanwhile, Anthropic is hedging with Google Cloud, where it taps TPU accelerators and offers Claude on the FedRAMP-compliant Vertex AI platform.

By contrast, OpenAI still relies heavily on Nvidia GPUs via Microsoft Azure — though it has started renting Google TPUs; while Elon Musk’s xAI scrapped its custom Dojo wafer-level processor initiative and fell back on Nvidia and AMD hardware. Google’s DeepMind remains anchored to Google’s in-house TPU pipeline but has kept a lower profile in defense. Neither has assembled anything like Anthropic’s new council, though.

GPUs, geopolitics, and government

Anthropic’s council can also be seen as a sign that access to compute is becoming a national security priority. The Center for a New American Security has already acknowledged that securing and extending the government’s access to compute will play a “decisive role in whether the United States leads the world in AI or cedes its leadership to competitors.”

Nvidia Blackwell GPUs are sold out through most of 2025, export controls are unpredictable, and U.S. agencies are scrambling to secure reliable training capacity. By recruiting insiders from the Department of Energy and the intelligence community, Anthropic is aiming to secure both the hardware and policy headroom it needs to stay competitive.

This strategy is risky: Tying the Claude brand to the Pentagon may alienate some users and could saddle Anthropic with political baggage. But there are also clear rewards, including steady contracts, priority access to chips, and a direct role in shaping public sector AI standards. Someone, somewhere, has made some careful calculations, and Anthropic’s leadership is clearly hoping they’ll pay off.







The Hidden Ingredients Behind AI’s Creativity

by admin August 24, 2025


The original version of this story appeared in Quanta Magazine.

We were once promised self-driving cars and robot maids. Instead, we’ve seen the rise of artificial intelligence systems that can beat us in chess, analyze huge reams of text, and compose sonnets. This has been one of the great surprises of the modern era: physical tasks that are easy for humans turn out to be very difficult for robots, while algorithms are increasingly able to mimic our intellect.

Another surprise that has long perplexed researchers is those algorithms’ knack for their own, strange kind of creativity.

Diffusion models, the backbone of image-generating tools such as DALL·E, Imagen, and Stable Diffusion, are designed to generate carbon copies of the images on which they’ve been trained. In practice, however, they seem to improvise, blending elements within images to create something new—not just nonsensical blobs of color, but coherent images with semantic meaning. This is the “paradox” behind diffusion models, said Giulio Biroli, an AI researcher and physicist at the École Normale Supérieure in Paris: “If they worked perfectly, they should just memorize,” he said. “But they don’t—they’re actually able to produce new samples.”

To generate images, diffusion models use a process known as denoising. They convert an image into digital noise (an incoherent collection of pixels), then reassemble it. It’s like repeatedly putting a painting through a shredder until all you have left is a pile of fine dust, then patching the pieces back together. For years, researchers have wondered: If the models are just reassembling, then how does novelty come into the picture? It’s like reassembling your shredded painting into a completely new work of art.
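A toy version of that loop, stripped of any neural network, looks something like the sketch below. It is a heavily simplified, one-dimensional illustration of the forward (noising) and reverse (denoising) passes, with an exact formula standing in for the learned denoiser; it is not the actual code behind DALL·E or Stable Diffusion.

```python
# Toy 1-D diffusion: repeatedly add noise (forward), then walk samples back
# toward the data distribution (reverse). The "score" here is computed exactly
# for a known Gaussian dataset, standing in for the network a real model trains.
import numpy as np

rng = np.random.default_rng(0)
DATA_MEAN, DATA_STD = 3.0, 0.5      # the toy "dataset": a 1-D Gaussian
STEPS, EPS = 500, 0.05

def forward(x0):
    x = x0.copy()
    for _ in range(STEPS):          # shred the data into noise, step by step
        x = x + EPS * rng.standard_normal(x.shape)
    return x

def reverse(x):
    for _ in range(STEPS):          # denoise: drift toward the data, plus noise
        score = (DATA_MEAN - x) / DATA_STD**2
        x = x + 0.5 * EPS**2 * score + EPS * rng.standard_normal(x.shape)
    return x

noised = forward(rng.normal(DATA_MEAN, DATA_STD, size=10_000))
samples = reverse(noised)
print(f"Denoised samples: mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f} "
      f"(target {DATA_MEAN}, {DATA_STD})")
```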

Now two physicists have made a startling claim: It’s the technical imperfections in the denoising process itself that leads to the creativity of diffusion models. In a paper presented at the International Conference on Machine Learning 2025, the duo developed a mathematical model of trained diffusion models to show that their so-called creativity is in fact a deterministic process—a direct, inevitable consequence of their architecture.

By illuminating the black box of diffusion models, the new research could have big implications for future AI research—and perhaps even for our understanding of human creativity. “The real strength of the paper is that it makes very accurate predictions of something very nontrivial,” said Luca Ambrogioni, a computer scientist at Radboud University in the Netherlands.

Bottoms Up

Mason Kamb, a graduate student studying applied physics at Stanford University and the lead author of the new paper, has long been fascinated by morphogenesis: the processes by which living systems self-assemble.

One way to understand the development of embryos in humans and other animals is through what’s known as a Turing pattern, named after the 20th-century mathematician Alan Turing. Turing patterns explain how groups of cells can organize themselves into distinct organs and limbs. Crucially, this coordination all takes place at a local level. There’s no CEO overseeing the trillions of cells to make sure they all conform to a final body plan. Individual cells, in other words, don’t have some finished blueprint of a body on which to base their work. They’re just taking action and making corrections in response to signals from their neighbors. This bottom-up system usually runs smoothly, but every now and then it goes awry—producing hands with extra fingers, for example.
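For readers who want to see a Turing-style pattern emerge from purely local rules, the sketch below runs a standard Gray-Scott reaction-diffusion simulation, one classic toy model of this kind of self-organization. It is illustrative only; the parameters are common textbook values, not drawn from the paper discussed here.

```python
# Minimal Gray-Scott reaction-diffusion sketch. Each cell updates only from its
# immediate neighbors, yet spots and stripes emerge globally. Parameters are
# standard textbook values, chosen for illustration.
import numpy as np

N, STEPS = 128, 5000
Du, Dv, F, K = 0.16, 0.08, 0.060, 0.062

U = np.ones((N, N))
V = np.zeros((N, N))
U[N//2-8:N//2+8, N//2-8:N//2+8] = 0.50   # seed a perturbation in the middle
V[N//2-8:N//2+8, N//2-8:N//2+8] = 0.25

def laplacian(Z):
    # Five-point stencil with wrap-around edges: purely local information.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(STEPS):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + K) * V

print(f"Pattern contrast (std of V): {V.std():.3f}")   # > 0 means structure formed
```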




Storage, not silicon, will trigger AI’s next breakthrough

by admin August 24, 2025



Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

The debate over graphics processing horsepower is old news. Those who succeed in the future will be those who master where data resides. As global data creation continues to explode, lagging firms will be left locked out of the next stage of innovation.

Summary

  • Data volumes are exploding, with global creation projected to surpass 200 zettabytes by end-2025, more than all prior human output combined.
  • Centralized cloud storage is the AI bottleneck, inflating costs by up to 80% with egress fees and slowing large-scale data transfers to days.
  • Decentralized storage networks offer a fix, sharding data across independent nodes and embedding cryptographic proofs for compliance-ready audit trails.
  • Regulation like the EU AI Act raises the stakes, forcing provable data provenance—making storage a strategic priority, not a background utility.

Data creation is projected to crest 200 zettabytes worldwide by the end of 2025; that’s enough to stream every film ever made more than 100 billion times, and more digital matter than humankind generated in all prior years combined.

In tandem with this surge, research teams have revealed the first publicly released trillion-parameter language model. This behemoth, whose training corpus alone would have filled entire national archives ten years ago, is the kind of leviathan that consumes petabytes an hour.

Without storage pipelines that can ingest, stage, and stream data at these newfound scales, even the fastest processors will suffer in idle frustration.

Centralized clouds are the new bottleneck

Most organizations still rely on a handful of hyperscale storage silos engineered for web apps, not frontier science. The financial drag is brutal.

An industry audit published in April revealed that hidden egress and retrieval charges can increase real storage costs by up to 80%, rendering routine model retraining a budget-breaking endeavor. Worse, moving tens of petabytes across regions can take days, an eternity when competitive advantage is measured in iteration cycles.
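To see how quickly those hidden fees compound, consider a toy bill under assumed prices. The per-gigabyte rates and the fraction of the corpus pulled out each month below are illustrative assumptions, not quoted vendor pricing.

```python
# Illustrative storage bill showing how egress/retrieval fees stack on top of
# the base rate. All prices and volumes are assumptions for illustration only.
STORED_TB = 500                       # size of the training corpus
STORAGE_PER_GB_MONTH = 0.021          # assumed base storage rate, USD
EGRESS_PER_GB = 0.08                  # assumed egress/retrieval rate, USD
EGRESS_FRACTION_PER_MONTH = 0.20      # share of the corpus pulled out monthly

stored_gb = STORED_TB * 1_000
storage_cost = stored_gb * STORAGE_PER_GB_MONTH
egress_cost = stored_gb * EGRESS_FRACTION_PER_MONTH * EGRESS_PER_GB

print(f"Base storage: ${storage_cost:,.0f}/month")
print(f"Egress fees:  ${egress_cost:,.0f}/month "
      f"(+{egress_cost / storage_cost:.0%} on top of the base bill)")
```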

Centralization, therefore, does more than inflate invoices; it embeds inequality directly into the artificial intelligence economy, giving incumbents with deeper pockets an automatic edge. In response, a different blueprint is gaining momentum: decentralized storage networks that shard data across thousands of independent nodes, putting participants on more equal footing.

Equally vital is the need for transparent audit trails that satisfy looming disclosure mandates on how training data is sourced, curated, and governed. Ultimately, regulation will be the deciding factor in whether tomorrow’s models see the light of day or face litigation.

Storage’s new stress test

Real-time AI now extends far beyond the data center wall, pushing models to factory floors, hospitals, and autonomous vehicles. At these edges, a millisecond lost to sluggish I/O can trigger a production fault or a safety risk. 

The latest MLPerf Storage v2.0 benchmarks prove the strain: checkpointing a GPT-class workload across 10,000 accelerators now takes 37 seconds, and even a 100,000-GPU supercluster still stalls for 4.4 seconds while waiting on disks rather than performing mathematical operations. 

Unless pipelines can deliver petabytes in bursts and then replicate the same data to thousands of micro-sites, ‘edge-AI’ will remain more keynote than reality. Analysts are already echoing the warning that storage throughput, not memory or networking, will be the number one bottleneck throttling next-gen clusters.

Regulation adds another layer of urgency: the European Union’s AI Act entered its second enforcement wave on August 2, forcing general-purpose model providers to document every shard of training data or risk fines of up to 7% of global turnover.

Centralized silos struggle to satisfy this mandate. Duplicative copies blur provenance, and opaque egress logs make audit trails a nightmare for accountants. By contrast, decentralized networks embed cryptographic proofs of replication into their very fabric, turning compliance into a byproduct rather than an expensive add-on.
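A greatly simplified sketch of that idea appears below: the client keeps only per-chunk hash commitments and can later spot-check, with a random challenge, that a node still holds the exact bytes. Real schemes such as proof-of-replication are far more involved; this illustrates the audit-trail principle, not any specific network's protocol.

```python
# Simplified spot-check audit: the client stores per-chunk hashes, then
# challenges the node for a random chunk and verifies it. Illustrative only;
# real proof-of-replication protocols are far more elaborate.
import hashlib
import random

CHUNK = 256 * 1024  # 256 KiB chunks, arbitrary for the sketch

def commitments(data: bytes) -> list:
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def audit(node_storage: bytes, commits: list) -> bool:
    idx = random.randrange(len(commits))                  # random challenge
    chunk = node_storage[idx * CHUNK:(idx + 1) * CHUNK]   # node's response
    return hashlib.sha256(chunk).hexdigest() == commits[idx]

data = b"training corpus shard " * 100_000   # stand-in for real stored bytes
commits = commitments(data)                  # client keeps only these hashes
print("Audit passed:", audit(data, commits)) # True while the node keeps the data
```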

Ignore storage at your peril

With edge latency measured in microseconds and legal penalties measured in billions, storage is no longer a background utility; it is the only substrate on which tomorrow’s AI can legally and physically run. Firms that still treat capacity as a commodity line item are courting technical debt and regulatory shock in equal measure.

Compute innovation will continue to steal headlines, but without an equally radical rethink of where (and how) data resides, the most advanced silicon will sit idle as costs and compliance risks spiral.

The race for AI dominance is on, and it will be won by those who elevate storage to a first-class strategic priority, embrace decentralization, and build audit-ready pipelines that scale from core to edge. Everyone else will discover that no amount of GPU power can outrun a bottleneck built into the very foundations of their stack.

Kai Wawrzinek

Kai Wawrzinek is a co-founder of the Impossible Cloud & Impossible Cloud Network. He is a seasoned entrepreneur with a Ph.D. in Law and a proven track record of building successful ventures. Recognizing the need for enterprise-grade solutions in the web3 space, Kai founded Impossible Cloud Network (ICN), a decentralized cloud platform aimed at creating a decentralized alternative to AWS. Before ICN, Kai founded Goodgame Studios, an online game company, grew it to over 1,000 employees and more than €1 billion in revenue, and took it public on Nasdaq in 2018 through a reverse merger.



