Tag: models

PS5 gets a price hike in the US and none of the models are safe
Game Reviews

by admin August 20, 2025


Sony has announced that it is raising the price of the PS5 in the US. The price increase of $50 affects all PS5 models and will come into effect from 21st August.

Similar to many global businesses, we continue to navigate a challenging economic environment. As a result, we’ve made the difficult decision to increase the recommended retail price for PlayStation 5 consoles in the U.S. starting on August 21.

The updated recommended retail prices for PlayStation 5 consoles in the U.S. are as follows:

  • PlayStation 5 – $549.99
  • PlayStation 5 Digital Edition – $499.99
  • PlayStation 5 Pro – $749.99

Sony stated that its prices for PS5 accessories are unchanged, and that there are no price changes to announce for additional markets.

The PS5 has already seen price rises in other markets, with the UK getting its own hike back in April this year. Australia and New Zealand also got the news back then that they’d have to pay more to own a PS5.

This is a news-in-brief story, part of our ongoing effort to bring you all the big news in a daily live report.



Do Large Language Models Dream of AI Agents?
Gaming Gear

by admin August 20, 2025


During sleep, the human brain sorts through different memories, consolidating important ones while discarding those that don’t matter. What if AI could do the same?

Bilt, a company that offers local shopping and restaurant deals to renters, recently deployed several million agents with the hopes of doing just that.

Bilt uses technology from a startup called Letta that allows agents to learn from previous conversations and share memories with one another. Using a process called “sleeptime compute,” the agents decide what information to store in their long-term memory vault and what might be needed for faster recall.

“We can make a single update to a [memory] block and have the behavior of hundreds of thousands of agents change,” says Andrew Fitz, an AI engineer at Bilt. “This is useful in any scenario where you want fine-grained control over agents’ context,” he adds, referring to the text prompt fed to the model at inference time.
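As a rough illustration of that idea, here is a minimal Python sketch (hypothetical names throughout, not Letta’s actual API): many agents reference one shared memory block, so a single edit to the block changes the context every agent assembles at inference time.

```python
# Hypothetical sketch, not Letta's actual API: agents reference a shared
# memory block, so editing the block once changes every agent's next prompt.
from dataclasses import dataclass


@dataclass
class MemoryBlock:
    label: str
    text: str


@dataclass
class Agent:
    name: str
    shared_blocks: list  # references to shared blocks, not copies

    def build_context(self, user_message: str) -> str:
        # Assemble the prompt fed to the model: shared memory, then the message.
        memory = "\n".join(f"[{b.label}] {b.text}" for b in self.shared_blocks)
        return f"{memory}\n\nUser: {user_message}"


# One block referenced by many agents.
merchant_policy = MemoryBlock("merchant_policy", "Offer the standard rewards rate.")
agents = [Agent(f"agent-{i}", [merchant_policy]) for i in range(3)]

# A single update propagates to every agent's next context window.
merchant_policy.text = "Promote the double-points weekend offer."
print(agents[0].build_context("Any deals near me?"))
```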

Large language models can typically only “recall” things if information is included in the context window. If you want a chatbot to remember your most recent conversation, you need to paste it into the chat.

Most AI systems can only handle a limited amount of information in the context window before their ability to use the data falters and they hallucinate or become confused. The human brain, by contrast, is able to file away useful information and recall it later.

“Your brain is continuously improving, adding more information like a sponge,” says Charles Packer, Letta’s CEO. “With language models, it’s like the exact opposite. You run these language models in a loop for long enough and the context becomes poisoned; they get derailed and you just want to reset.”

Packer and his cofounder Sarah Wooders previously developed MemGPT, an open-source project that aimed to help LLMs decide what information should be stored in short-term vs. long-term memory. With Letta, the duo has expanded their approach to let agents learn in the background.

Bilt’s collaboration with Letta is part of a broader push to give AI the ability to store and recall useful information, which could make chatbots smarter and agents less error-prone. Memory remains underdeveloped in modern AI, which undermines the intelligence and reliability of AI tools, according to experts I spoke to.

Harrison Chase, cofounder and CEO of LangChain, another company that has developed a method for improving memory in AI agents, says he sees memory as a vital part of context engineering—wherein a user or engineer decides what information to feed into the context window. LangChain offers companies several different kinds of memory storage for agents, from long-term facts about users to memories of recent experiences. “Memory, I would argue, is a form of context,” Chase says. “A big portion of an AI engineer’s job is basically getting the model the right context [information].”

Consumer AI tools are gradually becoming less forgetful, too. This February, OpenAI announced that ChatGPT will store relevant information in order to provide a more personalized experience for users—although the company did not disclose how this works.

Letta and LangChain make the process of recall more transparent to engineers building AI systems.

“I think it’s super important not only for the models to be open but also for the memory systems to be open,” says Clem Delangue, CEO of the AI hosting platform Hugging Face and an investor in Letta.

Intriguingly, Letta’s CEO Packer hints that it might also be important for AI models to learn what to forget. “If a user says, ‘that one project we were working on, wipe it out from your memory,’ then the agent should be able to go back and retroactively rewrite every single memory.”

The notion of artificial memories and dreams makes me think of Do Androids Dream of Electric Sheep? by Philip K. Dick, a mind-bending novel that inspired the stylishly dystopian movie Blade Runner. Large language models aren’t yet as impressive as the rebellious replicants of the story, but their memories, it seems, can be just as fragile.

This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.



The Pixel 9 Pro Fold is $600 off ahead of the new model’s debut
Gaming Gear

by admin August 18, 2025


Some people like to stay on the cutting edge of technology. If that’s you, you already know that Google is going to officially unveil its Pixel 10 phones (including a new Fold) during its August 20th event. But if you’re someone who prefers to use these opportunities to save on last-gen tech, you’ve been rewarded with a sweet deal on the Pixel 9 Pro Fold. The base 256GB version in obsidian (black) is down to $1,199 at Amazon and Best Buy, which is 33 percent off its original $1,799 price. It was selling for $100 more just last week.

The Pixel 9 Pro Fold was a big leap forward compared to Google’s first-gen foldable, touting a lightweight design that looks and feels like a normal phone before you unfold it. One of its main drawbacks was that, despite its high original price, you don’t get the same great cameras as the cheaper Pixel 9 Pro (which, by the way, is only $599 at Best Buy). They’re still good, mind you, but reviewer Allison Johnson said during her testing that its telephoto lens produced softer images by comparison, and low-light photos were less detailed. Drawbacks aside, she was eager to recommend the foldable phone at its original $1,800 price.

The Pixel 10 Pro Fold is nearly upon us, and it’ll likely deliver some sizeable improvements. Google itself has already shown off its design in YouTube clips, and it’s very similar to the model on sale now. However, there’s a rumor that it might have an IP68 rating, which would make it dustproof. Dust was the kryptonite for early foldable models, but the tech has apparently come a long way since then, so this would be a huge selling point in favor of the new model if it’s true.

Why Some AI Models Spew 50 Times More Greenhouse Gas to Answer the Same Question
Product Reviews

by admin June 19, 2025


Like it or not, large language models have quickly become embedded into our lives. And due to their intense energy and water needs, they might also be causing us to spiral even faster into climate chaos. Some LLMs, though, might be releasing more planet-warming pollution than others, a new study finds.

Queries made to some models generate up to 50 times more carbon emissions than others, according to a new study published in Frontiers in Communication. Unfortunately, and perhaps unsurprisingly, models that are more accurate tend to have the biggest energy costs.

It’s hard to estimate just how bad LLMs are for the environment, but some studies have suggested that training ChatGPT used up to 30 times more energy than the average American uses in a year. What isn’t known is whether some models have steeper energy costs than their peers as they’re answering questions.

Researchers from the Hochschule München University of Applied Sciences in Germany evaluated 14 LLMs ranging from 7 to 72 billion parameters—the levers and dials that fine-tune a model’s understanding and language generation—on 1,000 benchmark questions about various subjects.

LLMs convert each word or part of a word in a prompt into units called tokens, each mapped to a numerical ID the model can process. Some LLMs, particularly reasoning LLMs, also insert special “thinking tokens” into the input sequence to allow for additional internal computation and reasoning before generating output. This conversion and the subsequent computations that the LLM performs on the tokens use energy and release CO2.
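To make that concrete, here is a minimal sketch assuming the open-source tiktoken package is installed; other models use different vocabularies and would split the same text differently.

```python
# Minimal tokenization example, assuming `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("How much CO2 does one prompt emit?")
print(tokens)                              # the numeric IDs the model processes
print([enc.decode([t]) for t in tokens])   # the text fragment behind each ID
```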

The scientists compared the number of tokens generated by each of the models they tested. Reasoning models, on average, created 543.5 thinking tokens per question, whereas concise models required just 37.7 tokens per question, the study found. In the ChatGPT world, for example, GPT-3.5 is a concise model, whereas GPT-4o is a reasoning model.
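A quick back-of-the-envelope calculation on those two averages puts the gap in perspective:

```python
# Ratio of the study's reported averages (figures from the paragraph above).
reasoning_tokens, concise_tokens = 543.5, 37.7
print(f"{reasoning_tokens / concise_tokens:.1f}x more tokens per question")  # ~14.4x
```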

This reasoning process drives up energy needs, the authors found. “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach,” study author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a statement. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”

The more accurate the models were, the more carbon emissions they produced, the study found. The reasoning model Cogito, which has 70 billion parameters, reached up to 84.9% accuracy—but it also produced three times more CO2 emissions than similarly sized models that generate more concise answers.

“Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Dauner. “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.” CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

Another factor was subject matter. Questions that required detailed or complex reasoning, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, according to the study.

There are some caveats, though. Emissions are very dependent on how local energy grids are structured and which models you examine, so it’s unclear how generalizable these findings are. Still, the study authors said they hope the work will encourage people to be “selective and thoughtful” about their LLM use.

“Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner said in a statement.



How Will the Israel-Iran Conflict End? Here’s What AI Models Predict
Crypto Trends

by admin June 14, 2025



In brief

  • Six out of seven top AI models predict a prolonged shadow war between Israel and Iran, marked by airstrikes, cyberattacks, and proxy battles—but stopping short of full-scale war.
  • Mutual deterrence, U.S. restraint, and survival instincts are seen as key forces preventing escalation, though all models warn of risks from miscalculation, nuclear pressure, and proxy overreach.
  • Only ChatGPT forecasts a near-term diplomatic resolution, envisioning quiet negotiations and a revived nuclear deal—making it the lone optimist in a chorus of strategic pessimism.

With direct military confrontation now underway between Israel and Iran, we asked seven AI models to analyze potential outcomes using their web search functionalities, activating their deep reasoning capabilities and acting as experts in geopolitics, global warfare, and Middle East conflicts.

The consensus

Six of seven models predicted continued, intermittent warfare rather than a diplomatic breakthrough or World War III. Only ChatGPT went full John Lennon mode and forecast a rapid negotiated resolution. The models agreed on key constraining factors: the catastrophic consequences of full war, U.S. reluctance for direct involvement, and rational survival calculations by both sides.

Common warnings included miscalculation risks, nuclear timeline urgency, and potential for proxy groups to trigger unwanted escalation. Timeline predictions ranged from three to 24 months of sustained, low-intensity conflict.

As Manus, one of the first “agentic” AI systems, summarized: “Elevated but manageable tensions: Rhetoric will often be belligerent, but actions will remain below the threshold of a large-scale open war.” The AI consensus suggests the shadow war will continue in daylight—more violent than before, but still governed by mutual deterrence and survival instincts.

Here’s what each model predicted, in more detail:

Google Gemini

  • Most likely outcome: Protracted, controlled escalation.
  • Timeframe: 12–24 months
  • Key insight: Conflict will remain volatile and recalibrated with each strike; Israel will continue its “mowing the grass” strategy of repeated tactical strikes.
  • Warning: Red lines will erode over time, increasing risk of unintentional escalation.

Gemini produced the most comprehensive assessment, organizing its analysis into detailed scenarios with extensive historical context. The model identified three primary trajectories with careful probability assessments.

For its highest-probability scenario—”protracted, controlled escalation”—Gemini said: “The ‘control’ in this scenario is relative and subject to constant recalibration by both sides, making the situation volatile and unpredictable. Each escalatory cycle within this scenario will further test red lines and potentially erode existing restraints.”

The model referenced Israel’s “mowing the grass” doctrine, describing periodic military operations designed to degrade threats with small but continuous attacks. Gemini projected this pattern would continue for 12-24 months, warning that “the cumulative effect of repeated strikes and retaliations can also lead to an erosion of previously respected red lines.”

“While the ‘protracted, controlled escalation’ scenario is deemed most likely, it is crucial to recognize that this is not a stable or benign state of affairs. It implies a persistent state of high tension, characterized by periodic military strikes, covert operations, cyber warfare, and proxy engagements,” it warned.



Anthropic Claude

  • Most likely outcome: Sustained military campaign.
  • Probability: 50–60%
  • Key insight: Iran’s need to retaliate + Israel’s opportunity for decisive action = extended combat.
  • Warning: Iran’s short breakout time (~25 days) could force strategic miscalculation or preemptive strikes.

Claude approached the crisis more like a military analyst than a diplomat, assigning specific probabilities and identifying concrete indicators for each scenario. The model gave “sustained military campaign” a 50-60% probability rating.

“Iran cannot accept nuclear program degradation without response, while Israel views current window as optimal for decisive action,” Claude stated. The model highlighted a critical factor: “Iran’s technical capability to rapidly weaponize creates potential for sudden strategic shift that could either deter further Israeli action or provoke preemptive escalation.”

Claude’s analysis included specific warning signs to monitor, from Strait of Hormuz closure attempts to uranium enrichment acceleration. The model noted Iran’s “25-day breakout capability” as providing both “escalatory leverage and urgency for decisive action.”

OpenAI ChatGPT

  • Most likely outcome: Diplomatic resolution.
  • Probability: High
  • Key insight: Iran’s restrained retaliation and appeal to the UN indicate preference for diplomacy.
  • Prediction: Quiet negotiations via Oman or Qatar; possible updated nuclear deal within weeks.

ChatGPT provided the most optimistic assessment, rating diplomatic resolution as “high probability” despite acknowledging severe military risks. The model outlined how escalation could unfold but consistently returned to negotiation possibilities.

“Tehran’s initial retaliation was symbolically fierce but ultimately limited in effect, suggesting a reluctance to escalate to a point of no return. The fact that Iran resorted to the UN and is engaging diplomatically (even if only to condemn Israel) shows it has not slammed the door on political paths,” ChatGPT observed. The model envisioned “quiet negotiations possibly mediated by Oman or Qatar, where Iran agrees to stringent limitations on its nuclear program.”

ChatGPT quoted President Trump’s statement that it’s “not too late” for a deal, interpreting this as evidence that “Washington is poised to broker such a settlement rather than pursue regime destruction.” The model predicted resolution within weeks through “a compromise–perhaps an updated nuclear accord.”

xAI Grok

  • Most likely outcome: Limited conflict with periodic flare-ups.
  • Timeframe: 3–6 months
  • Key insight: Historical precedent favors avoidance of full-scale war; rational deterrence dominates.
  • Warning: Wildcards include high-casualty strikes or new actors (e.g., Gulf states or Russia) entering the fray.

Grok’s research mode seems to have degraded in performance in recent weeks. We activated “Deeper Search,” which is supposed to use more resources for a more extensive report. However, the model delivered a very concise analysis focused on historical patterns and practical indicators. The model rated “limited conflict” as high probability based on precedent.

“Israel and Iran have engaged in proxy conflicts and limited direct strikes for decades, avoiding full-scale war due to mutual deterrence and international pressures,” Grok noted. The model projected 3-6 months of “periodic flare-ups of airstrikes, missile attacks, or proxy engagements.”

Grok identified specific wildcards including “a successful Iranian missile strike causing significant Israeli casualties could prompt a disproportionate response,” and potential “Gulf state or Russia escalating support.” The model emphasized that “both Israel and Iran act rationally, prioritizing survival over ideological goals.”

Manus

  • Most likely outcome: Controlled escalation and shadow war.
  • Probability: Medium–High
  • Timeframe: 12–24 months
  • Key insight: Pattern of targeted Israeli strikes and Iranian proxy response will continue.
  • Emphasis: Indirect communication channels help avoid catastrophic misunderstandings.

Manus, one of the first agentic models, was a surprise. It not only produced a full research report but also generated a website for easier understanding, and let us watch the research process unfold in real time.

Acting as an expert, Manus offered a more nuanced assessment of Middle Eastern conflict dynamics, rating “controlled escalation and intensified shadow war” as medium-high probability over 12-24 months.

“This scenario reflects the historical pattern of confrontation between Israel and Iran, where both actors seek to inflict damage and deter each other without triggering an all-out war that neither can afford,” Manus explained. The model predicted Israel would “continue to carry out covert operations and selective airstrikes,” while Iran would respond “with a combination of missiles and drones, and through its proxies.”

Manus emphasized communication channels: “Although there is no direct dialogue, it is assumed that indirect channels exist (through third countries or intelligence) that allow both sides to communicate ‘red lines’ and avoid catastrophic misunderstandings.”

DeepSeek

  • Most likely outcome: A prolonged, covert conflict between Israel and Iran.
  • Probability: 60%
  • Timeframe: Short-term (0–3 months) to mid-term (4–12 months)
  • Key insight: Iran is strategically constrained and unlikely to engage in direct war.
  • Emphasis or warning: Cyberwarfare and proxy actions will intensify without direct U.S. military involvement.

China’s DeepSeek does not have a research mode, but we combined its web search with reasoning capabilities. The result was a data-heavy analysis, assigning “protracted covert conflict” a 60% probability.

The timeline included granular predictions: “Short-Term (0-3 months): Iranian terror attacks in Europe; Israeli strikes on missile factories. Mid-Term (4-12 months): Cyberwar escalates; IAEA confirms Iranian uranium enrichment halt.”

DeepSeek noted constraints, including that “Iran lacks conventional capacity for direct war (air force outdated; proxies weakened)” and “U.S. avoids ground involvement; focuses on force protection.”

This cyberattack lets hackers crack AI models just by changing a single character
Gaming Gear

by admin June 13, 2025



  • Researchers from HiddenLayer devised a new LLM attack called TokenBreak
  • By adding or changing a single character, they can bypass certain protections
  • The underlying LLM still understands the intent

Security researchers have found a way to work around the protection mechanisms baked into some Large Language Models (LLMs) and get them to respond to malicious prompts.

Kieran Evans, Kasimir Schulz, and Kenneth Yeung from HiddenLayer published an in-depth report on a new attack technique they dubbed TokenBreak, which targets the way certain LLMs tokenize text, particularly those using Byte Pair Encoding (BPE) or WordPiece tokenization strategies.

Tokenization is the process of breaking text into smaller units called tokens, which can be words, subwords, or characters, and which LLMs use to understand and generate language – for example, the word “unhappiness” might be split into “un,” “happi,” and “ness,” with each token then being converted into a numerical ID that the model can process (since LLMs don’t read raw text, but numbers, instead).



What are the finstructions?

By adding extra characters into key words (like turning “instructions” into “finstructions”), the researchers managed to trick protective models into thinking the prompts were harmless.

The underlying target LLM, on the other hand, still interprets the original intent, allowing the researchers to sneak malicious prompts past defenses, undetected.

This could be used, among other things, to bypass AI-powered spam email filters and land malicious content in people’s inboxes.

For example, if a spam filter was trained to block messages containing the word “lottery”, it might still let a message saying “You’ve won the slottery!” through, exposing the recipients to potentially malicious landing pages, malware infections, and similar threats.
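To see how one extra character changes what a protective model’s tokenizer actually receives, here is a rough sketch assuming the Hugging Face transformers package and the bert-base-uncased WordPiece tokenizer; it illustrates the mechanism only, is not HiddenLayer’s test setup, and exact splits vary by model.

```python
# Rough illustration, assuming `pip install transformers` and the
# bert-base-uncased WordPiece vocabulary; not HiddenLayer's actual harness.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["instructions", "finstructions", "lottery", "slottery"]:
    # A classifier keyed to tokens like "instructions" or "lottery" may never
    # see them once a single added character changes how the word is split.
    print(word, "->", tok.tokenize(word))
```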


“This attack technique manipulates input text in such a way that certain models give an incorrect classification,” the researchers explained.

“Importantly, the end target (LLM or email recipient) can still understand and respond to the manipulated text and therefore be vulnerable to the very attack the protection model was put in place to prevent.”

Models using Unigram tokenizers were found to be resistant to this kind of manipulation, HiddenLayer added. So one mitigation strategy is to choose models with more robust tokenization methods.

Via The Hacker News

Big Brains, Tiny Models: Spain’s Multiverse Computing Bags $215M to Shrink AI for Smartphones
Crypto Trends

by admin June 13, 2025



In brief

  • Multiverse’s CompactifAI tech reportedly slashed parameter count by 70% and model memory by 93%, while preserving 97–98% accuracy.
  • The company just closed a $215M Series B round backed by Bullhound Capital, HP Tech Ventures, and Toshiba.
  • The method uses tensor networks from quantum physics to compress models and “heals” them with fast retraining, claiming 50% faster performance at inference.

A Spanish AI startup has just convinced investors to hand over $215 million based on a bold claim: they can shrink large language models by 95% without compromising their performance.

Multiverse Computing’s innovation hinges on its CompactifAI technology, a compression method that borrows mathematical concepts from quantum physics to shrink AI models down to smartphone size.

The San Sebastian company says that their compressed Llama-2 7B model runs 25% faster at inference while using 70% fewer parameters, with accuracy dropping just 2-3%.

If validated at scale, this could address AI’s elephant-sized problem: models so massive they require specialized data centers just to operate.

“For the first time in history, we are able to profile the inner workings of a neural network to eliminate billions of spurious correlations to truly optimize all sorts of AI models,” Román Orús, Multiverse’s chief scientific officer, said in a blog post on Thursday.

Bullhound Capital led the $215 million Series B round with backing from HP Tech Ventures and Toshiba.

The Physics Behind the Compression

Applying quantum-inspired concepts to tackle one of AI’s most pressing issues sounds improbable—but if the research holds up, it’s real.

Unlike traditional compression that simply cuts neurons or reduces numerical precision, CompactifAI uses tensor networks—mathematical structures that physicists developed to track particle interactions without drowning in data.

The process works like origami for AI models: weight matrices get folded into smaller, interconnected structures called Matrix Product Operators.

Instead of storing every connection between neurons, the system preserves only meaningful correlations while discarding redundant patterns, like information or relationships that are repeated over and over again.
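CompactifAI’s tensor-network math isn’t public, but the flavor of “keep the meaningful correlations, drop the redundant ones” can be sketched with a plain truncated SVD on a single weight matrix; this is a deliberately simplified analogy, not Multiverse’s actual method.

```python
# Simplified low-rank analogy (truncated SVD), not Multiverse's actual
# tensor-network method: keep the strongest correlations, discard the rest.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))       # a dense weight matrix

U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 64                                      # rank kept after truncation
factors = (U[:, :k], S[:k], Vt[:k, :])      # store small factors instead of W

ratio = sum(a.size for a in factors) / W.size
print(f"parameters kept: {ratio:.1%}")      # ~12.5% of the original
```

Matrix Product Operators extend the same idea, factoring a large tensor into a chain of small ones rather than a single low-rank pair.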

Multiverse discovered that AI models aren’t uniformly compressible. Early layers prove fragile, while deeper layers—recently shown to be less critical for performance—can withstand aggressive compression.

This selective approach lets them achieve dramatic size reductions where other methods fail.

After compression, models undergo brief “healing”—retraining that takes less than one epoch thanks to the reduced parameter count. The company claims this restoration process runs 50% faster than training original models due to decreased GPU-CPU transfer loads.

Long story short—per the company’s own claims—you start with a model, run the CompactifAI magic, and end up with a compressed version that has less than 50% of its parameters, can run at twice the inference speed, costs a lot less, and is just as capable as the original.

In its research, the team shows you can reduce the Llama-2 7B model’s memory needs by 93%, cut the number of parameters by 70%, speed up training by 50%, and speed up answering (inference) by 25%—while only losing 2–3% accuracy.

Traditional shrinking methods like quantization (reducing the precision like using fewer decimal places), pruning (cutting out less important neurons entirely, like trimming dead branches from a tree), or distillation techniques (training a smaller model to mimic a larger one’s behavior) are not even close to achieving these numbers.
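For contrast, here is what the simplest of those traditional methods, quantization, looks like in toy form; production libraries quantize per channel and use calibration data, so treat this only as a sketch of the idea.

```python
# Toy int8 quantization: store weights at lower precision, scale back at use.
import numpy as np

w = np.random.randn(4, 4).astype(np.float32)
scale = np.abs(w).max() / 127.0               # map the float range onto int8
w_int8 = np.round(w / scale).astype(np.int8)  # 4x smaller than float32
w_restored = w_int8.astype(np.float32) * scale
print("max rounding error:", np.abs(w - w_restored).max())
```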



Multiverse already serves over 100 clients including Bosch and Bank of Canada, applying their quantum-inspired algorithms beyond AI to energy optimization and financial modeling.

The Spanish government co-invested €67 million in March, pushing total funding above $250 million.

Currently offering compressed versions of open-source models like Llama and Mistral through AWS, the company plans to expand to DeepSeek R1 and other reasoning models.

Proprietary systems such as OpenAI’s GPT models or Anthropic’s Claude obviously remain off-limits, since they are not available for tinkering or study.

The technology’s promise extends beyond cost savings. HP Tech Ventures’ involvement signals interest in edge AI deployment—running sophisticated models locally rather than on cloud servers.

“Multiverse’s innovative approach has the potential to bring AI benefits of enhanced performance, personalization, privacy and cost efficiency to life for companies of any size,” Tuan Tran, HP’s President of Technology and Innovation, said.

So, if you find yourself running DeepSeek R1 on your smartphone someday, these dudes may be the ones to thank.

Edited by Josh Quittner and Sebastian Sinclair

Apple says generative AI cannot think like a human – research paper pours cold water on reasoning models
Product Reviews

by admin June 9, 2025



Apple researchers have tested advanced AI reasoning models — known as large reasoning models (LRMs) — in controlled puzzle environments and found that while they outperform ‘standard’ large language models (LLMs) on moderately complex tasks, both fail completely as complexity increases.

The researchers from Apple, which is not exactly at the forefront of AI development, believe that current LRMs and LLMs have fundamental limits in their ability to generalize reasoning, or rather to think the way humans do.

Apple researchers studied how advanced AI models — the Claude 3.7 Sonnet Thinking and DeepSeek-R1 LRMs — handle increasingly complex problem-solving tasks. They moved beyond standard math and coding benchmarks and designed controlled puzzle environments, such as Tower of Hanoi and River Crossing, where they could precisely adjust problem complexity. Their goal was to evaluate not just final answers but also the internal reasoning processes of these models, comparing them to standard large language models under equal computational conditions. Through the puzzles, they aimed to uncover the true strengths and fundamental limits of AI reasoning.
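Tower of Hanoi makes that complexity dial explicit: the optimal solution needs 2^n - 1 moves, so every extra disk doubles the work. A short sketch (not Apple’s actual harness) shows the scaling:

```python
# Tower of Hanoi: minimum moves grow as 2**n - 1, doubling with each disk.
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    moves = [] if moves is None else moves
    if n == 1:
        moves.append((src, dst))
    else:
        hanoi(n - 1, src, dst, aux, moves)   # park n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # bring the n-1 disks back on top
    return moves

for disks in (3, 7, 10):
    print(disks, "disks ->", len(hanoi(disks)), "moves")  # 7, 127, 1023
```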



Apple researchers discovered that LRMs perform differently depending on problem complexity. On simple tasks, standard LLMs, without explicit reasoning mechanisms, were more accurate and efficient and delivered better results with fewer compute resources. However, as problem complexity increased to a moderate level, models equipped with structured reasoning, like Chain-of-Thought prompting, gained the advantage and outperformed their non-reasoning counterparts. When the complexity grew further, both types of models failed completely: their accuracy dropped to zero regardless of the available compute resources. (Keep in mind that the Claude 3.7 Sonnet Thinking and DeepSeek-R1 LRMs have limitations when it comes to their training.)

A deeper analysis of the reasoning traces revealed inefficiencies and unexpected behavior. Initially, reasoning models used longer thought sequences as problems became harder, but near the failure point, they surprisingly shortened their reasoning effort even when they had sufficient compute capacity left. Moreover, even when explicitly provided with correct algorithms, the models failed to reliably execute step-by-step instructions on complex tasks, exposing weaknesses in logical computation. The study also found that model performance varied significantly between familiar and less-common puzzles, suggesting that success often depended on training data familiarity rather than true generalizable reasoning skills.




Near Protocol tests new AI models; Lightchain AI delivers complete presale roadmap execution
GameFi Guides

by admin June 2, 2025



Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

Near Protocol tests AI for future scalability as Lightchain AI completes all 15 presale phases and enters final bonus round.

Near Protocol is testing new AI models to enhance its blockchain capabilities, focusing on innovation and scalability. Meanwhile, Lightchain AI is delivering a complete presale roadmap execution, having successfully completed all 15 presale stages and now entering the Bonus Round. This milestone reflects strong investor confidence and growing momentum for its AI-native blockchain platform.

Featuring a dedicated virtual machine and a consensus model that rewards meaningful computation, Lightchain AI is attracting developers and investors seeking real utility. As the July 2025 mainnet launch approaches, Lightchain AI stands out for its disciplined execution and forward-looking architecture.

Near Protocol experiments with AI integration for future scalability

NEAR Protocol is leading the AI-plus-blockchain push to usher in a new era of Web3 scalability and improved user experience. At the core of this effort is NEAR’s Nightshade 2.0 sharding system, which seeks to reach 1 million TPS by the end of 2025. This scalability is vital for enabling AI agents, micropayments, and decentralized services with fast processing times and low fees.

For AI-native applications, NEAR has created the Blockchain Operating System (BOS), which gives developers resources to work on encrypted model execution and verifiable compute. Further, NEAR’s intent-based architecture simplifies user interactions by routing them through AI agents that handle transactions across multiple chains, sparing everyday users from having to deal with wallets or tokens.

DEW has also shown its commitment to decentralized AI through multi-chain partnerships, including Meta Pool, centered on data sovereignty and governance. These efforts have helped put NEAR at the forefront of scalable, AI-powered blockchain solutions, reimagining how we interact with digital services in the era of AI.

Lightchain AI executes all 15 presale phases with precision

Lightchain AI has executed all 15 presale phases with precision and clarity, raising $21 million and transitioning seamlessly into its Bonus Round. This consistent delivery has built trust among investors and developers alike.

As the mainnet launch approaches, decentralized validator and contributor nodes are being prepared to support secure, scalable participation. Lightchain’s public repository will be released at launch, ensuring full transparency and enabling global collaboration.

Simultaneously, the Meme Launchpad and ecosystem tools will go live, empowering memecoin creators with native tooling, liquidity incentives, and access to grant programs. Developers can also access the full Developer Portal and technical documentation, making it easier than ever to build on Lightchain’s AI-integrated infrastructure. This is more than a launch: it’s a full-stack ecosystem activation.

Investors jump on Lightchain AI for gains 

Ready to ride the wave of massive gains? Lightchain AI is here to shake things up with its genius blend of AI and blockchain. Think disruption, innovation, and a whole lot of opportunities for creators, developers, and businesses. 

With access to the Meme Launchpad and ecosystem tools, users can start flexing Lightchain AI’s features today. This means memecoin creation tools, liquidity boosts to take projects to the moon, and grants to fuel creators’ wildest ideas. 

For more information on Lightchain AI, visit its website, X, or Telegram.

Disclosure: This content is provided by a third party. crypto.news does not endorse any product mentioned on this page. Users must do their own research before taking any actions related to the company.



RX 9060 XT 16GB graphics cards with faster clocks than 8GB models appear to be in development
Product Reviews

by admin June 2, 2025



It appears that memory will not be the only differentiator between AMD’s RX 9060 XT 16GB and 8GB GPUs. According to VideoCardz, PowerColor is preparing certain RX 9060 XT models, where the 16GB variant features slightly higher clock speeds than its 8GB counterpart. While we checked models from other AIBs, most have not yet listed their frequencies, so we will likely need to wait until the embargo lifts to confirm wider adoption of this strategy.

AMD introduced the RX 9060 XT family of budget GPUs at Computex last month. Much like the RTX 5060 Ti family, AMD has segmented these GPUs into 8GB ($299) and 16GB ($349) models. Both GPUs are powered by the same Navi 44 core, with 2,048 Stream Processors or 32 Compute Units, and an advertised boost clock of 3.13 GHz. The GPUs are set to launch on June 5, with the review embargo likely lifting a day earlier, as typically seen.

Based on data secured by VideoCardz, PowerColor is preparing seven RX 9060 XT models across its Reaper and Hellhound designations. Specifically, the Reaper family shows the RX 9060 XT 16GB model with a 3.23 GHz boost clock and a 2.62 GHz game clock, compared to the 8GB model’s 3.13 GHz boost and 2.53 GHz game clock. This 3-4% frequency delta, though minor, indicates that Reaper-based 8GB models will stick to reference clocks, rather than receiving a factory-overclocked boost like their 16GB siblings.
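For reference, the percentage deltas follow directly from the clocks quoted above:

```python
# Clock deltas between PowerColor's Reaper 16GB and 8GB models (GHz).
boost_16, boost_8 = 3.23, 3.13
game_16, game_8 = 2.62, 2.53
print(f"boost: +{(boost_16 / boost_8 - 1) * 100:.1f}%")  # ~3.2%
print(f"game:  +{(game_16 / game_8 - 1) * 100:.1f}%")    # ~3.6%
```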



We don’t anticipate a huge performance difference, but this could be a wider trend, as VideoCardz also spotted similar product differentiation from Yeston. That being said, PowerColor’s Hellhound line is reported to equip both RX 9060 XT models with similar clock speeds (3.31 GHz Boost, 2.74 GHz Game), suggesting this practice will likely not be universal.

While both models employ the same silicon, factory overclocking requires further validation and testing, translating to increased costs for board partners. This is because every GPU needs to be vetted thoroughly to ensure no stability issues arise when these cards end up in the hands of consumers. A lot of these 8GB cards are probably earmarked for eSports cafés in the Asian market, which purchase GPUs in bulk. This can explain why AIBs are favoring reference clock speeds, for better operational stability and cost-effectiveness on their end.

Another plausible explanation may be that consumer demand or tight margins simply do not provide enough incentive to overclock a model whose compute power is already bound by its VRAM capacity. You can always overclock your GPU manually for better performance. These factory-overclocked cards simply provide a slight boost to save you the effort.



