Laughing Hyena
Gaming Gear

Google’s gradient ‘G’ logo is rolling out everywhere

by admin September 29, 2025


Google is making the gradient “G” its new company-wide logo, according to an announcement on Monday. The new logo first began to surface across the Google app on Android and iOS in May, but soon, the design will begin to appear across all of the company’s platforms, marking Google’s first big logo change in 10 years.

The colorful “G” logo Google introduced in 2015 kept its red, yellow, green, and blue sections distinct. The new logo blends the four colors together and makes them brighter, bringing the design in line with Google’s gradient Gemini logo. Google says the change reflects its “evolution in the AI era.”

Along with a new “G,” Google also quietly updated its Google Home logo to match its new look. Google says the design will start rolling out more widely in the “coming months,” which means you may soon start seeing the gradient look make its way across its other apps, too, like Gmail, Drive, Meet, and Calendar.



Crypto Trends

Google’s Robots Can Now Think, Search the Web and Teach Themselves New Tricks

by admin September 27, 2025



In brief

  • DeepMind’s Gemini Robotics models gave machines the ability to plan, reason, and even look up recycling rules online before acting.
  • Instead of following scripts, Google’s new AI lets robots adapt, problem-solve, and pass skills between each other.
  • From packing suitcases to sorting trash, robots powered by Gemini-ER 1.5 showed early steps toward general-purpose intelligence.

Google DeepMind rolled out two AI models this week that aim to make robots smarter than ever. Instead of simply following commands, robots powered by the updated Gemini Robotics 1.5 and its companion Gemini Robotics-ER 1.5 can think through problems, search the internet for information, and pass skills between different robot agents.

According to Google, these models mark a “foundational step” toward robots that can “navigate the complexities of the physical world with intelligence and dexterity.”

“Gemini Robotics 1.5 marks an important milestone toward solving AGI in the physical world,” Google said in the announcement. “By introducing agentic capabilities, we’re moving beyond models that react to commands and creating systems that can truly reason, plan, actively use tools, and generalize.”

The term “generalization” matters here because it is precisely what most models struggle with.



The robots powered by these models can now handle tasks like sorting laundry by color, packing a suitcase based on weather forecasts they find online, or checking local recycling rules to throw away trash correctly. Now, as a human, you may say, “Duh, so what?” But to do this, machines require a skill called generalization—the ability to apply knowledge to new situations.

Robots—and algorithms in general—usually struggle with this. For example, if you teach a model to fold a pair of pants, it will not be able to fold a t-shirt unless engineers programmed every step in advance.

The new models change that. They can pick up on cues, read the environment, make reasonable assumptions, and carry out multi-step tasks that used to be out of reach—or at least extremely hard—for machines.

But better doesn’t mean perfect. For example, in one of the experiments, the team showed the robots a set of objects and asked them to sort the items into the correct trash bins. The robots used their cameras to visually identify each item, pulled up San Francisco’s latest recycling guidelines online, and then placed each item where it should ideally go, all on their own, just as a local human would.

This process combines online search, visual perception, and step-by-step planning—making context-aware decisions that go beyond what older robots could achieve. The registered success rate was between 20% and 40%; not ideal, but surprising for a model that had never been able to handle those nuances before.

How Google turns robots into super-robots

The two models split the work. Gemini Robotics-ER 1.5 acts like the brain, figuring out what needs to happen and creating a step-by-step plan. It can call up Google Search when it needs information. Once it has a plan, it passes natural language instructions to Gemini Robotics 1.5, which handles the actual physical movements.

More technically speaking, the new Gemini Robotics 1.5 is a vision-language-action (VLA) model that turns visual information and instructions into motor commands, while the new Gemini Robotics-ER 1.5 is a vision-language model (VLM) that creates multistep plans to complete a mission.

When a robot sorts laundry, for instance, it internally reasons through the task using a chain of thought: understanding that “sort by color” means whites go in one bin and colors in another, then breaking down the specific motions needed to pick up each piece of clothing. The robot can explain its reasoning in plain English, making its decisions less of a black box.
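The planner/executor handoff described above can be sketched in a few lines of code. This is a toy simulation only, assuming nothing about Google’s actual APIs: the `plan` and `act` functions below are hypothetical stand-ins for the reasoning model (Gemini Robotics-ER 1.5) and the action model (Gemini Robotics 1.5).

```python
# Toy sketch of the two-model split: a "brain" breaks a mission into
# natural-language steps, and an "executor" carries each step out.
# All names here are hypothetical, not part of any Google SDK.
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str   # natural-language instruction handed to the action model
    done: bool = False

def plan(mission: str) -> list[Step]:
    """Stand-in for the reasoning model: produce an ordered step list."""
    if mission == "sort laundry by color":
        return [
            Step("locate the laundry pile"),
            Step("pick up the next garment"),
            Step("place whites in one bin and colors in another"),
        ]
    return [Step(mission)]  # unknown missions fall back to a single step

def act(step: Step) -> Step:
    """Stand-in for the action model: turn one instruction into motion.
    Here we simply mark the step complete."""
    step.done = True
    return step

steps = [act(s) for s in plan("sort laundry by color")]
print(all(s.done for s in steps), len(steps))  # True 3
```

The point of the split is that the planner only ever emits plain-language instructions, so the executor can be swapped for a different robot body without re-planning.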

Google CEO Sundar Pichai chimed in on X, noting that the new models will enable robots to better reason, plan ahead, use digital tools like search, and transfer learning from one kind of robot to another. He called it Google’s “next big step towards general-purpose robots that are truly helpful.”

New Gemini Robotics 1.5 models will enable robots to better reason, plan ahead, use digital tools like Search, and transfer learning from one kind of robot to another. Our next big step towards general-purpose robots that are truly helpful — you can see how the robot reasons as… pic.twitter.com/kw3HtbF6Dd

— Sundar Pichai (@sundarpichai) September 25, 2025

The release puts Google in a spotlight shared with developers like Tesla, Figure AI and Boston Dynamics, though each company is taking a different approach. Tesla focuses on mass production for its factories, with Elon Musk promising thousands of units by 2026. Boston Dynamics continues pushing the boundaries of robot athleticism with its backflipping Atlas. Google, meanwhile, bets on AI that makes robots adaptable to any situation without specific programming.

The timing matters. American robotics companies are pushing for a national robotics strategy, including establishing a federal office focused on promoting the industry at a time when China is making AI and intelligent robots a national priority. China is the world’s largest market for robots that work in factories and other industrial environments, with about 1.8 million robots operating in 2023, according to the Germany-based International Federation of Robotics.

DeepMind’s approach differs from traditional robotics programming, where engineers meticulously code every movement. Instead, these models learn from demonstration and can adapt on the fly. If an object slips from a robot’s grasp or someone moves something mid-task, the robot adjusts without missing a beat.

The models build on DeepMind’s earlier work from March, when robots could only handle single tasks like unzipping a bag or folding paper. Now they’re tackling sequences that would challenge many humans—like packing appropriately for a trip after checking the weather forecast.

For developers wanting to experiment, there’s a split approach to availability. Gemini Robotics-ER 1.5 launched Thursday through the Gemini API in Google AI Studio, meaning any developer can start building with the reasoning model. The action model, Gemini Robotics 1.5, remains exclusive to “select” (meaning “rich,” probably) partners.

Gaming Gear

How to Use Google’s ‘Nano Banana’ AI Photo Editor: My Expert Tips

by admin September 26, 2025


Google’s new AI image model got the nickname “nano banana” after a series of fruit-themed teasers from its executives, and early reviewers thought its capabilities were, well, bananas. The nickname has stuck since launch. But don’t worry if you’re not sure exactly what that’s referring to. Here’s what you need to know to find the model and make the most of this powerful new tool.

The model’s government name is Gemini 2.5 Flash Image — it doesn’t sound nearly as fun as “nano banana,” but it’s worth knowing. The model has taken off dramatically in the past few weeks, especially compared to other AI image tools. Google reported that its users made over 5 billion AI images, and the model drew over 10 million new Gemini users in the first few weeks after nano banana’s release. It also helped launch Gemini to the top of the Apple App Store’s free app chart for a while.

Although Google’s invested in integrating AI everywhere, nano banana image editing seems to be making one of the biggest leaps for Google in the race to develop the best AI.

As CNET’s AI image generator expert, I put nano banana to the test to see if the model held up or if it was all simply AI hype. I found Google’s nano banana impressive, with its refreshing focus on editing existing photos. There were still some slipups, which is to be expected with any AI service. But it was spookily good at adding new elements to my existing photos. If you’re an AI enthusiast, or even someone who just needs a quick, custom photo edit, I do think it’s worth exploring. 

To save you time, this guide is built on my real experience finding and playing with the model, along with some crowd-sourced tips for the best results. For more, check out our guides on creating AI images and writing the best AI image prompt.

How to access Google’s nano banana

You can access nano banana in a couple of ways. If you want to go directly through Google, you’ll need to have Google’s AI Pro plan, an AI-specific subscription that’s $20 a month. Once you’re signed up, you can navigate to Google’s AI Studio and select 2.5 Flash Image as the model you want to use. You can’t edit photos with it through your Gemini chats, unlike with OpenAI’s image generation with ChatGPT.

Developers can also access the model through Vertex AI and the Gemini API. If you don’t know what those are, you probably don’t have access to them. So don’t worry about it and stick with the AI Studio, which is open to everyone.

You can also use nano banana through certain Adobe programs. Nano banana is available as a third-party model in Photoshop, Adobe Express and on its AI platform, Firefly. Those programs are also usually paywalled, but it’s worth checking to see if you have access to them if you use some Adobe programs already. Make sure you select 2.5 Flash Image before you click generate, as Adobe has many third-party models available.

As always, an important note about privacy: Google’s general Gemini privacy policy says it can use the information you upload for improving its AI products, which is why the company recommends avoiding uploading sensitive or private information. And nano banana-edited photos aren’t watermarked, so be sure to disclose that you used AI when sharing those images.


How to use Google’s nano banana and get the most out of it

Once you’re in Google’s AI Studio, you can easily upload your image for editing. But like any AI image tool, your chances of success depend on how well you prompt the model. Here are some tips for using nano banana.

  • Be specific. Writing an AI prompt is different from doing a Google search. The more details you provide, the more likely it is you’ll get the result you want. Talk about the colors, the style, the aesthetic. Keywords are great, but don’t worry about writing too much.
  • Tell it what you don’t want changed. This is different from the advice I give when dealing with AI image generators. Most AIs struggle to understand negative prompts, or things you don’t want to see. But nano banana is decent at understanding what you do and don’t want adjusted, so include that in your prompts.
  • If you want text, tell it exactly what words you want. I admit, this is partially inspired by Google’s own best practices for prompting, but it’s also something I’ve seen in my own testing. You need to be specific when you want text; otherwise, it will end up messy. Don’t forget to include the style and color of the font, too.

An example of how I edited an existing photo (left) with nano banana to include a new element — a spookily similar looking sister in the AI-edited photo (right).

Katelyn Chedraoui/CNET

One of the best things you can use nano banana for is adding new elements to an image you already have. But there are some other good use cases. Here are some ideas to help spark your inspiration, based on some of my favorite prompts I use to test AI services.

  • Play around with variations. You can ask the model to create different versions of your existing photo, whether that’s generating a headshot with different hairstyles or transforming your photo into a different style. I’ve taken nature landscapes I photographed and had AI reimagine them in the style of a dark, sci-fi or fantasy world, for example.
  • Use nano banana as an AI-powered eraser. Removing background distractions is one of the most common uses of photo editing software. Any AI worth its weight should be able to help you get rid of errant obstructions and blend the affected area seamlessly into your photo.
  • Ask Gemini for inspiration or ideas when you’re stuck. I always recommend talking through photo editing projects with real people when you’re able. But when you’re not able to, you can use the chatbot as a kind of sounding board for your ideas. Gemini can also help you put your own ideas into words that nano banana will be able to understand.

For more, check out my full hands-on experience with Google’s nano banana and the best AI image generators.



GameFi Guides

How to Use Google’s Nano Banana AI to Transform Your Selfies Into Digital Figurines

by admin September 11, 2025



In brief

  • The free Gemini 2.5 Flash Image tool turns selfies into hyperrealistic 1/7-scale figurines, sparking explosive demand.
  • Users simply upload a full-body photo, paste a detailed prompt, and receive toy-store-ready results in seconds.
  • Politicians and influencers are adopting the trend, while advanced users tweak materials, poses, and props for personalized dolls.

Google’s latest AI model has sparked another viral transformation trend, with millions of users turning themselves into miniature Bandai-style figurines that look pulled from a Tokyo toy store shelf.

The tool, nicknamed Nano Banana after its internal codename at Google DeepMind, officially launched as Gemini 2.5 Flash Image in late August. Within two weeks, users generated over 200 million images. The Philippines alone accounted for 25.5 million image creations within days of launch, making it the top region globally for use.

“From photo to figurine style in just one prompt,” Google’s Gemini team posted on X September 1, kicking off tutorials that spread the trend further. Josh Woodward, Google’s VP, reported 10 million new Gemini users joined specifically for the feature.

Update: In the last 4 days, @GeminiApp has added:
+ 13M more first-time users (23M+ total now)
+ 300M more images (500M+ total now)

🍌 @NanoBanana is unreal

— Josh Woodward (@joshwoodward) September 8, 2025

The digital figurines appear as 1/7 scale collectibles on clear acrylic bases, often displayed next to packaging boxes and computer screens showing 3D modeling software. The AI captures facial features, clothing details, and poses with enough accuracy that results frequently fool viewers into thinking they’re real product photos.

This marks the third major AI portrait trend of 2025. ChatGPT’s GPT-4o sparked the Studio Ghibli wave in March, with users transforming photos into soft, anime-style portraits reminiscent of Hayao Miyazaki’s films. Sam Altman changed his X profile picture to a “Ghiblified” version of himself as servers struggled under “biblical demand.”

April brought the Barbie Box Challenge through ChatGPT, where people became plastic dolls in blister packaging, complete with accessories like tiny laptops and coffee mugs. LinkedIn professionals embraced it for personal branding, turning themselves into “executive action figures.”

Nano Banana isn’t the only player in this space. ByteDance’s Seedream 4.0, released just a few days after Google’s model, produces outputs with comparable quality and sometimes better consistency in complex poses. For those preferring open-source alternatives, Flux Kontext offers the most powerful option for local deployment. It integrates into complex workflows and remains the only model without content restrictions, though that flexibility comes with the usual responsibilities of self-hosted AI.

How to turn yourself into a figurine, for free

Creating your own Nano Banana figurine takes under a minute. Visit gemini.google.com or open the Gemini app—the basic version is completely free, though your daily generations are limited. 

Click on the option to try Nano Banana, which you can find in the banner on top. It is also under “tools” as “Create Images.” (Google is showing banana emojis everywhere, so you won’t have any trouble finding it.)



Next, upload your photo, or the photo of the person you want to turn into a doll; a full-body photo with good lighting works best. You can do that by clicking on the “Plus” button next to “tools.” We trust you will be responsible, ok?

Once the image has been uploaded, paste in this prompt:

“Create a 1/7 scale commercialized figurine of the character in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a round transparent acrylic base, with no text on the base. The content on the computer screen is a 3D modeling process of this figurine. Next to the computer screen is a toy packaging box, designed in a style reminiscent of high-quality collectible figures, printed with original artwork.”

There are some tweaks you can make to customize your doll.  Specifying materials like “PVC figure” or “polished resin” increases realism. Dynamic poses work better than static standing positions—arms outstretched or mid-action yield more convincing figures.

If you don’t think the model is realistic enough, adding something like “The face is exactly the same” or “the face remains unaltered” does the trick. But beware: you may end up with something too realistic and not really doll-like.

Don’t go too crazy, though. The free tier offers limited daily edits, while the pro version removes restrictions. Some platforms, like Freepik, give users unlimited iterations (and unlimited Wan 2.2 videos too) with the tradeoff of a small degradation in quality after a very generous threshold is met.

Advanced users chain multiple prompts for complex scenes. After generating the base figurine, you can improve your doll with further iterations—asking it, for example, to change the clothes, add props, alter the pose, etc. Nano Banana’s outstanding character reference makes it a very powerful tool that won’t degrade your face too much after many iterations. This is something that previous models used to struggle with.

The figurine aesthetic has proven particularly sticky on social media. Politicians in India and the Philippines posted their miniature versions, while TikTokers use the hashtag #NanoBanana to share increasingly elaborate scenes with multiple figures and custom dioramas.

Gaming Gear

I turned myself into a 3D figurine with Google’s Nano Banana – here’s how you can hop on the latest AI image trend

by admin September 11, 2025



Google’s latest image model, Gemini 2.5 Flash Image, also known as Nano Banana, has produced a fun new trend using its advanced image capabilities. You can turn yourself (or your pet) into a highly detailed 3D figurine.

All you need to do is upload a photo and a very detailed prompt; less than a minute later, you have a stylized image of a miniature, plasticized version of the subject, posed on a little clear base, with a box and even a wire-frame model to show it off.

This particular flavor of AI-generated toy is exploding across social media right now because the model is fast, free, and surprisingly good at what it does. Unlike earlier versions of these kinds of prompts that ran on GPT-4o or Midjourney, Nano Banana has better prompt adherence, understands packaging and posing more consistently, and renders faces that, while not always perfect, are often impressively accurate.

It’s all built into Google AI Studio and the Gemini apps and website if you want to try it. You just have to upload a picture, ideally a full-body shot, of who or whatever you want to make into a figurine, and submit the right prompt. You can play around with it, but the following template, shared around on social media, works very well.

Type this in:

“Create a 1/7 scale commercialized figurine of the characters in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a round transparent acrylic base, with no text on the base. The content on the computer screen is a 3D modeling process of this figurine. Next to the computer screen is a toy packaging box, designed in a style reminiscent of high-quality collectible figures, printed with original artwork. The packaging features two-dimensional flat illustrations.”

When you paste that into Gemini, along with a photo, it doesn’t just try to render a toy version of what’s in the picture; it imagines the toy existing in the real world, with all the context that goes along with a premium 3D figurine release. It’s like a high-end collectible a company would make if you became famous for whatever pose you’re in.

Toying with AI

Figurine me (Image credit: Google)

I went with a fun photo of myself from a big circus-themed party a few years ago where I went as a lion tamer (see the small lion in my pocket). I shared the photo with Nano Banana along with the prompt, and twenty seconds later, there “I” was, six inches tall, standing on a desk and looking jaunty with my whip like I was about to command a herd of miniature jungle cats.

The packaging beside me showed a great illustrated version of the same pose, except it decided I was the ringmaster and named me Rhett for some reason. The computer screen behind the figurine showed a 3D modeling window open with “my” miniature wire-frame form on it, being rotated in space like it was being finalized for mass production.

It genuinely looks like a photo, right down to the scuffed desk and random paperwork. Even the stuffed lion in my pocket looked right. It felt like an alternate version of me had been shrink-wrapped and made collectible.

Puppy pose

“Firecracker Fido”. (Image credit: Google)

Next, I decided to try with a photo of my dog, Cabbage. I uploaded a picture of her sitting regally on the ground and used the same default prompt. The toy created by the AI was almost too realistic. I had to look closely to tell it’s supposed to be made of molded plastic.

The screen behind the figure showed the hound rendered in a 3D modeling program appropriately, but the packaging went a little awry. It had multiple images of the dog like it was a test of different poses. But I did like that, lacking her real name, the AI went with her bandanna to name her Firecracker Fido.

The thing that struck me after both generations was how smoothly it all worked. No fine-tuning was needed to get 95% of the way there. Nano Banana just understood the visual reference and ran with it. I wouldn’t claim it’s anything like as valuable as what real human artists can do, but it was a fun experiment.

Much like the Studio Ghibli AI image trend, it’s worthwhile for personal amusement, but the idea of using these images for any kind of money-making scheme to sell actual toys would be several steps beyond propriety.

Gaming Gear

Pixel 10 vs. Pixel 10 Pro vs. Pixel 10 Pro XL: How Google’s Latest Phones Compare

by admin September 6, 2025


Google’s new Pixel 10 phones are here, and they boast a handful of spec upgrades over their predecessors, as well as more AI features.

The three models look very similar to last year’s Pixel 9 phones. There’s a standard Pixel 10, a higher-spec Pixel 10 Pro and a large Pixel 10 Pro XL. 

There are still some differentiators between the baseline and Pro models, but that lead is shrinking with more top-level features trickling down. All in all, they’re powerful handsets that rank among the year’s best phones.

Pixel 10 price comparison

  • Pixel 10: $799. The Pixel 10 didn’t get a price increase from last year’s standard model despite getting more features.
  • Pixel 10 Pro: $999. The Pixel 10 Pro also kept its predecessor’s price, though its specs haven’t changed much from the Pixel 9 Pro.
  • Pixel 10 Pro XL: $1,199. The top-end Pixel 10 Pro XL got a $100 price hike this year, with a handful of spec upgrades.

Both the Pixel 10 ($799) and Pixel 10 Pro ($999) retain the prices of their predecessors, keeping them in line with their equivalents from other brands, like the Samsung Galaxy S25 ($799) and Galaxy S25 Plus ($999). 

But the Pixel 10 Pro XL got slightly more expensive this year, getting a $100 price bump to $1,199 (up from the $1,099 Pixel 9 Pro XL with 128GB of storage), likely to account for the higher 256GB starting storage option. 

The Pixel 10 Pro XL.

Andrew Lanxon/CNET

Pixel 10 lineups’ design and display changes, compared

  • Pixel 10: Unchanged, but additional camera. The baseline model got an extra camera, widening the lens part of its camera bump, and the display is slightly brighter.
  • Pixel 10 Pro: Unchanged. The smaller Pro model looks almost exactly like its predecessor, and the display is slightly brighter.
  • Pixel 10 Pro XL: Unchanged. The larger Pro XL looks just like its predecessor, and the display is slightly brighter. 

The Pixel 10 lineup looks virtually the same as last year’s Pixel 9 phones, leaving most of the upgrades under the hood. 

All three phones retain the general design of last year’s phones: a metal back with a pill-shaped camera bump. The only major difference is that the glass on the Pixel 10’s camera bump is slightly wider, owing to the third, telephoto camera fit into it. All three models are ever-so-slightly heavier than their predecessors (from 6 grams on the Pixel 10 to 11 grams on the Pixel 10 Pro XL), possibly due to their nominal battery capacity increases. 

The standard Pixel 10 retains its predecessor’s 6.3-inch OLED display, though its maximum brightness has been increased to 3,000 nits (up from 2,700). Likewise, the Pixel 10 Pro and Pixel 10 Pro XL inherit their 6.3-inch OLED and 6.8-inch OLED displays from their predecessors, but they also get maximum screen brightness increases to 3,300 nits (up from 3,000). 

The Pixel 10’s rear cameras.

Celso Bulgatti/CNET

Comparing Pixel 10, Pixel 10 Pro and Pixel 10 Pro XL cameras

  • Pixel 10: Added rear camera, for a total of three. A 48-megapixel main, 13-megapixel ultrawide and new 10.8-megapixel telephoto make up the rear cameras, plus a 10.5-megapixel selfie camera.
  • Pixel 10 Pro: Same trio of rear cameras. A 50-megapixel main, 48-megapixel ultrawide and 48-megapixel telephoto make up the rear cameras, plus a 42-megapixel selfie camera.
  • Pixel 10 Pro XL: Same trio of rear cameras. A 50-megapixel main, 48-megapixel ultrawide and 48-megapixel telephoto make up the rear cameras, plus a 42-megapixel selfie camera.

As with other brands, Google has previously differentiated its standard and Pro models by reserving an extra zoom camera for its top-end phones. That changes this year, as the standard Pixel 10 gets a telephoto lens, shrinking the feature gap between the phone tiers.

The Pixel 10 packs a 48-megapixel main camera, though its 13-megapixel ultrawide is a slight downgrade from the 50-megapixel ultrawide shooter in the Pixel 9, which means the new phone may not take photos that are quite as sharp at its widest capture setting. The trade-off is the addition of a 10.8-megapixel telephoto with 5x zoom, which adds a lot of functionality, even if it can’t match the resolution of the zoom cameras on its Pro siblings.

The Pixel 10 Pro and 10 Pro XL retain their predecessors’ trio of rear cameras: a 50-megapixel main, a 48-megapixel ultrawide and a 48-megapixel telephoto with 5x zoom. Ostensibly, not much has changed here, though they still shoot up to 8K video (compared with the 4K maximum on the Pixel 10).

The selfie cameras on all three phones are unchanged from last year, but there is a sizable jump from the 10.5-megapixel front-facing camera on the Pixel 10 to the 42-megapixel selfie shooters on the Pixel 10 Pro and 10 Pro XL.

The Pixel 10 Pro XL with an Android logo.

Andrew Lanxon/CNET

All Pixel 10 model specs, compared

  • Pixel 10: Better CPU, slightly bigger battery. The Pixel 10 gets the Google Tensor G5 chip, as well as a 4,970-mAh battery.
  • Pixel 10 Pro: Better CPU, slightly bigger battery. The Pixel 10 Pro gets similar spec upgrades, including the Google Tensor G5 chip, as well as a 4,870-mAh battery.
  • Pixel 10 Pro XL: Better CPU, slightly bigger battery. The Pixel 10 Pro XL gets the Google Tensor G5 chip, as well as a 5,200-mAh battery.

This year’s Pixel 10 lineup got the usual set of incremental upgrades along with slight boosts to battery capacity, which could have an impact, however small, on increasing battery life.

Like the other models, the Pixel 10 gets the new Google Tensor G5 chip, which Google promises will perform better than its predecessor, last year’s Tensor G4. The new handset inherits the Pixel 9’s 12GB of RAM and baseline storage options of 128GB or 256GB. If you want more, you’ll have to use cloud options or pay more for a Pro or Pro XL model.

The Pixel 10 Pro and 10 Pro XL also have the Tensor G5 chip, which helps power new AI features. Similar to the base model, the Pro versions inherit the same specs as their predecessors: 16GB of RAM and the options of 128GB (Pixel 10 Pro only), 256GB, 512GB or 1TB of storage.

Across the board, all models got slight battery capacity increases, though curiously, the Pixel 10 Pro has the smallest at 4,870 mAh (up from 4,700 mAh in the Pixel 9 Pro). Next is the standard Pixel 10, which got an increase to 4,970 mAh (up from 4,700 mAh in the Pixel 9). The Pixel 10 Pro XL has the largest battery at 5,200 mAh (up from 5,060 mAh).

The Pixel 10 Pro XL also has the edge in charging, with maximum 45-watt wired charging compared to 30 watts on the Pixel 10 and 10 Pro. All three support Qi2 wireless charging.

All three Pixel 10 phones receive Android 16 out of the box. It was released in June, a month after Google I/O 2025, far earlier than when it typically comes out (usually alongside that year’s Pixel phones). They also get a slew of new AI features like Magic Cue, which scrapes data from Gmail, Google Messages, Google Calendar and other information sources to surface timely details and events.  

Google Pixel 10 vs. Pixel 10 Pro vs. Pixel 10 Pro XL

(Values are listed in Pixel 10 / Pixel 10 Pro / Pixel 10 Pro XL order.)

  • Display: 6.3-inch OLED, 2,424×1,080 pixels, 60 to 120Hz variable refresh / 6.3-inch LTPO OLED, 2,856×1,280 pixels, 1 to 120Hz variable refresh / 6.8-inch LTPO OLED, 2,992×1,344 pixels, 1 to 120Hz variable refresh
  • Pixel density: 422 ppi / 495 ppi / 486 ppi
  • Dimensions: 152.8 x 72 x 8.5 mm (6 x 2.8 x 0.3 in) / 152.8 x 72 x 8.5 mm (6 x 2.8 x 0.3 in) / 162.8 x 76.6 x 8.5 mm (6.4 x 3 x 0.3 in)
  • Weight: 204 g (7.2 oz) / 207 g (7.3 oz) / 232 g (8.2 oz)
  • Mobile software: Android 16 on all three
  • Rear cameras: 48-megapixel wide, 13-megapixel ultrawide, 10.8-megapixel 5x telephoto / 50-megapixel wide, 48-megapixel ultrawide, 48-megapixel 5x telephoto / 50-megapixel wide, 48-megapixel ultrawide, 48-megapixel 5x telephoto
  • Front-facing camera: 10.5-megapixel / 42-megapixel / 42-megapixel
  • Video capture: 4K / 8K / 8K
  • Processor: Google Tensor G5 on all three
  • RAM and storage: 12GB RAM with 128GB or 256GB / 16GB RAM with 128GB, 256GB, 512GB or 1TB / 16GB RAM with 256GB, 512GB or 1TB
  • Expandable storage: none
  • Battery: 4,970 mAh / 4,870 mAh / 5,200 mAh
  • Fingerprint sensor: under display on all three
  • Connector: USB-C on all three; no headphone jack
  • Pixel 10 special features: Gorilla Glass Victus 2 cover glass; 3,000 nits peak brightness; Satellite SOS; dual eSIM; Wi-Fi 6E; NFC; Bluetooth 6; 30W fast charging (wall charger not included); Qi2 15W wireless charging; support for PixelSnap magnetic accessories; Google VPN; Super Res Zoom up to 20x; Camera Coach; Add Me; Macro mode; Face Unblur; Auto Best Take; IP68 rating for dust and water resistance; 7 years of OS, security and Pixel Drop updates; polished Corning Gorilla Glass Victus 2 back with satin-finish aluminum frame
  • Pixel 10 Pro special features: Gorilla Glass Victus 2 cover glass; 3,300 nits peak brightness; Satellite SOS; dual eSIM; Wi-Fi 7; NFC; Bluetooth 6; 30W fast charging (wall charger not included); Qi2 15W wireless charging; support for PixelSnap magnetic accessories; Google VPN; Pro Res Zoom up to 100x; Camera Coach; Add Me; Macro mode; Face Unblur; Auto Best Take; High-Res Portrait mode; IP68 rating for dust and water resistance; 7 years of OS, security and Pixel Drop updates; silky-matte Corning Gorilla Glass Victus 2 back with polished-finish aluminum frame; ultrawideband chip
  • Pixel 10 Pro XL special features: same as the Pixel 10 Pro, except 45W fast charging and Qi2.2 25W wireless charging
  • US price: $799 (128GB) / $999 (128GB) / $1,199 (256GB)
  • UK price: £799 (128GB) / £999 (128GB) / £1,199 (256GB)
  • Australia price: AU$1,349 (128GB) / AU$1,699 (128GB) / AU$1,999 (256GB)




September 6, 2025
Gaming Gear

Google’s Pixel Care Plus includes free screen and battery repair

by admin August 28, 2025


Google is phasing out its Preferred Care extended warranty plan in favor of the Pixel Care Plus program. Pricing between the two is pretty similar: you'll still pay $8 per month, or $159 for a two-year plan, on a Pixel 9. For a Pixel 10 Pro Fold, that jumps to $339 for two years, or $18 per month, with an optional loss and theft package for a small extra charge.
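For readers weighing monthly versus upfront billing, the quoted prices imply a modest discount for paying up front. This is a rough sketch using only the figures above; taxes and promotions are ignored:

```python
# Two-year cost of Pixel Care Plus: monthly billing vs. the upfront plan,
# using the prices quoted in the article.
plans = {
    "Pixel 9":           (8, 159),   # ($ per month, $ upfront for two years)
    "Pixel 10 Pro Fold": (18, 339),
}

for model, (monthly, upfront) in plans.items():
    two_year_monthly = monthly * 24
    savings = two_year_monthly - upfront
    print(f"{model}: ${two_year_monthly} billed monthly vs ${upfront} upfront "
          f"(save ${savings} by paying up front)")
# Pixel 9: $192 billed monthly vs $159 upfront (save $33 by paying up front)
# Pixel 10 Pro Fold: $432 billed monthly vs $339 upfront (save $93 by paying up front)
```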

The big changes are that screen and battery repairs are now free, and service fees for other accidental damage are much lower. Under the old Preferred Care program, replacing a cracked screen would run you $29. Under Pixel Care Plus, a cracked front screen, or a battery running at under 80 percent capacity, will get swapped out for $0. Unfortunately, if you happen to mess up the internal screen on your 10 Pro Fold, you are not covered.

Other accidental damage fees vary by model, ranging from $49 on some older models like the Pixel 8a and 9a to $99 on the Pixel 10 Pro Fold. They're lower on average, though: under the old plan, service fees reached $129 for the Pixel 9 Pro and Fold models. The new loss and theft option, which adds $1 or $2 a month to the plan, also varies per model, with deductibles ranging up to $149 on the high end.

The new plans bring Google more in line with the likes of Samsung, which ditched screen replacement fees under its new extended coverage plans back in January.



Gaming Gear

Google’s AI Weather Model Nailed Its First Major Storm Forecast

by admin August 26, 2025


While generative AI tools that primarily amount to slop generators grab most of the attention in the artificial intelligence space, there are occasionally some actually useful applications of the technology, like Google DeepMind’s use of AI weather models to predict cyclones. The experimental tool, launched earlier this year, successfully managed to provide accurate modeling of Hurricane Erin as it started gaining steam in the Atlantic Ocean earlier this month.

As Ars Technica first reported, Hurricane Erin—which reached Category 5 status and caused some damage to the island of Bermuda, parts of the Caribbean, and the East Coast of the United States—provided Google DeepMind’s Weather Lab with the first real test of its capabilities.

According to James Franklin, former chief of the hurricane specialist unit at the National Hurricane Center, it did quite well, outperforming the National Hurricane Center’s official model and topping several other physics-based models during the first 72 hours of modeling. It did ultimately fall off a bit the longer the prediction effort ran, but it still topped the consensus model through the five-day forecast.

While Google’s model was impressively accurate in the first days of modeling, it's the later days that matter most to experts, per Ars Technica: days three through five of a forecast are the ones officials rely on when deciding whether to call for evacuations and other preparations. Still, there seems to be real promise in AI-powered weather modeling, though the sample size here is pretty small.

Most of the current gold standard modeling techniques used for storm prediction use physics-based prediction engines, which essentially try to recreate the conditions of the atmosphere by factoring in things like humidity, air pressure, and temperature changes to simulate how a storm might behave. Google’s model instead pulls from a massive amount of data that it was trained on, including a “reanalysis dataset that reconstructs past weather over the entire Earth from millions of observations, and a specialized database containing key information about the track, intensity, size and wind radii of nearly 5,000 observed cyclones from the past 45 years.”

According to Google, it tested its model on storms from 2023 and 2024 and found that its five-day prediction tracked storm paths more accurately than most other models, landing about 140 km (roughly 87 miles) closer to the cyclone's eventual location than the ensemble model from the European Centre for Medium-Range Weather Forecasts, which is considered the most accurate available. Now Google can point to a storm it tracked in real time as proof of concept, though there's no reason to think AI tools like this will completely displace the other approaches at this stage.



Gaming Gear

Is this Google’s first Gemini-powered smart speaker?

by admin August 23, 2025


Google held its annual Made by Google showcase this week, during which it unveiled the latest Pixel phones and various new AI features. What it didn’t announce is a Gemini-powered smart home speaker, but it might have snuck one into the presentation as an easter egg.

As spotted by The Verge, about 15 minutes into the stream, on a table next to F1 driver Lando Norris during a Gemini demonstration, you can clearly see what looks a lot like a smart speaker. The fabric-wrapped device in the video is white with a light around its base, and it looks more like an upside-down HomePod mini than any of Google's other speakers. According to Android Headlines, it's called the Google Home Speaker, and it's unsurprisingly powered by Gemini.

Android Headlines claims to have seen more images of the as-yet unannounced smart speaker, which it says will also be available in red, light green and black. It adds that Google’s Gemini Live AI assistant will be a baked-in feature for voice conversations, though without a screen or camera there’s no visual element here. There could also be new natural voice options on the speaker, as well as standard features like media control and smart home automations. It can reportedly also detect unusual sounds like breaking glass or smoke alarms and can send an alert to your phone or tablet if this happens.

According to the Android Headlines report, the Home Speaker can pair with a Google TV Streamer and double as an additional surround sound speaker, much like the HomePod does with Apple TV. It functions as a smart hub too, with support for Matter. Google announced Gemini for Home this week, which will gradually replace Google Assistant on the company’s existing smart devices, with free and paid tiers offered to users.

What the leak doesn’t contain is a launch date for this mysterious speaker. But given that Google was seemingly happy enough to have one sitting there in its most important keynote of the year, the smart money would be on an official reveal pretty soon. Engadget reached out to Google for comment, but the company said it had nothing to share at this time.



Gaming Gear

Google’s Gemini Live AI assistant will show you what it’s talking about

by admin August 21, 2025


Google is bringing a bundle of new features to Gemini Live, its AI assistant that you can have real-time conversations with. Next week, Gemini Live will be able to highlight things directly on your screen while sharing your camera, making it easier for the AI assistant to point out a specific item.

If you’re trying to find the right tool for a project, for example, you can point your smartphone’s camera at a collection of tools, and Gemini Live will highlight the correct one on your screen. This feature will be available on the newly announced Pixel 10 devices when they launch on August 28th. Google will begin rolling out visual guidance to other Android devices at the same time before expanding to iOS “in the coming weeks.”

Google is also launching new integrations that will soon allow Gemini Live to interact with more apps, including Messages, Phone, and Clock. Say you’re in the middle of a conversation with Gemini about directions to your destination, but you realize you’re running late. Google says you’ll be able to interrupt the chatbot with something like: “This route looks good. Now, send a message to Alex that I’m running about 10 minutes late.” From there, Google can draft a text to your friend for you.

Lastly, Google is launching an updated audio model for Gemini Live that the company says will “dramatically improve” how the chatbot “uses the key elements of human speech, like intonation, rhythm and pitch.” Soon, Gemini will change its tone based on what you’re speaking about, such as using a calmer voice if you’re asking about a stressful topic.

You’ll also be able to change how fast — or slow — Gemini talks, which sounds a bit similar to how users can now tweak the style of ChatGPT’s voice mode. And, if you ask Gemini for a dramatic retelling of a story from the perspective of a particular character or historical figure, the chatbot may adopt an accent for a “rich, engaging narrative.”




Categories

  • Crypto Trends (1,098)
  • Esports (800)
  • Game Reviews (772)
  • Game Updates (906)
  • GameFi Guides (1,058)
  • Gaming Gear (960)
  • NFT Gaming (1,079)
  • Product Reviews (960)

Recent Posts

  • This 5-Star Dell Laptop Bundle (64GB RAM, 2TB SSD) Sees 72% Cut, From Above MacBook Pricing to Practically a Steal
  • Blue Protocol: Star Resonance is finally out in the west and off to a strong start on Steam, but was the MMORPG worth the wait?
  • How to Unblock OpenAI’s Sora 2 If You’re Outside the US and Canada
  • Final Fantasy 7 Remake and Rebirth finally available as physical double pack on PS5
  • The 10 Most Valuable Cards


Newsletter

Subscribe my Newsletter for new blog posts, tips & new photos. Let's stay updated!

About me

Welcome to Laughinghyena.io, your ultimate destination for the latest in blockchain gaming and gaming products. We’re passionate about the future of gaming, where decentralized technology empowers players to own, trade, and thrive in virtual worlds.



@2025 laughinghyena- All Right Reserved. Designed and Developed by Pro

