Tag: Google

Product Reviews

Google’s Veo 3 Is Already Deepfaking All of YouTube’s Most Smooth-Brained Content

by admin May 22, 2025


Wake up, babe, new viral AI video generator dropped. This time, it’s not OpenAI’s Sora model in the spotlight, it’s Google’s Veo 3, which was announced on Tuesday during the company’s annual I/O keynote. Naturally, people are eager to see what chaos Veo 3 can wreak, and the results have been, well, chaotic. We’ve got disjointed Michael Bay fodder, talking muffins, self-aware AI sims, puppy-centric pharmaceutical ads—the list goes on.
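
If you want to see how this kind of clip gets summoned outside of a chat box, here's a minimal sketch using Google's google-genai Python SDK. Treat it as illustrative only: the model ID is my assumption, and at launch Veo 3 lived inside Flow and the Gemini app rather than a general API, so availability is an assumption too.

```python
# Minimal sketch: driving a Veo model through the google-genai SDK.
# The model ID below is an assumption; swap in whatever Veo model
# your account can actually reach.
import time
from google import genai

client = genai.Client()  # reads the API key from the environment

# Video generation is a long-running job: start it, then poll.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed identifier
    prompt="Handheld street interview, a passerby answers a trivia question",
)
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Grab the finished clip (one video per request by default).
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("slop.mp4")
```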

One thing that I keep seeing over and over, however, is—to put it bluntly—AI slop, and a very specific variety. For whatever reason, all of you seem to be absolutely hellbent on getting Veo to conjure up a torrent of smooth-brain YouTube content. The worst part is that this thing is actually kind of good at cranking it out, too. Don’t believe me? Here are the receipts.

You can barely tell this iPhone unboxing isn’t real pic.twitter.com/vfZ2lUoliZ

— Matt Shumer (@mattshumer_) May 21, 2025

Is this 100% convincing? No. No, it is not. At a glance, though, most people wouldn’t be able to tell the difference if they’re just scrolling through their social feed mindlessly as one does when they’re using literally any social media site/app. Unboxing not cutting it for you? Well, don’t worry, we’ve got some man-on-the-street slop for your viewing pleasure. Sorry, hawk-tuah girl, it’s the singularity’s turn to capitalize on viral fame.

2. Stand-up comedy telling a funny joke that never happened https://t.co/05M6cDZlzK

— Min Choi (@minchoi) May 22, 2025

Again, Veo’s generation is not perfect by any means, but it’s not exactly unconvincing, either. And there’s more bad news: Your Twitch-like smooth-brain content isn’t safe either. Here’s one of a picture-in-picture-style “Fortnite” stream that simulates gameplay and everything. I say “Fortnite” in scare quotes because this is just an AI representation of what Fortnite looks like, not the real thing. Either way, the only thing worse than mindless game streams is arguably mindless game streams that never even happened. And to be honest, the idea of simulating a simulation makes my brain feel achy, so for that reason alone, I’m going to hard pass.

Uhhh… I don’t think Veo 3 is supposed to be generating Fortnite gameplay pic.twitter.com/bWKruQ5Nox

— Matt Shumer (@mattshumer_) May 21, 2025

Listen, I’m not trying to be an alarmist here. In the grand scheme of things, AI-generated YouTube, Twitch, or TikTok chum isn’t going to hurt anyone, exactly, but it also doesn’t paint a rosy portrait of our AI-generated future. If there’s one thing we don’t need more of, it’s filler. Social media, without AI entering the equation, is already mostly junk, and it does make one wonder what the results of widespread generative video will really be in the end. Maybe I’ll wind up with AI-generated egg on my face, and video generators like Flow, Google’s “AI filmmaker,” will be a watershed product for real creators, but I have my doubts.

At the very least, I’d like to see some safeguards if video generation is going to go mainstream. As harmless as AI slop might be, the ability to generate fairly convincing video isn’t one that should be taken lightly. There’s obviously huge potential for misinformation and propaganda, and if all it takes to help mitigate that is watermarking videos created in Veo 3, then it feels like an easy first step. For now, we’ll just have to take the explosion of Veo 3-enabled content with a spoonful of molasses, because there’s a lot of slop to get to, and this might be just the first course.





Gaming Gear

I tried Google’s new AI try-on feature, and it’s given me some new fashion ideas

by admin May 22, 2025



Google has rolled out a new AI-powered shopping feature to help you figure out how the clothes you’re interested in buying might look on you. It’s dubbed “try it on,” and it’s available right now in the US through Google Search Labs.

To get started, you just need to switch it on in the lab. Then, you upload a full-length photo of yourself and start looking for clothes in the Google Shopping tab.

When you click on an image of some outfit from the search results, you’ll see a little “try it on” button in the middle of the enlarged version of the outfit in the right-hand panel. One click and about ten seconds later, you’ll see yourself wearing the outfit. It may not always be a perfect illusion, but you’ll at least get a sense of what it would look like on you.



Google claims the whole thing runs on a model trained to see the relationship between your body and clothing. The AI can, therefore, realistically drape, stretch, and bunch material across a variety of body types.
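
Google hasn't published an API for try-on, so the sketch below is purely hypothetical: it just pins down the shape the description implies, one model conditioned on two images (your photo and the garment) returning a composite. The crude paste is a stand-in so the sketch runs; the real model does the draping.

```python
# Hypothetical try-on signature; none of this is a real Google API.
from PIL import Image

def virtual_try_on(person: Image.Image, garment: Image.Image) -> Image.Image:
    """Stand-in for a model that 'drapes, stretches, and bunches'
    the garment onto the person's body."""
    composite = person.copy()
    w, h = composite.size
    # Crude placeholder: scale the garment to the torso and paste it.
    scaled = garment.resize((w // 2, h // 3))
    composite.paste(scaled, (w // 4, h // 4))
    return composite

# Usage: a full-length photo of yourself plus a product image.
# result = virtual_try_on(Image.open("me.jpg"), Image.open("dress.jpg"))
```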

The feature doesn’t work with every piece of clothing you might see, or even every type of outfit. The clothing retailer has to opt into the program, and Google said it only works for shirts, pants, dresses, and skirts.

I did notice that swimwear and most costumes had no usable images, though costumes that looked enough like regular clothes worked, and I could put shorts on myself. The AI also didn’t seem to have an issue with jackets and coats as categories.

Elvis looks

(Image credit: Photo/Google AI)

For instance, on Google Shopping, I found replicas of the outfits Elvis wore for his 1968 comeback special and one of his jumpsuits from the 1970s. With a couple of clicks, I could see myself dressed as the King in different eras.


It even changed my shoes in the all-black suit. I’d always wondered if I could pull off either look. The images are shareable, and you can save or send them to others from the Google mobile app and see how much of an Elvis your friends think you are.

Super summer

(Image credit: Photo/Google AI)

The details that the AI changes to make the photos work are impressive. I used the AI to try on a fun summer look and the closest thing to a superhero costume I could find. The original photo is me in a suit and jacket with a bowtie and black dress shoes. But the shoes and socks on both AI-generated images not only match what was in the search result, but they’re shaped to my stance and size.

Plus, despite wearing long sleeves and pants, the AI found a way to show some of my arms and legs. The color matches reality, but its imperfections are noticeable to me. My legs look too skinny in both, like the AI thinks I skipped leg day, and my legs in the shorts have not been that hairless since I turned 13.

Imperfections aside, it does feel like this will be a major part of the next era of e-commerce. The awkward guessing of whether a color or cut works for your skin tone and build might be easier to resolve.

I wouldn’t say it can make up for trying clothes on in real life, especially when it comes to sizing and comfort, but as a digital version of holding an outfit up against you while you look in a mirror, it’s pretty good.

Ending unnecessary returns

(Image credit: Photo/Google AI)

Uncanny as some of the resulting images are, I think this will be a popular feature for Google Shopping. I’d expect it to be heavily imitated by rivals in AI development and online retail, where it isn’t already.

I particularly like how the AI lets you see how you’d look in more outlandish or bold looks you might hesitate to try on at a store. For example, the paisley jacket and striped pants on the left or the swallowtail jacket and waistcoat with Victorian trousers on the right. I’d hesitate to order either look and would almost certainly plan on returning one or both of them even before they arrive.

Returns are a plague on online retailers and waste tons of packaging and other resources. But if Google shows us how we’d look in clothes before we buy them, it could chip away at return rates; retailers will race to sign up for the program.

It could also open the door to more personalized style advice from AI. You could soon have an AI personal dresser, ready to give you a virtual fit check and suggest your next look, even if it isn’t something Elvis would have worn.

Product Reviews

Hands on: I tried Google’s Android XR prototype glasses, and they can’t do much yet, but Meta should still be terrified

by admin May 21, 2025




Google’s Android XR glasses can’t do very much… yet. At Google I/O 2025, I got to wear the new glasses and try some key features – three features exactly – and then my time was up. These Android XR glasses aren’t the future, but I can certainly see the future through them, and my Meta Ray Ban smart glasses can’t match anything I saw.

The Android XR glasses I tried had a single display, and it did not fill the entire lens. The glasses projected onto a small frame in front of my vision that was invisible unless filled with content.

To start, a tiny digital clock showed me the time and local temperature, information drawn from my phone. It was small and unobtrusive enough that I could imagine letting it stay active at the periphery.



Google Gemini is very responsive on this Android XR prototype

(Image credit: Philip Berne / Future)

The first feature I tried was Google Gemini, which is making its way onto every device Google touches. Gemini on the Android XR prototype glasses is already more advanced than what you might have tried on your smartphone.

I approached a painting on the wall and asked Gemini to tell me about it. It described the pointillist artwork and the artist. I said I wanted to look at the art very closely and I asked for suggestions on interesting aspects to consider. It gave me suggestions about pointillism and the artist’s use of color.

The conversation was very natural. Google’s latest voice models for Gemini sound like a real human. The glasses also did a nice job pausing Gemini when somebody else was speaking to me. There wasn’t a long delay or any frustration. When I asked Gemini to resume, it said ‘no problem’ and started up quickly.

That’s a big deal! The responsiveness of smart glasses is a metric I haven’t considered before, but it matters. My Meta Ray Ban Smart Glasses have an AI agent that can look through the camera, but it works very slowly. It responds slowly at first, and then it takes a long time to answer the question. Google’s Gemini on Android XR was much faster and that made it feel more natural.
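
Under the hood, this demo is multimodal Gemini: a camera frame plus a spoken question. Google hasn't documented the glasses' pipeline, but a rough phone-side equivalent with the google-genai Python SDK would look something like the sketch below (the model name is my assumption, not what the prototype actually runs).

```python
# Rough equivalent of the painting demo: image + question in, prose out.
from google import genai
from PIL import Image

client = genai.Client()
frame = Image.open("painting.jpg")  # stand-in for a frame from the glasses' camera

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model; the glasses' model isn't public
    contents=[frame, "Describe this painting and suggest details worth a closer look."],
)
print(response.text)
```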

Google Maps on Android XR wasn’t like any Google Maps I’ve seen

Celebrities Giannis Antetokounmpo and Dieter Bohn wear Android XR glasses and shake hands with the crowd (Image credit: Philip Berne / Future)

Then I tried Google Maps on the Android XR prototype. I did not get a big map dominating my view. Instead, I got a simple direction sign with an arrow telling me to turn right in a half mile. The coolest part of the whole XR demo was when the sign changed as I moved my head.

If I looked straight down at the ground, I could see a circular map from Google with an arrow showing me where I am and where I should be heading. The map moved smoothly as I turned around in circles to get my bearings. It wasn’t a very large map – about the size of a big cookie (or biscuit for UK friends) in my field of view.

As I lifted my head, the cookie-map moved upward. The Android XR glasses don’t just stick a map in front of my face. The map is an object in space. It is a circle that seems to remain parallel with the floor. If I look straight down, I can see the whole map. As I move my head upward, the map moves up and I see it from a diagonal angle as it lifts higher and higher with my field of view.

By the time I am looking straight ahead, the map has entirely disappeared and has been replaced by the directions and arrow. It’s a very natural way to get an update on my route. Instead of opening and turning on my phone, I just look towards my feet and Android XR shows me where they should be pointing.
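
To make that handoff concrete, here's a toy model of the behavior: one continuous input, head pitch, decides whether you see the floor-parallel map, the transition, or just the direction sign. The angle thresholds are invented; Google hasn't published any numbers.

```python
# Toy model of the map/sign handoff; thresholds are made up.
def nav_ui(pitch_deg: float) -> str:
    """pitch_deg: 0 = looking straight ahead, -90 = straight down."""
    if pitch_deg <= -60:
        return "full circular map, drawn parallel to the floor"
    if pitch_deg <= -15:
        t = (pitch_deg + 60) / 45  # 0 -> 1 as the head comes up
        return f"map tilting away, {t:.0%} of the way to leaving view"
    return "map hidden; turn arrow and distance sign only"

for pitch in (-80, -45, -20, 0):
    print(f"{pitch:>4} deg -> {nav_ui(pitch)}")
```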

Showing off the colorful display with a photograph

(Image credit: Philip Berne / Future)

The final demo I saw was a simple photograph using the camera on the Android XR glasses. After I took the shot, I got a small preview on the display in front of me. It was about 80% transparent, so I could see details clearly, but it didn’t entirely block my view.
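
"About 80% transparent" just means the preview is composited at roughly alpha = 0.2, so four-fifths of each pixel is still the world behind the lens. A quick worked example of the standard alpha blend, with made-up pixel values:

```python
# Alpha compositing at 20% opacity; RGB values are arbitrary examples.
import numpy as np

alpha = 0.2                              # preview opacity (80% see-through)
scene = np.array([200.0, 180.0, 160.0])  # world behind the lens
preview = np.array([40.0, 40.0, 40.0])   # dark photo thumbnail

blended = alpha * preview + (1 - alpha) * scene
print(blended)  # [168. 152. 136.] -- mostly scene, faint preview on top
```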

Sadly that was all the time Google gave me with the glasses today, and the experience was underwhelming. In fact, my first thought was to wonder if the Google Glass I had in 2014 had the exact same features as today’s Android XR prototype glasses. It was pretty close.

My old Google Glass could take photos and video, but it did not offer a preview on its tiny, head-mounted display. It had Google Maps with turn directions, but it did not have the animation or head-tracking that Android XR offers.

There was obviously no conversational AI like Gemini on Google Glass, and it could not look at what you see and offer information or suggestions. What makes the two similar? They both lack apps and features.

Which comes first, the Android XR software or the smart glasses to run it?

(Image credit: Philip Berne / Future)

Should developers code for a device that doesn’t exist? Or should Google sell smart glasses even though there are no developers yet? Neither. The problem with AR glasses isn’t just a chicken and egg problem of what comes first, the software or the device. That’s because AR hardware isn’t ready to lay eggs. We don’t have a chicken or eggs, so it’s no use debating what comes first.

Google’s Android XR prototype glasses are not the chicken, but they are a fine looking bird. The glasses are incredibly lightweight, considering the display and all the tech inside. They are relatively stylish for now, and Google has great partners lined up in Warby Parker and Gentle Monster.

The display itself is the best smart glasses display I’ve seen, by far. It isn’t huge, but it has a better field of view than the rest; it’s positioned nicely just off-center from your right eye’s field of vision; and the images are bright, colorful (if translucent), and flicker-free.

The author in Ray-Ban Meta Smart Glasses looking dumbfounded (Image credit: Future / Philip Berne)

When I first saw the time and weather, it was a small bit of text and it didn’t block my view. I could imagine keeping a tiny heads-up display on my glasses all the time, just to give me a quick flash of info.

This is just the start, but it’s a very good start. Other smart glasses haven’t felt like they belonged at the starting line, let alone on retail shelves. Eventually, the display will get bigger, and there will be more software. Or any software, because the feature set felt incredibly limited.

Still, with just Gemini’s impressive new multi-modal capabilities and the intuitive (and very fun) Google Maps on XR, I wouldn’t mind being an early adopter if the price isn’t terrible.

My Ray-Ban Meta Smart Glasses are mostly just sunglasses now (Image credit: Future / Philip Berne)

Of course, Meta Ray Ban Smart Glasses lack a display, so they can’t do most of this. The Meta Smart Glasses have a camera, but the images are beamed to your phone. From there, your phone can save them to your gallery, or even use the Smart Glasses to broadcast live directly to Facebook. Just Facebook – this is Meta, after all.

With its Android provenance, I’m hoping whatever Android XR smart glasses we get will be much more open than Meta’s gear. It must be. Android XR runs apps, while Meta’s Smart Glasses are run by an app. Google intends Android XR to be a platform. Meta wants to gather information from cameras and microphones you wear on your head.

I’ve had a lot of fun with the Meta Ray Ban Smart Glasses, but I honestly haven’t turned them on and used the features in months. I was already a Ray Ban Wayfarer fan, so I wear them as my sunglasses, but I never had much luck getting the voice recognition to wake up and respond on command. I liked using them as open ear headphones, but not when I’m in New York City and the street noise overpowers them.

I can’t imagine that I will stick with my Meta glasses once there is a full platform with apps and extensibility – the promise of Android XR. I’m not saying that I saw the future in Google’s smart glasses prototype, but I have a much better view of what I want that smart glasses future to look like.

Product Reviews

We tried on Google’s prototype AI smart glasses

by admin May 21, 2025


Here in sunny Mountain View, California, I am sequestered in a teeny-tiny box. Outside, there’s a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google’s Android XR smart glasses prototypes. (The Project Mariner booth is maybe 10 feet away and remarkably empty.)

While nothing was going to steal AI’s spotlight at this year’s keynote — 95 mentions! — Android XR has been generating a lot of buzz on the ground. But the demos we got to see here were notably shorter, with more guardrails, than what I got to see back in December. Probably because, unlike a few months ago, there are cameras everywhere and these are “risky” demos.

The Project Moohan VR headset.

First up is Project Moohan. Not much has changed since I first slipped on the headset. It’s still an Android-flavored Apple Vision Pro, albeit much lighter and more comfortable to wear. Like Oculus headsets, it has a dial in the back that lets you adjust the fit. If you press the top button, it brings up Gemini. You can ask Gemini to do things, because that is what AI assistants are here for. Specifically, I ask it to take me to my old college stomping grounds in Tokyo in Google Maps without having to open the Google Maps app. Natural language and context, baby.

But that’s a demo I’ve gotten before. The “new” thing Google has to show me today is spatialized video. As in, you can now get 3D depth in a regular old video you’ve filmed without any special equipment. (Never mind that the example video I’m shown is most certainly filmed by someone with an eye for enhancing dramatic perspectives.)

When angled just so, you can see a glimpse of the hidden display.

Because of the clamoring crowd outside, I’m then given a quick run-through of Google’s prototype Android XR glasses. Emphasis on prototype. They’re simple; it’s actually hard to spot the camera in the frame and the discreet display in the right lens. When I slip them on, I can see a tiny translucent screen showing the time and weather. If I press the temple, it brings up — you guessed it — Gemini. I’m prompted to ask Gemini to identify one of two paintings in front of me. At first, it fails because I’m too far away. (Remember, these demos are risky.) I ask it to compare the two paintings, and it tells me some obvious conclusions. The one on the right uses brighter colors, and the one on the left is more muted and subdued.

Tapping the side will bring up Gemini on the Android XR prototype glasses.

On a nearby shelf, there are a few travel guidebooks. I tell Gemini a lie — that I’m not an outdoorsy type, so which book would be the best for planning a trip to Japan? It picks one. I’m then prompted to take a photo with the glasses. I do, and a little preview pops up on the display. Now that’s something the Ray-Ban Meta smart glasses can’t do — and arguably, one of the Meta glasses’ biggest weaknesses for the content creators that make up a huge chunk of its audience. The addition of the display lets you frame your images. It’s less likely that you’ll tilt your head for an accidental Dutch angle or have the perfect shot ruined by your ill-fated late-night decision to get curtain bangs.

These are the safest demos Google can do. Though I don’t have video or photo evidence, the things I saw behind closed doors in December made a more convincing case for why someone might want this tech. There were prototypes with not one, but two built-in displays, so you could have a more expansive view. I got to try the live AI translation. The whole “Gemini can identify things in your surroundings and remember things for you” demo felt personalized, proactive, powerful, and pretty dang creepy. But those demos ran within tightly controlled guardrails — and at this point in Google’s story of smart glasses redemption, it can’t afford a throng of tech journalists all saying, “Hey, this stuff? It doesn’t work.”

Reminder: this is a prototype. What’ll end up shipping to consumers will be different.

Meta is the name that Google hasn’t said aloud with Android XR, but you can feel its presence loom here at the Shoreline. You can see it in the way Google announced stylish eyewear brands like Gentle Monster and Warby Parker as partners in the consumer glasses that will launch… sometime, later. This is Google’s answer to Meta’s partnership with EssilorLuxottica and Ray-Ban. You can also see it in the way Google is positioning AI as the killer app for headsets and smart glasses. Meta, for its part, has been preaching the same for months — and why shouldn’t it? It’s already sold 2 million units of the Ray-Ban Meta glasses.

The problem is, even though Google let us take photos and video this time, it is so freakin’ hard to convey why Silicon Valley is so gung-ho on smart glasses. I’ve said it time and time again: you have to see it to believe it. Renders and video capture don’t cut it. Even then, even if, in the limited time we have, we could frame the camera just so and give you a glimpse into what I see when I’m wearing these things — it just wouldn’t be the same.





Product Reviews

Google’s new Flow tool brings AI magic to video creation

by admin May 21, 2025



Google’s latest I/O event, which took place on Tuesday, showcased a striking expansion of AI across its growing range of products, with new generative tools like Imagen 4 for images, Veo 3 for video, and Flow for AI-driven filmmaking taking center stage alongside a revamped AI-powered search experience and the premium Google AI Ultra subscription.

The all-new Flow editing tool enables AI-powered movie creation and — at least at first glance — marks a significant leap in Google’s creative AI capabilities.

Flow incorporates Veo, Imagen, and Gemini AI models, and the new editing tool is designed to help storytellers develop their ideas and create cinematic clips and scenes for their stories.

Notably, the AI tool lets you easily produce video content by combining generated visuals and audio, which is supposed to enable a rapid workflow from prompt to the desired result.

Google said that Flow is aimed at “professionals or those just getting started,” which means pretty much everyone with an interest in filmmaking. Features include camera controls that let you create precisely the kind of shot you’re after, bringing direct control of camera motion, angles, and perspectives.

Other Flow features include scenebuilder, which lets you seamlessly edit and extend your existing shots, so you can reveal more of the action or smoothly transition to what happens next with continuous motion and consistent characters.
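
Flow is a UI product with no public API, so the classes below are invented purely to illustrate the controls described here: per-shot camera motion and angle, plus scenebuilder-style extension of an existing shot with continuous motion and consistent characters.

```python
# Hypothetical shot structure; nothing here is a real Flow interface.
from dataclasses import dataclass, field

@dataclass
class Camera:
    motion: str = "static"
    angle: str = "eye level"

@dataclass
class Shot:
    prompt: str
    camera: Camera = field(default_factory=Camera)
    extends: "Shot | None" = None  # continue a previous shot

opening = Shot(
    prompt="A lone hyena pads across the savanna at dawn",
    camera=Camera(motion="slow dolly-in", angle="low angle"),
)
continuation = Shot(
    prompt="It stops, turns, and laughs straight into the lens",
    camera=Camera(motion="tracking"),
    extends=opening,  # same character, continuous motion
)
print(continuation.extends.prompt)
```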

Google’s new AI tool is a wake-up call for OpenAI’s Sora, which is one of Flow’s high-profile competitors. Both are designed for filmmakers and creatives to generate cinematic video scenes from text prompts, but Flow distinguishes itself by integrating native audio generation.

Established creatives and those on film crews will be looking at these new tools with some trepidation, as Google’s latest AI-powered content creation products have even greater potential to disrupt traditional filmmaking, advertising, and similar industries. But some filmmakers are embracing the profound changes heading their way, with Oscar-nominated director Darren Aronofsky, for one, announcing a partnership with Google for a new generative-AI storytelling initiative to create short films using some of Google’s newly announced tools, IndieWire reported.

“Filmmaking has always been driven by technology,” Aronofsky said in a statement. “After the Lumiere Brothers and Edison’s ground-breaking invention, filmmakers unleashed the hidden storytelling power of cameras. Later technological breakthroughs — sound, color, VFX — allowed us to tell stories in ways that couldn’t be told before. Today is no different. Now is the moment to explore these new tools and shape them for the future of storytelling.”

Flow is available today for Google AI Pro and Ultra plan subscribers in the U.S., with more countries coming soon.






