Product Reviews

Nvidia CEO Jensen Huang Is Bananas for Google Gemini’s AI Image Generator

by admin September 17, 2025


Nvidia CEO Jensen Huang is in London, standing in front of a room full of journalists, outing himself as a huge fan of Gemini’s Nano Banana. “How could anyone not love Nano Banana? I mean Nano Banana, how good is that? Tell me it’s not true!” He addresses the room. No one responds. “Tell me it’s not true! It’s so good. I was just talking to Demis [Hassabis, CEO of DeepMind] yesterday and I said ‘How about that Nano Banana! How good is that?’”

Lots of people seem to agree with him: The popularity of the Nano Banana AI image generator—which launched in August and allows users to make precise edits to AI images while preserving the quality of faces, animals, or other objects in the background—drove Gemini users to generate more than 300 million images in the first few days of September, according to a post on X by Josh Woodward, VP of Google Labs and Google Gemini.

Huang, whose company was among a cohort of big US technology companies to announce investments into data centers, supercomputers, and AI research in the UK on Tuesday, is on a high. Speaking ahead of a white-tie event with UK prime minister Keir Starmer (where he plans to wear custom black leather tails), he’s boisterously optimistic about the future of AI in the UK, saying the country is “too humble” about its potential for AI advancements.

He cites the UK’s pedigree in areas as varied as the industrial revolution, steam trains, DeepMind (now owned by Google), and university research, along with other, more tangential skills. “No one fries food better than you do,” he quips. “Your tea is good. You’re great. Come on!”

Nvidia announced a $683 million equity investment in data center builder Nscale this week, a move that—alongside investments from OpenAI and Microsoft—has propelled the company to the epicenter of this AI push in the UK. Huang estimates that Nscale will generate more than $68 billion in revenues over six years. “I’ll go on record to say I’m the best thing that’s ever happened to him,” he says, referring to Nscale CEO Josh Payne.

“As AI services get deployed—I’m sure that all of you use it. I use it every day and it’s improved my learning, my thinking. It’s helped me access information, access knowledge a lot more efficiently. It helps me write, helps me think, it helps me formulate ideas. So my experience with AI is likely going to be everybody’s experience. I have the benefit of using all the AI—how good is that?”

The leather-jacket-wearing billionaire, who previously told WIRED that he uses AI agents in his personal life, has expanded on how he uses AI tools other than Nano Banana for most daily tasks, including his public speeches and research.

“I really like using an AI word processor because it remembers me and knows what I’m going to talk about. I could describe the different circumstance that I’m in and yet it still knows that I’m Jensen, just in a different circumstance,” Huang explains. “In that way it could reshape what I’m doing and be helpful. It’s a thinking partner, it’s truly terrific, and it saves me a ton of time. Frankly, I think the quality of work is better.”

His favorite one to use “depends on what I’m doing,” he says. “For something more technical I will use Gemini. If I’m doing something where it’s a bit more artistic I prefer Grok. If it’s very fast information access I prefer Perplexity—it does a really good job of presenting research to me. And for near everyday use I enjoy using ChatGPT,” Huang says.

“When I am doing something serious I will give the same prompt to all of them, and then I ask them to, because it’s research oriented, critique each other’s work. Then I take the best one.”

In the end though, all topics lead back to Nano Banana. “AI should be democratized for everyone. There should be no person who is left behind, it’s not sensible to me that someone should be left behind on electricity or the internet of the next level of technology,” he says.

“AI is the single greatest opportunity for us to close the technology divide,” says Huang. “This technology is so easy to use—who doesn’t know how to use Nano?”





Gaming Gear

Writing a Good AI Image Prompt Isn’t Hard, but You Need These Essential Elements

by admin September 16, 2025


One of the first things I learned while testing AI image generators is that there are a lot of things that can go wrong when you’re trying to get the image you see in your head to appear on your screen. If you’ve ever used an AI image or video generator, you know what I mean.

I’ve spent the past year testing and reviewing different AI image generators, and I’ve generated hundreds of images across services like Google’s Nano Banana model, Midjourney and DALL-E. Not all of the images I created have been winners; a bunch have been downright horrifying. But all that testing taught me that the best way to avoid creating a wonky AI image is to use a good prompt.

Prompt engineering, as experts call it, is knowing what words to use to get AI products to do what you want. For AI images, that means creating a holistic description of what you want, beyond just the characters and setting. No matter what service you use, there are essential elements you need in every prompt for the best results. This is especially important if the generator you’re using doesn’t have a lot of editing tools, like the ability to upload reference images or fix weird hallucinations.


Writing a good AI image prompt isn’t hard, but it may take a little more work than you expect. These are my best expert tips for crafting the right prompt, including some helpful phrases to use and common mistakes to avoid.

Start with these three elements

When you first write your prompt, you might feel overwhelmed or like you’re not sure where to start. I’ve been there, and the best place to begin is with the essentials. These are the three necessary elements every prompt needs. Once you have something for each of these, you can build it out from there.

  • Characters and elements in the scene
  • Setting or where it takes place
  • Dimensions, like portrait, landscape or a specific ratio (3:2, 16:9, etc.)
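The three essentials above can be sketched as a small helper that assembles them into a single prompt string. The `build_prompt` function and its field names are illustrative conventions for this article, not part of any particular generator's API; later sections add optional style and palette details, which are included here as optional arguments.

```python
# Minimal sketch: compose an image prompt from the essential elements.
# Field names and formatting are illustrative, not tied to any service.

def build_prompt(characters: str, setting: str, dimensions: str,
                 style: str = "", palette: str = "") -> str:
    """Join the essential elements (plus optional style details) into one prompt."""
    parts = [characters, setting, f"{dimensions} aspect ratio"]
    if style:
        parts.append(f"{style} style")
    if palette:
        parts.append(f"{palette} color palette")
    return ", ".join(parts)

prompt = build_prompt(
    characters="a golden retriever wearing a red bandanna",
    setting="sitting in a sunlit park",
    dimensions="16:9",
    style="photorealistic",
    palette="warm",
)
print(prompt)
```

The point of structuring it this way is that each element stays a short, concrete phrase; you can swap in a different style or palette without rewriting the whole prompt.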

You might be tempted to add some exclusionary characteristics in your prompt, or things that you do not want in your image. I would caution against it. Even the most prompt-adherent generator is likely to ignore these, or worse, misread the prompt and include something you specifically asked it not to. If you want to eliminate an element from one image, it’s usually easier to do that in the editing stage rather than in the original prompt.

Specify the style and color palette you want

Beyond the “who, what and where” in your basic prompt, you’ll want to guide the generator toward a specific style. Here are some of the most popular styles of AI images.

  • Photorealistic: As close to real life as possible. AI image generators aren’t great at this, but it’s worth trying.
  • Stock photography: Like real photos, but shinier and brighter.
  • Product features: Emphasizes individual elements over the background or scene.
  • Cartoon: Fun, bright and usually less detailed.
  • Illustration: Similar to paintings or pencil sketches.
  • Gaming/Game UI: More advanced than cartoon, sometimes anime-like.

Include specific colors you want, too. Even if you're not picky about exact shades, you can still lead the generator down the right road by specifying warm or cool tones.

This Canva image keeps the magic alive with a cartoonish warm-toned image.

Katelyn Chedraoui/Canva Magic Media AI

You’ll want different styles for different projects. Photorealistic AI images are likely to be better suited for professional environments than cartoon-style images, but they might not be right for a creative mock-up. Illustrations might be best for more detail-oriented, creative projects, like building out brainstorming ideas, and gaming is good for first iterations of new characters and worlds.

Describe the aesthetic, vibe and emotion

Take your prompt a step further and include a description of the overall aesthetic or vibe. This can help elevate your images and reach that extra layer of detail. These details are a jumping-off point to get you in the ballpark of what you want without overwhelming the generator with a novel-length prompt. Here are some common options to include in your prompt.

  • Abstract
  • Anime
  • Medieval
  • Retro
  • Psychedelic 
  • Glow, neon
  • Geometric
  • Painting, brushstroke, oil painting
  • Comic
  • Noir
  • Vintage
  • Impressionist
  • Simple, minimalistic
  • Fantasy, sci-fi
  • High tech
  • Surrealist

If none of these aesthetics feel right, try picking the closest one and building from there. Include textures, the time period and landmarks. If you care less about a specific style but want to ensure a particular emotional response, try describing that instead. Describing the emotional temperature of a scene can often jump-start the generator toward a specific visual look. For example, happy scenes tend to have bright colors and a warm feel, whether they're photorealistic or illustrations. Stressful scenes might have more detail, cool tones and a foreboding feeling, which the generator may decide fits better with a fantasy or nonrealistic aesthetic.

Leonardo might not understand “cottage core coastal grandma,” but it does understand the rustic feel with blues and warm light.

Katelyn Chedraoui/Leonardo AI

You can try using more specific or pop culture aesthetics, but there's no guarantee the generator will understand and adhere to them. For example, you might want to translate "cottage core coastal grandmother" into "vintage style with a light, breezy feel, using pastel blues and neutral tones." It gets at the same idea with more specific instructions.

My AI images still aren’t right. What now?

Even with a well-written prompt, AI image generators aren't perfect, and you'll get some duds. The tech behind text-to-image generators is advancing, but it's still very much a work in progress.

Tweaking your prompt is the fastest way to troubleshoot big problems. But if issues persist, try narrowing down what exactly is wrong with the images and tracing the problem back to where it may be coming from. For example, if your images aren’t professional-looking enough to present, it could be because the style or aesthetic included in your prompt isn’t right. Even making smaller changes to your presets, like the image dimensions, can make a big difference in the end results.

Midjourney took the “stressful” emotion too far in this image and lost the photorealistic style I wanted.

Katelyn Chedraoui/Midjourney AI

Many AI image services offer post-generation editing tools that can help you fix smaller errors. Services geared toward professional creators, like Adobe Firefly, have extensive tools. More beginner-friendly programs run the gamut: Leonardo has the most, Midjourney a moderate set, and Canva barely any.

Still, it can be frustrating not to get what you want after lots of work. Even more frustrating is that sometimes the best thing to do is start over. Resetting your settings to default, rethinking your prompts and beginning anew can feel like going backward. But when nothing else works, it can be a good last resort.

At the end of the day, AI image generators are not replacements for creators. They’re like other image editing software: You need to spend time getting to know your program, understanding how it works and its editing capabilities. Once you have a handle on your program, you’ll have a good understanding of what kind of prompts deliver the best results. These tips will help get you close to what you want in the meantime.

For more, check out the best AI chatbots and what to know about AI video generators.



Gaming Gear

I turned myself into a 3D figurine with Google’s Nano Banana – here’s how you can hop on the latest AI image trend

by admin September 11, 2025



Google’s latest image model, Gemini 2.5 Flash Image, also known as Nano Banana, has produced a fun new trend using its advanced image capabilities. You can turn yourself (or your pet) into a highly detailed 3D figurine.

All you need to do is upload a photo and supply a very detailed prompt; less than a minute later, you'll have a stylized image of a miniature, plasticized version of the subject, posed on a little clear base, complete with a box and even a wire-frame model to show it off.

This particular flavor of AI-generated toy is exploding across social media right now because the model is fast, free, and surprisingly good at what it does. Unlike earlier versions of these kinds of prompts that ran on GPT-4o or Midjourney, Nano Banana has better prompt adherence, understands packaging and posing more consistently, and renders faces that, while not always perfect, are often impressively accurate.

It’s all built into Google AI Studio and the Gemini apps and website if you want to try it. You just have to upload a picture, ideally a full-body shot, of who or whatever you want to make into a figurine, and submit the right prompt. You can play around with it, but the following template, shared around on social media, works very well.

Type this in:

“Create a 1/7 scale commercialized figurine of the characters in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a round transparent acrylic base, with no text on the base. The content on the computer screen is a 3D modeling process of this figurine. Next to the computer screen is a toy packaging box, designed in a style reminiscent of high-quality collectible figures, printed with original artwork. The packaging features two-dimensional flat illustrations.”
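Beyond the apps, the same template can be sent programmatically. The sketch below uses Google's `google-genai` Python SDK; the model name follows the preview name announced for Nano Banana, but check Google's current documentation before relying on it, and note that actually running the request requires a `GEMINI_API_KEY` environment variable.

```python
# Hedged sketch: send the figurine template plus a photo to Gemini via the
# google-genai SDK. Model name and setup are assumptions based on the
# preview announcement; verify against current Google documentation.
import os

FIGURINE_PROMPT = (
    "Create a 1/7 scale commercialized figurine of the characters in the "
    "picture, in a realistic style, in a real environment. The figurine is "
    "placed on a computer desk. The figurine has a round transparent acrylic "
    "base, with no text on the base. The content on the computer screen is a "
    "3D modeling process of this figurine. Next to the computer screen is a "
    "toy packaging box, designed in a style reminiscent of high-quality "
    "collectible figures, printed with original artwork. The packaging "
    "features two-dimensional flat illustrations."
)

def generate_figurine(photo_path: str):
    # Imported lazily so the prompt constant is usable without the SDK installed.
    from google import genai
    from PIL import Image

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    # The SDK accepts a PIL image alongside text in the contents list.
    return client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=[Image.open(photo_path), FIGURINE_PROMPT],
    )
```

A full-body photo, as the article suggests, gives the model the most to work with.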

When you paste that into Gemini along with a photo, it doesn't just render a toy version of what's in the picture; it imagines the toy existing in the real world, with all the context that goes along with a premium 3D figurine release. It's like the high-end collectible a company would make if you became famous for whatever pose you're in.

Toying with AI

Figurine me (Image credit: Google)

I went with a fun photo of myself from a big circus-themed party a few years ago where I went as a lion tamer (see the small lion in my pocket). I shared the photo with Nano Banana along with the prompt, and twenty seconds later, there “I” was, six inches tall, standing on a desk and looking jaunty with my whip, like I was about to command a herd of miniature jungle cats.

The packaging beside me showed a great illustrated version of the same pose, except it decided I was a ringmaster named Rhett, for some reason. The computer screen behind the figurine showed a 3D modeling window open with “my” miniature wire-frame form on it, being rotated in space like it was being finalized for mass production.

It genuinely looks like a photo, right down to the scuffed desk and random paperwork. Even the stuffed lion in my pocket looked right. It felt like an alternate version of me had been shrink-wrapped and made collectible.

Puppy pose

“Firecracker Fido”. (Image credit: Google)

Next, I decided to try with a photo of my dog, Cabbage. I uploaded a picture of her sitting regally on the ground and used the same default prompt. The toy created by the AI was almost too realistic. I had to look closely to tell it’s supposed to be made of molded plastic.

The screen behind the figure appropriately showed the hound rendered in a 3D modeling program, but the packaging went a little awry: it had multiple images of the dog, as if it were a test sheet of different poses. But I did like that, lacking her real name, the AI went by her bandanna and named her Firecracker Fido.

The thing that struck me after both generations was how smoothly it all worked. No fine-tuning was needed to get 95% of the way there; Nano Banana just understood the visual reference and ran with it. I wouldn't claim it's anything like as valuable as what real human artists can do, but it was a fun experiment.

Much like the Studio Ghibli AI image trend, it’s worthwhile for personal amusement, but the idea of using these images for any kind of money-making scheme to sell actual toys would be several steps beyond propriety.

Crypto Trends

Google Boosts Gemini AI Image Capabilities in Latest Salvo Against ChatGPT

by admin August 26, 2025



In brief

  • Google upgraded Gemini with 2.5 Flash Image, taking aim at OpenAI’s dominance.
  • Developers can remix template apps in AI Studio and deploy custom image projects instantly.
  • Google has expanded access through OpenRouter and fal.ai, widening distribution to coders worldwide.

Google launched Gemini 2.5 Flash Image on Tuesday, delivering a new AI model that generates and edits images with more precision and character consistency than previous tools—attempting to close the gap with OpenAI’s ChatGPT.

The tech giant's move to integrate advanced image editing into Gemini reflects a broader push among AI platforms to make image generation a must-have feature. The new tool, now available across Gemini apps and platforms, lets users edit visuals using natural language—handling complex tasks like pose changes or multi-image fusion without distorting faces or scenes.

In a blog post, Google said the model allows users to “place the same character into different environments, [and] showcase a single product from multiple angles… all while preserving the subject.”

🍌 nano banana is here → gemini-2.5-flash-image-preview

– SOTA image generation and editing

– incredible character consistency

– lightning fast

available in preview in AI Studio and the Gemini API pic.twitter.com/eKx9lwWc9j

— Google AI Studio (@googleaistudio) August 26, 2025

The model first appeared under the pseudonym “nano-banana” on crowdsourced testing site LMArena, where it drew attention for its seamless editing. Google confirmed Tuesday it was behind the tool.

Google said the system can fuse multiple images, maintain character consistency for storytelling or branding, and integrate “world knowledge” to interpret diagrams or combine reference materials—all within a single prompt.



The model costs $30 per million output tokens—about four cents per image—on Google Cloud. It’s also being distributed via OpenRouter and fal.ai.
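A quick back-of-envelope check shows the two quoted numbers are consistent: at $30 per million output tokens, "about four cents per image" implies roughly 1,300 output tokens per generated image. The per-image token count below is inferred from the article's figures, not an official specification.

```python
# Sanity-check the quoted Gemini 2.5 Flash Image pricing.
price_per_million_tokens = 30.00   # USD, Google Cloud output pricing
cost_per_image = 0.04              # USD, "about four cents per image"

# Tokens implied per image at that price point.
tokens_per_image = cost_per_image / price_per_million_tokens * 1_000_000
print(f"~{tokens_per_image:.0f} output tokens per image")
```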

OpenAI introduced the GPT-4o model in May 2024 and added image generation in March 2025, which helped push ChatGPT’s usage above 700 million weekly active users. Google reported 400 million monthly active Gemini users in August 2025, which would indicate weekly usage that considerably trails OpenAI.

Google said all outputs will include an invisible SynthID watermark and metadata tag to mark them as AI-generated to address concerns around misuse and authenticity.

Product Reviews

Gemini image generation is adding more editing tools

by admin August 26, 2025


Google’s Gemini is getting new features for editing images with AI. The latest image editing model is the work of the DeepMind artificial intelligence team, and these updates will be available starting today in the Gemini app. Google noted that all images either generated or edited in Gemini will be visibly and digitally watermarked to designate them as created with artificial intelligence.

The first aspect of the new editing model offers consistency across different versions of an image. It's particularly focused on keeping human characters looking the same as more changes are made to the surrounding visuals, the idea being that people can upload a picture of someone and have it stay true to reality as they place themselves in different settings or outfits with AI.

The second part of the model involves advanced image editing options. These include the ability to combine two separate pre-existing images into a new scene, to use a visual trait from an existing image as a new prompt or design element, and to perform multi-stage editing, where individual components of an image can be changed in sequence without losing the accumulated updates.

Image generation of humans was unavailable for a few months last year after Gemini created inaccurately diverse results for prompts based on particular places and times in history. Users regained the ability to AI-generate images of people with the Imagen 3 model.



Product Reviews

Trippy Image From Deep Space Shows Earth and Moon From 180 Million Miles Away

by admin August 21, 2025


The Psyche spacecraft is on a six-year journey to reach a metal-rich asteroid by the same name. Well into its voyage, the probe looked back at its home planet and captured a rare view of Earth, accompanied by its Moon, as a mere speck engulfed by the dark void of space.

NASA’s Psyche mission launched on October 13, 2023, and is assigned to explore a distant target in the main asteroid belt that’s believed to be the exposed core of a protoplanet. Before it reaches its destination, the imaging team behind the mission is testing the spacecraft’s ability to capture objects that shine by light reflected from the Sun. The target objects of these tests are awfully familiar—our very own planet and moon—but they were taken from a rather unfamiliar perspective.

In July, scientists on the imaging team snapped multiple long-exposure photos of Earth and the Moon. The pair is seen against a dark background littered with several stars in the constellation Aries. Earth appears as a bright dot, with the Moon sitting right above it. The image was taken from about 180 million miles (290 million kilometers) away and offers a rare look at our planet as seen from deep space.

The photo brings the famous Pale Blue Dot to mind, an image of Earth captured by the Voyager 1 spacecraft in 1990. That image was taken from a distance of 3.7 billion miles (6 billion kilometers), with Earth appearing as a mere speck amid the cosmic backdrop.

Although it wasn’t captured from the same distance, Psyche’s recent image is a similar reminder of Earth’s place and size in the solar system. The spacecraft is equipped with a pair of cameras, designed to collect pictures in wavelengths of light that are both visible and invisible to the human eye, to help determine the composition of the metal-rich asteroid.

Psyche needs to travel a total of around 2.2 billion miles to reach the main asteroid belt and enter asteroid Psyche’s orbit in late July 2029. The 173-mile-wide (280-kilometer) asteroid orbits the Sun in the outer part of the main asteroid belt between Mars and Jupiter. Scientists believe the space rock might be an exposed core of a planetesimal, or an early planetary building block, which was stripped of its outer layer during the early formation of the solar system.


