Laughing Hyena
Tag: AIGenerated

Game Updates

Nexon respond to claims they’re using AI-generated TikTok streamers to advertise The First Descendant

by admin August 19, 2025


South Korean publishers Nexon are investigating a recent bit of TikTok marketing for their free-to-play shooter The First Descendant, after players spotted some ads that feature AI-generated ‘human’ streamers bigging up the game. Well, we at least have to assume that’s what prompted it, as Nexon’s statement omits any mention of AI in favour of the wonderfully nebulous phrase “certain irregularities”.

If you’ve not seen one of these ads, which look to have been posted by The First Descendant’s official account, the examples compiled into one Reddit post by user iHardlyTriHard make it clear right quick why folks have been asking questions.

As you can see in those clips, them be some weird renderings of streamy blokes, especially the one who says Nier: Automata in a fashion that’d be hilarious if this wasn’t terrible AI sloppage. Seriously, he pronounces it NEER (pause for ten seconds) audomada. He also gets points for doubling up on his use of free-to-play in his attempt to get you hyped about an F2P update to the F2P looter shooter, which has launched F2Ply.

Anyway, in addition to the more obviously not-streamers, it appears the first one in the compilation below may be based on the likeness of streamer DanieltheDemon. He’s since taken to the comments section of a TikTok about the clips to make clear he didn’t agree to any of this. “I have no affiliation nor contract with The First Descendant,” the streamer wrote. “They stole my face/reactions from my most viral video and used AI to change what my mouth says and a voice that isn’t mine. I did not consent for my likeness to be used…”

While you can’t find these full ads if you go to The First Descendant’s official TikTok account, you can see some of the gameplay clips the AI streamers have been paired with to deliver their samey sales pitches.

What do Nexon have to say about this? The following:

We would like to inform you of certain irregularities identified in the operation of our TikTok Creative Challenge for creators. As a part of our marketing campaign for Season 3: Breakthrough, we recently ran a Creative Challenge program for TikTok creators, which allows creators to voluntarily submit their content to be used as advertising materials. All submitted videos are verified through TikTok’s system to check copyright violations before they are approved as advertising content.

However, we have become aware of cases where the circumstances surrounding the production of certain submitted videos appear inappropriate. Thus, we are conducting a thorough joint investigation with TikTok to determine the facts. We sincerely apologize for the delay in providing this notice as the review is taking longer than expected. Once the verification is complete, we will promptly share an update through an official notice.

So, the company ran a competition of sorts that let anyone create ads for the game, which is a strange example of outsourcing at the very least, then seemingly left final approval for publication to TikTok’s copyright checker, which merrily waved this dodgy stuff through. Either that or Nexon potentially gave it all the green light themselves. Either way, oof.

We’ll have to see what the company say once that investigation’s concluded, but here’s hoping the next statement’s a bit more like the easier-to-parse response offered when Nexon were accused of taking a bit too much inspiration from some Destiny 2 icons.





Esports

Nexon launches investigation into claims AI-generated The First Descendant ads were used on TikTok

by admin August 18, 2025


Nexon said it has launched an investigation after The First Descendant players noticed a number of seemingly AI-generated advertisements on TikTok, including one that used a content creator’s likeness without permission.

The developer/publisher issued a statement after Reddit user iHardlyTriHard penned a viral post collating four AI-like advertisements they had found after “only 10-15 minutes” browsing their For You Page.

The statement did not apologize directly for the AI advertisements but did apologize for “the delay in providing notice as the [investigative] review is taking longer than expected.” Nexon did not, however, explain when it was notified of the adverts, or when it launched its review.

Nexon said it wanted to “inform [players] of certain irregularities identified in the operation of [its] TikTok Creative Challenge for creators,” and said the campaign “allows creators to voluntarily submit their content to be used as advertising materials.” It also stressed that all submitted videos had been “verified through TikTok’s system.”

“However, we have become aware of cases where the circumstances surrounding the production of certain submitted videos appear inappropriate,” the statement added. “Thus, we are conducting a thorough joint investigation with TikTok to determine the facts.

“We sincerely apologize for the delay in providing this notice as the review is taking longer than expected. Once the verification is complete, we will promptly share an update through an official notice. Thank you for your patience and understanding.”

Nexon has yet to respond to DanieltheDemon’s public claim that he “did not consent for my likeness to be used.”



Game Reviews

Jurassic World Evolution 3 is ditching its AI-generated art after “some initial feedback”

by admin June 24, 2025


Jurassic World Evolution 3’s unveiling earlier this month drew some mixed responses; there was, for instance, ample excitement over the fact baby dinosaurs will finally be wandering all over the place come its arrival later this year, but less enthusiasm for Frontier Developments’ decision to whip the theme park sim’s scientist avatar art into existence using generative AI. But now, the studio has confirmed it’s reversing course on the latter after “some initial feedback”.


Word that Frontier would be relying on generative AI to create its boffins first emerged via Jurassic World Evolution 3’s Steam page, which, alongside the game’s modest system requirements, included the AI disclosure statement – as mandated by Valve – that “Scientists’ avatars” would be created using the controversial technology. This, seemingly, referred to the face portraits accompanying employable staff – rather than key characters – throughout the game.


Some fans have dismissed concerns around the use of generative AI by pointing out that scientist avatars are a fairly minor element of Jurassic World Evolution 3, and there’ve also been suggestions the game’s 2021 predecessor made similar use of the technology – albeit before Steam’s mandatory disclosure rule came into play. But the pushback among the community has been significant enough that Frontier has taken note and ditched its AI usage.

Jurassic World Evolution 3 announcement trailer.


It shared the news on the game’s Steam forum, but a longer statement was provided to Game Watcher. “We have removed the use of generative AI for scientists portraits in Jurassic World Evolution 3 following some initial feedback,” Frontier wrote. “The team are continuing their diligent work on the game and are very much looking forward to launching on 21st October.”


Frontier, of course, is far from the only developer to have dabbled with generative AI, and far from the only one to face criticism for doing so. Activision was accused of creating “AI slop” by Call of Duty fans after AI-generated art was used in promotional material for the billion-dollar franchise, including a Santa zombie crafted with so little artistic regard that apparently nobody at Activision noticed it had six fingers. Microsoft also recently drew criticism after releasing an AI-generated, playable (in the loosest sense of the word) demo “inspired” by Quake 2.


For all the controversy, though, it’s clear AI is changing game development forever, and numerous companies, including Ubisoft and Take-Two, have discussed exploring generative AI “tools”. Nintendo’s Doug Bowser recently addressed the technology too, acknowledging its potential to “enhance productivity”, while adding, “There’s always, always going to be a human touch, and a human engagement in how we develop and build our games.”



Gaming Gear

Ancestra says a lot about the current state of AI-generated videos

by admin June 19, 2025


After watching writer / director Eliza McNitt’s new short film Ancestra, I can see why a number of Hollywood studios are interested in generative AI. Many of the shots were made and refined solely with prompts, in collaboration with Google’s DeepMind team. It’s obvious what Darren Aronofsky’s AI-focused Primordial Soup production house and Google stand to gain from the normalization of this kind of creative workflow. But when you sit down to listen to McNitt and Aronofsky talk about how the short came together, it is hard not to think about generative AI’s potential to usher in a new era of “content” that feels like it was cooked up in a lab — and put scores of filmmakers out of work in the process.

Inspired by the story of McNitt’s own complicated birth, Ancestra zooms in on the life of an expectant mother (Audrey Corsa) as she prays for her soon-to-be-born baby’s heart defect to miraculously heal. Though the short features a number of real actors performing on practical sets, Google’s Gemini, Imagen, and Veo models were used to develop Ancestra’s shots of what’s racing through the mother’s mind and the tiny, dangerous hole inside of the baby’s heart. Inside the mother’s womb, we’re shown Blonde-esque close-ups of the baby, whose heartbeat gradually becomes part of the film’s soundtrack. And the woman’s ruminations on what it means to be a mother are visualized as a series of very short clips of other women with children, volcanic explosions, and stars being born after the Big Bang — all of which have a very stock-footage-by-way-of-gen-AI feel to them.

It’s all very sentimental, but the message being conveyed about the power of a mother’s love is clichéd, particularly when it’s juxtaposed with what is essentially a montage of computer-generated nature footage. Visually, Ancestra feels like a project that is trying to prove that all of the AI slop videos flooding the internet are actually something to be excited about. The film is so lacking in fascinating narrative substance, though, that it feels like a rather weak argument in favor of Hollywood’s rush to get to the slop trough while it’s hot.

As McNitt smash cuts to quick shots of different kinds of animals nurturing their young and close-ups of holes being filled in by microscopic organisms, you can tell that those visuals account for a large chunk of the film’s AI underpinnings. They each feel like another example of text-to-video models’ ability to churn out uncanny-looking, decontextualized footage that would be difficult to incorporate into a fully produced film. But in the behind-the-scenes making-of video that Google shared in its announcement last week, McNitt speaks at length about how, when faced with the difficult prospect of having to cast a real baby, it made much more sense to her to create a fake one with Google’s models.

“There’s just nothing like a human performance and the kind of emotion that an actor can evoke,” McNitt explains. “But when I wrote that there would be a newborn baby, I did not know the solution of how we would [shoot] that because you can’t get a baby to act.”

Filmmaking with infants poses all kinds of production challenges that simply aren’t an issue with CGI babies and doll props. But going the gen AI route also presented McNitt with the opportunity to make her film even more personal by using old photos of herself as a newborn to serve as the basis for the fake baby’s face.

With a bit of fine-tuning, Ancestra’s production team was able to combine shots of Corsa and the fake baby to create scenes in which they almost, but not quite, appear to be interacting as if both were real actors. If you look closely in wider shots, you can see that the mother’s hand seems to be hovering just above her child because the baby isn’t really there. But the scene moves by so quickly that it doesn’t immediately stand out, and it’s far less “AI-looking” than the film’s more fantastical shots meant to represent the hole in the baby’s heart being healed by the mother’s will.

Though McNitt notes how “hundreds of people” were involved in the process of creating Ancestra, one of the behind-the-scenes video’s biggest takeaways is how relatively small the project’s production team was compared to what you might see on a more traditional short film telling the same story. Hiring more artists to conceptualize and then craft Ancestra’s visuals would have undoubtedly made the film more expensive and time-consuming to finish. Especially for indie filmmakers and up-and-coming creatives who don’t have unlimited resources at their disposal, those are the sorts of challenges that can be exceedingly difficult to overcome.


But Ancestra also feels like a case study in how generative AI stands to eliminate jobs that once would have gone to people. The argument is often that AI is a tool, and that jobs will shift rather than be replaced. Yet it’s hard to imagine studio executives genuinely believing in a future where today’s VFX specialists, concept artists, and storyboarders have transitioned into jobs as prompt writers who are compensated well enough to sustain their livelihoods. This was a huge part of what drove Hollywood’s film / TV actors and writers to strike in 2023. It’s also why video game performers have been on strike for the better part of the past year, and it feels irresponsible to dismiss these concerns as people simply being afraid of innovation or resistant to change.

In the making-of video, Aronofsky points out that cutting-edge technology has always played an integral role in the filmmaking business. You would be hard-pressed today to find a modern film or series that wasn’t produced with the use of powerful digital tools that didn’t exist a few decades ago. There are things about Ancestra’s use of generative AI that definitely make it seem like a demonstration of how Google’s models could, theoretically and with enough high-quality training data, become sophisticated enough to create footage that people would actually want to watch in a theater. But the way Aronofsky goes stony-faced and responds “not good” when one of Google’s DeepMind researchers explains that Veo can only generate eight-second-long clips says a lot about where generative AI is right now and Ancestra as a creative endeavor.

It feels like McNitt is telling on herself a bit when she talks about how the generative models’ output influenced the way she wrote Ancestra. She says “both things really informed each other,” but that sounds like a very positive way of spinning the fact that Veo’s technical limitations required her to write dialogue that could be matched to a series of clips vaguely tied to the concepts of motherhood and childbirth. This all makes it seem like, at times, McNitt’s core authorial intent had to be deprioritized in favor of working with whatever the AI models spat out. Had it been the other way around, Ancestra might have wound up telling a much more interesting story. But there’s very little about Ancestra’s narrative or, to be honest, its visuals that is so groundbreaking that it feels like an example of why Hollywood should be rushing to embrace this technology whole cloth.

Films produced with more generative AI might be cheaper and faster to make, but the technology as it exists now doesn’t really seem capable of producing art that would put butts in movie theaters or push people to sign up for another streaming service. And it’s important to bear in mind that, at the end of the day, Ancestra is really just an ad meant to drum up hype for Google, which is something none of us should be rushing to do.





Gaming Gear

Google is experimenting with AI-generated podcast-like audio summaries at the top of its search results

by admin June 15, 2025



Google just launched its most impressive (and unsettling) addition to AI Overviews yet, a new feature called “Audio Overview” that generates audio summaries of search results, narrated in the style of two lifelike, yet not quite human, podcast hosts.

Audio Overview is currently an opt-in Search Labs feature, meaning you won’t see the option for it unless you toggle a switch in Search Labs. Right now it’s only available in the U.S. and only generates English summaries.

I tried out Audio Overview myself and the results weren’t exactly what I was expecting.


After you activate the feature in Search Labs, some Google searches will include an Audio Overview box, usually below the regular AI Overview and “People also ask” sections. You just tap the button to generate the audio summary and wait for it to finish processing.


The audio clip you’ll get is generated on the spot, so if you refresh the page and generate it again, it could end up being different. The summaries I generated ranged from 3 to 5 minutes long. All of them feature the same pair of AI-generated voices who go back and forth discussing whatever topic you searched, in the style of a podcast.

The voices are admittedly significantly more lifelike than the robotic Siri sound I was expecting. There are tone changes, conversational word choices, and seemingly natural language. It’s not quite realistic, though. The two voices are like a pair of podcasters with zero rapport who seem like they’re reading off a teleprompter. It’s just shy of being lifelike but still lifelike enough that some people could be fooled at first.

Google shows you which search results it used to generate the audio summary, so you can double-check whatever info your AI podcasters give you. However, they sound realistic enough that some people might just assume these are real people and take whatever they say as fact. Of course, that’s also an issue with text AI summaries.


There are some hiccups that give away that these aren’t real people. For instance, in one of the summaries I got, one of the AI voices asks a question then immediately answers it herself, which sounded pretty awkward.

Both voices use emotional language from time to time, like exclaiming “Wow!” at a fun fact, but it definitely sounds stiff and just shy of something a real person would say. The AI voices also mispronounce words once in a while, like “musk” instead of “must.”

Uncanny and eerie as this feature is, I can see it being helpful to some people, especially those who may have vision impairments, or otherwise rely on screen reading tools. The AI-generated voices sound pretty good for the fact that they’re AI, too.

That would be cool if it didn’t come with a host of concerns around the spread of misinformation through AI and the threat AI-generated voices like this could pose to jobs like voice acting. Like any innovation in AI, Google’s Audio Overviews are a double-edged sword and unfortunately I’m still more skeptical than impressed.



Gaming Gear

Google Search uses AI-generated podcast hosts to answer your questions

by admin June 14, 2025


Instead of digging through all the top search results, you can now ask Google Search to give you a comprehensive AI-generated summary with its Audio Overviews feature. The AI feature uses Google Gemini models to create a short audio clip that sounds like a conversational podcast with two hosts.

It’s not ideal for your basic search queries like finding out when Father’s Day is, but it’s helpful if you want an in-depth and hands-free response to the history and significance of Flag Day. The Audio Overviews option pulls from the front page Google Search results and compiles them into an audio summary where two voices bounce off each other for a more engaging answer. You can also adjust the volume and playback speed between 0.25x and 2x. Audio Overviews even includes the webpages it pulls the info from, letting you continue down the Google Search rabbit hole.

It’s not the first time Google has offered Audio Overviews, but the feature was previously reserved for its NotebookLM tool. Google expanded on it by making Audio Overviews within NotebookLM more interactive, allowing you to ask the AI hosts questions in real time, and added a “Deep Dive” option to get the AI to focus on a specific topic. To test out Audio Overviews as part of Google Search, you have to opt into the Google Labs feature on its website.



NFT Gaming

Spain Dives Into AI-Generated Movies While Hollywood Battles Over Its Soul

by admin May 20, 2025



In brief

  • Spain is embracing AI in filmmaking, debuting its first fully AI-generated feature, The Great Reset, at the 2025 Berlin Film Festival.
  • New legislation aligns with the EU AI Act, mandating clear labelling of AI-generated content and imposing fines for misuse.
  • Spanish-developed tools like Magnific are already in use in Hollywood, signaling broader adoption of the country’s AI tech.

Spain is beginning to integrate artificial intelligence into its film and television industries, attempting to position itself as a pioneer in both the creation and regulation of AI content.

The shift includes notable projects like “The Great Reset,” an AI-generated feature film presented at the Berlin International Film Festival in 2025.

The sci-fi thriller, directed by Daniel H. Torrado, uses AI for image synthesis, animation, and post-production, eliminating the need for on-screen actors or physical locations.

Produced by Virtual World Pictures, Canary Film Factory, and EPC Media, the film follows an AI, born from a renegade hacker’s mind, that is planning to destroy humanity, with the protagonist racing to prevent global collapse.

Despite its technological innovations, human involvement remains crucial, with the script, artistic direction, and narrative supervised by a creative team led by Torrado. Real actors served as references for interpretation and dubbing in key scenes.

“AI allowed us to simulate complex decisions early on and experiment without the budgetary risk that often paralyzes many independent creators,” Torrado told the Hollywood Reporter. “Human oversight was constant. Every artistic, narrative, and emotional decision went through my hands. AI was a powerful tool, not a substitute for the creator.”

Spain’s push into AI-generated content comes amid heated global debates about AI’s role in filmmaking—and all art in general.

The controversy centers on concerns about authenticity, transparency, and ethical use, with audiences and creators worried about AI-generated content being mistaken for human work.

Recent examples illustrate those tensions.

The film “The Brutalist” faced significant backlash after its editor revealed that AI was used to enhance the Hungarian accents of lead actors Adrien Brody and Felicity Jones.

Director Brady Corbet defended the use, stating it was a meticulous, manual process, but the controversy highlighted sensitivity around AI’s role in performances.

Even huge studios from Lucasfilm to Marvel Studios have been in the crosshairs, over everything from small things like using AI to create posters to more consequential decisions like incorporating AI into the final product.

Spain’s boost for an AI-friendly industry

In March 2025, Spain approved a draft law to regulate AI, aligning with the European Union’s AI Act.

This law focuses on ethical, inclusive, and beneficial use, including strict labeling requirements for AI-generated content and significant fines for non-compliance.

Mislabeling AI content could result in penalties of up to €35 million (US$39.3 million), a measure aimed at ensuring transparency and preventing misuse such as deepfakes.

But Hollywood has already begun incorporating Spanish AI technologies into mainstream productions.

A case in point is the film “Here,” directed by Robert Zemeckis and starring Tom Hanks and Robin Wright, which utilized AI tools to enhance visual effects.

VFX supervisor Kevin Baillie recently revealed that the team used an AI-powered upscaler called Magnific for numerous scenes in the film.

“Magnific was used to enhance 20+ scenes in Here,” Baillie said in an interview shared by Javier Lopez, Magnific’s co-founder.

⚡ Magnific on the big screen! I CAN FINALLY TALK ABOUT THIS! The VFX team of Here (directed by Robert Zemeckis and starring Robin Wright & Tom Hanks) used Magnific for their FX 🤯 To break it all down (+more), I interviewed VFX supervisor Kevin Baillie! 🧵👇 pic.twitter.com/EVhZCP0jlh

— Javi Lopez ⛩️ (@javilopen) February 18, 2025

Baillie explained how the tool transformed their workflow: “Instead of spending 20% of the time focusing on the creative aspects of a shot and 80% on the details, Magnific helped us spend 20% on the details and 80% on the creativity! It’s putting what artists do best at the forefront, which I absolutely love.”

The film also employed face-swapping technology for de-aging, with 53 minutes of complete face replacement to bring Tom Hanks, Robin Wright, and other actors back to their younger years.

The team used real-time face swap models during filming, allowing actors and crew to see the de-aged versions immediately on set.

Besides generative upscaling and filtering, there are already a lot of interesting ideas being developed in the Spanish industry.

Speaking with Decrypt, Freepik CEO Joaquin Cuenca explained that beyond simple (and uncontrolled) AI generations, they are working on true workflows and AI-powered video editing suites.

“We are working on video editors,” he told Decrypt. “Today, you can generate small clips, but we are working on something that allows users to compile them on-site, add audio, and do all the composition to end up with a fully functional long clip.”

Spain’s television sector shows signs of exploration, though less documented than film.

From using generative AI in text and charts to leveraging AI tools to enhance the cataloging of its historical archive, TV stations are no strangers to adapting their workflows to incorporate AI.

And there have been some performative experiments with purely generative video among enthusiasts—at least on a small-scale, non-professional level.

One example is the experimental news show “Telediario” set in the year 2088, created by the Human XR Lab at the Universidad del Atlántico Medio.

The short video is part of a virtual reality experience at the Museo Élder in Las Palmas de Gran Canaria.

While not intended as commercial content, it reflects Spain’s growing appetite for innovation in pursuit of a more creative future.

“Artificial intelligence doesn’t replace artistic vision or human creativity,” Torrado said earlier this year. “[It] allows filmmakers to focus on what truly matters: telling stories that move and connect with the audience.”

Edited by Sebastian Sinclair and Andrew Hayward
