Laughing Hyena
Tag: AIGenerated

Product Reviews

OpenAI Is Preparing to Launch a Social App for AI-Generated Videos

by admin September 29, 2025


OpenAI is preparing to launch a stand-alone app for its video generation AI model Sora 2, WIRED has learned. The app, which features a vertical video feed with swipe-to-scroll navigation, appears to closely resemble TikTok—except all of the content is AI-generated. There’s a For You–style page powered by a recommendation algorithm. On the right side of the feed, a menu bar gives users the option to like, comment, or remix a video.

Users can create video clips up to 10 seconds long using OpenAI’s next-generation video model, according to documents viewed by WIRED. There is no option to upload photos or videos from a user’s camera roll or other apps.

The Sora 2 app has an identity verification feature that allows users to confirm their likeness. If a user has verified their identity, they can use their likeness in videos. Other users can also tag them and use their likeness in clips. For example, someone could generate a video of themselves riding a roller coaster at a theme park with a friend. Users will get a notification whenever their likeness is used—even if the clip remains in draft form and is never posted, sources say.

OpenAI launched the app internally last week. So far, it’s received overwhelmingly positive feedback from employees, according to documents viewed by WIRED. Employees have been using the tool so frequently that some managers have joked it could become a drain on productivity.

OpenAI declined to comment.

OpenAI appears to be betting that the Sora 2 app will let people interact with AI-generated video in a way that fundamentally changes their experience of the technology—similar to how ChatGPT helped users realize the potential of AI-generated text. Internally, sources say, there’s also a feeling that President Trump’s on-again, off-again deal to sell TikTok’s US operations has given OpenAI a unique opportunity to launch a short-form video app—particularly one without close ties to China.

OpenAI officially launched Sora in December of last year. Initially, people could only access it via a web page, but it was soon incorporated directly into the ChatGPT app. At the time, the model was among the most advanced AI video generators available, though OpenAI noted it had some limitations. For example, it didn’t seem to fully understand physics and struggled to produce realistic action scenes, especially in longer clips.

OpenAI’s Sora 2 app will compete with new AI video offerings from tech giants like Meta and Google. Last week, Meta introduced a new feed in its Meta AI app called Vibes, which is dedicated exclusively to creating and sharing short AI-generated videos. Earlier this month, Google announced that it was integrating a custom version of its latest video generation model, Veo 3, into YouTube.

TikTok, on the other hand, has taken a more cautious approach to AI-generated content. The video app recently redefined its rules around what kind of AI-generated videos it allows on the platform. It now explicitly bans AI-generated content that’s “misleading about matters of public importance or harmful to individuals.”

Oftentimes, the Sora 2 app refuses to generate videos due to copyright safeguards and other filters, sources say. OpenAI is currently fighting a series of lawsuits over alleged copyright infringements, including a high-profile case brought by The New York Times. The Times case centers on allegations that OpenAI trained its models on the paper’s copyrighted material.

OpenAI is also facing mounting criticism over child safety issues. On Monday, the company released new parental controls, including the option for parents and teenagers to link their accounts. The company also said that it is working on an age-prediction tool that could automatically route users believed to be under the age of 18 to a more restricted version of ChatGPT that doesn’t allow for romantic interactions, among other things. It is not known what age restrictions might be incorporated into the Sora 2 app.

This is an edition of the Model Behavior newsletter.



Gaming Gear

Werner Herzog on AI-Generated Movies: ‘They Look Completely Dead’

by admin September 29, 2025



Legendary filmmaker and ‘Here Comes Honey Boo Boo’ superfan Werner Herzog can see the beauty in just about everything, with two notable exceptions: chickens and art created by artificial intelligence. During an appearance on the podcast “Conan O’Brien Needs a Friend,” Herzog spoke of the incredible possibilities presented by technological advances, but lamented the sheer lifelessness of their application in areas that require humanity.

Much of the conversation between O’Brien and Herzog centered on the idea of truth (fitting for a guy who just wrote a book called The Future of Truth), which inevitably led them to AI. Herzog, a fascinating mix of a man somewhat removed from technology yet filled with endless wonder about everything, didn’t dismiss the technology out of hand, but he has some grave concerns about it.

“AI, I do not want to put it down completely because it has glorious, magnificent possibilities,” he said, citing its potential uses in scientific fields. “But at the same time, it is already en route to take over warfare. … It will be the overwhelming face of warfare of the future.”

He also simply can’t find much value in generative AI’s takes on works of art.

“I’ve seen movies, short films, completely created by artificial intelligence. Story, acting, everything. They look completely dead. They are stories, but they have no soul,” he told O’Brien. “They are empty and soulless. You know it is the most common, lowest denominator of what is filling billions and billions of informations on the internet. The common denominator and nothing beyond this common denominator can be found in these fabrications.”

Those fabrications of AI are a real point of fascination for Herzog. In his new book, according to an excerpt from The New Republic, he writes that AI “sees its occasional errors, and arrives at strategies and decisions that were not programmed in it by humans,” and notes that its outputs arrive “with a little pinch of chaos and imprecision, as is also embedded in human nature.”

While talking to O’Brien, Herzog brought up how AI generates these falsehoods and how we have to navigate them. “And of course, cheating, pretending, propagandizing—all these things are like a nemesis. It is out there, and we have to be alert to it.” His advice? Simply do not take anything entirely at face value. “Again, I say, when you are curious and access different sources, very quickly you will find this is invented.”

In general, Herzog is not much for technology. He didn’t own a cellphone until, according to his telling, he had to get one after he was unable to retrieve his car (an 18-year-old Ford Explorer) from a parking garage in Dublin without downloading an app. But it’s not that he fears it. He just doesn’t trust it. “Everything that comes in via your cellphone or your laptop, emails, whatever—you have to distrust, you have to doubt,” he told O’Brien. In response, O’Brien offered up that he gets updates on his phone when his cats use the litter box because it is internet-connected, and proposed that it should be illegal for anything to require an app to function.

Herzog spoke of how natural navigating technology is for younger people, how effortlessly they spot a phishing email that he wouldn’t be able to identify. He compared the instincts of humans using technology to those of prehistoric men foraging for food and learning to avoid poisonous berries. “They had a natural acquired suspicion about things, and it was so natural that we can certainly assume that they didn’t hate nature,” he said. “They just knew how to navigate. And it’s the same thing—you don’t have to hate the internet and the cell phone and whatever is coming at you in this new media, you just have to maintain a complete level of suspicion.”

All of this comes from Herzog’s greater search for truth, which is central to his new book. On the podcast, he observed, “Nobody knows what truth is.” And in some ways, it doesn’t matter. O’Brien and Herzog agree that in art, sheer truth sometimes matters less than telling a good story. But in the rest of the world, the concept of truth is just as elusive, and the cause of conflict and strife. Whose truth are we operating from?

“Truth is not a point somewhere far out in the distance,” Herzog says. “It’s more a process of searching for it, approximating, having doubts.” O’Brien at one point added, “Emotions get us to a truth sometimes that facts cannot deliver.” That is perhaps why AI art falls so flat. The truth lies in the emotion the work conveys and provokes. AI has nothing to offer.



Esports

Spotify removes 75 million “spammy” songs as it cracks down on AI-generated music

by admin September 27, 2025



Spotify has announced a sweeping crackdown on AI misuse and spam uploads, confirming it has already removed more than 75 million “spammy” songs in the past year.

The platform said this move is part of a broader effort to protect artists against impersonation, fraudulent uploads, and the spread of low-quality AI-generated tracks. Spotify also outlined new rules and tools designed to safeguard artist identity and improve transparency for listeners.

The changes come as generative AI continues to reshape the music industry, raising concerns over spam, deepfake vocals, and deceptive uploads.


Spotify’s new rules & spam filter explained

Spotify has introduced a new policy specifically addressing AI voice clones and vocal deepfakes. Unauthorized impersonation of an artist’s voice will not be permitted unless it is officially licensed. The company is also expanding its measures to stop fraudulent uploads made to appear under another artist’s profile.

Back in July, it was reported that Spotify was publishing AI-generated songs under the profiles of several musicians who had died years earlier.


To combat mass uploads and track manipulation, Spotify will roll out a new spam detection system this fall. The filter will identify accounts engaging in tactics such as duplicates, artificially short tracks, and other spam strategies, then prevent those tracks from being recommended.
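Spotify hasn’t published how the filter works, but the tactics it names (duplicates, artificially short tracks, mass uploads) map onto simple heuristics. Below is a minimal sketch of that idea, with an invented Track record and thresholds chosen purely for illustration rather than anything Spotify has confirmed.

```python
# Hypothetical illustration only: Spotify has not disclosed its spam filter.
# This sketch flags the tactics named above: duplicate uploads, artificially
# short tracks, and implausible mass uploads from a single account.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Track:
    uploader: str
    title: str
    duration_sec: int

def flag_spam(tracks: list[Track], min_duration: int = 40,
              max_per_uploader: int = 500) -> set[int]:
    """Return indexes of tracks that look like spam uploads."""
    flagged: set[int] = set()
    seen: set[tuple[str, str]] = set()
    per_uploader: dict[str, int] = defaultdict(int)

    for i, t in enumerate(tracks):
        key = (t.uploader, t.title.strip().lower())
        if key in seen:                    # duplicate of an earlier upload
            flagged.add(i)
        seen.add(key)
        if t.duration_sec < min_duration:  # "artificially short" track
            flagged.add(i)
        per_uploader[t.uploader] += 1

    # Mass-upload heuristic: an uploader with an implausible catalogue size.
    for i, t in enumerate(tracks):
        if per_uploader[t.uploader] > max_per_uploader:
            flagged.add(i)
    return flagged

# Flagged tracks would then be kept out of recommendations, per Spotify's description.
```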


AI disclosures in credits

Spotify is supporting a new industry standard for AI disclosures in music credits. This will allow artists and labels to indicate how AI was used in the creation of a track, from vocals to instrumentation. The disclosures will be displayed across the app as they are provided by distributors and partners.


Spotify said the updates are designed to ensure artists remain in control of how AI is used in their work, while maintaining trust for listeners as the technology becomes more widely adopted.


Back in June, Spotify listeners began to discover a new band by the name of The Velvet Sundown, and many quickly accused it of being entirely AI-generated.


The band initially denied those accusations on social media, but later revealed that it was, in fact, AI-generated.



Game Updates

French Lara Croft actor starts lawsuit over alleged AI-generated voice lines in Aspyr’s Tomb Raider remasters

by admin September 12, 2025


Françoise Cadol, the long-time French voice actor for Lara Croft, has reportedly launched legal action against Tomb Raider IV-VI Remastered publishers Aspyr over the alleged use of generative AI to replicate her voice for lines in the game. Cadol claims Aspyr didn’t contact her to ask for permission, and it appears the actor who voices Croft in at least one other dub of the remasters may have been put in a similar position.

According to a report from French publication Le Parisien, spotted by TheGamer, Cadol has issued a cease and desist to Aspyr over the matter.

We’ve reached out to Aspyr for comment.

It looks like the alleged AI-generated lines may have been added to the game by Patch 2 in mid-August. The notes for that patch include one that reads: “Restored missing or incorrect voiceover lines in various languages (especially Brazilian Portuguese)”.

Cadol told Le Parisien she’d been alerted to the possible use of genAI to mimic her lines in the remaster by Tomb Raider fans, who’ve circulated what they believe to be examples of it on social media. Some of the lines supposedly using genAI in the remaster look to be tutorial instructions telling you how to do things like climb, and the comparison below also features slightly different wording in the remastered versions of some lines. The apparent genAI lines have a more robotic edge to them, in contrast to the natural inflections Cadol speaks with in the original.

🇫🇷 BREAKING NEWS — The patch for the Tomb Raider IV, V and VI remasters by @AspyrMedia has just been released…

A nasty surprise for French Lara Croft fans: some of Françoise Cadol’s lines in the tutorial have been redone with AI, and you can hear it! pic.twitter.com/YRbZsY669H

— Bastien D. Fry (@BastienDruker) August 15, 2025


The French version isn’t the only dub of the game to have allegedly done this. Lene Bastos, Croft’s Brazilian Portuguese actor, has posted a video to Instagram stating that Tomb Raider fans have informed her that some of her voice lines appear to have been subject to AI generation in the remasters. Bastos claims to have received a message from Aspyr about it, with the publishers putting the blame at the feet of an external development partner and telling her the lines will soon be removed in the game’s next patch.

Voice actors fighting for protections against the possibility of companies using generative AI to mimic their voices was a key element of the SAG-AFTRA strike that concluded a few months ago. “Basically you have to get our consent to make a digital replica of us,” Horizon and Borderlands voice actor Ashly Burch told the BBC when the strike was suspended. “You have to tell us how you’re going to use it, and then you have to compensate us fairly.”





Game Reviews

French Lara Croft taking legal action against Aspyr for AI-generated voice use in Tomb Raider remasters

by admin September 11, 2025


Françoise Cadol, the actor who voiced the French version of Lara Croft in all of the Tomb Raider games from their inception to 2008, is taking legal action against Aspyr Media over the apparent use of AI-generated versions of her voice in the Tomb Raider 4-6 remasters.

Cadol told French newspaper Le Parisien (via The Gamer) that Aspyr Media generated lines of dialogue using her voice without asking her or even notifying her. She has reportedly issued Aspyr a cease and desist demand.

Examples of the allegedly AI-generated voice lines were shared on social media, and in them you can clearly hear the difference between the original Tomb Raider games and the recently remastered ones. A much more robotic reading of the lines suddenly takes over part-way through the remastered dialogue.

🇫🇷 BREAKING NEWS — The patch for the Tomb Raider IV, V and VI remasters by @AspyrMedia has just been released…

A nasty surprise for French Lara Croft fans: some of Françoise Cadol’s lines in the tutorial have been redone with AI, and you can hear it! pic.twitter.com/YRbZsY669H

— Bastien D. Fry (@BastienDruker) August 15, 2025


Cadol, who has regularly dubbed for high-profile actors like Angelina Jolie, including dubbing her performance in the 2001 Tomb Raider film, was alerted to the recording discrepancies by French Tomb Raider fans. Speaking to Le Parisien, Cadol said the use of AI here was a “betrayal” of their support.

“It’s a game, my voice accompanies the gamers, we play together,” Cadol said in a translation of the article. “Tomb Raider has been followed by a lot of people over the years, for them, it’s a betrayal, a total disrespect. They are angry.”

Alarmingly, this isn’t the only example of AI-generated voice work in the game. Brazilian actor Lene Bastos was also told by fans that her voice seemed to have been AI-generated in the remasters, and she decided to speak out about it. This prompted a reply – and apology – from Aspyr. Bastos explains in an Instagram video that Aspyr promised to remove the AI-generated lines in an update in a few weeks.

It’s unclear if Aspyr has contacted Cadol and said the same. We’ve requested comment from Aspyr.

Earlier this year, the entire French cast of multiplayer shooter Apex Legends refused to sign an agreement allowing their voices to train generative AI for the game.

“We are asked to give up our expertise to train the generative AI that will replace us tomorrow,” commented Pascale Chemin at the time, the French voice of Wraith in Apex Legends. “We are asked to agree to what we specifically fight against. We are asked to shoot ourselves in the foot. We are asked to support AI.”






Crypto Trends

Blockchain-Based Identity Can Help HR Navigate AI-Generated Applications

by admin September 7, 2025



Opinion by: Ignacio Palomera, co-founder and CEO of Bondex

The global hiring landscape is changing rapidly. Today’s job seekers are increasingly turning to generative AI to draft cover letters, tailor resumes and even simulate interview prep. 

Agentic AI is auto-applying to jobs, generative AI is drafting personalized applications at scale, and auto-apply tools enable candidates to apply to thousands of roles in minutes. Employers are inundated with applications that look polished, persuasive and tailored — but often lack any real signal of effort, capability or authenticity.

When anyone can crank out a polished, high-quality application with just a few AI prompts, the traditional cover letter — once seen as a chance to stand out and show real intent — becomes a commodity. It stops signaling effort or enthusiasm and starts looking more like standardized output. 

Hiring managers are now staring at inboxes filled with slick, personalized applications that all feel strangely similar. And that’s where the real problem kicks in: If everyone sounds qualified on paper, how can you tell who actually has the skills and who simply knows how to game a prompt? It’s not about who writes best but about who can prove they can deliver in the real world.

A fragile trust system gets worse with AI

Traditional hiring has long relied on trust-based signals such as resumes, references and degrees, but these have always been weak proxies. Titles can be inflated, education overstated and past work exaggerated. AI blurs things even more, cloaking unverifiable claims in artificial eloquence.

For fast-paced, remote-native industries like crypto or decentralized autonomous organization ecosystems, the stakes are even higher, as there’s rarely time for deep due diligence. Trust is extended quickly and often informally — risky in a pseudonymous, global environment. More HR tooling or AI detection won’t solve this. What’s needed is a stronger foundation for trust itself.

It’s time for verifiable reputation and onchain employment

Consider a hiring manager trying to verify work history, social handles or onchain contributions. 

Today, decentralized identity (DID) systems help you prove that you’re a real human — that you exist and are not a bot. That’s useful, but it’s only the start.

What they don’t address is the deeper layer: What have you actually done? There’s a new frontier emerging — one where your professional history, credentials and contributions can be verified and made portable. It’s not just about checking a box to prove that you exist. It’s about codifying your experience so your reputation is built on what you’ve done, not just what you say.


In this model, your resume becomes a programmable asset. It is not a static PDF but something that can evolve, be queried and, in some cases, be privately verified without revealing every detail. That’s where tools like zero-knowledge proofs come in, giving users control over how much they reveal and to whom.
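The article doesn’t name a specific scheme, but the “reveal only what you choose” idea can be illustrated with salted hash commitments, a deliberately simplified stand-in for the zero-knowledge proofs mentioned above. Everything in this sketch (field names, helper functions) is invented for illustration.

```python
# Simplified illustration of selective disclosure: real systems use verifiable
# credentials and zero-knowledge proofs; salted SHA-256 commitments are a much
# weaker stand-in, used here only to show the reveal-what-you-choose idea.
import hashlib
import os

def commit(field: str, value: str, salt: bytes) -> str:
    """Commitment to one resume field: SHA-256(field | value | salt)."""
    return hashlib.sha256(field.encode() + b"|" + value.encode() + b"|" + salt).hexdigest()

def publish_resume(fields: dict[str, str]):
    """Candidate publishes only commitments; the salts stay private."""
    salts = {k: os.urandom(16) for k in fields}
    commitments = {k: commit(k, v, salts[k]) for k, v in fields.items()}
    return commitments, salts

def reveal(field: str, value: str, salt: bytes) -> dict:
    """Candidate discloses a single field (say, a degree) plus its opening."""
    return {"field": field, "value": value, "salt": salt.hex()}

def verify(commitments: dict[str, str], disclosure: dict) -> bool:
    """Employer checks the disclosed value against the public commitment."""
    expected = commitments.get(disclosure["field"])
    return expected == commit(disclosure["field"], disclosure["value"],
                              bytes.fromhex(disclosure["salt"]))

# Usage: publish everything as commitments, reveal only the degree.
commitments, salts = publish_resume({"degree": "BSc Computer Science", "salary": "90k"})
proof = reveal("degree", "BSc Computer Science", salts["degree"])
assert verify(commitments, proof)   # the degree claim checks out
# The "salary" field stays hidden; its commitment reveals nothing on its own.
```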

Some might argue that this all feels a little too invasive. In practice, however, and especially in Web3, most serious contributors already operate through pseudonymous identities built on provable actions, not job titles. DIDs got us to “real humans.” Verifiable reputation gets us to “real contributors.” And that’s the fundamental shift worth paying attention to.

From HR filters to smart contract gates

As reputation becomes programmable, entire industries stand to be reshaped. Grants, hiring rounds and even token sales could use provable credentials as filters. No more guessing who’s qualified or compliant. You can’t fake a pull request merged into a core repo or pretend you completed a course linked to a non-fungible token (NFT) issued by a smart contract.

This makes trust composable — something that can be built into protocols and platforms by default. What’s provable today includes contributions, learning history and verifiable credentials. Soon, entire work histories could be onchain.
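As a concrete picture of a “smart contract gate”, the sketch below checks whether an applicant’s wallet holds a course-completion NFT before letting them into a hiring round. It assumes web3.py (v6-style API), a standard ERC-721 contract, and placeholder RPC and address values; none of these refer to a real deployment.

```python
# Hypothetical gate: admit an applicant only if their wallet holds at least one
# token from a course-completion NFT contract (standard ERC-721 balanceOf).
# The RPC URL and addresses passed to this function are placeholders.
from web3 import Web3

ERC721_BALANCE_ABI = [{
    "inputs": [{"name": "owner", "type": "address"}],
    "name": "balanceOf",
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}]

def holds_credential(rpc_url: str, nft_contract: str, wallet: str) -> bool:
    """True if `wallet` owns at least one token from `nft_contract`."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    contract = w3.eth.contract(
        address=Web3.to_checksum_address(nft_contract),
        abi=ERC721_BALANCE_ABI,
    )
    balance = contract.functions.balanceOf(
        Web3.to_checksum_address(wallet)
    ).call()
    return balance > 0

# Usage (placeholder values):
# if holds_credential("https://rpc.example.org", "0x...CourseNFT", "0x...Applicant"):
#     advance_to_interview()   # hypothetical next step in the hiring flow
```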

A trust upgrade for AI-era hiring

The AI-generated job application is just a symptom of a larger trust breakdown. We’ve long accepted unverifiable self-reporting as the default in hiring, and now we’re facing the consequences. Blockchain-based identity and credential systems offer a path forward — where individuals can prove their work and hiring decisions can be based on verifiable data, not guesswork.

We need to stop pretending that polished language equals proof of skill. If hiring — and broader reputation systems — are to survive the coming AI wave, we need to rebuild the foundation of trust. Onchain credentials are a compelling place to start.


This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.



Game Updates

Leaked Fallout Merch Appears To Use AI-Generated Slop

by admin September 3, 2025


Earlier this week, someone leaked what they claim to be new Fallout-themed merchandise that will reportedly soon be available on Target shelves around the United States. But fans think that at least one piece of leaked merch is using AI-generated imagery, aka slop.

On September 2, a user on the Fallout subreddit posted a short video of them seemingly pulling out pieces of not-yet-announced Fallout-themed merchandise in a warehouse-like setting. The user claimed that the Fallout goodies would be sold at Target starting in November or December. The original video has since been deleted, possibly because the original poster was nervous about getting in trouble with Target. But Kotaku viewed the video before it was deleted, and the merch seemed real. And one item, Fallout-themed hot sauce, appears to be shipping to stores in a box that has AI-generated imagery on the back.

As spotted by PCGamesN, users in the comments below the leaked Target video began focusing on peculiarities in the hot sauce artwork. In the image, a couple is seen sitting at a table eating a Deathclaw’s hand. But the man can be seen using his knife and fork to cut up a napkin. The fork looks oddly shaped, too. Others pointed out that the Deathclaw hand seems to be floating oddly above the plate. The man’s right hand also seems off. All of this would lead one to believe this new Fallout merch is using AI-generated artwork.

Kotaku contacted Bethesda about the leaked art, but didn’t receive a response before publication. It should be noted that Bethesda likely contracted this artwork out via another company and didn’t create the image.

The use of AI-generated slop has become more and more prevalent in marketing and advertising over the last two years as companies desperate to make the line go up cut corners to prove to investors they aren’t being left behind as AI-powered tech continues to grow. In fact, this isn’t even the first time official Fallout-related artwork has been scrutinized by fans for some strange, AI-like oddities. Will a man using a terrible fork to cut up a napkin stop collectors and fans from buying this piece of Fallout merch? Probably not, which is what companies are counting on.



Product Reviews

Trump’s New History Exhibit Features Fake Quotes From AI-Generated Founding Fathers

by admin September 3, 2025


The White House recently commissioned a new history exhibit in Washington, D.C., created by the far-right “education” group PragerU. The exhibit features 82 paintings and 40 AI videos, presenting a rather distorted view of America’s founding.

The AI-generated videos even include fake quotes from the founding fathers, including one that seems like a joke. But the folks at PragerU are serious about their mission, even if the intention is to own the libs.

What kind of quotes are we talking about? The video of an AI-generated John Adams, which is available online at PragerU’s website, shows the second president saying, “facts do not care about our feelings.” That’s a phrase popularized in the 2010s by far-right influencers like Ben Shapiro and Charlie Kirk.

Historically, White House exhibits haven’t included such obvious attempts at trolling.

It should be noted that the closed captioning reads “your feelings” rather than “our feelings,” the latter being how it’s spoken by the AI John Adams for whatever reason. But that’s precisely the kind of attention to detail you’d expect from PragerU.

The new exhibit is dubbed the Founders Museum and was created for the lead-up to the semiquincentennial celebration (250th anniversary) that will be happening in 2026. PragerU created the exhibit with the White House Task Force 250, which is overseeing the semiquincentennial activities, according to NPR.

Secretary of Education Linda McMahon, who has said she’s trying to put herself out of a job by shutting down the Department of Education, is featured on PragerU’s website in a promotional video insisting that the new exhibit’s “patriotic education does not mean propaganda.” McMahon infamously referred to AI as “A1” (like the steak sauce) at a recent speaking engagement.

The PragerU videos are also filled with the kind of AI-generated distortions and anomalies that we’ve come to expect. Some of the videos include figures with either too many or too few fingers. Generative AI tools still struggle with human hands, creating alien-like figures that seem to haunt the PragerU creations, as you can see below in a screenshot from the John Adams video.

Screenshot from PragerU’s John Adams video. © PragerU / Gizmodo

PragerU is notorious for creating inaccurate learning materials that sanitize history. One video that went viral in recent years shows Christopher Columbus scolding children for judging him about slavery. “Being taken as a slave is better than being killed, no? I don’t see the problem,” the animated Columbus says.

It’s no surprise that PragerU would create ridiculous nonsense. The shocking part is that their version of history is being legitimized by the U.S. government. The new videos also inject plenty of mentions of God, which is certainly a choice. The founder of PragerU, Dennis Prager, often talks about spreading “Judeo-Christian values.”

Aside from the larger fabrications, like John Adams quoting Ben Shapiro, there are also countless smaller issues with the way the history is presented at the new exhibit. For example, Samuel Adams says that he was called a “troublemaker,” a word that didn’t exist until four decades after his death and wouldn’t actually become popular until the 20th century. It’s a small issue, of course, but it’s the kind of thing that probably wouldn’t have been permitted by museum curators who actually know any history.

PragerU is an unaccredited “university” that doesn’t hold classes or issue diplomas. But its materials are still finding their way into the hands of impressionable kids. PragerU’s materials have been approved for use in public schools in at least ten states, according to the company’s website, including Alaska, Arizona, Florida, Idaho, Louisiana, Montana, New Hampshire, Oklahoma, South Carolina, and Texas. That’s up from five states in early 2024.

Donald Trump has major issues with the way history is taught in the 21st century. The president has criticized leaders at the Smithsonian, calling the institution “out of control,” during his regular rants. Trump even complained in one post on Truth Social that the Smithsonian was discussing “how bad Slavery was,” among other things. To be clear, slavery is bad, no matter what folks like Trump and PragerU’s animated Christopher Columbus might insist. And learning about slavery is necessary to understand American history.

PragerU is becoming normalized as a reliable source of information, whether it’s in America’s classrooms or at the White House. And the nation is certainly going to be dumber and more fascist as a result. But it doesn’t seem like there’s much that can be done about any of that at the moment.

Trump is picking off America’s institutions one by one, destroying our understanding of U.S. history and bulldozing any opposition. All of that is happening while America’s “opposition” leaders like Chuck Schumer and Hakeem Jeffries insist fighting back isn’t the savvy thing to do.

What would a guy like John Adams think about bowing to tyrants like Trump?



Product Reviews

Chinese social media platforms roll out labels for AI-generated material

by admin September 2, 2025


Major social media platforms in China have started rolling out labels for AI-generated content to comply with a law that took effect on Monday. Users of the likes of WeChat, Douyin, Weibo and RedNote (aka Xiaohongshu) are now seeing such labels on posts. These denote the use of generative AI in text, images, audio, video and other types of material, according to the South China Morning Post. Identifiers such as watermarks have to be included in metadata too.

WeChat has told users they must proactively apply labels to their AI-generated content. They’re also prohibited from removing, tampering with or hiding any AI labels that WeChat applies itself, and from using “AI to produce or spread false information, infringing content or any illegal activities.”

ByteDance’s Douyin — the Chinese version of TikTok — similarly urged users to apply a label to every post of theirs that includes AI-generated material, while noting it’s able to use metadata to detect where a piece of content came from. Weibo, meanwhile, has added an option for users to report “unlabelled AI content” when they see something that should have such a label.

Four agencies drafted the law — which was issued earlier this year — including the main internet regulator, the Cyberspace Administration of China (CAC). The Ministry of Industry and Information Technology, the Ministry of Public Security and the National Radio and Television Administration also helped put together the legislation, which is being enforced to help oversee the tidal wave of genAI content. In April, the CAC started a three-month campaign to regulate AI apps and services.

Mandatory labels for AI content could help folks better understand when they’re seeing AI slop and/or misinformation instead of something authentic. Some US companies that provide genAI tools offer similar labels and are starting to bake such identifiers into hardware. Google’s Pixel 10 devices are the first phones that implement C2PA (Coalition for Content Provenance and Authenticity) content credentials right inside the camera app.
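The metadata requirement is machine-checkable in principle. As a rough illustration (not a real verification, since C2PA manifests are signed structures that need a proper parser to validate), a platform could at least detect whether an uploaded file carries any provenance marker at all. The marker list below is illustrative, not exhaustive.

```python
# Crude heuristic only: looks for byte markers that commonly appear when a
# C2PA/JUMBF provenance manifest is embedded in a media file. It does NOT
# validate signatures or prove anything about how the content was made.
from pathlib import Path

PROVENANCE_MARKERS = [b"c2pa", b"jumb"]  # illustrative, not exhaustive

def has_provenance_marker(path: str) -> bool:
    """Return True if the file contains any known provenance byte marker."""
    data = Path(path).read_bytes().lower()
    return any(marker in data for marker in PROVENANCE_MARKERS)

# Usage sketch: uploads with no marker at all could be queued for the manual
# labelling the platforms now require.
# if not has_provenance_marker("upload.jpg"):
#     queue_for_label_review("upload.jpg")   # hypothetical platform hook
```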



Game Updates

Nexon respond to claims they’re using AI-generated Tiktok streamers to advertise The First Descendant

by admin August 19, 2025


South Korean publishers Nexon are investigating a recent bit of TikTok marketing for their free-to-play shooter The First Descendant, after players spotted some ads that feature AI-generated ‘human’ streamers bigging up the game. Well, we at least have to assume that’s what prompted it, as Nexon’s statement omits any mention of AI in favour of the wonderfully nebulous phrase “certain irregularities”.

If you’ve not seen one of these ads that look to have been posted by The First Descendant’s official account, allow me to show you, because you’ll see right quick why folks have been asking questions. Boom. There are further booms compiled into one Reddit post by user iHardlyTriHard down below.

As you can see, them be some weird renderings of streamy blokes, especially the one who says Nier: Automata in a fashion that’d be hilarious if this wasn’t terrible AI sloppage. Seriously, he pronounces it NEER (pause for ten seconds) audomada. He also gets points for doubling up on his use of free-to-play in his attempt to get you hyped about an F2P update to the F2P looter shooter, which has launched F2Ply.

Anyway, in addition to the more obviously not-streamers, it appears the first one in the compilation below may be based on the likeness of streamer DanieltheDemon. He’s since taken to the comments section of a TikTok about the clips to make clear he didn’t agree to any of this. “I have no affiliation nor contract with The First Descendant,” the streamer wrote. “They stole my face/reactions from my most viral video and used AI to change what my mouth says and a voice that isn’t mine. I did not consent for my likeness to be used…”

While you can’t find these full ads if you go to The First Descendant’s official TikTok account, you can see some of the gameplay clips the AI streamers have been paired with to deliver their samey sales pitches.

What do Nexon have to say about this? The following:

We would like to inform you of certain irregularities identified in the operation of our TikTok Creative Challenge for creators. As a part of our marketing campaign for Season 3: Breakthrough, we recently ran a Creative Challenge program for TikTok creators, which allows creators to voluntarily submit their content to be used as advertising materials. All submitted videos are verified through TikTok’s system to check copyright violations before they are approved as advertising content.

However, we have become aware of cases where the circumstances surrounding the production of certain submitted videos appear inappropriate. Thus, we are conducting a thorough joint investigation with TikTok to determine the facts. We sincerely apologize for the delay in providing this notice as the review is taking longer than expected. Once the verification is complete, we will promptly share an update through an official notice.

So, the company ran a competition of sorts that let anyone create ads for the game, which is a strange example of outsourcing at the very least, then seemingly left final approval for publication to TikTok’s copyright checker, which merrily waved this dodgy stuff through. Either that or Nexon potentially gave it all the green light themselves. Either way, oof.

We’ll have to see what the company say once that investigation’s concluded, but here’s hoping the next statement’s a bit more like the easier-to-parse response offered when Nexon were accused of taking a bit too much inspiration from some Destiny 2 icons.




