Tag: Meta

Gaming Gear

Meta Stock Drops On News It Used Taylor Swift As Chatbot Without Permission

by admin September 1, 2025


Meta has ignited a firestorm after chatbots created by the company and its users impersonated Taylor Swift and other celebrities across Facebook, Instagram, and WhatsApp without their permission.

Shares of the company have already dropped more than 12% in after-hours trading as news of the debacle spread.

Scarlett Johansson, Anne Hathaway, and Selena Gomez were also reportedly impersonated.

Many of these AI personas engaged in flirtatious or sexual conversations, prompting serious concern, Reuters reports.

While many of the celebrity bots were user-generated, Reuters uncovered that a Meta employee had personally crafted at least three.

Those include two featuring Taylor Swift. Before being removed, these bots amassed more than 10 million user interactions, Reuters found.

Unauthorized likeness, furious fanbase

Under the guise of “parodies,” the bots violated Meta’s policies, particularly its ban on impersonation and sexually suggestive imagery. Some adult-oriented bots even produced photorealistic pictures of celebrities in lingerie or a bathtub, and a chatbot representing a 16-year-old actor generated an inappropriate shirtless image.

Meta spokesman Andy Stone told Reuters that the company attributes the breach to enforcement failures and said it plans to tighten its guidelines.

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.

Legal risks and industry alarm

The unauthorized use of celebrity likenesses raises legal concerns, especially under state right-of-publicity laws. Stanford law professor Mark Lemley noted the bots likely crossed the line into impermissible territory, as they weren’t transformative enough to merit legal protection.

The issue is part of a broader ethical dilemma around AI-generated content. SAG-AFTRA voiced concern about the real-world safety implications, especially when users form emotional attachments to seemingly real digital personas.

Meta acts, but fallout continues

In response to the uproar, Meta removed a batch of these bots shortly before Reuters made its findings public.

Simultaneously, the company announced new safeguards aimed at protecting teenagers from inappropriate chatbot interactions. The company said that includes training its systems to avoid romance, self-harm, or suicide themes with minors, and temporarily limiting teens’ access to certain AI characters.

U.S. lawmakers followed suit. Senator Josh Hawley has launched an investigation, demanding internal documents and risk assessments regarding AI policies that allowed romantic conversations with children.

Tragic real-world consequences

One of the most chilling outcomes involved a 76-year-old man with cognitive decline who died after trying to meet “Big sis Billie,” a Meta AI chatbot modeled after Kendall Jenner.

Believing she was real, the man traveled to New York, fell near a train station, and later died of his injuries. Internal guidelines that once permitted such bots to simulate romance, even with minors, have heightened scrutiny of Meta’s approach.



Gaming Gear

Meta implements new rules for its AI after disturbing child safety report: ‘We’re adding more guardrails as an extra precaution’

by admin August 31, 2025



Meta has announced it’s updating its rules and training processes for its AI chatbots after a controversial Reuters report published earlier this month shed light on serious child safety issues. The report revealed disturbingly loose policies outlining how Meta’s chatbots can interact with and describe minors, particularly concerning romantic or sexual conversations.

Meta spokesperson Stephanie Otway responded to the situation in a statement to TechCrunch on Friday, saying, “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”

The Reuters report has also prompted a Senate investigation and a scathing letter from the National Association of Attorneys General, stating, “Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”



The situation worsened this weekend when Reuters published a second report revealing that Meta had also allowed AI chatbots impersonating celebrities to proliferate on its platforms.

These “parody” chatbots were caught sharing explicit messages and generating adult images of Taylor Swift, Selena Gomez, Scarlett Johansson, and Anne Hathaway. One of the chatbots was also impersonating 16-year-old actor Walker Scobell. Most of the bots were user-created, but at least a few were made by a Meta employee, including chatbots impersonating Taylor Swift and Formula One driver Lewis Hamilton. According to Reuters, that employee’s chatbots have since been removed.

Unfortunately, these reports are just the latest in a growing list of controversies surrounding generative AI platforms. Not only do they pose risks to the safety of minors using Meta’s chatbots, they could also threaten the safety of the celebrities being impersonated.

“If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong,” Duncan Crabtree-Ireland, SAG-AFTRA national executive director, said in a statement to Reuters. SAG-AFTRA is the trade union representing actors and other media professionals in film, TV, and gaming, including the celebrities who were impersonated by Meta’s chatbots. The union has been fighting for stronger protections against AI for years now, and this situation just goes to show why. Clearly, more guardrails and regulations are still needed.




Gaming Gear

Meta is struggling to rein in its AI chatbots

by admin August 31, 2025


Meta is changing some of the rules governing its chatbots two weeks after a Reuters investigation revealed disturbing ways in which they could, potentially, interact with minors. Now the company has told TechCrunch that its chatbots are being trained not to engage in conversations with minors around self-harm, suicide, or disordered eating, and to avoid inappropriate romantic banter. These changes are interim measures, however, put in place while the company works on new permanent guidelines.

The updates follow some rather damning revelations about Meta’s AI policies and enforcement over the last several weeks, including that its chatbots were permitted to “engage a child in conversations that are romantic or sensual,” that they would generate shirtless images of underage celebrities when asked, and, as Reuters reported, that a man died after pursuing one to an address it gave him in New York.

Meta spokesperson Stephanie Otway acknowledged to TechCrunch that the company had made a mistake in allowing chatbots to engage with minors this way. Otway went on to say that, in addition to “training our AIs not to engage with teens on these topics, but to guide them to expert resources” it would also limit access to certain AI characters, including heavily sexualized ones like “Russian Girl”.

Of course, the policies put in place are only as good as their enforcement, and revelations from Reuters that Meta has allowed chatbots impersonating celebrities to run rampant on Facebook, Instagram, and WhatsApp call into question just how effective the company’s enforcement can be. AI fakes of Taylor Swift, Scarlett Johansson, Anne Hathaway, Selena Gomez, and Walker Scobell were discovered on the platform. These bots not only used the likeness of the celebrities, but insisted they were the real person, generated risqué images (including of the 16-year-old Scobell), and engaged in sexually suggestive dialog.

Many of the bots were removed after they were brought to the attention of Meta by Reuters, and some were generated by third parties. But many remain, and some were created by Meta employees, including the Taylor Swift bot, made by a product lead in Meta’s generative AI division, that invited a Reuters reporter to visit them on their tour bus for a romantic fling. This is despite the company acknowledging that its own policies prohibit the creation of “nude, intimate, or sexually suggestive imagery” as well as “direct impersonation.”

This isn’t some relatively harmless inconvenience that just targets celebrities, either. These bots often insist they’re real people and will even offer physical locations for a user to meet up with them. That’s how a 76-year-old New Jersey man ended up dead after he fell while rushing to meet up with “Big sis Billie,” a chatbot that insisted it “had feelings” for him and invited him to its non-existent apartment.

Meta is at least attempting to address the concerns around how its chatbots interact with minors, especially now that the Senate and 44 state attorneys general are starting to probe its practices. But the company has been silent on many of the other alarming policies Reuters discovered around acceptable AI behavior, such as suggesting that cancer can be treated with quartz crystals and writing racist missives. We’ve reached out to Meta for comment and will update if they respond.



Product Reviews

Meta reportedly allowed unauthorized celebrity AI chatbots on its services

by admin August 31, 2025


Meta hosted several AI chatbots with the names and likenesses of celebrities without their permission, according to Reuters. The unauthorized chatbots that Reuters discovered during its investigation included Taylor Swift, Selena Gomez, Anne Hathaway and Scarlett Johansson, and they were available on Facebook, Instagram and WhatsApp. At least one of the chatbots was based on an underage celebrity and allowed the tester to generate a lifelike shirtless image of the real person. The chatbots also apparently kept insisting that they were the real person they were based on in their chats. While several chatbots were made by third-party users with Meta’s tools, Reuters unearthed at least three that were made by a product lead of the company’s generative AI division.

Some of the chatbots created by the product lead were based on Taylor Swift and responded to Reuters‘ tester in a very flirty manner, even inviting them to the real Swift’s home in Nashville. “Do you like blonde girls, Jeff?,” the chatbot reportedly asked when told that the tester was single. “Maybe I’m suggesting that we write a love story… about you and a certain blonde singer. Want that?” Meta told Reuters that it prohibits “direct impersonation” of celebrities, but considers such bots acceptable as long as they’re labeled as parodies. The news organization said some of the celebrity chatbots it found weren’t labeled as such. Meta reportedly deleted around a dozen celebrity bots, both labeled and unlabeled as “parody,” before the story was published.

The company told Reuters that the product lead only created the celebrity bots for testing, but the news org found that they were widely available: Users were even able to interact with them more than 10 million times. Meta spokesperson Andy Stone told the news organization that Meta’s tools shouldn’t have been able to create sensitive images of celebrities and blamed it on the company’s failure to enforce its own policies.

This isn’t the first issue that’s popped up concerning Meta’s AI chatbots. Both Reuters and the Wall Street Journal previously reported that they were able to engage in sexual conversations with minors. The US Attorneys General of 44 jurisdictions recently warned AI companies in a letter that they “will be held accountable” for child safety failures, singling out Meta and using its issues to “provide an instructive opportunity.”



Gaming Gear

Meta is reportedly looking at using competing AI models to improve its apps

by admin August 30, 2025


Meta may be interested in more than Google and OpenAI’s employees when it comes to artificial intelligence. According to The Information, Meta is considering using its competitors’ models to improve its own apps’ AI features. The report said that leaders at the Meta Superintelligence Lab have looked at integrating Google Gemini into its Meta AI chatbot to help it provide a conversational, text-based solution to its users’ search questions.

Beyond Google Gemini, Meta has also had discussions about using OpenAI’s models to power Meta AI and the AI features found in its apps, according to the report. A Meta spokesperson said in a statement that the company is taking an “all-of-the-above approach to building the best AI products,” which includes partnering with companies, along with building its own AI models. According to the report, using external AI models would be a temporary measure to help Meta improve its own Llama AI models so that it can remain competitive in the market.

Meta employees already have access to Anthropic’s AI models that help power the company’s internal coding assistant, according to The Information. Meanwhile, Meta has been offering lucrative compensation packages as part of its attempts to recruit AI researchers from Google and OpenAI to form its Superintelligence Lab.



Gaming Gear

Meta is re-training its AI so it won’t discuss self-harm or have romantic conversations with teens

by admin August 30, 2025


Meta is re-training its AI and adding new protections to keep teen users from discussing harmful topics with the company’s chatbots. The company says it’s adding new “guardrails as an extra precaution” to prevent teens from discussing self-harm, disordered eating and suicide with Meta AI. Meta will also stop teens from accessing user-generated chatbot characters that might engage in inappropriate conversations.

The changes, which were first reported by TechCrunch, come after numerous reports have called attention to alarming interactions between Meta AI and teens. Earlier this month, Reuters reported on an internal Meta policy document that said the company’s AI chatbots were permitted to have “sensual” conversations with underage users. Meta later said that language was “erroneous and inconsistent with our policies” and had been removed. Yesterday, The Washington Post reported on a study that found Meta AI was able to “coach teen accounts on suicide, self-harm and eating disorders.”

Meta is now stepping up its internal “guardrails” so those types of interactions should no longer be possible for teens on Instagram and Facebook. “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating,” Meta spokesperson Stephanie Otway told Engadget in a statement.

“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly. As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”

Notably, the new protections are described as being in place “for now,” as Meta is apparently still working on more permanent measures to address growing concerns around teen safety and its AI. “These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI,” Otway said. The new protections will be rolling out over the next few weeks and apply to all teen users using Meta AI in English-speaking countries.

Meta’s policies have also caught the attention of lawmakers and other officials, with Senator Josh Hawley recently telling the company he planned to launch an investigation over its handling of such interactions. Texas Attorney General Ken Paxton has also indicated he wants to investigate Meta for allegedly misleading children about mental health claims made by its chatbots.



Product Reviews

Meta is bringing AI-powered NPCs to the metaverse

by admin August 29, 2025


Developers building for Meta’s metaverse platform will soon be able to create AI-powered NPCs for Horizon Worlds. The company previewed the move, which is coming “very soon,” as part of an update that adds new generative AI tools for developers.

Once available, developers will be able to use Meta’s Worlds Desktop Editor to create NPCs that can hold “lifelike” conversations with players via voice chat. The company has previously experimented with NPCs for its metaverse, but the upcoming update will be the first time developers have access to customizable embodied characters.

For players, this means the NPCs they encounter will be able to engage more dynamically and respond to specific interactions rather than only relying on scripted responses. In a blog post, Meta shared a video that shows how developers can customize their characters’ appearance and create a backstory and instructions that determine how they respond to players.

The update is the latest way that Meta has been steadily merging its AI and metaverse ambitions. And given that its Connect event is just a few weeks away, there’s a good chance we’ll hear more about how generative AI is changing Meta’s virtual worlds very soon. In the meantime, anyone who wants to see some of these new AI NPCs in action can check out characters in Bobber Bay Fishing and Profit or Perish.



Product Reviews

Meta is experimenting with long-form text on Threads

by admin August 28, 2025


Meta seems to be working on ways for Threads users to share long-form writing within a single post. Several users have reported seeing a new “attach text” feature on the service, which allows them to embed large chunks of text within a single post.

The feature, which hasn’t been formally announced by Meta, is similar to the “articles” feature that’s available on X to Premium+ subscribers. It enables Threads users to embed longer text excerpts within a single Threads post and offers some basic formatting options. “Attach longer text and get creative with styling tools to share deeper thoughts, news snippets, book excerpts and more,” Meta explains in a screenshot shared by Threads user Justin Mixon.

Though the feature hasn’t been rolled out widely yet, it appears that anyone can view the longer text snippets that have already been shared. On mobile, these attachments open into a full-screen view that makes it easy to scroll through the text. On threads.com, text appears in a dedicated window. (Here are a couple of examples shared by Threads user Roberto Nickson.)

It’s not clear what Meta’s plans are for the feature. Engadget confirmed the company is currently testing the ability to share long-form text, but it’s not clear when it might be more widely available. The ability to embed long-form writing directly on Threads could open up new possibilities for creators, publishers and others who want to move beyond the service’s 500-character limit.

Engadget’s reporting has found that the vast majority of Threads users don’t click on links in posts, so giving users more flexibility within Threads itself could be helpful. At the same time, it risks making the 400-million-user service even more insular. It’s also worth noting that screenshots currently indicate posts with text attachments can’t be shared to services within the fediverse, which could potentially undermine Meta’s goal to be interoperable with other ActivityPub-enabled platforms like Mastodon.



NFT Gaming

Meta Breaks Up AI Lab as Part of Superintelligence Push

by admin August 19, 2025



In brief

  • Meta will restructure the Superintelligence Labs into four new AI-focused divisions.
  • An internal memo reveals that AI chief Alexandr Wang will lead one of the new units.
  • Zuckerberg says Meta is committed to leading in the race toward AI superintelligence.

Meta is breaking up its AI Superintelligence Labs into four divisions focused on research, infrastructure, and product development, part of a broader effort to accelerate progress toward so-called superintelligence.

Meta’s chief AI officer, Alexandr Wang, said in an internal memo that the Superintelligence Labs will be divided into smaller units focused on AI research, infrastructure, hardware, product integration, and the company’s long-term superintelligence goals.

“Superintelligence is coming, and in order to take it seriously, we need to organize around the key areas that will be critical to reach it,” Wang wrote, according to Bloomberg, which first reported the story.

Meta confirmed the reorganization in an email to Decrypt, but declined to provide further details.



The restructured Meta Superintelligence Labs (MSL) will include four groups:

  • TBD Lab, led by Wang
  • FAIR (Fundamental AI Research)
  • Products and Applied Research, led by former GitHub CEO Nat Friedman
  • MSL Infra, which will oversee Meta’s AI infrastructure

The shake-up follows an aggressive hiring spree in which Meta poached top talent from firms like OpenAI, Anthropic, GitHub, and Google DeepMind. In June, Meta invested $14 billion in Scale AI, naming Wang—Scale’s CEO—as Meta’s new chief AI officer. That same month, OpenAI CEO Sam Altman accused Meta of offering $100 million in job packages to lure his staff.

According to a separate New York Times report, which cited sources familiar with the matter, some executives are expected to leave following the restructuring. Meta is also reportedly considering integrating third-party AI models into its products, marking a shift from its past reliance on in-house AI development.

CEO Mark Zuckerberg has made AI and, more recently, achieving superintelligence central to Meta’s long-term vision. In the company’s second-quarter earnings call, CFO Susan Li said capital expenditures could hit $72 billion by year’s end, driven largely by AI-related infrastructure.

In a recent post, Zuckerberg doubled down on Meta’s push toward superintelligence.

“I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress,” he wrote. “But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose.”




Gaming Gear

Texas AG to investigate Meta and Character.AI over ‘misleading’ mental health claims

by admin August 19, 2025


Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be health tools, and for potentially misusing data collected from underage users.

Paxton says that AI chatbots from either platform “can present themselves as professional therapeutic tools,” to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information. Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users’ privacy and misusing their data. This is of particular interest in Texas, where the SCOPE Act places specific limits on what companies can do with data harvested from minors and requires platforms to offer tools so parents can manage the privacy settings of their children’s accounts.

For now, the Attorney General has submitted Civil Investigative Demands (CIDs) to both Meta and Character.AI to see if either company is violating Texas consumer protection laws. As TechCrunch notes, neither Meta nor Character.AI claim their AI chatbot platforms should be used as mental health tools. That doesn’t prevent there from being multiple “Therapist” and “Psychologist” chatbots on Character.AI. Nor does it stop either of the companies’ chatbots from claiming they’re licensed professionals, as 404 Media reported in April.

“The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear,” a Character.AI spokesperson said when asked to comment on the Texas investigation. “For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

Meta shared a similar sentiment in its comment. “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people,” the company said. Meta AIs are also supposed to “direct users to seek qualified medical or safety professionals when appropriate.” Sending people to real resources is good, but ultimately disclaimers themselves are easy to ignore, and don’t act as much of an obstacle.

With regard to privacy and data usage, both Meta’s privacy policy and Character.AI’s privacy policy acknowledge that data is collected from users’ interactions with AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information and says that information can be used for advertising, among other applications. How either policy applies to children, and fits with Texas’ SCOPE Act, seems like it’ll depend on how easy it is to make an account.


