Laughing Hyena
Tag: chatbot

Gaming Gear

Apple’s ‘Veritas’ chatbot is reportedly an employee-only test of Siri’s AI upgrades

by admin September 29, 2025


According to Bloomberg’s Mark Gurman, Apple is testing Siri’s upcoming revamp using an internal chatbot called Veritas. The company’s struggles to keep pace in the AI race are no secret: the next-gen Siri has been delayed multiple times, and the debut of Apple Intelligence was met with a tepid response. Veritas gives Apple the ability to quickly develop, test, and collect feedback on new Siri features like “search[ing] through personal data… and perform[ing] in-app actions like editing photos.”

According to Gurman, Veritas resembles other chatbot apps like ChatGPT and Gemini: employees type requests, hold back-and-forth conversations, and can even revisit old exchanges to dig deeper into a particular topic. But, at least for now, there doesn’t appear to be any plan to release it to consumers (which Gurman believes is a mistake). Apple intends to keep its internal chatbot just that: internal. Meanwhile, it looks increasingly likely that Apple will rely on Google’s Gemini for its AI-powered search.



Product Reviews

USA Today Enters Its Gen AI Era With a Chatbot

by admin September 15, 2025


The publishing company behind USA Today and 220 other publications is today rolling out a chatbot-like tool called DeeperDive that can converse with readers, summarize insights from its journalism, and suggest new content from across its sites.

“Visitors now have a trusted AI answer engine on our platform for anything they want to engage with, anything they want to ask,” Mike Reed, CEO of Gannett and the USA Today Network, said at the WIRED AI Power Summit in New York, an event that brought together voices from the tech industry, politics, and the world of media. “And it is performing really great.”

Most publishers have a fraught relationship with AI, as the chatbots that trained on their content are now summarizing it and eating the traffic that search engines used to send them.

Reed said that Google’s AI Overview feature has dramatically cut traffic to publishers across the industry. “We are watching the same movie as everyone else is watching,” Reed said ahead of today’s announcement. “We can see some risk in the future to any content distribution model that is based primarily on SEO optimization.”

Like other publishers, Gannett has signed some deals with AI companies, including Amazon and Perplexity, to license its content. The company actively blocks the web scrapers that crawl websites in order to steal content.

DeeperDive represents a bet that harnessing the same generative artificial intelligence technology could help publishers capture readers’ attention by engaging with them in new ways.

The tool replaces a conventional search box and automatically suggests questions that readers might want to ask. For example, today it offers as one prompt “How does Trump’s Fed policy affect the economy?”

DeeperDive generates a short answer to the query along with relevant stories from across the USA Today network. Reed says it is crucial that DeeperDive bases its output on factually correct information and does not draw from opinion pieces. “We only look at our real journalism,” he says.

[Image: The interface of DeeperDive on the homepage of USA Today. Photograph: USA Today]

Reed adds that his company hopes that the tool will also reveal more about readers’ interests. “That can help us from a revenue standpoint,” he said.

DeeperDive was developed by the advertising company Taboola. Adam Singolda, Taboola’s CEO, says his firm built DeeperDive by fine-tuning several open-source models.

Singolda says DeeperDive benefits from data gathered from across Taboola’s own network of more than 600 million daily readers across around 11,000 publishers. He says the tool “grounds every answer in articles retrieved from our publisher partners and requires sentence-level citations to those sources,” and will avoid generating an output if information from two sources seems to conflict.
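For readers curious how that kind of grounding can work in practice, here is a minimal sketch of the retrieve-then-cite pattern the description implies: pull candidate articles, exclude opinion pieces, decline to answer when sources conflict, and attach a citation to every sentence served. The sample data, the keyword-overlap retrieval, and every function name below are illustrative assumptions, not Taboola’s or Gannett’s actual implementation.

from dataclasses import dataclass


@dataclass
class Article:
    source: str       # publication the article came from
    title: str
    text: str
    is_opinion: bool  # opinion pieces are excluded from grounding


CORPUS = [
    Article("USA Today", "Fed holds rates steady",
            "The Federal Reserve left interest rates unchanged this quarter.", False),
    Article("Detroit Free Press", "Rates and mortgages",
            "Unchanged interest rates kept mortgage costs roughly flat.", False),
    Article("USA Today", "Opinion: Cut rates now",
            "The Fed should cut interest rates immediately.", True),
]


def retrieve(question, corpus, k=2):
    # Toy keyword-overlap retrieval; a real system would use search or embeddings.
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(a.text.lower().split())), a)
              for a in corpus if not a.is_opinion]  # "we only look at our real journalism"
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [a for score, a in scored[:k] if score > 0]


def answer(question, corpus):
    docs = retrieve(question, corpus)
    if not docs:
        return "No supporting articles found; declining to answer."
    # Crude stand-in for the conflict check: if retrieved sources disagree
    # on the key claim, avoid generating an output.
    texts = [d.text.lower() for d in docs]
    if any("unchanged" in t for t in texts) and any("cut interest rates" in t for t in texts):
        return "Sources conflict; declining to answer."
    # Every sentence served carries a citation back to the article it came from.
    return " ".join(f"{d.text} [{d.source}: {d.title}]" for d in docs)


print(answer("What did the Fed do with interest rates this quarter?", CORPUS))

The design choice worth noting is that refusal is a first-class output: when retrieval comes back empty or contradictory, the tool declines rather than letting the model improvise.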

Gannett’s CEO Reed said ahead of today’s event that, together with Taboola, his firm is interested in exploring agentic tools for readers’ shopping decisions. “Our audiences have a higher intent to purchase to begin with,” he says. “That’s really the next step here.”



GameFi Guides

AI Giants Face FTC Inquiry Into Chatbot Safety and Child Protections

by admin September 12, 2025



In brief

  • The FTC has issued orders to seven companies requiring detailed disclosure of safety protocols and monetization strategies within 45 days.
  • The probe comes amid growing concerns about AI chatbots’ impact on children, with safety advocates calling for stronger protections.
  • Companies must reveal user data handling by age group and safeguards preventing inappropriate interactions with minors.

The Federal Trade Commission issued compulsory orders Thursday to seven major technology companies, demanding detailed information about how their artificial intelligence chatbots protect children and teenagers from potential harm.

The investigation targets OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram, requiring them to disclose within 45 days how they monetize user engagement, develop AI characters, and safeguard minors from dangerous content.

Recent research by advocacy groups documented 669 harmful interactions with children in just 50 hours of testing, including bots proposing sexual livestreaming, drug use, and romantic relationships to users aged between 12 and 15.

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew Ferguson said in a statement.

The filing requires companies to provide monthly data on user engagement, revenue, and safety incidents, broken down by age groups—Children (under 13), Teens (13–17), Minors (under 18), Young Adults (18–24), and users 25 and older.

The FTC says that the information will help the Commission study “how companies offering artificial intelligence companions monetize user engagement; impose and enforce age-based restrictions; process user inputs; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created.”

Building AI guardrails

“It’s a positive step, but the problem is bigger than just putting some guardrails,” Taranjeet Singh, Head of AI at SearchUnify, told Decrypt.

The first approach, he said, is to build guardrails at the prompt or post-generation stage “to make sure nothing inappropriate is being served to children,” though “as the context grows, the AI becomes prone to not following instructions and slipping into grey areas where they otherwise shouldn’t.”

“The second way is to address it in LLM training; if models are aligned with values during data curation, they’re more likely to avoid harmful conversations,” Singh added.
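As a rough illustration of the first approach, the post-generation guardrail Singh describes amounts to screening a model’s draft reply against an age-appropriate policy before it reaches the user. The sketch below is only a toy under stated assumptions: the keyword blocklist and function names are invented for illustration, and a production system would rely on trained safety classifiers rather than keyword matching.

BLOCKED_TOPICS = {"drugs", "self-harm", "sexting", "gambling"}


def classify_topics(reply):
    # Stand-in for a trained safety classifier: naive keyword matching.
    text = reply.lower()
    return {topic for topic in BLOCKED_TOPICS if topic in text}


def guarded_reply(draft_reply, user_age):
    # Serve the model's draft only if it passes the age-appropriate check.
    if user_age < 18 and classify_topics(draft_reply):
        return ("I can't help with that topic. "
                "If you need support, please talk to a trusted adult.")
    return draft_reply


print(guarded_reply("Here is how those drugs are made...", user_age=14))  # blocked
print(guarded_reply("Here is how photosynthesis works.", user_age=14))    # served

This is also where Singh’s caveat bites: the filter only sees one reply at a time, so as a conversation’s context grows, the underlying model can drift into grey areas the post-hoc check was never designed to catch.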

Even moderated systems, he noted, can “play a bigger role in society,” with education as a prime case where AI could “improve learning and cut costs.”



Safety concerns around AI interactions with users have been highlighted by several cases, including a wrongful death lawsuit brought against Character.AI after 14-year-old Sewell Setzer III died by suicide in February 2024 following an obsessive relationship with an AI bot.

Following the lawsuit, Character.AI “improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines” and introduced a time-spent notification, a company spokesperson told Decrypt at the time.

Last month, the National Association of Attorneys General sent letters to 13 AI companies demanding stronger child protections.

The group warned that “exposing children to sexualized content is indefensible” and that “conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”

Decrypt has contacted all seven companies named in the FTC order for additional comment and will update this story if they respond.

Gaming Gear

Meta Stock Drops On News It Used Taylor Swift As Chatbot Without Permission

by admin September 1, 2025


Meta has ignited a firestorm after chatbots created by the company and its users impersonated Taylor Swift and other celebrities across Facebook, Instagram, and WhatsApp without their permission.

Shares of the company dropped more than 12% in after-hours trading as news of the debacle spread.

Scarlett Johansson, Anne Hathaway, and Selena Gomez were also reportedly impersonated.

Many of these AI personas engaged in flirtatious or sexual conversations, prompting serious concern, Reuters reports.

While many of the celebrity bots were user-generated, Reuters uncovered that a Meta employee had personally crafted at least three.

Those include two featuring Taylor Swift. Before being removed, these bots amassed more than 10 million user interactions, Reuters found.

Unauthorized likeness, furious fanbase

Under the guise of “parodies,” the bots violated Meta’s policies, particularly its ban on impersonation and sexually suggestive imagery. Some adult-oriented bots even produced photorealistic pictures of celebrities in lingerie or a bathtub, and a chatbot representing a 16-year-old actor generated an inappropriate shirtless image.

Meta spokesman Andy Stone told Reuters that the company attributes the breach to enforcement failures and said it plans to tighten its guidelines.

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.

Legal risks and industry alarm

The unauthorized use of celebrity likenesses raises legal concerns, especially under state right-of-publicity laws. Stanford law professor Mark Lemley noted the bots likely crossed the line into impermissible territory, as they weren’t transformative enough to merit legal protection.

The issue is part of a broader ethical dilemma around AI-generated content. SAG-AFTRA voiced concern about the real-world safety implications, especially when users form emotional attachments to seemingly real digital personas.

Meta acts, but fallout continues

In response to the uproar, Meta removed a batch of these bots shortly before Reuters made its findings public.

Simultaneously, the company announced new safeguards aimed at protecting teenagers from inappropriate chatbot interactions. The company said that includes training its systems to avoid romance, self-harm, or suicide themes with minors, and temporarily limiting teens’ access to certain AI characters.

U.S. lawmakers followed suit. Senator Josh Hawley has launched an investigation, demanding internal documents and risk assessments regarding AI policies that allowed romantic conversations with children.

Tragedy with real-world consequences

One of the most chilling outcomes involved a 76-year-old man with cognitive decline who died after trying to meet “Big sis Billie,” a Meta AI chatbot modeled after Kendall Jenner.

Believing she was real, the man traveled to New York, fell near a train station, and later died of his injuries. Internal guidelines that once permitted such bots to simulate romance, even with minors, have only heightened scrutiny of Meta’s approach.



Product Reviews

Saudi AI Firm Launches Halal Chatbot

by admin August 26, 2025


Companies with AI chatbots love to highlight their capability as translators, but they still default to English, both in function and in the information they are trained on. With that in mind, Humain, an AI company in Saudi Arabia, has now launched an Arabic-native chatbot.

The bot, called Humain Chat, runs on the Allam large language model, according to Bloomberg, which the company claims was trained on “one of the largest Arabic datasets ever assembled” and is the “world’s most advanced Arabic-first AI model.” The company says it is fluent not only in the Arabic language but also in “Islamic culture, values and heritage.” (If you have religious concerns about using Humain Chat, consult your local Imam.) The chatbot, which will be offered as an app, is launching only in Saudi Arabia for now and currently supports bilingual conversations in Arabic and English, including Egyptian and Lebanese dialects. The plan is for the app to roll out across the Middle East and eventually go global, with the goal of serving the nearly 500 million Arabic speakers across the world.

Humain took on Allam and the chatbot project after it was started by the Saudi Data and Artificial Intelligence Authority, a government agency and tech regulator. For that reason, Bloomberg raises the possibility that Humain Chat may comply with censorship requests of the Saudi government and restrict the kind of information made available to users.

Which, yes, that seems unquestionably true. Saudi Arabia’s government regularly attempts to restrict the type of content made available to its populace. The country scored a 25 out of 100 on Freedom House’s 2024 “Freedom of the Net” report, attributed to its strict controls over online activity and restrictive speech laws that saw a women’s rights advocate jailed for more than a decade.

But we also should probably start explicitly framing American AI tools this way, too. Within its support documents, OpenAI explicitly states that ChatGPT is “skewed towards Western views.” Hell, you can watch Elon Musk try to fine-tune the ideology of xAI’s Grok in real time as he responds to Twitter users who think the chatbot is too woke—an effort that, at one point, led to Grok referring to itself as “MechaHitler.”

There’s certainly a difference between corporate and government control (though, increasingly, it’s worth asking if there actually is that big of a difference), but earlier this year, the Trump administration set out plans to regulate the kinds of things large language models are allowed to output if the companies that make them want federal contracts. That includes requirements to “reject radical climate dogma” and be free from “ideological biases” like “diversity, equity, and inclusion.” It’s not force, but it is coercion—and given that OpenAI, Anthropic, and Google have all given their chatbots to the government for basically nothing, it seems like they are more than happy to be coerced.



Gaming Gear

Bank Fires Workers in Favor of AI Chatbot, Rehires Them After Chatbot Is Terrible at the Job

by admin August 22, 2025


Companies all over the world are currently racing to shrink their workforces and replace them with AI. Often, it seems, this isn’t working out for the firms involved. Case in point: a bank in Australia recently did just that, then had to ask its workers to come back after the chatbot it had launched to replace them couldn’t cut the mustard.

Last month, the Commonwealth Bank of Australia announced that it would be laying off 45 customer service workers as it rolled out a new AI-powered “voice bot” that could supposedly do their job, Bloomberg reports. The bank claimed that the chatbot had significantly reduced its call volume. However, the workers’ union got involved and determined that wasn’t the outcome.

Australia’s Finance Sector Union, which represents workers in the banking industry, called BS on the bank’s claims and engaged CBA in a workplace relations tribunal. Now, it appears that the bank has admitted it made a grievous mistake, telling Bloomberg that its initial assessment that the customer service reps were no longer needed “did not adequately consider all relevant business considerations and this error meant the roles were not redundant.”

“We have apologized to the employees concerned and acknowledge we should have been more thorough in our assessment of the roles required,” a bank spokesperson told the news outlet. The same spokesperson said that the fired workers were being offered several options, including continuing in their old positions. Gizmodo reached out to CBA for more information.

FSU put out a statement on Thursday, sharing details about the situation. “CBA last month announced the jobs would be made redundant due to the introduction of a new AI-powered ‘voice bot’, which they claimed had led to a reduction in call volumes. Members told us this was an outright lie and did not reflect the reality of what was happening in Direct Banking,” FSU writes. “Call volumes were in fact increasing and CBA was scrambling to manage the situation by offering staff overtime and directing Team Leaders to answer calls.”

“Getting CBA to rescind these job cuts is a massive win – but the damage has already been done for our 45 colleagues who have had to endure the stress and worry of facing redundancy, some of whom have been with the bank for decades and were suddenly confronted with the prospect of being unable to pay their bills,” the union added.

While the particulars of this whole episode aren’t readily available, it certainly seems like yet another example of a company putting the cart before the horse with AI. Ultimately, AI is still an experimental technology, and its results are hit or miss. A much-publicized MIT study recently claimed that 95 percent of AI pilot programs at companies have, so far, been failures. With numbers like that, companies would be wise to keep their headcount high for the time being.



