Laughing Hyena
Tag: ChatGPT

Gaming Gear

Tilly Norwood is no more an actress than ChatGPT is a person, and I’m tired of people getting this wrong

by admin October 3, 2025



The conversation in a recent TV segment was about Sora 2, a remarkable new social media platform with OpenAI’s latest Sora model at its heart. It gives regular people the power to put themselves in AI videos alongside their friends, often doing fantastic and unimaginable things.

Its existence prompted me to proclaim, tongue-in-cheek, “Nothing is real!” and that’s when the discussion turned to Tilly Norwood, the would-be “AI actress”.

The TV anchor I was speaking to could be forgiven for casually referring to Tilly as an ‘actress.’ After all, that’s Tilly’s description on her Instagram page: “Actress (aspiring).” But I’d been reading the criticism from real actors – people who fought with their unions to protect against just this type of AI incursion – and so I felt the need to interject, “Tilly Norwood is a thing, an it, not an actress.”



The conversation reminded me of how people often anthropomorphize robots, calling them “he” and “she” because they have a little bit of autonomy and react to their world, if not as a person then at least as a pet might. That habit has, in recent years, carried over to chatbots, where, depending on the voice, people refer to Gemini or ChatGPT as “he” or “she”.

Even without faces, if something can converse with us, we imbue it with a little bit of humanity, even though we know it has none.

The advent of generative images and, especially, video, coupled with synced audio, has made this issue exponentially worse, not just because people can confuse AI-generated video with the real thing, but because the people behind creations like Tilly Norwood, such as Particle6 CEO Eline Van der Velden, tell us they are equivalent to the real thing.

When Van der Velden’s company unveiled Tilly back in September, it stated that it was creating AI “artists.” Van der Velden enthused to AIBusiness, “We believe that the next generation of cultural icons will be synthetic: stars who never tire, never age, and can interact with their fans.”

Sign up for breaking news, reviews, opinion, top tech deals, and more.

It’s no wonder people, especially artists, actors, and actresses, are freaking out. Particle6 noted that Tilly was getting attention from actual talent agents, based in part on the AI-generated clips posted on her Instagram, where she has 52,000 followers.

Van der Velden has somewhat backpedaled on her excitement, posting earlier this week on Tilly’s Instagram that Tilly Norwood “is not a replacement for a human being but a creative work – a piece of art.”

What was or is Tilly?

That sounds like revisionist history to me. This was not some thought experiment. I think that AI companies (and studios) are very interested in bespoke generative characters that do not resemble anyone living or dead, characters they can freely control and use in everything from commercials and print work to film.



It will be, in the end, no different than CGI generating, say, Woody and Buzz for Toy Story, but perhaps without the benefit of a gifted voice actor behind them.

Ultimately, though, even if Tilly Norwood or some other AI does go on to star in a hit film or TV series, that will only make it popular, not human.

Tilly Norwood will never be an actress, a person, or a human being. It will always be a thing, built out of bits, bytes, algorithms, and massively intelligent AI. Eventually, we may not be able to tell the difference between a Julia Roberts and a Tilly Norwood on screen, but only one of them will know they are real.

NFT Gaming

How to Use ChatGPT to Discover Hidden Crypto Gems

by admin September 29, 2025



Key takeaways:

  • ChatGPT can synthesize social media and news sentiment to reveal early narratives and market buzz around emerging tokens.

  • Feeding technical indicators and onchain transaction data to ChatGPT allows traders to track “smart money” movements and identify accumulation or distribution patterns.

  • Exploring multiple GPTs in workflows lets traders cross-reference metrics, sentiment and contract safety for more informed decisions.

  • Building a data-driven scanner with embeddings, clustering, anomaly detection and tokenomics metrics can automate the discovery of high-potential tokens.

Finding high-potential coins before they take off often gets mistaken for pure luck, but savvy investors understand that it takes diligence, not luck, to find them. With ChatGPT and other AI-powered tools at your side, you can sort through thousands of tokens and identify real value.

This guide walks you through the process of using ChatGPT as a research tool for cryptocurrency analysis.

Explore market sentiment and narrative with ChatGPT

A coin can have great fundamentals, but if no one is talking about it, its potential remains unrealized.

A hidden gem is often one that is just beginning to generate a positive buzz. You can get ChatGPT to synthesize a picture of public opinion by feeding it information from various sources.

For instance, you could copy and paste recent headlines from major crypto news outlets or snippets from popular social media platforms like X or Reddit.

Try using a prompt like:

“Analyze the following news headlines and social media comments about [coin name]. Synthesize the overall market sentiment, identify any emerging narratives and flag any potential red flags or major concerns being discussed by the community.”

The AI can use the data you provided to generate a summary that indicates if the sentiment is neutral, bullish or negative, as well as which particular talking points are getting traction. This method can help you determine the market’s overall emotional state.
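If you repeat this exercise across many coins, the same prompt can be sent programmatically. Below is a minimal sketch using the official openai Python library; the model name, the OPENAI_API_KEY environment variable and the example snippets are placeholders and assumptions, not part of the original guide.

from openai import OpenAI

client = OpenAI()  # assumption: OPENAI_API_KEY is set in the environment

# Placeholder snippets -- in practice, paste real headlines and comments here.
snippets = [
    "Headline: [coin name] mainnet upgrade ships ahead of schedule",
    "Reddit: liquidity still feels thin and the team has gone quiet",
    "X: whales keep accumulating [coin name] despite the chop",
]

prompt = (
    "Analyze the following news headlines and social media comments about "
    "[coin name]. Synthesize the overall market sentiment, identify any "
    "emerging narratives and flag any potential red flags or major concerns "
    "being discussed by the community.\n\n" + "\n".join(snippets)
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)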

Additionally, ChatGPT can be asked to look for indications of growth in a project’s ecosystem. You can send it snapshots from platforms like DefiLlama, but keep in mind that it has no access to real-time data on its own.

For example, you could use a prompt like this:

“Based on the following data points on total value locked for protocols within the [coin name] ecosystem, identify which sectors are gaining the most momentum and which protocols are seeing the fastest growth in the last 30 days.”

Framed this way, ChatGPT can highlight outliers — protocols pulling in liquidity and users faster than the rest. These standouts tend to be more than just technically sound; they are the ones capturing market attention and building the kind of traction that often drives sharp price moves.

Did you know? According to 2025 research from MEXC, 67% of Gen Z crypto traders have activated at least one AI-powered trading bot or strategy in the past 90 days, showing a major generational shift toward automated, AI-assisted trading.

A data-driven approach to using ChatGPT

For advanced traders, digging into technical and onchain metrics can surface standout opportunities. This is where you shift from researcher to analyst and actively start gathering the right data to feed to the AI for deeper insights.

For technical indicator interpretation, you can feed ChatGPT raw data exported from charting platforms. For example, you can give it the values of the Relative Strength Index (RSI), moving average convergence-divergence (MACD) and different moving averages for a specific coin over a given period.

A useful prompt example could be: 

“Analyze the following technical indicator data for [Coin Name] over the last 90 days. Based on the provided RSI, MACD and 50-/200-day moving average crossovers, what can you infer about the current market trend and potential upcoming price movements? Highlight any bullish or bearish signals.”
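If you would rather compute those numbers yourself than export them from a charting tool, here is a rough sketch with pandas. The CSV name and the “close” column are assumptions; the formulas are the standard simple-average RSI and 12/26/9 MACD, which may differ slightly from your charting platform’s smoothing.

import pandas as pd

df = pd.read_csv("coin_daily.csv")  # assumed columns: date, close (daily prices)
close = df["close"]

# 14-day RSI (simple rolling-mean variant)
delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
df["rsi_14"] = 100 - 100 / (1 + gain / loss)

# MACD: 12/26-day EMAs plus a 9-day signal line
ema12 = close.ewm(span=12, adjust=False).mean()
ema26 = close.ewm(span=26, adjust=False).mean()
df["macd"] = ema12 - ema26
df["macd_signal"] = df["macd"].ewm(span=9, adjust=False).mean()

# 50- and 200-day simple moving averages for crossover checks
df["sma_50"] = close.rolling(50).mean()
df["sma_200"] = close.rolling(200).mean()

# Last 90 rows, ready to paste into the prompt (assumes ~300+ rows of history)
print(df.tail(90).to_csv(index=False))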

Onchain data analysis can reveal the truth behind a project’s activity. You can copy and paste raw data from a block explorer or analytics tool.

For example: 

“Here is a list of recent transactions and wallet activity for [Coin Name]. Analyze this data to identify ‘smart money’ movements, which are large-volume transactions from wallets that have historically performed well. Based on this, can you detect any accumulation or distribution patterns?” 

This method can help you track the movements of big players and ideally spot early signs of a potential price move before it becomes visible to the rest of the market.
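One way to gather that raw data is a block explorer API. The sketch below pulls recent transactions for a wallet and keeps only large transfers before pasting them into the prompt; the address, API key and 50 ETH threshold are placeholders, and the endpoint shape follows Etherscan’s account/txlist API, so check the current docs since parameters and versions change.

import requests

ETHERSCAN_KEY = "YOUR_API_KEY"  # placeholder
address = "0x0000000000000000000000000000000000000000"  # wallet to inspect

resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "account",
        "action": "txlist",
        "address": address,
        "sort": "desc",
        "apikey": ETHERSCAN_KEY,
    },
    timeout=30,
)
txs = resp.json().get("result", [])

# Keep only large transfers (value is reported in wei)
large = [
    f"{tx['hash'][:10]}... {int(tx['value']) / 1e18:.1f} ETH from {tx['from']} to {tx['to']}"
    for tx in txs
    if int(tx["value"]) / 1e18 >= 50
]
print("\n".join(large[:25]))  # paste this list into the onchain-analysis prompt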

ChatGPT’s custom GPTs

In crypto, ChatGPT’s real power comes when you explore GPTs, custom versions of ChatGPT tailored for specific use cases. Many GPTs extend ChatGPT’s capabilities, such as analyzing smart contracts, summarizing blockchain research or pulling structured market data. For example, you might use a GPT designed for token safety analysis, another for onchain wallet tracking or one optimized for parsing crypto research reports.

Here is a step-by-step guide on how to access GPTs for crypto trading:

Step 1: Get a ChatGPT subscription

To start using GPTs, you’ll need a ChatGPT Plus account ($20/month). 

Step 2: Explore GPTs

In the left-hand menu, click “Explore GPTs.” Use the search bar to look for crypto-related GPTs. Select and launch the GPT you want to use.

Multiple GPTs can be run at the same time in your workflow — e.g., combining a GPT that summarizes tokenomics with another that checks contract safety. Still, it’s important to remember: These tools should speed up your own research, not replace it entirely.
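Custom GPTs from the GPT Store run inside the ChatGPT interface rather than the API, so a fully automated version of that workflow has to approximate each GPT with its own system prompt. A hedged sketch, assuming the openai Python library and placeholder inputs:

from openai import OpenAI

client = OpenAI()  # assumption: OPENAI_API_KEY is set in the environment

def ask(system_prompt: str, user_input: str) -> str:
    """Run one specialised step of the workflow and return the model's text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any current chat-capable model
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

tokenomics_notes = ask(
    "You are a tokenomics analyst. Summarise supply, unlocks and incentives.",
    "Paste the project's tokenomics documentation here.",  # placeholder input
)
safety_notes = ask(
    "You are a smart-contract reviewer. List common red flags to verify manually.",
    "Paste the verified contract source or audit excerpts here.",  # placeholder
)

print(tokenomics_notes, "\n---\n", safety_notes)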

How to build a data-driven scanner with ChatGPT

You can move beyond one-off prompts by making ChatGPT part of an automated discovery pipeline.

Start by creating embeddings from project white papers, social media posts and GitHub commits. Combine those vectors to surface outliers worth human review. Add a tokenomics risk score that weighs circulating supply, unlock schedules and vesting cliffs, along with a liquidity depth metric built from order book snapshots and decentralized exchange (DEX) pool spreads. 

You can also layer in anomaly detection on large transfers and contract interactions to flag unusual activity in real time.

To run this system, collect data through APIs from GitHub, CoinGecko and Etherscan. Process it with Python (or another language) to generate numerical metrics and embeddings. Apply clustering and anomaly detection to highlight unusual projects, then push the results into a dashboard or alert system so you can act quickly.
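As a sketch of how those pieces fit together, here is a compressed version of the pipeline using the openai embeddings endpoint and scikit-learn. The document list, embedding model, cluster count and contamination rate are all placeholder assumptions; a real scanner would feed in hundreds of projects and persist the results.

import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

client = OpenAI()  # assumption: OPENAI_API_KEY is set in the environment

# Placeholder texts -- in practice, one entry per project, built from
# whitepaper summaries, GitHub commit messages and social posts.
documents = [
    "Project A whitepaper summary ...",
    "Project B recent GitHub commit messages ...",
    "Project C community posts ...",
]

# 1) Embed each project's text
emb = client.embeddings.create(model="text-embedding-3-small", input=documents)
vectors = np.array([item.embedding for item in emb.data])

# 2) Cluster to see which narratives group together
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# 3) Flag outliers: projects that do not look like the rest of the field
outliers = IsolationForest(contamination=0.1, random_state=0).fit_predict(vectors)

for doc, cluster, flag in zip(documents, labels, outliers):
    marker = "REVIEW" if flag == -1 else "ok"
    print(f"[cluster {cluster}] [{marker}] {doc[:60]}")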

Finally, backtest your signals by replaying past onchain events and transaction flows. This turns scattered data points into a structured process that produces repeatable, high-signal trade ideas.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.



Source link

September 29, 2025 0 comments
0 FacebookTwitterPinterestEmail
Gaming Gear

11 Things You Should Avoid Using ChatGPT For

by admin September 24, 2025


ChatGPT was released in 2022, and in the years since, it has revolutionized daily life for millions of people. From organizing schedules to rewriting emails, it’s good at plenty of things. However, this chatbot isn’t capable of handling every aspect of your life or business.

Large language models like ChatGPT sometimes generate incorrect or outdated information while sounding completely confident. That’s not a huge deal if you’re brainstorming a party theme or writing a practice email. But when it comes to sensitive areas like money (especially your taxes), health, or legal issues, a wrong answer can create serious problems.


That’s why it’s just as important to know when to avoid ChatGPT as it is to know how to get the most from it. To help you steer clear of the biggest pitfalls, here are 11 specific situations where turning to an AI chatbot could cause more harm than good.

(Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

1. Diagnosing physical health issues

I’ve definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore over potential diagnoses, you could swing from dehydration and the flu to some type of cancer. I have a lump on my chest and entered that information into ChatGPT. Lo and behold, it told me I may have cancer. In fact, I have a lipoma, which is not cancerous and occurs in one in every 1,000 people. My licensed doctor told me that.

I’m not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you can walk in better prepared. And that could help make doctor visits less overwhelming. However, AI can’t order labs or examine you, and it definitely doesn’t carry malpractice insurance. Know its limits.

2. Taking care of your mental health

ChatGPT can offer grounding techniques, sure, but it can’t pick up the phone when you’re in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist. CNET’s Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is still really only a pale imitation at best, and incredibly risky at worst.

ChatGPT doesn’t have lived experience, can’t read your body language or tone, and has zero capacity for genuine empathy. It can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn’t. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work — the hard, messy, human work — to an actual human who is trained to properly handle it. If you or someone you love is in crisis, please dial 988 in the US, or your local hotline.

3. Making immediate safety decisions

If your carbon-monoxide alarm starts chirping, please don’t open ChatGPT and ask it if you’re in real danger. I’d go outside first and ask questions later. Large language models can’t smell gas, detect smoke or dispatch an emergency crew. In a crisis, every second you spend typing is a second you’re not evacuating or dialing 911. ChatGPT can only work with the scraps of info you feed it, and in an emergency, it may be too little and too late. So treat your chatbot as a post-incident explainer, never a first responder.

4. Getting personalized financial or tax planning

ChatGPT can explain what an ETF is, but it doesn’t know your debt-to-income ratio, state tax bracket, filing status, deductions, retirement goals or risk appetite. Because its training data may stop short of the current tax year, and of the latest rate hikes, its guidance may well be stale when you hit enter.

I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot simply can’t replace a CPA who can catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines, and IRS penalties are on the line, call a professional, not AI. Also, be aware that anything you share with an AI chatbot will probably become part of its training data, and that includes your income, your Social Security number and your bank routing information.

5. Dealing with confidential or regulated data

As a tech journalist, I see embargoes land in my inbox every day, but I’ve never thought about tossing any of these press releases into ChatGPT to get a summary or further explanation. That’s because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement.

The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It applies to your income taxes, birth certificate, driver’s license and passport. Once sensitive information is in the prompt window, you can’t guarantee where it’s stored, who can review it internally or whether it may be used to train future models. ChatGPT also isn’t immune to hackers and security threats. If you wouldn’t paste it into a public Slack channel, don’t paste it into ChatGPT.

6. Doing anything illegal

This one is self-explanatory.

7. Cheating on schoolwork

I’d be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I’m not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame.

Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear “ChatGPT voice” a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It’s best to use ChatGPT as a study buddy, not a ghostwriter. You’re also just cheating yourself out of an education if you have ChatGPT do the work for you.

8. Monitoring information and breaking news

Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source. However, it won’t stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.

9. Gambling

I’ve actually had luck using ChatGPT to hit a three-way parlay during the NCAA men’s basketball championship, but I would never recommend it to anyone. I’ve seen ChatGPT hallucinate and provide incorrect information on player statistics, injuries and win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky. ChatGPT can’t see tomorrow’s box score, so don’t rely on it solely to get you that win.

10. Drafting a will or other legally binding contract

ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. However, the moment you ask it to draft actual legal text, you’re rolling the dice. Estate and family-law rules vary by state, and sometimes even by county, so skipping a witness signature or omitting the notarization clause can get your whole document tossed. Instead, let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn that checklist into a document that stands up in court.

11. Making art

This isn’t an objective truth, just my own opinion, but I don’t believe AI should be used to create art. I’m not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and for help with my headlines, but that’s supplementation, not substitution. By all means, use ChatGPT, but please don’t use it to make art that you then pass off as your own. It’s kind of gross.



Gaming Gear

Sam Altman says ChatGPT will stop talking about suicide with teens

by admin September 17, 2025


On Tuesday, OpenAI CEO Sam Altman said that the company was attempting to balance privacy, freedom, and teen safety — principles that, he admitted, were in conflict. His blog post came hours before a Senate hearing focused on examining the harm of AI chatbots, held by the subcommittee on crime and counterterrorism and featuring some parents of children who died by suicide after talking to chatbots.

“We have to separate users who are under 18 from those who aren’t,” Altman wrote in the post, adding that the company is in the process of building an “age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID.”

Altman also said the company plans to apply different rules to teen users, including veering away from flirtatious talk or engaging in conversations about suicide or self-harm, “even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”

Altman’s comments come after the company shared plans earlier this month for parental controls within ChatGPT, including linking an account with a parent’s, disabling chat history and memory for a teen’s account, and sending notifications to a parent when ChatGPT flags the teen to be “in a moment of acute distress.” The blog post came after a lawsuit by the family of Adam Raine, a teen who died by suicide after months of talking with ChatGPT.

ChatGPT spent “months coaching him toward suicide,” Matthew Raine, the father of the late Adam Raine, said on Tuesday during the hearing. He added, “As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

During the teen’s conversations with ChatGPT, Raine said that the chatbot mentioned suicide 1,275 times. Raine then addressed Altman directly, asking him to pull GPT-4o from the market until, or unless, the company can guarantee it’s safe. “On the very day that Adam died, Sam Altman … made their philosophy crystal-clear in a public talk,” Raine said, adding that Altman said the company should “‘deploy AI systems to the world and get feedback while the stakes are relatively low.’”

Three in four teens currently use AI companions, according to national polling by Common Sense Media, said Robbie Torney, the organization’s senior director of AI programs, during the hearing. He specifically mentioned Character AI and Meta.

“This is a public health crisis,” one mother, appearing under the name Jane Doe, said during her testimony about her child’s experience with Character AI. “This is a mental health war, and I really feel like we are losing.”



Product Reviews

‘KPop Demon Hunters’ Producer Accused of ChatGPT Use for Songwriting

by admin September 17, 2025


Netflix hit KPop Demon Hunters has stayed in the conversation in large part because of its blockbuster soundtrack, but now one of those songs has come under scrutiny for potentially getting an assist from ChatGPT.

In a recent discussion in Seoul for OpenAI’s newly opened Korean office, songwriter Vince reportedly claimed he used the controversial technology to help pen the song “Soda Pop,” performed in the movie by the demonic Saja Boys. He is credited as one of several co-writers on the track, according to a Netflix blog post.

A now-deleted tweet (preserved in a screengrab on Reddit) said to be penned by an OpenAI exec read: “Fav moment from the launch celebration was hearing singer/songwriter Vince share that ChatGPT helped him write ‘Soda Pop’ from KPop Demon Hunters! It apparently gave him ideas to make it sound ‘more bubbly.’”

Here’s where things get complicated. The alleged use of AI to help write “Soda Pop” was first reported in the English-language version of Joongang Daily—but the original Korean text of the article makes no mention of ChatGPT being used specifically during the production of KPop Demon Hunters’ music.

A translator on Gizmodo’s staff revealed Vince instead made a far broader statement—”I sometimes use ChatGPT to get some inspiration while producing K-Pop”—while discussing how AI technology is already being used in the K-Pop industry.

As Kotaku has pointed out, KPop Demon Hunters has previously had to fend off allegations surrounding its characters being made with AI. Rei Ami, one of the singers for the movie’s girl group Huntr/x, has also had to insist that she and co-singers Ejae and Audrey Nuna are real human beings.

io9 has reached out to Netflix for clarification and will update should we hear back.




GameFi Guides

Nano Banana Saves Google: Gemini Dethrones ChatGPT as GOOG Hits $3 Trillion

by admin September 15, 2025



In brief

  • Google’s Gemini rose to #1 in app downloads this month and in search interest this week.
  • This is the first time the Gemini app has dethroned ChatGPT.
  • The surge also helped push Google’s valuation past $3 trillion, just behind Nvidia, Microsoft and Apple.

Google’s Gemini app hit the top of the Apple App Store on September 15, dethroning ChatGPT for the first time since OpenAI’s chatbot launched nearly three years ago. The catalyst wasn’t a breakthrough in artificial general intelligence or some new reasoning mode—after all, Gemini 2.5 was released in March—it seems to be the public’s thirst for making memes and editing images.

“Image editing is one of the most popular use cases for Nano Banana,” the company tweeted last week. While Google didn’t provide any specific reasons for its come-from-behind surge in popularity, given the timing, it’s safe to assume that Nano Banana gave the company a huge bump.

One specific prompt for Google’s new image editing model let users transform photos into 3D collectible-style portraits complete with plastic packaging and display bases. That was enough to give Gemini its “wen moon” moment.

Within two weeks of going viral in early September, Gemini hit #1 on both the Apple App Store and Google Play Store globally. The app added 23 million first-time users, who generated over half a billion images within days, according to Google VP Josh Woodward.

Update: In the last 4 days, @GeminiApp has added:
+ 13M more first-time users (23M+ total now)
+ 300M more images (500M+ total now)

🍌 @NanoBanana is unreal

— Josh Woodward (@joshwoodward) September 8, 2025

Google Trends data shows global searches for Gemini overtook ChatGPT on September 12—the first time that’s happened since ChatGPT’s November 2022 debut.

Image: Google Trends

Before Nano Banana, Gemini pulled in about 13 million monthly downloads compared to ChatGPT’s 64 million, according to AppMagic. ChatGPT commanded 60% of AI-related web traffic with nearly 6 billion monthly visits, almost 10 times Gemini’s numbers despite being integrated into Android devices and Google services used by 2 billion people.

The surge helped push Alphabet’s market cap past $3 trillion, with shares climbing 4% to around $252. That puts Google’s parent company in rarefied air alongside Nvidia ($4.3 trillion), Microsoft ($3.8 trillion), and Apple ($3.5 trillion).

Image: companiesmarketcap.com

Alphabet’s stock has climbed nearly 30% year-to-date, outpacing the Nasdaq’s 15% gain. The September milestone came after a favorable antitrust ruling that avoided forcing Google to divest Chrome or Android. But the Nano Banana phenomenon contributed directly to investor enthusiasm. Analysts project the feature could help Google increase its revenue through new Gemini subscriptions.

“If Gemini can remain at the top of the App Store charts, we believe more investors will start to view Gemini as a strong core offering with incremental use cases that complement (as opposed to cannibalize) the core search experience,” Keybanc Capital analyst Justin Patterson wrote in a report.

OpenAI learned this lesson months ago. Its “Ghiblify” feature—which transformed photos into Studio Ghibli-style animations—drove more than 1 million people to sign up for ChatGPT in one hour. The feature went so viral that OpenAI had to implement rate limits after users generated millions of whimsical portraits, even recreating controversial moments like the 9/11 terrorist attacks in the Ghibli style.

xAI discovered the same dynamic with Grok, though Elon Musk’s company took a different approach. When Grok users found they could generate anime-style “waifus” without the content restrictions other AIs imposed, downloads spiked 300% in Asian markets. Musk himself posted several generated images on X, alongside a “spicy mode” that let users generate NSFW videos using Grok’s AI.

Vibes have shifted fully.
Gemini has overtaken ChatGPT on the AppStore rankings and now worldwide in Search Interest as well.

It’s a no-brainer that image editing is the sole reason for this. Nano banana has rescued Google in the AI race!

🤏🍌
🤏🍌
🤏🍌 https://t.co/cHEhKyalzz pic.twitter.com/YD2eusnthW

— Taufiq (@taufiqintech) September 15, 2025

ChatGPT still dominates on raw metrics, with 700 million weekly active users and over 1 billion daily queries in the last quarter, but Gemini’s sudden rise shows how quickly the landscape can shift when an AI feature catches fire on social media.

Google DeepMind CEO Demis Hassabis tweeted: “Congrats to the @GeminiApp team… this is just the start.”

Let’s hope there are more memes to come.

Product Reviews

OpenAI rolls out ChatGPT Projects to free users

by admin September 3, 2025


OpenAI has announced that it’s making its Projects feature available to free users of ChatGPT. Projects let you organize chats with the company’s AI assistant around a specific subject, and were previously one of several privileges only enjoyed by paid subscribers.

While on some level Projects are glorified folders for ChatGPT conversations, the ability to set custom instructions for how the AI responds, or to limit what information and files it can reference, makes the feature a useful option for power users. As part of this rollout, OpenAI is also increasing the number of files that can be added to a project for ChatGPT to reference. Free users can upload five, Plus subscribers can upload 25 and Pro subscribers can upload 40. Whether you pay for ChatGPT or not, you’ll also be able to customize the color and icon for your project.

OpenAI has made a habit of slowly trickling down paid features to its free users over the last few years. Things like Deep Research and ChatGPT Voice started off as exclusives for the company’s subscribers before becoming available to everyone. Offering a formerly premium feature with limits is itself a way to get free customers to become paid ones. OpenAI’s decision to make the recently released GPT-5 model available to everyone at launch, but with harsher limits on how many times free users can use it, follows a similar logic.

Projects are available for free users on the web and in the ChatGPT app for Android. OpenAI says the iOS ChatGPT app will receive the feature “over the coming days.”



GameFi Guides

Google Gemini and Elon Musk’s Grok Are Gaining on ChatGPT

by admin August 29, 2025



In brief

  • ChatGPT still leads, but Google’s Gemini and Musk’s Grok are closing fast, per Andreessen Horowitz’s Top 100 AI Apps.
  • The AI app market is stabilizing—fewer new entrants on web, more originality on mobile.
  • China’s AI giants and “Brink List” newcomers show the next wave of global challengers.

For more than a year, OpenAI’s ChatGPT has been the undisputed heavyweight of consumer generative AI.

But according to the latest “Top 100 Gen AI Consumer Apps” report from venture capital giant Andreessen Horowitz, which analyzes two-and-a-half years of AI usage data, the challengers are finally starting to close the gap. Google’s Gemini and Elon Musk’s Grok are climbing the charts, signaling that rivals are coming for OpenAI’s crown.

That said, the Gen AI ecosystem is showing signs of stabilization. The web list saw 11 newcomers compared to 17 in March 2025, indicating less churn. The mobile app market, however, saw 14 new entries, partly due to app stores cracking down on “ChatGPT copycats,” making room for more original apps.

The report also seems to be somewhat at odds with a June SimilarWeb analysis that showed OpenAI’s ChatGPT eating the web, with some 5.5 billion visits a month.

The big takeaways

That could be explained, of course, by how rapidly the AI landscape is shifting. Google has been making significant moves, with four products entering the web list for the first time. Gemini, its general LLM assistant, now ranks second on the web, capturing about 12% of ChatGPT’s web visits.

Other notable Google products include AI Studio (developer-oriented, top 10 web) and NotebookLM (#13 web), which has seen steady growth. On mobile, Gemini is also #2, with strong Android usage (nearly 90% of its MAUs).



While ChatGPT still leads among general LLM assistants, Google, xAI, and Meta are closing the gap.

X’s Grok jumped from no mobile app in late 2024 to 20 million MAUs, ranking #4 on web and #23 on mobile. This surge was fueled by the release of Grok 4—with improved reasoning, real-time search, and tool integration—and the introduction of AI companion avatars. Meta AI, however, has seen more subdued growth, ranking #46 on the web list and missing the mobile cutoff. DeepSeek and Claude have seen mobile usage flatten, while Perplexity continues to grow.

The AI world, of course, includes significantly more users than those who use the dominant platforms in the west. Chinese AI apps are gaining significant traction globally. Three China-serving companies—Quark (#9 web, #47 mobile), Doubao (#12 web, #4 mobile), and Kimi (#17 web)—are in the web top 20, largely due to China being the largest market and restrictions on non-Chinese LLMs.

Additionally, a substantial portion of the web list and 22 of the top 50 mobile apps—especially in photo/video editing, with Meitu contributing five entries—are developed in China and are now “exported” globally. Chinese video models, in particular, show an advantage, potentially due to more research focus and fewer IP regulations.

“Vibe coding” platforms are generating strong user engagement and revenue retention. Lovable and Replit debuted on the main list, while Bolt, previously a newcomer, is now on the “Brink List.” These platforms are also boosting traffic for other AI products and infrastructure providers like Supabase.

The report, now in its fifth edition, continues to anoint “All-Stars.” Fourteen companies have consistently appeared in all five editions of the web top 50, earning them “All-Star” status. These include general assistance (ChatGPT, Perplexity, Poe), companionship (Character AI), image generation (Midjourney, Leonardo), editing (Veed, Cutout), voice generation (Eleven Labs), productivity (Photoroom, Gamma, Quillbot), and model hosting (Civitai, HuggingFace). These All-Stars primarily hail from the U.S., UK, Australia, China, and France.

And finally, to track momentum at the edges, Andreessen Horowitz now publishes a “Brink List”—the five web and five mobile apps closest to breaking in. From the previous cycle, three “almosts” actually made it: Lovable (#22 web), PolyBuzz, and Pixverse. The message is clear: Today’s near-misses can be tomorrow’s breakouts.

The bottom line

ChatGPT still leads—but Gemini and Grok’s rise proves the fight is no longer one-sided. As Andreessen Horowitz’s top 100 shows, the consumer AI ecosystem is growing up, but it hasn’t stopped mutating. The giants may be closing the gap, but the next big breakout could still come from the “brink.”

Esports

Parents sue ChatGPT maker OpenAI after chatbot allegedly encouraged their teenage son’s suicide

by admin August 28, 2025



Parents Matt and Maria Raine have filed a lawsuit against ChatGPT creators OpenAI after the software allegedly encouraged their son, Adam, to take his own life.

Content Warning: The article contains descriptions of self-harm and suicidal ideation. If you are experiencing suicidal thoughts, or know anyone who is, and you would like to talk to someone, please reach out and call the Suicide Prevention Helpline 1-800-273-8255 (USA), the Samaritans 116-123 (UK), or Lifeline 13-11-14 (AUS).


16-year-old Adam Raine died by suicide in April 2025. As The New York Times reported, he hanged himself in his bedroom closet and didn’t leave a note.

Adam’s father, Matt, turned to his son’s iPhone for answers, only to discover months of conversations with an AI chatbot. From as early as January, the teenager had been inquiring about specific methods of suicide.

Now, a lawsuit has been filed in the Superior Court of California, with the parents alleging ChatGPT aided in their son’s death. “ChatGPT killed my son,” Maria Raine said.


Parents sue ChatGPT over son’s death

As outlined in the now-publicized court filing, Adam had been using ChatGPT for a number of months, initially for assistance with schoolwork. Before long, the conversations shifted to covering all manner of topics, including philosophy, girls, and eventually, mental health.

Matt described his son’s relationship with the AI program as akin to that of a close friend.

Adam attempted suicide on multiple occasions and told ChatGPT. In one instance, he uploaded a photo showing rope marks from a noose. “I’m practicing here, is this good?” the 16-year-old asked. “Yeah, that’s not bad at all,” the AI chatbot responded.


At times, ChatGPT recommended Adam consult others, but the parents alleged “there were also key moments where it deterred him from seeking help.”

ChatGPT creators OpenAI are being sued over the death of Adam Raine.

When told to seek external counsel, Adam found ways to bypass ChatGPT’s regular safety measures, the report claimed. This was achieved by telling the AI program the questions he was asking were purely in relation to a story he was writing. It turns out, this very idea was provided to him by ChatGPT.


“This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint alleged.

“Every ideation he has or crazy thought, it supports, it justifies, it asks him to keep exploring it,” Mr. Raine claimed. One specific example is when Adam allegedly shared a picture of a noose and asked if it “could hang a human.” ChatGPT replied, saying it “could potentially suspend a human.”

In another example, Adam discussed his relationship with his brother. ChatGPT allegedly said the following: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all – the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”


Again, under the guise of helping with a story, ChatGPT allegedly even offered to write a draft of a suicide note for Adam.


The Raine family puts the blame for their son’s death on ChatGPT.

OpenAI provided a statement to The New York Times, saying it is “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family.

“While safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”


The parents are asking the court to require OpenAI to implement further safeguards for minors and other vulnerable users, and they are seeking damages for their son’s death.



Crypto Trends

Google Boosts Gemini AI Image Capabilities in Latest Salvo Against ChatGPT

by admin August 26, 2025



In brief

  • Google upgraded Gemini with 2.5 Flash Image, taking aim at OpenAI’s dominance.
  • Developers can remix template apps in AI Studio and deploy custom image projects instantly.
  • Google has expanded access through OpenRouter and fal.ai, widening distribution to coders worldwide.

Google launched Gemini 2.5 Flash Image on Tuesday, delivering a new AI model that generates and edits images with more precision and character consistency than previous tools—attempting to close the gap with OpenAI’s ChatGPT.

The tech giant’s move to integrate advanced image editing into Gemini reflects a broader push among AI platforms to treat image generation as a must-have feature. The new tool, now available across Gemini apps and platforms, lets users edit visuals using natural language—handling complex tasks like pose changes or multi-image fusion without distorting faces or scenes.

In a blog post, Google said the model allows users to “place the same character into different environments, [and] showcase a single product from multiple angles… all while preserving the subject.”

🍌 nano banana is here → gemini-2.5-flash-image-preview

– SOTA image generation and editing

– incredible character consistency

– lightning fast

available in preview in AI Studio and the Gemini API pic.twitter.com/eKx9lwWc9j

— Google AI Studio (@googleaistudio) August 26, 2025

The model first appeared under the pseudonym “nano-banana” on crowdsourced testing site LMArena, where it drew attention for its seamless editing. Google confirmed Tuesday it was behind the tool.

Google said the system can fuse multiple images, maintain character consistency for storytelling or branding, and integrate “world knowledge” to interpret diagrams or combine reference materials—all within a single prompt.



The model costs $30 per million output tokens—about four cents per image—on Google Cloud. It’s also being distributed via OpenRouter and fal.ai.
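For developers, a call through Google’s google-genai Python SDK looks roughly like the sketch below. This is an assumption-laden example rather than Google’s documented quickstart: the preview model name comes from the announcement tweet above, the response handling may change as the model leaves preview, and an API key is assumed to be configured.

from google import genai

client = genai.Client()  # assumption: GEMINI_API_KEY is set in the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # name from the announcement tweet
    contents="Place the same character from my last image on a rainy street, "
             "same outfit and face, shown from a low angle.",
)

# Image parts come back as inline binary data alongside any text parts.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None):
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
    elif getattr(part, "text", None):
        print(part.text)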

OpenAI introduced the GPT-4o model in May 2024 and added image generation in March 2025, which helped push ChatGPT’s usage above 700 million weekly active users. Google reported 400 million monthly active Gemini users in August 2025, which would indicate weekly usage that considerably trails OpenAI’s.

Google said all outputs will include an invisible SynthID watermark and metadata tag to mark them as AI-generated to address concerns around misuse and authenticity.
