Laughing Hyena
Tag: Altman

Product Reviews

Sam Altman Says the GPT-5 Haters Got It All Wrong

by admin October 4, 2025


OpenAI’s August launch of its GPT-5 large language model was something of a disaster. There were glitches during the livestream, with the model generating charts with obviously inaccurate numbers. In a Reddit AMA with OpenAI employees, users complained that the new model wasn’t friendly and called for the company to restore the previous version. Most of all, critics griped that GPT-5 fell short of the stratospheric expectations that OpenAI has been juicing for years. Promised as a game changer, GPT-5 might have indeed played the game better. But it was still the same game.

Skeptics seized on the moment to proclaim the end of the AI boom. Some even predicted the beginning of another AI Winter. “GPT-5 was the most hyped AI system of all time,” full-time bubble-popper Gary Marcus told me during his packed schedule of victory laps. “It was supposed to deliver two things, AGI and PhD-level cognition, and it didn’t deliver either of those.” What’s more, he says, the seemingly lackluster new model is proof that OpenAI’s ticket to AGI—massively scaling up data and chip sets to make its systems exponentially smarter—can no longer be punched. For once, Marcus’ views were echoed by a sizable portion of the AI community. In the days following launch, GPT-5 was looking like AI’s version of New Coke.

Sam Altman isn’t having it. A month after the launch he strolls into a conference room at the company’s newish headquarters in San Francisco’s Mission Bay neighborhood, eager to explain to me and my colleague Kylie Robison that GPT-5 is everything that he’d been touting, and that all is well in his epic quest for AGI. “The vibes were kind of bad at launch,” he admits. “But now they’re great.” Yes, great. It’s true the criticism has died down. Indeed, the company’s recent release of a mind-bending tool to generate impressive AI video slop has diverted the narrative from the disappointing GPT-5 debut. The message from Altman, though, is that naysayers are on the wrong side of history. The journey to AGI, he insists, is still on track.

Numbers Game

Critics might see GPT-5 as the waning end of an AI summer, but Altman and team argue that it cements AI technology as an indispensable tutor, a search-engine-killing information source, and, especially, a sophisticated collaborator for scientists and coders. Altman claims that users are beginning to see it his way. “GPT-5 is the first time where people are, ‘Holy fuck. It’s doing this important piece of physics.’ Or a biologist is saying, ‘Wow, it just really helped me figure this thing out,’” he says. “There’s something important happening that did not happen with any pre-GPT-5 model, which is the beginning of AI helping accelerate the rate of discovering new science.” (OpenAI hasn’t cited who those physicists or biologists are.)

So why the tepid initial reception? Altman and his team have sussed out several reasons. One, they say, is that since GPT-4 hit the streets, the company has delivered versions that were themselves transformational, particularly the sophisticated reasoning modes it added. “The jump from 4 to 5 was bigger than the jump from 3 to 4,” Altman says. “We just had a lot of stuff along the way.” OpenAI president Greg Brockman agrees: “I’m not shocked that many people had that [underwhelmed] reaction, because we’ve been showing our hand.”

OpenAI also says that since GPT-5 is optimized for specialized uses like doing science or coding, everyday users are taking a while to appreciate its virtues. “Most people are not physics researchers,” Altman observes. As Mark Chen, OpenAI’s head of research, explains it, unless you’re a math whiz yourself, you won’t care much that GPT-5 ranks among the top five Math Olympians, whereas last year the system ranked in the top 200.

As for the charge about how GPT-5 shows that scaling doesn’t work, OpenAI says that comes from a misunderstanding. Unlike previous models, GPT-5 didn’t get its major advances from a massively bigger dataset and tons more computation. The new model got its gains from reinforcement learning, a technique that relies on expert humans giving it feedback. Brockman says that OpenAI had developed its models to the point where they could produce their own data to power the reinforcement learning cycle. “When the model is dumb, all you want to do is train a bigger version of it,” he says. “When the model is smart, you want to sample from it. You want to train on its own data.”
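To make that distinction concrete, here is a minimal, purely illustrative sketch of a “sample from the model, train on its own data” loop, in the spirit of rejection-sampling self-training. It is not OpenAI’s pipeline: the toy model, the reward check, and every name below are invented for illustration.

import random

# Toy "model": answers the question "what is 2 + 2?", with a learnable bias toward 4.
def sample_answer(bias_toward_4):
    return 4 if random.random() < bias_toward_4 else random.randint(0, 9)

# Verifier / reward signal: 1.0 for a correct answer, 0.0 otherwise.
def reward(answer):
    return 1.0 if answer == 4 else 0.0

bias = 0.1  # the "dumb" starting model is rarely right
for step in range(5):
    samples = [sample_answer(bias) for _ in range(1000)]      # sample from the model
    good = [s for s in samples if reward(s) > 0]              # keep only high-reward outputs
    bias = min(1.0, bias + 0.15 * len(good) / len(samples))   # "train" on its own good data
    print(f"step {step}: kept {len(good)} of {len(samples)} samples, bias now {bias:.2f}")

Real systems apply reinforcement learning to full reasoning traces with far more sophisticated reward models, but the shape of the loop (generate, score, and reuse the best outputs as training signal) appears to be what Brockman is describing.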



Gaming Gear

Copyright Violations, Sam Altman Shoplifting, and More

by admin October 1, 2025



On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.

In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only, even if you just want to see the content, plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.

One shotted Sora 2 output 🤯 pic.twitter.com/vavyEo3CLJ

— Bryan Kim (@kirbyman01) September 30, 2025

Holy copyright infringement Batman, Sora 2 is kinda absurd yet fun , sorry @HIDEO_KOJIMA_EN

“Metal Gear Stranding Relivery” pic.twitter.com/bpzvBykxNi

— Michael Lucas Poage 🐝 (@RubyBrewsday) October 1, 2025

However, often it is quite clear on which movies Sora 2 has been trained on. pic.twitter.com/7pVFMj4l7x

— TestingCatalog News 🗞 (@testingcatalog) September 30, 2025

There do appear to be at least some types of content that are off-limits in OpenAI’s video generator. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyrighted material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.

That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a figure that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.

The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)

Sora just dropped Sam Altman into Rick & Morty.

We’ve officially crossed into the multiverse of AI slop, and I love it. pic.twitter.com/HwJdE9GF8V

— SamAlτcoin.eth 🇺🇸 (@SamAltcoin_eth) September 30, 2025

Other videos at least attempt to be a little creative about how they use copyrighted characters. Users have, for instance, thrown Ronald McDonald into an episode of “Love Island” and created a fake video game that teams up Tony Soprano from The Sopranos and Kirby from, well, Kirby.

Sora 2 – Not what I expected.

(5) Dumb / awesome examples.

Prompt:
Love Island reveal scene. A young woman sits on a plush villa sofa during a tense “Movie Night” scene. She watches a large TV screen showing grainy CCTV-style footage: Real-life Ronald McDonald, dashing into a… pic.twitter.com/vNg609MaIJ

— Rory Flynn (@Ror_Fly) October 1, 2025

the gabagool cannon.. absolute PEAK being made with Sora 2

prompt:

“Kirby teams up with The Soprano’s in an all new video game. Japanese TV advertisement” pic.twitter.com/bKyq20Sd3z

— HeavensLastAngel (@HvnsLstAngel) October 1, 2025

Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.

⚡ Got access to Sora 2.

“A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors.”

Not sure if I should call this overfitting 😋 pic.twitter.com/NPAyjZqtTV

— Javi Lopez ⛩️ (@javilopen) October 1, 2025

“People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”

There is one other potentially legally dubious genre of content that has become popular among Sora 2 users, too: the Sam Altman cinematic universe. OpenAI claims that users are not able to generate videos that use the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his OK (which makes sense: he’s the CEO, and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.

One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.

i have the most liked video on sora 2 right now, i will be enjoying this short moment while it lasts

cctv footage of sam stealing gpus at target for sora inference pic.twitter.com/B86qzUGlMq

— gabriel (@GabrielPeterss4) September 30, 2025

Sam Altman is playing 4D chess. Sora 2 is about to take over social media, the virality is guaranteed once this scales. Billions in ad revenue will flow straight into more compute, fueling the flywheel. In a year Sora 2 will be so efficient and cheap that margins explode. You… pic.twitter.com/cUbmePkwDG

— VraserX e/acc (@VraserX) October 1, 2025

I love Sora 2

Sam Altman must love this too pic.twitter.com/YexwVEoBKQ

— Adyseus (@Adyseku) October 1, 2025

Lmao, Sam Altman stealing art from Miyazaki in the Studio Ghibli HQ.

Sora 2 is wilddddddd. pic.twitter.com/qzhfMs0A2t

— PJ Ace (@PJaccetturo) October 1, 2025

There are also questions about the likenesses of things other than characters in these videos. In the video of Altman in Target, for instance, how does Target feel about its logo and the look of its stores being used? Another user inserted their own likeness into an NFL game, which pretty clearly uses the logos of the New York Giants, the Dallas Cowboys, and the NFL itself. Is that considered kosher?

The new Sora is amazing, albeit a bit scary for copyright holders @OpenAI pic.twitter.com/ESfzXbYr2z

— Rich Greenfield, LightShed 🔦 (@RichLightShed) September 30, 2025

OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.

A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also lets users see any video in which their likeness has been used, including videos that are not published, and they can revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata showing they are AI-generated and are watermarked with an indicator that they were created with Sora.
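For a concrete picture of the kind of permission model the spokesperson describes, here is a small hypothetical sketch; the type names and function below are invented for illustration and do not reflect any published Sora API.

from enum import Enum

class CameoVisibility(Enum):
    ONLY_ME = "only_me"      # just the user
    MUTUALS = "mutuals"      # mutual friends
    SELECTED = "selected"    # a hand-picked list of friends
    EVERYONE = "everyone"    # anyone, the setting Altman reportedly chose

def can_use_likeness(setting, requester, owner, mutuals=frozenset(), selected=frozenset()):
    """Hypothetical check: may `requester` drop `owner`'s cameo into a video?"""
    if requester == owner or setting is CameoVisibility.EVERYONE:
        return True
    if setting is CameoVisibility.MUTUALS:
        return requester in mutuals
    if setting is CameoVisibility.SELECTED:
        return requester in selected
    return False  # ONLY_ME, or anything unrecognized

print(can_use_likeness(CameoVisibility.EVERYONE, "random_user", "sama"))  # True
print(can_use_likeness(CameoVisibility.ONLY_ME, "random_user", "alice"))  # False

Revocation and takedown would simply be the owner flipping the setting or deleting the video on their side; the harder problems, as the next paragraph notes, start once a copy leaves the platform.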

There are, of course, some obvious gaps in those safeguards. The fact that a video can be deleted from Sora doesn’t mean an exported copy can be deleted. Likewise, the watermark could be cropped out. And most people aren’t checking the metadata of videos to verify authenticity. What the fallout of this looks like, we will have to see, but there will be fallout.





Gaming Gear

OpenAI’s new video generation tool Sora 2 is here, but don’t worry, Sam Altman says it will avoid the ‘degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed’

by admin October 1, 2025



Sora 2, the latest model of OpenAI’s text-to-video tech, has now launched alongside a dedicated app. Besides spitting out all of the soulless, AI-generated Studio Ghibli-style animation one could ever want, Sora 2 can now generate live action clips with both sound and a frankly scary level of visual accuracy.

Granted, not all of the clips OpenAI shares in its announcement are flawless, with its AI-generated snippet of a practicing martial artist featuring a warping bo staff and smooshed phalanges. Still, OpenAI is keen to highlight Sora 2’s gains in depicting consistent body mechanics that adhere to the rules of the physical world; the twirling body horror of gymnastics clips generated by earlier models may be a thing of the past.

The company also touts Sora 2’s ability to “directly inject elements of the real world” into its AI-generated clips. It elaborates, “For example, by observing a video of one of our teammates, the model can insert them into any Sora-generated environment with an accurate portrayal of appearance and voice. This capability is very general, and works for any human, animal or object.” If you’re so inclined to descend into the realm of deepfakes, the Sora app, powered by Sora 2, is available on the iOS store now.



OpenAI touts the app as not just a video generator but also a social environment.

“You can create, remix each other’s generations, discover new videos in a customizable Sora feed, and bring yourself or your friends in via cameos,” the company writes. “With cameos, you can drop yourself straight into any Sora scene with remarkable fidelity after a short one-time video-and-audio recording in the app to verify your identity and capture your likeness.”

One can see the whimsical appeal of sharing AI-generated clips of yourself riding ostriches and pulling off extremely dangerous stunts, but I also can’t ignore the risk posed by deepfakes. For one thing, US president Donald Trump shared an expletive-laden deepfake video on Truth Social literally the day before Sora 2’s launch (via Ars Technica).

The sombrero superimposed over Representative Hakeem Jeffries is hopefully a telltale sign for most viewers that the remarks Senator Chuck Schumer is depicted as saying in this clip (which was not created using Sora 2) are wholly fabricated. However, given that a Microsoft study suggests people can correctly identify AI-generated still images only around 62% of the time, it’s hard not to be concerned about deepfakes’ capacity for disinformation.


Videos generated with Sora 2 even feature a small AI watermark, like those introduced in Gemini’s ‘Nano Banana’ image-editing update. OpenAI say they are ‘launching responsibly,’ with in-app features designed to “maximize creation, not consumption,” and address “concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds.” But comments made by company CEO Sam Altman on his own blog seem to run counter to this stated feed philosophy.

“It is easy to imagine the degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed,” Altman first admits.

As such, he shares that the app has various “mitigations to prevent someone from misusing someone’s likeness in deepfakes, safeguards for disturbing or illegal content, periodic checks on how Sora is impacting users’ mood and wellbeing, and more.”



Altman even goes as far as to say that, if OpenAI cannot sufficiently address aspects of the app that lead to negative social outcomes, then the company would discontinue the service.

But Altman also caps off a longer passage about how the Sora feed aims to show users content they are interested in by writing, “And if you truly just want to doom scroll and be angry, then ok, we’ll help you with that.” To me, this reads not only as a shrugging off of responsibility but also as fairly nihilistic; for all OpenAI’s talk about the Sora app’s safety features, what can be done if its users still choose to gaze into the abyss?


I’d be remiss if I didn’t also mention the existential and labour concerns the launch of the Sora 2 model will no doubt inspire in my freelance creative friends. Altman writes on his blog, “Creativity could be about to go through a Cambrian explosion, and along with it, the quality of art and entertainment can drastically increase.” And I would like to suggest that he may be right, just not in the way he thinks.

While Altman wants OpenAI’s app to be at the forefront of a tidal wave of creativity, my personal hope is that audiences get sick of realistic, computer generated imagery as a result of Sora 2’s proliferation. My blue sky thinking—however naive it may be—is the hope that, in response to audiences seeking out visual art that could only ever be made by humans, practical effects and puppets make a comeback in a big way.

GameFi Guides

Why Elon Musk, Sam Altman and Other Billionaires Are Betting On Brain-Computer Interfaces

by admin September 28, 2025



In brief

  • Elon Musk’s Neuralink and the Sam Altman-backed Merge Labs are driving a new wave of billionaire-backed brain-computer interface ventures.
  • Current BCI progress remains medical, with only five Neuralink patients implanted as of September 2025.
  • Experts warn BCIs are far from “thought reading,” and billionaire ambitions risk overshadowing real therapeutic potential.

Elon Musk already has rockets, cars, AI, and humanoid robots. Musk’s rival Sam Altman runs OpenAI, the company behind the leading AI chatbot, ChatGPT. Now, both men and other billionaires want a piece of the human brain.

Their latest bets on brain-computer interfaces, or BCIs, reveal less about today’s medical breakthroughs and more about a looming contest over who owns the neural on-ramp to digital life. As founders and experts in the space told Decrypt, billionaire attention “elevates the whole industry” even as it distorts priorities.

For billionaires, brain-computer interfaces are not just medical devices—they represent the next potential platform shift, a way to control the gateway between human thought and digital systems.

Owning that interface could mean owning the future of computing. That is why some of the most powerful people in the world are pouring money into BCIs: They see them as a hedge against artificial intelligence, a new control point in the tech stack, and perhaps the ultimate frontier for profit and influence.

Musk and Altman make moves

Musk founded Neuralink in 2016 with the goal of merging with machines, which he claimed may be the only way to keep pace with artificial intelligence. The company recently raised a $650 million Series E, placing it among the best-funded players in the field. Neuralink’s first patient, Noland Arbaugh, has shown he can control a cursor and browse the internet by thought alone.

The results have been mixed, but five patients have now been implanted, with trials expanding to speech impairment and vision restoration. Musk keeps framing BCIs not just as medical devices but as a safeguard for humanity in an AI-dominated future.

Meanwhile, Altman has surfaced as a co-founder of Merge Labs, a new venture aiming to raise around $250 million at a valuation that could reach $850 million. Early reports suggest Merge may pursue non-invasive interfaces, a different path than Neuralink’s brain implants.

For Altman, who already commands one of the most powerful AI companies, the move signals that the next battle is not only about who builds the smartest models but who controls the pipeline that connects them to humans.

Other major bets

The circle extends beyond Musk and Altman. Prominent biohacker Bryan Johnson, who made his fortune in payments, poured $100 million into Kernel in 2016. Kernel develops neurotech platforms for measuring brain activity, positioning itself as an infrastructure play rather than a flashy implant company.

Neuralink’s investors also include Peter Thiel’s Founders Fund, evidence that Silicon Valley’s venture elite is preparing for the possibility that brain-computer links become the next foundational layer of computing.

“For me, their involvement is a good sign,” Tetiana Aleksandrova, CEO and co-founder of neurotechnology startup Subsense, told Decrypt. “When billionaires step into BCI, they bring visibility and capital that elevate the whole industry. Suddenly, more funds are planning to allocate resources to neurotechnology, more companies are founded, and more engineers discover that this is an exciting space worth dedicating their careers to.”



But Aleksandrova cautioned that billionaire involvement cuts both ways.

“Their funding can accelerate progress at a pace public funding rarely allows,” she explained. “At the same time, the pressure to deliver at startup speed can lead to unrealistic promises that put trust at risk. And in science, trust is just as critical as capital.”

Andreas Melhede, co-founder of neuroscience DAO Elata Bioscience, told Decrypt that while billionaire involvement accelerates interest and funding, it also narrows the agenda.

“The priorities tend to reflect the vision of a single individual or a gatekept corporate agenda, rather than the broader scientific community,” he said. “That means research often skews toward ‘moonshot’ projects designed to capture attention, rather than significant collaborative advances that actually move the field forward.”

Melhede agreed that billionaire rhetoric can both help and harm the industry, risking overshadowing important but less glamorous work. The bigger risk, he said, is centralization of power over something as important as human brains.

“If one company owns the infrastructure, code, and data, they own the keys to an individual’s thoughts and intentions,” he said. “This discourages transparency [and] slows independent validation and scientific progress. Access to BCI technology—and cognitive autonomy—is subject to the business decisions of a handful of high-profile figures. That is too much risk in too few hands.”

Speculation vs. reality

That tension defines the field. The billionaire pitch is sweeping—control the neural interface, control the future. But the present reality is narrower: coarse signals, fragile hardware, and systems that cannot “read thoughts” in the way public rhetoric sometimes suggests.

Still, such a breakthrough could occur “conceivably some day,” Gary Marcus, a cognitive scientist and professor emeritus of psychology and neural science at New York University, told Decrypt. “For now, we just don’t understand the neural code well enough. Of course, there are already interventions that make sense for people who are paralyzed and with few other options.”

Companies like Synchron and Inbrain continue pilot trials, with Inbrain’s graphene-based BCI platform receiving FDA Breakthrough Device designation. But these remain early-stage efforts, far from mass-market enhancement.

The stakes

The question is less whether brain-computer interfaces will work at scale, and more whose vision defines them. Musk frames BCIs as an existential safeguard. Altman positions them as strategic control points. Johnson and Thiel treat them as infrastructure bets.

For patients, the technology is about restoring lost abilities. For billionaires, it is about shaping the next human-machine platform—one where whoever owns the gateway may one day set the rules for how thought itself becomes data.

Gaming Gear

Sam Altman says ChatGPT will stop talking about suicide with teens

by admin September 17, 2025


On Tuesday, OpenAI CEO Sam Altman said that the company was attempting to balance privacy, freedom, and teen safety — principles that, he admitted, were in conflict. His blog post came hours before a Senate hearing focused on examining the harm of AI chatbots, held by the subcommittee on crime and counterterrorism and featuring some parents of children who died by suicide after talking to chatbots.

“We have to separate users who are under 18 from those who aren’t,” Altman wrote in the post, adding that the company is in the process of building an “age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID.”

Altman also said the company plans to apply different rules to teen users, including veering away from flirtatious talk or engaging in conversations about suicide or self-harm, “even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”

Altman’s comments come after the company shared plans earlier this month for parental controls within ChatGPT, including linking a teen’s account with a parent’s, disabling chat history and memory for a teen’s account, and sending notifications to a parent when ChatGPT flags the teen as being “in a moment of acute distress.” That earlier announcement came after a lawsuit by the family of Adam Raine, a teen who died by suicide after months of talking with ChatGPT.

ChatGPT spent “months coaching him toward suicide,” Matthew Raine, the father of the late Adam Raine, said on Tuesday during the hearing. He added, “As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

During the teen’s conversations with ChatGPT, Raine said that the chatbot mentioned suicide 1,275 times. Raine then addressed Altman directly, asking him to pull GPT-4o from the market until, or unless, the company can guarantee it’s safe. “On the very day that Adam died, Sam Altman … made their philosophy crystal-clear in a public talk,” Raine said, adding that Altman said the company should “‘deploy AI systems to the world and get feedback while the stakes are relatively low.’”

Three in four teens currently use AI companions, according to national polling by Common Sense Media, Robbie Torney, the organization’s senior director of AI programs, said during the hearing. He specifically mentioned Character AI and Meta.

“This is a public health crisis,” one mother, appearing under the name Jane Doe, said during her testimony about her child’s experience with Character AI. “This is a mental health war, and I really feel like we are losing.”



Gaming Gear

Tucker Carlson asks Sam Altman if an OpenAI employee was murdered ‘on your orders’

by admin September 13, 2025


Carlson: “…he was definitely murdered, I think… there were signs of a struggle, of course. The surveillance camera, the wires had been cut. He had just ordered take-out food, come back from a vacation with his friends on Catalina Island. No indication at all that he was suicidal. No note and no behavior. He had just spoken to a family member on the phone.

And then he’s found dead with blood in multiple rooms. So that’s impossible. Seems really obvious he was murdered. Have you talked to the authorities about it?”

Altman: “I have not talked to the authorities about it.”

Carlson: “Um, and his mother claims he was murdered on your orders.”

Altman: “Do you believe that?”

Carlson: “I- I’m, Well, I’m I’m asking.”

Altman: “I mean… you, you just said it, so do you, do you believe that?”

Carlson: “I think that it is, um, worth looking into. And I don’t… I mean, if a guy comes out and accuses your company of committing crimes, I have no idea if that’s true or not, of course. Um, and then he is found killed, and there are signs of a struggle. I… I don’t think it’s worth dismissing it… I don’t think we should say, well, he killed himself when there’s no evidence that the guy was depressed at all. Um, I think… and if he was your friend, I would think he would want to speak to his mom or…”

Altman: “I did offer, she didn’t want to.”

Carlson: “So, do you feel that, you know, when people look at that and they’re like, you know, it’s possible that happened. Do you feel that that reflects the worries they have about what’s happening here? Like people are afraid that this is like…”

Altman: “I haven’t done too many interviews where I’ve been accused of, like…”

Carlson: “Oh, I’m not accusing you at all. I’m just saying his, his mother says that.”



Product Reviews

Sam Altman Seeks Texts From Shivon Zilis, Elon Musk’s Employee/Mother of His Child

by admin September 12, 2025


Last year, Elon Musk sued his rival Sam Altman’s company, tech superstar OpenAI. In his lawsuit, Musk claimed that the company had violated federal racketeering laws because, having once promised to stay a nonprofit research lab, it had since converted itself into a for-profit company. Musk, who initially poured tens of millions of dollars into the startup, claims he was deceived. OpenAI and Altman have since countered that Musk also wanted OpenAI to become a for-profit venture. This week, the legal battle was ratcheted up a notch, as OpenAI sought to bring those close to Musk into the mix.

Business Insider writes that Altman has now asked a judge to order Shivon Zilis and Jared Birchall to turn over key correspondence as part of the legal case.

Zilis, an executive at Musk’s brain-computer interface startup Neuralink, has had four children with Musk via IVF. They first met back in 2016 when Zilis, who formerly worked for OpenAI, joined the company. Birchall, meanwhile, has often been described as the billionaire’s “right-hand man” and his “fixer,” and often seems to be tasked with critical behind-the-scenes affairs (like managing his money), while also dealing with the less savory aspects of managing Musk’s empire (like interfacing with other women that Musk has had children with).

Additionally, Birchall occupies several important executive roles at Musk-related orgs. Specifically, he runs Musk’s family office, Excession, directs the Musk Foundation, and is the CEO of Neuralink. The inclusion of the two Musk allies in the legal case is described thusly:

As part of his defense against Musk’s 2024 racketeering lawsuit, Altman wants a judge in California to order Birchall and Zilis to turn over key texts and emails in 72 hours. If either blows that deadline, they should be required to sit for one additional, preliminary deposition prior to their primary depositions in the case.

The OpenAI CEO’s legal team has noted that communications with Zilis, in particular, should have relevance to the case. BI reports that attorneys have argued, “She was a conduit between Musk and OpenAI’s co-founders on matters central to this case, including discussions about a potential 2017 restructuring that would have given Musk a large equity stake in OpenAI.”

Altman’s targeting of Zilis and Birchall, two people with close personal and professional ties to Musk, could indicate a broader escalation of the legal fight, as both sides seek to gain an advantage. “Birchall and Zilis should not be forced to sit for two depositions each,” Musk’s attorneys have argued. “If their texts and Gmails cannot be produced in time, their depositions should be rescheduled.” Gizmodo reached out to Neuralink, OpenAI, and Tesla for comment.

The lawsuit against OpenAI is the culmination of a long-running feud between the two billionaires. More recently, Musk sued OpenAI again (along with Apple), alleging that the two companies had colluded to exert anticompetitive control over the AI market.

The suit seeks “billions” in damages. “Apple and OpenAI’s exclusive arrangement has made ChatGPT the only generative AI chatbot integrated into the iPhone,” the suit says. “This means that if iPhone users want to use a generative AI chatbot for key tasks on their devices, they have no choice but to use ChatGPT, even if they would prefer to use more innovative and imaginative products like xAI’s Grok.” In the past, OpenAI has characterized the litigation as being “consistent with Mr. Musk’s ongoing pattern of harassment.” Musk also previously tried to buy OpenAI, although Altman turned him down.

Where did the feud between Musk and Altman start? God only knows, but one thing’s for sure: it shows no signs of simmering down. In the before times, Altman and Musk were chums and business partners, but that all imploded, and for the past several years, it’s been increasingly bad. Can it all be traced back to the fact that Musk was once a co-founder and board member of OpenAI but now, having acrimoniously fallen out with Altman, must be forced to watch it soar without him? All we really know for sure is that personal animosity has transmogrified into a nasty legal war that could ultimately hurt both men more than it helps anyone.



Gaming Gear

Sam Altman and UK government minister reportedly discussed giving ChatGPT Plus to all Brits for free

by admin August 24, 2025



OpenAI CEO Sam Altman and UK Technology Secretary Peter Kyle have discussed a deal that would see the UK’s entire population given premium access to ChatGPT, according to a Guardian report this weekend. However, the bill, which would have to be covered by the government, may have stymied any chance of the deal going official, with Guardian sources indicating ChatGPT Plus for every Brit would cost as much as £2 billion ($2.7 billion).

Government AI advocate

Kyle is a well-known AI advocate, with previous reports citing evidence that he has used the chatbot for advice and work-related questions. The minister has also characterized ChatGPT as great for unpicking complex topics and as a “very good tutor.”

Altman and Kyle have met a number of times this year. Kyle dined with Altman in March and April, according to the source. Then, in July, the minister signed an agreement with OpenAI. This memorandum of understanding (MoU) would open up the use of OpenAI services, like ChatGPT, in the UK’s public sector. Particular mention was given to education, defense, security, and justice departments. In exchange, OpenAI would have access to a range of government data.



Naturally, there remain concerns over the accuracy of ChatGPT (and other similar LLMs), as well as over privacy and security. We hope that these challenges and pitfalls were addressed in some way.

Prohibitive cost doesn’t add up

Bringing us up to date, sources speaking to The Guardian have shared information about talks between Altman and Kyle that took place recently in San Francisco.

While ChatGPT Plus costs $20 a month for an individual subscription, a deal covering the whole population (69.6 million by the most recent estimates) would presumably come with some kind of bulk discount. Multiplying the population by the $20 monthly fee gives a sum of around $1.4 billion, so the $2.7 billion ‘proposal’ seems vastly overpriced.
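For reference, here is a quick back-of-envelope check of the figures quoted above (the exchange rate of roughly $1.35 to the pound is an assumption for illustration):

population = 69_600_000       # most recent UK population estimate cited above
monthly_fee_usd = 20          # standard ChatGPT Plus subscription price
retail_total = population * monthly_fee_usd
print(f"Population x $20 fee: ${retail_total / 1e9:.2f} billion")      # ~$1.39 billion

gbp_to_usd = 1.35             # assumed exchange rate
reported_cost_usd = 2_000_000_000 * gbp_to_usd
print(f"Reported £2 billion:  ${reported_cost_usd / 1e9:.2f} billion")  # ~$2.70 billion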

That fact aside, the ChatGPT Plus for every Brit idea seems to have been a nonstarter, with sources saying “Kyle never really took the idea seriously,” writes The Guardian.


The UK is already a top-five nation for paid ChatGPT use, says the source. Some in government feel AI can help UK citizens unlock economic opportunities, and that sizable waves of prosperity can uplift the population as a whole, so a visionary government shouldn’t be put off by investments like this. However, the jury is still out on the usefulness of AI, and on whether it is a passing fad or a bubble.

To underline that no ‘ChatGPT Plus for every Brit deal’ is on the way, the Guardian report ends by relaying a statement from the UK’s science and technology department, which says no such proposal or deal has been taken forward.




Gaming Gear

‘Someone is going to lose a phenomenal amount of money’ says OpenAI CEO Sam Altman about unwise AI investment. ‘When bubbles happen, smart people get overexcited about a kernel of truth’

by admin August 18, 2025



OpenAI CEO Sam Altman spoke to assembled reporters at a dinner in San Francisco late last week on the topic of, you guessed it, AI, the applications of AI, and the vast sums of money moving behind the scenes to fund it. Despite being one of the most vocal advocates of the tech, Altman had some words of caution for investors jumping on the artificial intelligence train.

According to The Verge, Altman said it was “insane” that AI startups consisting of “three people and an idea” are receiving huge amounts of funding off the back of incredibly high company valuations, describing it as “not rational behaviour.”

“Someone is going to lose a phenomenal amount of money. We don’t know who, and a lot of people are going to make a phenomenal amount of money,” said Altman.



“When bubbles happen, smart people get overexcited about a kernel of truth. If you look at most of the bubbles in history, like the tech bubble, there was a real thing,” said Altman, referencing the infamous dot-com bubble of the late 1990s. “Tech was really important. The internet was a really big deal. People got overexcited.”

That being said, Altman stopped short of calling AI investment a bad idea for the economy in general: “My personal belief, although I may turn out to be wrong, is that, on the whole, this would be a huge net win.”

At the same dinner, Altman confirmed that OpenAI would still be spending vast amounts of money (partially provided, presumably, by the likes of Softbank and the Dragoneer Investment Group in OpenAI’s latest $8.3 billion funding round) to keep the company at the top of the AI financial leaderboards.

“You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman said. “You should expect a bunch of economists to wring their hands.”


Well, it certainly appears to cost a whole lot of moolah just to keep the good ship OpenAI afloat. The company has raised staggering sums of cash over the past decade to develop and run its various AI implementations, the most famous of which is ChatGPT. Reports last year indicated that OpenAI had spent $8.5 billion on LLM training and staffing for its generative AI efforts, while other analysts have predicted it costs $700,000 a day to run ChatGPT alone.

The Information recently projected that OpenAI would be burning through $20 billion in cash flow by 2027, with the company said to be hopeful that investors like Softbank would stump up another $30 to $40 billion to continue funding its operations.

A CG render of Meta’s planned Hyperion data center, superimposed over Manhattan. (Image credit: Meta)

Still, those spending figures don’t appear to be in the trillions yet, although that estimated sum is perhaps of little surprise to those of us who keep an eye on AI data center expansion.

Given that Altman’s rival, Elon Musk, has been booting up and expanding xAI’s Colossus supercomputer with incredible speed, and with the news that Meta is expanding its data center operations at such a rate it’s currently having to house a significant portion of its racks in nearby tents, OpenAI will feel the need to keep up—and to do that it needs to spend (and raise) huge amounts of cash over the next few years.

One would assume that Altman is confident enough in his company’s efforts to place its investors on the “going to make phenomenal sums of money” side of things, but his comments should perhaps serve as a warning to those looking to jump in with both feet without correctly judging the landing. Someone has to lose in the great AI race, I suppose. And as to which companies survive, and which come to a sticky end? That remains very much an open question for now.

