Tag: ChatGPT

Why do lawyers keep using ChatGPT?

by admin June 2, 2025


Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues that many lawyers are using AI tools without incident, and that the ones who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.

In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said.

But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In another recent case, a California judge presiding over a lawsuit against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (That said, the cases do at least typically exist.)

Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI-generated text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are; then, “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real.

“You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”

Kolodin said he uses both ChatGPT Pro’s “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, the LexisNexis tool has a higher hallucination rate than ChatGPT, whose rate he says has “gone down substantially over the past year.”

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools.

Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”





DeepSeek Claims Upgraded Model Approaching ChatGPT, Gemini

by admin May 29, 2025



DeepSeek, a China-based artificial intelligence company, has announced an upgrade to its AI chatbot, saying the model now offers improved overall reasoning, mathematics, and programming performance, along with a reduced hallucination rate.

According to DeepSeek, the upgraded model — DeepSeek-R1-0528 — has “significantly improved its depth of reasoning and inference capabilities.” The startup said the model’s overall performance is now “approaching that of leading models, such as O3 and Gemini 2.5 Pro.”

(Figure: Performance comparison of language models across six benchmarks. Source: DeepSeek)

DeepSeek’s debut of its R1 chatbot in January sent shockwaves through the AI industry and further established China as an AI force. The company’s first AI model had a training cost of $6 million and similar performance to leading AI models trained on significantly larger sums of capital.

According to data from Business of Apps, DeepSeek has been downloaded 75 million times since its launch and had 38 million monthly active users (MAU) as of April. In a recent antitrust lawsuit, Google estimated that Gemini reached 350 million active users in March, while OpenAI’s ChatGPT claimed 600 million active users in the same month.


US-China AI race heats up

The United States government is planning to restrict the sale of advanced chip design software to China. According to a Bloomberg report, the move seeks to limit China’s ability to advance its domestic semiconductor manufacturing capabilities.

Semiconductors are critical for a wide range of technologies, including AI, where they serve as the hardware backbone for training and running complex models.

New Chinese AI models, such as Tencent’s T1 and Alibaba’s Qwen3, have also emerged in the first few months of 2025, further spurring the AI race.




Let’s Talk About ChatGPT and Cheating in the Classroom

by admin May 24, 2025


Michael Calore: That’s pretty good. Katie?

Katie Drummond: My recommendation is very specific and very strange. It is a 2003 film called What a Girl Wants, starring Amanda Bynes and Colin Firth.

Michael Calore: Wow.

Katie Drummond: I watched this movie in high school, where I was cheating on my math exams. Sorry. For some reason, just the memory of me cheating on my high school math exams makes me laugh, and then I rewatched it with my daughter this weekend, and it’s so bad and so ludicrous and just so fabulous. Colin Firth is a babe. Amanda Bynes is amazing, and I wish her the best. And it’s a very fun, stupid movie if you want to just disconnect your brain and learn about the story of a 17-year-old girl who goes to the United Kingdom to meet the father she never knew.

Michael Calore: Wow.

Lauren Goode: Wow.

Katie Drummond: Thank you. It’s really good.

Lauren Goode: I can’t decide if you’re saying it’s good or it’s terrible.

Katie Drummond: It’s both. You know what I mean?

Lauren Goode: It’s some combination of both.

Katie Drummond: It’s so bad. She falls in love with a bad boy with a motorcycle but a heart of gold who also happens to sing in the band that plays in UK Parliament, so he just happens to be around all the time. He has spiky hair. Remember 2003? All the guys had gelled, spiky hair.

Lauren Goode: Yes, I still remember that. Early 2000s movies, boy, did they not age well.

Katie Drummond: This one, though, aged like a fine wine.

Michael Calore: That’s great.

Katie Drummond: It’s excellent.

Lauren Goode: It’s great.

Katie Drummond: Mike, what do you recommend?

Lauren Goode: Yeah.

Michael Calore: Can I go the exact opposite?

Katie Drummond: Please, someone. Yeah.

Michael Calore: I’m going to go literary.

Katie Drummond: OK.

Michael Calore: And I’m going to recommend a novel that I read recently that just shook me to my core. It’s by Elena Ferrante, and it is called The Days of Abandonment. It’s a novel written in Italian, translated into English and many other languages, by the great pseudonymous novelist, Elena Ferrante. And it is about a woman who wakes up one day and finds out that her husband is leaving her and she doesn’t know why and she doesn’t know where he’s going or who he’s going with, but he just disappears from her life and she goes through it. She accidentally locks herself in her apartment. She has two children that she is now all of a sudden trying to take care of, but somehow neglecting because she’s-

Katie Drummond: This is terrible.

Michael Calore: But the way that it’s written is really good. It is a really heavy book. It’s rough, it’s really rough subject-matter-wise, but the writing is just incredible, and it’s not a long book, so you don’t have to sit and suffer with her for a great deal of time. I won’t spoil anything, but I will say that there is some resolution in it. It’s not a straight trip down to hell. It is, really, just a lovely observation of how human beings process grief and how human beings deal with crises, and I really loved it.



Is She Really Mad at Me? Maybe ChatGPT Knows

by admin May 19, 2025


Kate’s real-life therapist is not a fan of her ChatGPT use. “She’s like, ‘Kate, promise me you’ll never do that again. The last thing that you need is more tools to analyze at your fingertips. What you need is to sit with your discomfort, feel it, recognize why you feel it.’”

A spokesperson for OpenAI, Taya Christianson, told WIRED that ChatGPT is designed to be a factual, neutral, and safety-minded general-purpose tool. It is not, Christianson said, a substitute for working with a mental health professional. Christianson directed WIRED to a blog post citing a collaboration between the company and MIT Media Lab to study “how AI use that involves emotional engagement—what we call affective use—can impact users’ well-being.”

For Kate, ChatGPT is a sounding board without any needs, schedule, obligations, or problems of its own. She has good friends, and a sister she’s close with, but it’s not the same. “If I were texting them the amount of times I was prompting ChatGPT, I’d blow up their phone,” she says. “It wouldn’t really be fair. I don’t need to feel shame around blowing up ChatGPT with my asks, my emotional needs.”


Andrew, a 36-year-old man living in Seattle, has increasingly turned to ChatGPT for personal needs after a tough chapter with his family. While he doesn’t treat his ChatGPT use “like a dirty secret,” he’s also not especially forthcoming about it. “I haven’t had a lot of success finding a therapist that I mesh with,” he says. “And not that ChatGPT by any stretch is a true replacement for a therapist, but to be perfectly honest, sometimes you just need someone to talk to about something sitting right on the front of your brain.”

Andrew had previously used ChatGPT for mundane tasks like meal planning or book summaries. The day before Valentine’s Day, his then girlfriend broke up with him via text message. At first, he wasn’t completely sure he’d been dumped. “I think between us there was just always kind of a disconnect in the way we communicated,” he says. The text “didn’t actually say, ‘Hey, I’m breaking up with you’ in any clear way.”

Puzzled, he plugged the message into ChatGPT. “I was just like, hey, did she break up with me? Can you help me understand what’s going on?” ChatGPT didn’t offer much clarity. “I guess it was maybe validating, because it was just as confused as I was.”

Andrew has group chats with close friends that he would typically turn to in order to talk through his problems, but he didn’t want to burden them. “Maybe they don’t need to hear Andrew’s whining about his crappy dating life,” he says. “I’m kind of using this as a way to kick the tires on the conversation before I really kind of get ready to go out and ask my friends about a certain situation.”


In addition to the emotional and social complexities of working out problems via AI, the level of intimate information some users are feeding to ChatGPT raises serious privacy concerns. Should chats ever be leaked, or if people’s data is used in an unethical way, it’s more than just passwords or emails on the line.

“I have honestly thought about it,” Kate says, when asked why she trusts the service with private details of her life. “Oh my God, if someone just saw my prompt history—you could draw crazy assumptions around who you are, what you worry about, or whatever else.”


