Noti Group
Tuesday, March 17, 2026

‘Clinical-grade AI’: a new buzzy AI word that means absolutely nothing

in Technology
Robert Hart

Earlier this month, Lyra Health announced a “clinical-grade” AI chatbot to help users with “challenges” like burnout, sleep disruptions, and stress. There are eighteen mentions of “clinical” in its press release, including “clinically designed,” “clinically rigorous,” and “clinical training.” For most people, myself included, “clinical” suggests “medical.” The problem is, it doesn’t mean medical. In fact, “clinical-grade” doesn’t mean anything at all.

“Clinical-grade” is an example of marketing puffery designed to borrow authority from medicine without the strings of accountability or regulation. It sits alongside other buzzy marketing phrases: “medical-grade” or “pharmaceutical-grade” for things like steel, silicone, and supplements, implying quality; “prescription-strength” or “doctor-formulated” for creams and ointments, denoting potency; and “hypoallergenic” or “non-comedogenic,” suggesting outcomes — lower chances of allergic reactions and of clogged pores, respectively — for which there are no standard definitions or testing procedures.

Lyra executives have confirmed as much, telling Stat News that they don’t think FDA regulation applies to their product. The medical language in the press release — which calls the chatbot “a clinically designed conversational AI guide” and “the first clinical-grade AI experience for mental health care” — is only there to help it stand out from competitors and to show how much care they took in developing it, they claim.

Lyra pitches its AI tool as an add-on to the mental healthcare already provided by its human staff, like therapists and physicians, letting users get round-the-clock support between sessions. According to Stat, the chatbot can draw on previous clinical conversations, surface resources like relaxation exercises, and even use unspecified therapeutic techniques.

The description raises the obvious question: what does “clinical-grade” even mean here? Despite leaning heavily on the term, Lyra doesn’t explicitly say. The company did not respond to noti.group’s requests for comment or for a specific definition of “clinical-grade AI.”

“There’s no specific regulatory meaning to the term ‘clinical-grade AI,’” says George Horvath, a physician and law professor at UC Law San Francisco. “I have not found any sort of FDA document that mentions that term. It’s certainly not in any statutes. It’s not in regulations.”

As with other buzzy marketing terms, it seems like it’s something the company coined or co-opted themselves. “It’s pretty clearly a term that’s coming out of industry,” Horvath says. “It doesn’t look to me as though there’s any single meaning … Every company probably has its own definition for what they mean by that.”

Though “the term alone has little meaning,” Vaile Wright, a licensed psychologist and senior director of the American Psychological Association’s office of healthcare innovation, says it’s obvious why Lyra would want to lean on it. “I think this is a term that’s been coined by some of these companies as a marker of differentiation in a very crowded market, while also very intentionally not falling under the purview of the Food and Drug Administration.” The FDA oversees the quality, safety, and effectiveness of an array of food and medical products, like drugs and implants. Some mental health apps do fall under its remit, and to secure approval, their developers must meet rigorous standards for safety, security, and efficacy through steps like clinical trials that prove they do what they claim to do, and do so safely.

The FDA route is expensive and time-consuming for developers, Wright says, making this kind of “fuzzy language” a useful way of standing out from the crowd. It’s a challenge for consumers, she says, but it is allowed. The FDA’s regulatory pathway “was not developed for innovative technologies,” she says, which makes some of the marketing language jarring. “You don’t really see it in mental health,” Wright says. “There’s nobody going around saying clinical-grade cognitive behavioral therapy, right? That’s just not how we talk about it.”

Aside from the FDA, the Federal Trade Commission, whose mission includes protecting consumers from unfair or deceptive marketing, can decide something has become too fuzzy and is misleading the public. FTC chairman Andrew Ferguson announced an inquiry into AI chatbots earlier this year, focused on their effects on minors, while maintaining a priority of “ensuring that the United States maintains its role as a global leader in this new and exciting industry.” Neither the FDA nor the FTC responded to noti.group’s requests for comment.

While companies “absolutely are wanting to have their cake and eat it,” Stephen Gilbert, a professor of medical device regulatory science at the Dresden University of Technology in Germany, says regulators should simplify their requirements and make enforcement clearer. If companies can make these kinds of claims legally (or get away with doing so illegally), they will, he says.

The fuzziness isn’t unique to AI — or to mental health, which has its own parade of scientific-sounding “wellness” products promising rigor without regulation. The linguistic fuzz is spread across consumer culture like mold on bread. “Clinically-tested” cosmetics, “immune-boosting” drinks, and vitamins that promise the world all live inside a regulatory gray zone that lets companies make broad, scientific-sounding claims that don’t necessarily hold up to scrutiny. It can be a fine line to tread, but it’s legal. AI tools are simply inheriting this linguistic sleight of hand.

Companies word things carefully to keep their apps out of the FDA’s line of fire and to claim a degree of legal immunity. It shows up not just in marketing copy but in the fine print, if you manage to read it. Most AI wellness tools stress, somewhere on their sites or buried inside terms and conditions, that they are not substitutes for professional care and aren’t intended to diagnose or treat illness. Legally, this stops them from being classed as medical devices, even though growing evidence suggests people are using them for therapy and can access the tools with no clinical oversight.

Ash, a consumer therapy app from Slingshot AI, is explicitly and vaguely marketed for “emotional health,” while Headspace, a competitor of Lyra’s in the employer-health space, touts its “AI companion” Ebb as “your mind’s new best friend.” All emphasize their status as wellness products rather than therapeutic tools that might qualify them as medical devices. Even general-purpose bots like ChatGPT carry similar caveats, explicitly disavowing any formal medical use. The message is consistent: talk and act like therapy, but say it’s not.

Regulators are starting to pay attention. The FDA is scheduled to convene an advisory group to discuss AI-enabled mental health medical devices on November 6th, though it’s unclear whether this will go ahead given the government shutdown.

Lyra might be playing a risky game with their “clinical-grade AI,” however. “I think they’re going to come really close to a line for diagnosing, treating, and all else that would kick them into the definition of a medical device,” Horvath says.

Gilbert, meanwhile, thinks AI companies should call it what it is. “It’s meaningless to talk about ‘clinical-grade’ in the same space as trying to pretend not to provide a clinical tool,” he says.


[Notigroup Newsroom in collaboration with other media outlets, with information from the following sources]

Tags: AI, health, Report, science, Tech

© 2025 - noti.group - All rights reserved - noti.group runs on 100% green energy.
