Earlier this month, Lyra Health announced a “clinical-grade” AI chatbot to help users with “challenges” like burnout, sleep disruption, and stress. There are eighteen mentions of “clinical” in its press release, including “clinically designed,” “clinically rigorous,” and “clinically trained.” To most people, myself included, “clinical” sounds medical. The problem is that it doesn’t mean treatment. In fact, “clinical-grade” doesn’t mean anything at all.
“Clinical-grade” is marketing puffery designed to borrow medical authority without the accountability or regulation that comes with it. It sits alongside other familiar phrases: “medical-grade” or “pharmaceutical-grade” for things like steel, silicone, and supplements, implying quality; “prescription-strength” or “doctor-formulated” for creams and ointments, implying potency; and “hypoallergenic” and “noncomedogenic,” suggesting results (fewer allergic reactions and no clogged pores, respectively) for which there are no standard definitions or testing requirements.
Lyra executives have confirmed as much, telling Stat News that they do not believe FDA regulation applies to their product. The medical language in the press release, which calls the chatbot a “clinically designed conversational AI guide” and “the first clinical-grade AI experience for mental health care,” is simply meant to help it stand out from competitors and show how much care went into developing it, they say.
Lyra offers its AI tools as an add-on to the mental health care already provided by its human staff, such as therapists and physicians, giving users round-the-clock support between sessions. According to Stat, the chatbot can also surface resources such as past clinical conversations, relaxation exercises, and even unspecified therapeutic techniques.
The description raises an obvious question: what does “clinical-grade” mean here? Despite leaning heavily on the term, Lyra never says explicitly. The company did not respond to The Verge’s requests for comment or for a specific definition of “clinical-grade AI.”
“The term ‘clinical-grade AI’ has no specific regulatory meaning,” says George Horvath, a physician and law professor at UC Law San Francisco. “I can’t find any FDA document that mentions that term. It’s certainly not in any statute. It’s not in the regulations.”
Like other buzzy marketing terms, it seems like it’s something the company itself coined or co-opted. “It’s clearly a term that’s coming out of the industry,” Horvath says. “I don’t feel like there’s one single meaning…probably every company has their own definition of what they mean by it.”
Although “the word alone means nothing,” says Vaile Wright, a licensed psychologist and senior director of the American Psychological Association’s Office of Health Care Innovation, it’s clear why Lyra would want to lean on it. “I think it’s a term that has been coined by some of these companies as a marker of differentiation in a very crowded market, while intentionally not falling within the scope of the Food and Drug Administration.” The FDA oversees the quality, safety, and effectiveness of a range of food and medical products, such as drugs and implants. Some mental health apps do fall within its scope, and to secure approval, developers must meet rigorous standards for safety, security, and efficacy, including clinical trials that prove their products do what they claim to do and do it safely.
Wright says the FDA route is expensive and time-consuming for developers, which makes this kind of “vague language” a useful way to stand out from the crowd. It’s a challenge for consumers, but it’s allowed, she says. She notes that the FDA’s regulatory pathway “was not developed for innovative technologies,” but she still finds some of the marketing language troubling. “You don’t really see it in mental health,” Wright says. “Nobody’s talking about clinical-grade cognitive behavioral therapy, right? We don’t talk about it like that.”
In addition to the FDA, the Federal Trade Commission, whose mission includes protecting consumers from unfair or deceptive marketing, could decide that a claim is too vague and misleads the public. FTC Chairman Andrew Ferguson announced an investigation into AI chatbots earlier this year focused on their effects on minors, though the announcement also stressed the priority of “ensuring that the United States maintains its role as a global leader in this new and exciting industry.” Neither the FDA nor the FTC responded to The Verge’s requests for comment.
Stephen Gilbert, professor of medical device regulatory science at the Dresden University of Technology in Germany, says that while companies “certainly want to have their cake and eat it,” the onus is on regulators to simplify their requirements and clarify enforcement. If companies can legally make such claims, or get away with making them illegally, they will, he says.
The ambiguity is not unique to AI, or to mental health, which has its own parade of scientific-sounding “wellness” products promising rigor without regulation. Linguistic fuzziness spreads through consumer culture like mold on bread. “Clinically tested” cosmetics, “immunity-boosting” drinks, and vitamins that promise the world all live inside a regulatory gray zone that lets companies make sweeping, scientific-sounding claims that don’t necessarily hold up to scrutiny. It may be a fine line to walk, but it’s legal. AI tools are simply inheriting this linguistic sleight of hand.
Careful wording keeps apps out of the FDA’s purview and gives companies some legal cover. It shows up not only in marketing copy but also in the fine print, if you manage to read it. Most AI wellness tools stress, somewhere on their sites or buried in their terms and conditions, that they are not a substitute for professional care and are not intended to diagnose or treat disease. Legally, this helps keep them from being classified as medical devices, even as mounting evidence suggests that people use them for treatment and can access them without any clinical oversight.
Ash, a consumer therapy app from Slingshot AI, is explicitly marketed for “emotional health,” while Headspace, a competitor to Lyra in the employer-health space, describes its “AI companion” Ebb as “your brain’s new best friend.” All emphasize their status as wellness products rather than treatments, which could otherwise qualify them as medical devices. Even general-purpose bots like ChatGPT carry similar warnings, explicitly disclaiming any formal medical use. The message is consistent: talk and act like therapy, but say it isn’t.
Regulators have started to pay attention. The FDA is scheduled to convene an advisory group on Nov. 6 to discuss AI-enabled mental health therapy tools, though it’s unclear whether the meeting will move forward given the government shutdown.
However, Lyra may be playing a risky game with its “clinical-grade AI.” “I think they’re really getting close to a line of diagnosis, treatment, and all the other things that will bring them into the definition of a medical device,” Horvath says.
Meanwhile, Gilbert believes AI companies should call it what it is. “Talking about ‘clinical-grade’ is just as meaningless as trying to pretend we don’t have clinical equipment available,” he says.

