AI in healthcare
May 1, 2026
8 min read

The FDA's 2026 general wellness update and what it means for health monitoring

The FDA’s 2026 update expands what qualifies as general wellness, but applying it correctly is where most teams struggle. This webinar recap breaks down the key changes, common compliance pitfalls, and how product, marketing, and AI decisions shape regulatory risk in digital health.

The FDA’s 2026 update to its General Wellness guidance provides additional clarity on how certain low-risk health technologies are evaluated. It expands the safe harbor for certain physiological monitoring tools, refines the rules around user notifications, and signals how the agency is thinking about a new wave of AI-driven health products.

But reading the guidance carefully and applying it correctly are two different things. And some of the most common mistakes companies make have nothing to do with their technology.

How the FDA actually evaluates your product

Before getting into what changed, it helps to understand the framework the FDA uses to evaluate digital health products in the first place.

Rebecca Gwilt, who advises digital health and virtual care companies on regulatory strategy, described three guidance documents that together define how the FDA will approach any health software or hardware product in 2026. First is the Device Software Functions Guidance, the top-level framework that determines whether a software function meets the broad legal definition of a medical device. Second is the General Wellness Guidance, updated this year. Third is the Clinical Decision Support Guidance, which governs software that supports clinical decision-making by healthcare providers.

The practical stakes are significant. If your product meets the definition of a medical device and doesn't qualify for the general wellness exemption or enforcement discretion, you're looking at a 510(k) clearance process – roughly 18 months of work and potentially millions of dollars. The wellness exemption exists to give genuinely low-risk products a faster, lighter path to market.

There are two main mechanisms for staying outside full device regulation. The first is a formal exemption: your software technically meets the definition of a device, but if it qualifies as general wellness or non-device CDS, the FDA essentially decides it won't treat it as one. The second is enforcement discretion – where the product meets the device definition but is low enough risk that the agency declines to enforce, even though it could. As Rebecca noted: "It's a different legal mechanism, but a similar commercial outcome."

What's new in the 2026 guidance

The 2026 update doesn't rewrite the rules from scratch, but it meaningfully expands what qualifies for the general wellness exemption. Three changes stand out.

The first is an expanded safe harbor for physiological parameter monitoring. The guidance now specifically names optical sensing – the kind of technology used to extract heart rate, blood oxygen, and other signals from video or light sensors – as potentially qualifying for the wellness exemption. Products producing clinical-type values like SpO2 and blood pressure trends are explicitly addressed. This is directly relevant to Shen AI, whose camera-based health monitoring platform generates exactly these kinds of signals.

The second change involves notifications. Under the updated guidance, products that alert users when a reading falls outside a specified range can, under certain conditions, still fall within general wellness. This gives consumer-facing health products considerably more room to be genuinely useful without triggering device classification.

The third thing to note is what didn't change: there is still a meaningful gray area. Rebecca was clear-eyed about this:

"Because the interpretation of the Cures Act has been less than clear and comprehensive, even after this guidance, it is the case that there is still some gray area out there. I don't think that's a bad thing necessarily."  

What it does mean is that interpretation will continue to be shaped by enforcement actions and litigation over time, as we're already seeing with companies like Whoop.

The line is crossed with words, not code

Here is what most teams get wrong, and it's worth stating plainly: the majority of companies that end up in regulatory trouble don't get there because of what their product does. They get there because of what they say it does. As Anna Szopa points out:

"The biggest compliance challenges usually come from claims and user experience. Most companies cross the line with words and not with code." 

For the FDA, the central question is not whether your product is measuring something physiological. It's what you tell users that measurement means. A wellness product can display values and trends. It can add lifestyle context around sleep, activity, or recovery. But the moment it presents itself as screening for a condition, diagnosing something, monitoring a disease, or guiding treatment, it has moved into medical device territory, regardless of whether a single line of code has changed.

The classic example: a smartwatch displaying blood pressure trends is potentially fine as a wellness product, provided the values are validated and the product doesn't suggest any connection to hypertension. As Rebecca explained, "Blood pressure is fine, but if you say that this has something to do with hypertension — not fine." The same reading, the same algorithm, framed differently in the UI or on the marketing site, becomes a different regulatory category.

Anna added a sobering illustration of just how quickly this can happen:

"One landing page message like 'detect hypertension' can change the intended use and change the product from wellness to medical device."

Disclaimers don't fix everything

Many teams believe that adding a disclaimer – something like "this is not medical advice" or "not intended to diagnose" – is sufficient protection. It isn't. "A common misconception is that a disclaimer fixes everything. But the FDA says that overall labeling, UI, and marketing matter," says Anna Szopa.

If the rest of your product experience reads as clinical – if your notifications sound like diagnostic alerts, if your landing page positions the product as a way to detect or manage a health condition, if your UI uses clinical-grade language – a disclaimer at the bottom doesn't offset that. The FDA evaluates intended use based on everything a reasonable user would encounter, taken together.

The AI problem nobody has fully solved

Generative AI introduces a layer of complexity that the current guidance doesn't cleanly address – and that every digital health team building with AI needs to think carefully about.

Rebecca raised what might be called regulatory drift: the risk that a product launches with every wellness signal correctly aligned, but as its AI improves and personalizes, it begins generating outputs that are functionally clinical advice, even if the language in the UI hasn't changed. Rebecca Gwilt shares:

"AI models tend toward precision. The better the model is at identifying what's happening with the person, the more tempting it is for the product team to say, 'We have this amazing insight — we can produce this output for this patient.' The problem is, the more precise you get, the more bespoke the output, the more clinical it seems. There's not really a framework for that right now." 

The honest answer is that regulation cannot keep pace with how quickly these models are developing. The responsibility falls on the companies building these products to actively monitor for drift and to build guardrails in from the start.

Rebecca's practical advice: build the intended use as a hard constraint into your AI systems. "Put the intended use in as a gating mechanism, add that to your context or put it in a markdown file or whatever, and run all of your code up against it and ask the system to alert you when something has drifted outside of that intended use."
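Rebecca's gating idea can be sketched as a simple automated check that runs AI-generated copy against the intended use before it reaches a user. Everything below – the intended-use text, the term list, and the function names – is an illustrative assumption for this sketch, not an actual Shen AI, FDA, or webinar artifact; a real implementation would use a reviewed claims lexicon (or an LLM-based classifier) rather than a hand-written word set.

```python
# Minimal sketch of an intended-use "gating mechanism" for AI-generated copy.
# The intended-use statement and term list are hypothetical examples.

INTENDED_USE = (
    "Provide general wellness insights about heart rate, sleep, and "
    "activity trends for self-awareness and lifestyle purposes."
)

# Words that tend to signal clinical, device-like claims (illustrative list).
CLINICAL_TERMS = {
    "diagnose", "diagnosis", "hypertension", "disease",
    "treat", "treatment", "screen", "screening",
}

def check_output(text: str) -> list[str]:
    """Return the clinical terms found in text; an empty list means it passed."""
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return sorted(words & CLINICAL_TERMS)

def gate(text: str) -> str:
    """Raise if a generated output drifts outside the intended use."""
    flagged = check_output(text)
    if flagged:
        raise ValueError(
            f"Output drifted outside intended use; flagged terms: {flagged}"
        )
    return text

# A wellness-framed message passes; a clinical-sounding one is blocked.
gate("Your resting heart rate trend improved this week.")
try:
    gate("We noticed possible hypertension. Consider treatment.")
except ValueError as err:
    print(err)
```

The point of the sketch is the shape of the control, not the word list: the intended use lives in one place, and every output is evaluated against it automatically, so drift surfaces as an alert rather than a surprise.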

What enforcement actually looks like

For companies that find themselves on the wrong side of the line, the practical risk profile matters. Rebecca walked through what enforcement typically looks like:

"Realistically, it starts with a warning letter. That's public. Goes into the FDA database. It surfaces in investor due diligence. Gets noticed by your partners, gets noticed by other regulators. It's not 'we're gonna come to shut your company down,' but it's not nothing."

For companies that have done a genuine, documented analysis of their regulatory position and respond cooperatively, the path forward is manageable. Her core advice: document your reasoning. Make sure everyone who touches the product – engineering, product, marketing, legal – knows the intended use and could explain it if asked. Companies can also, in some cases, legitimately disagree with the FDA's interpretation and choose to litigate, but as Rebecca noted, "It's about risk appetite as well."

Getting product, legal, and marketing aligned

One of the most practically useful parts of the webinar was about organizational structure – specifically, how the three teams most responsible for regulatory exposure (product, legal, and marketing) tend to operate in silos, and what to do about it.

Rebecca observed that in early-stage companies especially, marketing is often treated as a sales function and rarely interacts with legal. "It is really important for these companies for marketing, product, and legal to be speaking to each other. That can look like a committee, that can look like a Slack channel, that can look like whatever."

The intended use document should be the shared anchor. Anna described how Shen AI approaches this: "We use one shared approval language list and one shared UX checklist — because when the team knows the safety barrier, we can move faster inside it."

A well-defined intended use isn't just a compliance tool – it's a creative constraint that enables teams to build more boldly, because everyone knows where the edges are. Rebecca added a useful counterpoint: the goal isn't to make everyone in your organization a regulatory expert, because that would strangle creativity. The goal is a collaborative structure where visionary ideas are shaped by compliance thinking, not stopped by it.

Building for multiple markets

For companies serving a global customer base, the picture gets more complicated. In Europe, for example, there is no direct equivalent to the general wellness exemption. Products are classified either as medical devices under the MDR, or as general consumer products, a significantly higher bar for anything involving health metrics.

Anna described Shen AI's own approach:

"We create different products for different markets. In the European market, we are currently going through MDR certification. We are also working on products dedicated to the US market – a general wellness product, and separately, a medical device with a different intended use."

A common strategy in the US, Rebecca noted, is the parallel track approach: ship a wellness product under the exemption to build market share and brand, while simultaneously developing a more clinical product that will eventually go through formal FDA review. "It gives you a runway to get to market while you're doing something more significant in the background."

The questions every team should be asking

Anna’s takeaway was direct: start with a claims and UX review. Write the intended use in plain English. Track every screen and notification against it. Keep marketing aligned.

Before launching or expanding, work through these questions:

  • Is your intended use clearly written in plain, non-clinical language?
    Does it focus on self-awareness, lifestyle, and trends without referencing diagnosis, treatment, or disease?

  • Have you reviewed every user-facing touchpoint against that intended use?
    Not just the product UI, but your website, app store listing, and marketing materials.

  • If you display numerical health values (blood pressure, SpO2, HRV), are they appropriately validated?
    Clinical-style numbers without supporting validation introduce material regulatory risk.

  • If you use AI to generate outputs, could a reasonable user interpret them as diagnostic or treatment advice?
    Even without explicit clinical language.

  • Have you documented your regulatory reasoning?
    A written, good-faith analysis strengthens credibility with regulators, partners, and investors.
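The "review every user-facing touchpoint" step lends itself to automation in the spirit of Anna's shared approval language list. The sketch below is a hypothetical pre-launch sweep – the touchpoint names, copy strings, and banned phrases are invented for illustration and are not a regulatory checklist:

```python
# Hypothetical pre-launch copy audit: map each user-facing touchpoint to the
# banned claim phrases it contains. All names and phrases are illustrative.

BANNED_CLAIMS = {"diagnose", "detect hypertension", "monitor disease", "screen for"}

TOUCHPOINTS = {
    "landing_page_hero": "Track your heart rate and recovery trends.",
    "push_notification": "Your reading is outside your usual range.",
    "app_store_listing": "Detect hypertension early with your camera.",
}

def audit(touchpoints: dict[str, str]) -> dict[str, list[str]]:
    """Return, per touchpoint, the banned phrases found (empty list = clean)."""
    return {
        name: sorted(p for p in BANNED_CLAIMS if p in copy.lower())
        for name, copy in touchpoints.items()
    }

for name, hits in audit(TOUCHPOINTS).items():
    print(f"{name}: {'FLAGGED ' + str(hits) if hits else 'ok'}")
```

Run in CI against exported UI strings and marketing copy, a check like this turns the shared language list into an enforced gate rather than a document people are asked to remember.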

The 2026 guidance gives digital health companies more room than they had before. The teams that benefit most will be the ones who understand the edges of that room clearly enough to build confidently inside them, and who have the structures in place to make sure they don't drift past them.
