AI in Healthcare
September 23, 2025
5 min read

Inside how Shen AI works, with CTO Przemek Jaworski

Shen AI transforms any camera into a health companion, delivering 30+ metrics in 30 seconds. CTO Przemek Jaworski explains how multimodal sensing, clinical validation, and on-device processing make this breakthrough accurate, private, and accessible.

In an era where personal health is becoming increasingly digital, the smartphone is evolving from a communication tool into a powerful medical companion. Shen AI is at the forefront of this transformation, with technology that can analyze a short video of your face to read over 30 health metrics in just 30 seconds.

To better understand the science, engineering, and vision behind this innovation, we sat down with Przemek Jaworski, CTO and co-founder of Shen AI. A seasoned computer scientist and entrepreneur, Przemek has devoted recent years of his career to computational physiology, aiming to make advanced health insights simple, accessible, and clinically reliable.

Przemek, thank you for joining us. To begin, could you provide an overview of Shen AI’s core technology and its mission?

At its heart, Shen AI is about empowering individuals to take control of their health in a non-invasive and accessible way. We have developed a software development kit that transforms any device with a camera - a smartphone, tablet, or even a smart mirror - into a tool for health monitoring. Our mission is to propel healthcare's shift from a reactive, treatment-based model to one that is proactive and predictive. By simplifying access to real-time health data, we can empower people to track their well-being and help providers deliver more personalized and effective care.

Let’s get specific. How is Shen AI able to detect blood pressure and other vital signs from just a face scan?

We chose the face because it's a perfect combination of biology and behavior. From a biological standpoint, the face has a dense network of blood vessels located very close to the skin's surface. These vessels subtly change color with each heartbeat as blood is pumped through them. Though imperceptible to the naked eye, these tiny, fleeting signals can be picked up by a camera. From a behavioral perspective, the face is where people naturally hold their phones for selfies, video calls, or face unlock. This familiarity means we’re not asking users to adopt a new, cumbersome behavior; we are simply adding a new purpose to an action they already take every day.

The technology detects signals that the human eye can't see. Can you elaborate on the specific signals you are analyzing?

Our technology combines two streams of information to create a reliable health picture. The first is remote photoplethysmography, which analyzes the subtle color changes in the skin caused by blood flow. The camera sensor captures this information frame by frame, and our algorithms process it to preserve key data like the wave morphology and time shifts. The second is remote ballistocardiography, which detects the micro-movements of the head. These motions are caused by the mechanical activity of the heart—specifically, the ejection of blood. So, the same event, a single heartbeat, is observed in two distinct ways.
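
For readers who want a concrete picture of those two streams, here is a minimal Python sketch of the general technique, not Shen AI's pipeline. It assumes you already have cropped face-region frames and a tracked head landmark, and it extracts a classic rPPG trace from the mean green-channel intensity and an rBCG trace from head micro-movements, each filtered to the plausible heart-rate band.

```python
# Illustrative sketch of the two signal streams -- not Shen AI's implementation.
# Assumptions: `frames` is a sequence of HxWx3 RGB face-region arrays sampled at `fps`,
# and `head_y` holds the vertical position of a tracked facial landmark per frame.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fps, low=0.7, high=4.0, order=3):
    # Keep only frequencies plausible for a human heart rate (~42-240 bpm).
    b, a = butter(order, [low, high], btype="band", fs=fps)
    return filtfilt(b, a, signal)

def rppg_trace(frames, fps):
    # Remote photoplethysmography: average green-channel intensity per frame.
    green = np.array([frame[..., 1].mean() for frame in frames])
    return bandpass(green - green.mean(), fps)

def rbcg_trace(head_y, fps):
    # Remote ballistocardiography: tiny head displacements driven by each heartbeat.
    y = np.asarray(head_y, dtype=float)
    return bandpass(y - y.mean(), fps)
```

Production systems add more robust region selection, color-space projections, and learned models on top, but the raw ingredients are exactly these two traces of the same heartbeat.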

You refer to this as a "multimodal" approach. Why is this so critical for the technology’s reliability in the real world?

Multimodality is our secret to reliability. I often describe it as having two pairs of eyes looking at the same thing. This creates a powerful redundancy. If one signal is degraded—for example, due to poor lighting, a user's motion, or noise—the other stream of information can compensate. This dual-signal processing significantly improves accuracy, ensuring that the technology works consistently for everyone, everywhere. It’s how we address the challenge of creating a universal solution that works for all skin tones and most lighting conditions.
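
A simple way to picture that redundancy in code is to estimate heart rate independently from each trace, score each estimate with a crude signal-quality index, and let the cleaner stream dominate the result. The weighting scheme and quality index below are our assumptions for illustration, not Shen AI's method.

```python
# Illustrative dual-signal fusion sketch -- assumed quality index, not Shen AI's method.
import numpy as np

def hr_and_quality(trace, fps):
    # Estimate heart rate from the dominant spectral peak; use the share of in-band
    # power concentrated at that peak as a rough signal-quality score.
    spectrum = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # plausible heart-rate band
    peak = np.argmax(spectrum * band)
    quality = spectrum[peak] / (spectrum[band].sum() + 1e-9)
    return freqs[peak] * 60.0, quality

def fuse_heart_rate(rppg, rbcg, fps):
    # Quality-weighted average: if one stream is noisy, the other dominates.
    hr1, q1 = hr_and_quality(rppg, fps)
    hr2, q2 = hr_and_quality(rbcg, fps)
    return (q1 * hr1 + q2 * hr2) / (q1 + q2 + 1e-9)
```

When one stream is badly degraded, its quality score collapses toward zero and the fused result effectively falls back to the other modality.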

Cutting the scan time from 60 to 30 seconds was a major leap. How did you achieve it?

Our earlier versions required a 60-second scan, which was useful but not always practical. In 60 seconds, people can get distracted, phones can shake, and noise can interrupt the process. The breakthrough came from fully integrating our multimodal approach. By combining the optical color signals with the micro-movement data, we essentially doubled the information collected per scan. This allowed us to extract the most critical data points in half the time. By having both modalities working together, we were able to cut the scan time from 60 seconds to 30 seconds and, remarkably, improve performance at the same time.
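
The statistical intuition is worth spelling out: if each modality independently produces an unbiased estimate with roughly the same noise, averaging the two halves the error variance, which is what lets a shorter window reach the same confidence. The toy simulation below illustrates that back-of-the-envelope reasoning; it is not Shen AI's published analysis.

```python
# Toy illustration (our assumption): fusing two independent, equally noisy estimates
# halves the error variance -- the intuition behind shorter scans at equal accuracy.
import numpy as np

rng = np.random.default_rng(0)
true_hr = 72.0
noise = 3.0                                      # assumed per-modality error in bpm
est_a = true_hr + rng.normal(0, noise, 10_000)   # e.g. color-signal-only estimates
est_b = true_hr + rng.normal(0, noise, 10_000)   # e.g. micro-movement-only estimates

print(np.var(est_a))                 # ~9.0  (single modality)
print(np.var((est_a + est_b) / 2))   # ~4.5  (fused: variance roughly halved)
```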

How reliable is it? Can you share specific accuracy metrics or validation results?

Science is at the core of what we do. We believe that if you want a user to trust a health product, it has to be built on a foundation of rigorous scientific methods and principles. Every single metric is validated against certified, gold-standard medical equipment. For heart rate and heart rate variability, we use an ECG. For blood pressure, we use auscultatory methods. These are the benchmarks for clinical accuracy. We follow reproducible, peer-reviewed protocols, and we have undergone extensive validation with independent researchers. This disciplined approach is how we ensure that our software provides data that is both reliable and actionable.
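
To show what agreement with gold-standard equipment typically means in numbers, here is an illustrative summary in Python: mean absolute error plus Bland-Altman bias and limits of agreement between camera-based readings and a reference device. The function and example data are ours, shown only to make the methodology concrete; they are not Shen AI's protocols or results.

```python
# Illustrative agreement math only -- made-up numbers, not Shen AI's validation results.
import numpy as np

def agreement_summary(device, reference):
    device, reference = np.asarray(device, float), np.asarray(reference, float)
    diff = device - reference
    mae = np.mean(np.abs(diff))                 # mean absolute error
    bias = diff.mean()                          # systematic offset
    half_width = 1.96 * diff.std(ddof=1)        # 95% limits-of-agreement half-width
    return {"mae": mae, "bias": bias, "loa": (bias - half_width, bias + half_width)}

# Example with fabricated paired heart-rate readings (bpm): camera vs ECG-derived.
print(agreement_summary([71, 65, 80, 90], [72, 64, 82, 88]))
```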

Accessibility also raises the question of bias. Is reducing bias a critical concern for Shen AI?

Bias isn’t just a technical flaw in an algorithm; it's a significant health risk. If a technology doesn't perform equally well for people of all skin tones, ages, and backgrounds, it fails its mission of providing accessible healthcare. To combat this, our models are trained on a truly diverse dataset, with over half a million face videos spanning a wide range of demographics and environments. We are committed to ensuring that our system performs reliably for every kind of person, which is why our multimodal approach is so important—it is designed to adapt to real-world diversity.
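
One standard way to turn that commitment into an engineering check is to compute the same error metric per subgroup and flag any group that drifts from the overall figure. The sketch below is a generic fairness-audit pattern with made-up data and an assumed tolerance, not Shen AI's internal evaluation code.

```python
# Generic per-subgroup error audit (illustrative; assumed data and tolerance).
import numpy as np
from collections import defaultdict

def error_by_group(errors, groups, tolerance_bpm=1.0):
    per_group = defaultdict(list)
    for err, group in zip(errors, groups):
        per_group[group].append(abs(err))
    overall = np.mean([abs(e) for e in errors])
    report = {}
    for group, errs in per_group.items():
        mae = float(np.mean(errs))
        # Flag groups whose error drifts notably from the overall mean absolute error.
        report[group] = {"mae": mae, "flag": abs(mae - overall) > tolerance_bpm}
    return report

# Example with fabricated errors (bpm), grouped by Fitzpatrick skin-type bucket.
print(error_by_group([0.5, -1.2, 0.8, 2.5, -0.3, 3.1],
                     ["I-II", "III-IV", "I-II", "V-VI", "III-IV", "V-VI"]))
```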

Privacy is a huge concern in digital health. Can you walk us through what happens to user data during a scan?

This is something we designed for from day one. All processing happens locally on the user’s device. The video never leaves the phone—it isn’t sent to a server or stored in the cloud. What the user gets is the output: their health metrics, generated in real time. This way, we ensure complete privacy, no dependency on network connections, and instant results.
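
Architecturally, on-device processing means the entire frames-in, metrics-out loop runs inside the app's own process. The sketch below uses hypothetical names and is not the actual Shen AI SDK interface; the point it illustrates is that raw video stays in device memory and only derived metrics are handed back, with no upload step anywhere in the flow.

```python
# Conceptual sketch of an on-device pipeline (hypothetical names, not the Shen AI SDK).
from dataclasses import dataclass

@dataclass
class ScanResult:
    heart_rate_bpm: float
    hrv_ms: float
    # ...further metrics would follow in a real system

class OnDeviceScanner:
    def __init__(self, model):
        self.model = model                      # weights bundled with the app, loaded locally

    def process(self, frames) -> ScanResult:
        features = self.model.extract(frames)   # runs on the device's CPU/GPU/NPU
        del frames                              # raw video is discarded after use
        return ScanResult(*self.model.predict(features))  # only metrics leave this method
```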

Let’s talk about real-world usage. What conditions does Shen AI need to work properly?

Like any camera-based technology, there are a few basics. The user’s face should be visible and reasonably well-lit. But thanks to our multimodal approach, we’re able to handle much more noise than single-signal systems. If the lighting is uneven or the user moves slightly, the second modality compensates. That’s how we ensure reliability across skin tones, environments, and user behavior. Essentially, if you can take a selfie, Shen AI can run a scan.
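
In practice, "if you can take a selfie" translates into a lightweight preflight check before measurement starts. The snippet below shows one plausible version; the thresholds are our assumptions, not Shen AI's actual criteria.

```python
# Illustrative preflight check with assumed thresholds -- not Shen AI's actual criteria.
import numpy as np

def scan_conditions_ok(face_roi_frames, min_brightness=60, max_motion=5.0):
    # face_roi_frames: list of HxWx3 uint8 arrays cropped to the detected face.
    brightness = np.mean([f.mean() for f in face_roi_frames])
    # Mean frame-to-frame pixel difference as a crude motion proxy.
    motion = np.mean([np.abs(a.astype(float) - b.astype(float)).mean()
                      for a, b in zip(face_roi_frames, face_roi_frames[1:])])
    return brightness >= min_brightness and motion <= max_motion
```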

Looking ahead, what is the vision for Shen AI? How do you see this technology changing healthcare?

We believe that healthcare should be proactive, not just reactive. Our vision is a future where every individual can monitor their vital signs effortlessly, turning a simple, familiar behavior into a powerful opportunity for health insight. By giving users clinically tested information, we can help people take charge of their well-being, facilitate more informed conversations with their healthcare providers, and ultimately contribute to a healthier, more conscious, and connected global community. What started as an experiment in computational physiology is now a proven technology, putting the power of agency back in the user's hands.
