Thu, 19 Feb 2026

Hi friend,

Investors want evidence. Evidence that you can actually improve health outcomes. But what exactly are they looking for and how do you make sure you have it when they come looking?

That’s what we cover in today’s Hemingway Guide.

Guides are our new, expert-led series where we share practical playbooks on the biggest challenges in mental health innovation. Today’s expert is Jennifer Huberty, PhD.

Jen has over twenty years of experience in both academic research and commercial digital health. She was Head of Science at Calm and has advised several of the world’s leading digital health businesses. She knows how to do great science that advances both commercial and impact goals. She also knows what investors (and payers) want to see, and how to make it happen.

So, over to Jen…


What Investors Mean When They Ask for Validated Outcomes

by Jennifer Huberty, PhD

You get a meeting with an investor you’re excited about. You present your metrics showing how many active users you have, what your retention rate is, and how many sessions users complete per week. The investor likes it, but then asks about your validated outcomes. What do you do then?

In Seed rounds you are rewarded for traction, but in Series A and beyond you have to show proof of impact. Most founders don’t start their company thinking about how they will measure that impact, but when they reach Series A, they find themselves scrambling for answers. The purpose of this founder guide is to explain what “validated outcomes” actually means and how to prepare before investors ask.

Section 1: When Engagement Numbers Stop Working

Engagement metrics and validated outcomes are not the same thing. Engagement metrics are important, yes, but on their own they are no longer sufficient. Founders navigating Series A conversations and beyond need to understand that difference, and why investors are now looking for much more.

Why This Shift Happened

Four forces converged to change what investors expect:

First, investor evaluation has matured. Tighter capital markets mean investors are more selective about where their capital goes. As the market has evolved, investors have clearer standards for what predicts durable growth. Early traction still matters, but it is no longer enough to signal long-term value.

Second, digital health is crowded and investors have learned that engagement metrics alone leave gaps. They've funded companies with impressive usage numbers that couldn't convert those numbers into enterprise revenue or sustainable business models. 

Third, health systems and payers have been burned. They've implemented engagement-first tools that didn't deliver measurable health improvements.

Fourth, reimbursement pathways require clinical evidence. If your business model depends on getting paid by health plans or qualifying for CPT codes, engagement data won't get you there. Period.

What Changed Between Seed and Series A

At different funding stages, investors ask different questions.

Seed question: "Do people want this?" (Engagement signals demand, not efficacy. UX research helps validate the need). 

Series A question: "Does this create measurable health value?" (Requires outcome data)

Why does this matter? Engagement is compelling, but as you grow you need to be able to show impact. Enterprise buyers, health systems, and payers need proof that your product works, not just that people use it. Investors funding your Series A are betting you can access those revenue channels. Without validated outcomes, you can't. This is especially true for companies targeting enterprise, payer, or clinical channels. A pure consumer model may face a lower evidence bar at Series A, but the requirement catches up as you scale. Either way, the earlier you start building toward validated outcomes, the stronger your position.


🔔 Join our live Session with Jen 🔔

Jen is joining us for a live Hemingway Session on Feb 26th. We’ll discuss how to use science to create defensible value in digital health and the top questions Jen hears from health leaders. So if you have specific questions for Jen or want to go deeper on this topic, make sure to sign up.

Please note that spots at Hemingway Sessions are reserved for Hemingway Pro members. So if you would like to join, you can learn more here.


When Founders Realize They Need Different Data

This realization tends to show up during a few predictable moments for founders:

  • Your first enterprise sales call where the buyer asks for clinical evidence, and you realize your engagement dashboard doesn't answer their questions.

  • A health system partnership opportunity that requires published research or outcome data before they'll even pilot your solution.

  • An investor meeting where someone asks: "What percentage of users show clinically significant improvement?" and you genuinely don't know how to answer because you haven't been measuring clinical significance.

  • Or losing a deal to a competitor who can point to clearer or more credible evidence. 

These are all moments when you realize your engagement metrics are not enough.

Why Engagement Alone Doesn't Answer These Questions

Engagement measures use, not health impact. And use does not always correlate with health impact. Some studies even suggest you can experience health benefits with what the industry considers "low" usage. For example, in a mixed-methods study, university students were asked to participate in a 7-day mindfulness course with a meditation app. Those who engaged intermittently (3-5 out of 7 days) had comparable effect sizes to those who participated daily (1). In other cases, evidence suggests benefits may only appear when doses are “just right”, as frequent engagement with mental health apps may lead to fatigue (2).

Satisfaction scores also don't demonstrate clinical impact. Net Promoter Score (NPS) reflects whether someone would recommend a product, but it is not a health outcome. I know companies that have confused these, and it creates real problems in enterprise conversations.

A user can love your app, use it frequently, recommend it to friends, and still not experience measurable health improvement. Or conversely, they might use it sporadically but achieve significant clinical gains. Engagement metrics can't distinguish between these scenarios.

Section 2: What Investors Actually Mean by "Validated Outcomes"

This terminology consistently confuses founders, so let’s get some clear definitions.

When investors say "validated outcomes," they mean health improvements measured using established instruments. Think PHQ-9 for depression, GAD-7 for anxiety, or blood pressure for cardiovascular health. These are published, peer-reviewed measures with known reliability and validity.

When investors say "clinical evidence," they mean systematic pre-post measurement showing change. Actual data demonstrating that health status improved from baseline to follow-up.

When they say "proof it works," they mean data that convinces skeptical enterprise buyers. Because investors know those buyers hold the keys to your revenue growth.

Investors aren't asking for health impact data out of scientific curiosity. They're evaluating whether you can expand beyond individual consumer revenue into enterprise contracts. Health systems, employers, and payers pay significantly more than individuals, and validated outcomes are the evidence that unlocks this B2B revenue.

An Example of What Counts (vs. What Doesn't):

Counts: "PHQ-9 scores decreased 5.2 points (clinically significant) over 8 weeks among 300 users who completed both baseline and follow-up assessments."

Doesn't count: "Users report feeling 35% better on our in-app wellness check-in"

Why do established measures carry this weight? Four reasons:

Recognized: Enterprise buyers, health systems, investors, and researchers all recognize these measures. You don't need to defend your choice or explain your measurement approach. Everyone knows what a 5-point reduction in PHQ-9 means.

Interpretable: Comparable to competitors and research literature. Using validated instruments lets stakeholders compare your outcomes to published benchmarks, standard care, and competitor data. This context is crucial for demonstrating meaningful impact. 

Defensible: These instruments have been tested extensively and published in peer-reviewed literature. Their measurement properties are known and documented.

Established reliability and validity: Established measures are the gold standard, with known measurement properties and wide recognition. Investigator-developed measures (tools created by a study or product team for a specific purpose) can also work for investor conversations, especially when presented with scientific rigor and transparency.

A Note on Investigator-Developed Measures

When your intervention addresses something existing measures don't capture well, you may use an investigator-developed measure. While these are not as well recognized as established measures, they aren't automatically disqualifying. I've seen companies successfully use them in Series A conversations. For example: "Self-reported stress decreased significantly (p<0.001) on our investigator-developed 10-item stress assessment, which we're preparing for publication and validation."

What matters is that these measures are designed with intention, grounded in a clear understanding of what you’re trying to learn, and informed by basic scientific principles rather than guesswork. There also needs to be clear ownership and rigor behind the measure.

In practice, this often means involving a scientist or research lead in the design, review, and interpretation of the measure, even if you are not yet publishing. It also means asking questions in ways that allow the data to be interpretable and useful for decisions later, rather than collecting information that won’t inform future decisions.  

When presenting investigator-developed measures, how you frame them is also important:

  • Be transparent about your science/research plans and how you will be evaluating what you are doing as you grow. 

  • Show the rigor behind your development (why the questions were chosen, what they are meant to capture, and how they connect to decisions, not just made-up questions)

  • Supplement with established measures where possible

  • Frame it strategically: "We measure X with PHQ-9, and we've also developed a measure for Y, which existing instruments don't capture"

Investigator-developed measures can absolutely count, especially early on or when they are the best available option. The strategic advantage of established instruments is comparability and immediate recognition. But for early-stage companies,  data showing meaningful change matters more than measurement pedigree alone.

Section 3: Building Your Measurement Strategy

Founders consistently underestimate how long it takes to build outcome measurement. 

If you're planning to raise a Series A in 12 months, your measurement strategy should start now. Not in six months. Not "once we hit product-market fit." Now.

Users need to engage long enough for health changes to manifest. You need adequate sample sizes. Technical integration requires development cycles. By the time you're preparing your Series A deck, you need existing data to present, not a plan to collect future data.

Working Backward from Funding

The founders who show up to Series A conversations with compelling outcome data started building their measurement infrastructure 12 to 18 months earlier, as they were building their tool. They didn't wait until they "needed" the data. They recognized that evidence compounds over time. The earlier you start, the more you unlock later.

This requires thinking strategically about your stakeholders before investors ask for data.

Strategic Questions to Answer First

Before choosing measures or building infrastructure, answer these questions:

  • Who are your stakeholders for the next 18 months? Not just investors. Enterprise buyers, health system partners, potential acquirers, payers. Each has different evidence expectations.

  • What outcomes matter to them? Cost reduction? Productivity gains? Clinical improvement? Different stakeholders care about different endpoints.

  • What validated measures capture those outcomes? Match your measurement approach to stakeholder priorities, not just what feels interesting to measure.

  • Can you maintain engagement long enough to measure meaningful change? Or can you demonstrate how quickly health impacts become evident with your intervention?

This last question matters more than founders realize. If your solution requires 12 weeks to show clinical impact, but your 8-week retention is 30%, you have a measurement problem before you have a data problem. (You probably also have a product problem, but that's a different conversation.)

How to Prioritize

Don't try to measure everything. Start with 1 to 2 primary outcomes, not a comprehensive assessment battery.

Choose measures aligned with your value proposition. If you're a mental health solution, PHQ-9 and GAD-7 are standard. The measures should connect logically to what your product does. 

When possible, consider whether one of your measures can be a point of differentiation.

This is not about measuring more. It’s about intentionally choosing one outcome that captures what your product does differently or better than alternatives, while still meeting stakeholder expectations.

This might be an outcome others don’t measure well, or a dimension of change that closely reflects how your solution uniquely creates value. One well-chosen measure here can help your story stand out in investor and enterprise conversations, not just satisfy validation requirements.

For example, two mental health solutions may both report improvements in depression or anxiety. However, one may also measure an outcome like emotional regulation capacity or perceived control during stress, because that dimension closely reflects how the product works. While the standard measures establish credibility, the differentiating outcome helps clarify what the product uniquely does compared to competitors, strengthening interpretation in investor and enterprise conversations.

It’s also important to select measures that are feasible within your user journey. A 200-item assessment battery sounds thorough, but if completion rates are 15%, you won't have usable data. Balance comprehensiveness with pragmatic completion rates.

Finally, consider what enterprise buyers expect to see. Talk to potential customers about what data influences their purchasing decisions. Build measurement around their requirements, not academic ideals.

Implementation Realities Founders Underestimate

There are four operational challenges that consistently surprise founders when it comes to capturing outcome data:

  • Assessment burden on users. Completion rates drop dramatically with lengthy surveys. Even clinically validated instruments can reduce engagement if implementation feels cumbersome. Design your measurement touchpoints carefully.

  • Technical integration. Building assessment infrastructure takes development time. APIs, data storage, consent workflows, results reporting. This isn't trivial engineering work. Budget accordingly, and do it early. 

  • Retention requirements. Pre-post measurement requires users to complete both baseline and follow-up assessments. If someone completes baseline but churns before follow-up, you can't include them in outcome analysis. This affects sample sizes more than founders anticipate. (Though retention doesn't have to be perfect to see health impacts, more on this shortly.)

  • Privacy and consent requirements. Health outcome data has different regulatory requirements than engagement metrics. HIPAA considerations, informed consent protocols, data security standards. These aren't optional. Budget time for compliance infrastructure.

What Credible Evidence Looks Like for Series A

  • Pre-post data showing improvement. Baseline and follow-up assessments that demonstrate change in established outcomes over time.

  • Systematic measurement using validated instruments. Not one-off surveys. Consistent data collection using established measures, from the start.

  • Honest presentation of limitations. These define what your evidence can and cannot support. Investors respect transparency about sample sizes, dropout rates, and methodological constraints.

  • Clear plan for expanding evidence as you scale. Show investors how measurement evolves from retrospective analysis to prospective studies to potentially published research.

While early evidence is often generated internally, engaging an external scientific advisor or independent contributor at this stage helps reinforce rigor, address potential conflicts of interest, and strengthen investor confidence.

Note: Published research can be a significant credibility accelerator in Series A conversations. While it’s not always required at this stage, investors view it as a strong signal of scientific rigor, execution capability, and long-term defensibility. At minimum, they expect credible outcome data paired with a clear, intentional roadmap for how evidence will deepen and strengthen over time. 

What if You're Not Ready for Full Outcome Studies

If you're earlier in your measurement journey, the goal is not perfection. It’s to begin generating outcome data in a way that is structured, defensible, and appropriate for your stage. 

Here are pragmatic starting points:

  • Conduct a retrospective analysis of existing data. If you've been collecting any validated measures (even inconsistently), analyze what you have. Preliminary data beats no data.

  • Start collecting validated measures now, analyze later. Even if you're not ready to present outcomes today, begin systematic collection immediately. Future-you will be grateful for historical data.

  • Develop research partnerships. Academic collaborations or partnerships with clinical organizations can add methodological rigor and credibility  when they align with your stage and strategy. These partnerships sometimes introduce formal research oversight, which should be navigated with appropriate scientific guidance. When oversight is required, a commercial IRB is typically the appropriate path. We’ll address this in more detail in a future brief.  

The key is to start deliberately. Outcome measurement doesn't have to be perfect initially, but it needs to exist. Too many founders wait for ideal conditions that never materialize, only to realize later they’ve delayed evidence generation for longer than intended.

Section 4: Preparing for Investor Questions

Investors evaluating digital health companies now ask specific questions about outcomes, meaning you need to have specific answers ready.

Questions Investors Ask:

"What percentage of users showed clinically significant improvement?"
This requires knowing clinical significance thresholds for your chosen measures. For PHQ-9, a 5-point reduction is clinically meaningful. For GAD-7, it's 4 points. You need to calculate what proportion of your users achieved these thresholds, not just report average changes.
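If you're already capturing baseline and follow-up scores, this proportion is straightforward to compute. Here's a minimal sketch in Python, assuming a pandas table with hypothetical `phq9_baseline` and `phq9_followup` columns (swap in your own schema):

```python
# Sketch: proportion of completers achieving a clinically significant
# PHQ-9 reduction (>= 5 points). Column names are hypothetical.
import pandas as pd

def pct_clinically_improved(df: pd.DataFrame, threshold: int = 5) -> float:
    """Return the % of users whose PHQ-9 dropped by >= `threshold` points.

    Only users with both baseline and follow-up scores count as
    completers; everyone else is excluded from the denominator
    (report them separately as dropouts)."""
    completers = df.dropna(subset=["phq9_baseline", "phq9_followup"])
    improved = (completers["phq9_baseline"] - completers["phq9_followup"]) >= threshold
    return 100 * improved.mean()

scores = pd.DataFrame({
    "phq9_baseline": [14, 9, 18, 11, None, 16],
    "phq9_followup": [8, 7, 10, 10, 6, None],
})
print(f"{pct_clinically_improved(scores):.0f}% clinically improved")
```

The same pattern works for GAD-7 with a 4-point threshold. Reporting the proportion above the threshold, rather than only the average change, is exactly what this investor question is probing for.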

"How does this compare to standard care?"
Investors want context. Is your 5-point PHQ-9 reduction better than therapy? Better than medication? Comparable to published benchmarks? If you don't provide context, they'll assume your outcomes are insignificant.

"What was your dropout rate? How did completers differ from non-completers?"
This addresses selection bias. If only your most engaged users completed follow-up assessments, your outcomes might not generalize to typical users. Investors recognize this limitation and want to know you do too.
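A quick way to sanity-check this yourself, before an investor does, is to compare baseline severity for completers versus dropouts. A minimal sketch, again with hypothetical column names:

```python
# Sketch: a simple selection-bias check comparing baseline severity
# for users who completed follow-up vs. those who dropped out.
import pandas as pd

scores = pd.DataFrame({
    "phq9_baseline": [14, 9, 18, 11, 20, 16],
    "phq9_followup": [8, 7, 10, 10, None, None],  # None = dropped out
})

scores["completed"] = scores["phq9_followup"].notna()

# Headline dropout rate, plus mean baseline score by completion status.
dropout_rate = 100 * (1 - scores["completed"].mean())
baseline_by_group = scores.groupby("completed")["phq9_baseline"].mean()

print(f"Dropout rate: {dropout_rate:.0f}%")
print(baseline_by_group)
```

If the two groups' baseline means differ meaningfully (say, dropouts started more severe), that gap is the selection bias investors are asking about, and naming it yourself reads as rigor, not weakness.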

“How do your outcomes compare to your competitors?”
If your competition has published data, you should be aware of it. If not, this is a great opportunity to differentiate by highlighting the evidence behind your own solution.

How to Be Transparent

It's completely acceptable to say: "We're six months into our first systematic outcome study. Here's what we're seeing so far, and here are the limitations of our current data." It's also acceptable to say: "We built a science roadmap for the next 18 months. We're currently here [describe your stage], and here's what we're generating next."

What doesn't work is overselling preliminary data or claiming certainty you don't have. Investors respect honesty and strategic planning more than inflated claims. They've seen enough companies overpromise on outcomes to be skeptical of anything that sounds too good.

What to Have Ready

Before investor meetings make sure you have these materials prepared:

  • One-pager on your measurement approach and timeline. Include what you're measuring, why you chose these instruments, your current sample size, and your roadmap for expanding evidence. If you're working with recognized scientists or clinicians, include their credentials. This adds credibility.

  • Data summary showing outcome trends. Even if preliminary, show what you're observing. Charts or tables work well. Include confidence intervals or standard deviations if you have them. This demonstrates statistical literacy.

  • Explanation of why you chose these validated measures. Connect your measurement approach to your value proposition and target market. This shows strategic thinking, not just compliance with investor expectations.

  • Comparison context. How do your results relate to published benchmarks? If you're showing a 6-point PHQ-9 reduction, note that this exceeds the 5-point clinical significance threshold and is comparable to published therapy outcomes. Context matters enormously.

You should also position your measurement strategy in a way that focuses on generating long term enterprise value. For example, you should say that you’re “building evidence infrastructure alongside product development" and that your "measurement approach supports enterprise sales, not just fundraising conversations."

Position outcome data as a strategic investment in sales enablement. Because that's what it is. Most investors understand business strategy better than they understand research methodology. Connect your measurement efforts to your go-to-market and suddenly they become much more interested.

Conclusion

Engagement metrics that secured seed funding don't satisfy Series A investors in 2026. This reflects digital health's maturation as an industry and how investors now distinguish clearly between usage and health impact. The companies that recognize this distinction early have a significant competitive advantage over those scrambling to build evidence during fundraising processes.

Outcome measurement takes longer to build than founders expect, so start earlier than feels necessary.

Too many founders wait until investors ask. By then, they're trying to retrofit measurement systems, rush data collection, or explain to investors why they don't have outcome data yet. None of these positions are strong.

Data you collect today becomes the foundation for enterprise sales conversations next year and published research the year after. This compounding effect is why early investment in measurement infrastructure pays returns far beyond your immediate fundraising needs. And as you grow, so will your data, the measures you use, and the stories you can tell with that data.

This isn't about abandoning engagement metrics. Engagement data remains important. It answers questions about product-market fit, user experience, and retention. But it's about understanding what questions each type of data answers. Engagement tells you people use your product. Outcomes tell you your product works. Investors need both.

As funding becomes more selective and digital health competition intensifies, validated outcomes are shifting from "nice to have" to "must have." The founders who internalize this shift now (who build measurement infrastructure before they need to present data) will have substantially easier investor conversations than those who wait.

Start building your evidence strategy today. Your future self will thank you.

Key Takeaways

  • "Validated outcomes" means health improvements measured with established instruments. Not engagement scores, satisfaction ratings, or proprietary wellness scales.

  • Investors are really asking: "Can you prove value sufficient for enterprise partnerships and sustainable revenue growth?"

  • Start measurement as soon as possible. If you're planning Series A in 12 months, your measurement strategy should start now.

  • Choose 1 to 2 validated instruments aligned with your value proposition. Don't try to measure everything. Measure what matters to your stakeholders.

  • Transparency about your measurement journey beats overstating preliminary results. Investors respect honest assessment of where you are and where you're going more than inflated claims about incomplete data.


Hey, Steve here again. I hope you found this both insightful and actionable. Remember, if you want to learn more from Jen and discuss these topics with her, sign up for our Hemingway Session on Feb 26th.

As always, please get in touch and let me know what you think of this Guide. I just want to make content that is helpful for you, so your feedback is always a gift.

But for now…

Keep fighting the good fight!

Steve

Founder of Hemingway

References 

  1. Clarke, J., & Draper, S. (2020). Intermittent mindfulness practice can be beneficial, and daily practice can be harmful: An in depth, mixed methods study of the "Calm" app's (mostly positive) effects. Internet Interventions, 19, 100293. https://doi.org/10.1016/j.invent.2019.100293

  2. Zhang, R., Nicholas, J., Knapp, A. A., Graham, A. K., Gray, E., Kwasny, M. J., Reddy, M., & Mohr, D. C. (2019). Clinically meaningful use of mental health apps and its effects on depression: Mixed methods study. Journal of Medical Internet Research, 21(12), e15644. https://doi.org/10.2196/15644
