Thursday, 14 August 2025

Hi friends,

Imagine this. It’s 2035, and the two of us are sitting down for a chat.

The conversation turns to what it was like working in mental health back in 2025. The topic of AI comes up. What do our 2035 selves say about it?

Will we look back and see 2025 as a time when we got carried away with the hype of a new technology? Will we rue that we ever allowed it to come into existence? Or will we see it as the dawn of something that changed healthcare forever?

Answering that question is hard. But I think about it a lot. We should not be passive about the role technology plays in our lives. We should gather evidence and insight, form viewpoints based on strong ethical grounds, and act on those views.

One way I get insight into this question is by talking to smart people.

Ross Harper is perhaps one of the smartest people in this space. He holds a Ph.D. in Computational Neuroscience and a Master’s in Mathematical Modelling from UCL. He also holds a Master’s in Natural Sciences from the University of Cambridge.

For the last seven years, Ross has been running Limbic, one of the hottest AI mental healthcare businesses. He’s at the coalface of AI in mental health, with deep insights into the technology, how it’s changing care and how the ecosystem is adopting (and paying for) it all.

Ross recently wrote a post sharing his thesis on the opportunity of AI in mental health. He’s optimistic, and I really enjoyed the post - it’s highly logical and supported by data.

But I also had questions. So I caught up with him for a conversation about all things Limbic and AI in mental healthcare. We chat about clinical-grade LLMs, AI commercialisation, how the therapist’s role may change, the importance of trust, and even whether AI can actually care. It’s a fascinating conversation.

So let’s get into it.

Lessons from Ross (summary):

  1. Clinical AI agents could finally unlock scalable care. After scribes and non-clinical AI agents, Ross sees the third wave of AI adoption as AI agents capable of performing clinical services. Clinical services account for 70% of all healthcare services and are massively under-resourced. Cracking this problem is what could truly scale mental healthcare.

  2. The role of the therapist may be changing. If therapy services are unbundled and clinical AI agents are adopted at scale, the role of the therapist may change substantially. Ross sees a world where therapists may act as clinical supervisors to specialised AI agents. In this world, the therapist would focus on the most complex cases, treatment planning, managing quality and maintaining the therapeutic relationship.

  3. LLMs must be paired with clinical reasoning AI systems. For AI to deliver quality care, LLMs must be paired with a more structured AI system with clinical reasoning. This way, you get the conversational benefits of LLMs but with clinical certainty and, importantly, transparency into how clinical decisions are made.

  4. Trust must be a priority. Building trust takes time and relentless effort. It requires peer-reviewed evidence, regulatory approvals, accreditations and real-world impact. That trust is the only way to give stakeholders the confidence to adopt new technologies in this space. Over time, high trust will be a major asset.

Steve: Hey Ross, in your recent post, you described healthcare as a massive services market. Clinical services account for about 70% of that market.

One hypothesis I hold is that AI will lead to the unbundling of therapy services - in fact, I think it already is. Most people agree that AI will replace at least some of the jobs done by a therapist. The real question is what proportion of a therapist’s role it will replace. What’s your take on where that line should be?

Ross: I think you and I agree. For a therapist to scale from a panel of 30 patients to 300, AI will need to take on around 90% of the jobs they currently do.

Some of that is administrative: note-taking, scheduling, documentation, follow-ups, etc. But the bigger opportunity is in the clinical workload: triage, assessment, and even elements of therapy delivery.

We already have strong peer-reviewed evidence from Limbic showing that AI can reliably handle triage. Large parts of the assessment process can be automated too, e.g. gathering the right information and using statistical models to accurately predict the primary presenting issue.

And because so much of talk therapy is skills-based, a lot of the “homework” and practice adherence can be supported by an empathetic, knowledgeable AI, rather than consuming hours of clinician time.

I see the future therapist role as a clinical supervisor to a fleet of specialised AI agents. The AI does most of the front-line work; the therapist focuses on the most complex cases, ensures quality, steers treatment plans, and crucially maintains the therapeutic relationship (but with far less time needed per patient).

That’s how you scale without losing quality.
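
A quick technical aside from me before the next question. To make the “statistical models” point concrete, here’s a toy sketch of the general idea in Python. It is not Limbic’s model; every feature, label and number below is invented. The point is simply that structured intake answers can feed an ordinary classifier that suggests the primary presenting issue for a clinician to confirm.

    # Toy sketch: structured intake answers -> predicted primary presenting issue.
    # NOT Limbic's model; all features, labels and numbers are invented.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical historical intake records: questionnaire items -> confirmed primary issue
    records = [
        ({"phq9": 18, "gad7": 6,  "sleep_problems": 1}, "depression"),
        ({"phq9": 5,  "gad7": 16, "sleep_problems": 0}, "anxiety"),
        ({"phq9": 15, "gad7": 14, "sleep_problems": 1}, "mixed"),
        ({"phq9": 4,  "gad7": 5,  "sleep_problems": 0}, "subclinical"),
    ]
    features, labels = zip(*records)

    # Vectorise the structured answers and fit a plain logistic regression
    model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=1000))
    model.fit(list(features), list(labels))

    # At intake, the AI gathers the same structured information and suggests
    # the most likely presenting issue, with a confidence, for a clinician to confirm.
    new_patient = {"phq9": 17, "gad7": 7, "sleep_problems": 1}
    print(model.predict([new_patient])[0])           # e.g. "depression"
    print(model.predict_proba([new_patient]).max())  # model confidence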


Steve: When we think about adoption by the healthcare system, it seems like we started with admin tasks/scribes, then intake and assessment tools, and now perhaps some clinical decision support.

On the consumer side, we’re seeing huge demand for conversational AI agents delivering support - lots are pursuing that avenue, but it seems mostly limited to D2C channels for now.

Going forward, what do you think the adoption roadmap for different AI use-cases looks like in mental healthcare?

Ross: I see AI in healthcare arriving in three waves.

  • Wave one was the copilots: scribes like Abridge in the US. They sit alongside the clinician, capturing notes and streamlining documentation.

  • Wave two is non-clinical AI agents. These don’t just assist; they complete whole tasks end-to-end, but in the admin domain (scheduling, intake calls, follow-ups, etc.), freeing up clinician time without touching the clinical workload.

  • Wave three is the real prize: clinical AI agents. Around 70% of healthcare services are clinical, and these tasks currently require credentialed clinicians (the scarcest resource in the system). If AI can safely deliver care directly, as an autonomous member of the care team, we can break the supply constraint entirely.

We’re starting to see this final wave of adoption now. Limbic is already a Class IIa-approved medical device, with five peer-reviewed trials showing direct clinical impact in patient work. That’s wave three in action! And I believe this is the only way we can truly scale outcomes, not just admin efficiency.

On the consumer side, you’re right, there’s a parallel market of D2C wellness chatbots, and there are probably thousands of them in the app stores. But that’s a completely different space, with different adoption dynamics. Healthcare is slower to move for good reason. This is a regulated sector that demands deep workflow integration, clinical evidence, robust data security, strict information governance, and, above all, trust from clinicians and health system leaders.


Steve: When chatting about generative AI in mental healthcare, you previously said: “What we actually need is conversational flexibility with clinical certainty. An experience that feels magical, but is grounded in clinical precision.”

At the time, you also said you wouldn’t get into how you’ve solved that. Would you like to get into it now??

Ross: Go on then! But fair warning, I did my PhD in this area, so you might regret asking.

Large language models (LLMs) are extraordinary linguists. They’ve easily passed the Turing test, they adapt fluidly to context, and they’ve transformed how we design patient-facing experiences. But the very thing that makes them so powerful - billions of parameters enabling incredible conversational flexibility - is also their Achilles’ heel in healthcare.

With so many parameters, you can’t truly see how an LLM is reasoning. It’s a black box. Yes, you can ask it to “explain” its reasoning, but that’s just another generated output, not an objective window into its decision-making. And in high-stakes healthcare, this isn’t good enough.

So while LLMs give us conversational magic, they can’t really give us clinical certainty. The answer is to separate the two: keep language generation with the LLM, but put clinical reasoning into a different, more structured AI system. At Limbic, we call this the Limbic Layer. It’s a cognitive architecture that explicitly represents each step of clinical reasoning in a way anyone can inspect, interrogate, and regulate.

The Limbic Layer guides the LLM with proven clinical logic, ensuring protocol adherence and safety. It’s why we can regulate our system as a medical device, and why in peer-reviewed studies we’ve shown improved patient outcomes and higher clinical compliance.
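
A sketch of what Ross is describing, in my words, not his. This is purely illustrative and is not Limbic’s actual implementation; every rule, threshold and name is hypothetical. The pattern is: a small, inspectable reasoning layer decides the next clinical step, and the LLM is only asked to phrase that decision conversationally.

    # Minimal sketch of "conversational flexibility with clinical certainty".
    # Hypothetical rules and names, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class PatientState:
        phq9_score: int      # depression questionnaire total (0-27)
        risk_flagged: bool   # any risk disclosure detected so far

    def next_clinical_step(state: PatientState) -> str:
        """Explicit, auditable rules: every decision path can be read line by line."""
        if state.risk_flagged:
            return "ESCALATE: hand over to a human clinician and share crisis resources."
        if state.phq9_score >= 20:
            return "RECOMMEND: step-up assessment with a qualified therapist."
        if state.phq9_score >= 10:
            return "OFFER: a guided self-help module for low mood."
        return "CONTINUE: routine check-in and psychoeducation."

    def phrase_for_patient(clinical_step: str, llm=None) -> str:
        """The LLM is only responsible for wording; here it is stubbed with a template."""
        if llm is not None:
            return llm(f"Rephrase warmly for the patient, without changing the decision: {clinical_step}")
        return f"[warm, conversational phrasing of] {clinical_step}"

    state = PatientState(phq9_score=22, risk_flagged=False)
    step = next_clinical_step(state)   # decided by transparent rules, not by the LLM
    print(phrase_for_patient(step))    # the LLM words the decision, it never changes it

The decision path is ordinary logic you can inspect, test and regulate, while the language model handles only the conversation. That, as I understand it, is the spirit of what Ross describes as the Limbic Layer.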


THR Pro: A Membership for Mental Health Operators

Quick one for you: we’ve recently opened applications for the next cohort of our vetted community, a space for those shaping the future of mental health. One member described it as “a way to get through the inevitable discouragement and exhaustion of a healthcare career”. Lol. This may be true, but it’s also full of smart, passionate people collaborating on important problems.

Applications are open to THR Pro members, so sign up today and apply. Of course, as a THR Pro Member, you’ll also get access to all of our Pro Tier content, analysis and events.


Steve: Going back to your post for a second, it was titled “The Future of Healthcare Belongs to Those Who Teach AI to Care”. I really want to get your take on this topic… Can AI care? And if it can, how do you teach it?

Ross: Full disclosure, the title of my article was a bit of wordplay. I left it open as to whether “care” meant being caring or delivering care. What I’m really focused on is the latter. When AI can deliver patient-facing care (i.e. actual clinical services), we’re firmly in the third wave of healthcare AI adoption. That’s when healthcare can truly scale, and patient outcomes can dramatically improve. I believe AI can, and will, deliver patient-facing care at scale. In that sense, Limbic is teaching AI to care.

Now, can AI care in the emotional sense? That’s a deeper philosophical question. Is caring about feeling with someone (empathy), or feeling for someone (sympathy)? Humans don’t even experience emotions the same way, so the bar for “true” caring is hard to define. If we accept that advanced biological computers (human brains) can experience feelings, then in theory, a sufficiently advanced non-biological computer could too. It’s a fascinating thought experiment; one I spent more than a few late nights wrestling with during my PhD.

But in healthcare today, the more practical question is: can AI create a sense of care for the patient? Can it make them feel safe, build trust, and support them in getting better? On all counts, yes. Across five clinical trials, with more results on the way, we’ve shown that Limbic’s AI can build therapeutic alliances, keep patients engaged with treatment, and support clinicians in driving recovery. Whether or not it “feels” anything itself, it can help people heal. And that’s what matters right now.


Steve: Let’s talk about the commercialisation of this technology. AI agents in mental healthcare are still an emerging phenomenon. What’s your mental model for the market you are playing in? Do you see yourself as creating a new category here, or is there enough of an existing market that you feel you can slot into it?

Ross: We’re creating a new category. And that’s very hard. But someone has to do it. The good news is that healthcare is adopting AI at an unprecedented rate. Look at how quickly AI scribes have gone from novelty to mainstream. It’s amazing. This is an industry that resisted tech for decades, yet somehow went from fax machines to foundation models in about five minutes.

With clinical AI agents in mental healthcare, the value is obvious: there aren’t enough therapists, which means hundreds of millions of vulnerable people go without care. AI agents can change that. But yes, the category doesn’t exist. And if it was hard to get stakeholders comfortable with AI doing back-office documentation, imagine their reaction to AI directly delivering a clinical effect. That’s a whole new level of scrutiny. Our clinical evidence and regulatory approvals have helped us here.

Commercialisation is also tricky because most healthcare systems pay for (human) services rendered, not for outcomes. That perversely means there’s little financial incentive for providers to adopt something that lets them deliver more outcomes per unit time. Value-based arrangements help here: if the provider is risk-bearing, then clinical AI agents like Limbic become hugely valuable. Indeed, it’s no coincidence we scaled rapidly into nearly half of the NHS’s Talking Therapies services, which are inherently value-based.

So yes, this is a new category, with all the friction that comes with that. But the size of the problem is enormous, and I believe mental healthcare will be the first sector to embrace patient-facing AI agents at scale. The professionals in this field will be the architects of a new era in healthcare, and that’s a very exciting prospect. One that I want to be a part of.


Steve: What’s the hardest part of the commercial conversation right now? Is it proving the product works, convincing buyers it’s safe, showing it’s worth paying for… something else?

Ross: The hardest part commercially is navigating the misaligned financial incentives between those who provide care and those who pay for it.

Limbic has no issue convincing healthcare stakeholders that we are clinically rigorous, safe, validated, and deliver meaningful outcomes. We are the largest ever deployment of a patient-facing clinical AI agent, and every marketing claim we make is backed by peer-reviewed research in reputable journals. (I would encourage your readers to look at our Nature Medicine paper, which shows the impact of our AI on diversity, equity and inclusion, and has inspired me).

I often talk to other founders about three types of risk: technology risk, product risk, and market risk. Clinical AI agents face all three. The opportunity, however, is enormous. Limbic has removed the technology and product risk entirely, and we are scaling commercial traction in both the UK and the US at speed. That gives me every reason to be confident about the future. These are uncharted waters, and they demand bravery from early adopters. We’re fortunate to work with some of the most forward-thinking, mission-driven leaders in healthcare. The right partners don’t just embrace change, but actively shape the future.


Steve: You have a pretty compelling vision for the future, one where individual practitioners become clinical team leaders, treating hundreds instead of dozens of patients using the leverage of an agentic AI workforce. 

I’ve got a few questions for you on this. First, do you think there’s a risk therapists may not want to work in these kinds of roles - managing AI workforces? For many therapists that I know, the face-to-face element of the job is the most rewarding. AI that reduces note-taking/billing time is definitely helpful. But is there a risk that that time just gets replaced with admin time managing their AI agents? 

Ross: First point to make clear: no clinician is losing their job. The supply-demand gap in mental healthcare is so vast that we could 10x clinician productivity and still not meet the need. In that sense, it’s unethical not to pursue this.

You’re right that many therapists value face-to-face work, and that will remain a core part of the role. And I’d further concede that it’s an easier sell to automate back-office admin work rather than clinical work. You know, note-taking, billing and paperwork are universally disliked. But the reality is that not every patient interaction requires the clinician’s full presence either. Often it’s re-explaining a CBT exercise, checking homework, or gathering updates; important work, but repetitive and not necessarily the best use of a highly trained human expert. This is why I believe we should see this as a large-scale promotion of the clinical workforce. Already, some of the more repetitive clinical work is completed by lower-skilled clinicians, overseen by clinical supervisors with doctorates and tenure. This is just an extension of that model. We’re adding a new, infinitely scalable layer to the staffing pyramid. And the industry sorely needs it.

Think of AI agents as your clinical workforce. The therapist becomes the supervisor - still calling the shots, exercising judgment, and building relationships with patients - but now with the reach to help ten times as many people. It’s still their impact, their care, and their standards. The difference is they finally get to spend more time doing the work they came into the profession for: helping people get better.


Steve: One topic on which we are both very aligned is the importance of trust in building an AI business in healthcare. I like your framework for building trust at Limbic. What’s the hardest part about executing on that framework?

Ross: The hardest part is that doing it right takes time and relentless effort. At Limbic, every claim is backed by peer-reviewed evidence, we hold regulatory approvals and international IG accreditations, and our AI has been proven in 500,000 patients. We did all this, not because it’s quick or easy, but because in healthcare, trust is everything.

In a regulated market, those foundations pay dividends. They create confidence among clinicians, health systems, and patients that this technology is safe, effective, and here for the long term. And personally, I want history to remember us as the good guys. God knows there are already enough bad actors in this space.


Steve: Your vision for the future is bold, but it’s also very logical. However, if we’re having this chat in ten years' time and your vision for the future hasn’t come true, why do you think that would have been? 

Ross: If my vision hasn’t come true in ten years, it will likely be because the industry couldn’t get out of its own way. In the US, that could mean payers refusing to reimburse AI care delivery, or state policies with blanket bans that failed to distinguish proven from unproven AI (shutting out the good for fear of the bad). In the UK, I’ve seen rigid budgets locked into the same ineffective approaches, with no flexibility to fund new solutions to the same problems. Death by bureaucracy. I hope that’s not the future. The people in healthcare have as much appetite for improvement as I do. We just have to be brave enough to work together and make it happen.

Steve: Ross, as always, I really enjoyed this conversation. ‘Til next time!


That’s all for this week. As always, I hope you found it insightful. Shoot me an email to let me know what you thought.

Keep fighting the good fight!

Steve

Founder of The Hemingway Group

P.S. If you want to become a THR Pro member, you can learn more here.
