How to Build Better Health Chatbots for the Global Majority
A blog by Luke Heinkel and Apoorva Handigol, Frontier Tech Hub
Chatbots in health: promise and pitfalls
The use of AI-powered health chatbots is growing rapidly in low- and middle-income countries (LMICs), promising to bridge gaps in information, access, and care. From sexual and reproductive health education in Kenya to post-surgery support in Peru, chatbots can offer private, on-demand guidance to people who might otherwise be excluded from health systems. But their success depends on much more than technological sophistication.
The Frontier Technologies Hub has supported four pilots across Peru, Kenya, and Nigeria that illustrate both the potential and the complexity of rolling out chatbot innovations in real-world settings. Drawing from these experiences, the new report Lessons from the Frontier: Health Chatbots in LMICs provides critical insights for policymakers, implementers, and funders looking to responsibly scale chatbot tools in healthcare.
Designing for the real world
One key takeaway: health chatbots must be designed for—and with—the people who will actually use them. All pilots surfaced the need for deeply contextual user research early in the process. For instance, while the EmpatIA chatbot in Peru successfully helped post-surgery cancer patients track symptoms and adhere to care plans, it also revealed major gaps in digital literacy, language inclusion, and interface accessibility. In some cases, patients needed carers just to access the app.
In Kenya, young adults used the SRHR chatbot Nena to privately explore topics like contraception and sexual pleasure. But a structured decision-tree model limited users' freedom to ask questions, highlighting the gap between user expectations and tech constraints. And in Nigeria, users of the mDoc wellness chatbot Kem responded positively to its engaging tone and tailored coaching—but not everyone had the language skills or smartphones needed to access it easily.
Equity, inclusivity, and real-world constraints must be baked into chatbot design—not added as an afterthought.
Partnerships and systems thinking matter
No chatbot operates in a vacuum. The most successful pilots established strong partnerships across the health ecosystem—from clinics and ministries to frontline workers and research bodies. For example, EmpatIA’s collaboration with Detecta Clinic was crucial for ethical testing, training the AI on approved data, and tailoring the chatbot to clinicians’ needs.
But other partnerships, such as with public health institutions, were harder to secure. This reflects a wider challenge: health systems are often not structured to accommodate early-stage innovation. To scale effectively, chatbots must be embedded in existing workflows, supported by policy, and paired with training and capacity building for providers.
Guardrails, governance and responsible innovation
Every pilot faced tradeoffs between functionality and risk. None used chatbots to diagnose conditions or provide patient-specific medical advice—rightly so. Instead, they focused on general information, coaching, triage, and referrals. All took care to train models on curated, clinically approved datasets. But even then, risks remain.
If not carefully implemented, chatbots can deepen inequities, spread misinformation, or erode trust in care systems. The report makes clear: strong governance frameworks, ethical oversight, and clear hand-off points to human providers are essential. So is investing in regionally relevant large language models (LLMs), local-language support, and offline alternatives.
Read the report in full: Lessons from the Frontier: Health Chatbots in LMICs.
If you’d like to dig in further…
🚀 Explore the pilot pages for the pilots cited…
📚 Behaviour Change chatbot to encourage vaccine uptake in Nigeria
📚 Sexual and Reproductive Health and Rights (SRHR) chatbot in Kenya
📚 EmpatIA chatbot to enhance healthcare in remote areas of Peru
Publish date: 27/06/2025