A scene I keep thinking about

A few years ago, someone close to me was diagnosed with CLL, chronic lymphocytic leukemia. The part that stuck with me was that it almost didn't get caught. A resident happened to re-read blood work from months earlier while looking into something else entirely, and noticed markers that were clearly abnormal. The results had been sitting in the system the whole time. They just hadn't been flagged or followed up on. If that resident had been doing something different that day, the diagnosis would have come later, maybe much later.

What followed was almost stranger. Once the diagnosis was confirmed, the standard of care for early-stage CLL turned out to be, essentially, waiting. The clinical term is "watch and wait." You monitor the blood work. You come back every few months. The disease is real and named, but until it becomes acute, the system doesn't have much to offer. You carry a diagnosis without a treatment plan and try to live normally inside that.

I've thought about this a lot since then. The blood work had the evidence. The expertise to interpret it existed. But the system that connects evidence to interpretation to action only worked because of a lucky accident, and then, once it did work, the best it could offer was surveillance. The clinicians involved were competent and attentive. The failure, to the extent there was one, was architectural. Care is organized around acute episodes. Continuous reasoning about a patient's trajectory, the kind of thing that would have caught those lab results the first time, isn't really built into how the system operates.

That experience shaped how I think about healthcare. It is a technology problem and a policy problem, but underneath both of those sits something more fundamental: an information problem. The deepest kind, one where the person who most needs clarity almost always has the least of it.

The mechanism underneath

Economists have a term for this: information asymmetry. In healthcare, it operates at every layer simultaneously. Patients have symptoms, history, and lived experience but often lack the clinical framework to interpret them. Clinicians have training and pattern recognition but limited time and incomplete access to a patient's full context. The healthcare system has protocols, referral networks, and institutional knowledge, but these are opaque to the people moving through them. Payers have rules, approval workflows, and incentive structures that can redirect care in ways that are invisible to both patient and provider.

When these layers align, the system works remarkably well. Modern medicine, at its best, is extraordinary. But alignment is the exception, not the rule, and the consequences of misalignment are not evenly distributed. A person with a good primary care relationship, a responsive insurance plan, and the ability to take time off work will navigate the system competently even when it's confusing. A person without those advantages faces a compounding series of uncertainties: uncertainty about urgency, about next steps, about what matters in a complex history, about whether a recommendation is safe, about where to go and when. The system asks them to behave like experts while they wait to see one.

I want to be careful not to overstate this. Many people receive excellent care. Many clinicians go to extraordinary lengths to close the gap for their patients. The American healthcare system, for all its dysfunction, produces world-class outcomes in many domains. But the variance is enormous, and the variance tracks geography, income, and access to information in ways that should trouble anyone who looks closely.

The numbers behind the gap

One statistic captures the structural dimension of this problem with uncomfortable clarity: 46.3% of U.S. counties have no practicing cardiologist, representing roughly 22 million residents. Now, county-level data can be somewhat misleading (some of those residents live near county borders and can reach a specialist in an adjacent area), but the overall picture is stark. A person living in one of those underserved counties can do everything right and still face a barrier that has nothing to do with their choices. Cardiovascular risk does not wait for a referral slot. Chest pain does not respect geography. Preventive care becomes a logistical challenge rather than a routine habit.

[Figure: waffle chart of specialist access. Nearly half of U.S. counties (46.3%) have no practicing cardiologist.]

Cardiology is only one lens, but similar gaps exist across psychiatry, endocrinology, neurology, oncology, and rheumatology. The unevenness is a structural feature, one that emerges from how medical training, specialization incentives, and population density interact. And it acts like a multiplier on risk.

Even where specialists exist, time becomes a barrier. A large survey of appointment availability across major metro areas found average physician wait times of around 31 days, with the trend increasing. A month can be a long time in medicine. Long for disease progression, long for anxiety, long for missed work, long for compounding uncertainty. Delays push people toward urgent care, emergency departments, or doing nothing. Each path carries costs, both human and economic, and each path is a downstream consequence of the same root cause: the person didn't have what they needed to act earlier and more appropriately.

The pipeline itself is under strain. The Association of American Medical Colleges projects a U.S. shortage of up to 86,000 physicians by 2036. Shortage behaves like gravity. It pulls access toward dense metros, concentrates expertise in well-resourced systems, and raises the price of time. It also increases clinician burden, which can reduce the quality of attention patients receive once they finally reach care, which in turn widens the information gap the patient was already struggling with. The cycle is self-reinforcing.

[Figure: timeline of the projected physician shortage and its cascading effects on healthcare delivery.]

Why existing tools haven't solved this

The obvious response is: hasn't technology already addressed this? We have telehealth, symptom checkers, patient portals, WebMD, and now a generation of general-purpose AI chatbots. Surely the information gap is closing.

I think the honest answer is: not nearly enough, and in some cases these tools have made the problem worse.

Telehealth expanded access to existing clinicians, and that is genuinely valuable. But it didn't change the fundamental supply constraint. A telehealth appointment still requires a provider on the other end, still operates within the same scheduling bottlenecks, and still leaves the patient on their own when they're trying to figure out whether they need the appointment in the first place. Symptom checkers have historically been either too conservative (telling everyone to go to the ER) or too shallow (providing generic information that doesn't adapt to context). Patient portals are useful for logistics but do almost nothing to help a patient reason about their health.

General-purpose AI chatbots are more interesting and more dangerous. They can produce fluent, confident text about medical topics. The problem is that fluency and correctness are different things, and in medicine the gap between them can be clinically significant. A chatbot that sounds reassuring when it should be escalating, or that provides a plausible-sounding medication interaction that happens to be wrong, creates a new kind of harm, one that feels like help right up until the moment it isn't. The information asymmetry doesn't disappear; it gets dressed up in a more convincing interface.

The question that actually matters

Here is what I think is the important framing: people will use AI for health guidance. They already are, in large numbers, and that behavior will accelerate because the underlying need is real and the friction in the existing system is high. The question is what kind of AI they get.

There are essentially two paths. One produces confident text that feels helpful while hiding its uncertainty: systems that optimize for user satisfaction and engagement rather than clinical accuracy, and that treat medicine like a content problem. The other produces structured clinical reasoning that respects risk, acknowledges what it doesn't know, routes to human care when the situation demands it, and improves the patient's odds of reaching the right care at the right time.

These two paths look similar on the surface. The user experience might even feel comparable in the first few interactions. But they diverge dramatically at the margins, and medicine is a domain where the margins are exactly where the stakes are highest. The patient with the atypical presentation, the edge-case drug interaction, the symptom pattern that looks benign but isn't. Those are the cases where the difference between the two paths becomes the difference between early intervention and a missed window.

[Figure: engagement-optimized and clinically rigorous AI perform similarly on routine cases but diverge dramatically at the margins, where the stakes are highest.]

The strongest objections, taken seriously

Before going further, I want to engage directly with the most serious objections to this line of thinking, because anyone building in this space has an obligation to reckon with them honestly.

The first objection is that AI has no business providing clinical guidance. Medicine is too complex, too contextual, and too high-stakes for a system that can hallucinate, that lacks embodied experience, and that cannot be held accountable the way a licensed physician can. This is a real concern, not a strawman. Current AI systems do hallucinate. They do lack the embodied clinical judgment that comes from years of patient care. And the liability framework for AI-generated medical guidance is genuinely unsettled. I would push back, though, on the implicit assumption embedded in this objection: that the alternative to AI guidance is physician guidance. For the 22 million people in counties without a cardiologist, for the person waiting 31 days for an appointment, for the uninsured patient who can't afford an ER visit, the actual alternative is often no guidance at all, or a Google search, or a Reddit thread. The relevant comparison is "structured AI reasoning vs. unstructured uncertainty." Framed that way, the calculus changes.

The second objection is that this will be used to justify further underinvestment in the healthcare workforce. If AI can triage and guide patients, the argument goes, policymakers and insurers will use that as a reason to accept the physician shortage rather than fix it. I think this is a legitimate risk, and one that anyone building these systems should actively work against. But the honest response is that the shortage is already here, it is already projected to worsen, and no realistic expansion of medical training will close the gap in time. The question is what happens to the people who fall into that gap while we wait. Autonomous clinical AI does not replace the need for more physicians. It addresses the reality that "more physicians" is a decade-away solution to a problem people face today. If policymakers use AI as a reason to stop investing in medical training, that would be a policy failure worth naming in advance. It would not change the fact that millions of people need clinical reasoning now, in places where no physician is available to provide it.

The third objection is about equity. Won't this technology, like most technologies, reach affluent early adopters first and widen the gap before it closes it? That is the historical pattern, but I don't think it applies here in the same way. The cost structure of AI is fundamentally different from the cost structure of physician labor. Software scales in ways that human expertise cannot. The marginal cost of serving the hundred-thousandth user is approximately zero, which is categorically different from the marginal cost of training the hundred-thousandth physician. If the technology is built with equity as a design constraint, it has the structural potential to be more equitable than what it replaces, not less.

The discipline of building in healthcare

A wave of companies is racing into patient-facing AI. The incentive is obvious: demand is massive, access is constrained, and software scales faster than clinicians can. In that kind of market, speed becomes the default strategy. Ship quickly, iterate in production, optimize for engagement, and treat failures as a normal part of product discovery.

Healthcare punishes that mindset.

A missed red flag, a premature reassurance, an unsafe medication suggestion, or a delayed escalation has consequences that do not resemble a broken checkout flow. Clinical systems carry asymmetric risk, and the tail events matter more than the average case. This is the fundamental tension that most AI companies in healthcare have not yet resolved, and I'm not certain anyone has resolved it fully, including me. But I believe the resolution lies in treating safety as architecture rather than compliance: embedding structured reasoning, explicit uncertainty, and clear escalation pathways into the core system design rather than bolting a review layer on top of a system that was optimized for something else.

[Figure: layered safety architecture combining structured reasoning, explicit uncertainty, and clear escalation pathways.]

What this means in practice is that the product can't be a single model behind a chat interface. It has to be a multi-stage clinical pipeline: intake reasoning that asks the right follow-up questions, a detection layer that identifies red flags and escalation signals, a knowledge layer grounded in real clinical evidence, and an output layer that documents its reasoning transparently so the patient understands why, not just what. Each stage has its own failure modes and its own safety requirements. A monolithic model approach, the kind most consumer AI companies default to, collapses all of these concerns into one prompt and hopes for the best. That works for writing emails. It does not work when the system needs to distinguish a tension headache from a subarachnoid hemorrhage.
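
To make the structural point concrete, here is a minimal sketch of what a staged pipeline looks like as code. Everything in it is an illustrative assumption, not Certuma's actual implementation: the stage names, the `RED_FLAG_RULES` table, and the confidence numbers are all invented placeholders. What matters is the shape, in which each stage is a separate component with its own contract.

```python
# A minimal sketch of a staged clinical pipeline. All names, rules, and
# numbers here are illustrative assumptions, not a real clinical system.
from dataclasses import dataclass, field
from enum import Enum


class Urgency(Enum):
    ROUTINE = "routine"
    URGENT = "urgent"
    EMERGENCY = "emergency"


@dataclass
class Intake:
    """Output of the intake stage: structured answers, not a raw transcript."""
    symptoms: list[str]
    duration_days: int
    history: list[str] = field(default_factory=list)


@dataclass
class Assessment:
    """Output of the pipeline: reasoning the patient can inspect."""
    differentials: list[str]   # candidate explanations, ranked
    urgency: Urgency
    confidence: float          # calibrated 0..1, surfaced rather than hidden
    red_flags: list[str]
    rationale: str             # the "why", not just the "what"


# Hypothetical rules; a real detection layer would be far richer
# and clinically validated.
RED_FLAG_RULES = {
    "worst headache": "possible subarachnoid hemorrhage",
    "chest pain": "possible acute coronary syndrome",
}


def detect_red_flags(intake: Intake) -> list[str]:
    """Detection stage: runs independently of the main reasoning step,
    so an escalation signal can never be reasoned away."""
    return [
        warning
        for symptom in intake.symptoms
        for phrase, warning in RED_FLAG_RULES.items()
        if phrase in symptom.lower()
    ]


def assess(intake: Intake) -> Assessment:
    """Knowledge and output stages, collapsed here for brevity: ground the
    differentials in evidence, then document the reasoning."""
    red_flags = detect_red_flags(intake)
    return Assessment(
        differentials=["tension headache"],  # stand-in for real reasoning
        urgency=Urgency.EMERGENCY if red_flags else Urgency.ROUTINE,
        confidence=0.35 if red_flags else 0.85,
        red_flags=red_flags,
        rationale="Red flags force escalation regardless of confidence.",
    )


if __name__ == "__main__":
    case = Intake(symptoms=["sudden, worst headache of my life"], duration_days=0)
    print(assess(case))
```

The point is architectural rather than algorithmic: the red-flag check is a separate stage with veto power over urgency, so a fluent but wrong reasoning step cannot suppress an escalation signal.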

This is harder to build, but the result is a system that earns trust at scale. It knows when to say "I'm not confident enough here" and can route to human care when the situation demands it, while handling the full reasoning on its own when it can. It builds for continuity: what to do next, what to watch for, what changes urgency, and when the situation has escalated beyond its scope. In healthcare, trust is the only growth strategy that compounds.
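
Continuing the sketch above, the escalation decision itself can be a small, auditable policy rather than an emergent behavior of a model. The 0.7 cutoff below is an invented placeholder, not a clinical constant:

```python
def route(assessment: Assessment) -> str:
    """Routing policy: handle autonomously only when urgency is routine
    AND confidence clears a threshold; otherwise hand off to human care."""
    if assessment.urgency is not Urgency.ROUTINE:
        return "escalate: seek in-person care now"
    if assessment.confidence < 0.7:
        return "escalate: not confident enough to advise alone"
    return "handle: care plan, what to watch for, and when to recheck"
```

Keeping this decision outside the model makes "I'm not confident enough here" a guaranteed behavior rather than a hoped-for one.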

What we're building, and why

This is the reason we're building Certuma. The physician shortage is real, it is projected to get worse, and no plausible increase in medical training capacity will close the gap in time. The people who fall into that gap deserve more than a symptom checker or a chatbot that sounds confident. They deserve an autonomous doctor: a system capable of conducting a real clinical intake, reasoning through differential diagnoses, identifying red flags, and delivering a structured care plan on its own.

I want to be direct about this because the framing matters. Certuma is not a copilot for physicians. It is not a tool that summarizes notes or pre-fills charts. Those products are valuable, but they serve clinicians who are already in the room. We are building for the vast and growing number of situations where there is no clinician in the room, where the wait is five weeks, where the nearest specialist is two hours away, where the cost of an ER visit is a week's rent. In those moments, the choice is between autonomous clinical reasoning and no clinical reasoning at all.

I use the phrase "autonomous doctor" deliberately, and I know it makes some people uncomfortable. The discomfort is worth sitting with. Autonomous medical reasoning will exist. It is already emerging, unevenly and without enough care. The question is whether it will be built with the clinical rigor and architectural discipline the domain requires, or whether it will be built fast and fixed later. In healthcare, "fix later" has a body count.

The larger picture

People talk about "healthcare access" as if it were mainly a supply problem. Supply matters enormously. We need more physicians, more training programs, more residency slots, and more funding for the infrastructure that supports them. But access is also, and perhaps more fundamentally, an information problem. When expertise is scarce, information becomes triage. It becomes timing. It becomes whether a person takes action at the moment when action is most effective, or arrives at the system after the window has narrowed.

I think we are at the beginning of a transition that will be as significant for medicine as the stethoscope or the electronic health record, and while the timeline and specific form of the technology will evolve, the direction is clear. What I'm most certain about is the underlying need. The information asymmetry that shapes healthcare outcomes is not going to be resolved by training enough physicians (the math doesn't work) and it's not going to be resolved by generic AI chatbots that treat clinical reasoning as a text-generation problem. It requires something new: systems that are clinically grounded, architecturally cautious, and designed from the beginning to bridge the gap between a person's uncertainty and the structured guidance they need to act well.

The mission is to build that missing layer so that care quality depends less on zip code, wait lists, and luck, and more on timely understanding and the right next step. The work is hard, and we are early. But the cost of not trying, of leaving the information gap to widen as demand grows and supply falls further behind, is a future I'd rather not accept.