I. The Beginning
I founded ELI after attending the third funeral of a friend lost to suicide. At each service, I heard the same bewildered refrain: "The last time I saw them, they seemed perfectly fine."
That sentence stuck with me. Because I knew what it meant. By the time someone seems fine, they've often already made peace with their decision. The visible calm is not recovery; it's resignation. The moment of maximum danger arrives disguised as stability.
Those funerals forced me to confront how disastrously late our detection systems are. We wait until breakdown becomes event. We call it crisis, when really it was a trajectory we failed to measure.
So I started building a way to measure it. Not by turning people into datasets, but by listening differently through the thing we all leave behind: language.
ELI began as an experiment, not a company. I built a simple, asynchronous journaling tool where users could write freely (long or short, prompted by nothing) and then grade their mental state at the time of writing. With consent, they could "donate" their data: just age, gender, and occupation.
The first pattern that emerged was humbling. Anxiety and depression didn't look the same across lives. Engineers expressed drift through abstraction and disconnection. Healthcare workers showed it through flattened emotional vocabulary. Artists through fractured metaphor. Occupation and environment shaped how distress appeared in words.
That small dataset became a window. It showed that early detection might not lie in what people said, but how they said it, and that with enough examples, those linguistic fingerprints could be turned into an early-warning system for the mind.
That dataset became the seed of ELI. Everything that followed (the psycholinguistic models, the privacy architecture, the clinician dashboards) grew from that single realization: if language is the first place the mind changes, then we owe it to each other to listen before the silence.
II. The Background: Our Timing Problem
Modern healthcare measures almost everything that keeps a body alive: heart rhythm, oxygen saturation, glucose, sleep. But it is still nearly blind to the thing that keeps a life coherent: the mind. We notice breakdown as an event (a crisis, an ER visit), yet the deterioration that leads there is a curve that unfolds quietly over weeks and months.
ELI's thesis is simple: we don't have a treatment problem, we have a timing problem.
The rest of this story (the architecture, the guardrails, the vision) follows from that premise. Look at the typical timeline of depression or burnout: months of subtle drift (sleep, affect, social withdrawal) that never crosses a clinical threshold, until it does. Then we scramble: medications, leave from work, damage control around relationships and obligations. Treatment often works, but the cost of late awareness is brutal for the person, the family, the team, the economy. A language biomarker exists exactly where deterioration begins (inside the day-to-day fabric of communication) and can surface risk before crisis.
III. Why Language (and Why Now)
Language exposes the inside of a mind in ways no accelerometer can. Pronouns, absolutist words, temporal framing ("always," "never," "I," "no one," past vs. future), syntactic simplification when cognitive load rises: these are well-studied correlates of distress. And unlike intermittent screenings, text and speech show up constantly and passively, without new hardware or behavior change. What changed recently is feasibility: modern NLP can read context ("I'm fine" when nothing is fine), track personal baselines rather than generic norms, and compute velocity (how fast a mind is moving toward the edge) rather than one-off scores. ELI turns this into a repeatable pipeline rather than a research novelty.
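The kinds of features named above (absolutist words, self-focus, syntactic simplification) and the idea of personal baselines can be sketched concretely. This is a minimal illustration, not ELI's actual taxonomy; the word lists and feature names are hypothetical placeholders:

```python
# Hypothetical sketch: turning raw text into a few psycholinguistic
# features, then scoring deviation against a person's own baseline
# rather than a population norm. Word lists are illustrative only.
import re
from statistics import mean, stdev

ABSOLUTIST = {"always", "never", "completely", "nothing", "everyone", "nobody"}
FIRST_PERSON = {"i", "me", "my", "myself", "mine"}

def features(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    return {
        "absolutist_rate": sum(t in ABSOLUTIST for t in tokens) / n,
        "self_focus_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "mean_sentence_len": n / sentences,  # shorter sentences ~ simplification
    }

def baseline_deviation(value: float, history: list[float]) -> float:
    """Z-score of today's value against this person's own history."""
    if len(history) < 2 or stdev(history) == 0:
        return 0.0
    return (value - mean(history)) / stdev(history)
```

The design point is the second function: the same absolutist rate that is alarming for one person may be ordinary for another, so every score is relative to an individual baseline.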
IV. What ELI Actually Is
ELI is a continuous analysis layer for natural language with three functions:
- Passive collection with consent from common channels (messages, email, journals, voice transcripts).
- Real-time psycholinguistic analysis that maps texts to dimensional scores (depression, anxiety, stress), personalized to baseline and updated over time.
- Graduated interventions that escalate thoughtfully (from self-guided nudges to clinician alerts and, when necessary, emergency action) based on severity and rate of change.
Under the hood, the system blends interpretable features (the "Genome of Depression" taxonomy) with transformer semantics and temporal models, so we preserve human legibility while taking advantage of state-of-the-art accuracy. The point isn't to replace clinicians or introspection; it's to restore context that the clinical window can't possibly catch.
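One way to picture that blend of interpretable features and opaque semantics is a dimensional score that keeps a human-legible breakdown of contributions alongside the number. This is a hedged sketch under assumed names and weights, not ELI's implementation:

```python
# Illustrative sketch of blending interpretable feature scores with a
# semantic-model score while preserving legibility: the breakdown of
# contributions travels with the score. All names and weights are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DimensionScore:
    value: float                      # 0..1 severity on one dimension
    contributions: dict[str, float]   # human-legible "why it moved"

def score_dimension(interpretable: dict[str, float],
                    semantic_score: float,
                    weights: dict[str, float],
                    semantic_weight: float = 0.4) -> DimensionScore:
    contrib = {name: weights.get(name, 0.0) * v
               for name, v in interpretable.items()}
    contrib["semantic_model"] = semantic_weight * semantic_score
    return DimensionScore(value=min(sum(contrib.values()), 1.0),
                          contributions=contrib)
```

A clinician inspecting `contributions` can see whether a rise came mostly from, say, absolutist framing or from the semantic model, which is the legibility property the paragraph above insists on.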
V. Our Reasons for Building It (and What "Good" Looks Like)
1) Move care upstream. If a person's language shows a steady rise in absolutist framing and self-focus alongside a drop in future orientation, we should not wait for a crisis evaluation. A weekly nudge to sleep, a scheduled check-in, a low-friction therapy referral (cheap interventions) can prevent expensive, scarring ones later. Success looks like fewer catastrophic episodes, shorter durations, and more people who never enter the revolving door of treatment resistance.
2) Make mental health measurable without making it mechanical. Numbers aren't the goal; direction is. People need to see that "things are trending worse this week" with enough specificity to act, and clinicians need reliability strong enough to adjust plans between visits. ELI's velocity measures are designed for this (less verdict, more weather report).
3) Respect dignity at every layer. Language is intimate. ELI's architecture gives custody to the person by default (on-device processing where possible, selective sharing to clinicians, de-identification and differential privacy when aggregated). A tool that erodes trust cannot improve mental health at population scale.
VI. What Changes When Language Becomes a Vital Sign
- Individuals get an early-warning gauge that lives where life happens. The system learns your normal and flags deviations you might rationalize away in the moment. It also learns to quiet down when the mind re-stabilizes, so help is present but not pestering.
- Clinicians get longitudinal signal between brief encounters: not just "today's PHQ-9," but four weeks of drift in temporal framing and cognitive rigidity, with uncertainty bands and velocity. It's the difference between steering by snapshots and steering by a dashboard.
- Organizations (hospitals, universities, units under strain) can watch de-identified trends ethically (heat maps of risk that inform staffing, resources, and timing for outreach) without turning people into metrics. The unit of care remains the person; population analytics simply put leaders back inside reality.
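The "velocity with uncertainty bands" a clinician dashboard would show can be sketched as a least-squares slope over recent scores, with the standard error of that slope as the uncertainty. This is a minimal illustration of the idea, assuming evenly spaced daily scores; a real system would handle missing days and irregular sampling:

```python
# Minimal sketch of velocity with uncertainty: fit a least-squares line
# through recent daily scores and report (slope, standard error of slope).
import math

def velocity(scores: list[float]) -> tuple[float, float]:
    """Return (slope per day, standard error of the slope)."""
    n = len(scores)
    xs = range(n)
    mx, my = sum(xs) / n, sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, scores)) / sxx
    intercept = my - slope * mx
    resid = sum((y - (slope * x + intercept)) ** 2
                for x, y in zip(xs, scores))
    se = math.sqrt(resid / (n - 2) / sxx) if n > 2 else float("inf")
    return slope, se
```

A steep slope with a tight standard error is the "weather report" worth acting on; the same slope with a wide band is a prompt for human review, not an alarm.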
VII. Guardrails (What We Refuse to Trade Off)
- Consent, provenance, reversibility. Sources are opt-in, signed, and revocable. People can exclude conversations, dates, or channels. You own the spigot.
- Interpretability where it matters. Clinicians can see which dimensions (absolutism, temporal focus, social language, complexity) drove a change. People can read their own trend narrative. Black-box probability without story is not care.
- Escalation with humility. The system treats velocity as a serious signal, but high uncertainty triggers human review, not automated alarmism. We design to waste time before we design to violate dignity.
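The escalation posture above (velocity taken seriously, but high uncertainty routed to a person rather than an alarm) can be sketched as a small decision rule. The thresholds and tier names here are placeholders, not ELI's actual policy:

```python
# Hedged sketch of graduated escalation: severity and velocity drive
# the tier, but high uncertainty always routes to human review first.
# Thresholds and tier names are illustrative placeholders.
def escalation_tier(severity: float, velocity: float, uncertainty: float) -> str:
    if uncertainty > 0.5:
        return "human_review"          # humility: a person looks first
    if severity > 0.8 or velocity > 0.3:
        return "clinician_alert"
    if severity > 0.5 or velocity > 0.1:
        return "scheduled_check_in"
    return "self_guided_nudge"
```

Note the ordering: the uncertainty check comes before any severity check, which is the "waste time before violating dignity" rule made literal.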
VIII. Hypothetical (Near-Term) Moments That Justify the Whole Effort
- A student's language shifts from future-oriented planning to past-focused rumination over two weeks. ELI suggests a low-friction counseling visit before finals week snaps the curve. Grade saved; crisis avoided.
- A night-shift nurse's texts become syntactically flat, absolutist, and self-referential. Velocity crosses threshold. The supervisor is prompted to offer rotation changes and resources; the nurse accepts help without shame because outreach was timed and contextual, not punitive.
- A veteran in therapy shares ELI trends between sessions. The clinician spots a steep drop in lexical diversity and social language two days after a known trigger date and checks in proactively. The intervention is a conversation, not an involuntary hold.
None of these require new miracles. They require signal, timing, and respect.
IX. The Longer Arc
If we do our job, ELI fades into the background of everyday tools, like step counts did. But its social consequence could be larger: we normalize preventive mental healthcare the way we normalized preventive cardiology. And we upgrade the ethical grammar: consentful data, person-first custody, explainable metrics, interventions that preserve agency.
We think of this as building cognitive infrastructure (the boring, durable scaffolding that lets a society respond to invisible suffering before it becomes visible harm). The metric that matters is not how clever the models are; it's how many lives quietly don't fall apart because signal arrived early enough for ordinary care to work.
Closing
The mind rarely announces disaster. It whispers its way there through pronouns, tenses, the narrowing of sentences when the world feels too large. ELI listens to those whispers, translates them into a gentle, useful vital sign, and hands control back to the person and their clinician.
In a healthcare system optimized for events, we're building for trajectories. In a culture that valorizes crisis response, we're investing in preemptive grace. And in a world that counts everything but meaning, we're choosing to measure what helps meaning endure.