In other posts, I talked about healthcare’s Big Bet on supporting caregivers and how the system I’m building (“Ren”) aims to support caregivers at scale. In this post, I want to describe the AI technologies I’m using and what I mean when I talk about Ren as a “Hybrid AI”.
Hybrid AI
Hybrid AI is an approach to artificial intelligence that deliberately combines multiple forms of AI technologies and traditions. The goal is to build on the strengths of each tradition while compensating for the limitations of any single paradigm.
In the modern era, Hybrid AI typically means neuro-symbolic AI – a synthesis of “neuro” traditions (notably, LLMs) and “symbolic” traditions (or what is sometimes called “good old-fashioned AI”). But in some corners, Hybrid AI means synthesis of a broader set of AI traditions such as multi-agent architectures, truth-maintenance systems, evolutionary learning, and so on. (Read more about this in Hybrid & Neuro-Symbolic AI: A Different Take.)
Ren weaves together 60+ years of AI technology and tradition. As such, Ren is a distinctive kind of Hybrid AI, drawing on roughly nine AI traditions synthesized into four core “pillars”:
- Epistemic Awareness & Agency: The “Knowing” Part.
- Reflective Reasoning: The “Thinking” Part.
- Improvisational Troupe of Agents: The “Acting” Part.
- Neuro-Symbolic: The “Interfacing” Part.

Epistemic Awareness & Agency: The “Knowing” Part
Epistemic Awareness & Agency (EAA) is a synthesis of three traditions concerned with how intelligent systems handle knowledge. From knowledge representation it borrows the structures for encoding and connecting facts, most notably knowledge graphs and ontologies. From computational epistemology it takes the mechanisms for turning assertions into belief stances, reconciling conflicting information, weighting sources, and applying decay over time. From epistemic exploration it inherits the ability to recognize when knowledge is insufficient and to seek out what is missing.
- Knowledge Representation (KR). In EAA, knowledge representation provides the scaffolding for everything else. Ren uses a three-layer system to handle the full spectrum of knowledge – from speculative to verified. Hunches capture proto-knowledge: partial, unstructured, or speculative information stored as RDF-style triples that don’t require resolved entities. Propositions represent structured beliefs about relationships between known entities in the graph, carrying metadata like confidence scores and provenance, but remaining unresolved until evidence accumulates. The Graph serves as the store of resolved knowledge: a directed, heterogeneous graph of domain entities (people, roles, relationships, conditions, etc.) and their connections that Ren holds as verified facts. Together, these three layers form the Epistemic Substrate – Ren’s shared understanding of the world and the foundation for all reasoning activities. For more, see my post on Knowledge Representation & the Epistemic Substrate.
- Computational Epistemology (CE). Computational epistemology concerns itself with the status of knowledge: where an assertion came from, how confident we are in it, how it interacts with other beliefs, and how it changes over time. Traditions here include belief revision, truth maintenance, probabilistic reasoning, and models of decay. EAA draws on this machinery to let the system hold “beliefs about its beliefs” – attaching provenance, confidence, and coherence to every piece of knowledge. What EAA adds is the insistence that these epistemic stances are not abstract logic alone but working features of a system that must reason and act under uncertainty.
- Epistemic Exploration (EE). Epistemic exploration refers to the ability of an intelligent system to recognize when its knowledge is insufficient and to take action to reduce that uncertainty. In reinforcement learning and cognitive science, this means seeking out novel or informative experiences rather than merely exploiting what is already known. In the context of EAA, epistemic exploration extends beyond physical environments to the informational domain: probing gaps, asking questions, consulting sources, and testing propositions. It ensures that the system does not stall when faced with incomplete knowledge but instead treats the very act of inquiry as a central function. What EAA adds is the integration of this exploratory drive into its ongoing reasoning loop, so that gaps are not only noticed but actively pursued and resolved.
Together these threads form a capability that goes beyond storing or querying information. Knowledge representation gives the system a substrate of entities and relationships. Computational epistemology allows it to treat that substrate as beliefs with origins, confidence levels, and potential conflicts. Epistemic exploration provides the drive to notice gaps between what it knows and what it needs to know, and to act on those gaps. The result is an architecture where knowledge is never flat. Every fact is accompanied by its provenance, its reliability, and its relation to other beliefs. The system can revise its stance as new evidence arrives, let outdated beliefs decay, and launch inquiry when its knowledge proves insufficient for reasoning. EAA makes it possible for a machine not only to “know,” but also to understand the quality of its knowledge and to act when that knowledge falls short.
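To make the three-layer substrate concrete, here is a minimal sketch in Python. Everything in it is illustrative: the class names, the promotion threshold, and the promotion rule are my own assumptions about how a hunch-to-proposition-to-graph pipeline might look, not Ren’s actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a three-layer epistemic substrate.
# Names and the promotion threshold are assumptions, not Ren's API.

@dataclass
class Hunch:
    """Proto-knowledge: an RDF-style triple over raw, unresolved strings."""
    subject: str
    predicate: str
    obj: str

@dataclass
class Proposition:
    """A structured belief between resolved entities, not yet verified."""
    subject_id: str
    predicate: str
    object_id: str
    confidence: float                       # 0.0 .. 1.0
    provenance: list[str] = field(default_factory=list)

class Graph:
    """Store of resolved knowledge the system holds as verified fact."""
    def __init__(self) -> None:
        self.edges: set[tuple[str, str, str]] = set()

    def assert_fact(self, s: str, p: str, o: str) -> None:
        self.edges.add((s, p, o))

PROMOTION_THRESHOLD = 0.9  # assumed cutoff for "evidence has accumulated"

def maybe_promote(prop: Proposition, graph: Graph) -> bool:
    """Promote a proposition into the graph once confidence is high enough."""
    if prop.confidence >= PROMOTION_THRESHOLD:
        graph.assert_fact(prop.subject_id, prop.predicate, prop.object_id)
        return True
    return False
```

The key design point the sketch tries to show is that each layer relaxes a different constraint: hunches don’t require resolved entities, propositions don’t require verification, and only the graph asserts truth.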
Reflective Reasoning: The “Thinking” Part
Reflective Reasoning (RR) is how Ren thinks – how it draws inferences, evaluates what it knows, and adapts its own reasoning over time. RR brings together two intertwined traditions: symbolic reasoning, the classic machinery of logic and inference, and meta-reasoning, the capacity to reason about one’s own reasoning. Together they allow the system not just to produce conclusions, but to understand how and why those conclusions arise – and when to question them.
- Symbolic Reasoning. In its classical form, symbolic reasoning represents knowledge as explicit rules and propositions and uses logical operations to derive new truths. This tradition gives Ren structure and discipline – the ability to make consistent, inspectable inferences over its epistemic substrate, and to trace how each conclusion follows from its premises. In the broader architecture, symbolic reasoning is the connective tissue between knowing and acting – the place where facts become implications and intentions.
- Meta-Reasoning. Meta-reasoning adds a second loop – the ability to monitor and adjust the reasoning process itself. Rather than treating inference as fixed, it treats it as a subject of reflection. Reflective Reasoning uses meta-reasoning to evaluate which curiosities to pursue, which beliefs to revisit, and which inference paths to prune or strengthen. Over time, this allows the system to refine its reasoning strategies – to learn how to think better from its own experience.
Together, these traditions form a two-level reasoning architecture. The inner loop performs direct inference over knowledge; the outer loop observes and tunes that process. In Ren, those loops aren’t confined to one place – they’re distributed. Each agent holds a small, local reasoning capability, and reflective reasoning emerges collectively as those agents share signals and revise beliefs. Reasoning, in other words, is not a single computation inside Ren – it’s an ongoing conversation among its parts, guided by reflection.
RR is what makes Ren adaptive and self-correcting. It allows the system to reason logically, recognize when its reasoning is insufficient, and improve its own process with experience – thinking not just about the world, but about its own thinking.
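The two-level architecture can be sketched as a pair of functions: an inner loop that forward-chains over rules, and an outer loop that watches which rules actually contributed and prunes the rest. This is a toy under my own assumptions (rules as premise-set/conclusion pairs, pruning as the only meta-move); Ren’s real reflective machinery is distributed across agents rather than centralized like this.

```python
# Toy two-level reasoning loop: infer() is the inner loop,
# reflect() a crude outer loop. Rule format is an assumption.

Rule = tuple[frozenset[str], str]   # (premises, conclusion)

def infer(facts: set[str], rules: list[Rule]) -> tuple[set[str], set[int]]:
    """Inner loop: forward-chain until no new conclusions appear,
    recording which rules fired along the way."""
    derived = set(facts)
    fired: set[int] = set()
    changed = True
    while changed:
        changed = False
        for i, (premises, conclusion) in enumerate(rules):
            if premises <= derived:
                fired.add(i)
                if conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
    return derived, fired

def reflect(rules: list[Rule], fired: set[int]) -> list[Rule]:
    """Outer loop: keep only rules that contributed to this run --
    a minimal stand-in for pruning or strengthening inference paths."""
    return [r for i, r in enumerate(rules) if i in fired]
```

The point of the second function is the shift in subject matter: `infer` reasons about the world, while `reflect` reasons about `infer`.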
Improvisational Troupe of Agents: The “Acting” Part
The Improvisational Troupe of Agents (ITA) is a synthesis of three major traditions in multi-agent systems. From BDI it borrows the idea that agents hold beliefs and act on desires and intentions, while leaving behind heavy plan libraries and deliberation machinery. From swarm intelligence it takes emergent coordination through simple local rules, but replaces massive homogeneous populations with smaller clusters of specialized agents. From Adaptive Multi-Agent Systems it inherits the sense of adapting based on experience across runs, without the burden of explicit reward maximization or policy optimization.
- Belief-Desire-Intention (BDI). The Belief-Desire-Intention (BDI) model is one of the most established frameworks in agent design. In its classic form, an agent maintains explicit beliefs about the world, has desires or goals it wants to achieve, and forms intentions that become multi-step plans drawn from a library. This gives BDI agents a clear, goal-oriented structure and the ability to engage in sophisticated deliberation. ITA borrows the strength of this orientation – agents in an ITA architecture still hold beliefs and act from desires – but leaves behind the heavy machinery of planning and plan libraries. Instead, each agent is reduced to a simple rule: when its beliefs and desires diverge, it triggers a single, straightforward action. This micro-simplicity is what makes larger improvisation possible.
- Swarm Intelligence. Swarm approaches, modeled on ants, bees, and flocking birds, demonstrate how complex behavior can emerge from the interaction of many simple agents following local rules. In these systems, no individual has a global view; coordination arises indirectly as each agent responds to its immediate environment, and large-scale order emerges from countless small acts. ITA carries forward this principle of emergence – the idea that coordination and complexity need not be planned in advance. What it leaves behind is the dependence on massive populations of nearly identical agents. Instead, ITA works with a smaller troupe of differentiated agents, each with its own specialty. The collective patterns arise not from sheer numbers but from the interplay of diverse roles responding to shared signals.
- Adaptive Multi-Agent Systems (AMAS). Adaptive multi-agent systems explore how collections of agents adjust their behavior over time through interaction and feedback. Rather than optimizing against an explicit reward function, agents learn implicitly from experience – adapting their responses based on outcomes, context, and the behavior of others. ITA borrows this adaptive spirit – the idea that experience across runs should shape how agents behave in the future – but avoids the heavy machinery of formal optimization. In ITA, learning is lighter: not the pursuit of an optimal policy, but the gradual shaping of tendencies, habits, and biases through repeated experience. It resembles synaptic adjustment more than reward maximization – closer to axons strengthening or weakening through use than to computing explicit value functions. This allows adaptation without the overhead of reinforcement optimization.
The result of this synthesis is an architecture where behavior is emergent, reactive, and experience-shaped: a set of specialized agents improvising together, accumulating tendencies from past runs, all without a central planner. The Improvisational Troupe of Agents allows the system to be goal-driven without relying on rigid plans, producing flexibility and efficiency that none of BDI, swarm intelligence, or adaptive multi-agent systems alone can provide.
Think of it like constellations lighting up in the night sky. A hospital discharge creates one pattern of activation as agents focused on medication reconciliation, family communication, and follow-up scheduling respond. A family conflict lights up a completely different constellation as agents specialized in mediation, resource coordination, and emotional support activate together. The constellations aren’t fixed – they emerge in response to specific contexts and needs. The system adapts not by replanning, but by allowing context-appropriate clusters to form organically from the stimulus environment.
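The constellation idea reduces to a simple mechanism: each agent declares the stimuli it cares about, and a cluster forms from whichever agents overlap the current stimulus environment. The roster and tags below are invented for illustration, roughly following the discharge and conflict scenarios above.

```python
# Sketch of context-driven activation ("constellations").
# The agent roster and stimulus tags are invented examples.

TROUPE: dict[str, set[str]] = {
    "med_reconciliation": {"discharge"},
    "family_comms":       {"discharge", "conflict"},
    "follow_up":          {"discharge"},
    "mediation":          {"conflict"},
    "emotional_support":  {"conflict"},
}

def constellation(stimuli: set[str]) -> set[str]:
    """Agents activate when their interests intersect the stimulus
    environment -- no central planner selects the cluster."""
    return {name for name, interests in TROUPE.items() if interests & stimuli}
```

Different stimuli light up different, possibly overlapping clusters; nothing in the mechanism enumerates the clusters in advance.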
Neuro-Symbolic: The “Interfacing” Part
I have a deeper dive on the Neuro-Symbolic synthesis in my post Hybrid & Neuro-Symbolic AI: A Different Take.
Ren’s neuro-symbolic layer is how it connects to the human world – the part that listens, interprets, and responds in natural language. It bridges two very different domains: the symbolic systems that represent structured knowledge, and the neural systems that understand human expression in all its ambiguity.
The neural side – large language models and other generative tools – gives Ren fluency. It can read documents, interpret conversations, and understand intent even when phrased informally or emotionally. The symbolic side gives it grounding. Every interpretation must ultimately map to propositions, beliefs, and entities within Ren’s epistemic substrate, where reasoning and truth maintenance occur.
Neuro-symbolic integration is what lets Ren move gracefully between these worlds. It’s how a sentence becomes a proposition, how a caregiver’s note becomes an assertion, and how a conversation becomes an update to shared understanding. The neural components handle the noise and nuance of human communication; the symbolic components ensure that meaning becomes knowledge – structured, traceable, and actionable.
In this sense, the neural layer is not Ren’s “brain” but its interface tissue – translating between the fluidity of human language and the precision of symbolic reasoning. It allows Ren to inhabit human contexts naturally – to converse, to read, to interpret – while maintaining the rigor and reliability of its underlying knowledge systems.
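A minimal sketch of that handoff: a stand-in for the neural side parses an utterance into a triple, and the symbolic side grounds it against an entity catalog, or, when grounding fails, leaves it as a hunch. The `extract_triple` stub handles exactly one sentence pattern where a real system would use an LLM, and the catalog is invented.

```python
# Minimal neural-to-symbolic handoff. `extract_triple` is a stub
# standing in for an LLM call; the entity catalog is invented.

ENTITIES: dict[str, str] = {"Mom": "person:mom", "Dr. Lee": "person:dr_lee"}

def extract_triple(utterance: str) -> tuple[str, str, str]:
    """'Neural' side (stubbed): parse a caregiver note into a raw triple.
    Handles only the pattern 'X saw Y.' for illustration."""
    subject, _, rest = utterance.partition(" saw ")
    return (subject.strip(), "saw", rest.strip(" ."))

def ground(triple: tuple[str, str, str]) -> dict:
    """Symbolic side: map surface strings to entity IDs.
    Unresolvable triples stay at the hunch layer rather than being lost."""
    s, p, o = triple
    if s in ENTITIES and o in ENTITIES:
        return {"subject": ENTITIES[s], "predicate": p,
                "object": ENTITIES[o], "status": "proposition"}
    return {"raw": triple, "status": "hunch"}
```

The useful property is the graceful failure mode: a sentence the symbolic side cannot ground is demoted, not discarded, so later evidence can still resolve it.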