Memory Isn't Enough
Most people building personal AI assume the same thing: give the AI enough memory, and it will eventually understand you.
I've been skeptical of that for a while.
I spent time studying how activity tracking tools work — logging what apps you open, how long you spend on tasks, when you're in deep focus versus context-switching. That data is useful. It tells an AI what you do.
But what you do and who you are aren't the same thing.
A photographic memory doesn't make someone understand you. I've worked with assistants, human and AI, who remembered every meeting I'd taken but still couldn't predict a judgment call I'd make. The facts were there. The understanding wasn't.
That gap is what we kept hitting as we built IrisGo.
The missing piece isn't more data. It's a model of the person.
Not a profile — name, role, a few toggles. I mean something closer to how someone actually thinks: the values that show up when they make a hard call, the way they communicate differently under pressure versus when they have room to breathe. None of that lives in a calendar or a browsing history. It lives in the gap between what someone says and what they mean.
We spent a long time trying to pin this down. The result is what we call the Context-Aware Engine (CAE), built around two components: a Root Persona and a Mask System.
The Root Persona is fixed. It captures how someone reasons, what they care about, how they approach decisions. We build it through structured interviews and behavioral patterns, and once it's set, it doesn't change based on context.
The Mask System does change. The same person talks differently with their engineering team than with an investor, differently when they're in a brainstorm versus a crisis. That's not inconsistency — it's just how people work. The Mask System lets agents adapt their framing to the situation without losing the thread of who the person actually is.
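One way to picture the split between the two components is as a data model: the Root Persona is built once and frozen, while Masks vary freely per context. This is a minimal illustrative sketch, not IrisGo's actual implementation; every name and field here is an assumption made for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Root Persona / Mask split.
# frozen=True enforces the core design point: the Root Persona
# is fixed once built and never changes with context.
@dataclass(frozen=True)
class RootPersona:
    """Fixed model of how a person reasons and decides."""
    values: tuple[str, ...]
    decision_style: str
    communication_baseline: str

@dataclass
class Mask:
    """Context-dependent framing layered over the same Root Persona."""
    audience: str  # e.g. "engineering team" vs. "investor"
    mode: str      # e.g. "brainstorm" vs. "crisis"
    tone: str

def frame_message(persona: RootPersona, mask: Mask, draft: str) -> str:
    """Adapt a draft to the situation without altering who the person is."""
    return (
        f"[to {mask.audience}, {mask.mode} mode, tone={mask.tone}; "
        f"grounded in values: {', '.join(persona.values)}] {draft}"
    )

persona = RootPersona(
    values=("directness", "long-term trust"),
    decision_style="first-principles",
    communication_baseline="concise",
)
investor_mask = Mask(audience="investor", mode="update", tone="measured")
print(frame_message(persona, investor_mask, "Q3 shipped on time."))
```

The immutability of `RootPersona` is the point of the sketch: a Mask can reframe the same message for a different audience, but attempting to mutate the persona itself raises an error.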
Memory tells an agent what happened. Persona tells it why the person made the call they made.
That matters more than it sounds. When an agent is drafting a response to a difficult partner, or framing a team update, memory gives it the facts. Persona gives it judgment.
This is the premise behind the CAE in IrisGo. Not a feature — the starting assumption: that a personal AI needs to know you well enough to act on your behalf, not just retrieve information when asked.
We're still building. But the bet is clear: understanding someone is a modeling problem, not a retrieval one.
— Lman Chu
Co-founder & COO, IrisGo