I built this to answer a specific question I had: how does dynamic memory actually work in an AI agent? Not conceptually, but mechanically. What does the code look like when an agent remembers something across sessions without you repeating yourself?
The project is a behavioral interview coach. It tracks your target role, the stories in your answer bank, your weak spots, and how you like to be coached. Every time you come back, it already knows you. I made a YouTube video walking through the build if you want to see it in action.
The underlying pattern comes from OpenAI's Context Personalization cookbook, which I used as a foundation and extended with SvelteKit and the OpenAI Agents SDK. The pattern itself is clean and worth understanding: the agent silently writes memory notes into a staging area mid-conversation; lifecycle hooks inject those notes into the system prompt on every turn; and at the end of the session, a second LLM call consolidates the session notes into long-term global memory, promoting what's durable and dropping what was only relevant to that conversation.
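To make the mechanics concrete, here's a minimal sketch of that loop. All names here are hypothetical, and the real project wires this into the Agents SDK's hook API; the consolidation step is shown as a trivial stand-in where the second LLM call would go.

```typescript
// A session-scoped note the agent writes silently via a tool call.
type Note = { text: string; turn: number };

class MemoryStore {
  staging: Note[] = []; // notes written mid-conversation, this session only
  global: string[] = []; // durable memory carried across sessions

  // The agent's "remember this" tool lands notes here without surfacing
  // them to the user.
  stage(text: string, turn: number): void {
    this.staging.push({ text, turn });
  }

  // Lifecycle hook: before each turn, inject everything the agent knows
  // into the system prompt.
  buildSystemPrompt(base: string): string {
    const lines = [
      ...this.global.map((m) => `- ${m}`),
      ...this.staging.map((n) => `- (this session) ${n.text}`),
    ];
    if (lines.length === 0) return base;
    return `${base}\n\nWhat you know about the user:\n${lines.join("\n")}`;
  }

  // End of session: promote durable notes into global memory and drop
  // the rest. In the real pattern this is a second LLM call that judges
  // durability; here a "durable:" prefix stands in for that judgment.
  consolidate(): void {
    for (const n of this.staging) {
      if (n.text.startsWith("durable:")) {
        this.global.push(n.text.slice("durable:".length).trim());
      }
    }
    this.staging = [];
  }
}
```

The key design point is that the agent never blocks the conversation to manage memory: staging is a silent side effect, injection is a per-turn hook, and the expensive judgment call about what to keep happens once, after the session ends.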
What I found interesting is how domain-agnostic the architecture is. Swap the state schema and the prompts and you have a deal coach, a fitness coach, a language tutor. The memory mechanics are identical. That’s the part I’d encourage anyone to look at if they’re curious about building agents that actually retain context.