Notes From the Practice
How to Context Engineer your own Team of Experts

The single biggest determinant of whether your AI agents are useful or useless is not the model. It's what they know about you. Andrej Karpathy made this point go viral last year with a deceptively simple suggestion: build a personal knowledge base in Obsidian, and let your agents read from it.
The standard way to use AI agents is throwaway. You open a chat, type a prompt, get a response. Next time you want the same kind of work done, you open another chat and start over. The agent doesn't remember anything about you: your business, your standards, your past decisions, your voice.
The Agent that Forgets Everything
Every conversation is the agent meeting you for the first time. For one-off questions, this is fine. For ongoing work, the kind a real employee would do, it's the central failure mode. You spend more time re-explaining context than getting actual leverage. The first few months of working with agents tend to follow the same arc. Initial excitement at how capable the models are. Followed by a slow, irritating realisation that you're typing the same context over and over. By month three, most people have quietly stopped using them.
What Karpathy Actually Suggested
In mid-2024, Andrej Karpathy — formerly of OpenAI and Tesla, one of the most respected voices in machine learning — posted what looked like a passing observation on X. The gist: he'd been using Obsidian (a popular Markdown notes app) as a kind of long-term memory for LLMs, and finding it surprisingly powerful. Each note in his vault became context an agent could draw on. His existing personal knowledge base, in other words, doubled as agent fuel.
The post went viral, and for good reason. It pointed at something most people in the AI agent space were missing: the bottleneck isn't model capability. The bottleneck is context. A capable model with no context about you produces generic work. A less capable model with rich context about you produces work that sounds like you. The second is almost always more useful.
Why Obsidian Specifically
Obsidian, and the broader category of Markdown-first notes apps, happens to be ideal for this, for several reasons.
First, Markdown is plain text. Every LLM can read Markdown natively, with no special parsing. No proprietary format to wrestle with. Your notes are portable, future-proof, and machine-readable.
Second, Obsidian's structure is open. Notes link to other notes; tags group concepts; folders organise. This isn't just for your benefit — it's a graph that agents can traverse. A note about a client links to a note about their preferences, which links to a note about how you write follow-up emails. The agent can walk the graph.
Third, you already need to take notes. The discipline of writing things down (decisions, standards, observations) is foundational to almost any expert practice. Karpathy's suggestion wasn't "do new work for the AI." It was "the work you should already be doing for yourself is also what the AI needs."
How to Actually Set This Up
The basic recipe is straightforward. You don't need plugins, AI integrations, or fancy frameworks to start. You need a vault (a folder of Markdown files; Obsidian creates one by default) and, inside it, a few folders:
People — one note per person you work with regularly. Their role, preferences, communication style, any standing instructions.
Projects — one note per ongoing engagement. Status, decisions made, open questions, key documents.
Standards — how you do things. Email tone, document format, decision frameworks, principles you operate by.
Voice — samples of your writing. Emails you've sent, paragraphs from your work, the way you structure arguments.
Decisions — the record of choices made and why. Useful for both you and any agent acting on your behalf.
The notes don't need to be long. A single paragraph per person, project, or decision is often enough. The point is that they exist — that there's a written record an agent can draw from. Then, when you want an agent to help with something, you give it access to the relevant slice of the vault. Most agent frameworks support feeding in a folder of Markdown as context. Some plugins (Smart Connections, Copilot for Obsidian) let you query the vault directly from inside Obsidian itself.
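"Feeding in a folder of Markdown as context" can be as simple as concatenating the relevant notes into one block of text. A minimal sketch, assuming a vault laid out as above; the folder names and prompt wording are illustrative:

```python
from pathlib import Path

def load_context(vault: Path, folders: list) -> str:
    """Concatenate every Markdown note in the chosen folders into one context block."""
    parts = []
    for folder in folders:
        for note in sorted((vault / folder).glob("*.md")):
            # Label each note with its folder and name so the agent knows what it's reading.
            parts.append(f"## {folder}/{note.stem}\n{note.read_text().strip()}")
    return "\n\n".join(parts)

# Usage sketch: pull only the slice relevant to the task at hand.
# context = load_context(Path("MyVault"), ["People", "Standards", "Voice"])
# prompt = f"Using the notes below as background, draft the email.\n\n{context}"
```

The design choice worth noting: you select a slice per task rather than dumping the whole vault, which keeps the context window small and the agent focused.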
The Shift From Prompts to Context
What changes when you operate this way is not just performance — it's the entire mental model.
Prompt engineering is throwaway work. You craft a clever prompt for a one-off task. The prompt is useful once, then discarded. Six months later, you've written hundreds of throwaway prompts and accumulated nothing.
Context engineering is cumulative. Every note you write — every decision recorded, every standard codified, every voice sample preserved — makes future work easier. The agent doesn't just answer better; it answers like you. And the marginal cost of the next piece of work is lower than the last.
This is what makes context engineering an investment rather than an expense. The vault is an asset that compounds. The prompt is a tweet.
What to Write Down First
Most people get stuck here. They open Obsidian, see a blank vault, and don't know where to start. A few principles help.
Write what you'd onboard a new hire with. If you were hiring an assistant tomorrow, what would they need to know in their first week? Your top three clients. Your meeting cadence. Your filing system. The five emails you write most often. Start there.
Capture decisions as you make them. When you decide something — a pricing change, a positioning shift, a tool migration — write a short note: what you decided, why, what you considered, what you'd revisit. These notes are gold for agents trying to act consistently with how you'd act.
Save voice samples. Take three emails you've recently written that feel like you. Strip identifying details, paste them into a Voice folder. An agent given those samples will write in your register far better than one without them.
Don't aim for completeness. A twenty-note vault that's actively maintained outperforms a two-hundred-note vault that's stale. Start small, keep it current.
The Practitioner's Perspective
We do this professionally for clients — building and maintaining their context layer so they don't have to. It's roughly half the work of composing a useful retinue of digital staff. The agents themselves are the easy part. The context they operate from is what makes them feel like employees rather than chatbots.
For most engagements, we start by interviewing the principal, observing their workflows, and writing the foundational vault for them. Then we hand it over, and they maintain it from there. It's their knowledge graph, not ours.
But none of this is gated to clients. If you're willing to put in the discipline, you can do it yourself. The first thirty days are the hardest — the habit of writing down what you'd otherwise hold in your head feels strange, then natural. After that, it's just a way of working.
Where to Start if You Start Today
Pick one workflow. Just one. Inbox triage, weekly planning, project status updates — something you do regularly that's currently in your head.
Write five to ten notes about it. How you currently do it. What decisions tend to come up. What your standards are. What the people involved care about.
Then, ask an LLM — Claude, ChatGPT, whatever — to do that workflow on your behalf, with those notes as context. Compare the result to what you'd produce yourself. Iterate the notes based on what's missing.
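The step above can be sketched as code. This assumes an OpenAI-style chat API; the folder name, model name, and task are placeholders, and any LLM client would slot in the same way:

```python
from pathlib import Path

def build_messages(notes_dir: Path, task: str) -> list:
    """Fold every note for one workflow into the system message; the task goes in the user message."""
    notes = "\n\n".join(p.read_text().strip() for p in sorted(notes_dir.glob("*.md")))
    return [
        {"role": "system", "content": f"Act on my behalf. Background notes:\n\n{notes}"},
        {"role": "user", "content": task},
    ]

# With a real client (requires an API key; model and vault path are illustrative):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages(Path("MyVault/Inbox Triage"), "Triage today's inbox."),
# )
```

Iterating then means editing the notes, not the code: when the output misses something, the fix is a new or sharper note in the folder.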
Within a month, you'll have a working agent for one part of your operation. Within three, you'll have several. Within a year, your vault will be doing real work for you every day — and the marginal effort of adding new capabilities will keep dropping.
That's what Karpathy was actually pointing at. Not a tool. A way of working.

