Your AI Isn't Failing You. You're Failing to Onboard It.
My brother is smart. Genuinely smart - the kind of person who picks things up quickly, thinks clearly, and doesn’t suffer fools. So when he told me he’d given up on AI because “it kept telling me things that were wrong,” I didn’t dismiss it. I listened.
And I recognised the problem immediately. Because I’d been there too.
He was using AI as an Oracle. An all-knowing system you query for truth. And when the Oracle got something wrong - as it inevitably does - the whole model collapsed. If you can’t trust it, what’s the point?
The thing is, he wasn’t wrong about the experience. He was wrong about the mental model.
The Oracle Problem
We arrive at AI with decades of conditioning from search engines. You type a question, you get an answer. The answer is either right or wrong. Simple.
AI looks like a better search engine - it writes in full sentences, it sounds confident, it can handle complex questions. So we apply the same model. Query. Evaluate. Trust or reject.
But AI isn’t a search engine. And treating it like one is the fastest route to disappointment.
An Oracle is expected to already know everything relevant to your question. You don’t brief an Oracle. You don’t give it context. You don’t tell it what you’ve already tried. You just ask, and it answers from some mystical reservoir of complete knowledge.
AI doesn’t work like that. And more importantly - neither does anyone else you’d actually want to work with.
The Teammate Model
Think about the last time you brought someone genuinely useful into a project. A colleague, a consultant, a mentor. What made them effective?
It wasn’t that they already knew everything about your situation. It was that they were good at their craft, and you invested time getting them up to speed. You explained the background. You shared the constraints. You told them what had already been tried. And then they got to work.
That investment compounds. The more context they have, the more useful they become. The relationship builds over time.
That’s the model that unlocks AI.
Not Oracle. Teammate.
The shift sounds simple, but it changes everything about how you interact with the tool. Because it makes one thing explicit: context is your job, not the AI’s.
What Good Onboarding Actually Looks Like
When you join a new company, a good onboarding covers three things: who we are, what this project is, and what we’ve already learned the hard way. AI needs exactly the same.
Who you are and what your world looks like. Not your name - your context. What do you already know well? What are you trying to get better at? What does a good outcome look like for you? A teammate who doesn’t know this gives you generic output. One who does will calibrate to you.
What this specific piece of work is. Not just the task, but the stakes, the history, the constraints. “Write me a cover letter” versus “I’m applying for a product manager role after ten years as an engineer, I’m worried I’ll sound too technical, here’s the job description, here’s what I think they actually care about.” Same request. Completely different output.
What you’ve already tried and why it didn’t work. This is the one almost nobody does. It collapses the iteration loop dramatically. The AI stops suggesting things you’ve already rejected and focuses on the actual constraint. Your hard-won lessons are some of the most valuable context you can share - and most people treat them as throwaway.
Why This Matters Beyond AI
Here’s what I find genuinely interesting about this pattern: it’s not really about AI at all.
Good coaching works the same way. A coach who doesn’t understand your context, your history, and what you’ve already tried isn’t coaching - they’re guessing. The whole point of the first session is to make your world legible to someone who can then help you think about it differently.
Good mentoring works the same way. Good consulting works the same way. Any high-value thinking partnership requires someone to do the work of making their context explicit - and that someone is always you.
What AI does is make this pattern available at scale, on demand, across almost any domain. But only if you bring the context.
The people getting the most out of AI aren’t the ones with the cleverest prompts. They’re the ones who’ve understood that the quality of the output is almost entirely determined by the quality of the context they bring. That’s a skill. And like most skills, it gets better with practice.
My Brother, Revisited
A few weeks after that conversation, my brother came back to it. He started approaching it differently - bringing more of his world into the conversation, treating it less like a query and more like a collaboration. The results shifted.
He hasn’t become an AI evangelist. But he’s stopped dismissing it. More importantly, he’s started asking better questions - not of the AI, but of himself. What does this tool actually need from me to be useful?
That’s the right question. And it turns out, answering it makes you better at every other thinking partnership too.
This thinking led to Context Pillars - a set of interview prompts that help you create reusable briefing documents for AI. The prompts guide you through creating an onboarding document (who you are, how you think) and a landscape document (your operating environment, players, pressures). Create once, reuse everywhere.