Building Context Pillars: From Observation to Proof of Concept
It started with two articles making the same argument from different angles.
One was Fowler on knowledge priming. The other was a ZDNet piece on elite AI coding practices. Both observed the same thing: the people getting the most from AI have built systems, not better prompts. They’ve created structured ways to feed context into their tools consistently.
That observation opened up a question: what does that mean for non-technical, casual users?
Most of the discourse around “getting good at AI” assumes you’re a developer, or at least someone comfortable building tooling around your workflow. But the insight - that context quality determines output quality - applies to everyone. The implementation shouldn’t require engineering skills.
The three-layer model
When I thought about what context actually means, a structure emerged. Context has layers, each with a different lifespan and purpose.
Layer 1 - Personal onboarding (stable)
Who you are, how you think, how you work, what useful help looks like for you. This is portable across jobs, tools, and contexts. It changes slowly as you evolve. Update frequency: years.
Layer 2 - Working context (semi-stable)
The specific relationship or role you’re in right now. The founder dynamic, the key stakeholder relationship, the team you’re embedded in. More transient than personal philosophy, more personal than the landscape. Update frequency: when the role or relationship shifts.
Layer 3 - Domain landscape (perishable)
The world you’re operating in. Industry dynamics, political pressures, active threats and opportunities, key players and what they want. Specific, time-sensitive, needs updating whenever the environment shifts. In volatile industries or during organisational change, this could need refreshing weekly.
Different layers, different documents, different update frequencies. The insight is that you don’t need to brief AI on everything every time - you need to create reusable context at the right level of abstraction.
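The three layers can be thought of as three documents assembled into one briefing at the start of a session. A minimal sketch of that idea, assuming one markdown file per layer (the file names and staleness thresholds here are illustrative assumptions, not part of the framework):

```python
from datetime import datetime, timedelta
from pathlib import Path

# Illustrative staleness thresholds per layer (assumptions, not prescriptions):
# Layer 1 changes over years, Layer 2 when a role shifts, Layer 3 weekly.
STALENESS = {
    "1-personal-onboarding.md": timedelta(days=365),
    "2-working-context.md": timedelta(days=90),
    "3-domain-landscape.md": timedelta(days=7),
}

def build_briefing(context_dir: Path) -> str:
    """Concatenate whichever layer documents exist into one briefing,
    flagging any layer whose file is older than its threshold."""
    sections = []
    for name, max_age in STALENESS.items():
        path = context_dir / name
        if not path.exists():
            continue  # a missing layer is simply omitted
        age = datetime.now() - datetime.fromtimestamp(path.stat().st_mtime)
        warning = " (STALE - consider refreshing)" if age > max_age else ""
        sections.append(f"## {name}{warning}\n\n{path.read_text()}")
    return "\n\n".join(sections)
```

The point of the sketch is the asymmetry: the same assembly step runs every time, but only the Layer 3 file demands frequent attention.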
Building the interview prompts
The prompts needed to extract this context through conversation, not forms. Nobody wants to fill out a questionnaire about themselves. But most people will happily answer good questions in a conversation.
The key design decisions:
One question at a time. The prompt explicitly instructs the AI to ask a single question, wait for the answer, then follow interesting threads before moving on. This prevents the common failure mode where AI dumps five questions at once and gets shallow answers to all of them.
Concrete anchors, not abstract preferences. Instead of “What’s your working style?” the prompt asks “Tell me about a recent piece of work you’re genuinely proud of - what was it, and what made it good?” Specific situations produce specific, useful context. Abstractions produce generic descriptions that sound like everyone.
Six areas, naturally explored. The onboarding prompt covers how you think and work, what you’re trying to achieve, what you’ve learned the hard way, how you collaborate, your red lines, and what useful help looks like. But it explores these through conversation, not a checklist.
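The three design decisions above can be seen together in a skeleton prompt. This is a hypothetical reconstruction for illustration, not the actual Context Pillars prompt:

```python
# Skeleton onboarding interview prompt illustrating the three design
# decisions: one question at a time, concrete anchors, six areas explored
# conversationally. Hypothetical reconstruction, not the published prompt.

AREAS = [
    "how you think and work",
    "what you're trying to achieve",
    "what you've learned the hard way",
    "how you collaborate",
    "your red lines",
    "what useful help looks like",
]

ONBOARDING_PROMPT = f"""You are interviewing me to write a personal
onboarding document (Layer 1 context).

Rules:
- Ask ONE question at a time. Wait for my answer, follow any interesting
  thread, then move on. Never ask several questions in a single message.
- Anchor questions in concrete situations, not abstract preferences.
  Good: "Tell me about a recent piece of work you're genuinely proud of -
  what was it, and what made it good?"
  Bad: "What's your working style?"
- Over the conversation, naturally cover these six areas (no checklist):
{chr(10).join('  * ' + area for area in AREAS)}

When the interview feels complete, write the onboarding document in my
voice, not in generic professional language."""
```

Pasting a prompt like this into any chat AI is the whole installation step; there is nothing to build.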
What worked
Ran the prompt with myself as the subject. The results were encouraging:
- Concrete question anchors produced specific stories, not generic self-descriptions
- The one-question-at-a-time instruction held throughout
- The final document had fingerprints - it sounded like me, not a generic professional
What needs improving
Three things to fix in the next iteration:
- The collaboration section drifted toward preferences rather than specific stories. Need to tighten the instruction to push for a named relationship or specific situation.
- The final document had a minor misread on one section - add an instruction to confirm interpretation before writing.
- The cheat sheet (a 3-5 sentence quick-reference version) emerged as an unprompted bonus. It should be baked in as a required second output.
The proof of concept
The real test was the landscape prompt. I created a document for a real personal scenario - navigating organisational change at a large company with a complex transformation underway.
Then I asked a genuine question I was facing: “I’m trying to move into a more strategic role during a period of organisational uncertainty. How should I position myself?”
I asked it cold first. Generic career advice. Technically fine, completely useless.
Then I loaded the landscape document and asked again.
Seven material shifts in the response:
- Framing shifted from capability to risk reduction - because the organisation’s culture prioritises stability over innovation. Demonstrating value meant demonstrating reduced risk, not ambition.
- Career framing shifted from self-advocacy to organisational usefulness - because in uncertain times, perceived self-interest weakens your case. Perceived usefulness to others strengthens it.
- Much stronger emphasis on one concrete proof of concept - because the landscape explicitly stated “one well-placed proof of concept is worth more than a well-argued proposal.”
- Shifted from “build general support” to “cultivate specific advocates” - because relationships with specific decision-makers matter enormously in these situations.
- Explicit warning against ideological framing - because the organisation’s transformation was uneven and politically sensitive. Purity arguments land badly. Operational outcomes arguments land well.
- Reframed the target function as fragile, not as a destination - important for positioning yourself as someone who helps it succeed, not someone seeking a lifeboat.
- Independent side projects reframed as operational credibility - not as a hobby or curiosity, but as evidence of capability that reduces organisational risk.
The money quote
Without context: “Demonstrate leadership capability.”
With context: “Demonstrate that you reduce the risk that the emerging function’s initiatives fail operationally.”
That’s not a small tweak. It’s a completely different argument aimed at a completely different audience motivation.
What this proves
The landscape document didn’t change the universal principles - demonstrate value, build credibility, create advocates. Those are true everywhere.
What it changed was the emphasis, tone, and tactical framing in ways specific to that organisation, that culture, and that moment. Generic advice applied to a specific situation would have been technically correct and practically wrong.
The AI didn’t need to be smarter. It needed to be briefed.
The proof of concept took approximately 20 minutes including creating the landscape document. The output was immediately actionable advice specific enough to be genuinely useful.
That’s the case for Context Pillars in one paragraph.
What’s next
The prompts are public and usable now. Three things on the roadmap:
- Tighten the onboarding prompt based on the observations above
- Build a Layer 2 prompt for working context (specific roles and relationships)
- Test whether the onboarding document actually improves AI conversations over time - closing the loop on the original hypothesis
The framework was tested against a real scenario. A fictional example (a consultant navigating a digital transformation at a management consultancy) is included in the repo for anyone who wants to see what a landscape document looks like in practice.
Context Pillars is available now - two interview prompts you can paste into any AI to create reusable briefing documents. The prompts, the example, and the thinking notes are all in the GitHub repo.