Building Laura: An AI Research Assistant Who Creates Artefacts, Not Answers
Thirty-five years ago, I spent long afternoons in the university library researching the Spanish Civil War. I’d pull books from dusty shelves, photocopy journal articles, take handwritten notes on index cards, and occasionally fall asleep over some dense academic text about the International Brigades.
Last week, I revisited that same topic. Same research question - was the Spanish Civil War really a European conflict? Same academic sources. But this time, instead of library visits and index cards, I typed /laura and let an AI research assistant build me a structured knowledge base.
The vault now contains concept notes on the International Brigades, the Condor Legion, and the Guernica bombing. Source notes with proper citations. Thematic maps connecting international intervention to the path to World War II. Everything linked. Everything mine.
No library visits. No falling asleep over books. And honestly? The output is more systematically organised than anything I produced as an undergraduate.
The Problem With AI Research Tools
Most AI tools make research worse, not better.
They answer questions - which feels productive in the moment - but leave you with nothing persistent. Ask about transit-oriented development today, and you’ll get a helpful summary. Ask again next week, and you’re starting from zero. The knowledge never compounds because it never becomes yours.
I wanted something different: an AI that builds artefacts, not just provides answers. Every research session should produce files that persist, connect, and grow. Files I own. Files that live in my knowledge base long after the chat session ends.
Enter Laura
Laura is an AI research persona implemented as a Claude Code skill. Type /laura in any research vault, and she activates - ready to conduct systematic research and build your knowledge base.
The name stuck because she feels approachable. Unflappable. The kind of research assistant who doesn’t get flustered when you ask about obscure historiographical debates from the 1980s.
What makes Laura different is the mandate: create files, don’t just discuss. Every research request produces actual markdown in your Obsidian vault:
- Concept notes in /concepts/ - one idea per file, with definitions, context, and relationships
- Source notes in /sources/ - full citations, key arguments, critical assessment
- Question notes in /questions/ - gaps and contradictions that need investigation
- Maps of content in /themes/ - navigable structures connecting concepts
When you ask Laura to research the International Brigades, you don’t get a chat response. You get a dozen new markdown files appearing in Obsidian, each linked to the others, each following consistent templates.
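Concretely, a single session might leave behind a vault structure like this (file names are illustrative, not the actual output):

```text
vault/
├── concepts/
│   ├── international-brigades.md
│   ├── condor-legion.md
│   └── guernica-bombing.md
├── sources/
│   └── beevor-battle-for-spain.md
├── questions/
│   └── volunteer-motivations.md
└── themes/
    └── international-intervention.md
```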
Testing It On History I Already Know
The Spanish Civil War vault was a deliberate test. I know this material - or at least, I knew it once. Could Laura build a knowledge base that would satisfy someone who actually understood the domain?
She passed. Here’s what the International Brigades concept note looks like:
> **Definition:** Foreign volunteer military units that fought for the Spanish Republic against Franco’s Nationalist forces during the Spanish Civil War (1936-1939). Organised and directed by the Comintern with headquarters in Paris.
>
> **Scale:** 32,000-35,000 volunteers from approximately 50 countries (never more than 20,000 active simultaneously)
>
> **Contrasts with:** [[condor-legion]] - German state intervention vs volunteer movement
The note goes on to discuss class composition, Jewish volunteer representation, recruitment networks, and links to a dozen related concepts. It flags historiographical debates - were volunteers motivated by ideology, adventure, or community networks? It poses open questions for further investigation.
This isn’t a generic AI summary. It’s structured academic knowledge-building with proper cross-referencing and citation tracking.
The Philosophy: Augmentation, Not Automation
Laura doesn’t write your dissertation. She doesn’t make your arguments for you. She handles the scaffolding - the structure, the connections, the organisation - so you can focus on thinking and understanding.
The knowledge base is yours. The insights are yours. The writing is yours.
This distinction matters. Academic research tools that generate content are teaching you to perform someone else’s thinking. That might produce a passing grade, but it won’t develop your understanding. Laura creates the infrastructure for your own intellectual work.
She’s also rigorously careful about sources. The skill file includes explicit mandates:
> NEVER invent citations. Only cite sources that the user has provided directly, you have fetched via web tools, or you have explicitly verified exist.
For academic vaults, she defaults to maximum caution. Claims need citations. Quotes need page numbers. General knowledge gets flagged as #needs-verification. The goal is infrastructure you can trust, not AI hallucination dressed up as scholarship.
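An illustrative source-note fragment showing these conventions in practice (the headings and field names here are my assumptions, not copied from the actual skill file):

```markdown
# example-source-note.md (illustrative)

**Citation:** Author, A. *Full Title*. Publisher, Year.

**Key argument:** Each claim is tied to a citation; direct quotes
carry page numbers ("…quoted text…", p. 123).

**Flag:** General-knowledge claims not yet traced to a source are
tagged #needs-verification.
```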
The Multi-Persona System
Laura isn’t alone. The Obsidian Research Assistant includes three personas, each with distinct expertise:
Laura (Research Assistant) - conducts systematic research, processes papers, builds concept notes with proper academic citations
Alex (Solution Architect) - evaluates technology options, creates Architecture Decision Records, assesses risks and trade-offs
Riley (Product Owner) - writes user stories, defines value propositions, prioritises features
Each persona reads what the others have built. Laura researches event streaming platforms; Alex evaluates the options and creates an ADR; Riley ensures the decision connects to user value. Same vault, multiple perspectives.
The handoff between personas is where the system gets interesting. Laura’s research becomes Alex’s input. Alex’s architectural decisions become Riley’s constraints. The vault accumulates knowledge from different professional viewpoints, all interconnected.
Skills Over Prompts
Laura is implemented as a Claude Code skill - a SKILL.md file that lives in each vault’s .claude/skills/ directory. This is different from a prompt.
A prompt is instructions you give for a single interaction. A skill is methodology that persists across sessions. Laura’s skill file runs to over 700 lines, covering:
- Research methodology principles
- Source quality assessment criteria
- Information architecture patterns
- Workflow patterns for different research tasks
- Quality standards and self-checks
- Academic research protocols for dissertation-level work
When you invoke /laura, all of this methodology loads. You’re not just talking to Claude; you’re talking to Claude operating within a rich framework for systematic knowledge-building.
The skill file includes explicit self-checks:
Before responding to ANY research request, ask yourself:
- Have I created actual .md files in the vault folders?
- Will the user see new files when they look in Obsidian?
- Do my wikilinks match the actual filenames I created?
If the answer to any of these is NO, you have NOT completed the task.
Early versions of Laura would sometimes describe what files to create instead of creating them. Adding explicit behavioural mandates fixed this completely.
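For shape, a SKILL.md typically opens with YAML frontmatter naming and describing the skill, followed by the methodology sections. This is a condensed, illustrative sketch, not the actual 700-line file:

```markdown
---
name: laura
description: Systematic research assistant. Builds concept, source,
  question, and theme notes as markdown files in the vault.
---

# Research methodology principles
Artefacts, not answers. Every request produces .md files.

# Quality standards and self-checks
Before responding, verify: files created, visible in Obsidian,
wikilinks match the actual filenames.
```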
What I Learned Building This
The file creation mandate is essential. Without explicit instructions to create files, AI defaults to conversation. The skill file hammers this point repeatedly - artefacts, not answers; create files, don’t discuss.
Skills beat prompts. A comprehensive methodology file with templates, quality checks, and workflow patterns beats clever single-shot prompting every time. Claude follows detailed instructions reliably.
Cross-persona collaboration creates emergent value. The real power isn’t any single persona - it’s how they build on each other’s work. Research flows into architecture flows into product thinking.
Per-vault installation keeps things portable. Each vault has its own copy of the skill files. No global configuration, no external dependencies. Clone the vault, and the skills come with it.
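Installation can be as simple as copying the skill file into the vault. A sketch with hypothetical paths (substitute your own vault and skill locations):

```shell
# Sketch: per-vault skill installation (all paths hypothetical).
VAULT="my-research-vault"

# Each vault carries its own copy of the skill files - no global config.
mkdir -p "$VAULT/.claude/skills/laura"

# In practice you would copy the full SKILL.md here; a stub stands in.
echo "# Laura - research assistant skill" > "$VAULT/.claude/skills/laura/SKILL.md"

# Cloning or syncing the vault now brings the skill along with it.
ls "$VAULT/.claude/skills"
```

Because nothing lives outside the vault directory, moving the vault moves the assistant with it.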
Thirty-Five Years Later
My undergraduate dissertation is long gone - probably in a box somewhere, possibly recycled. But the research question stuck with me: was the Spanish Civil War really an internal Spanish affair, or was it a European conflict from the start?
Now I have a vault that systematically maps the arguments. Axis intervention - the Condor Legion, the Italian CTV. International Brigades drawn from fifty countries. The Non-Intervention Committee’s failure and what it revealed about European appeasement. Guernica as both atrocity and symbol.
The notes are better organised than anything I wrote at 21. The citations are cleaner. The connections are explicit. And I didn’t have to fall asleep over a single dusty book.
Laura didn’t write my dissertation. But if I ever wanted to write it again, the infrastructure is now waiting.
The Obsidian Research Assistant is open source on GitHub. Set up a vault, install the personas, and let Laura build your knowledge base.