Jironaut - an origin story


Jironaut started with a frustration I suspect a lot of teams recognise.

Backlog refinement had become painful.

Not contentious.
Not dramatic.
Just slow, vague, and oddly exhausting.

Tickets were turning up that were technically fine but conceptually thin. The “what” was there, but the “why”, the assumptions, and the risks were often missing. Conversations drifted quickly to estimates or implementation details, even when nobody was entirely sure what problem we were solving.

We had a Definition of Ready.
Almost nobody insisted on it.

And that’s understandable. I don’t think it’s the Scrum Master’s job to act as the quality police. Chasing people for missing acceptance criteria or unclear intent rarely improves trust or collaboration.

So I started wondering whether something else could play that role.

Skunkworks and a hunch

Around that time, I’d created a Skunkworks space for the team. It was deliberately low-pressure: a place to explore ideas, try things out, and lead by example rather than by mandate.

I had a hunch that AI might help here.

Not by writing tickets.
Not by enforcing process.
But by asking better questions, consistently and without judgement.

So I wrote a set of prompts.

They were fairly detailed, designed to be copied and pasted as needed, and structured around the kinds of questions we said we cared about:

  • What problem is this ticket really trying to solve?
  • What assumptions are we making?
  • What risks or trade-offs are being accepted?
  • What does “good” look like when this is done?

Alongside the prompts, I wrote example story descriptions to show what “ready enough” could look like without being prescriptive.
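As a rough sketch of the idea (the function name, wording, and framing below are my own illustration, not the actual Jironaut prompts), a reusable copy-paste prompt built around those four questions might be assembled like this:

```python
# Illustrative only: a hypothetical helper that wraps a ticket description
# in the coaching questions, producing text ready to paste into an AI chat.

REFINEMENT_QUESTIONS = [
    "What problem is this ticket really trying to solve?",
    "What assumptions are we making?",
    "What risks or trade-offs are being accepted?",
    'What does "good" look like when this is done?',
]

def build_refinement_prompt(ticket_text: str) -> str:
    """Combine the coaching questions with a ticket into one prompt string."""
    questions = "\n".join(f"- {q}" for q in REFINEMENT_QUESTIONS)
    return (
        "You are a neutral refinement coach. Read the ticket below and, "
        "without judging the author, briefly answer each question:\n\n"
        f"{questions}\n\n"
        f"Ticket:\n{ticket_text}"
    )

if __name__ == "__main__":
    print(build_refinement_prompt("As a user, I want to export my data as CSV."))
```

The point of keeping it this simple is that the prompt, not the tooling, carries the value: the same questions are asked every time, consistently and without judgement.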

To make it more engaging, I gamified it slightly.
The first version included badges and a simple scoring rubric. Not because I love gamification, but because it gave people permission to engage playfully rather than defensively.

The idea wasn’t compliance.
It was conversation.

What I was really testing

Underneath all of this was a bigger question.

Could AI act as a kind of neutral coach?

Something that helps surface gaps in thinking without anyone feeling personally criticised. Something that nudges people toward better preparation, while still leaving judgement and decision-making firmly with the team.

The early experiments did exactly what I’d hoped.
Not because the AI was “right”, but because its responses prompted discussion.

People challenged it.
They refined their thinking.
They noticed things they’d skipped over.

That was enough of a signal for me.

Jironaut didn’t start as a product idea.
It started as a way to reduce friction, improve conversations, and see whether AI could support learning rather than replace it.

The app came later.