Sprint Planning for a Team of One: Building a Solo Engineering Workflow with AI Agents
Context
Last post, I talked about building a CI/CD safety net for the BRS Workspace — a Next.js financial platform I maintain solo. That solved the “how do I ship safely” problem.
This post is about the problem that came right after: how do I decide what to build?
I had TODOs everywhere. Some in TickTick on my phone. Some in my head. Some in BMAD story files on my laptop. A few scribbled in a notebook. The codebase had 20+ things that needed attention — tech debt from code reviews, new feature requests, a 201-commit merge backlog, infrastructure chores, and a Keycloak auth system I knew was architecturally wrong but hadn’t scoped out fixing yet.
No prioritization. No visibility. No system.
On a team, this is what a PM, a scrum master, and sprint planning meetings are for. I don’t have any of those. I have me, a terminal, and a surprisingly opinionated collection of AI agents.
What I Tried First
I’d been using BMAD — a multi-agent workflow framework — for a while. It’s good at producing structured artifacts: PRDs, architecture docs, acceptance criteria, stories with clear scope. When I used it for the CI/CD epic, it helped me break 6 epics into 31 stories with concrete deliverables.
But BMAD stories live as markdown files on my machine. They’re not visible to anyone. There’s no board, no labels, no way to answer “what’s in this sprint?” at a glance. And when I had 20+ items of varying size and urgency, the flat file structure didn’t help me prioritize.
I also had TickTick — great for personal task management, terrible for engineering backlog. “Fix the race condition in cancelDialog” doesn’t belong next to “Buy groceries.”
Neither tool alone was enough. I needed both, and I needed them connected.
The System I Built
Hybrid Tracking: Specs + Visibility
The insight was simple: BMAD is good at specs, GitHub Issues are good at tracking. Use both for what they’re good at.
- BMAD stories hold the deep stuff — acceptance criteria, architecture notes, file inventories, migration strategies. The kind of detail you need when you sit down to implement.
- GitHub Issues hold the tracking stuff — titles, labels, sprint assignments, status. The kind of thing you need when you’re deciding what to work on next.
Every BMAD story gets a corresponding GitHub Issue. The issue body inlines the key acceptance criteria (no “see file at path/to/story.md” — the content lives in the issue). Sprint labels (sprint:1, sprint:2, sprint:3, backlog:groomed, defer) make the board scannable.
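The story-to-issue step can be done with the GitHub CLI. A sketch of what creating one of these issues might look like — the title, label names, and story file path here are illustrative, following the conventions described above:

```shell
# Hypothetical example: turn a BMAD story into a tracked GitHub Issue.
# --body-file inlines the story's acceptance criteria into the issue body,
# so the issue stands on its own (no "see file at path/to/story.md").
gh issue create \
  --title "구조화 로깅 도입 / Add structured logging" \
  --label "sprint:1" \
  --assignee "@me" \
  --body-file docs/stories/structured-logging.md
```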
Extracting and Mapping TODOs
The first thing I did was dump everything into one place. I screenshotted my TickTick list, fed it to Claude, and got back a structured list of 20 items. Then came the mapping:
- 3 items were duplicates of existing BMAD stories
- 1 was a personal task (not engineering)
- 2 weren’t relevant to BRS
- The remaining 14 mapped to existing issues or became new ones
Each issue got bilingual titles and descriptions (Korean + English — the team and stakeholders speak Korean), sprint labels, and assignment. The rule I set: every code change must link to a GitHub Issue. No exceptions except hotfixes. Branch naming follows feature/#123-desc or fix/#123-desc.
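The branch-naming rule is easy to enforce mechanically. A minimal sketch of a check that could run in a git pre-push hook — the regex is my assumption of what "feature/#123-desc or fix/#123-desc" means:

```shell
# check_branch: succeeds only if the branch name references an issue
# number in the feature/#123-desc or fix/#123-desc convention.
check_branch() {
  echo "$1" | grep -Eq '^(feature|fix)/#[0-9]+-[a-z0-9-]+$'
}

check_branch "feature/#123-structured-logging" && echo "ok"   # prints "ok"
check_branch "quick-hack" || echo "rejected"                  # prints "rejected"
```

Dropped into a pre-push hook with `git rev-parse --abbrev-ref HEAD` as the argument, this makes "every code change links to an issue" a hard rule rather than a habit.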
The Party Mode Prioritization
This was the fun part. BMAD has a “party mode” — a multi-agent discussion where different AI personas debate a topic. I ran it with 6 agents to decide sprint priorities:
- John (PM): Focused on business value and user impact
- Winston (Architect): Flagged technical dependencies and risk
- Amelia (Dev): Reality-checked effort estimates
- Quinn (QA): Pushed for testing infrastructure
- Bob (Scrum Master): Balanced scope vs capacity
- Paige (Tech Writer): Advocated for documentation
They debated for about 10 rounds. The consensus:
Sprint 1: Structured logging (#312) + error pages (#329) — foundational, unblocks debugging for everything else.
Sprint 2: Tech debt cleanup (5 items from code review) + UI improvements — quick wins that improve code quality.
Sprint 3: The big merge (brs-dev to brs-prd, 201 commits) — needs the safety net from Sprints 1-2.
Was this overkill for a solo dev? Maybe. But it forced me to think about sequencing. The logging system genuinely needs to exist before I tackle the merge — I need structured error logs to debug any regressions. That dependency wasn’t obvious until the architect agent pointed it out.
How a New Task Flows Through
Here’s a concrete example. Today I realized the Keycloak auth implementation is wrong — tokens are stored in sessionStorage, there’s no server-side auth, no middleware.ts, the Keycloak JS adapter is a SPA tool being used in an SSR framework.
Old me would have added “fix auth” to TickTick and forgotten about it for weeks.
New workflow:

1. Explore the current state. I ran an agent to map out exactly how tokens flow — where they're stored, refreshed, injected. Found: 7 axios instances all reading from sessionStorage, a 60-second client-side refresh interval, zero server-side validation.
2. Scope it honestly. This touches auth-provider, all axios instances, middleware, layout — it's an epic, not a story. And I don't even know the right approach yet (next-auth? custom OIDC? hybrid?). So story 8-0 is pure research.
3. Create the BMAD epic. 6 stories: research → server-side tokens → middleware → migrate axios → refactor auth provider → cleanup. Each with clear scope and dependencies. Story 8-0 blocks everything — no implementation until the architecture is decided.
4. Create the GitHub Issue. Bilingual title and description, a backlog:needs-grooming label, assigned to me. The issue body inlines the current state analysis, the story breakdown, and the dependency chain.
5. Update sprint status. The epic goes into sprint-status.yaml alongside the other epics. It's visible, scoped, and waiting for its turn.
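The post doesn't show what sprint-status.yaml looks like, but a minimal sketch of the shape such a file might take — field names and the issue number are placeholders, not the real file:

```yaml
# Hypothetical sprint-status.yaml layout; all field names are assumptions.
epics:
  - id: epic-8
    title: "Keycloak auth migration"
    status: backlog
    blocked_by: [story-8-0]   # the research story gates all implementation
    issues: [NNN]             # placeholder for the tracking issue number
```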
Total time from “I think auth is wrong” to “fully scoped epic in the backlog”: about 20 minutes. The key is that I didn’t try to solve it — I just made sure it’s captured with enough context that future-me can pick it up without re-investigating.
What Works
The bilingual GitHub Issues are genuinely useful. When I need to explain priorities to Korean-speaking stakeholders, I can just share the issue link. No translation needed on the spot.
BMAD stories as specs prevent scope creep. When I sit down to implement, the acceptance criteria are already written. I’m not making decisions mid-code about what “done” means.
Sprint labels make the backlog scannable. One gh issue list --label sprint:1 and I know exactly what’s in flight.
The party mode prioritization revealed a real dependency. Logging before merge wasn’t my instinct — I wanted to jump to the merge. The multi-agent debate caught the sequencing issue.
What’s Honestly Overkill
Running 6 AI agents to prioritize 20 items is more than a solo dev needs for a typical sprint. A 5-minute look at the list would have gotten me 80% of the way there. The party mode was fun and produced a defensible result, but it’s not something I’d do every sprint.
The BMAD story format is heavy for small tasks. The 5 tech debt items from code review (extract a constant, fix a race condition) don’t need acceptance criteria and architecture notes. A well-written GitHub Issue is enough.
Bilingual everything adds friction. It’s worth it for stakeholder visibility, but writing every issue body in two languages takes time. For internal-only tech debt, I sometimes wonder if Korean-only would be fine.
What I’d Tell Another Solo Dev
You don’t need all of this. Here’s the minimum viable version:
- One tracking system. GitHub Issues is free and sufficient. Don’t split across 3 tools.
- One rule: every code change links to an issue. This alone forces you to think before coding.
- Sprint labels. Even if your “sprint” is just “this week” vs “later.” Visibility matters.
- Write acceptance criteria before you code. Doesn’t need to be a BMAD story. A bullet list in the issue body works. The point is deciding what “done” means before you start.
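To make the last point concrete, here is a sketch of what "a bullet list in the issue body" can look like — the criteria themselves are invented for illustration, loosely based on the logging and error-page work mentioned above:

```markdown
<!-- Minimal issue body: just enough to define "done" before coding -->
## Acceptance criteria
- [ ] Errors are logged as structured JSON with a request id
- [ ] 404 and 500 pages render a user-facing message, not a stack trace

## Out of scope
- Log aggregation and dashboards
```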
The AI agents and multi-agent debates are a bonus — they’re most valuable when you’re stuck on sequencing or when a task is too ambiguous to scope alone. But the foundation is just: track everything, scope before you build, label your priorities.
What’s Next
Sprint 1 starts now. Structured logging system (pino, JSON for K8s) and error pages. Both are well-scoped with clear acceptance criteria in their issues.
After that: tech debt cleanup, then the big merge. And somewhere in the backlog, the auth migration is waiting — a research story that I’ll get to when the foundation is solid.
The system isn’t perfect. But for the first time, I can answer “what are you working on and why?” with a link instead of a ramble. For a team of one, that’s progress.
Built with GitHub Issues, markdown files, 6 argumentative AI agents, and the irrational belief that one person needs a sprint board.