Writing on software, systems, and hard-won lessons.
"Stop drowning in engine noise and pinpoint the exact move that cost you the game. Turn every loss into a lesson with pro-level coaching tips and 64 tailored tactical patterns. Get lightning-fast, professional-grade analysis designed to deliver real results, every time."

That was the pitch. After 882 commits, ~21,500 lines of Python, and 70 releases... nothing worked. Rage quit mode activated.

That was BlunderLab, a chess analysis CLI I vibe-coded for a month. The engineering standards were there, but the planning wasn't. Pope Francis started the project; the High Sparrow was stuck debugging it 882 commits later.
After abandoning BlunderLab I asked myself how I would have built this project with a team, instead of our friendly neighbourhood coding agent. The answer: plan before coding, and sharpen that plan together.
That's what I do now, and it works. Here's the playbook.
Writing the requirements isn't just a best practice, it's survival. The coding agent didn't attend that meeting last week, so it doesn't know what matters most until you tell it.
Before AI reviews and sharpens the plan, write detailed requirements yourself: what the feature does, who it's for, what systems it touches, what the constraints are. When the AI guesses wrong on any of these, you don't find out until you're deep into the implementation.
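A requirements doc doesn't need to be long. A minimal sketch covering those four questions might look like this (the feature name and details are hypothetical, not from BlunderLab):

```markdown
# Requirements: blunder-export (hypothetical feature)

**What it does:** exports flagged blunders from a game into a PGN study file.
**Who it's for:** club players reviewing their losses after a session.
**Systems it touches:** the analysis cache, the PGN writer, the CLI surface.
**Constraints:** no new dependencies; must run offline; output under 1 MB.
```

Four lines the agent can't guess from the codebase, because none of them live in the code.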
Ask four AI agents for four competing plans. Have them rank and rate each other's plans out of 100 with reasons. Cherry-pick the best ideas into a prime plan. One agent spots a missing edge case. Another proposes a cleaner architecture. Another catches a blocking dependency. You play referee.
SBAO the plan. After writing the requirements yourself, run multi-agent mode in Cursor and ask four agents to deeply review the codebase and the requirements, then create a plan and save it as TODO_feature-name_{model}.md.
Once the files have been created, ask the agents:

"Rank each plan in a comparison table, rate them out of 100, and explain why."
Review the feedback and cherry-pick the good ideas, asking your favourite neighbourhood coding agent to create the prime TODO doc. Rinse and repeat once more, and once you're happy it's time to ask the prime agent to execute the TODO_feature-name_prime.md plan.
Break the plan into 3-10 milestones. Each milestone ends with manual human testing before the next one starts. With fewer than three checkpoints the agent drifts too far: a wrong assumption in milestone 1 cascades through everything after it.
Fail fast at every layer. In prompts, in code, in architecture. Slow failures are the expensive ones.
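In code, failing fast mostly means validating at the boundary and crashing loudly. A minimal sketch of the idea (the config loader and its keys are hypothetical, not BlunderLab's actual code):

```python
# Fail-fast sketch: reject bad input where it enters the system,
# instead of letting it surface three layers down in the analysis loop.

def load_config(raw: dict) -> dict:
    """Validate everything up front and raise immediately on bad values."""
    depth = raw.get("engine_depth")
    if not isinstance(depth, int) or depth <= 0:
        raise ValueError(f"engine_depth must be a positive int, got {depth!r}")
    if "pgn_path" not in raw:
        raise KeyError("pgn_path is required")
    return {"engine_depth": depth, "pgn_path": raw["pgn_path"]}

try:
    load_config({"engine_depth": -3})
except ValueError as e:
    print(e)  # engine_depth must be a positive int, got -3
```

A failure here costs seconds. The same bad value surfacing mid-run, hours into an agent session, is the expensive kind.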
When the coding agent messes up, tell it what went wrong and ask it to suggest an instruction that prevents it next time. On this blog, Claude re-enabled nonce validation in the Go backend without checking whether the frontend could actually send one, and production login broke. The agent wrote its own guardrail, and CLAUDE.md now includes:
"This is a full-stack platform. When making changes, consider the impact across all layers."
Context rot: the paper "LLMs Get Lost in Multi-Turn Conversation" calls it getting "lost in conversation". The model guesses too early, then refuses to let go. New information from you gets filtered through the lens of an error it already committed to, and by the end of the chat it's defending its first guess. Try adding this to your CLAUDE.md or AGENTS.md file today:
After creating a plan, always save it to docs/PLAN.md with checkboxes before starting work. Update PLAN.md after completing each task.
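Concretely, the saved plan might look like this (the feature and task names are hypothetical, shown only to illustrate the checkbox format):

```markdown
# PLAN: blunder-export

- [x] Milestone 1: define the exported study file format
- [ ] Milestone 2: wire the exporter into the analysis cache
- [ ] Milestone 3: add the CLI flag and docs
```

Because the plan lives in a file rather than in the chat, a fresh session can pick it up without inheriting the old session's early guesses.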