
Code review exists in balance. If we are too strict then PRs take forever, developers get frustrated, the business can't ship. If we are too relaxed then bugs reach production, technical debt accumulates, knowledge doesn't transfer.
The goal isn't maximum rigour. It's appropriate rigour — catching what matters without creating bottlenecks.
After years building healthcare software, I stopped asking "did we catch everything?" and started asking: what would make this process work for everyone?
| If you... | Read... |
|---|---|
| Want the 2-minute version | Quick Reference below |
| Are new to code review | Quick Reference + The Two Rules — that's your foundation |
| Submit PRs for review | Part 1: For Authors |
| Review other people's code | Part 2: For Reviewers |
| Lead a team or care about velocity vs quality | Part 3: The Balance |
| Reviewer's Job | Not Reviewer's Job |
|---|---|
| Complexity — Can a future dev understand this? | Design — Should be agreed before coding |
| Naming — Are names clear to someone reading cold? | Functionality — That's Author + QA/testing's job |
| Comments — Do they explain the why, not the what? | Tests pass — That's CI's job |
| Unusual patterns explained — Is the weird stuff justified? | Style/formatting — That's linters' job |
| Security — Access control, PII exposure, input validation | Requirements met — Trust the developer and tester |
Quality bar + velocity bar. Everything else flows from these.
A good PR isn't just correct code. It's code that's easy to review. Every minute you invest in clarity saves your reviewer five minutes of confusion — and saves you a round-trip of "what does this do?" comments.
Large PRs are hard to review well. When a reviewer sees 800 lines, they either skim it and miss things, or put it off because it's daunting.
Aim for small, focused PRs: one logical change at a time.
If your PR is huge, ask: how can I avoid this next time?
And guide your reviewer. Add a few PR comments upfront explaining which sections need close attention and which can be skimmed (e.g., "This file is just generated migrations" or "The logic change is in lines 45-60, the rest is renaming"). Don't make them guess where the important bits are.
Don't stare at a blank description box. Use GitHub Copilot to generate a first draft — it can summarise your changes, identify what files were touched, and flag potential concerns.
Then review and refine it: check the summary is accurate, add the why behind the change, and call out anything you're unsure about.
Copilot gets you 80% there in seconds. Your refinements add the context only you know.
Copilot will also review your code and leave comments. Treat these like any other reviewer's feedback — but with one addition:
If Copilot's feedback is wrong or nitpicky, let your tech lead know. They can update the Copilot code review instructions so it doesn't keep flagging the same unhelpful things. This improves the tool for everyone.
Your reviewer is reading cold. Help them follow it:
- `calculateInvoiceTotal()`, not `process()`. `GitChangeDetector`, not `Helper`.

The test: Could a new team member understand this in 5 minutes? If not, simplify or add context.
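A quick sketch of the difference this makes (hypothetical types and names):

```ts
type LineItem = { description: string; amountInPence: number };

// Hard to review cold: the name says nothing about what it computes.
function process(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.amountInPence, 0);
}

// Self-documenting: reads cold, and turns up in a search for "invoice".
function calculateInvoiceTotal(lineItems: LineItem[]): number {
  return lineItems.reduce((sum, item) => sum + item.amountInPence, 0);
}
```

Same behaviour, but only one of these answers the reviewer's first question before they ask it.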
Before requesting review, read your own diff as if you were the reviewer:
Add your own PR comments to explain decisions, flag uncertainties, or pre-answer obvious questions. This saves round-trips and shows you've thought it through.
You don't need to reply "Done" to every comment. The reviewer trusts you'll address feedback you agree with — that's the default assumption.
Use "resolve conversation" to mark comments as addressed. This lets you track what's done and what's still pending, without cluttering the thread with acknowledgements.
Respond when you disagree, when you've taken a different approach than the one suggested, or when the comment asks a direct question.
This keeps conversation focused on areas where you and the reviewer genuinely see things differently.
Not all feedback belongs in this PR. If a suggestion is out of scope, say so and propose a follow-up ticket instead of expanding the change.
Pushing back isn't conflict. It's collaboration.
If you're three comments deep on the same issue and still not aligned, stop typing. A five-minute call beats an hour of async back-and-forth.
If the reviewer you want is busy with high-priority tasks, ask someone else. Check with your team leader if you're unsure who's available.
If you haven't heard back within 24 hours, follow up. The reviewer may have missed your request or be swamped with other work. A simple "friendly reminder about this PR" with a link makes it easy for them to jump back in.
A common misconception: reviewers should verify the code works correctly.
No. That's the developer's job (they wrote it) and the tester's job (they verified it). If you're re-checking requirements and re-testing functionality, you're duplicating work and slowing everyone down.
Your job is to answer one question: Can a future developer understand and maintain this code?
When reviewing any code, ask four questions: Is it simple enough to follow? Are the names clear? Do the comments explain why? Is it secure?
If the answer to any is "no", that's your feedback.
| Check | Why |
|---|---|
| Complexity | Can someone unfamiliar with this code understand it quickly? |
| Naming | Are names self-documenting? Would you find this in a search? |
| Comments | Do comments explain why, not what? Are unusual patterns justified? |
| Security | Access control checked before data access? PII protected? Input validated? Secrets kept out of the code? (See the sketch after this table.) |
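A minimal sketch of those security checks in order, in TypeScript; every name here is hypothetical, not a real API:

```ts
type User = { id: string; roles: string[] };
type PatientRecord = { id: string; summary: string; nhsNumber: string };

// Stub store standing in for a real data layer.
const patients = new Map<string, PatientRecord>();

async function getPatientSummary(requester: User, patientId: string) {
  // Input validated before anything touches the data layer.
  if (!/^[0-9a-f-]{36}$/i.test(patientId)) throw new Error("Invalid patient ID");

  // Access control checked before data access, never after.
  if (!requester.roles.includes("clinician")) throw new Error("Forbidden");

  const record = patients.get(patientId);
  if (!record) throw new Error("Not found");

  // PII protected: return only what this caller needs (no nhsNumber).
  return { id: record.id, summary: record.summary };
}
```

The order matters: validate, authorise, then load, and strip PII on the way out.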
| Don't Check | Why Not |
|---|---|
| Design/Architecture | Should be agreed before coding. Catching it in review means massive rework — that's a planning failure, not a review finding. (Exception: If you spot something genuinely dangerous — security hole, data loss risk, architectural landmine — escalate immediately. Don't let process trump safety. If you're unsure whether something is dangerous, escalate anyway — false positives are cheaper than missed incidents.) |
| Functionality | Trust the developer followed requirements. Trust the tester verified it. You're not QA. (Flag obvious mismatches if you spot them, but don't go hunting.) |
| Whether tests pass | That's CI's job. |
| Whether tests are correct | Requires domain knowledge you may not have. Trust the author. (Flag obvious gaps if you spot them, but don't audit every assertion.) |
| Style/formatting | That's linters' job. Automate it. |
Exception: If you spot an obvious bug while reviewing, of course flag it. But looking for bugs shouldn't be your primary focus — except for security and PII issues, which are always worth actively checking.
Every healthy code review culture runs on two rules:

1. Approve once the PR clearly improves the codebase, even if it isn't perfect (the quality bar).
2. Respond to review requests within one business day (the velocity bar).
Quality bar + velocity bar. That's the operating system.
There's no such thing as perfect code — only better code. If the PR makes things better and doesn't introduce new problems, approve it. Don't hold it hostage for polish.
And don't let PRs sit. A PR waiting for review is a developer blocked. Respond within one business day, even if it's just "I'll get to this tomorrow morning."
Before approving, ask:

1. Does this PR leave the codebase better than it found it?
2. Could a future developer understand and maintain it?
3. Does it introduce anything dangerous (security hole, data loss risk)?
If yes to 1 & 2, and no to 3 → approve. If no to 1 or 2 → comment. If yes to 3 → escalate.
If feedback will take longer than 5 minutes to address, tag it so authors know what's blocking vs advisory:
| Tag | Meaning |
|---|---|
| [Important/Now] | Must change before merge |
| [Important/Later] | Should change, but can be a follow-up ticket |
| [Quick Fix] | Not critical, but easy to address before merge |
| [FYI] | Good to know for next time, no action needed now |
Quick fixes don't need tags. But for anything substantial, tagging prevents authors spending hours on something you considered optional.
Bad: "This is confusing."
Good: "I couldn't follow the logic here. Could you extract the validation into a separate method, or add a comment explaining why we check X before Y?"
Don't just say what to change; say why it matters.
When authors understand the reasoning, they learn.
If something looks wrong but you're not sure, ask for a comment rather than challenging the decision:
- "Could you add a comment explaining why this doesn't use the existing validateInput() helper?"

The author knows more context than you. And if you're asking the question, future developers will too.
This approach works both ways: if there's a problem, the author will spot it while writing the comment. If it's fine, the codebase gets a useful comment. Either way, maintainability improves.
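For instance, the kind of comment that question might produce (hypothetical code; `validateInput` is the helper from the example above):

```ts
// Deliberately not using validateInput(): it trims whitespace, and this
// endpoint must preserve it exactly. The downstream billing system treats
// "ABC " and "ABC" as different account codes.
function validateRawAccountCode(code: string): boolean {
  return /^[A-Z0-9 ]{1,10}$/.test(code);
}
```

One review question, one comment, and the next developer never has to ask.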
If you're leaving 20+ comments, something's wrong — either the PR is too big, or you're nitpicking.
Guidelines:
If a PR has fundamental issues — wrong approach, missing requirements, major concerns — don't leave 30 comments. Reach out directly:
"Hey, I started reviewing and have some bigger questions about the approach. Can we chat for 5 minutes?"
This is faster, kinder, and more effective.
If someone asks for a code review and you're swamped with high-priority tasks, don't let their PR sit in limbo. Ask them to find someone else, or check with your team leader to reassign. A quick redirect is better than a week of silence.
Code review exists in tension: too strict and delivery stalls; too relaxed and quality erodes.
The goal isn't maximum rigour. It's appropriate rigour.
The worst pattern: Thorough, technically correct feedback on a prototype that's going to change anyway. All that review work? Wasted.
Different problems should be caught at different stages:
| Problem | When to Catch | Notes |
|---|---|---|
| Wrong approach / design | Architecture discussion, ticket refinement, spike | ❌ Too late — massive rework (but escalate if genuinely dangerous) |
| Missing requirements | Ticket refinement, dev-tester pairing | ❌ That's planning failure |
| Doesn't work correctly | Developer testing, QA | ❌ Reviewer isn't QA |
| Tests don't pass | CI pipeline | ❌ Automated |
| Style violations | Linters, formatters | ❌ Automated |
| Hard to understand | ✅ Code review | This is reviewer's job |
| Hard to find later | ✅ Code review | This is reviewer's job |
| Doesn't follow patterns | ✅ Code review | This is reviewer's job |
| Security issues | ✅ Code review | Access control, PII exposure, input validation |
If design problems are regularly caught in code review, fix your planning process — don't make reviewers the gatekeepers for architectural decisions.
If a rule matters enough to follow, it matters enough to automate: linters for style, formatters for layout, CI for tests and coverage, size checks for PR scope.
Machines check the metrics. Humans check the readability.
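As one example of the pattern, here's a sketch of automating the "keep PRs small" rule from Part 1, assuming a Node-based CI step; the limit and base branch are placeholders for whatever your team agrees:

```ts
// check-pr-size.ts: fail CI when a PR blows past the team's size budget.
import { execSync } from "node:child_process";

const LIMIT = 400; // lines changed; a hypothetical budget, tune it to taste

// Compare the PR branch against its base ("origin/main" is an assumption).
const stat = execSync("git diff --shortstat origin/main...HEAD", {
  encoding: "utf8",
});

// --shortstat prints e.g. "3 files changed, 120 insertions(+), 45 deletions(-)".
const changed = [...stat.matchAll(/(\d+) (?:insertion|deletion)/g)]
  .reduce((sum, match) => sum + Number(match[1]), 0);

if (changed > LIMIT) {
  console.error(`PR changes ${changed} lines (budget ${LIMIT}). Consider splitting it.`);
  process.exit(1);
}
console.log(`PR size OK: ${changed} lines changed.`);
```

The point isn't this particular check; it's that once a rule is encoded, no reviewer ever has to type it again.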
Ambiguity creates friction. Be explicit about what's blocking and what's optional.
If the same feedback keeps appearing and it's not quick to fix, change something upstream: automate the check, update the team's guidelines, or update the Copilot review instructions.
Don't keep leaving the same feedback. Fix the system.
| Context | Review Depth |
|---|---|
| Prototype / spike | Does it work? Obvious security holes? Ship it. |
| Standard feature | Readability, patterns, basic security |
| Core infrastructure | Full rigour — this code will be here for years |
| Hotfix at 5pm Friday | Verify correctness and security, skip the nits |
Code review should make your codebase better and your team smarter — without making everyone miserable or grinding delivery to a halt.
For authors: Make the reviewer's job easier. For reviewers: Focus on readability, security, and patterns — trust the author on functionality. For the business: Ship quality code at a sustainable pace.
When it's working, code review feels like collaboration. When it's not, it feels like bureaucracy.
If it feels like bureaucracy, discuss with the team and improve the process.
These practices aren't carved in stone. They're a starting point — shaped by experience, but meant to evolve with your team.
If something isn't working, say so. If a guideline creates more friction than value, we should change it. If you spot a gap, fill it. If you disagree with something here, that's a conversation worth having.
The best processes emerge from teams who continuously refine how they work together. This document should reflect that.
Suggest improvements. Challenge assumptions. Make it better.