We're all the GOAT at making mistakes sometimes, so let's make catching them easier for ourselves.
BLUNDER (what we get wrong)
- The Two Failure Modes: Rubber stamping vs. nitpick purgatory. You know who you are.
- The Pendulum Effect: I've seen it after mistakes: people overcorrect. Then, as the "days without accident" counter creeps up, they get lax again.
- The Reciprocity Effect: Toxic, ego-driven code reviews become a vicious cycle. Be thoughtful in your reviews and others will follow your lead.
GOAT (distributed rigour, who checks what)
- The Collaboration: Authors: small PRs, self-review first. Reviewers: assume competence, explain the why.
- The One Minute Wonder: Quick reviews are good for small, low-complexity, low-risk PRs on teams with strong QA. Easy to follow, nothing risky - a quick scan is enough.
- The Maintainability Test: Every line of code is a liability. If the author quit tomorrow, would you want to maintain it?
- Size Matters: Write smaller diffs. Better planning, better testability, faster reviews. Psychological benefit: momentum. Nobody feels good sitting on a large diff. (A sketch for automating this nudge follows this list.)
- The Early Draft: Draft PRs catch major issues while code is still malleable - before sunk cost kicks in.
- The Legacy Lock: Don't ask for legacy cleanup in someone else's PR. Review the change, not the surrounding code.
- The Context Pushback: Authors have more context than reviewers. Push back on feedback that misses something: "I considered that, but it didn't work because ..."
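One way to keep yourself honest about diff size is to let CI do the nagging. Here's a minimal sketch, assuming your pipeline runs Danger JS; the dangerfile name is the conventional one, but the 400-line threshold and the wording are placeholders to tune for your team:

```ts
// dangerfile.ts - nudge authors toward smaller diffs automatically.
import { danger, warn } from "danger";

const { additions = 0, deletions = 0 } = danger.github.pr;
const changed = additions + deletions;

// Placeholder threshold: pick whatever your team can review carefully in one sitting.
if (changed > 400) {
  warn(`This PR changes ${changed} lines. Consider splitting it so it can be reviewed quickly and carefully.`);
}
```

Using a warning rather than a hard failure keeps this a nudge, not a gate.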
Who Checks What
The goal isn't less rigour, it's better distributed rigour.
- Author: "Does this match requirements?" Self-review first. Add comments explaining decisions. The reviewer shouldn't be the first person to read your code.
- QA/Testing: "Is this ready for live?" Functionality works, requirements met.
- Reviewer: "Could I work on this later?" Scan for red flags, verify maintainability.
- Automation: "Beep-boop." Style, formatting, test coverage, security scanning. If a rule can be automated, it shouldn't be in a human's head.
When the reviewer isn't spending cognitive effort on things the author or CI should have caught, they can go deeper on the things that need human judgment.
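One concrete example: test-coverage expectations can live in config rather than in anyone's head. A minimal sketch, assuming a Jest setup; the numbers are placeholders for whatever floor your team agrees on:

```ts
// jest.config.ts - CI fails when coverage drops below the agreed floor,
// so no human has to eyeball test coverage during review.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      // Placeholder numbers: agree on a floor as a team, then let CI enforce it.
      branches: 80,
      lines: 80,
    },
  },
};

export default config;
```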
TOOLS (automation, standards, AI)
- The Linter Lever: Automate style and convention checks so reviewer time goes to deeper issues (a config sketch follows this list).
- The CamelCase Crisis: If seniors are checking naming conventions, you have a people problem, not a tooling problem.
- The AI Instructions: AI reviews are only as good as your instructions. Update them when you get false positives.
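To make the linter lever concrete: naming conventions belong in a rule, not in a senior's review comments. A minimal sketch of an ESLint flat config; recent ESLint versions can load a TypeScript config file (with jiti installed), otherwise the same contents work as eslint.config.mjs, and typed codebases may prefer the `@typescript-eslint/naming-convention` rule instead:

```ts
// eslint.config.ts - let the machine police naming so reviewers don't have to.
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      // Enforce camelCase names; violations fail CI instead of starting a comment thread.
      camelcase: ["error", { properties: "always" }],
    },
  },
];
```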
WORDS (writing feedback)
- The Labels: Use prefixes (`suggestion:`, `nitpick:`, `issue (non-blocking):`) or severity tags ([Important/Now], [FYI]) so authors know what's blocking vs. optional.
- The Reason: Explain the problem, offer alternatives, let the author decide. "Consider `calculateTotal()` or `sumLineItems()`; can you rename it to make it a little easier for others to follow?"
- The Offline: If you see major problems, talk in person. It's quicker than all that back and forth in comments, and the author may have context the reviewer isn't aware of yet.
TEAM (ego, trust, culture)
Toxic reviews compound. Thoughtful ones do too. That's team culture.
- The Separation: You are not your code. Critique the code, not the person.
- The Reread: Words hit harder in writing than in person. Read your comment back before posting.
- The Copy Ninja: When you join a team, copy how they do things, even if you disagree. Gain trust first, then slowly suggest changes.
- The Learning: Code reviews are a chance to learn: a new corner of the codebase, a better habit, a different approach.