Writing on developer experience, systems thinking, and the mistakes behind both - covering AI workflows, continuous improvement, and the mental models that drive better decisions.

One lazy Sunday afternoon, I went from an empty folder to a production-ready scanner with 13 detectors, three themed personalities, and a Cloudflare Worker proxy.
Total time: ~5 hours.
Lines of code I personally wrote: Zero.
Time the AI spent actually writing code: ~40 minutes.
The other 4+ hours? Planning, taste, and judgment. That ratio is the entire unlock.
A free scanner that diagnoses why your site is invisible to Google - then roasts you in one of three voices:
Disappointed Therapist: "This level of self-sabotage suggests deep-seated issues. I'm concerned."
Sentient Algorithm: "Perfect inefficiency achieved. Deletion recommended. Goodbye, Dave."
Angry Chef: "WHAT ARE YOU?! AN IDIOT SANDWICH! Your content is raw, your HTML is sloppy, and in between is nothing but failure!"
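The three voices share one mechanical core: detectors produce findings, personalities only format them. Here's a minimal sketch of that split - the detector IDs, regexes, and voice names are my own illustrations, not the project's actual identifiers:

```javascript
// Hypothetical sketch of the detector/personality split.
// A detector inspects fetched HTML and returns a finding message or null.
const detectors = [
  {
    id: "missing-title",
    tier: "critical",
    check: (html) => (/<title>\s*\S/.test(html) ? null : "No <title> tag found"),
  },
  {
    id: "noindex",
    tier: "critical",
    check: (html) =>
      /name=["']robots["'][^>]*noindex/i.test(html)
        ? "Page tells Google not to index it"
        : null,
  },
];

// Each personality is just a formatter over the same findings.
const voices = {
  therapist: (msg) => `I'm concerned: ${msg}. Let's talk about why.`,
  algorithm: (msg) => `FAULT DETECTED: ${msg}. Deletion recommended.`,
  chef: (msg) => `${msg.toUpperCase()}?! WHAT ARE YOU?!`,
};

// Run every detector, keep the hits, render them in the chosen voice.
function scan(html, voice) {
  return detectors
    .map((d) => d.check(html))
    .filter(Boolean)
    .map(voices[voice]);
}
```

Adding a fourteenth detector or a fourth personality never touches the other side of the split - which is exactly what makes "one personality isn't enough" a cheap decision to reverse.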
What shipped in 5 hours: the full scanner - 13 detectors, three personalities, and the Worker proxy. Traditional solo-dev estimate: 4-6 weeks. Reality: one human, one Sunday, 40 minutes of AI coding.
SBAO is knowing when to sprint with one AI and when to bring in a council - scaling how many AIs you involve based on human signals, not fixed rules.
Core rule: ~90% single AI, ~10% signal-triggered multi-AI.
Loophole Detector: "It works... but I can already see how it could break in the real world."
Annoyance Factor: "This technically works, but there's unnecessary friction. There has to be a cleaner way."
Sniff Test: "Looks right. Feels wrong."
When a signal fires, I halt execution and start arbitrating. Disagreement is diagnostic. Convergence is confidence.
AIs execute with senior-level breadth and speed. I validate every critical output with junior-level distrust until cross-model convergence. Result: high quality without sacrificing velocity.
The 5-Hour Build Timeline: 60% planning, 13% AI coding, 27% human testing and polish.

Claude reviewed my plan - client-side fetch plus AWS static hosting - and immediately warned that this setup would run into CORS/SSRF issues, suggesting Cloudflare instead. Switching to a stack I barely knew felt like unnecessary friction.
And in SBAO, when the Annoyance Factor fires, we escalate...
SBAO escalation: I brought in a full council (Codex, Gemini, Grok). Unanimous verdict: CORS failures, SSRF risks, debugging hell.
Outcome: Pivot to Cloudflare Pages + Worker proxy. Saved weeks of pain.
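The Worker-proxy pattern the council converged on is simple: the browser calls the Worker on its own origin (so no CORS), and the Worker fetches the target site server-side, with a guard against SSRF. A hedged sketch - the handler shape is standard Workers style, but `isSafeTarget` and its blocklist are my illustration, not the shipped code:

```javascript
// Hypothetical sketch of a Worker proxy with an SSRF guard.
const ALLOWED_PROTOCOLS = new Set(["http:", "https:"]);

// Reject targets that could be abused for SSRF: non-HTTP schemes,
// localhost, and common private/link-local IP ranges.
function isSafeTarget(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // not a parseable URL at all
  }
  if (!ALLOWED_PROTOCOLS.has(url.protocol)) return false;
  const host = url.hostname;
  if (host === "localhost" || host === "127.0.0.1" || host === "[::1]") return false;
  if (/^10\.|^192\.168\.|^172\.(1[6-9]|2\d|3[01])\.|^169\.254\./.test(host)) return false;
  return true;
}

// Minimal Worker-style fetch handler: the browser hits ?url=<target>,
// the Worker fetches that target server-side and returns the HTML.
const worker = {
  async fetch(request) {
    const target = new URL(request.url).searchParams.get("url");
    if (!target || !isSafeTarget(target)) {
      return new Response("Blocked target", { status: 400 });
    }
    const upstream = await fetch(target, { redirect: "follow" });
    const html = await upstream.text();
    return new Response(html, {
      headers: { "content-type": "text/html; charset=utf-8" },
    });
  },
};
```

Because the scanner's JavaScript only ever talks to its own Worker, the CORS problem disappears by construction - and the guard is the one place SSRF has to be defended.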
ChatGPT pitched a "crime scene investigation" concept. I loved it immediately - the dark humor, the forensic angle, the detective vibe. But my Sniff Test fired: "This is great... but one personality isn't enough. This tool needs variety."
In SBAO, when your intuition says you need breadth not depth, we escalate...
SBAO escalation: I asked Gemini, Claude, Grok, and Codex to each brainstorm personality concepts independently, then rank all the options. The Therapist concept ranked 1st across multiple AIs. Sentient Algorithm ranked 2nd. But I chose the 5th-ranked option (Angry Chef) as the third personality because the council's rankings showed why each worked - and I wanted maximum tonal contrast.
Outcome: Three distinct voices with clear use cases. The council informed the options. I made the final call.
The initial scoring strategies proposed by the AIs were all overly complex. My Loophole Detector fired - I could see risk of implementation trouble and maintenance pain. Codex wanted additive scoring. Claude argued for impact tiers. Gemini wanted diminishing returns. Grok wanted chaos.
Each had valid reasoning. None agreed. That's when we escalate...
SBAO escalation: I switched into cross-examination mode - forcing each AI to critique the others' logic. From that debate, I synthesised the strongest elements into a single model.
Outcome: A coherent 0-666 scoring framework that balanced psychological impact with technical severity.
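The shipped model isn't published in detail, but the synthesis described above - additive scoring across impact tiers (Claude's idea), with diminishing returns within each tier (Gemini's) - can be sketched. Every number here is an illustrative guess, not the real weights; I chose these ones because their asymptotic maximum lands just over 666, so the cap is reachable but hard to hit:

```javascript
// Hypothetical sketch of the synthesized 0-666 scoring model.
// Weights and decay are illustrative: 120/0.3 + 60/0.3 + 20/0.3 ≈ 666.7,
// so the cap is approachable but never trivially exceeded.
const TIER_WEIGHTS = { critical: 120, major: 60, minor: 20 };
const DECAY = 0.7;      // each additional finding in a tier counts for less
const MAX_SCORE = 666;

function scoreFindings(findings) {
  // Count findings per tier.
  const byTier = {};
  for (const f of findings) {
    byTier[f.tier] = (byTier[f.tier] || 0) + 1;
  }
  // Additive across tiers, geometric diminishing returns within a tier.
  let score = 0;
  for (const [tier, count] of Object.entries(byTier)) {
    const weight = TIER_WEIGHTS[tier] || 0;
    for (let i = 0; i < count; i++) {
      score += weight * Math.pow(DECAY, i);
    }
  }
  return Math.min(Math.round(score), MAX_SCORE);
}
```

The structure is the point: each AI's strongest idea survives (additive totals, tiers, diminishing returns) while the failure modes of any single proposal - unbounded scores, flat severity, runaway complexity - cancel out.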
Those five hours broke down simply:
I handled: Product taste, personality calibration, strategic judgment, signal detection, final arbitration.
AI handled: Every line of code, every CSS war, every CORS workaround, every DOM quirk.
The unlock isn't using AI faster - it's knowing when to bring in multiple AIs, how to arbitrate their disagreements, and which signals demand human judgment.
You're not just coding anymore. You're the architect, the signal detector, and the final arbiter.
Stop treating AI like a tool. Start collaborating with it like a team.
That shift - from assistant to council - is the difference between 10% faster and 10x faster.
If you start building with the SBAO mindset, you'll probably discover what I did:
The fastest way to ship better software isn't coding faster - it's deciding better.
Plan with clarity, delegate the execution, and own the final call.