Session 4 · Week 2

AI as a tool.
Not a crutch.

Use AI to ship faster without outsourcing understanding. One rule: if you can't explain every line, you don't ship it.

Powered by
Prisma tRPC Next.js TypeScript Tailwind TanStack
AI as a tool · 20 min

The rule: explain every line

If you can't explain every line of AI-generated code, you don't ship it. You own the code, not the AI.

"Treat AI-generated code as a draft that requires human review and understanding." — Addy Osmani
1.7×

Measured risk

PRs containing AI-authored code had 1.7× more issues than human-only PRs. Review isn't optional.

What you gain

Accountability, preserved skills, fewer hidden defects, and less team friction when you own every line.

When you skip review

Lost accountability, eroded skills, hidden defects that cost 30–100× more to fix in production, and reviewers catching issues you should have found. Your job shifts from writing code to integrating it.

Live demo

Four AI workflows (tRPC project)

Use these four workflows on your T3 project. Each has a clear input and output.

1

Comprehension

Paste unfamiliar tRPC middleware and ask: "Explain this in plain English." Use it to onboard onto existing patterns.
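As a concrete target for this workflow, here is a dependency-free sketch of what a typical tRPC auth middleware boils down to. The names are hypothetical and the real thing would use `t.middleware` and `TRPCError` from `@trpc/server`; this is just the shape you'd want the AI to explain back to you.

```typescript
// Simplified sketch of an auth middleware: inspect the context,
// fail early, or hand a guaranteed-non-null user to the handler.
type Ctx = { user: { id: string } | null };

function requireUser(ctx: Ctx): { id: string } {
  // In real tRPC this would be: throw new TRPCError({ code: "UNAUTHORIZED" })
  if (!ctx.user) throw new Error("UNAUTHORIZED");
  return ctx.user;
}

// "Explain this in plain English": only logged-in users reach the
// handler, and the handler can rely on `user` existing.
function protectedHandler(ctx: Ctx): string {
  const user = requireUser(ctx);
  return `hello ${user.id}`;
}
```

If the AI's explanation of a snippet like this doesn't mention both the early failure and the narrowed context, ask a follow-up before moving on.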

2

Debugging

Structured prompt: error message + what you tried + context. Get targeted help instead of generic fixes.

3

Boilerplate

"Generate a Zod schema for a user profile form with these fields" → review and adjust the output. Don't paste it in blindly.

4

Design rubber-ducking

"I need tagging on tasks. Separate table or JSON field? Here's my schema." Discuss tradeoffs before coding.
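In Prisma schema terms, the two options in that prompt might look like the sketch below. Model and field names are illustrative, and the JSON variant assumes a database (such as PostgreSQL) that supports `Json` columns.

```prisma
// Option A: separate table — queryable, unique tag names enforced,
// "all tasks with tag X" is an efficient relational query.
model Task {
  id    Int    @id @default(autoincrement())
  title String
  tags  Tag[]
}

model Tag {
  id    Int    @id @default(autoincrement())
  name  String @unique
  tasks Task[]
}

// Option B: JSON field — simpler writes, but no relational queries
// and no uniqueness or referential guarantees on tags:
//
// model Task {
//   id    Int    @id @default(autoincrement())
//   title String
//   tags  Json?  // e.g. ["urgent", "home"]
// }
```

Seeing both shapes side by side is exactly the tradeoff discussion the rubber-duck prompt is meant to surface before you write any code.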

Live demo

Run through all four workflows on your T3 project during the session.

Rubber-duck rule

Design prompts are for thinking out loud. Don't implement until you understand the tradeoffs.

Prompting

The prompting formula

Context + Constraint + Specific Ask. 61% of developers struggle with AI code quality due to insufficient context, not model limits.

Bad

"fix my code"

No context. No error. No behavior. AI guesses.

Good

"Here's my tRPC router [paste]. The create mutation throws [error] when I send [input]. I expect [behavior]. What's wrong?"

Context + constraint + specific ask.
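To make the formula mechanical, here is a tiny hypothetical helper (not from the session) that refuses to build a debugging prompt unless all three parts are present:

```typescript
// Context + Constraint + Specific Ask, assembled into one prompt.
type DebugPrompt = {
  context: string;    // your code and environment, pasted in
  constraint: string; // the error observed and what you already tried
  ask: string;        // the specific question
};

function buildPrompt(p: DebugPrompt): string {
  for (const [key, value] of Object.entries(p)) {
    // An empty part means the AI has to guess — the "bad" prompt above.
    if (!String(value).trim()) throw new Error(`missing ${key}: AI will guess`);
  }
  return `Context:\n${p.context}\n\nConstraint:\n${p.constraint}\n\nAsk:\n${p.ask}`;
}
```

Usage would look like `buildPrompt({ context: "tRPC router source", constraint: "create mutation throws TypeError with this input", ask: "What's wrong?" })` — the same structure as the "good" prompt, enforced by construction.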

Research insight

Context improves code generation more than role-playing personas. Treat AI like a colleague who needs onboarding to your codebase.

Boundaries

When not to use AI

AI is a tool. Some situations require human judgment and full understanding.

×

Architectural decisions

Don't use AI for decisions you don't understand yet. Learn first, then decide.

×

Security-sensitive code

Auth, tokens, encryption. If it guards user data, you must own every line.

×

Tempted to skip understanding

If your instinct is to paste and hope, stop. That's when you need to slow down.

Never outsource the reading. Hold AI-written code to the same standards as human teammates.
Maximising AI usage · 35 min

Claude Maxxing

Skills, MCPs, context engineering, and autonomous loops. Get the most out of Claude Code.

Explore

claudemaxxing.org — Skills, MCPs, CoVe (Chain-of-Verification), Ralph Loop, GSD (Get Shit Done).

CoVe

Chain-of-Verification

Create → Outline → Verify → Emit. Claude tests its own code against docs and codebase.

GSD

Get Shit Done

Discuss, plan, execute, verify. State lives in files, not the context window. Fresh 200K-token windows per task.

"Give me six hours to implement a feature and I will spend the first four writing the prompt." — adapted from Lincoln