Close the loop.
Ship the feature.
Wire up your backend to the UI in minutes. Own the feature from database to user interaction.
Traditional useEffect fetching
Look at this implementation. What could go wrong here?
function PokemonCard({ id }: { id: number }) {
  const [data, setData] = React.useState(null);

  React.useEffect(() => {
    async function run() {
      const res = await fetch(`/api/pokemon/${id}`);
      const json = await res.json();
      setData(json);
    }
    run();
  }, [id]);

  return <pre>{JSON.stringify(data)}</pre>;
}
First obvious problems
Two issues show up immediately in production: missing loading state and missing error state.
No loading state
Users stare at stale or blank UI while a request is in flight. You also cannot distinguish an initial load from a refetch.
No error state
Failed responses have nowhere to go in the UI, so failures become silent, confusing, or app-breaking.
What you end up adding manually
const [pokemon, setPokemon] = useState(null);
const [isLoading, setIsLoading] = useState(true);
const [error, setError] = useState(null);

useEffect(() => {
  async function fetchPokemon() {
    setIsLoading(true);
    setError(null);
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error("request failed");
      setPokemon(await res.json());
    } catch (e) {
      setError(e.message);
    } finally {
      setIsLoading(false);
    }
  }
  fetchPokemon();
}, [url]);
Still not done
if (isLoading) return <Skeleton />;
if (error) return <ErrorState message={error} />;
if (!pokemon) return <Empty />;
return <PokemonCard data={pokemon} />;
// Better, but we still have one subtle async bug...
Question
What else can you spot?
The subtle bug: race conditions
Even with loading and error states, fast input changes can still cause stale flashes from out-of-order responses.
Race condition example: Pokemon carousel
// User toggles between id=25 and id=133 quickly
const [id, setId] = useState<number>(25);
const [imageUrl, setImageUrl] = useState<string | null>(null);

useEffect(() => {
  async function load() {
    const res = await fetch(`/api/pokemon/${id}`);
    const data = await res.json();
    setImageUrl(data.sprites.front_default);
  }
  load();
}, [id]);
What can go wrong
// t0: id=25 request starts (slow network)
// t1: user clicks next -> id=133 request starts
// t2: id=133 resolves first -> image becomes Eevee ✓
// t3: id=25 resolves later -> setImageUrl(Pikachu) ❌
// UI now flashes stale image even though id is 133.
// That's the race: late old response overwrites new state.
Interactive demo: force the race condition
Click quickly between Pokemon. This demo intentionally does not cancel prior requests.
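One way to see the fix is a "latest request wins" guard. This is our own illustration (names like fakeFetch and latestRequestId are invented for the sketch, not from the slides): two simulated fetches resolve out of order, and only the most recent request is allowed to write state.

```typescript
type Sprite = { id: number; url: string };

// Fake fetch: id 25 is slow, id 133 is fast (simulated delays).
function fakeFetch(id: number): Promise<Sprite> {
  const delay = id === 25 ? 30 : 5;
  return new Promise((resolve) =>
    setTimeout(() => resolve({ id, url: `/sprites/${id}.png` }), delay)
  );
}

let latestRequestId = 0;
let imageUrl: string | null = null;

async function load(id: number): Promise<void> {
  const requestId = ++latestRequestId; // tag this request
  const data = await fakeFetch(id);
  // Ignore responses from superseded requests.
  if (requestId !== latestRequestId) return;
  imageUrl = data.url;
}

async function demo() {
  // Start 25 first, then 133 — 25 resolves last but is discarded.
  const p1 = load(25);
  const p2 = load(133);
  await Promise.all([p1, p2]);
  return imageUrl; // "/sprites/133.png"
}
```

The `ignore` flag in an effect cleanup (shown later in the manual version) and AbortController are the same idea applied inside React.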
What is React Query, really?
React Query is an async state manager for server state.
Server state aware
Models stale/fresh data, background refetching, retries, and synchronization after writes.
Cache + dedupe
Request results are cached by query key and shared across components, avoiding duplicate calls.
Mutation workflows
Built-in mutation lifecycle callbacks and invalidation patterns keep UI aligned with server truth.
Without React Query
function App() {
  const [projectId, setProjectId] = React.useState("p_123");
  const [project, setProject] = React.useState(null);
  const [isLoading, setIsLoading] = React.useState(true);
  const [error, setError] = React.useState(null);

  React.useEffect(() => {
    let ignore = false;
    const loadProject = async () => {
      setProject(null);
      setIsLoading(true);
      setError(null);
      try {
        const projectRes = await fetch(`/api/projects/${projectId}`);
        if (ignore) return;
        if (!projectRes.ok) {
          throw new Error("Failed to load project");
        }
        setProject(await projectRes.json());
        setIsLoading(false);
      } catch (e) {
        if (ignore) return; // don't set state after cleanup
        setError(e.message);
        setIsLoading(false);
      }
    };
    loadProject();
    return () => {
      ignore = true;
    };
  }, [projectId]);

  if (isLoading) return <Skeleton />;
  if (error) return <ErrorState message={error} />;
  if (!project) return <Empty />;
  return <ProjectHeader project={project} />;
}
With React Query
function ProjectPage({ projectId }) {
  const project = useQuery({
    queryKey: ["project", projectId],
    queryFn: () => fetchProject(projectId),
  });

  if (project.isPending) return <Skeleton />;
  if (project.error) return <ErrorState />;
  if (!project.data) return <Empty />;
  return <ProjectHeader project={project.data} />;
}

async function fetchProject(projectId) {
  const res = await fetch(`/api/projects/${projectId}`);
  if (!res.ok) throw new Error("Failed to load project");
  return res.json();
}
Concrete payoff
Each unique queryKey maps to one cache entry and one queryFn contract. Same key = shared cache; different key = different cache.
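The "one cache entry per queryKey" rule can be sketched as a toy model. This is our illustration, not the library's real code: keys are hashed by value (JSON.stringify stands in for the library's deterministic hash), so the same key from two components points at the same entry.

```typescript
type QueryKey = readonly unknown[];

const queryCache = new Map<string, { data: unknown }>();

// Rough stand-in for TanStack Query's deterministic key hashing.
const hashKey = (key: QueryKey) => JSON.stringify(key);

function getEntry(key: QueryKey) {
  const hash = hashKey(key);
  let entry = queryCache.get(hash);
  if (!entry) {
    entry = { data: undefined };
    queryCache.set(hash, entry);
  }
  return entry;
}

// Same key → same entry (shared cache)...
const a = getEntry(["project", "p_123"]);
const b = getEntry(["project", "p_123"]);
// ...different key → different entry.
const c = getEntry(["project", "p_456"]);

console.log(a === b); // true
console.log(a === c); // false
```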
What is useQuery?
It models reads of server state. You declare a queryKey and a queryFn, and TanStack Query manages fetch + cache + subscriptions.
Minimal shape
const project = useQuery({
  queryKey: ["project", projectId],
  queryFn: () => fetchProject(projectId),
});

if (project.isPending) return <Skeleton />;
if (project.error) return <ErrorState />;
return <ProjectHeaderView project={project.data} />;
Operational model
Contract
- queryKey identifies one cache entry
- queryFn defines how that key is fetched

Behavior
- runs automatically when mounted
- serves cached data for same key
- background refetch on focus/reconnect/mount (default stale behavior)

Outcome
- multiple components with same key stay synchronized
Bridge to next section
If reads are cached by key, what happens after a successful write changes server data?
TanStack Query internals (1/2)
Start with the core pieces. If these are clear, the defaults and lifecycle behavior make sense.
QueryClient
The control tower. Owns global defaults and APIs like invalidateQueries, prefetchQuery, and setQueryData.
QueryCache
The storage layer. Holds entries by queryKey with their data, timestamps, and status state.
Observers
The view bindings. Every useQuery registers an observer that subscribes to a queryKey; cache updates push re-renders to all observers of that key.
How they work together
QueryClient executes policy and commands, QueryCache stores truth by key, observers keep UI synced to that truth.
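The observer flow above can be sketched in a few lines. The names here (subscribe, setQueryDataToy) are ours, not the library's API: each "useQuery" registers a callback for its key, and writing to the cache notifies every observer of that key.

```typescript
type Listener = (data: unknown) => void;

const store = new Map<string, unknown>();
const observers = new Map<string, Set<Listener>>();

function subscribe(key: string, listener: Listener) {
  if (!observers.has(key)) observers.set(key, new Set());
  observers.get(key)!.add(listener);
}

function setQueryDataToy(key: string, data: unknown) {
  store.set(key, data);
  // Push the update to every component observing this key.
  observers.get(key)?.forEach((listener) => listener(data));
}

// Two "components" observe the same key...
const renders: string[] = [];
subscribe('["project","p_123"]', (d) => renders.push(`Header: ${d}`));
subscribe('["project","p_123"]', (d) => renders.push(`Sidebar: ${d}`));

// ...one cache write re-renders both.
setQueryDataToy('["project","p_123"]', "fresh");
console.log(renders); // ["Header: fresh", "Sidebar: fresh"]
```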
TanStack Query internals (2/2)
Now the defaults and why they exist.
Important Defaults (official docs)
Cached data is stale by default. Inactive queries are garbage collected after 5 minutes. Failed queries retry 3 times with exponential backoff.
Stale by default
Queries refetch in the background on mount/focus/reconnect. Rationale: server data can change outside your app, so freshness is safer than assuming permanence.
Inactive cache retention
Inactive queries stay cached for 5 minutes by default. Rationale: quick back/forward navigation feels instant without retaining data forever.
Retry behavior
Failed queries retry 3 times with exponential backoff. Rationale: many failures are transient network blips; retrying improves reliability without extra UI code.
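The defaults above can be written down as a config sketch. The numbers mirror the documented defaults (stale immediately, 5-minute garbage collection, 3 retries with a doubling delay capped at 30s), but treat them as assumptions to verify against your installed version.

```typescript
// Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
const retryDelay = (attemptIndex: number) =>
  Math.min(1000 * 2 ** attemptIndex, 30_000);

const defaultQueryOptions = {
  staleTime: 0,        // data is stale as soon as it arrives
  gcTime: 5 * 60_000,  // inactive entries collected after 5 minutes
  retry: 3,            // retry failed queries 3 times
  retryDelay,          // backoff between attempts
};

console.log([0, 1, 2, 3, 4, 5].map(retryDelay));
// [1000, 2000, 4000, 8000, 16000, 30000]
```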
Request-to-UI flow (visual)
Key idea
One cache entry per queryKey, many observers, one shared source of truth.
Lifecycle of useQuery
High-level flow (docs model): from mount to cache reuse, refetch, and garbage collection.
Mental model
queryKey identifies the cache entry, queryFn produces the data, and observers keep UI synced with that entry.
useQuery lifecycle after writes
When mutations change server data, use invalidation so stale cache entries rejoin the read lifecycle and refetch.
Fix: invalidate this key on success
const createTask = useMutation({
  mutationFn: createTaskApi,
  onSuccess: async (_data, vars) => {
    await queryClient.invalidateQueries({
      queryKey: ["tasks", vars.projectId],
    });
  },
});
What invalidateQueries does
1) Match cache entries by queryKey filter
2) Mark matched entries stale
3) Active observers refetch in background

Matching examples
- { queryKey: ["tasks"] } => all task queries
- { queryKey: ["tasks", projectId], exact: true } => one query

Result
- post-mutation UI converges back to server truth
Rationale
You declare which server-state slice is outdated; TanStack Query handles stale marking + refetch orchestration.
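The matching rule described above can be sketched as a pure function. This is our simplification of the library's behavior: a filter key matches a cache key when it is a value-wise prefix of it, and `exact: true` additionally requires equal length.

```typescript
type QueryKey = readonly unknown[];

function matches(
  filterKey: QueryKey,
  entryKey: QueryKey,
  exact = false
): boolean {
  if (exact && filterKey.length !== entryKey.length) return false;
  // Compare each filter part by value against the entry key's prefix.
  return filterKey.every(
    (part, i) => JSON.stringify(part) === JSON.stringify(entryKey[i])
  );
}

console.log(matches(["tasks"], ["tasks", "p_123"]));                // true  — prefix match
console.log(matches(["tasks", "p_123"], ["tasks", "p_123"], true)); // true  — exact match
console.log(matches(["tasks", "p_999"], ["tasks", "p_123"]));       // false — different project
```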
What is useMutation?
It models write operations (create/update/delete) and other server side-effects. Unlike useQuery, it does not run automatically.
Minimal shape
const createTask = useMutation({
  mutationFn: createTaskApi,
});

const onSubmit = (input) => {
  createTask.mutate(input);
};
Operational model
Trigger styles
- mutate(input): fire-and-forget callback flow
- mutateAsync(input): Promise flow with await/try-catch

States
- idle -> pending -> success | error

Key behavior
- stores last error/data for this mutation
- does not refetch queries by itself
- default retry is false (unlike queries)
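The lifecycle above can be modeled as a tiny state machine. This is our own sketch (createMutation is an invented name, not the library's API): nothing runs until triggered, the status moves idle → pending → success | error, and the last result is remembered.

```typescript
type MutationStatus = "idle" | "pending" | "success" | "error";

function createMutation<I, O>(mutationFn: (input: I) => Promise<O>) {
  let status: MutationStatus = "idle";
  let data: O | undefined;
  let error: unknown;

  return {
    getStatus: () => status,
    getData: () => data,
    getError: () => error,
    // Like mutateAsync: returns the promise so callers can await it.
    async mutate(input: I): Promise<O> {
      status = "pending";
      try {
        data = await mutationFn(input);
        status = "success";
        return data;
      } catch (e) {
        error = e;
        status = "error"; // no automatic retry, unlike queries
        throw e;
      }
    },
  };
}

const createTask = createMutation(async (title: string) => ({ id: "t1", title }));
// Nothing runs until you call mutate:
console.log(createTask.getStatus()); // "idle"
```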
Bridge to next section
If mutations do not sync read caches automatically, how do we keep query data correct after a write?
Lifecycle of useMutation
High-level flow: writes are triggered manually, then you synchronize read caches after success.
Question
Should we always invalidate, or sometimes write the mutation response directly into cache?
Lifecycle of useMutation — answer
Two valid patterns: invalidate affected queries, or update cache from mutation responses.
Pattern A: invalidations from mutations
const updateTask = useMutation({
  mutationFn: updateTaskApi,
  onSuccess: async (_data, vars) => {
    await Promise.all([
      queryClient.invalidateQueries({
        queryKey: ["tasks", vars.projectId],
      }),
      queryClient.invalidateQueries({
        queryKey: ["projectStats", vars.projectId],
      }),
    ]);
  },
});
Pattern B: updates from mutation response
const updateTask = useMutation({
  mutationFn: updateTaskApi,
  onSuccess: (updatedTask, vars) => {
    queryClient.setQueryData(["task", updatedTask.id], updatedTask);
    queryClient.setQueryData(
      ["tasks", vars.projectId],
      (old = []) =>
        old.map((t) => (t.id === updatedTask.id ? updatedTask : t))
    );
  },
});
Rule of thumb
Invalidate when related server state is broad or hard to predict. Use setQueryData when the response is the exact next value for a known key.
Audience check
If the server applies sorting/filtering side effects after save, which pattern is safer?
tRPC + TanStack Query
The key insight
api.task.getAll.useQuery() is TanStack Query — but with automatic types and fetch logic. No fetch(), no response parsing, no manual types.
Reading data
import { api } from "~/trpc/react";

export function TaskList() {
  const { data, isLoading, error } = api.task.getAll.useQuery();

  if (isLoading) return <Spinner />;
  if (error) return <Error />;

  return (
    <ul>
      {data.map((task) => (
        <li key={task.id}>
          {task.title} {/* ^ fully typed */}
        </li>
      ))}
    </ul>
  );
}
Hover over task — full autocompletion matching your router's return type.
Writing data
export function CreateTask() {
  const utils = api.useUtils();
  const create = api.task.create.useMutation({
    onSuccess: () => {
      // Invalidate stale data
      utils.task.getAll.invalidate();
    },
  });

  const handleSubmit = () => {
    create.mutate({
      title: "New Task",
      priority: "MEDIUM",
      projectId: "...",
    });
  };

  return (
    <button onClick={handleSubmit}>
      Add Task
    </button>
  );
}
Cache invalidation — the #1 gotcha
After a mutation, call utils.task.getAll.invalidate() in onSuccess. Forget this and your UI shows stale data. This will bite you.
Tailwind in five minutes
Don't learn it. Use it. Search the docs, copy classes, iterate.
Before
<div>
  <h2>My Tasks</h2>
  <ul>
    <li>Fix login bug</li>
    <li>Add search</li>
  </ul>
</div>
After — 30 seconds
<div className="max-w-md mx-auto p-6"> <h2 className="text-2xl font-bold mb-4"> My Tasks </h2> <ul className="space-y-2"> <li className="p-4 rounded-lg bg-gray-800 shadow hover:bg-gray-700 transition"> Fix login bug </li> </ul> </div>
The whole Tailwind lesson
Bookmark tailwindcss.com/docs. Search for what you need. Copy classes. Iterate in the browser. That's it.
The full feature loop
Simulated ticket: "Add a status field to tasks (todo / in-progress / done) with the ability to update from the UI."
Add status to Prisma, migrate
enum TaskStatus {
  TODO
  IN_PROGRESS
  DONE
}

// Add to Task model:
status TaskStatus @default(TODO)
npx prisma migrate dev --name add-status
Update router, add updateStatus mutation
updateStatus: publicProcedure
  .input(z.object({
    id: z.string(),
    status: z.enum(["TODO", "IN_PROGRESS", "DONE"]),
  }))
  .mutation(({ ctx, input }) =>
    ctx.db.task.update({
      where: { id: input.id },
      data: { status: input.status },
    })
  ),
Status dropdown + cache invalidation
function StatusBadge({ task }) {
  const utils = api.useUtils();
  const update = api.task.updateStatus.useMutation({
    onSuccess: () => utils.task.getAll.invalidate(),
  });

  return (
    <select
      value={task.status}
      onChange={(e) =>
        update.mutate({
          id: task.id,
          status: e.target.value,
        })
      }
    >
      <option value="TODO">To Do</option>
      <option value="IN_PROGRESS">In Progress</option>
      <option value="DONE">Done</option>
    </select>
  );
}
This is what your day-to-day looks like. Requirement, schema, backend, frontend. Every feature, same loop.
Every feature, same loop
Schema
Add/change models in schema.prisma. Run migrate dev. Types auto-generate.
Backend
Add/update tRPC procedures. Zod validates input. Prisma handles queries. Return type flows to client.
Frontend
useQuery to read, useMutation to write, invalidate to refresh.
// Today's assignment
- Build one primary Airtable-style screen for your chosen workflow
- Wire reads with useQuery: at least two related datasets, with loading/error/empty states
- Wire writes with useMutation: implement create, edit, and delete interactions for your core record type
- After each mutation, keep UI/server in sync using invalidateQueries or setQueryData intentionally
- Demo a full flow live: change data, refresh page, verify DB + UI stay consistent
- Add optimistic inline edit with rollback on failure
- Add URL-driven filters (status/search) and include them in query keys
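For the optimistic-edit item, the core pattern can be sketched without the library: snapshot the cached value, apply the edit immediately, and roll back if the write fails. This mirrors React Query's onMutate/onError flow, but the cache here is a plain Map and optimisticRename is an invented helper name.

```typescript
type Task = { id: string; title: string };

const cache = new Map<string, Task[]>();
cache.set("tasks", [{ id: "t1", title: "Fix login bug" }]);

async function optimisticRename(
  id: string,
  title: string,
  save: (id: string, title: string) => Promise<void>
) {
  const previous = cache.get("tasks") ?? []; // snapshot (like onMutate)
  cache.set(
    "tasks",
    previous.map((t) => (t.id === id ? { ...t, title } : t))
  );
  try {
    await save(id, title);
  } catch {
    cache.set("tasks", previous); // rollback (like onError)
  }
}

const failingSave = async (_id: string, _title: string) => {
  throw new Error("network");
};

// A failing save leaves the cache unchanged:
const demoDone = optimisticRename("t1", "Fix login bug ASAP", failingSave);
demoDone.then(() => {
  console.log(cache.get("tasks")?.[0]?.title); // "Fix login bug"
});
```

In the real hook you would also cancel in-flight refetches for the key before snapshotting, then invalidate on settle.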
Keep building
T3 Stack
create.t3.gg — docs, tutorials, community
Prisma
prisma.io/docs — schema, queries, migrations
tRPC
trpc.io/docs — routers, middleware, errors
TanStack Query
tanstack.com/query — caching, mutations
Next.js
nextjs.org/docs — app router, SSR, deploy
Tailwind CSS
tailwindcss.com/docs — utilities, responsive
The live hours plant the seed. The project work is where it takes root.