
To optimize React performance in 2026, install the React Compiler, delete most of your useMemo and useCallback calls, and push data-fetching components to the server. Everything after that (transitions, virtualized lists, Suspense streaming) is second-order work that compounds on those three changes.
The teams that still spend their Friday afternoon hand-memoizing components are running 2023 plays. The ones that ship the smoothest apps treat memoization as a compiler concern, treat data fetching as a server concern, and only reach for runtime tricks when the profiler tells them to.
Three shifts reset the playbook:

- The React Compiler memoizes components and values automatically, turning most hand-written useMemo and useCallback into dead weight.
- Data fetching that used to run as a client-side useEffect waterfall now runs on the server and streams HTML.
- Concurrent primitives (useTransition, useDeferredValue) handle input responsiveness that manual memoization never could.

The performance budget moved. If your perf doc still leads with "wrap callbacks in useCallback," it's outdated. The same energy is now better spent on stack-level decisions, the kind we cover in how to choose a tech stack for your startup in 2026.
For five years, the standard advice was a checklist: React.memo your components, wrap every callback in useCallback, wrap every derived value in useMemo, then split your bundle by route.
That advice produced three predictable failure modes.
Memo sprawl. Devs wrap everything, the dependency arrays drift, and you end up with stale closures that re-render anyway. Worse, every memo call is itself JavaScript that runs on every render. On a hot path with 200 children, manual memo can cost more than it saves.
Premature React.memo. Wrapping a component in React.memo only helps if the parent passes the same props. If the parent is also rebuilding objects on every render (the common case), the memo wrapper adds an equality check and then re-renders anyway.
Cargo-cult code splitting. Splitting a 200KB bundle into ten 20KB chunks does not make the page faster if you still need all ten on first paint. You moved bytes around without reducing them.
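To see why premature React.memo falls flat: memo's default comparison is shallow, so freshly built object and function props defeat it every time. A quick simulation of that check (`shallowEqual` here is a stand-in for React's internal comparison, not a public API):

```javascript
// Stand-in for React.memo's default shallow prop comparison
function shallowEqual(prevProps, nextProps) {
  const keys = Object.keys(prevProps)
  if (keys.length !== Object.keys(nextProps).length) return false
  return keys.every(k => Object.is(prevProps[k], nextProps[k]))
}

// Parent re-renders and rebuilds its props inline — the common case
const prevRender = { user: { id: 1 }, onClick: () => {} }
const nextRender = { user: { id: 1 }, onClick: () => {} }

// Same data, new references: memo's check fails and the child re-renders anyway
console.log(shallowEqual(prevRender, nextRender)) // false
```

The equality check ran, the wrapper cost was paid, and the child re-rendered regardless.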
DebugBear's 2025 teardowns of mid-size React apps showed that roughly 60 to 80 percent of manual useMemo calls in a typical codebase had no measurable effect on render time. The compiler is the cleanup pass we never wrote ourselves.
Here is the sequence we run on a typical client-heavy React app, from highest payoff to lowest.
This is the single highest-impact change in the post.
```bash
npm install --save-dev --save-exact babel-plugin-react-compiler@latest
npm install --save-dev eslint-plugin-react-hooks@latest
```
Then add the plugin to your build config. For Next.js 15:
```js
// next.config.js
module.exports = {
  experimental: {
    reactCompiler: true,
  },
}
```
For Vite:
```ts
// vite.config.ts
import react from '@vitejs/plugin-react'

export default {
  plugins: [
    react({
      babel: {
        plugins: ['babel-plugin-react-compiler'],
      },
    }),
  ],
}
```
Run the new eslint-plugin-react-hooks preset with the recommended rules enabled. It catches the places where your code violates the Rules of React, which is the only thing that breaks the compiler. Fix what it flags, then ship.
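If you're on the classic ESLint config format, the wiring is one preset (plugin and preset names below are the published ones at time of writing; check the plugin's README if you've moved to flat config):

```javascript
// .eslintrc.cjs — sketch; assumes eslint-plugin-react-hooks is installed
module.exports = {
  plugins: ['react-hooks'],
  extends: ['plugin:react-hooks/recommended'],
}
```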
Once the compiler is on, the next pass is destructive. Open your codebase and search for useMemo, useCallback, and React.memo. For each hit, ask: does this exist to satisfy a useEffect dependency, or to keep a context value stable across providers?
If yes, keep it. If no, delete it.
```jsx
// Before
const handleClick = useCallback(() => {
  setCount(c => c + 1)
}, [])

const filtered = useMemo(() => items.filter(i => i.active), [items])

// After (the compiler does the work)
const handleClick = () => {
  setCount(c => c + 1)
}

const filtered = items.filter(i => i.active)
```
A 12,000-line React codebase we audited last quarter had 410 manual memo calls. We deleted 320: the bundle shrank by 4KB gzipped, the compile output got cleaner, and not a single render-count regression showed up in the React DevTools Profiler.
The biggest LCP and TTI wins in 2026 come from moving fetches off the client. A typical pattern looks like this in Next.js 15:
```tsx
// app/dashboard/page.tsx
async function getOrders(userId: string) {
  return await db.orders.findMany({ where: { userId } })
}

export default async function DashboardPage({
  params,
}: {
  params: Promise<{ userId: string }>
}) {
  // Next.js 15 makes params a Promise; await it before use
  const { userId } = await params
  const orders = await getOrders(userId)
  return <OrdersTable orders={orders} />
}
```
No useEffect, no loading skeleton on first paint, no JSON parse cost on the client. The HTML streams from the server with the data already inlined. If you're new to this pattern, our walkthrough on Server Actions in Next.js 15 covers the mutation half of the same story.
Real before/after on a sample dashboard we built (1,200 rows, three filter dropdowns):
| Metric | Client-side fetch | Server Components |
|---|---|---|
| LCP | 2.4s | 0.9s |
| TTI | 3.1s | 1.2s |
| First-load JS | 312KB | 184KB |
This was the same data, the same UI, the same network. The only change was where the fetch ran.
The compiler does not fix input lag on heavy filters or search boxes. That work needs concurrent rendering primitives.
```jsx
function ProductSearch() {
  // query drives the input (urgent); filterQuery drives the expensive list
  const [query, setQuery] = useState('')
  const [filterQuery, setFilterQuery] = useState('')
  const [isPending, startTransition] = useTransition()

  const results = expensiveFilter(allProducts, filterQuery)

  return (
    <>
      <input
        value={query}
        onChange={(e) => {
          const next = e.target.value
          setQuery(next) // urgent: the input never lags
          startTransition(() => setFilterQuery(next)) // low priority: results catch up
        }}
      />
      {isPending && <Spinner />}
      <ResultList items={results} />
    </>
  )
}
```
The pattern is simple: useTransition marks the expensive update as low-priority so the urgent input update stays responsive; useDeferredValue is the sibling primitive for when you don't own the state update — it lets you render a stale copy of the expensive output until React has a free moment to recompute.
On the same dashboard sample, adding useTransition to the filter inputs dropped INP from 340ms (failing CWV) to 110ms (passing). One afternoon of work. No new dependencies.
If you render more than 200 rows in a scrollable container, virtualize. TanStack Virtual is the headless option we reach for first.
```jsx
import { useRef } from 'react'
import { useVirtualizer } from '@tanstack/react-virtual'

function OrderList({ orders }) {
  const parentRef = useRef(null)
  const virtualizer = useVirtualizer({
    count: orders.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 56,
    overscan: 8,
  })

  return (
    <div ref={parentRef} style={{ height: 600, overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map(v => (
          <div
            key={v.key}
            style={{
              position: 'absolute',
              top: 0,
              left: 0,
              width: '100%',
              height: v.size,
              transform: `translateY(${v.start}px)`,
            }}
          >
            <OrderRow order={orders[v.index]} />
          </div>
        ))}
      </div>
    </div>
  )
}
```
We use react-window interchangeably for simpler cases. The win is the same: only the visible rows render, scroll FPS stays at 60, and memory pressure drops on long sessions.
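Under the hood, a virtualizer is mostly arithmetic. A minimal sketch of the visible-window calculation (fixed row height assumed; real libraries also handle measurement and dynamic sizes):

```javascript
// Which row indices should be mounted for a given scroll position?
function visibleRange({ scrollTop, viewportHeight, itemSize, count, overscan }) {
  const firstVisible = Math.floor(scrollTop / itemSize)
  const lastVisible = Math.ceil((scrollTop + viewportHeight) / itemSize) - 1
  return {
    start: Math.max(0, firstVisible - overscan),
    end: Math.min(count - 1, lastVisible + overscan),
  }
}

// 1,200 rows at 56px in a 600px viewport: only ~11 rows plus overscan mount
console.log(visibleRange({ scrollTop: 0, viewportHeight: 600, itemSize: 56, count: 1200, overscan: 8 }))
// → { start: 0, end: 18 }
```

That's why memory pressure drops: 19 mounted rows instead of 1,200, no matter how long the list grows.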
Wrap slow data dependencies in <Suspense> so the rest of the page can paint. With Next.js 15 and the loading.tsx convention, you get this almost for free, but explicit boundaries inside a route give you more granular streaming.
```jsx
import { Suspense } from 'react'

export default function ReportPage() {
  return (
    <>
      <Header />
      <Suspense fallback={<TopMetricsSkeleton />}>
        <TopMetrics />
      </Suspense>
      <Suspense fallback={<ChartSkeleton />}>
        <SlowChart />
      </Suspense>
    </>
  )
}
```
The header paints first, top metrics stream in once their query resolves, and the chart streams in whenever it's ready. The user sees progress instead of a single long blank.
If the React perf rollout above sounds like a quarter of work you don't have the bandwidth for, the Cadence ship-or-skip audit gives you an honest grade on which of these six steps your stack actually needs first. It takes about 4 minutes and tells you what to delete before you build.
The compiler is not a total replacement. Three cases still warrant explicit useMemo or useCallback:
- A value in a useEffect dependency array that you want referentially stable so the effect doesn't fire on every render. The compiler optimizes rendering, not effect identity.
- Props handed to a third-party library that compares by reference (chart libraries, grid components). Keep the useMemo to keep them happy.
- A context value fanned out to many consumers, where one unstable object would re-render the whole subtree.

Outside those cases, manual memo is residue. Treat it like a manual for loop in modern TypeScript: occasionally necessary, but mostly a sign of older code.
Million.js had a strong 2023 and 2024. The premise (a block-based virtual DOM that out-runs Fiber on huge lists) was real, and it shipped in plenty of production apps.
In 2026, the picture is narrower. The React Compiler eats most of the gains Million advertised on typical app workloads. The Million team has been honest about this and the project's positioning shifted toward specialist UIs: trading dashboards with thousands of live cells, log viewers, spreadsheet-grade tables.
If that's your product, Million still measurably wins. If your app is a typical SaaS dashboard, you no longer need it. Install the compiler, virtualize where it matters, and skip the extra dependency.
Don't optimize in the dark. Three tools cover 95% of what you need.
React DevTools Profiler. Record a typical interaction (filter, scroll, tab switch). The flame graph shows you which components rendered, how long each took, and why each rendered. Most surprises in real codebases come from a context provider higher up the tree changing on every render. The Profiler will point right at it.
Lighthouse CI for INP. Run lhci autorun in your CI pipeline against the routes that get the most traffic. Set a budget of 200ms INP at the 75th percentile. Fail the build if it regresses. INP is the Core Web Vitals metric Google now weights heaviest for interaction-heavy pages.
PageSpeed Insights API. For production monitoring, hit the PSI API nightly for your top 10 routes and pipe the field data (real Chrome users, not lab) into a dashboard. Lab data lies; field data is what Google ranks on. If you're calling the PSI API at scale, you'll want to read up on how to add rate limiting to your API or you will trip the 25,000 requests-per-day quota faster than you'd think.
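A minimal sketch of that nightly check. The endpoint and response shape follow the public PSI v5 docs; the `INTERACTION_TO_NEXT_PAINT` key and `percentile` field are our reading of that schema, so verify against the current reference before wiring up alerts:

```javascript
const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed'

// Pull the p75 field INP (real Chrome users, not lab) out of a PSI response
function fieldInp(psiResponse) {
  const metric = psiResponse?.loadingExperience?.metrics?.INTERACTION_TO_NEXT_PAINT
  return metric ? metric.percentile : null // milliseconds, or null when there's no field data
}

// Requires Node 18+ for global fetch
async function checkRoute(url, apiKey) {
  const res = await fetch(
    `${PSI_ENDPOINT}?url=${encodeURIComponent(url)}&strategy=mobile&key=${apiKey}`
  )
  return fieldInp(await res.json())
}
```

Run `checkRoute` over your top 10 routes on a nightly cron and pipe the numbers into whatever dashboard you already watch.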
A good cadence: profile locally before you ship, run Lighthouse in CI on every PR, watch field data weekly.
Best practices have ROI curves. Respect them.
If you are two founders pre-revenue with 40 daily active users, your perf budget is "the page renders before they click away." React 19 with the compiler off and zero manual optimization will hit that bar. Don't spend a sprint on Suspense boundaries when you don't have product-market fit.
There's a rough threshold of real traffic and interaction complexity where this work starts to pay back. Below it, ship features. Above it, work the playbook in order.
A senior engineer can run the full sequence (compiler install, manual-memo cleanup, RSC migration on the heaviest routes, INP fixes) in two to four weeks on a typical SaaS codebase. That's roughly $3,000 to $6,000 of engineer time at a Cadence senior tier of $1,500 per week. Every Cadence engineer is AI-native by default, vetted for Cursor, Claude, and Copilot fluency before they unlock bookings. That matters here: compiler migrations are exactly the kind of mechanical-but-careful work where AI-assisted refactoring speeds up the boring parts (codemodding the manual memo calls, generating the Suspense boundaries) without skipping the judgment calls (which boundaries actually help LCP).
If your team is mid-sized and you want a contractor to own this for a sprint, the book a senior engineer flow ships a vetted candidate inside 48 hours, free trial, weekly billing. If you'd rather keep the work in-house and need a second opinion on whether the rollout is worth doing this quarter at all, run the Cadence ship-or-skip audit on your stack first and decide from there.
Mostly no. The React Compiler memoizes for you. Keep useMemo and useCallback only as escape hatches for useEffect dependencies, library prop boundaries, and widely-fanned-out context values. Outside those three cases, delete them.
Only if your code already breaks the Rules of React (mutating props, conditional hooks, side effects in render). The new eslint-plugin-react-hooks recommended preset will tell you what to fix before you ship the compiler.
No, but it's no longer the first reach. The React Compiler covers the common case. Million still wins for trading dashboards, log viewers, and other list-heavy specialist UIs where individual re-render cost matters more than re-render frequency.
Under 200ms at the 75th percentile of real users. That's the Core Web Vitals "good" threshold. Anything from 200 to 500ms is "needs improvement," and above 500ms is "poor" and will hurt your search rankings.
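Those bands are easy to encode if you're already piping field data into a dashboard:

```javascript
// Core Web Vitals INP bands, applied to a p75 value in milliseconds
function inpRating(p75Ms) {
  if (p75Ms <= 200) return 'good'
  if (p75Ms <= 500) return 'needs-improvement'
  return 'poor'
}

console.log(inpRating(110)) // → 'good' (the post-useTransition dashboard number)
console.log(inpRating(340)) // → 'needs-improvement' (the before number)
```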
For a typical SaaS codebase, a senior engineer can install the compiler, clean up manual memoization, migrate the heaviest routes to Server Components, and fix the worst INP offenders in two to four weeks. The compiler install itself is a single afternoon if your code already follows the Rules of React.
Only if you have a specialist UI (trading grid, spreadsheet, log viewer) where you've profiled and confirmed Million wins. The two layers solve different problems (the compiler reduces re-render frequency; Million reduces per-re-render cost), and they coexist fine, but the operational cost of maintaining both is real. Default to compiler-only.