
To deploy Next.js on Render in 2026, create a Web Service from your GitHub repo, set the build command to npm run build and the start command to npm start, then add the env vars your app reads at runtime. Render auto-binds $PORT for next start, so no extra flags. For pure static exports (output: 'export' in next.config.js), pick Static Site instead and publish the out directory.
That is the 30-second answer. The rest of this post is the playbook we use to run Cadence itself on Render: a real render.yaml, the persistent-disk gotcha that quietly breaks Next/Image, the 9 cron jobs we ship, and an honest take on when you should pick Vercel instead.
Vercel makes Next.js feel magic until the bill arrives. The classic horror story is the indie team that wakes up to a $4,000 weekly invoice because a viral post melted their bandwidth quota, or because their image-optimization budget exploded after a Hacker News spike. Vercel publishes spend caps now, but the structural problem hasn't changed: you don't fully control your runtime, and pricing is metered per request, per GB-second, per image transform.
Render is the boring inverse. You pay for a container of a known size (e.g. $25/month for a Standard 2 GB instance), plus a Postgres line item, plus disk. Bandwidth is generous and predictable. The bill on the 1st of the month is the bill on the 1st of every month, give or take a Postgres tier upgrade.
Vercel still wins at three specific things in 2026: the global edge runtime (V8 isolates running close to the user), zero-config ISR with on-demand revalidation, and image optimization without you thinking about a disk. If your product is a content site that absolutely depends on edge latency in 30 regions, Vercel is the right call. For everything else (SaaS dashboards, internal tools, API-heavy products, anything with a Postgres in front), Render is calmer and cheaper.
We run Cadence on Render. Postgres, Key Value, web service, and 9 cron jobs, all in render.yaml. The bill is predictable to within 5% month over month.
Render has two service types that can host Next.js, and they map cleanly to the two next build outputs.
| Service type | Use when | Build command | Start command | Persistent disk? |
|---|---|---|---|---|
| Web Service | SSR, API routes, ISR, middleware, any server logic | npm run build | npm start | Yes, for ISR + Next/Image cache |
| Static Site | output: 'export' in next.config.js, no API routes | npm run build | (none) | No |
If you're shipping App Router with route handlers, server actions, or middleware, you want a Web Service. Period. A Static Site cannot run a Node process, so anything beyond pre-rendered HTML breaks at request time.
If you're shipping a marketing site or a docs portal with output: 'export', the Static Site option is genuinely great: free SSL, global CDN, atomic deploys, deploy hooks for headless CMS rebuilds, and the price is $0 for the static-site product itself. Don't over-engineer it onto a Web Service just because you might add an API route someday.
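If you do go the Static Site route, the export config is small. A minimal sketch of a `next.config.mjs` (the `images.unoptimized` flag is there because the default next/image optimizer needs a server, which a static export doesn't have):

```javascript
// next.config.mjs — minimal static-export config (sketch).
// `output: 'export'` makes `next build` emit plain HTML/CSS/JS into ./out,
// which is the directory Render's Static Site publishes.
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
  // The default next/image optimizer requires a running server; for a
  // static export, either disable optimization or use a third-party loader.
  images: { unoptimized: true },
};

export default nextConfig;
```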
Click-ops works for the first deploy. After that, you want infrastructure as code. Here is a trimmed version of what we actually run, with the secrets and project-specific names changed.
```yaml
# render.yaml
previewsEnabled: true
previewsExpireAfterDays: 7

services:
  - type: web
    name: cadence-web
    runtime: node
    plan: standard
    region: oregon
    branch: main
    rootDir: apps/web
    buildCommand: npm ci && npm run build
    startCommand: npm start
    healthCheckPath: /api/health
    autoDeploy: true
    envVars:
      - key: NODE_ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: cadence-pg
          property: connectionString
      - key: REDIS_URL
        fromService:
          type: keyvalue
          name: cadence-cache
          property: connectionString
      - key: AUTH_SECRET
        sync: false
      - key: STRIPE_SECRET_KEY
        sync: false
    disk:
      name: nextcache
      mountPath: /opt/render/project/src/apps/web/.next/cache
      sizeGB: 1

  - type: cron
    name: nightly-billing
    runtime: node
    schedule: "0 7 * * *"
    buildCommand: npm ci && npm run build
    startCommand: node apps/web/scripts/nightly-billing.js
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: cadence-pg
          property: connectionString

databases:
  - name: cadence-pg
    plan: standard
    region: oregon
    postgresMajorVersion: "16"
    ipAllowList: []
```
A few things worth calling out. previewsEnabled: true flips on per-PR preview environments (more on those below). sync: false on a secret tells Render to require manual entry per environment, which is the pattern you want for anything Stripe-shaped. The disk block is the part most teams forget, and it's the single most common cause of "ISR worked yesterday and broke today." The cron block runs a real container on a schedule, which is the thing serverless cron platforms can't match.
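For that `healthCheckPath` to return something, the app needs a matching route. A minimal sketch of an App Router handler at `app/api/health/route.js` (the response body shape is our own convention; Render only cares that it gets a 2xx):

```javascript
// app/api/health/route.js — handler for Render's healthCheckPath (sketch).
// Render polls this endpoint; any 2xx keeps the instance in rotation,
// and a failing check blocks a new deploy from going live.
export async function GET() {
  // Keep this cheap: no database round-trip, or a slow query can make
  // a healthy instance look dead to the health checker.
  return Response.json({ status: 'ok', uptime: process.uptime() });
}
```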
Follow these in order. The whole sequence takes about 25 minutes the first time, 4 minutes for every project after.
1. **Connect the repo.** Create a Web Service from your GitHub repo with `main` as the deploy branch. If you're in a monorepo, set Root Directory to your Next.js app path (e.g. `apps/web`).
2. **Set build and start commands.** Build: `npm run build` (or `pnpm install && pnpm build`, or `turbo run build --filter=web` for a Turborepo). Start: `npm start`. Render auto-detects Node and picks a sensible version, but pin it with a `.nvmrc` or `engines.node` field in `package.json` to avoid build drift. Use Node 20 LTS or 22 LTS in 2026.
3. **Add environment variables:** `DATABASE_URL`, `AUTH_SECRET`, `STRIPE_SECRET_KEY`, and any third-party keys. For anything sensitive, mark it as a secret (the `sync: false` pattern in render.yaml). If you provision Render Postgres or Key Value in the same dashboard, you can wire the connection string in via `fromDatabase` or `fromService` and never paste it manually. This is the same discipline you'd use when you implement authentication with a managed provider: don't put production credentials in your repo.
4. **Add a custom domain,** e.g. `app.example.com`. Render gives you either a CNAME target or an A/ALIAS record. Add it to your DNS, and Render auto-issues a Let's Encrypt cert within a few minutes. For an apex domain (`example.com`), use ALIAS or ANAME if your DNS provider supports them; otherwise CNAME flatten via Cloudflare.
5. **Turn on preview environments** with `previewsEnabled: true` in render.yaml (or toggle it in the dashboard). Every new pull request now spins up its own Web Service, optionally its own Postgres branch and Key Value instance, and gives you a unique URL like `cadence-web-pr-142.onrender.com`. Render tears it down when the PR closes or hits the expiry window. This is the one feature that turns Render from "Heroku replacement" into "Vercel replacement for teams."

That's the loop. Push to a branch, open a PR, get a preview URL, merge to main, prod redeploys with a zero-downtime health-check rollover. No bespoke CI required.
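The Node pin mentioned above can live in `package.json` (the versions here are illustrative, not a recommendation):

```json
{
  "engines": { "node": ">=20 <21" },
  "packageManager": "pnpm@9.0.0"
}
```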
Most platforms force you to glue together five vendors: web host, Postgres, Redis, queue, and cron. Render bundles all of them, and the prices are honest.
Render Postgres ships with IPv4, automatic daily backups, point-in-time recovery on the Standard tier and up, and (importantly in 2026) a built-in PgBouncer-style connection pooler. The pooler matters because every Next.js instance opens its own set of database connections, and under preview-deploy churn a 200-connection Postgres exhausts fast. Use the pooled connection string for the app and the direct string for migrations.
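The pooled/direct split can be made explicit in code. A minimal sketch, assuming two env vars: `DATABASE_URL` for the pooled string and `DIRECT_DATABASE_URL` for the direct one (our naming convention, not Render's):

```javascript
// db-url.mjs — choose pooled vs direct Postgres URL (sketch; the env var
// names are our own convention, not something Render mandates).
export function pickDatabaseUrl({ forMigrations = false } = {}) {
  // App traffic goes through the pooler so many Next.js instances share
  // a small number of real Postgres connections.
  const pooled = process.env.DATABASE_URL;
  // Migrations want a direct connection: transaction-mode poolers break
  // advisory locks and long-lived DDL sessions.
  const direct = process.env.DIRECT_DATABASE_URL ?? pooled;
  return forMigrations ? direct : pooled;
}
```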
Render Key Value is a Redis-API-compatible store. Point your ioredis or node-redis client at it. (The @upstash/redis client won't work here: it speaks Upstash's HTTP API, not the Redis wire protocol.) We use it for session storage, rate limiting, and cache invalidation. It's not Upstash-cheap at the bottom tier, but it's in the same network as your web service, so latency is sub-millisecond.
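Rate limiting against Key Value is a few lines with the classic INCR-plus-EXPIRE fixed-window pattern. A sketch that works with any Redis-compatible client exposing `incr` and `expire` (ioredis does); the function name and key scheme are our own:

```javascript
// rate-limit.mjs — fixed-window rate limiter (sketch).
export async function allowRequest(redis, key, { limit = 60, windowSecs = 60 } = {}) {
  // Bucket the key by window so old counters simply age out.
  const bucket = `rl:${key}:${Math.floor(Date.now() / 1000 / windowSecs)}`;
  const count = await redis.incr(bucket);
  if (count === 1) {
    // First hit in this window: make the counter expire with the window.
    await redis.expire(bucket, windowSecs);
  }
  return count <= limit;
}
```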
Render Cron Jobs are the underrated feature. A cron job is a full container that runs on a schedule, with the same env vars and disk access as your web service. Cadence runs 9 of them: nightly billing reconciliation, weekly engineer-rating recompute, hourly Stripe webhook sweep, daily SEO drift check, and so on. On Vercel you'd be reaching for Inngest or QStash and an extra $50/month. On Render it's a type: cron block in the same yaml file.
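A cron entrypoint is just a Node script that does its work and exits. A sketch of the shape (the reconciliation logic is stubbed and the script name is illustrative; exiting non-zero is what makes a failed run show up as failed in Render's dashboard):

```javascript
// scripts/nightly-billing.mjs — shape of a Render cron entrypoint (sketch).
export async function run(fetchUnbilled, reconcile) {
  const rows = await fetchUnbilled();
  let done = 0;
  for (const row of rows) {
    // A cron container has no request timeout, so this loop can
    // comfortably take minutes.
    await reconcile(row);
    done += 1;
  }
  return done;
}

// Only run when invoked directly by Render (`node scripts/nightly-billing.mjs`).
if (process.argv[1]?.endsWith('nightly-billing.mjs')) {
  run(async () => [], async () => {}).then(
    (n) => { console.log(`reconciled ${n} rows`); process.exit(0); },
    (err) => { console.error(err); process.exit(1); },
  );
}
```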
If you've ever tried to handle email deliverability for a SaaS, you know the value of a real cron container that can sit and process a queue for 5 minutes without a 10-second function timeout breathing down its neck.
This is the one that quietly breaks Next.js apps on Render. Both next/image (when the loader is default) and ISR (revalidate) write to .next/cache. By default, Render Web Services have ephemeral filesystems: every deploy and every restart wipes the disk. So your image cache and your ISR cache rebuild from scratch on every push, which means your first 100 visitors after a deploy hit slow paths.
The fix is one block in render.yaml:
```yaml
disk:
  name: nextcache
  mountPath: /opt/render/project/src/.next/cache
  sizeGB: 1
```
A 1 GB disk costs $0.25/month and solves the problem entirely. If you serve a lot of images, bump it to 5 GB. If you don't use next/image and you don't use ISR, skip the disk and don't pay for what you don't need.
For very heavy image workloads, swap to a third-party loader like Cloudinary or imgix and put the loader behind an env var. That gets you global edge image delivery without holding state on your Render instance.
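Next.js supports this via `images: { loader: 'custom', loaderFile: './image-loader.js' }` in next.config. A sketch of a Cloudinary-style loader (the cloud name `demo` is a placeholder; swap in your own, or read it from an env var):

```javascript
// image-loader.mjs — custom next/image loader (sketch).
// next/image calls this with the source path plus the width and quality
// it wants, and uses the returned URL instead of the built-in optimizer.
export default function cloudinaryLoader({ src, width, quality }) {
  const params = ['f_auto', 'c_limit', `w_${width}`, `q_${quality ?? 'auto'}`];
  return `https://res.cloudinary.com/demo/image/upload/${params.join(',')}${src}`;
}
```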
Render runs in Oregon, Virginia, Ohio, Frankfurt, and Singapore as of 2026. Pick the region that matches where your users live, not where you live. We chose Singapore for one Cadence-adjacent app because the engineer pool skews SE Asia. The latency drop from a US region to Singapore for that audience was 240ms to 30ms, which is the difference between "feels broken" and "feels native."
Three rules:
Now for the honesty section: Render is not the answer for everyone.
For a deeper take on the platform itself, our honest Render review walks through 18 months of running production traffic on it. And if you're still on the fence, the Vercel-side playbook covers the same deploy in a different shape so you can pick with eyes open.
If you have a Next.js app and you're tired of either Vercel surprise bills or AWS yak-shaving, spend an hour wiring up a render.yaml and pushing to a branch. The preview URL alone usually sells the case to the rest of the team. If your app is more complex (monorepo, custom server, edge-heavy), pull in someone who's done this before. Every engineer on Cadence is AI-native by default (vetted on Cursor, Claude Code, and Copilot fluency before they unlock the platform), and a senior, $1,500/week, will have your render.yaml, Postgres, Redis, and cron jobs in production by Friday.
If you're earlier than that and just want a sanity check on the whole stack before you migrate, our Ship or Skip stack audit gives you an honest grade on what to keep, what to swap, and what to delete.
Either way, the migration loop is short. The boring deploy is the good deploy.
Want a Render-fluent engineer on your repo this week? Cadence shortlists 4 vetted engineers in 2 minutes, with a 48-hour free trial.
**Can I use the Next.js edge runtime on Render?** No. Render runs Next.js as a long-lived Node process, so middleware and route handlers run in the Node runtime. The edge runtime (V8 isolates) is a Vercel-specific deploy target. If you depend on edge functions for sub-50ms latency in 30 regions, Render is the wrong call. For 95% of SaaS apps, the Node runtime is what you want anyway.
**What do preview environments cost?** Preview environments are included on the Pro plan ($19/user/month) and above. Each preview spins up its own Web Service, plus Postgres and Key Value if you opt in via render.yaml, and they shut down when the PR closes or hits the expiry window. The compute is metered as if it were a small Web Service, so a busy team with 20 open PRs might add $30 to $80 of preview compute per month.
**Does Render support monorepos?** Yes. Set rootDir in render.yaml to the package path (e.g. apps/web) and Render installs and builds from there. For Turborepo, your build command becomes turbo run build --filter=web. For Nx, it's nx build web. Pin your package manager via packageManager in the root package.json so Render uses the right one.
**Does next start pick up Render's port automatically?** Yes. next start reads process.env.PORT by default and binds to 0.0.0.0. You do not need to pass -p or set HOST. If you run a custom server (Express, Fastify, Hono), you must explicitly read process.env.PORT and bind to 0.0.0.0, otherwise Render's health check will fail and the deploy will roll back.
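For the custom-server case, the port handling is the part worth getting right. A minimal sketch (the `resolvePort` helper is our own; Render only requires that you bind its injected `PORT` on 0.0.0.0):

```javascript
// server.mjs — port binding for a custom server on Render (sketch).
import http from 'node:http';

// Render injects PORT; fall back to 3000 for local development.
export function resolvePort(env = process.env) {
  return Number.parseInt(env.PORT ?? '3000', 10);
}

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'content-type': 'text/plain' });
  res.end('ok');
});

// Bind 0.0.0.0, not localhost, or Render's health check can't reach you.
// Guarded so importing this module (e.g. in tests) doesn't open a socket.
if (process.argv[1]?.endsWith('server.mjs')) {
  server.listen(resolvePort(), '0.0.0.0');
}
```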
**How do I run background jobs or a queue?** Use Render Background Workers (a service type that's just a long-running container with no public port) plus BullMQ or Inngest pointed at your Render Key Value instance. The same render.yaml file holds the worker block, so the deploy story stays one-file-clean.
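A worker block in render.yaml looks like this, appended to the existing services list (the names, plan, and script path are illustrative):

```yaml
services:
  - type: worker
    name: cadence-queue-worker
    runtime: node
    plan: starter
    buildCommand: npm ci && npm run build
    startCommand: node apps/web/scripts/queue-worker.js
    envVars:
      - key: REDIS_URL
        fromService:
          type: keyvalue
          name: cadence-cache
          property: connectionString
```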