
To set up GitHub Actions for a Next.js app, you create a single .github/workflows/ci.yml that runs install, typecheck, lint, test, build, and Playwright end-to-end tests on every push, then deploys a preview environment per pull request and production on merge to main. The full playbook below covers caching, monorepos, OIDC for AWS, and the cost math that decides when GitHub-hosted runners stop being the cheap default.
This post is the YAML you wish someone had handed you on day one. Copy the blocks, swap the secrets, ship.
Three things shifted in the last 24 months, and they all pile work onto your pipeline.
First, App Router and Server Actions widened the surface a CI run actually has to cover. A button in a React component now triggers a server function that talks to your database. If you only run next build, you have proven the bundle compiles. You have not proven the action works.
Second, Vercel preview URLs got commoditized. Vercel, Netlify, Render, and Cloudflare all give you a URL per branch for free. That is great for design review and useless for catching the bug that crashes when an authenticated user submits a form. You still need a gate before merge.
Third, AI-assisted PRs ship two to three times more code per developer than they did in 2023. A small startup with three engineers using Cursor, Claude Code, and Copilot can open thirty PRs a week. CI becomes the bottleneck, not the writing. Every minute the workflow takes is a minute somebody is staring at a status check.
The 2026 default has to be: fast, gated, cheap, and keyless.
Open most public Next.js repos and you will see this in .github/workflows/ci.yml:
```yaml
name: CI
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm run build
```
It works. It also misses everything that catches real bugs.
No typecheck, so a tsc error lands on main any time someone uses // @ts-ignore to push a hot fix. No lint, so unused imports and broken hooks accumulate. No tests, so refactors silently delete behavior. No Playwright, so Server Actions never run end to end. No cache, so a 2-minute install runs on every push. No deploy step, so somebody is still clicking buttons in Vercel.
A real pipeline is longer, but each block earns its place.
Here is the workflow we use as the default starting point. Save it as .github/workflows/ci.yml in any Next.js app.
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true

jobs:
  verify:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - name: Cache .next/cache
        uses: actions/cache@v4
        with:
          path: ${{ github.workspace }}/.next/cache
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json') }}-${{ hashFiles('**/*.[jt]s', '**/*.[jt]sx') }}
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json') }}-
      - run: npm run typecheck # tsc --noEmit
      - run: npm run lint      # next lint or eslint .
      - run: npm test -- --ci  # vitest or jest
      - run: npm run build

  e2e:
    needs: verify
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - name: Cache Playwright browsers
        uses: actions/cache@v4
        with:
          path: ~/.cache/ms-playwright
          key: playwright-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
      - run: npx playwright install --with-deps chromium
      - run: npm run build
      - run: npx playwright test
      - if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 7
```
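The verify job leans on four npm scripts. Their exact contents vary by repo; here is a minimal sketch of the package.json entries it assumes (script bodies are illustrative, match them to your setup):

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "lint": "next lint",
    "test": "jest --ci",
    "build": "next build"
  }
}
```

If you use Vitest instead of Jest, `vitest run` is the non-watch equivalent; drop any flags it does not recognize.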
Three things to notice. The concurrency block kills stale runs when you push twice in quick succession, which alone saves 20 to 30 percent of minutes on an active repo. The .next/cache key includes a hash of source files, so incremental builds reuse webpack's module cache and finish in 30 to 90 seconds instead of 4 minutes. E2E runs as a separate job that depends on verify, so a typo in a test file does not block the cheaper static checks from giving you fast feedback. The same discipline shows up in our REST API design playbook: cheap checks first, expensive checks gated behind them.
The cleanest pattern is a single deploy job that knows whether it is shipping preview or production from the trigger.
For Vercel:
```yaml
deploy:
  needs: [verify, e2e]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with: { node-version: 20, cache: npm }
    - run: npm install --global vercel@latest
    - name: Pull Vercel env (preview)
      if: github.event_name == 'pull_request'
      run: vercel pull --yes --environment=preview --token=${{ secrets.VERCEL_TOKEN }}
    - name: Pull Vercel env (production)
      if: github.ref == 'refs/heads/main'
      run: vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
    - run: vercel build ${{ github.ref == 'refs/heads/main' && '--prod' || '' }} --token=${{ secrets.VERCEL_TOKEN }}
    - name: Deploy and capture URL
      id: deploy
      run: |
        URL=$(vercel deploy --prebuilt ${{ github.ref == 'refs/heads/main' && '--prod' || '' }} --token=${{ secrets.VERCEL_TOKEN }})
        echo "url=$URL" >> $GITHUB_OUTPUT
    - name: Comment preview URL on PR
      if: github.event_name == 'pull_request'
      uses: thollander/actions-comment-pull-request@v3
      with:
        message: "Preview: ${{ steps.deploy.outputs.url }}"
        comment-tag: preview-url
```
For Render, swap the deploy block for a curl to the Render Deploy Hook URL stored as a secret. For Cloudflare Pages, replace it with cloudflare/pages-action@v1 using CF_API_TOKEN and CF_ACCOUNT_ID secrets. The shape stays identical: build once, deploy with a flag that flips on main.
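For reference, the Render variant collapses the deploy job to a single step. RENDER_DEPLOY_HOOK_URL is a name we made up for this sketch; store the deploy hook URL from your Render service's settings under any secret name:

```yaml
deploy:
  needs: [verify, e2e]
  if: github.ref == 'refs/heads/main'
  runs-on: ubuntu-latest
  steps:
    # Hitting the deploy hook tells Render to pull the branch and build.
    # --fail turns a non-2xx response from Render into a red job.
    - run: curl --fail --silent --show-error "${{ secrets.RENDER_DEPLOY_HOOK_URL }}"
```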
The --prebuilt flag on Vercel is the most underused optimization in this whole file. It tells Vercel to skip its own build because you already built in CI, cutting deploy time roughly in half.
GitHub Environments are the right home for secrets, not repository-level secrets. Create staging and production environments under Settings, scope each one's secrets, and turn on Required Reviewers on production. Now a deploy to prod cannot happen without a human approving the workflow run, even if every gate passes. This is the same gating discipline we describe in setting up DKIM, SPF, and DMARC for SaaS: make the dangerous action explicit.
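Wiring a job to an environment is one key. A sketch of the shape (steps elided; the environment name must match what you created under Settings):

```yaml
deploy-prod:
  needs: [verify, e2e]
  if: github.ref == 'refs/heads/main'
  runs-on: ubuntu-latest
  # Secrets referenced in this job resolve against the production
  # environment, and Required Reviewers pause the run here for approval.
  environment: production
  steps:
    - run: echo "deploy steps go here"
```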
For anything that touches AWS (S3 uploads, Lambda deploys, ECS pushes), do not store an AWS_ACCESS_KEY_ID. Use OIDC. The setup is one IAM identity provider plus one IAM role with a trust policy scoped to your repo, then this in your workflow:
```yaml
permissions:
  id-token: write
  contents: read

steps:
  - uses: actions/checkout@v4
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-actions-deployer
      aws-region: us-east-1
  - run: aws s3 sync ./out s3://my-bucket
```
GitHub mints a JWT, AWS STS validates it against the trust policy, and your job runs with a 1-hour temporary credential. No keys in GitHub. No quarterly key rotation drama. If the role is ever misused, you revoke the IAM role, not 14 different access keys spread across 3 repos.
The trust policy on the IAM role is the security boundary. Scope it to a specific repo and branch:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
      "StringLike": { "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main" }
    }
  }]
}
```
The sub condition is critical. Without it, any workflow in any GitHub repo could assume your role.
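The sub claim takes other shapes when you need a role for preview deploys or environment-gated jobs. These are standard GitHub OIDC claim formats; the repo and environment names are placeholders:

```json
{
  "StringLike": {
    "token.actions.githubusercontent.com:sub": [
      "repo:my-org/my-repo:ref:refs/heads/main",
      "repo:my-org/my-repo:pull_request",
      "repo:my-org/my-repo:environment:production"
    ]
  }
}
```

In practice, keep the pull_request shape on a separate, less-privileged role so preview builds never hold production permissions.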
Matrix builds are for libraries that need to prove they work on multiple Node versions:
```yaml
test:
  runs-on: ubuntu-latest
  strategy:
    fail-fast: false
    matrix:
      node: [20, 22]
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with: { node-version: "${{ matrix.node }}", cache: npm }
    - run: npm ci
    - run: npm test
```
For an application repo, pick one Node version and stop there. Matrix doubles your minutes for no real coverage gain.
Monorepos are where caching changes the game. With Turborepo and remote cache, a workflow that only changed the docs site skips the entire web app build:
```yaml
env:
  TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
  TURBO_TEAM: ${{ vars.TURBO_TEAM }}

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with: { fetch-depth: 2 } # turbo needs the previous commit
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx turbo run build test lint --filter=...[HEAD^1]
```
The --filter=...[HEAD^1] flag only runs tasks for packages that changed (and their dependents) since the previous commit. Combined with remote cache, monorepo CI time typically drops 40 to 80 percent: a 12-minute build for a 6-package monorepo becomes 90 seconds when only one package changed.
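Both the filter and the remote cache depend on turbo knowing each task's dependency graph and outputs. A minimal turbo.json sketch (Turborepo 2.x syntax, where 1.x called the top-level key pipeline instead of tasks; the task names assume matching package scripts):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**"]
    },
    "test": { "dependsOn": ["build"] },
    "lint": {}
  }
}
```

Excluding .next/cache from outputs keeps webpack's incremental cache out of the turbo artifact, which is the documented pattern for Next.js apps in a Turborepo.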
If you are still picking the framework for a monorepo, our notes on choosing a tech stack for a startup cover the trade-offs between Turborepo, Nx, and a single-package repo.
This is the section nobody writes and everybody asks about.
GitHub Actions billing as of 2026, after the January price cuts:
| Runner | Cost per minute | Notes |
|---|---|---|
| Linux 2-core (standard) | $0.008 | Default, ~39% cheaper than 2025 |
| Linux 4-core | $0.012 | Worth it for cold builds >5 min |
| Linux 8-core | $0.022 | Diminishing returns above 4-core for Next.js |
| Windows 2-core | $0.010 | 1.25x Linux. Only if you actually need Windows. |
| macOS 3-4 core | $0.062 | Roughly 7-10x Linux. iOS builds only. |
Free tier on private repos: 2,000 minutes per month for Pro accounts, 3,000 for Team, 50,000 for Enterprise. Public repos are free at any scale.
Real cost math for a startup with 5 engineers shipping 100 PRs per month, average 5 minutes per CI run, 2 runs per PR (push + retry): 100 × 2 × 5 = 1,000 minutes a month, comfortably inside the 2,000-minute Pro free tier. Overage: $0.
Same team a year later, 25 engineers, 600 PRs per month, 8 minutes per run with E2E: 600 × 2 × 8 = 9,600 minutes. On a Team plan that is 6,600 billable minutes after the 3,000 free, or about $53 a month at $0.008 per minute. Real money, but nowhere near the cost of running your own fleet.
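The two scenarios above reduce to one formula: minutes = PRs × runs per PR × minutes per run, billed above the free tier at the Linux 2-core rate. A back-of-envelope sketch in bash; ci_cost and its argument order are inventions for this post, and the rate and free tiers are the figures quoted above:

```shell
#!/usr/bin/env bash
# Back-of-envelope CI spend: total minutes, then overage above the
# plan's free tier at the $0.008/min Linux 2-core rate.
ci_cost() {
  local prs=$1 runs_per_pr=$2 mins_per_run=$3 free_mins=$4
  local total=$(( prs * runs_per_pr * mins_per_run ))
  local billable=$(( total > free_mins ? total - free_mins : 0 ))
  awk -v t="$total" -v b="$billable" \
    'BEGIN { printf "%d min/month, $%.2f overage\n", t, b * 0.008 }'
}

ci_cost 100 2 5 2000   # 5 engineers on Pro   -> 1000 min/month, $0.00 overage
ci_cost 600 2 8 3000   # 25 engineers on Team -> 9600 min/month, $52.80 overage
```

Plug in your own PR volume before deciding anything about self-hosting; most teams are surprised how far the free tier stretches.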
You have to scale to roughly 50,000 minutes per month before self-hosted runners become the obvious answer, and even then the math shifted in March 2026. GitHub introduced a $0.002/minute platform charge on self-hosted runners covering the control plane (orchestration, scheduling). A self-hosted runner running on a $40/month VM that handles 10,000 minutes a month now costs $40 + $20 in platform charges = $60, versus $80 on GitHub-hosted. The break-even moved.
The honest answer for 90 percent of teams: stay on GitHub-hosted, optimize the cache, kill matrix runs you do not need.
Self-hosted earns its keep when you need GPU runners, ARM at scale, IP whitelisting to your VPC, or you are burning >50,000 paid minutes a month with predictable load. Otherwise the operational cost of patching runner VMs eats whatever you saved.
A few patterns that look right and break in production:
- **Caching `node_modules` instead of the npm cache.** Always cache `~/.npm` (which `actions/setup-node` does for you with `cache: npm`). Caching `node_modules` directly skips the resolution step and silently reuses stale binaries when native modules need rebuilding.
- **Passing secrets through `workflow_dispatch` inputs.** Inputs are echoed in logs. Use the `secrets` context only.
- **Cache keys that omit `${{ runner.os }}`.** A cache written on one runner image can restore incompatible native binaries on another.
- **Skipping `concurrency`.** Without it, three pushes in 5 minutes start three full runs. With it, the first two cancel and only the latest finishes. That is a free 60 percent minute reduction on busy repos.
- **No `timeout-minutes` on jobs.** A hung process can run for 6 hours on GitHub's default timeout, billing the whole time.

If you want a similar discipline applied to runtime (not build time), our writeup on adding rate limiting to your API covers the same "fail cheap, fail clearly" pattern at request level.
Be honest about scope. If you are a solo founder pre-revenue, the Vercel git integration is your CI. It builds, it deploys, it gives you preview URLs. You do not need a 200-line workflow file to ship a landing page.
Add gates as you add engineers. The order we recommend is: typecheck and lint first (they fail fast and catch the most), then unit tests when you actually have business logic, then Playwright E2E once you have authenticated flows you cannot manually retest before every release.
The goal is shipping confidently, not collecting YAML.
Pick one Next.js repo, copy the verify job from above, and ship it. That one job catches the largest chunk of bugs you currently miss for the smallest investment. Once it is green for two weeks, layer in the E2E job. After that, OIDC if you are pushing to AWS, Turborepo cache if you are in a monorepo.
If you want a second pair of eyes on the workflow before you run it in production, audit your stack with our Ship-or-Skip tool or have a senior engineer review it. Cadence's senior tier ($1,500/week) is where most CI/CD rollouts get done; every Cadence engineer is AI-native by default, vetted on Cursor, Claude Code, and Copilot fluency before they unlock the platform, so the YAML you get back is something they can actually maintain after the rollout. Out of a pool of 12,800 engineers, the typical first commit on a CI cleanup engagement lands inside 27 hours.
Want CI built right the first time? Book a 48-hour free trial with a senior engineer on Cadence. Weekly billing, replace any week, no notice period. The first PR usually lands the day they start.
A clean install plus build plus test on a 2-core Linux runner runs 4 to 8 minutes for a typical app. With actions/cache on .next/cache and Turborepo remote cache, incremental runs drop to 60 to 120 seconds. Cold runs after a package-lock.json change stay near the upper bound because npm has to download and compile native modules.
Stay on GitHub-hosted until you regularly exceed 50,000 minutes per month. The January 2026 price cuts dropped Linux 2-core to roughly $0.008/minute (about 39 percent below 2025). The March 2026 $0.002/minute platform charge on self-hosted closed the gap further. Self-hosted earns its keep for GPU workloads, ARM at scale, or when you need runners inside your VPC.
No. Vercel uses its own deploy token scoped to your team. OIDC matters when CI pushes to AWS, GCP, or Azure and you want to stop rotating long-lived access keys. Once you have one IAM role using OIDC, every future AWS-touching workflow is a one-line addition.
Most teams under 5 engineers stay inside the 2,000 free minutes per month on a Pro account. A team running 100 PRs a month with 5-minute builds spends $0 to $4 in overage. The cost curve only matters once you cross roughly 25 engineers or add macOS builds.
Yes. Run npx playwright install --with-deps chromium in a job and cache ~/.cache/ms-playwright between runs to save 60 to 90 seconds per build. Put E2E in a separate job that depends on the verify job, so a flaky test does not block the cheaper static checks from reporting status. Always upload the playwright-report/ directory as an artifact on failure, otherwise debugging is guesswork.
For the .next/cache key, hash both the lockfile and the source files. The lockfile alone misses changes to your code that invalidate webpack's module cache; the source alone misses dependency changes. Combine them: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json') }}-${{ hashFiles('**/*.[jt]s', '**/*.[jt]sx') }} with a restore-keys fallback to the lockfile-only prefix for partial hits. The pattern is similar to what we describe in Server Actions in Next.js 15 for cache invalidation: be explicit about what makes a cache stale.