May 8, 2026 · 10 min read · Cadence Editorial

How to set up code coverage in 2026

Photo by [Nemuel Sereti](https://www.pexels.com/@nemuel) on [Pexels](https://www.pexels.com/photo/computer-program-on-the-monitor-6424585/)


To set up code coverage in 2026, pick the native tool for your stack (Vitest's v8 provider for JS/TS, pytest-cov for Python, JaCoCo for JVM, go test -cover for Go, llvm-cov for Rust), upload reports to Codecov in CI, and enforce a threshold on the diff, not the whole repo. Aim for 60-80% on new code and ignore the 100% target.

That last sentence is the entire opinion of this post. Most teams over-index on a single repo-wide percentage, then either game it (snapshot tests, assertion-free tests) or abandon it the first time it blocks a hotfix. The 2026 setup that actually works is boring: native tooling, one CI step, patch coverage as the gate. Below is the playbook.

What code coverage actually measures

Coverage is execution measurement, not test quality. Four metrics matter:

  • Line coverage: percentage of executable lines run during tests. Easiest to grow, easiest to game.
  • Branch coverage: every if, else, switch, and ternary path is exercised. The only honest signal of "did the test actually explore this code?"
  • Function coverage: every function or method was called at least once. Useful for catching dead exports.
  • Statement coverage: every statement executed. Almost identical to line coverage in practice; ignore the distinction unless you care about multi-statement lines.

If you only track one number, track branch coverage. A function with five if statements has 32 possible execution paths; line coverage hits 100% as soon as you touch each line once, which means you can ship code where 31 of 32 paths are untested and still claim full coverage.
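To make the gap concrete, here is a hypothetical sketch (the function and numbers are invented for illustration): a single test call executes every line of a three-branch function, so line coverage reports 100%, while only three of the six branch outcomes were ever taken.

```typescript
// Hypothetical: shipping cost with three independent branches.
function shippingCost(weightKg: number, express: boolean, international: boolean): number {
  let cost = 5;
  if (weightKg > 10) cost += 8;   // branch A: true/false never both tested below
  if (express) cost += 12;        // branch B
  if (international) cost += 20;  // branch C
  return cost;
}

// One call takes the true side of every `if`, so every LINE executes...
console.log(shippingCost(15, true, true)); // 5 + 8 + 12 + 20 = 45

// ...but the weight <= 10, non-express, and domestic paths never ran.
// Line coverage: 100%. Branch coverage: 3 of 6 outcomes.
```

A branch-coverage report would flag the three untaken outcomes; a line-coverage report shows nothing left to do.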

Why 100% coverage is a goal trap

You can hit 100% coverage with zero real tests. Here is the proof:

// src/discount.ts
export function applyDiscount(price: number, code: string) {
  if (code === "FREE") return 0;
  if (code === "HALF") return price / 2;
  return price;
}

// src/discount.test.ts
import { applyDiscount } from "./discount";
test("does something", () => {
  applyDiscount(100, "FREE");
  applyDiscount(100, "HALF");
  applyDiscount(100, "OTHER");
});

That test gives 100% line, branch, function, and statement coverage. It also asserts nothing. Every code path returns wrong numbers and the suite passes green forever.

The trap shows up in three patterns:

  1. Assertion-free tests. Engineers write expect(fn).toBeDefined() to satisfy the threshold.
  2. Snapshot abuse. Snapshots count as assertions to the runner. They count as nothing to a reader six months later who blindly approves the diff.
  3. Coverage-driven testing. Engineers write tests for the easy lines (getters, simple branches) and skip the hard ones (error paths, race conditions). The number goes up, the bug rate stays flat.

The fix is not chasing 100%. The fix is treating coverage as one signal alongside mutation testing, real assertions, and code review. We've covered the broader testing toolkit in our Jest vs Vitest comparison for 2026 and Playwright E2E test guide; coverage is the bottom layer of that stack, not the whole thing.

Realistic targets for 2026

Stop quoting 80% as universal. The honest targets depend on what code you are measuring.

Code type | Branch coverage target | Notes
New code on PR (patch) | 70-80% | Enforce hard. This is the lever that matters.
Mature core domain | 80-90% | Billing, auth, anything that loses money when broken.
New service, < 6 months | 60-70% | Surface area is still moving; high targets create friction with no payoff.
Legacy code (untouched) | Whatever it is | Do not enforce. Backfill only when you change a file.
Internal tools, prototypes | 0% | Coverage is overhead. Skip the setup.
Generated code | Excluded | Add to coverage.exclude.

The single highest-ROI move is enforcing patch coverage on the diff. A repo that sits at 62% overall but requires 80% on changed lines will drift up over months without ever blocking a hotfix. A repo with a flat 80% repo-wide gate blocks every refactor that touches a low-coverage file.
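The arithmetic behind patch coverage is simple enough to sketch: intersect the lines a PR changed (from the git diff) with the lines the test run executed (from the coverage report). The function below is an invented illustration, not Codecov's implementation.

```typescript
// Minimal sketch of patch coverage: changed lines come from the git diff,
// executed lines from the coverage report (e.g. lcov.info).
function patchCoverage(changedLines: Set<number>, executedLines: Set<number>): number {
  if (changedLines.size === 0) return 100; // nothing changed, nothing to cover
  let hit = 0;
  changedLines.forEach((line) => {
    if (executedLines.has(line)) hit++;
  });
  return (100 * hit) / changedLines.size;
}

// A PR touching lines 10-13 where tests executed lines 10, 11, and 12:
const pct = patchCoverage(new Set([10, 11, 12, 13]), new Set([5, 10, 11, 12, 40]));
console.log(pct); // 75 — below an 80% patch gate, so this PR would be blocked
```

Note what the repo-wide number never enters: a 62%-overall repo and a 95%-overall repo apply exactly the same gate to this diff.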

The tooling shortlist by language

Pick the native tool. Avoid third-party wrappers unless you have a specific reason.

Stack | Tool | Default install | Best report format
JS/TS | Vitest + @vitest/coverage-v8 | npm i -D @vitest/coverage-v8 | lcov + html
Python | pytest-cov | pip install pytest-cov | xml + term-missing
JVM (Java/Kotlin/Scala) | JaCoCo | Gradle or Maven plugin | xml + html
Go | go test -cover | built-in | coverprofile + html
Rust | cargo-llvm-cov | cargo install cargo-llvm-cov | lcov + html

JavaScript / TypeScript: Vitest with v8 or Istanbul

Vitest ships two coverage providers: v8 (default, native V8 instrumentation, fast) and istanbul (slower but more accurate, especially for branch coverage and decorators).

// vitest.config.ts
import { defineConfig } from "vitest/config";
export default defineConfig({
  test: {
    coverage: {
      provider: "v8", // or "istanbul"
      reporter: ["text", "lcov", "html"],
      include: ["src/**/*.{ts,tsx}"],
      exclude: ["src/**/*.test.ts", "src/generated/**"],
      thresholds: {
        lines: 70,
        branches: 70,
        functions: 75,
        statements: 70,
      },
    },
  },
});

Run vitest run --coverage. Use v8 unless you find a real branch-coverage gap; istanbul is roughly 3x slower in our benchmarks. Jest users get the same shape with --coverage plus coverageProvider: "v8".

Python: pytest-cov

pip install pytest-cov
pytest --cov=src --cov-branch --cov-report=xml --cov-report=term-missing --cov-fail-under=70

Always pass --cov-branch. Without it you are measuring lines only, and pytest --cov with --cov-fail-under=80 will happily pass a suite that never executes any else branch. The term-missing report prints uncovered line numbers in your terminal, which is the fastest local feedback loop in any ecosystem.

JVM: JaCoCo

JaCoCo is the JVM industry standard. Gradle setup:

// build.gradle.kts
plugins { jacoco }
tasks.jacocoTestReport {
  dependsOn(tasks.test)
  reports {
    xml.required.set(true)
    html.required.set(true)
  }
}
tasks.jacocoTestCoverageVerification {
  violationRules {
    rule { limit { minimum = "0.70".toBigDecimal() } }
  }
}

Run ./gradlew test jacocoTestReport jacocoTestCoverageVerification. The XML report lands at build/reports/jacoco/test/jacocoTestReport.xml, ready to upload.

Go: built-in tooling

go test -coverprofile=coverage.out -covermode=atomic ./...
go tool cover -html=coverage.out -o coverage.html
go tool cover -func=coverage.out | tail -n 1

-covermode=atomic is required if you run tests with -race. The last command prints a total: line that CI can grep for a threshold check. Go has no native fail-under flag, so wrap it in a tiny shell check:

total=$(go tool cover -func=coverage.out | grep total: | awk '{print $3}' | sed 's/%//')
[ "$(echo "$total >= 70" | bc -l)" -eq 1 ] || { echo "coverage $total% < 70%"; exit 1; }

Rust: cargo-llvm-cov

cargo install cargo-llvm-cov
cargo llvm-cov --lcov --output-path lcov.info --fail-under-lines 70

llvm-cov uses LLVM's source-based instrumentation. It is more accurate than the older tarpaulin and fast enough for CI. Output is plain lcov.info, which Codecov consumes directly.

CI integration: Codecov, Coveralls, GitHub Pages

Three options, in order of how much you care about pretty UI vs zero cost.

  • Codecov (free for OSS, paid for private). Best PR comments, native patch coverage, carryforward flags for monorepos. Default choice in 2026.
  • Coveralls (cheaper, simpler). Solid if you just want a badge and a basic diff comment.
  • Self-hosted HTML on GitHub Pages. Push the coverage/ folder to a gh-pages branch. Zero monthly cost; no PR integration. Fine for small teams.

Here is a single GitHub Actions workflow that runs Vitest, fails the build under threshold, and uploads to Codecov. The same shape works for any language; swap the test command. If you want a deeper guide on this exact pipeline, see our GitHub Actions for Next.js writeup.

# .github/workflows/test.yml
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with: { fetch-depth: 2 }
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: 'npm' }
      - run: npm ci
      - run: npx vitest run --coverage
      - uses: codecov/codecov-action@v5
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: ./coverage/lcov.info
          fail_ci_if_error: true

fetch-depth: 2 is the line that everyone forgets. Without it Codecov cannot diff against the parent commit, and patch coverage silently shows 0%.

Differential coverage: enforce on the diff, not the repo

This is the single most important config change. Your codecov.yml should look like:

# codecov.yml
coverage:
  status:
    project:
      default:
        target: auto
        threshold: 1%
    patch:
      default:
        target: 80%

project.target: auto means "do not regress overall coverage by more than 1%." patch.target: 80% means "the changed lines in this PR must hit 80% coverage." That second rule is the one that actually changes engineering behavior. It is enforceable on day one, regardless of where the repo currently sits.

For monorepos, add per-flag thresholds so the auth service can require 90% while the marketing site sits at 50%. If you are still arguing about a single repo-wide number in a stand-up, that is the smell that tells you patch coverage will fix the dispute. Cadence's senior tier ($1,500/week) typically wraps this rollout in three to four days for a 5-service monorepo, including writing the per-flag config and back-filling thresholds; the audit your stack tool gives you an honest grade in 30 seconds if you want a starting point.
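A per-flag setup might look like the following; the service names, paths, and targets are placeholders for your own monorepo, and the syntax follows Codecov's flag configuration, so verify it against their current docs before committing it.

```yaml
# codecov.yml — per-flag patch thresholds for a monorepo (names are placeholders)
flags:
  auth:
    paths:
      - services/auth/
    carryforward: true
  marketing:
    paths:
      - services/marketing/
    carryforward: true

coverage:
  status:
    patch:
      default:
        target: 70%
      auth:
        target: 90%
        flags:
          - auth
      marketing:
        target: 50%
        flags:
          - marketing
```

carryforward matters in monorepos: a PR that only touches auth reuses the last marketing report instead of reporting marketing coverage as zero.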

Common pitfalls

  • Snapshot tests inflate coverage. A test that snapshots a 200-line component covers every line and asserts nothing meaningful. Track snapshot density (snapshots / total tests) as a separate metric.
  • Ignoring untestable code via comments. /* istanbul ignore next */ is fine for genuine dead code, but audit usage quarterly; the pragma drifts into covering code engineers simply did not want to test.
  • Skipping branch coverage. If your config sets lines: 80 and omits branches, you are measuring nothing useful.
  • Not uploading from PR runs. Codecov needs the report from the PR build to compute patch coverage. Forks and pull_request_target triggers get this wrong constantly.
  • Repo-wide thresholds in legacy codebases. Set project.target: auto, not a hard number. A flat threshold blocks every refactor that touches a low-coverage file and trains the team to disable coverage instead of fixing it.

When you can skip this entirely

Coverage tooling has a real cost: CI minutes, config maintenance, the social cost of arguing about thresholds. Skip it if:

  • You are two founders pre-revenue and the question is "do we ship the demo this week or set up CI." Ship the demo.
  • The repo is a 4-week prototype that will either die or get rewritten.
  • You have fewer than 200 tests total and no production users yet.

Coverage starts paying for itself once a second engineer joins, the test suite passes 500 cases, and a regression in production would cost real money. Before that, you are decorating a repo nobody else reads. We've made the same point about scoping infra correctly in our MVP-to-production scaling guide; coverage rollouts follow the same ROI curve.

Steps

  1. Pick the native tool. Vitest v8 for JS/TS, pytest-cov for Python, JaCoCo for JVM, go test -cover for Go, cargo-llvm-cov for Rust. Avoid third-party wrappers.
  2. Add the local config. vitest.config.ts, pytest.ini, build.gradle, or a Makefile target. Always enable branch coverage and exclude generated code.
  3. Wire CI. One GitHub Actions job that runs the test command with coverage, then uploads lcov.info (or coverage.xml) to Codecov. Set fetch-depth: 2.
  4. Add codecov.yml with patch coverage. Set patch.target: 80% and project.target: auto. This is the rule that actually changes behavior.
  5. Set per-flag thresholds for monorepos. Auth at 90%, marketing site at 50%, default 70%. Push this in a single PR so the discussion happens once.
  6. Audit quarterly. Grep for istanbul ignore, count snapshots, check the trend line. If patch coverage is consistently failing, the threshold is wrong; if it's always passing, the threshold is too low.
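The quarterly audit in step 6 is two greps. The sketch below runs against a throwaway fixture so it is self-contained; in practice, point the greps at your real src/ directory (pattern names assume Istanbul pragmas and Jest/Vitest snapshots).

```shell
# Build a throwaway fixture so the sketch runs anywhere; replace $dir with src/.
dir=$(mktemp -d)
cat > "$dir/example.test.ts" <<'EOF'
/* istanbul ignore next */
test("renders", () => { expect(tree).toMatchSnapshot(); });
EOF

# Count coverage-ignore pragmas and snapshot assertions.
ignores=$(grep -r "istanbul ignore" "$dir" | wc -l)
snapshots=$(grep -r "toMatchSnapshot" "$dir" | wc -l)
echo "ignore pragmas: $ignores, snapshot assertions: $snapshots"

rm -rf "$dir"
```

Track both counts over time; a rising ignore count with a flat coverage number is the drift the audit exists to catch.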

If your team would rather book the rollout than run it, every engineer on Cadence is AI-native by default (vetted on Cursor, Claude Code, and Copilot fluency before they unlock bookings), and a senior at $1,500/week can ship steps 1-6 across a typical 5-service monorepo inside a 48-hour trial.

Try it: if you want a second opinion on whether your test setup is actually catching regressions, run your repo through the ship-or-skip stack audit for an honest grade in under a minute, no signup.

FAQ

What is a good code coverage percentage in 2026?

60-80% branch coverage on new code is the honest target. Anything above 90% usually signals snapshot abuse or assertion-free tests. Track branch coverage, not line coverage, and enforce on the diff rather than the whole repo.

Should I block merges on coverage?

Yes, but only on the diff. Block PRs whose changed lines drop below 70-80% patch coverage, and let the repo-wide number drift down on legacy code you are not touching. A flat repo-wide threshold trains the team to disable coverage instead of fix it.

Vitest v8 or istanbul provider?

Use v8 by default. It is roughly 3x faster and uses native V8 instrumentation. Switch to istanbul when you need exact branch coverage on heavily branching code, decorator support, or coverage in non-V8 runtimes like Bun.

Does coverage replace tests?

No. Coverage measures execution, not assertion. A test that runs every line and asserts nothing still gives 100% coverage and zero confidence. Pair coverage with mutation testing (Stryker for JS, mutmut for Python) if you want a real signal.

How long does it take to set up coverage in CI?

An afternoon for a single repo, including Codecov integration and a patch-coverage threshold. A monorepo with 5+ services with per-flag thresholds and back-filled config is closer to two days; a Cadence senior engineer at $1,500/week typically wraps the full rollout in three to four days end to end.
