
To set up structured logging in Node.js, install Pino, configure it to emit JSON with pino({ level: 'info' }), attach a request-scoped child logger via AsyncLocalStorage, redact secrets with the redact option, and pipe stdout to a vendor like Datadog or Better Stack. That replaces every console.log in your app with queryable, level-filtered, correlation-tagged events you can actually search at 2am.
console.log writes a string. Once it leaves your process, nobody can do anything useful with it. Datadog can't filter it. Loki can't index it. Your SRE can't ask "show me every 500 from user 4271 in the last hour" without writing a regex.
Structured logging fixes this by emitting one JSON object per line. Every log entry has a level, a time, a msg, and any context fields you attach. Vendors auto-parse the fields. You filter by level:error AND userId:4271 instead of grepping. You add a new field tomorrow and every dashboard picks it up automatically.
The mental shift is small but load-bearing. You stop writing logs for humans reading a terminal and start writing them for a query engine. Every log line is a row in a database you'll query later.
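Concretely, here is the before and after for a single event (the orderId and userId fields are illustrative; Pino's default pid and hostname keys are omitted from the output for brevity):

```ts
// Before: an opaque string only a human can parse
console.log(`order ${orderId} failed for user ${userId}`);

// After: one JSON object per line; every field is queryable
logger.error({ orderId, userId }, 'order failed');
// => {"level":50,"time":1767225600000,"orderId":"o_981","userId":4271,"msg":"order failed"}
```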
Four libraries get mentioned in every comparison post. In 2026, only one is the right default.
Pino is the answer for almost everything. JSON-first, asynchronous, and benchmarked at 5 to 8 times the throughput of Winston. It was designed for production from day one, which is why Fastify ships with Pino baked in.
Winston is still the most-installed logger on npm because it's been around the longest. It's slower, its transports are synchronous, and its API has more surface area than most teams need. If you already have Winston in a working app, don't migrate just to migrate. If you're greenfield, skip it.
tslog is a reasonable choice for TypeScript-heavy services that prize developer ergonomics over raw throughput. The API is pleasant; the throughput is not Pino's.
Bunyan is effectively dead. It pioneered structured logging in Node years ago, but maintenance has lapsed. Don't start a new project on it.
| Library | Speed | JSON-first | Maintained | Best for |
|---|---|---|---|---|
| Pino | Fastest (5-8x Winston) | Yes | Active | Production default in 2026 |
| Winston | Slower, sync transports | Optional | Active | Legacy apps already on it |
| tslog | Mid | Yes | Active | TS-first dev ergonomics |
| Bunyan | Mid | Yes | Effectively dead | Don't start here |
The rest of this guide uses Pino. The patterns (JSON output, child loggers, AsyncLocalStorage, redaction, vendor transports) port to any structured logger; the API names change.
Install Pino. Run npm install pino for production and npm install -D pino-pretty for local development. That's the only dependency you need to start.
Create a JSON-formatted root logger. In lib/logger.ts, export a single Pino instance: export const logger = pino({ level: process.env.LOG_LEVEL ?? 'info', redact: ['req.headers.authorization', 'req.headers.cookie', '*.password', '*.token'] }). In dev, pipe through pino-pretty so logs are human-readable: node --enable-source-maps app.js | pino-pretty. In prod, leave them as raw JSON for the vendor to parse.
Propagate request context with AsyncLocalStorage. Create an AsyncLocalStorage<{ requestId: string; userId?: string }> and wrap every incoming request with als.run({ requestId: crypto.randomUUID() }, next). Build a getLogger() helper that reads the store and returns logger.child(als.getStore() ?? {}). Now any code, however deep in your stack, gets a logger that knows which request it's serving.
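Steps 2 and 3 fit in one file. A minimal sketch of lib/logger.ts, built from the pieces above; its als and getLogger exports are what the framework examples below import:

```ts
// lib/logger.ts
import pino from 'pino';
import { AsyncLocalStorage } from 'node:async_hooks';

export const logger = pino({
  level: process.env.LOG_LEVEL ?? 'info',
  redact: ['req.headers.authorization', 'req.headers.cookie', '*.password', '*.token'],
});

// Per-request context, propagated across every await.
export const als = new AsyncLocalStorage<{ requestId: string; userId?: string }>();

// Any code, however deep in the stack, gets a request-tagged logger.
export function getLogger() {
  return logger.child(als.getStore() ?? {});
}
```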
Ship JSON to a vendor. Configure a Pino transport for your vendor of choice (Datadog, Better Stack, Loki, Logtail, or CloudWatch). Set the API key in your environment, restart the process, and verify a log shows up in the vendor dashboard within 30 seconds. From here, every logger.info({ orderId }, 'order placed') flows automatically.
That sequence is the entire setup. Everything below is detail on how to do each step well.
Logs are the most common accidental data leak in modern apps. Someone logs req for debugging, the request includes an Authorization header, and now your bearer tokens are sitting in your observability vendor's storage forever.
Pino's redact option fixes this declaratively:
```ts
import pino from 'pino';

export const logger = pino({
  level: process.env.LOG_LEVEL ?? 'info',
  redact: {
    paths: [
      'req.headers.authorization',
      'req.headers.cookie',
      'req.body.password',
      'req.body.token',
      '*.creditCard',
      '*.ssn',
      'user.email',
    ],
    censor: '[REDACTED]',
  },
});
```
The * wildcard matches one level, and it works in intermediate positions too (req.body.*.token). Pino's redaction does not support an arbitrary-depth wildcard, so when you log nested API responses you have to enumerate the paths that can hold secrets. Audit this list every time you add a new field that holds one. The transport that ships your logs has no idea what's sensitive; only you do.
Bake this discipline in early. Once a token is in your log vendor, rotating it is the only fix; Datadog and Better Stack do not let you delete individual records. The same caution applies when you handle data deletion under GDPR: logs are a downstream system and need their own retention policy.
The pattern is the same everywhere: middleware assigns a request ID, AsyncLocalStorage stores it, downstream code reads it via a child logger.
Express + pino-http
```ts
import express from 'express';
import pinoHttp from 'pino-http';
import { logger } from './logger';

const app = express();
app.use(pinoHttp({ logger, genReqId: () => crypto.randomUUID() }));

app.get('/orders/:id', (req, res) => {
  req.log.info({ orderId: req.params.id }, 'fetching order');
  res.json({ ok: true });
});
```
pino-http auto-logs every request with method, path, status, and latency. It also attaches req.log, a child logger pre-tagged with the request ID.
Fastify
Fastify ships with Pino. You configure it at server creation:
```ts
import Fastify from 'fastify';

const app = Fastify({ logger: { level: 'info' } });

app.get('/health', async (req) => {
  req.log.info('health check');
  return { ok: true };
});
```
Hono
```ts
import { Hono } from 'hono';
import { als, getLogger } from './logger';

const app = new Hono();

// Every request runs inside its own AsyncLocalStorage context.
app.use('*', async (c, next) => {
  const requestId = crypto.randomUUID();
  await als.run({ requestId }, next);
});

app.get('/orders/:id', (c) => {
  getLogger().info({ orderId: c.req.param('id') }, 'fetching order');
  return c.json({ ok: true });
});
```
Next.js Route Handlers (Node.js runtime)
```ts
// app/api/orders/[id]/route.ts
import { als, getLogger } from '@/lib/logger';

export async function GET(req: Request, { params }: { params: { id: string } }) {
  return als.run({ requestId: crypto.randomUUID() }, async () => {
    getLogger().info({ orderId: params.id }, 'fetching order');
    return Response.json({ ok: true });
  });
}
```
The edge runtime is a different story. AsyncLocalStorage works in Node Route Handlers but Pino transports do not run on edge. Use a fetch-based logger like Axiom's edge SDK for routes you've explicitly opted into the edge runtime. The same edge-runtime caveats show up if you're implementing authentication in 2026 and trying to run middleware on the edge.
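For those edge routes, a fetch-based sketch is enough; the intake URL and token env vars below are placeholders for whatever your vendor's HTTP endpoint expects:

```ts
// lib/edge-logger.ts — edge-safe because it only uses fetch
export function edgeLog(
  level: 'info' | 'warn' | 'error',
  fields: Record<string, unknown>,
  msg: string,
) {
  const line = { level, time: Date.now(), msg, ...fields };
  // Fire-and-forget: never await on the hot path, never let logging throw.
  fetch(process.env.LOG_INTAKE_URL!, {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      authorization: `Bearer ${process.env.LOG_INTAKE_TOKEN}`,
    },
    body: JSON.stringify(line),
  }).catch(() => {});
}
```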
You configure shipping with Pino transports. A transport is a separate worker that reads JSON lines from your main process and forwards them. Below are working configs for the five vendors most Cadence engineers ship to.
Datadog
```ts
import pino from 'pino';

export const logger = pino({
  level: 'info',
  transport: {
    target: 'pino-datadog-transport',
    options: {
      ddClientConf: { authMethods: { apiKeyAuth: process.env.DD_API_KEY } },
      ddServerConf: { site: 'datadoghq.com' },
      service: 'orders-api',
      ddsource: 'nodejs',
    },
  },
});
```
Better Stack (Logtail)
```ts
// Same pino({ ... }) shape as the Datadog example; only the transport changes.
transport: {
  target: '@logtail/pino',
  options: { sourceToken: process.env.LOGTAIL_TOKEN },
}
```
Grafana Loki
```ts
transport: {
  target: 'pino-loki',
  options: {
    host: 'https://logs-prod.grafana.net',
    basicAuth: { username: process.env.LOKI_USER, password: process.env.LOKI_PASSWORD },
    labels: { app: 'orders-api', env: process.env.NODE_ENV },
  },
}
```
AWS CloudWatch
```ts
transport: {
  target: 'pino-cloudwatch-transport',
  options: {
    logGroupName: '/orders-api/prod',
    logStreamName: process.env.HOSTNAME,
    awsRegion: 'us-east-1',
  },
}
```
Generic stdout (for Kubernetes / Fly.io / Render)
```ts
// No transport at all. Container platforms ingest stdout natively.
export const logger = pino({ level: 'info' });
```
Pick the one that matches your existing stack. Don't pick by feature checklist; pick by where your team already pages from. Logs you can't get to during an incident are decorative.
The common failure modes are worth naming:

- JSON.stringify on the object before passing it to logger.info. Pass objects directly; let Pino serialize.
- pino() called inside the handler. Create the root logger once at module load and call logger.child() per request.
- TypeError: Converting circular structure to JSON. Cause: logging an Express req or a Mongoose document. Use Pino's built-in serializers (pino.stdSerializers.req) or pluck the fields you actually need; see the sketch after this list.
- module not found: 'worker_threads'. Cause: a Pino transport in a Next.js middleware or edge route handler. Use a fetch-based logger on edge.
- logger.info inside a hot loop. Drop to debug for high-cardinality events and ship debug to a separate, sampled stream.
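For the circular-structure failure specifically, a minimal sketch of both fixes (the handle function is illustrative):

```ts
import pino from 'pino';
import type { IncomingMessage } from 'node:http';

// Fix 1: register Pino's built-in req serializer so the request is
// reduced to safe, JSON-friendly fields instead of a circular object.
const logger = pino({
  serializers: { req: pino.stdSerializers.req },
});

function handle(req: IncomingMessage) {
  logger.info({ req }, 'incoming request'); // serialized, no TypeError

  // Fix 2: skip serializers and pluck only the fields you need.
  logger.info({ method: req.method, url: req.url }, 'incoming request');
}
```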
Best practices have ROI curves; respect them. The threshold to bother is usually the second production service or the first incident you couldn't debug from console output. After that, do the four steps above and don't look back.
Structured logs are one signal of three. Metrics tell you what is broken. Traces tell you where it's broken. Logs tell you why. A mature stack runs all three and correlates them by trace ID. If you ship a traceId field in every log line and your vendor supports it (Datadog, Better Stack, and Grafana all do), one click in the trace view jumps you to the exact log entries for that request.
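If OpenTelemetry is already in the stack, Pino's mixin option is one way to stamp that traceId onto every line; a sketch assuming @opentelemetry/api is configured:

```ts
import pino from 'pino';
import { trace } from '@opentelemetry/api';

export const logger = pino({
  level: 'info',
  // mixin runs on every log call and merges its return value into the entry.
  mixin() {
    const span = trace.getActiveSpan();
    if (!span) return {};
    const { traceId, spanId } = span.spanContext();
    return { traceId, spanId };
  },
});
```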
This is the same operational discipline that shows up when you estimate software development time accurately (you need real numbers, not hand-waving) or when you version your API correctly (you need a queryable record of which version each client called).
If you'd rather have a Cadence senior engineer ($1,500/week) own the rollout end-to-end, that's a 1-2 week scope: Pino, AsyncLocalStorage, redaction, vendor transport, dashboards, and runbooks. They'll also wire up traceId propagation across your services so logs and traces line up. Audit your current logging stack with Ship-or-Skip before you commit to a vendor; you may already have most of what you need.
Want this shipped end-to-end? Cadence books a senior engineer in 2 minutes, with a 48-hour free trial. They'll have JSON logs, request correlation, secret redaction, and a vendor pipeline live before week one wraps.
Is Pino actually faster than Winston?
Yes, by 5 to 8 times in synthetic benchmarks. Pino writes minimal JSON to stdout asynchronously; Winston applies formatters and writes to transports synchronously. For any service above 1,000 requests per minute, the gap is measurable on the event loop.
Can I skip the vendor and just log to files?
Files work for a single box. The moment you have more than one process or container, you'll be SSHing into machines to grep, which doesn't scale past one engineer. Pick a vendor before your second deploy target.
Does Pino work in Next.js?
Pino works fine in Node.js Route Handlers (the default). The edge runtime does not support Pino transports because it lacks worker_threads. For routes you've explicitly opted into the edge runtime, use a fetch-based logger like Axiom's edge SDK or post directly to your vendor's HTTP intake.
How do I tag every log line with the request it belongs to?
Use Node.js's built-in AsyncLocalStorage. Wrap each incoming request with als.run({ requestId }, next), then have your getLogger() helper read from the store and return a logger.child() with the stored fields. Every downstream await keeps the context.
What log level should production run at?
info as the default. Drop to warn if volume is too high or your vendor bill is getting expensive. Reserve debug and trace for local development; if you need them in prod, ship them to a separate, sampled stream so the cost stays bounded.
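One way to realize that split is Pino's multi-target transports, which route by level. A sketch; the target names and file path are placeholders, and the sampling itself happens vendor-side or in a downstream shipper, not in Pino:

```ts
import pino from 'pino';

export const logger = pino({
  // The root level must be at least as verbose as the most verbose target.
  level: 'debug',
  transport: {
    targets: [
      // info and above goes to the primary vendor stream.
      { target: '@logtail/pino', options: { sourceToken: process.env.LOGTAIL_TOKEN }, level: 'info' },
      // debug and above goes to a local file a sampling shipper can pick up.
      { target: 'pino/file', options: { destination: '/var/log/app-debug.log' }, level: 'debug' },
    ],
  },
});
```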