
To use Drizzle ORM in 2026, install drizzle-orm plus a database driver (pg, mysql2, or better-sqlite3), define your schema in TypeScript with pgTable, generate and run migrations with drizzle-kit, then query with the SQL-shaped builder. The whole library is roughly 7.4 KB gzipped with zero runtime dependencies, so it fits cleanly in serverless and edge runtimes.
That is the punchline. The rest of this guide is the working code: schema, migrations, queries, relations, validation with drizzle-zod, transactions, prepared statements, CTEs, and the raw SQL escape hatch you will need exactly once a quarter.
Three things changed in the last two years that make Drizzle the default pick for a new TypeScript backend.
First, serverless databases got real. Neon, Supabase, and PlanetScale's Postgres offering, plus Cloudflare D1 on the SQLite side, all want a tiny ORM with no binary runtime. Drizzle ships as plain TypeScript and runs on the edge without WASM blobs.
Second, type inference matured. typeof users.$inferSelect and $inferInsert give you exact row types from your schema with no codegen step. The schema is the source of truth, full stop.
Third, AI-assisted coding made the SQL-shaped API a feature instead of a tax. When Claude Code or Cursor writes a Drizzle query, it writes it like SQL, which is the language those models know best. Prisma's nested-object filter syntax fights the model. Drizzle does not.
If you are still deciding between the two, we wrote a separate post on Drizzle vs Prisma in 2026 that covers the trade-offs. This post assumes you have decided.
You need three packages: the ORM, a driver, and the migration toolkit.
npm install drizzle-orm pg
npm install -D drizzle-kit @types/pg
Connect to Postgres in db/index.ts:
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import * as schema from "./schema";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool, { schema });
For Neon's HTTP driver (recommended for Vercel), swap the import:
import { drizzle } from "drizzle-orm/neon-http";
import { neon } from "@neondatabase/serverless";
const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle(sql, { schema });
Passing { schema } is the part most tutorials skip. Without it, the relational-query API (db.query.users.findMany) does not work. With it, you get joins by name.
The schema is plain TypeScript. No .prisma file, no codegen.
// db/schema.ts
import { pgTable, serial, text, timestamp, integer, boolean, uuid } from "drizzle-orm/pg-core";
export const users = pgTable("users", {
id: uuid("id").defaultRandom().primaryKey(),
email: text("email").notNull().unique(),
name: text("name").notNull(),
createdAt: timestamp("created_at").defaultNow().notNull(),
});
export const posts = pgTable("posts", {
id: serial("id").primaryKey(),
authorId: uuid("author_id").references(() => users.id, { onDelete: "cascade" }).notNull(),
title: text("title").notNull(),
body: text("body").notNull(),
published: boolean("published").default(false).notNull(),
views: integer("views").default(0).notNull(),
createdAt: timestamp("created_at").defaultNow().notNull(),
});
export type User = typeof users.$inferSelect;
export type NewUser = typeof users.$inferInsert;
export type Post = typeof posts.$inferSelect;
export type NewPost = typeof posts.$inferInsert;
Two patterns to internalize. defaultRandom() runs in Postgres, not in your app. references() adds a real foreign key, so the database (not your code) enforces integrity. onDelete: "cascade" is one line; the equivalent in a hand-rolled query layer is a footgun.
The $inferSelect and $inferInsert types are why Drizzle is worth the switch. Every function that touches users can take User or NewUser and get exact column types, including nullability.
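A self-contained sketch of what those inferred types buy you. The `User` and `NewUser` shapes below are written out by hand to mirror what `$inferSelect` / `$inferInsert` would produce for the `users` table above, so this snippet runs on its own:

```typescript
// Hand-written stand-ins for `typeof users.$inferSelect` and
// `typeof users.$inferInsert` from db/schema.ts, inlined so this
// sketch is self-contained.
type User = { id: string; email: string; name: string; createdAt: Date };
// Insert type: columns with database defaults (id, createdAt) become optional.
type NewUser = { id?: string; email: string; name: string; createdAt?: Date };

// Any helper that touches users can take the inferred type and get
// exact column types, including nullability.
function mailboxLabel(user: Pick<User, "name" | "email">): string {
  return `${user.name} <${user.email}>`;
}

const draft: NewUser = { email: "ada@example.com", name: "Ada" }; // id/createdAt defaulted by Postgres
console.log(mailboxLabel({ name: draft.name, email: draft.email }));
// -> Ada <ada@example.com>
```

Because the real `User` and `NewUser` come straight off the table definition, adding a column later updates every call site's types with no codegen step.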
Add a config file at the project root:
// drizzle.config.ts
import { defineConfig } from "drizzle-kit";
export default defineConfig({
schema: "./db/schema.ts",
out: "./db/migrations",
dialect: "postgresql",
dbCredentials: { url: process.env.DATABASE_URL! },
});
Then the loop is two commands.
npx drizzle-kit generate # diff schema vs migrations folder, write a new SQL file
npx drizzle-kit migrate # apply pending migrations to the database
The generated SQL files are committed to git. Read them before applying. Drizzle-kit is good but not psychic; a column rename looks like a drop-and-create unless you confirm the rename at the interactive prompt.
In CI, run drizzle-kit migrate as part of the deploy step before the new app version starts serving traffic. Do not run it on app boot. That race condition has eaten weekends.
For local prototyping, drizzle-kit push skips the migration file and writes the diff straight to the database. Useful for the first hour of a project. Never use it past that.
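The CI advice above can be sketched as a deploy script (a config fragment, not a definitive setup; the `npm run start` script name is an assumption about your project):

```shell
#!/usr/bin/env bash
# Hypothetical deploy step: migrations must succeed before the new
# version starts serving traffic. `set -e` fails the deploy if they don't.
set -euo pipefail
npx drizzle-kit migrate   # apply pending migrations first
npm run start             # only start the app once the schema is current
```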
The query builder reads like SQL because it is SQL.
import { eq, and, desc, gt, sql } from "drizzle-orm";
import { db } from "./db";
import { users, posts } from "./db/schema";
// SELECT
const recent = await db
.select()
.from(posts)
.where(and(eq(posts.published, true), gt(posts.views, 100)))
.orderBy(desc(posts.createdAt))
.limit(20);
// INSERT (typed against NewPost)
const [created] = await db
.insert(posts)
.values({ authorId: user.id, title: "Hello", body: "..." })
.returning();
// UPDATE
await db
.update(posts)
.set({ views: sql`${posts.views} + 1` })
.where(eq(posts.id, created.id));
// DELETE
await db.delete(posts).where(eq(posts.id, created.id));
A few rules of thumb. Always use .returning() on inserts and updates if you need the row back; Postgres gives it to you for free, and skipping .returning() then doing a follow-up select is a round trip you do not need. Always destructure the array, even when you expect one row, because the builder returns an array.
Define relations once in the schema:
import { relations } from "drizzle-orm";
export const usersRelations = relations(users, ({ many }) => ({
posts: many(posts),
}));
export const postsRelations = relations(posts, ({ one }) => ({
author: one(users, { fields: [posts.authorId], references: [users.id] }),
}));
Now you can use the higher-level db.query API for nested reads:
const userWithPosts = await db.query.users.findFirst({
where: eq(users.id, userId),
with: {
posts: {
where: eq(posts.published, true),
orderBy: desc(posts.createdAt),
limit: 5,
},
},
});
This compiles to a single SQL query with json_agg, not the N+1 disaster you would get from a naive ORM. For most read paths, db.query is the right call. For complex joins or aggregates, drop back to db.select() and write the join explicitly.
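For intuition, the nested read above compiles to roughly this shape (illustrative only; the SQL Drizzle actually generates differs in aliases and details):

```sql
-- One round trip: the child rows come back as a JSON array per parent.
SELECT
  u.*,
  (
    SELECT json_agg(p) FROM (
      SELECT * FROM posts
      WHERE posts.author_id = u.id AND posts.published = true
      ORDER BY posts.created_at DESC
      LIMIT 5
    ) p
  ) AS posts
FROM users u
WHERE u.id = $1
LIMIT 1;
```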
The killer combination in 2026 is Drizzle plus drizzle-zod. Your schema becomes the single source of truth for both database shape and request validation.
npm install drizzle-zod zod
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import { z } from "zod";
import { posts } from "./db/schema";
export const insertPostSchema = createInsertSchema(posts, {
title: z.string().min(1).max(200),
body: z.string().min(1).max(50_000),
}).omit({ id: true, createdAt: true, views: true });
export type InsertPost = z.infer<typeof insertPostSchema>;
In a Next.js Route Handler:
export async function POST(req: Request) {
const body = await req.json();
const parsed = insertPostSchema.safeParse(body);
if (!parsed.success) {
return Response.json({ error: parsed.error.flatten() }, { status: 400 });
}
const [created] = await db.insert(posts).values(parsed.data).returning();
return Response.json(created, { status: 201 });
}
When you add a column to the table, the Zod schema picks it up automatically. No drift between the validator and the database. This pattern slots cleanly into a Next.js project structured for scale, where the schema, queries, and validators live in lib/db/.
Drizzle transactions are just nested function calls.
await db.transaction(async (tx) => {
const [user] = await tx.insert(users).values({ email, name }).returning();
await tx.insert(posts).values({ authorId: user.id, title: "Welcome", body: "..." });
});
If anything throws, the transaction rolls back. Pass tx, not db, to any helper called inside the block; otherwise the helper runs outside the transaction and you get partial writes.
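The "pass tx, not db" rule can be sketched with a toy executor type. This is a stand-in for `typeof db | typeof tx`; in a real project you would type the parameter against your `db` and Drizzle's transaction callback parameter instead:

```typescript
// Toy stand-in for the surface shared by `db` and `tx`. It just records
// statements so we can see where a write lands.
interface Executor {
  log: Array<{ statement: string; params: unknown[] }>;
  run(statement: string, params: unknown[]): void;
}

// The helper takes whichever executor the caller holds. Inside a
// transaction it writes through `tx` and participates in the rollback.
function createWelcomePost(ex: Executor, authorId: string): void {
  ex.run("INSERT INTO posts (author_id, title) VALUES ($1, $2)", [
    authorId,
    "Welcome",
  ]);
}

// Simulated transaction: the helper's write lands in tx.log, so a
// rollback discards it. Had the helper closed over a global `db`,
// its write would escape the transaction.
const tx: Executor = {
  log: [],
  run(statement, params) {
    this.log.push({ statement, params });
  },
};
createWelcomePost(tx, "user-123");
console.log(tx.log.length); // 1 -- the write stayed inside the transaction
```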
For long-running transactions or partial-failure tolerance, savepoints work too:
await db.transaction(async (tx) => {
await tx.insert(users).values(newUser);
try {
await tx.transaction(async (sp) => {
await sp.insert(posts).values(riskyPost);
});
} catch {
// savepoint rolled back; outer transaction still alive
}
});
If you have a query that runs millions of times a day (auth checks, feature flags, session lookups), prepare it once and reuse the plan.
import { sql, eq } from "drizzle-orm";
const getUserByEmail = db
.select()
.from(users)
.where(eq(users.email, sql.placeholder("email")))
.prepare("get_user_by_email");
const user = await getUserByEmail.execute({ email: "ada@example.com" });
Prepared statements skip the SQL parsing and planning step on every call. For high-QPS services, this is a real 10-30% latency win. For an internal tool that runs the query twelve times a day, do not bother.
Common Table Expressions land cleanly in Drizzle:
import { sql } from "drizzle-orm";
const popularPosts = db.$with("popular_posts").as(
db.select().from(posts).where(gt(posts.views, 1000))
);
const result = await db
.with(popularPosts)
.select()
.from(popularPosts)
.orderBy(desc(popularPosts.views))
.limit(10);
This is the right tool when a query has two or three logical steps. Resist the urge to chain four CTEs; at that point the query belongs in a Postgres view or a materialized view.
Every ORM hits a wall. Drizzle's escape hatch is the `sql` template tag, and it is first-class.
import { sql } from "drizzle-orm";
const trending = await db.execute(sql`
SELECT p.id, p.title, count(c.id) as comment_count
FROM ${posts} p
LEFT JOIN comments c ON c.post_id = p.id
WHERE p.created_at > now() - interval '7 days'
GROUP BY p.id
ORDER BY comment_count DESC
LIMIT 20
`);
Interpolating ${posts} injects the safe table identifier, not a string concat. Parameters use ${value} and are bound, not interpolated. You get raw SQL when you need it without losing type safety on the parts the builder can handle.
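The binding behavior is easy to see with a toy tagged template. This is illustrative only; Drizzle's real `sql` tag does much more, including the identifier interpolation described above:

```typescript
// Toy version of the idea behind the `sql` tag: interpolated values are
// collected as bound parameters, never concatenated into the query text.
function toySql(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce(
    (acc, part, i) => acc + part + (i < values.length ? `$${i + 1}` : ""),
    ""
  );
  return { text, params: values };
}

const minViews = 100;
const q = toySql`SELECT id FROM posts WHERE views > ${minViews} AND published = ${true}`;
console.log(q.text);   // SELECT id FROM posts WHERE views > $1 AND published = $2
console.log(q.params); // [ 100, true ]
```

The query text only ever contains placeholders, which is why interpolating user input into the tag is safe.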
A short list of patterns that look right and break in production.
- Forgetting to pass `schema` to `drizzle()`. Breaks the `db.query` API silently. Fix: always pass it.
- Using `db` inside a transaction. Helpers run outside the transaction; rollbacks miss them. Fix: pass `tx` everywhere, or take a `db: typeof db | typeof tx` parameter.
- Running `drizzle-kit push` in production. It will silently drop columns. Fix: only `migrate` in CI/CD.
- Skipping `.returning()` then doing a follow-up select. Two round trips for one job. Fix: use `.returning()`.
- Using `serial` for new primary keys. Postgres prefers `uuid` or `bigserial` for distributed systems. Fix: pick `uuid().defaultRandom()` for anything user-facing.

Be honest. If you are running a single Postgres instance with three tables and one developer, plain `pg` plus a SQL file is fine. The day you have four tables and two developers touching the schema, you want Drizzle. Below that bar, the migration discipline is overkill.
If you are hooking a frontend to a hosted backend (Supabase, Firebase) with its own client SDK, you do not need Drizzle on the client. Use it on the server, or not at all.
- Run `npm install drizzle-orm pg` and `npm install -D drizzle-kit @types/pg`.
- Create a `db/index.ts` that exports a `drizzle()` instance with `{ schema }` passed.
- Define `db/schema.ts` using `pgTable`, with `references()` for foreign keys and `$inferSelect` / `$inferInsert` types exported.
- Run `npx drizzle-kit generate` and read the SQL file before committing.
- Run `npx drizzle-kit migrate` in CI before the new app version takes traffic.
- Use `relations()` to wire `one` and `many` links so the `db.query` API can do nested reads.
- Use `db.select()` for explicit SQL shapes and `db.query.*` for nested reads.
- Install `drizzle-zod` and generate `createInsertSchema()` validators that stay in sync with the table.
- Wrap multi-statement writes in `db.transaction(async (tx) => ...)` and pass `tx` to every helper inside.
- Use `db.$with()` for multi-step queries before reaching for raw SQL.

Drizzle's surface area is small enough that one engineer can learn it in a week. Where teams stall is the next layer up: connection pooling for serverless, schema design for multi-tenant apps, migration strategy for zero-downtime deploys. Those are the projects that pay for an outside hand.
If your team needs that, a senior engineer ($1,500/week) on Cadence can typically own a Drizzle migration from greenfield schema to production-ready in 1-2 weeks. Every engineer on the platform is AI-native by default, vetted on Cursor, Claude Code, and Copilot fluency before they unlock bookings, which means they ship Drizzle code in the SQL-shaped style the library wants. You can audit your stack with Ship or Skip before booking, if you want a second opinion on whether Drizzle is the right call for your shape of project.
Try it free. Cadence pulls from a 12,800-engineer pool, shortlists in 2 minutes, and runs a 48-hour free trial before any week is billed. If the engineer is not shipping by day two, you swap or walk. Book a senior engineer.
For a typical SaaS app with 15-30 tables, expect 3-5 days. The schema port is mechanical. The longer tail is rewriting nested include queries into db.query calls and re-running your test suite. We covered the full trade-off in our Drizzle vs Prisma breakdown.
Yes. Use drizzle-orm/neon-http, drizzle-orm/postgres-js, or drizzle-orm/d1 depending on your database. The library has zero Node-specific dependencies, so it runs on V8 isolates without polyfills.
Generate. push is for the first hour of a project; it will drop columns silently when the diff is ambiguous. Use generate to write a migration file, review it, commit it, and apply with migrate in CI.
jsonb() and json() columns accept a TypeScript type via .$type<MyShape>(). The shape is enforced at the type level, not the database level, so pair it with a Zod validator on writes.
Up to a point. Anything with window functions, recursive CTEs, or tricky aggregates is fine through the `sql` template tag. If your workload is 90% analytics, look at a query builder built for it (Kysely is closer to the metal) or push the work into a view.