
Postgres row-level security (RLS) for SaaS works in four moves: enable RLS on every tenant-scoped table, write USING plus WITH CHECK policies per operation, set a tenant_id session variable in your request middleware, and prove isolation with SET ROLE tests. Done right, a forgotten WHERE tenant_id = $1 clause stops being a data breach. The database becomes your last line of defense, not the first thing to silently fail.
This post is the implementation deep dive. If you want the architecture-level decision (single DB vs schema-per-tenant vs DB-per-tenant), start with our companion guide on implementing multi-tenancy in SaaS and come back here once you've picked the shared-database model.
RLS is a per-table feature that adds an implicit WHERE clause to every SELECT, INSERT, UPDATE, and DELETE. You write the predicate once as a policy on the table; Postgres applies it to every query the connecting role runs.
Two clauses do the work:
- USING (predicate) filters which existing rows the query can see or modify.
- WITH CHECK (predicate) validates rows being written. New or updated rows that fail the check raise "new row violates row-level security policy".

Three rules are easy to forget:

- Superusers and roles with the BYPASSRLS attribute skip every policy.
- A table with RLS enabled but no policies denies every row by default.
- Table owners bypass their own policies unless you run ALTER TABLE ... FORCE ROW LEVEL SECURITY.

That last one bites teams that run their app as the migration user. We'll fix it in the steps below.
Three things pushed RLS from "advanced Postgres trick" to "responsible default" in the last two years. Supabase made RLS table-stakes and trained a generation in the syntax. Drizzle and Prisma both shipped first-class RLS helpers in 2025. And AI coding agents now write most CRUD endpoints; they forget tenant filters, and RLS is the one mechanism that catches them before the breach.
If you're a 5-person team running a B2B SaaS on a single Postgres, RLS belongs in your schema from day one. The cost is one extra line in your connection middleware. The payoff is sleep.
Every table that holds tenant data needs the same two-line treatment:
```sql
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE projects FORCE ROW LEVEL SECURITY;
```
ENABLE turns RLS on for ordinary roles querying the table. FORCE extends it to the table owner, which is what your migration user almost certainly is. Skip FORCE and your policies will silently not apply when you query through the same role that ran the CREATE TABLE.
You also want a non-superuser app role that the API uses at runtime:
```sql
CREATE ROLE app_user NOLOGIN NOSUPERUSER NOBYPASSRLS;
GRANT app_user TO api_login_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
```
The NOSUPERUSER and NOBYPASSRLS flags are not redundant. A role can pick up BYPASSRLS later through inheritance if you're not careful. State the intent on the role itself.
The single biggest mistake in RLS implementations is writing one FOR ALL policy and assuming it covers INSERT. It doesn't, not the way you want.
Here's the operation matrix every tenant-scoped table needs:
| Operation | USING | WITH CHECK | What it prevents |
|---|---|---|---|
| SELECT | required | n/a | reading other tenants' rows |
| INSERT | n/a | required | creating rows in another tenant |
| UPDATE | required | required | modifying or moving rows across tenants |
| DELETE | required | n/a | deleting other tenants' rows |
A complete, safe policy set on a projects table looks like this:
```sql
CREATE POLICY projects_select ON projects
  FOR SELECT
  USING (tenant_id = current_setting('app.tenant_id')::uuid);

CREATE POLICY projects_insert ON projects
  FOR INSERT
  WITH CHECK (tenant_id = current_setting('app.tenant_id')::uuid);

CREATE POLICY projects_update ON projects
  FOR UPDATE
  USING (tenant_id = current_setting('app.tenant_id')::uuid)
  WITH CHECK (tenant_id = current_setting('app.tenant_id')::uuid);

CREATE POLICY projects_delete ON projects
  FOR DELETE
  USING (tenant_id = current_setting('app.tenant_id')::uuid);
```
The WITH CHECK clause on UPDATE is the one nobody remembers. Without it, a tenant could UPDATE projects SET tenant_id = 'other-tenant-id' WHERE id = ... and hand a row to a different tenant. The USING clause says "you can see this row." The WITH CHECK clause says "and after your update, the row still belongs to you." Both must pass.
Use a generator. Hand-writing this for 30 tables produces typos. A psql function or a Drizzle migration helper that emits all four policies from a table name and a column name pays for itself within a week.
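The generator can be tiny. Here is one possible sketch (the function name and shape are illustrative, not a Drizzle or psql API): a function that emits the ENABLE/FORCE statements plus all four policies for a table, so every table gets identical, typo-free SQL.

```typescript
// Hypothetical helper: emit the complete RLS statement set for one
// tenant-scoped table. Run the output in a migration.
export function rlsStatements(table: string, column = 'tenant_id'): string[] {
  // The predicate every policy shares.
  const tenant = `${column} = current_setting('app.tenant_id')::uuid`;
  return [
    `ALTER TABLE ${table} ENABLE ROW LEVEL SECURITY;`,
    `ALTER TABLE ${table} FORCE ROW LEVEL SECURITY;`,
    `CREATE POLICY ${table}_select ON ${table} FOR SELECT USING (${tenant});`,
    `CREATE POLICY ${table}_insert ON ${table} FOR INSERT WITH CHECK (${tenant});`,
    `CREATE POLICY ${table}_update ON ${table} FOR UPDATE USING (${tenant}) WITH CHECK (${tenant});`,
    `CREATE POLICY ${table}_delete ON ${table} FOR DELETE USING (${tenant});`,
  ];
}
```

Map it over your list of tenant-scoped table names and you have one migration that covers all 30 tables, with the UPDATE policy's WITH CHECK impossible to forget.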
The whole RLS scheme depends on current_setting('app.tenant_id') returning the right value for the current request. That happens in middleware, before your route handler runs.
The pattern for node-postgres plus Drizzle:
```typescript
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function withTenant<T>(
  tenantId: string,
  fn: (db: ReturnType<typeof drizzle>) => Promise<T>,
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      `SELECT set_config('app.tenant_id', $1, true)`,
      [tenantId],
    );
    const db = drizzle(client);
    const result = await fn(db);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```
Three details matter:
- The true third argument to set_config makes the setting transaction-scoped (equivalent to SET LOCAL), not session-scoped. Without it, the setting leaks across requests through the connection pool, and request 2 inherits request 1's tenant. This is the worst possible bug; you will not see it in tests with a single user.
- A transaction-scoped setting only lives inside the BEGIN/COMMIT. If you forget the transaction, the setting silently does nothing.
- Pull tenantId from your verified auth token (JWT claim, signed session), never from a request header or query param.

Your route handler then becomes:
```typescript
app.get('/api/projects', async (req, res) => {
  const projects = await withTenant(req.auth.tenantId, async (db) => {
    return db.select().from(projectsTable);
  });
  res.json(projects);
});
```
No WHERE tenant_id = anywhere. The database adds it for you.
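One cheap extra guard belongs at the top of withTenant. As a sketch (assuming your tenant IDs are UUIDs, which the ::uuid cast in the policies already requires): validate the value before it reaches set_config, so an empty string or a mis-plumbed token fails loudly instead of silently matching zero rows.

```typescript
// Defense in depth: reject anything that is not a canonical UUID before
// it is handed to set_config. The parameterized query already prevents
// injection; this catches plumbing bugs (undefined, '', a whole JWT).
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

export function assertTenantId(value: unknown): string {
  if (typeof value !== 'string' || !UUID_RE.test(value)) {
    throw new Error('withTenant called without a valid tenant UUID');
  }
  return value;
}
```

Call assertTenantId(req.auth.tenantId) before opening the transaction; a thrown error here is a 500 in one request, while a silently empty tenant context is a support ticket a month later.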
The two ecosystems take opposite postures.
Supabase ships with RLS off by default on a new table but flips on a "RLS not enabled" warning in the dashboard the moment you expose the table to the public API. Policies use the auth.uid() and auth.jwt() helpers, which read the JWT that the Supabase client passed in. You don't manage set_config yourself; the gateway does.
Drizzle and Prisma assume you own the connection. You write the middleware above, or you use the withRLS helper Drizzle shipped in 0.36 that does the set_config wrap for you. Prisma 6 added $transaction extensions that achieve the same effect.
The trade-off is control versus convenience: Supabase's gateway manages tenant context for you but ties you to its auth stack, while the ORM route means wiring the middleware yourself but works against any Postgres. If you're already on Supabase, use its RLS conventions and lean on auth.uid(). We've written a deeper take on the trade-offs in our Supabase review. If you're on a self-hosted Postgres or Neon, Drizzle's withRLS is the lowest-friction option.
RLS predicates run inside the query planner, so they show up in EXPLAIN. The good news: the planner is smart about pushing the predicate into index scans. The bad news: only if the right index exists.
The single most important performance rule: tenant_id is the leading column of every multi-column index on a tenant-scoped table. Not the trailing column. Not a separate index. The leading column.
```sql
CREATE INDEX projects_tenant_status_idx
  ON projects (tenant_id, status, updated_at DESC);
```
Without that index, Postgres scans the entire table, filters every row by tenant_id, and only then applies your WHERE status = 'active'. With the composite index, it seeks straight to the tenant's slice and reads sequentially. Internal benchmarks from teams shipping RLS at scale put policy evaluation at under 1ms with the composite index and roughly 100ms (two orders of magnitude slower) without.
Run EXPLAIN (ANALYZE, BUFFERS) on your hot queries with RLS on and a realistic dataset. Look for "Filter: (tenant_id = ..." in the plan. If you see it on a sequential scan, you're missing the index. If you see it on an index scan but the rows-removed-by-filter is high, your index doesn't lead with tenant_id.
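That check is easy to automate in CI. A hypothetical helper (illustrative, not a pg or Drizzle API) that scans EXPLAIN text output for the smell described above, a sequential scan whose filter mentions tenant_id:

```typescript
// Returns true when the plan text contains a Seq Scan node followed
// shortly by a Filter line that mentions tenant_id -- the signature
// of a missing composite index on a tenant-scoped table.
export function seqScansOnTenant(planText: string): boolean {
  const lines = planText.split('\n');
  return lines.some(
    (line, i) =>
      line.includes('Seq Scan') &&
      lines
        .slice(i + 1, i + 3)
        .some((l) => l.includes('Filter:') && l.includes('tenant_id')),
  );
}
```

Run your hot queries through EXPLAIN in a nightly job, pipe the plan text through this, and fail the build when it returns true; a missing index then surfaces as a red CI run instead of a slow dashboard.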
A few other things RLS will break and how to handle them:
- Tables referenced by foreign keys need ENABLE plus FORCE ROW LEVEL SECURITY too. Otherwise a foreign-key insert can leak the existence of other tenants' rows.
- Aggregates change meaning: COUNT(*) over the whole table is now per-tenant. That's usually what you wanted, but billing dashboards that expected global counts will quietly become per-tenant counts.
- For an outsized tenant, a partial index with WHERE tenant_id = 'big-customer-uuid' keeps their slice hot without bloating the main index.

If you're already chasing query plans, our guide to optimizing Postgres queries covers the broader EXPLAIN workflow.
RLS bugs do not show up under load. They show up in audits. Write integration tests that prove isolation before the code reaches main.
The pattern:
```sql
-- In a test transaction, simulate two tenants
INSERT INTO projects (id, tenant_id, name) VALUES
  ('p1', 'tenant-a', 'A project'),
  ('p2', 'tenant-b', 'B project');

SET ROLE app_user;

SELECT set_config('app.tenant_id', 'tenant-a', true);
SELECT count(*) FROM projects;  -- expect 1
SELECT id FROM projects;        -- expect ['p1']

SELECT set_config('app.tenant_id', 'tenant-b', true);
SELECT count(*) FROM projects;  -- expect 1
SELECT id FROM projects;        -- expect ['p2']

-- Try to write into wrong tenant
SELECT set_config('app.tenant_id', 'tenant-a', true);
INSERT INTO projects (id, tenant_id, name) VALUES ('p3', 'tenant-b', 'evil');
-- expect: ERROR: new row violates row-level security policy

RESET ROLE;
```
Wrap that in pgTAP, Vitest with pg, or whatever your stack uses. The key is SET ROLE app_user so the test runs as the non-bypassing role and set_config so the tenant context is real.
For every tenant-scoped table, you want at least three tests: cross-tenant SELECT returns nothing, cross-tenant INSERT raises, and cross-tenant UPDATE (changing tenant_id) raises. That's the minimum bar.
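One way to enforce that bar across every table is to generate the checks. A sketch (names are illustrative; feed the statements to your test runner's raw-query API inside the SET ROLE / set_config setup shown above):

```typescript
// One isolation check: a statement to run as tenant ownTenant,
// and what RLS should do with it.
export interface IsolationCheck {
  sql: string;
  expect: 'empty' | 'error';
}

// Emit the minimum three checks per tenant-scoped table.
// Test data only -- adjust the INSERT's column list to your schema.
export function isolationChecks(
  table: string,
  ownTenant: string,
  otherTenant: string,
): IsolationCheck[] {
  return [
    // cross-tenant SELECT must come back empty
    {
      sql: `SELECT id FROM ${table} WHERE tenant_id = '${otherTenant}'`,
      expect: 'empty',
    },
    // cross-tenant INSERT must raise the RLS violation
    {
      sql: `INSERT INTO ${table} (tenant_id) VALUES ('${otherTenant}')`,
      expect: 'error',
    },
    // moving a row to another tenant must raise as well
    {
      sql: `UPDATE ${table} SET tenant_id = '${otherTenant}' WHERE tenant_id = '${ownTenant}'`,
      expect: 'error',
    },
  ];
}
```

Loop this over every tenant-scoped table name in your schema and a new table without policies fails the suite on day one instead of in an audit.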
A short list of footguns ranked by how often they sink RLS rollouts:
- Running the app as a superuser or a role with BYPASSRLS. Your policies are decorative. Always create a dedicated app_user with NOSUPERUSER NOBYPASSRLS.
- Forgetting FORCE ROW LEVEL SECURITY. Migrations create tables. The migration user owns them. The migration user bypasses policies. So does any code path that connects as the migration user.
- Missing WITH CHECK on INSERT or UPDATE. Reads are isolated, writes are wide open. Your data slowly migrates between tenants and nobody notices until support tickets surface.
- SECURITY DEFINER functions that don't reset the search path. A SECURITY DEFINER function runs as the function owner, which usually has more privileges. Always set SET search_path = pg_catalog, public and re-check tenant context inside the function.
- Session-scoped SET (not SET LOCAL). Session settings leak across requests because the connection is reused. Always use set_config(..., true) so the setting is transaction-scoped.
- Long-lived BYPASSRLS grants. If a backfill genuinely needs to see every row, grant BYPASSRLS, then drop the privilege when the migration finishes.

If your team needs an outside set of eyes on this, a senior engineer ($1,500/week on Cadence) can audit your policies, add the missing WITH CHECK clauses, and ship the middleware in a week. Most RLS rollouts on existing codebases are 30-50 hours of careful work, not a six-week refactor.
The rollout checklist:

1. Enable: run ALTER TABLE <name> ENABLE ROW LEVEL SECURITY and ALTER TABLE <name> FORCE ROW LEVEL SECURITY for each table that holds tenant data. Skip lookup tables and global config.
2. Policies: write four per table, SELECT with USING, INSERT with WITH CHECK, UPDATE with both, DELETE with USING. Use the current_setting('app.tenant_id')::uuid predicate.
3. Middleware: run SELECT set_config('app.tenant_id', $1, true) with the verified tenant ID from your auth token. Use withTenant() (or Drizzle's withRLS) so route handlers cannot bypass it.
4. Tests: SET ROLE app_user, set a tenant context, and assert that rows from other tenants are invisible and that cross-tenant writes raise. At minimum, three tests per table.
5. Performance: run EXPLAIN (ANALYZE, BUFFERS) on the top 10 hottest queries with RLS enabled. Confirm tenant_id is the leading column of every composite index used.

Want an honest grade on the rest of your stack? Run Ship or Skip for a free audit of what's working and what to drop before you scale tenants.
With a composite index leading on tenant_id, RLS adds well under 1ms per query. Without that index, it can be two orders of magnitude slower because the planner falls back to a sequential scan plus filter. The fix is index design, not turning RLS off.
Yes, as long as you use set_config('app.tenant_id', value, true) (or SET LOCAL) inside an explicit transaction. Session-scoped SET will leak between requests when PgBouncer reuses connections. Drizzle's withRLS helper and the withTenant pattern in this post both do this correctly.
All three work. Supabase makes RLS the default and uses auth.uid() helpers tied to its gateway. Drizzle 0.36+ ships a withRLS helper that wraps the set_config middleware. Prisma 6 supports it through $transaction extensions. Pick the one your stack already uses; do not switch ORMs to get RLS.
Run migrations as the owning role, which bypasses RLS by default. If you've set FORCE ROW LEVEL SECURITY (as you should), give the migration role BYPASSRLS so schema changes and backfills still see every row. Keep the migration role separate from the runtime app role. Our Drizzle ORM guide covers the migration role pattern in detail.
RLS is your last line of defense, not your only one. Keep authorization checks in your API layer (who is allowed to call this endpoint at all) and use RLS for row-level isolation (which rows can this verified user touch). Defense in depth is cheap; RLS makes the database half free.