May 14, 2026 · 12 min read · Cadence Editorial

How to implement multi-tenancy in SaaS in 2026

Photo by [Brett Sayles](https://www.pexels.com/@brett-sayles) on [Pexels](https://www.pexels.com/photo/web-banner-with-online-information-on-computer-3803517/)


To implement multi-tenancy in SaaS in 2026, default to a shared Postgres database with a tenant_id (or org_id) column on every business table, Postgres Row Level Security policies that filter by current_setting('app.tenant_id'), and an ORM query helper (Drizzle or Prisma) that sets the tenant context on every request. Escalate to schema-per-tenant or database-per-tenant only for regulated industries or specific enterprise contracts that demand physical isolation.

That sentence is the entire 2026 default. The rest of this post is the working code, the honest tradeoffs across all three patterns, and the escalation rubric.

The three multi-tenancy patterns and what they actually cost

Every multi-tenant SaaS lands on one of three patterns. Labels vary (AWS calls them pooled / bridge / silo); the shape is the same:

| Pattern | Isolation | Per-tenant cost | Operational load | Best for |
| --- | --- | --- | --- | --- |
| Pooled (shared DB + RLS) | Logical, DB-enforced | Lowest | Lowest | B2B SaaS up to ~1,000 tenants |
| Schema-per-tenant (bridge) | Logical, schema-scoped | Medium | High (N migrations) | Dozens of large enterprise tenants |
| DB-per-tenant (silo) | Physical | 10-50x higher | Highest | HIPAA, PCI, FedRAMP, single-tenant SLAs |

The pooled pattern is the only one that scales operationally without a dedicated platform team. Schema-per-tenant feels safer until your first cross-cutting migration, at which point running an ALTER TABLE across 400 schemas becomes a real engineering project. Database-per-tenant gives you genuine physical isolation at 10x to 50x the per-tenant cost.

Most SaaS companies should not pick schema-per-tenant. It looks like a middle ground, but it combines the worst of both ends: logical-only isolation with N times the operational surface.

Why pooled + Postgres RLS is the 2026 default

Three things changed between 2022 and 2026 that make pooled + RLS the obvious starting point:

  1. Postgres RLS shipped in 9.5 (2016) but the tooling, examples, and ORM integrations finally caught up. Drizzle and Prisma both ship first-class patterns for setting the tenant GUC per connection.
  2. Connection-pool tooling (PgBouncer, Supavisor, Neon's pooler) makes per-tenant statement timeouts and isolated pools cheap to configure. Noisy neighbor used to be a real architectural argument; in 2026 it's a config file.
  3. Postgres itself is the default operational database for new SaaS. Every managed provider (Neon, Supabase, Render, Crunchy, RDS) supports RLS. You're not picking an exotic feature.

The economics also changed. Designing multi-tenancy in from day one takes 2 to 4 days. Retrofitting at customer 50 takes 4 to 8 weeks plus a security review. Designing it in is cheaper because no migration is ever required.

Org, Team, and Member: the data model that scales

Before any RLS policy, you need the right entity model. The 2026 default is three tables:

create table organizations (
  id uuid primary key default gen_random_uuid(),
  name text not null,
  slug text not null unique,
  created_at timestamptz not null default now()
);

create table users (
  id uuid primary key default gen_random_uuid(),
  email citext not null unique,
  created_at timestamptz not null default now()
);

create table memberships (
  id uuid primary key default gen_random_uuid(),
  org_id uuid not null references organizations(id) on delete cascade,
  user_id uuid not null references users(id) on delete cascade,
  role text not null check (role in ('owner','admin','member','viewer')),
  created_at timestamptz not null default now(),
  unique (org_id, user_id)
);

Two notes. First, organizations is your tenant. Call the column org_id on resource tables; the naming matches the entity your customers actually understand. Second, the role lives on the membership, not on the user, so a user can be admin in one org and viewer in another. This is the foundation you'll want when you layer in a real RBAC design.

If you need teams (sub-groups inside an org), add a teams table with org_id and a team_memberships join. Don't add teams until a customer asks.
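To make the per-org role concrete, here is a minimal TypeScript sketch of the lookup; the types and sample data are illustrative, not part of the schema above:

```typescript
type Role = 'owner' | 'admin' | 'member' | 'viewer';

interface Membership {
  orgId: string;
  userId: string;
  role: Role;
}

// The role is a property of the (user, org) pair, never of the user alone.
function roleInOrg(
  memberships: Membership[],
  userId: string,
  orgId: string,
): Role | null {
  const m = memberships.find((m) => m.userId === userId && m.orgId === orgId);
  return m ? m.role : null;
}

// The same user holds different roles in different orgs.
const rows: Membership[] = [
  { orgId: 'org-1', userId: 'u-1', role: 'admin' },
  { orgId: 'org-2', userId: 'u-1', role: 'viewer' },
];
```

In production this lookup is a single indexed point read against memberships; the unique (org_id, user_id) constraint guarantees at most one row.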

Putting org_id on every business table (the mechanical part)

Every business-data table gets org_id uuid not null references organizations(id). Composite indexes lead with org_id. Foreign keys cascade on org delete so you can hard-delete a tenant cleanly.

create table projects (
  id uuid primary key default gen_random_uuid(),
  org_id uuid not null references organizations(id) on delete cascade,
  name text not null,
  created_at timestamptz not null default now()
);

create index projects_org_id_created_at_idx
  on projects (org_id, created_at desc);

When retrofitting an existing schema, follow the standard expand-migrate-contract pattern from our database migrations playbook: add the column nullable, backfill from membership data, set NOT NULL, then add the foreign key. Don't try to do this in one transaction on a hot table.
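As a sketch of those steps on a hypothetical invoices table (the created_by column and the one-org-per-creator assumption are illustrative; derive org_id from whatever your real membership data supports):

```sql
-- 1. Expand: add the column nullable (no table rewrite, safe on a hot table)
alter table invoices add column org_id uuid;

-- 2. Migrate: backfill from membership data, in batches on large tables
--    (assumes invoices.created_by references users.id and each creator
--    belongs to exactly one org; multi-org users need a more specific source)
update invoices i
set org_id = m.org_id
from memberships m
where m.user_id = i.created_by
  and i.org_id is null;

-- 3. Contract: enforce the invariant only after the backfill completes
alter table invoices alter column org_id set not null;
alter table invoices
  add constraint invoices_org_id_fkey
  foreign key (org_id) references organizations(id) on delete cascade;
```

Run each step as its own migration; bundling them into one transaction holds locks for the full length of the backfill.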

Query-layer enforcement: where most teams quietly fail

Here is the failure mode that ships customer data to the wrong tenant: a developer writes db.select().from(projects) and forgets the where(eq(projects.orgId, ctx.orgId)) clause. Code review misses it. Tests pass because the test database has one tenant. The bug ships and stays silent until a customer screenshots another customer's project list.

The fix is to make the right thing the default thing. Wrap the database client in a tenant-scoped helper:

// db/tenant.ts (Drizzle)
import { db } from './client';
import { and, eq } from 'drizzle-orm';
import { projects, tasks, memberships } from './schema';

export function tenantDb(orgId: string) {
  return {
    projects: {
      list: () => db.select().from(projects).where(eq(projects.orgId, orgId)),
      byId: (id: string) =>
        db.select().from(projects)
          .where(and(eq(projects.orgId, orgId), eq(projects.id, id))),
      insert: (row: Omit<typeof projects.$inferInsert, 'orgId'>) =>
        db.insert(projects).values({ ...row, orgId }).returning(),
    },
    tasks: { /* same pattern */ },
  };
}

Now every call site reads tenantDb(ctx.orgId).projects.list() and the org scope is impossible to forget. The Prisma equivalent uses client extensions:

// db/prisma.ts
import { PrismaClient } from '@prisma/client';

// One shared base client; $extends reuses its connection pool instead of
// opening a fresh pool on every request.
const base = new PrismaClient();

export const tenantPrisma = (orgId: string) =>
  base.$extends({
    query: {
      $allModels: {
        async $allOperations({ args, query }) {
          if ('where' in args) args.where = { ...args.where, orgId };
          if ('data' in args && !Array.isArray(args.data)) args.data = { ...args.data, orgId };
          return query(args);
        },
      },
    },
  });

This is the first line of defense, not the only line. RLS is the second line, and you need both. Application-layer wrappers catch 99% of mistakes (typos, forgotten clauses, junior engineer copy-paste). RLS catches the 1% your wrapper missed: raw SQL, ad-hoc psql queries, the migration script that runs as the table owner.

The Postgres RLS policies that enforce isolation at the database

The pattern has three pieces: a non-superuser app role, a session GUC that holds the current tenant, and a policy on every business table.

-- 1. Create an app role that does NOT bypass RLS
create role app_user nologin;
grant connect on database app to app_user;
grant usage on schema public to app_user;
grant select, insert, update, delete on all tables in schema public to app_user;

-- 2. Enable RLS on the resource table
alter table projects enable row level security;
alter table projects force row level security;

-- 3. Policies that read the request's org_id from a session variable
create policy projects_org_isolation on projects
  for all
  to app_user
  using (org_id = current_setting('app.org_id', true)::uuid)
  with check (org_id = current_setting('app.org_id', true)::uuid);

Two details that matter. force row level security makes the policy apply even to the table owner; without it, the role that ran your migrations bypasses the policy. current_setting('app.org_id', true) (note the true) returns NULL if unset instead of throwing, which keeps unauthenticated reads from leaking error details.

The application middleware sets the GUC inside a transaction at the start of every request; because the setting is transaction-local, it resets automatically when the transaction ends:

// middleware/tenant-context.ts
import { sql } from 'drizzle-orm';
import { db } from '@/db/client';

export async function withTenantContext<T>(
  orgId: string,
  fn: (tx: typeof db) => Promise<T>,
): Promise<T> {
  return await db.transaction(async (tx) => {
    // Transaction-local GUC: applies only on this connection, resets on commit
    await tx.execute(sql`select set_config('app.org_id', ${orgId}, true)`);
    // Queries must run on tx so they share the connection the GUC was set on
    return await fn(tx);
  });
}

The third argument true to set_config makes the setting transaction-local, which is critical when you run on a connection pooler that recycles connections. The Nile and Supabase teams both publish more variations of this pattern; the one above is the minimum that works correctly with PgBouncer in transaction-pooling mode.

If your stack is Supabase specifically, our Supabase review goes deeper on the auth.uid() and auth.jwt() helpers that hook directly into RLS and let you skip the middleware in many cases.

The noisy neighbor problem and connection-pool isolation

In a pooled architecture, one tenant's runaway query can lock up your pool and slow down everyone. The 2026 fix is layered:

  1. statement_timeout on the app_user role. Set it to 30 seconds for OLTP workloads. Long reports get a separate role with a longer timeout.
  2. PgBouncer per-tenant pools for your top 5 to 10 customers by traffic. Most SaaS has a long tail; isolating the head accounts for 80% of the noise.
  3. Per-tenant rate limits at the API layer (Upstash Redis, or your existing rate limiter, keyed on org_id).
  4. Tenant-tagged metrics. Every Prometheus metric and every OpenTelemetry span gets an org_id attribute. When your microservices monitoring stack shows a latency spike, you can answer "which tenant" in the same dashboard.

If you skip the tenant tagging, you will spend 40 minutes per incident grepping logs to find out who was running the expensive query. Add it from day one.
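The per-tenant rate limit in item 3 is simple enough to sketch without infrastructure. A minimal in-memory token bucket keyed on org_id (a stand-in for the Redis-backed version; the constants are illustrative):

```typescript
// In-memory token bucket per org. A real deployment backs this with Redis so
// limits hold across app instances; this sketch only shows the shape.
interface Bucket {
  tokens: number;
  lastRefill: number; // epoch millis
}

const buckets = new Map<string, Bucket>();
const CAPACITY = 100;       // max burst per org
const REFILL_PER_SEC = 10;  // sustained requests/sec per org

function allowRequest(orgId: string, now: number = Date.now()): boolean {
  const b = buckets.get(orgId) ?? { tokens: CAPACITY, lastRefill: now };
  // Refill proportionally to elapsed time, capped at capacity
  const elapsedSec = (now - b.lastRefill) / 1000;
  b.tokens = Math.min(CAPACITY, b.tokens + elapsedSec * REFILL_PER_SEC);
  b.lastRefill = now;
  if (b.tokens < 1) {
    buckets.set(orgId, b);
    return false;
  }
  b.tokens -= 1;
  buckets.set(orgId, b);
  return true;
}
```

Keying on org_id rather than user or IP matters: one tenant's burst cannot starve the others even when it arrives from many users at once.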

When to escalate: schema-per-tenant or DB-per-tenant

Pooled + RLS handles the first 500 to 1,000 tenants without architectural change. Beyond that, escalate selectively. The triggers:

  • Regulated industry contracts (HIPAA BAAs, PCI Level 1, FedRAMP Moderate). The auditor will ask for physical isolation. You will lose the deal arguing about logical isolation. Work this into your SOC 2 readiness early; our SOC 2 audit prep guide covers the customer-facing parts.
  • A single tenant exceeds 10 to 20% of total database load. They're paying you enough to deserve their own database. Move them.
  • Single-tenant deployment SLAs (some healthcare and government contracts). DB-per-tenant or even cluster-per-tenant.

Plan on 1% to 5% of your tenants needing this. The rest stay on the pooled cluster forever and your platform team stays small.

When you do escalate, keep the application code identical. The tenantDb helper takes a connection-string lookup table; the resolver picks the right database based on org_id. The application doesn't know which pattern it's running on. This is the single most important architectural decision you make: the abstraction has to hide the deployment topology.
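A sketch of that resolver, with hypothetical URLs and org IDs: the lookup table holds the handful of escalated tenants, and everything else falls through to the pooled cluster.

```typescript
const POOLED_URL = 'postgres://pooled.internal:5432/app';

// Hypothetical registry of tenants moved off the pooled cluster
// (regulated contract, load percentage, single-tenant SLA).
const dedicated = new Map<string, string>([
  ['org-acme-health', 'postgres://acme-health.internal:5432/app'],
]);

// The application asks for a connection string, never for a topology.
function connectionStringFor(orgId: string): string {
  return dedicated.get(orgId) ?? POOLED_URL;
}
```

In practice the registry lives in a config table or secrets store rather than source, but the contract is the same: tenantDb(orgId) calls connectionStringFor(orgId) and never learns which pattern served it.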

Testing the isolation boundary (the part everyone skips)

The negative test is the one that proves your isolation works. It looks like this:

// tests/isolation.test.ts
test('tenant A cannot read tenant B projects', async () => {
  const orgA = await createOrg();
  const orgB = await createOrg();
  await tenantDb(orgA.id).projects.insert({ name: 'A-secret' });
  await tenantDb(orgB.id).projects.insert({ name: 'B-secret' });

  const aResults = await tenantDb(orgA.id).projects.list();
  expect(aResults.map(p => p.name)).toEqual(['A-secret']);
  expect(aResults.map(p => p.name)).not.toContain('B-secret');
});

test('raw SQL with wrong org GUC cannot read other tenant data', async () => {
  const orgA = await createOrg();
  const orgB = await createOrg();
  await tenantDb(orgB.id).projects.insert({ name: 'B-secret' });

  // Both statements must share one transaction: set_config(..., true) is
  // transaction-local, so a separate pooled statement would never see it
  const rows = await db.transaction(async (tx) => {
    await tx.execute(sql`select set_config('app.org_id', ${orgA.id}, true)`);
    return await tx.execute(sql`select name from projects`);
  });
  expect(rows.map(r => r.name)).not.toContain('B-secret');
});

The second test is the one that actually exercises RLS. Run both in CI on every PR. When someone disables a policy by accident or grants a new role that bypasses RLS, the test fails before the deploy.
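A cheap companion check for the same CI job is a catalog query that fails the build if any table is missing RLS; this is a sketch against the public schema, so adjust the filter to your layout and exclude known-global tables:

```sql
-- Lists ordinary tables in public that do NOT have row level security enabled.
-- CI should fail if this returns any unexpected rows.
select c.relname
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where n.nspname = 'public'
  and c.relkind = 'r'
  and not c.relrowsecurity;
```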

When you should not do any of this

If you are pre-revenue with one pilot customer and a six-month runway, build single-tenant. Skip the org_id columns. Ship the product. The day you sign your second customer, do the migration; you'll have learned what your real schema looks like by then and the retrofit will be a week of focused work, not a quarter.

The exception is anything regulated. If your second customer is a hospital, design multi-tenancy in from day one even if customer one is a pilot, because retrofitting a HIPAA-compliant data isolation boundary after launch is a much harder conversation than building it in.

For everyone else, the cost curve is real but not enormous. If you want a pragmatic gut-check on whether your current stack is ready for a second tenant, the Ship-or-Skip audit walks through your repo and grades the multi-tenancy readiness in about ten minutes.

Steps

  1. Model the entities. Create organizations, users, and memberships tables. Org is your tenant; role lives on membership.
  2. Add org_id to every business table. NOT NULL, foreign key to organizations(id) with on delete cascade, composite indexes leading with org_id.
  3. Build the query-layer scope. Wrap your database client in a tenantDb(orgId) helper that injects where org_id = $1 on every read and write. Make the unscoped client private to migrations.
  4. Enable Postgres RLS. Create a non-superuser app_user role, alter table ... enable row level security on every business table, and write for all policies that filter on current_setting('app.org_id').
  5. Wire the request middleware. At the start of every request, run set_config('app.org_id', orgId, true) inside a transaction so the connection pooler resets it cleanly.
  6. Add isolation tests. Write negative tests that confirm tenant A cannot read tenant B data, both through your ORM helper and through raw SQL with a spoofed GUC. Gate CI on them passing.
  7. Tag your observability. Add org_id to every metric, log line, and trace span. You'll need it the first time a tenant complains about latency.
  8. Plan the escalation path. Document which tenants get moved to schema-per-tenant or DB-per-tenant, what the trigger is (regulated contract, load percentage, SLA), and how the routing layer picks the right connection string.

The Cadence connection

Multi-tenancy retrofits land on a senior engineer's plate for two to four weeks. The work is 30% schema design, 30% migration safety, 40% writing the negative tests that prove isolation. If nobody in-house has shipped this twice, booking a senior is usually cheaper than learning it the hard way. Cadence's senior tier is $1,500/week, and every engineer on the platform is AI-native by default, vetted on Cursor, Claude Code, and Copilot fluency before they unlock bookings.

Want a second pair of senior eyes on your multi-tenancy plan? Book a Cadence senior engineer for a 48-hour free trial. They'll review your schema, write the RLS policies, and ship the negative tests; if it isn't a fit, you don't pay.

FAQ

Should I use Postgres RLS or just enforce tenant_id in my application code?

Use both. Application-layer enforcement (a tenantDb helper that wraps every query) catches 99% of mistakes; RLS is the safety net that prevents a single forgotten WHERE clause from leaking customer data when someone runs raw SQL or a migration script.

How long does it take to add multi-tenancy to an existing single-tenant SaaS?

Designing it in from day one takes 2 to 4 days. Retrofitting it after you have 50 customers takes 4 to 8 weeks plus a security review. The math says: design it in even if you have one paying customer, because no migration is ever required.

Does Postgres RLS hurt performance?

Not meaningfully if your indexes lead with org_id. RLS becomes a near-free filter the planner can push down. The performance hit is measurable only when policies call complex functions (joins to a memberships table on every row, for example), which you should avoid by keeping policies simple equality checks against the session GUC.

When should I move from RLS to a database per tenant?

When a single enterprise contract requires physical isolation, when you enter a regulated industry (HIPAA, PCI, FedRAMP), or when one tenant exceeds 10 to 20 percent of total database load. Plan on 1% to 5% of your tenants eventually needing this; the other 95% stay on the pooled cluster.

Can Drizzle and Prisma enforce tenant scoping automatically?

Yes. Drizzle's relational query helpers and Prisma's client extensions both let you wrap every query with a tenant scope. Combine that with RLS so the database itself refuses to serve cross-tenant rows, even if the helper is bypassed by raw SQL or by a future code change that forgets the wrapper.
