May 14, 2026 · 11 min read · Cadence Editorial

How to handle data residency for international SaaS

Photo by [Brett Sayles](https://www.pexels.com/@brett-sayles) on [Pexels](https://www.pexels.com/photo/server-racks-on-data-center-5480781/)

Data residency for SaaS means storing and processing each customer's data inside the legal jurisdiction they require, usually by routing tenants to region-specific stacks (EU, US, India, Australia, KSA) at the edge. The honest playbook in 2026: pick one of four patterns (single-region with limits, tenant-sharded, active-passive, active-active), pin the AI vendor to the same region, and let auth ride globally. Anything else either burns cash or fails an audit.

This post is engineering guidance, not legal advice. Talk to a privacy lawyer before you sign an enterprise MSA that references residency.

Why data residency matters in 2026

Three years ago you could survive on a single us-east-1 deployment and a friendly DPA addendum. Not anymore. The list of regulations with hard or de-facto residency requirements keeps growing:

  • EU GDPR Article 44+ and Schrems II. Personal data of EU residents can leave the EEA only under specific transfer mechanisms (SCCs, adequacy decisions, the EU-US Data Privacy Framework). Fines hit 4% of global revenue or EUR 20M, whichever is higher. Meta paid USD 1.3B in 2023 for getting this wrong.
  • German BSI C5 and public-sector clouds. Federal and Länder agencies require in-country processing and operator nationality controls.
  • Indian DPDP Act. "Significant data fiduciaries" face data-localization rules the government can flip on for specific sectors (health, finance) at any time.
  • Australian IRAP and government cloud. PROTECTED-classified workloads must run in IRAP-assessed Australian regions.
  • Saudi PDPL and UAE banking circulars. Saudi banking data must remain in-Kingdom; UAE Central Bank Outsourcing Regulations require board approval for any cross-border processing of customer financial data.

The shift in 2026: enterprise procurement teams are now asking about AI-vendor residency in the same questionnaire as database residency. If your support agent runs on OpenAI's US endpoint, you have an unresolved transfer regardless of where Postgres lives. We'll come back to this.

The default approach (and why it breaks at the boundary)

Most teams start with one Postgres in us-east-1 (or eu-west-1 if they're European), Vercel or Render in the same region, and a CDN out front. This is correct until the first enterprise prospect asks "where does our data live?"

Three failure modes show up:

  1. The DPA gate. Procurement won't sign without a residency clause. You either lie, lose the deal, or pause to re-architect mid-sales-cycle.
  2. The audit gate. SOC 2 Type II auditors and ISO 27001 surveyors increasingly ask about data flows by region. A single us-east-1 map for an EU customer is a finding. Our SOC 2 audit preparation guide walks the full evidence list.
  3. The latency gate. A user in Mumbai hitting Virginia eats 220ms RTT before TLS. Single-region looks fine in your dashboard and feels broken to the user.

The default is fine for the first 12 to 18 months. Plan the exit ramp now so you're not retrofitting under enterprise-deal pressure.

The four architecture patterns

| Pattern | Setup time | Monthly infra cost (est.) | Best for | Real downside |
| --- | --- | --- | --- | --- |
| Single-region with policy controls | 1-2 weeks | $200-2,000 | Pre-revenue or single-jurisdiction | Fails on first non-local enterprise deal |
| Tenant-sharded by region | 4-8 weeks | $1,500-8,000 | <50 enterprise tenants, mostly intra-region traffic | Cross-tenant analytics gets ugly |
| Multi-region active-passive | 8-12 weeks | $4,000-15,000 | Regulated industries needing failover + residency | Failover RTO is real (5-30 min) |
| Multi-region active-active | 3-6 months | $10,000-50,000+ | Global scale, low-latency writes per region | Conflict resolution is its own product |

A few honest caveats. Cost ranges assume Postgres-class workloads on AWS or Render, not 100k QPS write-heavy systems. The active-active row hides a six-figure engineering tax in conflict resolution and idempotent design.

Pattern 1: Single-region with contractual controls

Stay in one region. Sign SCCs. Add a Transfer Impact Assessment to the DPA. Works for B2C, internal tools, and products with a mostly intra-region customer base.

Pattern 2: Tenant-sharded by region

One stack per region, each tenant pinned to one region forever. The simplest evolution from a single-region default. You replicate schemas, not data. We pair this with tenant_id discipline, the same pattern in our multi-tenancy implementation guide.
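
A minimal sketch of the tenant-pinning discipline, assuming a per-region Postgres stack; the `REGION_DATABASE_URLS` values and `Tenant` shape are illustrative, not from a real deployment:

```typescript
type Region = 'us' | 'eu' | 'ap';

interface Tenant {
  id: string;
  region: Region; // pinned once at onboarding, never changed implicitly
}

// Illustrative placeholder URLs, one full stack per region
const REGION_DATABASE_URLS: Record<Region, string> = {
  us: 'postgres://db.us.internal:5432/app',
  eu: 'postgres://db.eu.internal:5432/app',
  ap: 'postgres://db.ap.internal:5432/app',
};

function databaseUrlFor(tenant: Tenant): string {
  // Every query path goes through this one function, so an engineer
  // cannot accidentally open a cross-region connection for a tenant.
  return REGION_DATABASE_URLS[tenant.region];
}
```

Funneling all connections through one resolver like this is what makes the residency claim auditable: there is exactly one place where tenant and region meet.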

Pattern 3: Multi-region active-passive

One primary region serves writes; secondary regions hold continuously replicated read replicas with promotion logic for failover. AWS Aurora Global Database advertises sub-second cross-region replication lag for typical workloads. Good when the secondary doubles as DR and a future "go live in EU" toggle.

Pattern 4: Multi-region active-active

Each region writes its own data, replicates async to the others, and you handle conflicts with CRDTs, last-writer-wins, or per-tenant pinning. What Linear, Notion, and Figma built up to over years. Don't start here.
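
To make the conflict-resolution cost concrete, here is a minimal last-writer-wins merge, the simplest of the strategies named above. The record shape and millisecond-timestamp tiebreak are illustrative; production systems often use hybrid logical clocks instead of wall-clock time:

```typescript
interface VersionedRecord {
  value: string;
  updatedAtMs: number; // write timestamp stamped by the origin region
  region: string;      // deterministic tiebreak when timestamps collide
}

function lwwMerge(a: VersionedRecord, b: VersionedRecord): VersionedRecord {
  if (a.updatedAtMs !== b.updatedAtMs) {
    return a.updatedAtMs > b.updatedAtMs ? a : b;
  }
  // Equal timestamps: break the tie lexically on region name so every
  // replica converges on the same winner regardless of merge order.
  return a.region > b.region ? a : b;
}
```

Even this toy version shows the trap: last-writer-wins silently discards one side's write, which is exactly why active-active "is its own product."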

Tenant routing: the load-bearing decision

Once you go multi-region, the next question is how a request finds the right stack. Two patterns dominate.

Subdomain to region

Each tenant gets acme.eu.yourapp.com or acme.us.yourapp.com. DNS does the routing. Simple to reason about, expensive to migrate (the URL changes if a tenant relocates).

```typescript
// Cloudflare Worker example: subdomain-based routing.
// Placeholder origins -- substitute your per-region stacks.
const REGION_ORIGINS: Record<string, string> = {
  eu: 'https://eu-origin.yourapp.internal',
  us: 'https://us-origin.yourapp.internal',
  ap: 'https://ap-origin.yourapp.internal',
};

export default {
  async fetch(req: Request) {
    const url = new URL(req.url);
    const region = url.hostname.split('.')[1]; // acme.eu.yourapp.com -> eu
    const origin = REGION_ORIGINS[region] ?? REGION_ORIGINS.us;
    return fetch(`${origin}${url.pathname}${url.search}`, req);
  },
};
```

org_id to region (lookup at the edge)

Tenants keep one URL (yourapp.com). On every request, an edge worker looks up the org's home region from a small KV store and forwards. This is the WorkOS / Atlassian pattern and it's the right default for greenfield builds because relocating a tenant is a metadata flip, not a DNS migration.

```typescript
// Cloudflare Worker, KV-based tenant routing
interface Env {
  TENANT_REGIONS: KVNamespace; // org_id -> home region, written at onboarding
  ORIGIN_US: string;
  ORIGIN_EU: string;
  ORIGIN_AP: string;
}

export default {
  async fetch(req: Request, env: Env) {
    // extractOrgFromJWT: your app's JWT claim parser
    const orgId = req.headers.get('x-org-id') ?? extractOrgFromJWT(req);
    const region = (await env.TENANT_REGIONS.get(orgId)) ?? 'us';
    const origin = env[`ORIGIN_${region.toUpperCase()}` as keyof Env] as string;
    const url = new URL(req.url);
    return fetch(`${origin}${url.pathname}${url.search}`, req);
  },
};
```

The KV store is the source of truth. Update it during onboarding (the user picks "EU" in the signup flow, or you infer from billing country) and never write to a tenant's data outside their pinned region. Combine this with Zod for API validation so the routing payload is checked at the boundary.

Tools: who hosts what in 2026

| Layer | EU-region option | India-region option | KSA / UAE option |
| --- | --- | --- | --- |
| Compute | Vercel fra1, Render Frankfurt, Fly.io ams/fra | AWS ap-south-1 (Mumbai), Render Singapore | AWS me-south-1 (Bahrain), Azure UAE North |
| Postgres | Neon EU regions, Aurora Global EU, Supabase EU | Aurora ap-south-1, Neon Singapore | RDS me-south-1, on-prem partner |
| Edge / routing | Cloudflare Workers (regional placement), Vercel Edge | Cloudflare (Mumbai PoP) | Cloudflare (Riyadh PoP, 2025) |
| Object storage | S3 EU buckets, Cloudflare R2 EU jurisdictional | S3 ap-south-1, Wasabi Mumbai | S3 me-south-1, local provider |
| Auth | WorkOS, Auth0, Clerk (multi-region 2026) | same, with India tenant residency | usually federated to customer IdP |

Two 2026 specifics worth knowing. Cloudflare Workers regional placement lets you constrain a Worker (and its D1/Hyperdrive calls) to a specific jurisdiction, useful for deterministic Schrems II evidence. Neon's multi-region launch (2026) finally gives serverless Postgres tenants an EU-only option without paying Aurora Global pricing. For the broader auth question, our authentication implementation guide covers the managed-provider trade-offs.

The AI-vendor residency problem

This is the part most residency posts miss. If your product calls OpenAI or Anthropic from an EU tenant, you've created an EU-to-US transfer the moment a prompt leaves Frankfurt, regardless of where Postgres lives. Procurement teams in 2026 caught up to this.

The 2026 options:

  • OpenAI EU data residency. Available since Feb 2025, expanded Jan 16, 2026 to include in-region GPU inference for eligible customers in the EEA and Switzerland. Authentication still routes through US infrastructure (acceptable to most enterprise buyers, a blocker for a few).
  • Azure OpenAI in EU regions. Deploy GPT-class models inside Azure West Europe / Sweden Central / France Central. Often the cleanest answer for EU public-sector buyers who already trust Azure.
  • Anthropic via AWS Bedrock per-region. Claude is available in Bedrock across eu-central-1, eu-west-3, ap-southeast-2 and others. Calls stay inside the AWS region.
  • Self-hosted open-weight models. Llama 3.x or Mixtral on a per-region GPU pool. Highest sovereignty, highest ops cost.

Two engineering practices help. Route every AI call through a thin gateway service that knows the tenant's region and picks the matching model endpoint. Log the model + endpoint + region per request so an auditor can prove that an EU tenant's data never left the EEA. This is the same discipline as our right-to-be-forgotten data deletion guide: residency, like deletion, is an evidence problem, not just a code problem.
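
The gateway described above can be sketched as a single routing function. The endpoint map uses real Bedrock regional hostnames, but the log shape and function names are illustrative:

```typescript
type Region = 'us' | 'eu' | 'ap';

// One model endpoint per jurisdiction; prompts never cross this map.
const MODEL_ENDPOINTS: Record<Region, string> = {
  us: 'https://bedrock-runtime.us-east-1.amazonaws.com',
  eu: 'https://bedrock-runtime.eu-central-1.amazonaws.com',
  ap: 'https://bedrock-runtime.ap-southeast-2.amazonaws.com',
};

interface AiCallLog {
  tenantId: string;
  region: Region;
  endpoint: string;
  timestamp: string;
}

function routeAiCall(tenantId: string, tenantRegion: Region): AiCallLog {
  const endpoint = MODEL_ENDPOINTS[tenantRegion];
  // This record is the audit evidence: tenant, region, and the exact
  // endpoint the prompt was sent to, persisted per request.
  return {
    tenantId,
    region: tenantRegion,
    endpoint,
    timestamp: new Date().toISOString(),
  };
}
```

The point is that region selection and logging live in one place, so "prove the EU tenant never hit a US endpoint" becomes a log query instead of a code archaeology exercise.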

Real cost math: a 3-region rollout

A worked example for a Series A SaaS adding EU and Singapore regions on top of US-East. Numbers are 2026 list prices, rounded.

| Line item | US-East baseline | + EU (Frankfurt) | + Singapore | Combined / month |
| --- | --- | --- | --- | --- |
| Compute (Render Pro instances) | $400 | $400 | $400 | $1,200 |
| Postgres (Neon Scale plan, per region) | $300 | $300 | $300 | $900 |
| Cross-region replication egress (50GB/region) | $0 | $90 | $90 | $180 |
| Object storage + egress | $200 | $200 | $200 | $600 |
| Cloudflare Workers + KV (single global) | $50 | $0 | $0 | $50 |
| AI gateway (Bedrock Claude, mid-volume) | $800 | $800 | $400 | $2,000 |
| Total | $1,750 | $1,790 | $1,390 | $4,930 |

Two notes. Egress is the surprise on most invoices, especially if you replicate large object stores; design to keep blobs region-local where possible. The AI line scales hardest; cap it per tenant before cost-control becomes a fire.

Common pitfalls

  • Auth treated as residency-bound. Forcing per-region IdPs blows up SSO complexity for marginal compliance value. The 2026 consensus (WorkOS, Slack, Atlassian) is that auth metadata can ride globally; customer content cannot.
  • Cross-region analytics pipeline ignores residency. Sending raw event payloads from EU to a US Snowflake account undoes everything. Aggregate first, ship counts and hashed IDs only.
  • Backups outside the residency boundary. S3 cross-region replication is convenient and a violation if the destination region is offshore. Always pin backups inside the source jurisdiction.
  • AI vendor routing as an afterthought. Adding the prompt gateway after launch is a multi-week refactor; build it in front of the first OpenAI call.
  • No tenant relocation playbook. Eventually a customer will ask to move from US to EU. Without a documented migration (export, replicate, re-pin, verify, cutover, delete), it becomes a six-week incident.
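
The "aggregate first, ship counts and hashed IDs only" rule from the analytics pitfall can be sketched in a few lines. The event shape and salt handling are illustrative; in practice the salt lives in the in-region secret store:

```typescript
import { createHash } from 'node:crypto';

interface RawEvent {
  userId: string;
  eventName: string;
}

// Runs inside the source region. Only the returned counts -- never the
// raw events -- are shipped to the central warehouse.
function aggregateForExport(events: RawEvent[], salt: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    // Salted SHA-256 so the exported ID cannot be joined back to a user
    // without the in-region salt.
    const hashedId = createHash('sha256').update(salt + e.userId).digest('hex');
    const key = `${e.eventName}:${hashedId}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

This keeps the cross-border payload to event names, opaque hashes, and integers, which is a much easier conversation with a DPO than raw event streams.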

When you can skip this entirely

If you're pre-revenue, US-only, and not chasing EU enterprise this quarter, single-region with SCCs is the right answer. Data residency engineering is expensive, and the ROI only turns positive when residency-blocked deals start appearing in the pipeline. The trigger is usually the second or third enterprise prospect asking the same question in the same month. Until then, document the architecture you'd build, not the one you're running.

If you're building this out now, audit your stack with Ship-or-Skip before committing to a pattern; it'll surface the dependencies (auth, analytics, AI vendors) that need to move with you.

Steps

  1. Region detection. At signup, capture the tenant's jurisdiction (billing country, explicit picker, or enterprise MSA). Persist on the org record. For self-serve products, default to nearest region but make it user-overridable before any data is written.
  2. Tenant routing. Build an edge worker (Cloudflare Workers, Vercel Edge Middleware, AWS CloudFront SaaS Manager) that reads the tenant's region from a KV store and forwards each request to the matching origin. Treat the KV store as the routing source of truth.
  3. Cross-region replication policy. Document, per data class, where it can and cannot replicate. Customer content stays in-region. Aggregated metrics ship to a central warehouse. Backups stay inside the jurisdiction. Encode the policy in IaC (Terraform tags, AWS SCPs) so an engineer can't accidentally create an offshore replica.
  4. Audit trail. Log every request with (tenant_id, region_handled, downstream_region_called, model_or_db_endpoint). This becomes the evidence pack for SOC 2, ISO 27001, and any regulator inquiry. Sample logs into long-term storage for at least the retention window your DPA promises.
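
The step-4 log tuple can be captured as a typed record; the field names mirror the tuple above, and the `crossRegion` flag is an illustrative addition:

```typescript
interface AuditLogEntry {
  tenant_id: string;
  region_handled: string;
  downstream_region_called: string;
  model_or_db_endpoint: string;
  at: string;
  crossRegion: boolean;
}

function auditEntry(
  tenantId: string,
  regionHandled: string,
  downstreamRegion: string,
  endpoint: string,
): AuditLogEntry {
  return {
    tenant_id: tenantId,
    region_handled: regionHandled,
    downstream_region_called: downstreamRegion,
    model_or_db_endpoint: endpoint,
    at: new Date().toISOString(),
    // A cross-region downstream call is not automatically a violation,
    // but it is the first thing an auditor will ask you to explain.
    crossRegion: regionHandled !== downstreamRegion,
  };
}
```

Emitting this record on every request turns the evidence pack into a filter over structured logs rather than an ad-hoc reconstruction at audit time.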

The Cadence connection

Most data-residency rollouts are a 4 to 8 week project for a senior engineer who has done it once before. On Cadence, that's the senior tier ($1,500/week), and every engineer on the platform is AI-native by default (vetted on Cursor, Claude Code, and Copilot fluency before they unlock bookings), which matters when the work involves writing IaC, edge workers, and migration scripts in parallel. For multi-region active-active, you'll usually want the lead tier ($2,000/week) for the first 2 to 4 weeks of design.

If you're staring at a residency clause in a six-figure MSA and don't have an engineer who has shipped multi-region Postgres before, the fastest unblock is to book a senior engineer for a 48-hour trial on Cadence. You'll know inside two days whether they can deliver the rollout.

FAQ

Is data residency the same as data sovereignty?

No. Residency is "where data physically sits." Sovereignty adds "and which country's laws have jurisdiction over it, including the operator's nationality." A US-headquartered SaaS storing data in Frankfurt satisfies residency but not full sovereignty under strict reads of German or French public-sector rules.

Does GDPR require EU data residency?

Not literally. GDPR restricts cross-border transfers under Chapter V (SCCs, adequacy, DPF). EU residency is the cleanest way to avoid the transfer question, which is why most enterprise buyers ask for it.

How do I handle a tenant that wants to move regions?

Build a documented migration: pause writes, export the tenant's data, replicate to the destination region, verify counts, flip the routing KV entry, run a parallel-shadow week, then hard-delete from the source. Practice it on an internal tenant before the first customer asks.

Can I use OpenAI for EU customers in 2026?

Yes, with caveats. Use OpenAI's EU data residency offering (in-region inference since Jan 2026) or Azure OpenAI in an EU region. Route every prompt through a region-aware gateway and log the endpoint per request so you can prove no EU-to-US transfer occurred.

What about HIPAA?

HIPAA is a US law and doesn't impose residency directly, but BAAs from cloud and AI vendors are region-scoped. See our HIPAA SaaS compliance guide for the BAA stack and the architecture pattern for PHI workloads.

How long does the full rollout take?

Tenant-sharded: 4 to 8 weeks for a senior engineer who has done it before, 8 to 16 weeks if it's their first time. Multi-region active-passive: 8 to 12 weeks. Active-active: a quarter minimum, usually two.
