
Handling GDPR for a SaaS app is an engineering project, not a legal one. You map every column that holds personal data, ship endpoints for access, deletion, and portability, pin EU customer data to an EU region, and wire a 72-hour breach playbook. The privacy policy is the last 5 percent of the work, not the first.
This guide is operational guidance for engineers shipping a product, not legal advice. For the legal-document side (the public privacy policy, cookie banner, terms), see our companion piece on writing a SaaS privacy policy. This post focuses on the code, schemas, and infra decisions that make compliance real.
GDPR applies if you offer goods or services to people in the EU or monitor their behaviour, regardless of where you host. You are the data controller (you decide what gets collected and why). Your vendors (Stripe, Supabase, Resend, Vercel) are processors acting on your instructions. Fines top out at 4 percent of global annual revenue or 20 million euros, whichever is higher. The realistic risk for an early-stage SaaS is a complaint to a supervisory authority that asks for your Article 30 record of processing and your DSAR response logs. Have both ready or enforcement gets expensive fast.
The seven principles in Article 5 read like engineering constraints: process data lawfully and tell users what you're doing (lawfulness and transparency), use it only for the purpose you stated (purpose limitation), collect only what you need (minimisation), keep it accurate, store it no longer than necessary (storage limitation), secure it well (integrity and confidentiality), and be able to prove all of the above (accountability).
Open a fresh markdown file, list every table in your database, and for each table list every column that contains personal data. Tag each column with a tier: identifier (email, user id), sensitive (IP, location), or special category (health, beliefs, biometrics, which you almost certainly should not be storing).
A typical B2B SaaS PII map has 6 to 12 rows:
| Table | Columns with PII | Tier | Retention | Source |
|---|---|---|---|---|
| users | email, name, hashed_password | Identifier | Account lifetime | Direct from user |
| sessions | ip_address, user_agent | Sensitive | 30 days | Auth flow |
| audit_logs | actor_id, ip_address, action | Sensitive | 1 year | App writes |
| support_messages | body, attachments | Identifier | 2 years | User submits |
| stripe_customers | stripe_customer_id, last4 | Identifier | Subscription lifetime | Stripe webhook |
| email_events | recipient, opened_at | Identifier | 90 days | Resend webhook |
This map is your Article 30 record of processing once you add the legal basis (contract, consent, or legitimate interest) per column. Update it every time a migration adds a column. Most teams forget this and end up reverse-engineering it under regulator pressure 18 months later.
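If you keep the map in code as well as in the markdown doc, your export and deletion workers can walk it directly instead of drifting out of sync with it. A minimal sketch, with illustrative table and column names (adjust to your own schema):

```typescript
// A typed PII map the app can iterate over. Entries are illustrative.
type PiiTier = 'identifier' | 'sensitive' | 'special';

interface PiiEntry {
  name: string;                              // table name
  fk: string;                                // column linking rows to a user
  columns: string[];                         // columns holding personal data
  tier: PiiTier;
  retentionDays: number | 'account_lifetime';
}

const PII_MAP: PiiEntry[] = [
  { name: 'users', fk: 'id', columns: ['email', 'name'], tier: 'identifier', retentionDays: 'account_lifetime' },
  { name: 'sessions', fk: 'user_id', columns: ['ip_address', 'user_agent'], tier: 'sensitive', retentionDays: 30 },
  { name: 'audit_logs', fk: 'actor_id', columns: ['ip_address'], tier: 'sensitive', retentionDays: 365 },
];
```

A CI check that diffs this constant against `information_schema.columns` is a cheap way to catch the forgotten-migration problem.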
Data Subject Access Requests are three rights bundled together. Article 15 gives users the right to a copy of their data. Article 17 gives them the right to erasure. Article 20 gives them the right to portability in a machine-readable format. You have one calendar month to respond under Article 12(3), extendable to three months for complex cases if you tell them inside the first month.
Build them as code, not as a Notion ticket queue. The minimum viable shape is two authenticated endpoints:
```
GET    /api/me/export  -> returns JSON of every row touching the user
DELETE /api/me         -> queues an erasure job, returns 202
```
The export endpoint walks your PII map and runs a SELECT against every table where user_id = $1 (or the equivalent foreign key). It returns a single JSON document. That covers Article 15 and Article 20 in one shot, since JSON is a structured, commonly-used, machine-readable format.
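A sketch of that walk as a pure function, assuming a PII map with `name` and `fk` fields and a query helper you supply (both names are ours, not a specific library's API):

```typescript
type Row = Record<string, unknown>;
// You provide the query function; it runs SELECT * FROM <table> WHERE <fk> = <userId>
type Query = (table: string, fk: string, userId: string) => Promise<Row[]>;

async function buildExport(
  query: Query,
  piiMap: { name: string; fk: string }[],
  userId: string,
): Promise<Record<string, Row[]>> {
  const doc: Record<string, Row[]> = {};
  for (const { name, fk } of piiMap) {
    doc[name] = await query(name, fk, userId); // one key per table in the map
  }
  return doc;
}
```

Because the function only depends on the map, adding a table to the map automatically adds it to the export.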
The deletion endpoint is harder. It verifies the request (a re-auth step plus an email confirmation link), queues the work, and runs a soft delete first so you can reverse mistakes inside a 30-day window. After 30 days, a worker hard-deletes from the primary database, purges from any data warehouse mirror, and rotates the user's data out of backups on the next retention cycle.
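The 30-day gate is worth isolating into a pure function so the hard-delete worker can be unit-tested without clock tricks; `windowDays` defaulting to 30 is our assumption from the policy above:

```typescript
// True once a soft-deleted user has aged past the reversal window
function hardDeleteDue(softDeletedAt: Date, now: Date, windowDays = 30): boolean {
  const windowMs = windowDays * 24 * 60 * 60 * 1000;
  return now.getTime() - softDeletedAt.getTime() >= windowMs;
}
```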
Do not build a manual DSAR process. The first time a regulator asks for response-time logs, you want to point at a database table that proves you closed every request inside 30 days.
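One way to sketch that table and the deadline check, assuming a `dsar_requests` shape of your own design. Note that JavaScript's `setMonth` overshoots near month-end (Jan 31 plus one month lands in early March), so round down if you need strict calendar-month semantics:

```typescript
interface DsarRequest {
  id: string;
  kind: 'access' | 'erasure' | 'portability';
  opened_at: Date;
  closed_at?: Date;
}

// One calendar month under Article 12(3): same day, next month
function dsarDueDate(openedAt: Date): Date {
  const due = new Date(openedAt);
  due.setMonth(due.getMonth() + 1);
  return due;
}

function closedOnTime(r: DsarRequest): boolean {
  return !!r.closed_at && r.closed_at <= dsarDueDate(r.opened_at);
}
```

A nightly job that flags open requests approaching `dsarDueDate` is the alerting half of the same table.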
Data minimization (Article 5(1)(c)) is the principle most engineering teams ignore because it requires saying no to the analytics team. The rule is simple: don't collect what you don't need, and don't keep it longer than you need it.
Practical schema-level moves:
- Store truncated or hashed IP addresses when you only need coarse geolocation or abuse detection, never the full address by default.
- Put a retention period on every row of the PII map and enforce it with a scheduled job, not a policy document.
- Keep free-text fields (support messages, notes) out of analytics exports; they leak PII you never catalogued.
- Don't mirror data a processor already holds: store the stripe_customer_id and last4, not card details.
If you treat the schema as the enforcement layer, you cannot accidentally over-collect. Privacy by design (Article 25) is a column choice, not a compliance training module.
The lazy answer is "we encrypt everything." The real answer requires picking versions and key custody.
In transit, run TLS 1.3 only. Disable TLS 1.0 and 1.1 at the load balancer (Vercel, Render, Cloudflare all do this by default; verify with SSL Labs). Turn on HSTS with max-age=63072000; includeSubDomains; preload. For internal service-to-service traffic, use mTLS or run inside a private network.
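As a sketch, the HSTS value above can be set in one middleware-style helper (the helper name is ours; wire it into whatever framework you use, or set it at the load balancer instead):

```typescript
// Adds the HSTS policy described above: two years, subdomains, preload-eligible
function withHsts(headers: Headers): Headers {
  headers.set(
    'Strict-Transport-Security',
    'max-age=63072000; includeSubDomains; preload',
  );
  return headers;
}
```

Only send the `preload` directive once you intend to submit the domain to the browser preload list; it is effectively irreversible for months.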
At rest, managed Postgres providers like Supabase and AWS RDS encrypt the underlying volume with AES-256 by default. That covers the disk-stolen-from-datacenter threat model, not the database-credentials-leaked threat model. For genuinely sensitive fields (government IDs, health data), use envelope encryption: a per-row data key encrypted by a master key in AWS KMS or GCP KMS, with the plaintext key never written to disk. If you need a reference, the OpenTelemetry instrumentation guide shows similar key-handling discipline for trace data.
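Here is a runnable sketch of the envelope pattern using Node's built-in AES-256-GCM. A local buffer stands in for the KMS master key to keep the example self-contained; in production the wrap and unwrap steps are KMS API calls and the plaintext data key never touches disk:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

interface SealedBox { iv: Buffer; ct: Buffer; tag: Buffer }

function aeadEncrypt(key: Buffer, plaintext: Buffer): SealedBox {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ct, tag: cipher.getAuthTag() };
}

function aeadDecrypt(key: Buffer, box: SealedBox): Buffer {
  const d = createDecipheriv('aes-256-gcm', key, box.iv);
  d.setAuthTag(box.tag);
  return Buffer.concat([d.update(box.ct), d.final()]);
}

// Envelope encryption: a fresh per-row data key seals the field,
// and the master key (KMS in production) seals the data key.
function encryptField(masterKey: Buffer, value: string) {
  const dataKey = randomBytes(32);
  const sealed = aeadEncrypt(dataKey, Buffer.from(value, 'utf8'));
  const wrappedKey = aeadEncrypt(masterKey, dataKey); // stand-in for KMS Encrypt
  dataKey.fill(0); // drop the plaintext key immediately
  return { sealed, wrappedKey };
}

function decryptField(masterKey: Buffer, row: ReturnType<typeof encryptField>): string {
  const dataKey = aeadDecrypt(masterKey, row.wrappedKey); // stand-in for KMS Decrypt
  return aeadDecrypt(dataKey, row.sealed).toString('utf8');
}
```

Store `sealed` and `wrappedKey` alongside the row; rotating the master key then means re-wrapping data keys, not re-encrypting every field.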
For backups, confirm the provider encrypts them too. Supabase encrypts daily backups; AWS RDS automated snapshots inherit the source volume's encryption.
Every third party that touches user data is a sub-processor and needs a Data Processing Agreement (DPA). The big four for a typical SaaS:
- Stripe (billing): name, email, payment details.
- Supabase, or whoever hosts your database: everything in the PII map.
- Resend (transactional email): recipient addresses and delivery events.
- Vercel, or whoever hosts the app: request logs and IP addresses.
Maintain a public sub-processor list at /sub-processors on your marketing site. List each vendor, what they process, and the region. Notify customers at least 30 days before adding a new sub-processor and give them a way to object. Most enterprise buyers read this page during procurement; missing it kills deals faster than missing a SOC 2 report.
Run a quarterly review where someone walks the list and confirms every vendor still has a current DPA on file. Ten minutes a quarter beats a regulator letter.
Hosting in the US does not violate GDPR by itself, but EU customer expectations and some German or French enterprise procurement rules require data residency. Solve this with a tenant-level region flag and matching infra.
| Decision | Default choice | EU-strict choice | When to upgrade |
|---|---|---|---|
| Hosting region | Render Oregon | Render Frankfurt | First EU enterprise customer asks |
| Database | Supabase US-East | Supabase EU (Frankfurt) | Customer requires data residency clause |
| Email sending | Resend US | Resend EU | Marketing email to EU residents |
| Object storage | S3 us-east-1 | S3 eu-central-1 | Storing user uploads from EU users |
The cleanest pattern is one deployment per region with a shared control plane. Add a region column to your tenants table, route at the edge based on the customer's tenant, and run two database clusters. It's more ops, but it's the only design that survives a "where exactly is this row stored" question from a German DPA.
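The routing half can be as small as a lookup from the tenant's region flag to a deployment origin (region names and URLs here are illustrative):

```typescript
type Region = 'us' | 'eu';

// One origin per regional deployment; the control plane stays shared
const ORIGINS: Record<Region, string> = {
  us: 'https://us.api.example.com',
  eu: 'https://eu.api.example.com',
};

function originFor(tenant: { region: Region }): string {
  return ORIGINS[tenant.region];
}
```

The edge function resolves the tenant from the hostname or session, calls `originFor`, and proxies; no request body for an EU tenant ever reaches the US cluster.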
If you have no EU enterprise customer yet, skip this. A single US region with a clear DPA is defensible for an early-stage product.
Article 33 requires you to notify the supervisory authority within 72 hours of becoming aware of a personal data breach. The clock starts when you have a reasonable degree of certainty that personal data was compromised, per EDPB Guidelines 9/2022. You cannot delay awareness by failing to invest in detection; regulators have explicitly rejected that defence.
Two engineering investments make the timeline survivable:
First, an append-only audit_logs table that records every authentication event, admin action, PII access, and data export. Include actor id, ip, action, target_id, timestamp. Never let the application delete from it; only a quarterly retention job trims rows older than policy. We covered the access-control half of this in designing role-based access control (RBAC); the audit log is the receipt layer underneath.
Second, a written breach playbook at docs/breach-playbook.md. It lists who to call (DPO or named privacy contact), the ICO online breach form, the EDPB phased-notification template, and a decision tree for "is this notifiable." The EDPB explicitly allows phased notification: file an initial report inside 72 hours and submit details as the investigation progresses.
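If you want the audit log to be tamper-evident as well as append-only, one optional hardening sketch is to chain each row to the previous row's hash, so a deleted or edited row breaks every hash after it (field names are illustrative):

```typescript
import { createHash } from 'node:crypto';

interface AuditRow {
  actor_id: string;
  action: string;
  target_id: string;
  ip: string;
  at: string;        // ISO timestamp
  prev_hash: string; // hash of the previous row, 'genesis' for the first
  hash: string;
}

function appendAudit(log: AuditRow[], e: Omit<AuditRow, 'prev_hash' | 'hash'>): AuditRow[] {
  const prev_hash = log.length ? log[log.length - 1].hash : 'genesis';
  const hash = createHash('sha256')
    .update(prev_hash + JSON.stringify(e))
    .digest('hex');
  return [...log, { ...e, prev_hash, hash }];
}
```

Verification is a single pass recomputing each hash; publish the latest hash off-box daily and tampering inside the window is detectable too.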
The deletion worker is the most error-prone piece of GDPR engineering. Here's the pattern we ship:
```typescript
// workers/delete-user.ts (pseudo-code: db, stripe, resend, queue, and
// auditLogs stand for your app's own clients)
async function deleteUser(userId: string) {
  // 0. capture external identifiers before the soft delete nulls them
  const { email, stripe_customer_id } = await db.users.find(userId);

  // 1. mark soft-deleted, freeze auth
  await db.users.update(userId, { deleted_at: now(), email: null });

  // 2. walk the PII map, delete child rows (a no-op on retry, so the
  //    worker stays idempotent)
  for (const table of PII_MAP) {
    await db[table.name].deleteWhere({ [table.fk]: userId });
  }

  // 3. revoke external tokens
  await stripe.customers.del(stripe_customer_id);
  await resend.contacts.remove(email);

  // 4. write the receipt
  await auditLogs.insert({
    action: 'gdpr_erasure',
    target_id: userId,
    completed_at: now(),
    proof: hash(JSON.stringify({ tables: PII_MAP.map((t) => t.name) })),
  });

  // 5. schedule backup purge after the 30-day reversal window
  await queue.enqueue('purge-backups', { userId, after: addDays(now(), 30) });
}
```
The script is small. The discipline is the PII map staying current and the worker running idempotently if retried. The wiring (queues, dead-letter handling, CI patterns from our GitHub Actions for Next.js guide) is the same infrastructure work as any other background job.
If you are two founders pre-revenue with no EU users, you do not need region pinning, a DPO, or a sub-processor portal. You do need TLS, encryption at rest (free from any managed Postgres), a DSAR email contact, and a rough PII map in a Notion doc. That posture is defensible until your first EU customer.
If you are pure B2B and your customers are themselves controllers, most obligations push down to them via your DPA. You're the processor; your job is to honour their instructions and let them DSAR their users through your API.
If your product is consumer-facing with European users, ship all seven steps above. The question is not whether, only when.
Most teams find the work takes a senior engineer two to three weeks of focused effort, plus a week of cleanup spread over the next quarter. If you don't have that capacity in-house, the senior tier on Cadence ($1,500/week) is sized for exactly this kind of compliance build-out. Or if you'd rather pressure-test the stack you already have, our ship-or-skip stack auditor gives you an honest grade on your DSAR readiness, encryption posture, and audit-log discipline in about five minutes.
The short version:
- GET /api/me/export for access plus portability, DELETE /api/me for erasure. One calendar month response window.
- A region column on tenants, one Frankfurt deploy alongside the US deploy when the first EU enterprise customer asks.
- docs/breach-playbook.md in the repo, ICO and EDPB links pre-loaded.

If GDPR is the next item on your roadmap and the team is already underwater, a senior Cadence engineer can own the full rollout end-to-end. Weekly billing, 48-hour free trial, replace any week. Book the trial.
Do we need a Data Protection Officer? Only if you do large-scale monitoring of data subjects or process special-category data systematically. Most early-stage B2B SaaS does not need a formal DPO, but you do need a named privacy contact in your privacy policy and a real inbox that gets read.
How fast do we have to answer a DSAR? Within one calendar month under Article 12(3). You can extend by an additional two months for genuinely complex requests, but you must explain the delay to the requester within the first month. Build the response time into your engineering SLAs, not your support team's queue.
Does GDPR apply to a US company with no EU presence? Yes, if you offer goods or services to EU residents or monitor their behaviour. Hosting in the US does not exempt you. The territorial scope is in Article 3, and it has been tested repeatedly in enforcement actions against US-based platforms.
What does enforcement actually look like? Most enforcement starts with a complaint to a supervisory authority by a single user. The authority's first ask is your Article 30 record of processing and your DSAR response logs. Have both ready. Have your sub-processor list public. Most cases resolve at this stage without a fine if you can show real engineering controls.
Can we keep using US vendors? Yes, with EU Standard Contractual Clauses in your DPA and a transfer impact assessment on file. Stripe, Supabase, and Resend all publish SCC-compliant DPAs. Document the assessment once per vendor and revisit on any major change to the vendor's data handling.