
Handling a GDPR data deletion request means soft-deleting the user's account, waiting out a 30-day grace period, hard-deleting with a database CASCADE, fanning the deletion out to every sub-processor (Stripe, Resend, PostHog, Sentry, your AI provider), and writing an audit log of the request itself. You have 30 days under GDPR Article 17 (the clock comes from Article 12(3)), 45 under CCPA, and a similar window under India's DPDP Act. This post is operational guidance for engineers, not legal advice.
The hard part in 2026 is not the database delete. It is the fanout to the eight to fifteen sub-processors a modern SaaS quietly accumulates, plus the audit log paradox: you must prove you deleted the user without retaining anything that identifies them.
Three regulations cover most engineering teams. The shape is similar; the deadlines and penalties differ.
| Regulation | Response window | Erasure scope | Penalty ceiling | Notable exemption |
|---|---|---|---|---|
| GDPR Art. 17 (EU/UK) | 30 days (+2 months) | All processors + sub-processors | €20M or 4% global turnover | Legal claims, tax, freedom of expression |
| CCPA 1798.105 (California) | 45 days (+45) | Business + service providers | $7,500 per intentional violation | Transaction completion, legal compliance |
| DPDP Act §12 (India, 2023) | Without undue delay | Data Fiduciary + processors | ₹250 crore (~$30M) | Legal obligation, public interest |
GDPR Article 17 is the strictest in scope and the one most teams design for. It applies to any company processing EU residents' data, regardless of where the company is based. CCPA bites if you do business in California and hit revenue or data-volume thresholds. DPDP joined the pile in late 2023 and is still building enforcement muscle, but the design is GDPR-shaped, so a single workflow can satisfy all three.
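The response windows in the table reduce to a small piece of date math. A minimal sketch of a deadline helper (the `responseDeadline` name and `Jurisdiction` type are illustrative, not from any library):

```typescript
// Hypothetical helper: compute the statutory response deadline for a
// deletion request. Windows mirror the table above; extensions must be
// communicated to the user before the base window expires.
type Jurisdiction = "GDPR" | "CCPA" | "DPDP";

function responseDeadline(received: Date, jurisdiction: Jurisdiction): Date | null {
  const days: Record<Jurisdiction, number | null> = {
    GDPR: 30, // Art. 12(3); extendable by two further months with notice
    CCPA: 45, // §1798.105; extendable by a further 45 days with notice
    DPDP: null, // "without undue delay" -- no fixed statutory day count
  };
  const window = days[jurisdiction];
  if (window === null) return null;
  const deadline = new Date(received);
  deadline.setUTCDate(deadline.getUTCDate() + window);
  return deadline;
}

// Example: a GDPR request received on 2026-03-14 is due by 2026-04-13.
const due = responseDeadline(new Date("2026-03-14T00:00:00Z"), "GDPR");
console.log(due?.toISOString().slice(0, 10)); // "2026-04-13"
```

In practice the due date belongs on the deletion-request row itself, so a dashboard can surface requests approaching their deadline.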
Article 17(3) lists exemptions: legal claims, tax obligations, freedom of expression, public health, and archiving for research. These are narrow and need a documented reason. "We might want this data later" is not an exemption.
When GDPR landed, the typical SaaS had two sub-processors: a payment provider and an email tool. A 2026 SaaS routinely runs through eight or more: Stripe for billing, Resend for transactional email, PostHog or Mixpanel for product analytics, Sentry for errors, OpenAI or Anthropic for AI features, Vercel for hosting, Supabase or Neon for the database, plus whatever niche tool the marketing team added last quarter.
Every one of those holds a copy of some PII. Session replay tools record form inputs. Error trackers capture stack traces with email addresses. AI providers log prompts. Your DELETE FROM users is the easiest 5% of the work.
Most teams reach for this:
```sql
DELETE FROM users WHERE id = $1;
```
It looks correct. It will fail an audit for four reasons.
First, foreign keys. If posts.author_id, comments.user_id, and subscriptions.customer_id reference users.id without ON DELETE CASCADE, your delete either errors out or silently leaves orphan rows that still contain the user's content.
Second, sub-processors. Stripe still has the customer object. Resend still has the contact. PostHog still has session recordings of the user typing their email into a form. None of them get notified when you run a SQL delete.
Third, backups. Your nightly snapshot ran an hour before the deletion. Restore it for any reason and the user reappears.
Fourth, audit. The regulator asks: prove you deleted Sarah Cohen on March 14, 2026. Your only evidence is the absence of a row, which is also exactly what a bug looks like.
The pattern that holds up under SOC 2 and GDPR audits is a five-stage pipeline: soft-delete on receipt, a 30-day grace period, hard delete with CASCADE, sub-processor fanout, and an audit receipt.
Working code for the first three stages, in TypeScript with Drizzle and a Postgres backend:
```typescript
import { eq } from "drizzle-orm";
import { addDays } from "date-fns";
// db, users, sessions, subscriptions, deletionQueue, subprocessorFanout,
// HardDeleteJob, and writeDeletionReceipt are app-local modules.

// 1. Soft-delete on receipt of the request
export async function softDeleteUser(userId: string, requestId: string) {
  await db.transaction(async (tx) => {
    await tx
      .update(users)
      .set({ deletedAt: new Date(), deletionRequestId: requestId })
      .where(eq(users.id, userId));
    await tx.delete(sessions).where(eq(sessions.userId, userId));
    await tx
      .update(subscriptions)
      .set({ cancelAtPeriodEnd: true })
      .where(eq(subscriptions.userId, userId));
  });
  await deletionQueue.enqueue({
    type: "hard-delete",
    userId,
    requestId,
    runAt: addDays(new Date(), 30),
  });
}

// 2. Grace queue worker (runs after 30 days)
export async function hardDeleteWorker(job: HardDeleteJob) {
  const user = await db.query.users.findFirst({
    where: eq(users.id, job.userId),
  });
  if (!user || !user.deletedAt) return; // user restored during the grace period; skip

  // 3. Hard delete; CASCADE handles posts, comments, sessions, etc.
  await db.delete(users).where(eq(users.id, job.userId));

  // 4. Fanout (next section)
  await subprocessorFanout.enqueue({ requestId: job.requestId, user });

  // 5. Audit log
  await writeDeletionReceipt(job.requestId, user);
}
```
Foreign keys do the heavy lifting. The schema needs ON DELETE CASCADE on every table that references users(id):
```sql
ALTER TABLE posts
  DROP CONSTRAINT posts_author_id_fkey,
  ADD CONSTRAINT posts_author_id_fkey
    FOREIGN KEY (author_id) REFERENCES users(id) ON DELETE CASCADE;
```
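Before flipping constraints one by one, it helps to audit which foreign keys referencing users will not cascade. A sketch against Postgres's pg_constraint catalog (confdeltype is 'c' for CASCADE):

```sql
-- List FKs that reference users but will NOT cascade on delete.
SELECT conrelid::regclass AS referencing_table,
       conname            AS constraint_name
FROM pg_constraint
WHERE contype = 'f'                      -- foreign keys only
  AND confrelid = 'users'::regclass     -- ...that point at users
  AND confdeltype <> 'c';               -- ...and lack ON DELETE CASCADE
```

Run it in CI as a guard so a new table added without CASCADE fails the build instead of failing the next deletion request.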
If your team is shipping schema changes alongside this work, our notes on how to handle database migrations safely in production cover the expand-migrate-contract pattern that keeps the alters non-blocking.
This is the work most posts skip. Each provider has its own delete API, its own SLA, and its own gotchas.
```typescript
// stripe, resend, posthog, mixpanel, pinecone, etc. are initialized SDK
// clients; recordReceipt, deadLetterQueue, sentryDelete, and
// purgeAnthropicLogs are app-local.
export async function subprocessorFanout(payload: { requestId: string; user: User }) {
  const { requestId, user } = payload;
  const tasks = [
    { name: "stripe", run: () => stripe.customers.del(user.stripeCustomerId) },
    { name: "resend", run: () => resend.contacts.remove({ email: user.email, audienceId: AUDIENCE }) },
    { name: "posthog", run: () => posthog.delete(user.id) },
    { name: "sentry", run: () => sentryDelete(user.email) },
    { name: "mixpanel", run: () => mixpanel.people.delete_user(user.id) },
    { name: "anthropic", run: () => purgeAnthropicLogs(user.id) },
    { name: "vector-db", run: () => pinecone.delete({ filter: { userId: user.id } }) },
  ];

  for (const task of tasks) {
    try {
      const receipt = await task.run();
      await recordReceipt(requestId, task.name, "ok", receipt);
    } catch (err) {
      await recordReceipt(requestId, task.name, "failed", { error: String(err) });
      await deadLetterQueue.enqueue({ requestId, task: task.name, attempt: 1 });
    }
  }
}
```
A few per-provider notes that bite teams in production:
- Mixpanel's people deletion sends a $delete profile event, which removes the profile but keeps anonymous event aggregates (which is correct, because they're anonymous).

The fanout job is also a great place to add idempotency. Each task should be safe to retry, because dead-letter recovery a week later is a normal operation, not an emergency.
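That retry-safety requirement can be sketched as a small wrapper around each task. The names here (ReceiptStore, runIdempotent) are illustrative, not a real library:

```typescript
// Sketch: make fanout tasks safe to replay. A task that already has an
// "ok" receipt is skipped, so dead-letter recovery can re-run the whole
// task list without double-deleting or double-billing API calls.
type Status = "ok" | "failed";

interface ReceiptStore {
  get(requestId: string, task: string): Status | undefined;
  set(requestId: string, task: string, status: Status): void;
}

async function runIdempotent(
  store: ReceiptStore,
  requestId: string,
  task: { name: string; run: () => Promise<unknown> },
): Promise<Status> {
  if (store.get(requestId, task.name) === "ok") return "ok"; // already done, skip
  try {
    await task.run();
    store.set(requestId, task.name, "ok");
    return "ok";
  } catch {
    store.set(requestId, task.name, "failed"); // dead-letter queue picks this up
    return "failed";
  }
}
```

In production the ReceiptStore is the same sub-processor receipts column the audit section uses, which keeps the retry state and the audit evidence in one place.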
Article 17(2) requires "reasonable steps, including technical measures." It does not require you to restore every backup, scrub it, and put it back. That would be impossible at most companies and is not what regulators expect.
The accepted pattern is documented backup rotation. Most teams keep daily backups for 7-30 days and weekly snapshots for 90 days, then everything expires. The deletion lives in production immediately; backups eventually catch up as they roll off.
The non-negotiable: write a one-page restore procedure. If you ever restore from a backup taken before a deletion request, the procedure must re-apply pending deletions before the restored database serves traffic. Keep a deletion_requests table that survives the restore (or lives in a separate system) so the worker has a list to replay.
You must prove you deleted Sarah Cohen. Your audit log cannot contain "Sarah Cohen" or her email. So how do you prove anything?
A SHA-256 hash of the original user_id plus a server-side salt, stored alongside the request UUID. The user's row is gone, but the hash gives you a stable, non-reversible identifier you can match against the request that initiated the deletion.
```typescript
import crypto from "node:crypto";

// DELETION_HASH_SALT is a server-side secret; without it the hash cannot
// be brute-forced from a list of known user IDs.
async function writeDeletionReceipt(requestId: string, user: User) {
  const hashedSubject = crypto
    .createHash("sha256")
    .update(`${user.id}:${DELETION_HASH_SALT}`)
    .digest("hex");

  await db.insert(deletionReceipts).values({
    requestId,
    hashedSubject,
    jurisdiction: user.region === "EU" ? "GDPR" : user.region === "CA" ? "CCPA" : "DPDP",
    completedAt: new Date(),
    subprocessorReceipts: {}, // populated by the fanout worker as receipts arrive
  });
}
```
The deletion_requests table records: request ID, hashed subject, jurisdiction, request received timestamp, soft-delete timestamp, hard-delete timestamp, and a JSON object of sub-processor receipts. That row is your audit evidence. It contains no PII, and it is enough to show a regulator that the request was received, processed within the deadline, and propagated to every downstream system.
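When the question "did you delete user X?" arrives, you answer it by recomputing the salted hash from the claimed user_id and matching it against a stored receipt. A sketch (matchesReceipt is a hypothetical helper; the salt is the same server-side secret used when the receipt was written):

```typescript
import { createHash } from "node:crypto";

// Sketch: prove a deletion without storing PII. Recompute the salted
// hash for the claimed user_id and compare it to the receipt's
// hashedSubject. A match shows this receipt covers that user.
function matchesReceipt(
  claimedUserId: string,
  salt: string,
  hashedSubject: string,
): boolean {
  const recomputed = createHash("sha256")
    .update(`${claimedUserId}:${salt}`)
    .digest("hex");
  return recomputed === hashedSubject;
}
```

This only works because the original user_id is known to the person making the claim (the user, or you via the request ticket); the receipt alone reveals nothing.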
If you want to harden this further, our notes on how to do code reviews effectively in 2026 include a checklist for privacy-affecting changes that catches accidental PII in logs before they ship.
A few patterns that look correct in code review and break in production.
- A soft delete that keeps PII. If users.deleted_at IS NOT NULL but the row still has the email, the user can re-register and find their old data, or worse, an attacker who knows the email can. Null the email and any other PII fields at soft-delete time.
- Downstream rows and events still keyed by the deleted user_id.
- A late webhook that re-creates a customers row tied to the deleted user's email. Add a "deleted email" deny-list table; webhook handlers check it before creating anything.

The same engineering discipline that keeps your microservices monitoring stack healthy applies here: assume every component holds state you forgot about, and audit it explicitly.
A few situations where deletion does not (or should not) happen.
Tax retention. Most jurisdictions require invoice and transaction records for 5-10 years. Article 17(3)(b) covers this explicitly. Your Stripe charges, your invoice PDFs, and your accounting system data stay. The user's account, profile, and behavioral data still go.
Active fraud or chargeback investigation. Article 17(3)(e) covers "establishment, exercise or defence of legal claims." If you have an open fraud case, document a hold on the deletion. The hold is not indefinite; once the investigation closes, the deletion runs.
Anonymization vs deletion. Anonymization (irreversible, with no auxiliary data that could re-identify the person) is an accepted alternative for analytics. Pseudonymization (replacing the email with a token you keep elsewhere) is not. The test: could you, or anyone with reasonable effort, link the data back to the individual? If yes, it is still personal data, and the deletion right still applies.
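The anonymization test above can be made concrete with aggregation. A sketch (the UserRow shape and anonymizeToPlanCounts name are hypothetical):

```typescript
// Sketch: anonymize by aggregation. The output carries no identifier,
// so nothing in it is personal data and the deletion right no longer
// attaches to it. Tokenizing the id instead (pseudonymization) would
// fail the re-identification test, because the token table still links
// each row back to a person.
interface UserRow {
  id: string;
  plan: string;
}

function anonymizeToPlanCounts(rows: UserRow[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const row of rows) {
    counts[row.plan] = (counts[row.plan] ?? 0) + 1; // keep the count, drop the person
  }
  return counts;
}
```

Aggregates like these can survive a deletion request; anything keyed by id, token, or email cannot.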
If you have two founders, fifty users, and no EU customers yet, a Notion checklist plus a manual deletion script is fine. The pipeline above is overkill until you cross roughly 1,000 users or land your first EU customer.
The trigger is usually one of: an EU customer asks for a SOC 2 report, a US enterprise customer asks for a DPA addendum, or you ship a data export feature and realize deletion is the symmetric counterpart. At that point, building the pipeline takes a focused engineer 2-3 weeks. Cadence's senior tier ($1,500/week) handles full GDPR or SOC 2 rollouts of this shape; every engineer on Cadence is AI-native by default, which matters here because most of the work is methodical schema changes, sub-processor research, and test coverage. If you want help mapping your current stack to a regulator-ready workflow, book a senior engineer for a 48-hour trial.
For broader compliance context, our deeper write-ups on GDPR for SaaS, SaaS privacy policy, HIPAA for SaaS, and SOC 2 audit preparation cover the surrounding policy work.
- Soft-delete: set deleted_at, null PII fields, revoke sessions, schedule subscription cancellation.
- Hard-delete: run DELETE FROM users and let foreign keys propagate; verify orphan-row counts are zero.

Want a regulator-ready deletion pipeline shipped in two weeks instead of two months? Book a senior engineer on Cadence for a 48-hour trial. Weekly billing, no notice period, every engineer vetted on AI-native tooling.
**How long do I have to respond to a deletion request?**
30 days from receipt under Article 12(3), extendable by two more months for complex cases if you notify the user inside the first 30 days. CCPA gives you 45 days. DPDP requires action "without undue delay."

**Do I have to delete data from backups?**
Not immediately. Article 17(2) requires reasonable steps, not impossible ones. The accepted pattern is to let backups expire on a 30-90 day rotation and re-apply pending deletions if you ever restore from one. Document the procedure.

**Is anonymization an acceptable alternative to deletion?**
Yes, if anonymization is irreversible. Pseudonymization (recoverable) is not enough; the data must be unlinkable to the individual even with auxiliary information. Aggregated analytics counts are fine; tokenized user IDs are not.

**Does deleting the Stripe customer remove everything?**
Partially. You must delete the customer profile, but Stripe is allowed (and legally required) to retain transaction records for tax and anti-money-laundering law under Article 17(3)(b). The same logic applies to invoices in your accounting system.

**What about data sent to AI providers?**
Verify the provider has zero data retention enabled (OpenAI and Anthropic both offer this for API customers). For embeddings stored in your own vector DB, delete them alongside the source rows. If you sent user data to an AI fine-tune, you may need to retrain or filter the model.