May 5, 2026 · 12 min read · Cadence Editorial

How to prepare for a HIPAA audit

Photo by [Brett Sayles](https://www.pexels.com/@brett-sayles) on [Pexels](https://www.pexels.com/photo/black-hardwares-on-data-server-room-4597280/)

Preparing for a HIPAA audit means proving, in writing and in code, that every system touching protected health information has signed BAAs, encrypted storage and transit, MFA on every access path, and immutable audit logs retained for six years. For a SaaS team that already ships features daily, the prep work is a 3 to 5 month engineering project that costs $35,000 to $80,000 in tooling and consumes 200 to 350 engineer hours, before legal and external auditor fees.

This is operational guidance, not legal or compliance advice. Talk to a HIPAA attorney before signing your first BAA.

What the OCR actually checks in 2026

The HHS Office for Civil Rights runs HIPAA enforcement, and the audit pattern in 2026 is more aggressive than founders expect. After the Change Healthcare breach exposed 192.7 million records through Citrix credentials with no MFA, OCR shifted from random Phase 3 audits to incident-driven investigations that arrive within weeks of any breach report.

Penalties were also reset on January 28, 2026. Willful neglect now hits $63,973 per violation, with an annual cap of $1,919,173 per violation category. The average healthcare breach cost reached $7.42 million in 2025 (IBM Cost of a Data Breach Report). Even a small SaaS handling PHI for one mid-market hospital can trigger a multi-million dollar exposure on a single incident.

Auditors typically request five categories of evidence:

  1. A current risk analysis with executive sign-off, refreshed in the last 12 months
  2. Signed BAAs with the covered entity AND every sub-processor that touches PHI
  3. Access control evidence: MFA enrollment proof, RBAC matrices, deprovisioning logs
  4. Six years of immutable audit logs with NTP synchronization
  5. Incident response playbook with at least one tabletop exercise on file

If you cannot produce any of these inside the typical 10 business day request window, OCR escalates to a corrective action plan or a resolution agreement with monetary penalty.

The shift in 2026: NIST 800-66 Rev 2 killed the addressable loophole

For two decades, HIPAA's Security Rule had two flavors of control: "required" and "addressable". Addressable meant a small team could document why a given safeguard was not reasonable for them and skip the implementation. MFA, encryption at rest, and automatic logoff all sat in the addressable bucket.

NIST 800-66 Revision 2, finalized in early 2026, retired that distinction. The current OCR posture treats every Security Rule specification as effectively required. If your incident response evidence shows you decided MFA was "not feasible" because your team is small, you will not win that argument in 2026.

Practically, this means three things that used to be optional are now table stakes:

  • MFA on every production access path, including admin panels and bastion hosts. SMS codes do not count; FIDO2 keys, TOTP, or push-based authenticators only.
  • AES-256 encryption at rest on every database, object store, and backup. KMS-managed keys with documented rotation.
  • Automatic logoff on every workstation and admin session, with timeouts set to 15 minutes for clinical contexts, 30 minutes for back-office.
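The automatic-logoff requirement reduces to a simple idle-timeout check wherever you track session activity. A minimal sketch, assuming a hypothetical middleware that records last-activity timestamps per session; the context names and thresholds mirror the guidance above and should be tuned against your own risk analysis:

```python
from datetime import datetime, timedelta

# Idle-timeout thresholds from the guidance above. The context labels
# are illustrative; map them to whatever roles your app actually has.
TIMEOUTS = {
    "clinical": timedelta(minutes=15),
    "back_office": timedelta(minutes=30),
}

def session_expired(last_activity: datetime, now: datetime, context: str) -> bool:
    """Return True when an idle session should be force-logged-off."""
    return now - last_activity > TIMEOUTS[context]
```

Call this on every request (or on a timer) and destroy the session server-side when it returns True; a client-side redirect alone does not satisfy the control.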

The default approach (and why it breaks)

Most teams reach for a compliance automation platform like Vanta, Drata, or Secureframe, sign up for the HIPAA module, and assume the dashboard will herd them through the audit. This works for the mechanical parts: the platforms do auto-collect screenshots of MFA settings, sync your AWS configurations, and generate template policies.

It breaks at three boundaries the platforms cannot cross.

First, BAAs are still a legal artifact. Vanta will track which sub-processors you have BAAs with, but it will not negotiate them or tell you that your Twilio account is on the wrong plan to qualify for one. (More on that in a moment.)

Second, your application code is invisible to the platform. Drata cannot see whether your /api/patients endpoint logs request bodies that include PHI to Datadog. That has to be audited by a human reading code.

Third, the BAA flow-down obligation means every sub-processor your sub-processor uses is also in scope. If you use Segment to pipe events to Mixpanel, and Mixpanel uses a third-party for session replay that captures PHI in form fields, you own that exposure. No platform maps that automatically.
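The flow-down mapping is a graph traversal you can do by hand in a spreadsheet, but a sketch makes the shape of the problem concrete. All vendor names below are hypothetical stand-ins taken from the example above; a real map comes from each vendor's published sub-processor list:

```python
# Hypothetical flow-down graph: each vendor maps to the vendors it
# forwards PHI-bearing data to (per their sub-processor disclosures).
FLOWS = {
    "your-app": ["segment"],
    "segment": ["mixpanel"],
    "mixpanel": ["session-replay-vendor"],
}
BAAS_SIGNED = {"segment", "mixpanel"}  # illustrative; track yours in the inventory

def phi_exposure(root: str) -> set[str]:
    """Walk the flow-down graph and return every vendor that sees PHI."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        for downstream in FLOWS.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                stack.append(downstream)
    return seen

missing = phi_exposure("your-app") - BAAS_SIGNED
# Every name left in `missing` needs a BAA before PHI flows through it.
```

Rerun the traversal whenever a vendor updates its sub-processor list; that is the event that silently widens your scope.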

The better approach: a 5-step prep playbook

Step 1. PHI inventory and data flow map (40 to 60 engineer hours)

Sit down with one engineer per service and map every place PHI enters, transits, or rests. Patient names, dates of birth, treatment notes, claim IDs, and any combination of demographic data plus health context all qualify.

Output: a single spreadsheet or Lucidchart diagram listing each service, the PHI fields it handles, the storage backend, the egress destinations (analytics, error tracking, email, SMS), and which sub-processor sees each field.

What goes wrong: teams forget about CSV exports, support inboxes, log pipelines, and Slack channels where engineers paste customer screenshots while debugging. Audit each of these explicitly.
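The inventory itself can be as simple as one row per service-to-destination edge. A sketch with hypothetical field names (none of these are a standard schema) and a check that surfaces the rows needing remediation:

```python
# Illustrative inventory rows; one row per PHI egress path.
INVENTORY = [
    {"service": "patients-api", "phi_fields": "name;dob", "storage": "rds-prod",
     "egress": "datadog", "baa_in_place": "no"},
    {"service": "billing", "phi_fields": "claim_id", "storage": "rds-prod",
     "egress": "sendgrid", "baa_in_place": "yes"},
]

def uncovered_egress(rows: list[dict]) -> list[tuple[str, str]]:
    """Flag every service shipping PHI to a destination without a signed BAA."""
    return [(r["service"], r["egress"]) for r in rows if r["baa_in_place"] != "yes"]
```

The flagged pairs become the work queue for Step 2.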

Step 2. BAA roundup with the gotchas spelled out (20 to 40 engineer hours plus legal)

You need a BAA from every sub-processor that sees PHI. The ones engineers most often miss:

  • AWS, GCP, Azure: BAA available on every paid account. Enable in AWS Artifact in five minutes. GCP requires a written request.
  • Twilio: BAA only on Twilio Flex Enterprise tier or via custom agreement. Standard pay-as-you-go SMS does not qualify. Switch SKUs before sending the first PHI text.
  • SendGrid: BAA only available with the dedicated HIPAA-compliant package, which sits inside the Twilio enterprise umbrella. The standard SendGrid Pro plan does not carry one.
  • Datadog, Sentry, Honeycomb: BAAs available on Enterprise plans. Free and Pro tiers do not carry them. If your free Sentry org is collecting stack traces with PHI in payloads, that is a breach.
  • Segment, Mixpanel, PostHog, Amplitude: BAAs available on specific enterprise tiers. Check before you ship the analytics SDK.
  • Vercel, Netlify, Cloudflare: BAAs available on Enterprise. Hobby and Pro plans do not qualify.
  • OpenAI, Anthropic: Both offer BAAs on enterprise contracts as of 2026. The API by default does not. Do not pipe PHI into model calls without one.

What goes wrong: a startup ships a feature using the free Sentry tier, captures 18 months of error logs containing PHI, then has to scrub everything when they upgrade. Plan the BAA before you wire the integration.

Step 3. Encryption, MFA, and access controls (60 to 100 engineer hours)

Encryption at rest is one toggle on AWS RDS, S3, EBS, and DynamoDB. The work is auditing every datastore (including dev and staging snapshots) and confirming KMS keys are rotated yearly with the audit trail captured.

Encryption in transit means TLS 1.2 minimum, 1.3 preferred, on every external endpoint and every internal service-to-service call inside your VPC. mTLS for service mesh is the cleaner pattern for new builds. Reach for AWS Certificate Manager or cert-manager on Kubernetes; do not hand-roll.
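For external endpoints, the TLS floor is a one-line server directive. A minimal nginx fragment, assuming placeholder certificate paths (swap in your ACM or cert-manager-issued certs):

```nginx
# Enforce TLS 1.2 minimum, prefer 1.3. Certificate paths are placeholders.
server {
    listen 443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_certificate     /etc/ssl/certs/app.pem;
    ssl_certificate_key /etc/ssl/private/app.key;
}
```

Verify with an external scanner after deploying; a directive in the config that an upstream load balancer overrides is a common false sense of security.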

MFA enforcement: turn on SSO via Okta, Google Workspace, or Microsoft Entra ID, then disable password-only login on every console (AWS, GitHub, Linear, Vercel, your database admin tools, your CRM). This is also where many teams discover that their PostgreSQL admin user has a password and no MFA, which is an instant audit finding.
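On the AWS side, MFA can be enforced at the IAM layer rather than by policy document alone. A sketch adapted from the common AWS deny-without-MFA pattern using the `aws:MultiFactorAuthPresent` condition key; treat the `NotAction` carve-outs as a starting point, not a vetted policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyAllExceptWithMFA",
    "Effect": "Deny",
    "NotAction": ["iam:*", "sts:GetSessionToken"],
    "Resource": "*",
    "Condition": {
      "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
    }
  }]
}
```

The `BoolIfExists` form matters: a plain `Bool` check does not catch requests where the key is absent, such as some long-term access key flows.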

RBAC: write down who can read PHI, who can write it, who can export it, who can delete it. Codify in IAM policies, then run quarterly access reviews where managers re-attest. The same principles in API design best practices apply here: explicit contracts, least privilege, and audit-ready by default.

Step 4. Six-year audit logging (40 to 80 engineer hours)

This is where most teams underbudget. AWS CloudTrail defaults to 90 days. Google Cloud Logging defaults to 30 days. HIPAA wants 2,190 days. The fix is a CloudTrail trail writing to S3 with Object Lock in compliance mode (so logs cannot be deleted even by root), plus a lifecycle policy moving to Glacier Deep Archive after 90 days.

Cost: roughly $30 per year in S3 storage for a 100-employee SaaS. The work is wiring it correctly, validating the lock, and then doing the same for application-level audit logs (who viewed which patient record, when). Postgres pg_audit plus a stream to S3 is the common pattern.
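The two S3 settings described above are small JSON payloads. Sketches of both, assuming a hypothetical `cloudtrail/` prefix for your trail's log objects; apply them via `put-object-lock-configuration` and `put-bucket-lifecycle-configuration` respectively:

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {"Mode": "COMPLIANCE", "Years": 6}
  }
}
```

```json
{
  "Rules": [{
    "ID": "audit-logs-to-deep-archive",
    "Status": "Enabled",
    "Filter": {"Prefix": "cloudtrail/"},
    "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}]
  }]
}
```

Note that Object Lock must be enabled at bucket creation, and compliance mode is irreversible; test the configuration on a scratch bucket first.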

You also need NTP sync on every host so log timestamps line up. AWS Time Sync Service is free and handles this. Mismatched timestamps make incident reconstruction impossible.

What can go wrong: teams capture the action but not the actor, or vice versa. Every record needs both, plus the patient record ID, timestamp, source IP, and outcome.
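A completeness check at log-write time catches the actor-without-action problem before it accumulates. A minimal sketch; the field names come from the list above and are illustrative, not a mandated schema:

```python
# Required fields per the guidance above: actor, action, patient record
# ID, timestamp, source IP, and outcome. Names are illustrative.
REQUIRED = {"actor", "action", "patient_id", "timestamp", "source_ip", "outcome"}

def missing_fields(record: dict) -> set[str]:
    """Return the required audit fields a log record is missing."""
    return REQUIRED - record.keys()
```

Reject or quarantine any record where the result is non-empty; a partial audit record is nearly as useless as no record during incident reconstruction.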

Step 5. Risk analysis, policies, and incident response (40 to 70 engineer hours plus legal)

The risk analysis is a written document, owned by a named person, refreshed annually. Vanta and Drata generate the template; you fill in the threats specific to your stack. Aim for 30 to 50 pages covering admin, physical, and technical safeguards.

Policies: 14 to 18 documents covering access control, incident response, breach notification, sanction, contingency planning, device and media controls, and workstation use. Templates from your compliance platform get you 70% there. The remaining 30% is making them match what your team actually does, because auditors will interview engineers and check.

Incident response: write the playbook, then run a tabletop exercise. Pick a scenario (S3 bucket made public, employee laptop stolen, vendor breach disclosure) and walk through the response. Document who decides what, who notifies the covered entity within the 60-day window, who notifies HHS for breaches over 500 records, who calls the press if required.

Real cost breakdown for a 15-person SaaS

| Line item | Range | Notes |
| --- | --- | --- |
| Compliance automation (Vanta HIPAA) | $10,000 to $15,000/yr | Sub-50 headcount; HIPAA module priced separately from SOC 2 |
| Compliance automation (Drata Foundation) | $7,500 to $15,000/yr | One framework; HIPAA-only |
| Compliance automation (Secureframe) | $12,000 to $20,000/yr | More managed onboarding |
| External HIPAA risk assessment | $5,000 to $15,000 | Required if you sell to enterprise health systems |
| Penetration test (annual) | $8,000 to $25,000 | OCR expects evidence |
| Engineer time (200 to 350 hrs) | $30,000 to $70,000 | Loaded cost at $200/hr |
| Legal review of BAAs and policies | $5,000 to $15,000 | One-time, then $2k/yr ongoing |
| AWS infrastructure overhead | $200 to $1,500/yr | KMS, CloudTrail, Object Lock, encrypted snapshots |
| **Year-one total** | **$65,000 to $160,000** | |

If your team did SOC 2 audit preparation recently, expect 40 to 60% of engineer time to overlap. Most controls map one-to-one. HIPAA-specific work concentrates in PHI inventory, BAAs, and 6-year retention.

Common pitfalls that flag in audits

  • Free-tier vendor BAAs. Sentry free, Datadog Pro, Vercel Pro, OpenAI API default. None carry BAAs. If you wired any of these before the inventory pass, expect remediation work.
  • Logs that include PHI. Standard error logging captures request bodies. If a patient lookup endpoint errors, the log line contains the patient ID. Either redact at the SDK level or upgrade to BAA-covered logging.
  • Test data that is real PHI. A common pattern: ops copies a prod snapshot to staging for debugging. Now staging needs the same controls as prod, or the data needs to be irreversibly de-identified per the Safe Harbor method.
  • Shared admin accounts. A dba@company.com account shared among three engineers fails every audit. Unique accounts, full stop.
  • No tabletop exercise. Having a written incident response plan is not enough. Auditors ask for evidence you have run the scenario.
  • Marketing pixels on the patient portal. A Meta pixel or Google Analytics tag on a page where patients log in is an active OCR enforcement target. Ripple effects from the 2024 Cerebral and BetterHelp settlements are still unwinding.
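The "logs that include PHI" pitfall above is cheapest to fix at the point where request bodies reach the logger. A minimal key-based redaction sketch; the field names are hypothetical and should come from your Step 1 PHI inventory:

```python
# Illustrative PHI field names; extend this set from your PHI inventory.
PHI_KEYS = {"patient_id", "name", "dob", "ssn"}

def redact(payload: dict) -> dict:
    """Strip PHI fields from a request body before it reaches the logger."""
    return {k: ("[REDACTED]" if k in PHI_KEYS else v) for k, v in payload.items()}
```

Wire this into the error-tracking SDK's before-send hook so redaction happens client-side of the vendor, which matters when the vendor tier carries no BAA.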

When you can skip this entirely

If you are pre-revenue, pre-PHI, and not yet talking to a covered entity, skip the audit prep and focus on shipping. The moment you sign the first BAA with a customer, the 60-day clock on a real risk analysis starts, but not before.

If you are building something that lives near healthcare but never touches PHI (a billing dashboard for clinics that ingests only invoice metadata, an HR tool sold to hospitals), confirm with counsel before assuming HIPAA does not apply. The minimum necessary standard often pulls in tools founders did not expect.

If you operate in California, Texas, New York, or Washington, check state-level overlays. CMIA, the Texas Medical Records Privacy Act, SHIELD, and Washington's My Health My Data Act all add requirements on top of HIPAA, sometimes with broader definitions that catch wellness apps HIPAA does not.

Where Cadence engineers fit

Most HIPAA prep work is not glamorous. It is auditing every sub-processor, writing 14 policies, wiring CloudTrail to S3 Object Lock, and refactoring three years of analytics events that quietly captured PHI. If your team is heads-down on product, this is the kind of project that slips for two quarters until a customer demands evidence.

A senior engineer ($1,500/week) on Cadence can typically own the technical scope (encryption audit, IAM cleanup, log pipeline, BAA inventory, policy drafting) end to end in 6 to 10 weeks. Every Cadence engineer is AI-native by default, which matters here because the bulk of the work is reading existing code, generating evidence artifacts, and drafting policy boilerplate, all tasks where Claude and Cursor accelerate output 3 to 5x. The matching algorithm scores 12,800 engineers in 80ms and includes specialists who have shipped HIPAA programs at health-tech startups before. Median time to first commit is 27 hours after booking.

The honest call: if you have a CTO who wants to learn HIPAA deeply, do it in-house. If you need it shipped before a Q3 customer kickoff, audit your stack with Cadence's ship-or-skip tool and then book a senior who has done it before.

Try it

Run your current setup through the ship-or-skip stack audit for an honest grade, then book a senior on a 48-hour free trial if you need an engineer to own the HIPAA buildout. Weekly billing, replace any week, no notice period.

If you also need to think through which controls to build versus which compliance tool to buy, the build-buy-book decision tool gives a 60-second recommendation.

FAQ

How long does a HIPAA audit take?

If OCR opens an investigation, the typical timeline is 6 to 18 months from initial document request to resolution. The intense engineering response window is the first 30 to 60 days, when you have to produce risk analysis, BAAs, access logs, and incident response evidence. A pre-emptive third-party HIPAA audit (often required by enterprise customers) takes 4 to 8 weeks if your evidence is already collected.

What is the difference between HIPAA and SOC 2 prep?

SOC 2 is a voluntary attestation focused on broad security, availability, and confidentiality. HIPAA is federal law specific to PHI. Roughly 60% of controls overlap (MFA, encryption, logging, access reviews). HIPAA adds: signed BAAs with every sub-processor, 6-year log retention (vs SOC 2's typical 1 year), 60-day breach notification, and PHI data flow mapping. Run both in parallel to save 30 to 40% of the work.

Do I need a HIPAA certification?

No. There is no government-issued HIPAA certification. What exists: third-party attestations (HITRUST CSF, sometimes called "HIPAA certification" colloquially) and the BAA itself, which is the legal document binding you to compliance. Most enterprise health customers ask for a SOC 2 Type II report plus a HIPAA risk analysis plus a signed BAA. HITRUST is heavier and only worth pursuing if a major payer or hospital system explicitly requires it.

What happens if I have a breach during prep?

The breach notification rule applies the moment you handle PHI under a BAA, regardless of audit status. You must notify the covered entity within 60 days. If the breach affects 500 or more individuals, HHS gets notified within 60 days and the breach goes on the public OCR Wall of Shame. Penalties scale with whether the breach was due to "willful neglect" (the most expensive category, up to $1.9M per violation category in 2026) versus "reasonable cause" or "unknowing".

Can a small team realistically do this without a compliance tool?

Yes, but expect 50% more engineer hours. Vanta, Drata, and Secureframe automate evidence collection, policy templates, and continuous monitoring. Without them, you are manually screenshotting MFA settings, copying IAM policies into a doc folder, and tracking access reviews in a spreadsheet. For a team under 10 engineers spending less than 20% of headcount on compliance work, the $10,000 to $15,000/yr tool cost pays back in saved hours within the first quarter.
