May 7, 2026 · 9 min read · Cadence Editorial

Neon Postgres review in 2026

Photo by [Brett Sayles](https://www.pexels.com/@brett-sayles) on [Pexels](https://www.pexels.com/photo/web-banner-with-online-information-on-computer-3803517/)


Neon in 2026 is the right serverless Postgres host for product teams that ship preview environments per pull request, run spiky or intermittent workloads, or need pgvector close to their app. It is the wrong pick for sustained high-throughput OLTP, complex replication topologies, or regulated workloads that require dedicated tenancy. Verdict: worth it for most product startups, with real caveats.

This is a Neon-only deep review. If you want the head-to-head with Supabase, that is a separate post.

The verdict in one paragraph

Neon has matured. The product separates Postgres compute from storage, gives you instant copy-on-write branches, and scales to zero when idle. After Databricks acquired Neon in May 2025, storage pricing fell 80% (from $1.75 to $0.35 per GB-month) and compute fell 15-25%. Cold starts now land in the 300-800ms range with a 500ms median, which is short enough that most apps never notice. If your workload looks like a typical SaaS or AI product, Neon is the cheapest and most pleasant Postgres you can run in 2026.

What Neon actually is (and isn't)

Neon is a managed Postgres service that runs an unmodified Postgres binary on top of a custom storage engine. The storage layer holds your data as a log of pages, and any compute node can attach to a point in that log. Compute scales independently from storage, which is the trick that makes branching cheap and scale-to-zero possible.

It is not a fork of Postgres. You get the real psql shell, the real wire protocol, real EXPLAIN plans. Your ORM, your migration tool, your pg_dump backup script all work without changes.

What it doesn't try to be: a multi-region active-active database, a high-throughput OLTP engine like Aurora, or a Postgres-flavored OLAP warehouse. If you want any of those, Neon is the wrong tool.

Branching is the headline feature

Branches are the reason most teams pick Neon, and rightly so. A branch in Neon is a metadata pointer at a specific point in your write-ahead log. Creating one is an O(1) operation that completes in under a second regardless of database size. Pages are only copied when the branch writes to them (copy-on-write), so a branch costs you almost nothing until it actively diverges.
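The copy-on-write idea is easier to see in a toy model. This sketch is an illustration of the concept, not Neon's actual storage engine; the `Branch` class and page-map layout are invented for the example:

```python
# Toy model of copy-on-write branching: a branch is a pointer to a
# parent plus a small map of only the pages it has overwritten.

class Branch:
    def __init__(self, parent=None):
        self.parent = parent   # branch we forked from (None for main)
        self.pages = {}        # only pages *this* branch has written

    def create_branch(self):
        # O(1): no data is copied, we just record the parent pointer.
        return Branch(parent=self)

    def write(self, page_id, data):
        # Copy-on-write: the page materializes in this branch only
        # when it is first written here.
        self.pages[page_id] = data

    def read(self, page_id):
        # Walk up the parent chain until some ancestor has the page.
        node = self
        while node is not None:
            if page_id in node.pages:
                return node.pages[page_id]
            node = node.parent
        raise KeyError(page_id)

main = Branch()
main.write("page-1", "orders v1")
preview = main.create_branch()                 # instant, size-independent
assert preview.read("page-1") == "orders v1"   # reads fall through to main
preview.write("page-1", "orders v2")           # diverges only on write
assert main.read("page-1") == "orders v1"      # main is untouched
```

The branch costs one dict and one pointer until it writes, which is the property that makes per-PR preview databases effectively free.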

The practical workflow this enables: every pull request gets its own preview Postgres branch, seeded from production, isolated from teammates, gone when the PR closes. The Neon GitHub Action handles the create-and-destroy cycle for you. Engineers stop fighting over a shared staging database. Migration tests run against real data shapes. Demos go up against fresh seeds in seconds, not hours.

If you have ever spent an afternoon debugging why two engineers' migrations corrupted the same staging database, you already understand the value. The standard workaround (a separate RDS instance per environment) costs hundreds of dollars a month per branch. Neon makes the same workflow effectively free.

Autoscale and cold starts in 2026

Neon's compute scales automatically from 0.25 CU up to your tier's ceiling, then suspends after 5 minutes of idle by default. This is the feature that produces the marketing line "scale to zero," and in 2026 it actually works.

The honest cold-start numbers, after Neon's 2025 optimization push:

| Scenario | Cold start | Time to first query |
| --- | --- | --- |
| Branch on Launch tier (small) | 300-500ms | 500-700ms |
| Branch on Scale tier (medium) | 400-700ms | 700-900ms |
| Database with large pgvector index | 500-800ms | 800ms-1.2s |
| Aurora Serverless v2 (for context) | 10-15s | 12-18s |

For a web app or API that gets a request every few minutes, that 500ms is paid by the first user after idle. Most apps already eat that cost on cold Lambda boots. If your app needs to keep p99 below 100ms on the first request after idle, disable scale-to-zero (you pay for an always-on compute, but you skip the cold start). For background workers, AI agents, or any system where 500ms is invisible, leave it on and pocket the savings.
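If you keep scale-to-zero on, the one defensive pattern worth having is a brief retry around the first query after idle. A minimal sketch, assuming your driver surfaces the wake-up failure as a connection error; `run_query`, the exception type, and the backoff timings are placeholders, not Neon-specific behavior:

```python
import time

def with_cold_start_retry(run_query, retries=2, backoff=0.5):
    """Run a query, retrying briefly in case the first attempt hits a
    compute still waking from scale-to-zero. `run_query` is any
    zero-argument callable."""
    for attempt in range(retries + 1):
        try:
            return run_query()
        except ConnectionError:
            if attempt == retries:
                raise
            time.sleep(backoff * (attempt + 1))  # 0.5s, then 1.0s, ...

# Simulated cold start: first call fails, second succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("compute waking up")
    return [("ok",)]

assert with_cold_start_retry(flaky_query) == [("ok",)]
```

Two retries with sub-second backoff comfortably covers the 300-800ms wake-up window without masking real outages.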

Pricing at three real scales

Neon's pricing is unusually clean. There are three plans most teams ever touch:

| Plan | Compute | Storage | Minimum | Best for |
| --- | --- | --- | --- | --- |
| Free | 100 CU-hours | 0.5 GB | $0 | Side projects, prototypes |
| Launch | $0.106/CU-hour | $0.35/GB-month | $5/mo | Most early-stage SaaS |
| Scale | $0.222/CU-hour | $0.35/GB-month | $69/mo | Production with branches and uptime SLAs |

What this looks like in practice:

Solo founder with a side project. Free tier is real. 100 CU-hours of compute is enough to run a side project that gets a few thousand visits a month. You will outgrow the 0.5 GB storage limit before you outgrow the compute. Estimated total: $0.

Growing SaaS, 10,000 monthly active users. Launch tier with a Postgres database under 10 GB and four engineers running PR-preview branches. Compute averages around 300 CU-hours/month (about $32) plus storage at $3.50. Add a couple of always-on read replicas for analytics and you land around $50-80/month total. A comparable RDS setup with separate dev/staging instances runs $300+.

High-traffic app, 100,000 MAUs. Scale tier, 50 GB database, autoscale ceiling at 4 CU. Compute around 800 CU-hours/month ($178), storage $17.50, plus extras for read replicas and longer point-in-time recovery. Total $250-400. Still cheaper than equivalent Aurora, but the gap narrows.

If your workload is sustained at 4 CU or higher 24/7, do the math: a provisioned RDS instance starts winning around the 5,000-10,000 sustained-CPU-second-per-hour mark. Neon is built for spiky workloads, not flat ones.
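The math above is simple enough to sketch. This estimator uses only the rates from the pricing table; it does not model extras like read replicas or longer point-in-time recovery, and it assumes the plan minimum acts as a monthly floor (a simplification, not a confirmed billing rule):

```python
def neon_monthly_cost(cu_hours, storage_gb, plan="launch"):
    """Rough Neon monthly bill from the posted Launch/Scale rates.
    (per-CU-hour rate, monthly minimum) per plan; storage is
    $0.35/GB-month on both."""
    rates = {"launch": (0.106, 5.0), "scale": (0.222, 69.0)}
    per_cu_hour, minimum = rates[plan]
    metered = cu_hours * per_cu_hour + storage_gb * 0.35
    return max(metered, minimum)

# The two worked examples from the text:
saas = neon_monthly_cost(300, 10, plan="launch")  # ~$32 compute + $3.50 storage
big = neon_monthly_cost(800, 50, plan="scale")    # ~$178 compute + $17.50 storage
print(round(saas, 2), round(big, 2))
```

Plug in your own averaged CU-hours before and after a traffic spike; if the metered number stays flat month over month, you are in provisioned-instance territory.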

Postgres compatibility, the honest version

Neon runs real Postgres, but it is a managed service, which means a few sharp edges. Here is what you actually get and don't get.

What works: all the common extensions, including pgvector for AI embeddings, PostGIS, pg_stat_statements, full-text search, JSON operators, generated columns, partitions, foreign data wrappers, common ORMs (Prisma, Drizzle, SQLAlchemy, ActiveRecord). Neon ships 80+ extensions out of the box. If your stack worked on RDS, it works on Neon.

What's awkward: logical replication is supported but configured at the project level, not as freely as on a self-hosted instance. You don't get superuser, so any extension that needs raw filesystem access is out. Custom C extensions you compile yourself are out. pg_cron works but only one schedule per project. If you need multi-tenant Postgres extensions like Citus, Neon is not your home.

What's missing: dedicated single-tenant compute is enterprise-only. HIPAA BAAs are enterprise-only. The free tier doesn't get point-in-time recovery beyond 24 hours, and Launch caps you at 7 days; you need Scale or higher for 30 days. If you are in a regulated space, budget for the enterprise plan or pick a different host.
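Since pgvector is on the works list above, here is what a k-NN lookup typically looks like from application code. A sketch: the `<=>` cosine-distance operator and the `[x,y,z]` vector literal are standard pgvector, but the `documents` table and `embedding` column are hypothetical names, and you would pass the literal as a bound parameter through your driver:

```python
def vector_literal(embedding):
    """Format a Python list as a pgvector input literal, e.g. '[0.1,0.2]'."""
    return "[" + ",".join(repr(float(x)) for x in embedding) + "]"

def nearest_neighbors_sql(table, column, k):
    """k-NN query using pgvector's cosine-distance operator (`<=>`).
    Table and column names here are hypothetical placeholders."""
    return (
        f"SELECT id FROM {table} "
        f"ORDER BY {column} <=> %s::vector LIMIT {int(k)}"
    )

sql = nearest_neighbors_sql("documents", "embedding", 5)
print(sql)
print(vector_literal([0.1, 0.25, 0.3]))
```

With an HNSW index on the embedding column, this shape of query is what Neon's AI-app customers run against branches seeded from production.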

For honest reads on the broader landscape, the Planetscale review after 2 years in production is a good comparison point if you can stomach MySQL.

What changed after the Databricks acquisition

Databricks acquired Neon in May 2025 for around $1 billion. A year later, here's the actual delta.

Things that got better. The price cuts above are real. Storage went from $1.75 to $0.35 per GB-month, an 80% drop. Compute fell 15-25% across all tiers. The free plan doubled its compute allowance from 50 to 100 CU-hours. Neon also leaned hard into AI agents as a customer segment; by late 2025, over 80% of new databases on the platform were provisioned by AI agents, not humans, which influenced the API design (faster project creation, fewer permission prompts, branch-on-spawn workflows).

Things that stayed the same. The wire protocol, the CLI, the open-source storage engine. Neon kept its OSS roots; the storage code is still on GitHub. Existing customers' connection strings did not change. Migration paths from RDS or Supabase still work the same way.

Things to watch. Databricks owns Neon now, which means roadmap priorities will tilt toward the data + AI workload patterns Databricks cares about. If you are a non-AI SaaS that just wants boring durable Postgres, you are not the priority customer anymore. The product is unlikely to regress, but expect the marketing and the headline features to skew toward agentic workloads through 2027.

When Neon breaks

Every tool has failure modes. Neon's are predictable.

  • High-throughput OLTP at sustained load. If you push more than ~5,000 transactions per second 24/7, the autoscale latency on bursts and the per-tier compute caps start hurting. You want a provisioned database at that scale.
  • Long-running analytical queries. Queries that scan tens of GB and run for minutes will time out or get suspended. Neon is not a warehouse. Use the Databricks side of the house, or BigQuery, or DuckDB.
  • Complex replication topologies. If you need active-active multi-region writes, or you are building a globally distributed app where every region needs local writes, Neon is not it. Look at CockroachDB or Spanner.
  • Strict regulated workloads. HIPAA, PCI Level 1, FedRAMP all require enterprise plans. If you are bootstrapping a healthcare product, the cost jumps fast.
  • Spiky writes against a hot row. Branching helps for reads, not writes; if your write contention is on a single page, branches don't save you.

The pattern: Neon shines on read-heavy, branchy, intermittent, AI-flavored workloads. It struggles on flat, write-heavy, low-latency-on-cold-start workloads.

Who should pick Neon (and who shouldn't)

| Workload | Neon fit | Why |
| --- | --- | --- |
| AI app with embeddings | Strong | pgvector + branching + scale-to-zero |
| B2B SaaS, mid traffic | Strong | branching per PR, predictable Launch tier costs |
| Internal tool / dashboard | Strong | scale-to-zero saves money on dormant tools |
| Agency client work | Strong | per-client branch, easy handoff |
| High-throughput OLTP (>5k TPS sustained) | Weak | compute caps, autoscale lag on bursts |
| Regulated (HIPAA strict, PCI Level 1) | Weak | shared infra, BAA only on enterprise |
| Multi-region active-active | Weak | not the architecture |

If your team needs hands to build the AI feature on top of Neon, that is the kind of work Cadence sends engineers for routinely. Every Cadence engineer is AI-native by baseline (vetted on Cursor, Claude Code, and Copilot fluency before they unlock bookings), and the platform's 12,800-engineer pool ships a 27-hour median time to first commit. Senior tier ($1,500/week) is the right slot for an engineer who can land schema design, pgvector indexing, and scale-to-zero tradeoffs without hand-holding.

If a tool review like this leaves you wondering whether your stack is right, the Claude Code review covers the AI assistant side, and the Datadog review for SaaS observability covers the monitoring layer Neon doesn't include.

What to do next

If you are starting a new project, default to Neon's Launch tier. The branching workflow alone pays for the migration, and the price floor is low enough that you can fail cheaply. Wire up the GitHub Action for preview branches in the first week; you will not go back to a shared staging DB.

If you are on RDS or Supabase and your bill is climbing without a clear reason, run the math on Neon for one service first. Migrate a non-critical app, measure cold-start impact for a week, and decide. Most teams who try this end up with both, then consolidate after a quarter.
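When the measurement week is over, the decision comes down to percentiles. A small sketch for crunching the recorded first-query-after-idle latencies; the sample numbers and the 1.5s threshold are hypothetical, not a recommendation:

```python
def percentile(samples, p):
    """Nearest-rank percentile over recorded latencies (milliseconds)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# A week of hypothetical first-query-after-idle latencies, in ms.
latencies = [420, 510, 480, 650, 390, 700, 530, 460, 580, 1100]
p50, p99 = percentile(latencies, 50), percentile(latencies, 99)
keep_scale_to_zero = p99 <= 1500  # pick a threshold your product can absorb
print(p50, p99, keep_scale_to_zero)
```

If the p99 clears your budget, leave scale-to-zero on and keep the savings; if not, pin an always-on compute for that one service and re-run the math.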

If you are auditing your stack and wondering whether Neon, Supabase, or RDS is the right pick for your shape of workload, audit your tooling with Cadence's Ship-or-Skip tool. It grades each layer of your stack against your actual scale and team, with no upsell.

FAQ

Is Neon Postgres production ready in 2026?

Yes for web apps, SaaS, and AI products with moderate or spiky traffic. Cold starts in the 300-800ms range, scale-to-zero, and 80+ Postgres extensions cover most product workloads. Skip Neon for sustained high-throughput OLTP or strict regulated workloads.

What changed after Databricks acquired Neon?

Storage prices fell 80%, compute fell 15-25%, and the free plan doubled its compute. The wire protocol, the API, and the OSS storage engine did not change. Existing connection strings still work.

How fast are Neon cold starts?

Typical cold starts run 300-800ms with a 500ms median in 2026. That compares to 10-15 seconds for Aurora Serverless v2. Disable scale-to-zero if your p99 budget on the first request after idle is below 100ms.

Does Neon support pgvector?

Yes, pgvector is first-class. Neon ships 80+ Postgres extensions including PostGIS, pg_stat_statements, and pgvector with HNSW indexing. AI app teams are the largest growing customer segment on the platform.

When should I not use Neon?

Skip Neon for sustained high-throughput OLTP (above ~5,000 TPS), long-running analytical queries, multi-region active-active writes, and strict regulated workloads (HIPAA, PCI Level 1) unless you are on the enterprise plan. Provisioned RDS, Aurora, or CockroachDB are better fits for those shapes.
