
Migrating from MySQL to PostgreSQL in 2026 typically costs $2,000 to $24,000 in engineer time depending on database size and application coupling. A sub-100GB database with a clean ORM costs 2 to 4 engineer-weeks. A 100GB to 1TB database with raw SQL spread across services costs 6 to 12 weeks. Anything over 1TB with live cutover requirements lands at 12+ weeks.
The migration cost is almost never the database itself. It's schema translation, application code rewrites, and the live cutover dance. Tooling (pgloader, AWS DMS, Debezium) handles the bytes. Engineers handle everything around them.
Three variables decide whether your migration takes 2 weeks or 4 months:
1. **Database size.** It sets the bulk-load and validation time, and whether a maintenance window is realistic.
2. **Application coupling.** Raw SQL with MySQL-only syntax (ON DUPLICATE KEY UPDATE, GROUP_CONCAT, STR_TO_DATE, backticks) and case-insensitive identifier quirks all need code-level fixes.
3. **Cutover requirements.** A short maintenance window is cheap; zero downtime means CDC, dual writes, and weeks of extra work.

Most teams underestimate the second variable. The schema diff is mechanical. Hunting down every ON DUPLICATE KEY UPDATE across a 5-year codebase is not.
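The hunt is mostly mechanical rewrites like the one below: MySQL's upsert has a direct Postgres equivalent, but the conflict target must be named explicitly. A minimal sketch, using a hypothetical visits table:

```sql
-- MySQL: upsert keyed implicitly on the table's unique index
INSERT INTO visits (page_id, hits)
VALUES (42, 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;

-- Postgres: ON CONFLICT must name the unique column explicitly
INSERT INTO visits (page_id, hits)
VALUES (42, 1)
ON CONFLICT (page_id) DO UPDATE SET hits = visits.hits + 1;
```

Each instance is a five-minute fix. Finding all of them across services is what consumes the weeks.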
Postgres and MySQL look similar at first glance. They diverge in a dozen small ways that all bite you on cutover day.
| MySQL pattern | Postgres equivalent | Notes |
|---|---|---|
| AUTO_INCREMENT | GENERATED ALWAYS AS IDENTITY or SERIAL | Sequences are first-class objects in Postgres |
| TINYINT(1) for booleans | BOOLEAN | True boolean type, not a 0/1 integer |
| DATETIME | TIMESTAMP or TIMESTAMPTZ | Postgres has real timezone-aware timestamps |
| JSON | JSONB | JSONB is binary, indexed, and queryable; almost always what you want |
| ENUM('a','b') | CREATE TYPE ... AS ENUM or CHECK constraint | Postgres enums are types, not column-level lists (example below) |
| utf8mb4 charset | UTF-8 by default | Postgres skips the MySQL utf8 vs utf8mb4 trap |
| ON UPDATE CURRENT_TIMESTAMP | Trigger or app-level | Postgres has no equivalent column attribute |
| Backticks for identifiers | Double quotes | Case sensitivity rules also flip |
| LIMIT 10, 20 (offset, count) | LIMIT 20 OFFSET 10 | Argument order is reversed |
| IFNULL(a,b) | COALESCE(a,b) | Different function name |
| GROUP_CONCAT | STRING_AGG | Same idea, different signature |
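The enum row deserves a concrete look, because the shapes differ more than the table suggests. A sketch with a hypothetical subscriptions table:

```sql
-- Postgres: the enum is a standalone type, reusable across tables
CREATE TYPE subscription_status AS ENUM ('active', 'paused', 'cancelled');

CREATE TABLE subscriptions (
    id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    status subscription_status NOT NULL DEFAULT 'active'
);

-- Alternative: a CHECK constraint, which is easier to change later
-- status text NOT NULL CHECK (status IN ('active', 'paused', 'cancelled'))
```

Adding a value later is `ALTER TYPE subscription_status ADD VALUE 'trial'` for the enum, versus a plain constraint swap for the CHECK version.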
The biggest unforced error: assuming JSON and JSONB are interchangeable. Postgres JSONB is the reason most teams want to migrate. It supports GIN indexes, path operators (->>), and containment queries (@>). Use it.
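What that looks like in practice, as a minimal sketch with a hypothetical events table:

```sql
CREATE TABLE events (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb NOT NULL
);

-- GIN index: makes containment queries (@>) index-backed
CREATE INDEX events_payload_idx ON events USING GIN (payload);

-- Containment: rows whose payload includes this key/value pair
SELECT id FROM events WHERE payload @> '{"type": "signup"}';

-- Path extraction: ->> returns text, so it composes with normal predicates
SELECT payload->>'user_id' FROM events WHERE payload->>'type' = 'signup';
```

None of this works on a plain JSON column; if pgloader or DMS maps MySQL JSON to Postgres JSON, change the column type to JSONB during the migration.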
The second biggest: ignoring ON UPDATE CURRENT_TIMESTAMP. MySQL gives you a free "updated_at" auto-update. Postgres makes you write a trigger or set the column in your application's update statement. Forgetting this leads to silently stale updated_at values for months.
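The trigger version is about ten lines. A sketch, assuming the column is named updated_at and a placeholder users table:

```sql
-- One trigger function serves every table with an updated_at column
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach it per table
CREATE TRIGGER users_set_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION set_updated_at();
```

Write it once during schema translation and attach it to every table that had the MySQL column attribute.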
The tooling has consolidated. Most teams pick one of three paths:

- pgloader for one-shot loads behind a maintenance window
- AWS DMS for managed change data capture if you're on AWS
- Debezium for a self-managed CDC pipeline you keep after the migration
For most teams under 1TB with a maintenance window, pgloader plus a 1-hour cutover is the right answer. For zero-downtime over 1TB, AWS DMS if you're on AWS, Debezium if you're not.
The schema translation is one engineer-day. The application changes are everything else. Common gotchas:
- **Case sensitivity.** MySQL compares strings case-insensitively by default and, depending on platform, treats table names loosely; Postgres folds unquoted identifiers to lowercase, treats quoted User and user as different identifiers, and compares strings case-sensitively. Every unquoted table name, column name, and string comparison needs an audit.
- **GROUP BY strictness.** MySQL lets you SELECT a, b FROM t GROUP BY a and silently picks a value for b. Postgres rejects this query. Every non-aggregated column needs to appear in GROUP BY or be wrapped in an aggregate (sketched below).
- **Zero dates.** MySQL's '0000-00-00' has no Postgres equivalent; migration tools load it as NULL, so any query that filters on the sentinel value instead of IS NULL will silently break.
- **Function renames.** GROUP_CONCAT, IFNULL, STR_TO_DATE, FROM_UNIXTIME, DATE_FORMAT: all have Postgres equivalents with different names and slightly different behavior.
- **Reserved words.** user, order, and group are reserved in Postgres. If you have a users.user column, every query needs quoting.

If your codebase uses an ORM (Prisma, Drizzle, SQLAlchemy), most of this is handled. The work shrinks to raw queries, custom database functions, and migration scripts. If your codebase has raw SQL strings spread across services, budget 2x to 3x the schema translation time for application changes.
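The GROUP BY gotcha in concrete form, as a sketch with a hypothetical users table:

```sql
-- MySQL (without ONLY_FULL_GROUP_BY) accepts this and picks an arbitrary
-- email per signup date; Postgres rejects it with an error
SELECT signup_date, email FROM users GROUP BY signup_date;

-- Postgres-safe rewrites: aggregate the extra column, or group by it too
SELECT signup_date, min(email) FROM users GROUP BY signup_date;
SELECT signup_date, email FROM users GROUP BY signup_date, email;
```

Which rewrite is correct depends on what the original query actually meant, which is why these fixes can't be fully automated.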
Zero-downtime migration is the most expensive part. The shape:

1. Translate the schema and stand up the target Postgres instance.
2. Bulk-load a snapshot of the data (pgloader, or a DMS full load).
3. Stream ongoing changes with CDC (AWS DMS or Debezium) until Postgres catches up.
4. Dual-write from the application and shadow-read to diff the two databases.
5. Cut over reads, then writes; keep MySQL warm for rollback.
This dance adds 4 to 8 weeks. Most teams don't need it; if you can take a 2-hour Sunday-morning window, skip everything except steps 1, 2, and 5.
Three checks that catch 95% of bugs:
- **Row-level checksums.** `SELECT md5(string_agg(t::text, '')) FROM (SELECT * FROM table ORDER BY id) t`. Run on both sides, compare. Slow on large tables; sample 10% if needed. (Dialect-specific versions are sketched after this list.)
- **A full test-suite run against Postgres.** A query that fails here is usually a GROUP BY strictness issue.
- **A 24-hour shadow-read diff.** Serve reads from both databases and compare the results before declaring victory.

Skip validation and you'll discover the bug 3 weeks after cutover when a customer reports a missing record. Don't skip validation.
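The checksum check needs dialect-specific spellings: string_agg doesn't exist in MySQL, and default row rendering differs between engines, so hash an explicit column list on both sides. A sketch with a hypothetical orders table; columns that render differently across engines (decimals, dates) may need casting before the hashes can match:

```sql
-- Postgres side
SELECT md5(string_agg(concat_ws('|', id, total, status), '' ORDER BY id))
FROM orders;

-- MySQL side: GROUP_CONCAT stands in for string_agg.
-- Raise group_concat_max_len first or the result silently truncates.
SET SESSION group_concat_max_len = 18446744073709551615;
SELECT md5(GROUP_CONCAT(CONCAT_WS('|', id, total, status) ORDER BY id SEPARATOR ''))
FROM orders;
```

A mismatch tells you a table diverged, not which row; follow up with a keyed diff on that table only.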
Real engineer time, real cost, real trade-offs.
| Approach | Cost | Timeline | Pros | Cons |
|---|---|---|---|---|
| In-house senior engineer | $8,000–$24,000 (loaded cost) | 4–12 weeks | Owns the system long-term; full context | Senior engineers are expensive; opportunity cost on roadmap |
| US dev agency | $30,000–$120,000 | 6–16 weeks | Process and accountability | Slow ramp-up; high markup; weak ownership post-handoff |
| Toptal contractor | $10,000–$40,000 | 4–12 weeks | Vetted senior talent; flexible | Hourly billing; slow vetting (1–2 weeks to start) |
| AWS Professional Services | $50,000–$200,000 | 8–20 weeks | Deep AWS DMS expertise; enterprise sign-off | Expensive; AWS-only solution path |
| Upwork freelancer | $1,500–$10,000 | 3–10 weeks | Cheap; fast to hire | Quality variance is high; rollback discipline often missing |
| Cadence | $1,500–$8,000 (1–4 weeks at senior tier) | 48-hour trial, ship in 2–8 weeks | AI-native engineers (Cursor, Claude Code daily); weekly billing; replace any week | Less suited to enterprise procurement gates |
The math: a senior Cadence engineer at $1,500/week, working 4 weeks on a 200GB migration, costs $6,000 total. The same migration at a US agency typically lands at $50,000+. The agency markup pays for project management, not migration speed.
The honest case for migrating, and the honest case against, both come down to signals you can observe today.
The strongest signal you should migrate: you've worked around MySQL three times in six months. JSON queries are slow. You're building tenant isolation in application code because MySQL has no row-level security. You want pgvector for a new AI feature.
The strongest signal you shouldn't: nothing has actually broken. If your team has experience tuning MySQL and your app sits behind an ORM, the marginal benefit of switching is small.
For a deeper view on the trade-offs of moving infrastructure between providers, our analysis on the cost to migrate Firebase to Supabase covers the same dual-write and cutover patterns in a different stack. If your migration also involves changing hosting providers, the cost to migrate Heroku to AWS breakdown covers the parallel engineering effort.
Five tactics that compress timelines without skipping validation:
- Take the maintenance window if you can get one; skipping the live cutover saves 4 to 8 weeks.
- Start with pgloader; reach for DMS or Debezium only if you genuinely need continuous replication.
- Lean on the ORM: move raw SQL strings behind it before the cutover so the dialect switch becomes configuration.
- Sample checksums at 10% on large tables instead of hashing every row.
- Run the test suite against Postgres in CI from day one; a failing test catches a GROUP BY issue or a LIMIT/OFFSET swap. Cheap insurance.

If you're considering a fresh stack altogether rather than migrating, our PlanetScale vs Neon comparison covers when each makes sense for new projects. And if your real complaint is query performance, you might fix that without a migration; see our piece on how to optimize Postgres queries or apply the same techniques to MySQL first.
Three steps:
1. grep for raw SQL strings.
2. Count MySQL-specific functions.
3. Estimate ORM coverage.

This 1-day audit tells you whether you're a 2-week migration or a 12-week one.

Most migrations stall on engineer availability, not technical complexity. A senior with no other context ships a 200GB migration in 3 to 4 weeks. The same migration takes 12 weeks as a side project.
If you're staring at a 6-month migration backlog with no spare senior engineer, book a senior on Cadence for the cutover work. 48-hour free trial, $1,500/week, replace any week. Most database migrations finish in under 8 weeks at the senior tier.
Sub-100GB databases with ORM coverage take 2 to 4 engineer-weeks. 100GB to 1TB databases with mixed raw SQL take 6 to 12 weeks. Over 1TB with zero-downtime requirements takes 12+ weeks. The database size matters less than how coupled your application code is to MySQL-specific syntax.
Yes, with AWS DMS or Debezium handling change data capture and a dual-write strategy in your application. This adds 4 to 8 weeks to the project. If you can take a 2-hour maintenance window, skip the live cutover entirely; you save weeks of work and most of the risk.
Silent data corruption from GROUP BY strictness differences, case-sensitivity flips, or ON UPDATE CURRENT_TIMESTAMP columns that no longer auto-update. None of these throw errors at cutover. They surface weeks later as missing data or stale timestamps. Run row-level checksums and a 24-hour shadow-read diff before declaring victory.
pgloader for one-shot migrations under 500GB with a maintenance window. AWS DMS if both your source and target live in AWS RDS and you need continuous replication. Debezium if you need a permanent CDC pipeline (often as part of an event-streaming architecture you'll keep after migration). Most teams should start with pgloader.
Probably not. Migrate when you have a concrete reason: you need JSONB query performance, row-level security for multi-tenant isolation, pgvector for AI features, or an extension MySQL doesn't have. Don't migrate because Postgres is fashionable; the engineering cost is real and the marginal benefit is small for a working stack.
Mostly. Prisma, Drizzle, SQLAlchemy, and Django ORM emit dialect-specific SQL and handle 90% of the translation. The remaining 10% is raw SQL strings in your codebase, custom database functions, and migration scripts written for MySQL syntax. Plan for at least a week of grep-and-fix work even with a clean ORM setup.