May 8, 2026 · 9 min read · Cadence Editorial

Sentry review for error tracking

Photo by [Markus Spiske](https://www.pexels.com/@markusspiske) on [Pexels](https://www.pexels.com/photo/laptop-screen-in-close-up-shot-8247921/)


Sentry is the right error-tracking pick in 2026 for almost every web and mobile stack under 5 million errors per month. The SDK quality, error grouping, and the Seer AI debugger justify the price. Past that volume, or if you want full APM coverage, the per-event billing punishes you and a purpose-built tool wins.

That is the verdict. The rest of this review walks through pricing, the strongest features, the real weaknesses, and where Sentry stops being a deal.

The 2026 verdict, by team stage

Sentry is rarely a one-size answer. The right call depends on what you ship and how loud your app is.

| Team stage | Recommendation |
| --- | --- |
| Solo / pre-revenue | Stay on the free Developer plan. 5K errors covers you. |
| Seed to Series A | Team plan at $26/month plus a small PAYG budget. |
| Series B and up | Business plan at $80/month, negotiate overages above 1M errors. |
| Past 5M errors/month | Reconsider. Either harden filters or evaluate Datadog / self-host. |

If your app is genuinely quiet (under 50K errors/month) and you mostly want stack traces, you can sit on the Team plan for years without thinking about it. Most teams cross into pain because of noisy exceptions, not legitimate volume growth.

What Sentry actually is in 2026 (and what it isn't)

Sentry started as an error tracker. In 2026 it ships seven product surfaces from one SDK: errors, performance tracing, session replay, profiling (UI and continuous), cron monitoring, uptime checks, and logs. The bet is that the developer side of observability belongs in one tool.

Sentry is not a full APM. Datadog and New Relic still beat it for infrastructure metrics, log search at terabyte scale, and SRE-style dashboards. Sentry is also not a cheap commodity logger. It costs more per gigabyte than Loki or CloudWatch because it ties logs back to user-facing errors.

The right way to read Sentry is: it is a developer tool for the application layer, not an SRE tool for the infrastructure layer.

Real Sentry pricing for 2026

Three plan tiers, plus Enterprise. Every paid plan can have a PAYG (pay-as-you-go) budget on top, which is how you cover overages without a surprise bill.

| Plan | Cost | Errors | Replays | Spans | Cron | Uptime | Logs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Developer | Free | 5K | 50 | 5M | 1 | 1 | 5GB |
| Team | $26/mo | 50K | 50 | 5M | 1 | 1 | 5GB |
| Business | $80/mo | 50K | 50 | 5M | 1 | 1 | 5GB |
| Enterprise | Custom | Negotiated | Negotiated | Negotiated | Negotiated | Negotiated | Negotiated |

Both Team and Business start at the same quotas; the difference is feature depth (advanced search, custom dashboards, SAML SSO, longer history). Quotas grow only when you add a PAYG budget.

Overage prices that matter:

  • Cron monitors: $0.78 each beyond the first
  • Uptime alerts: $1.00 each beyond the first
  • UI profiling: $0.25 per hour
  • Continuous profiling: $0.0315 per hour
  • Logs: $0.50 per GB

Errors, replays, and spans are billed per event, with rates that depend on your prepaid commit. The cleanest way to estimate cost is to run a week in production with default sampling, then extrapolate.
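
If you would rather pin the sampling than rely on defaults, the rate is an explicit init option. A minimal sketch assuming the JavaScript browser SDK's v8-style API; the DSN is a placeholder and the rate is a starting point to tune after that first week:

```ts
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Keep 10% of transactions so span volume (and the bill) stays predictable,
  // then adjust once a week of production data tells you the real rate.
  tracesSampleRate: 0.1,
});
```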

Real cost examples we have seen quoted:

  • 100K errors + 10M spans + 500 replays per month: roughly $50/month
  • 2M errors + 100M spans + 10K replays per month: roughly $847/month
  • 20M errors + 500M spans + 50K replays per month: roughly $7,883/month

That last number is why teams over 10M errors/month start asking whether Sentry is still the right shape.

Where Sentry wins

Five places Sentry is genuinely the best tool, not a tie.

SDK breadth and quality. 100+ official SDKs, all maintained, all consistent. The JavaScript SDK handles React, Next.js, Remix, Vue, Svelte, and React Native with first-class integrations. The Python SDK has Django, Flask, FastAPI, and Celery patches that capture context the way you would expect. Compare this to Bugsnag or Honeybadger, where you end up writing a lot of per-framework boilerplate yourself.
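
As a concrete taste of what "first-class" means here, this is roughly what the Node SDK's Express hookup looks like, assuming the v8-style API. The DSN is a placeholder, and Sentry's docs recommend running init in a separate module loaded before everything else; it is condensed into one file here:

```ts
import express from "express";
import * as Sentry from "@sentry/node";

// In a real app, call init in a file that loads before all other imports.
Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" }); // placeholder DSN

const app = express();

app.get("/boom", () => {
  throw new Error("unhandled"); // captured with request context attached
});

// Registers Sentry's Express error middleware; add it after all routes.
Sentry.setupExpressErrorHandler(app);

app.listen(3000);
```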

Error grouping intelligence. Sentry's fingerprinting collapses identical errors into a single issue even when stack traces churn (line numbers shift, variable names change, frames get added). For a noisy frontend, this is the difference between 200 issues and 20,000 alerts.
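
When the built-in grouping is not enough, you can steer it yourself. A sketch using the browser SDK's beforeSend hook to pin a custom fingerprint; the ChunkLoadError case is illustrative, not a rule to copy:

```ts
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Force all ChunkLoadErrors into one issue, however much the stack churns.
  beforeSend(event, hint) {
    const error = hint.originalException;
    if (error instanceof Error && error.name === "ChunkLoadError") {
      event.fingerprint = ["chunk-load-error"];
    }
    return event;
  },
});
```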

Seer, the AI debugger. Seer went GA in 2026. It scans open issues, assigns each an "actionability" score (roughly 1-10, internally), and for high-actionability issues it runs Autofix: pulls in the relevant source files, proposes a root cause, and opens a PR in GitHub with the patch. It is not a magic wand. It works best on bugs that have a clear stack trace, a reproducible cause, and a small blast radius. It is weak on flaky tests, race conditions, and anything cross-service. For us, the value is the triage: instead of an engineer reading 40 issues a morning, Seer flags the 5 worth looking at.

Session replay tied to errors. When a user hits an error, Sentry can show you the DOM-level replay of the 30 seconds before the crash. You see the click, the scroll, the form input that broke things. This is one feature that justifies Sentry over a pure error tracker like Honeybadger if your product has any UI complexity.
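
Replay is opt-in and sampled separately from errors. A minimal sketch assuming the v8-style browser SDK; the rates shown are a common starting point, not a recommendation:

```ts
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [Sentry.replayIntegration({ maskAllText: true })],
  replaysSessionSampleRate: 0.01, // record 1% of ordinary sessions
  replaysOnErrorSampleRate: 1.0, // always keep the replay when an error fires
});
```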

Source maps and release health. Upload source maps in CI, get unminified stack traces in production. Tag a release, get crash-free user rate per release. This is table stakes if you ship more than once a week, and Sentry's tooling here is sharp. Teams pairing Sentry with feature flag platforms like LaunchDarkly or Statsig get a tight feedback loop: ship behind a flag, watch crash-free rate, kill the flag if it dips.
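
One way to wire the source-map upload into a build, assuming a Vite project and Sentry's bundler plugin; the org and project slugs are placeholders:

```ts
// vite.config.ts
import { defineConfig } from "vite";
import { sentryVitePlugin } from "@sentry/vite-plugin";

export default defineConfig({
  build: { sourcemap: true }, // emit maps so production stack traces unminify
  plugins: [
    sentryVitePlugin({
      org: "your-org", // placeholder
      project: "your-project", // placeholder
      authToken: process.env.SENTRY_AUTH_TOKEN, // set in CI secrets
    }),
  ],
});
```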

Where Sentry breaks

Five limits worth being honest about.

Per-event billing punishes noisy apps. A single unhandled promise rejection in a third-party script can fire millions of times before you catch it. Sentry deduplicates into one issue but still bills per event. Set up ignoreErrors and inbound filters before you turn on production.
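
A starting-point sketch of those filters in the browser SDK; the patterns shown are common noise sources, and you should replace them with whatever your own first week of data surfaces:

```ts
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Drop known-noisy exceptions client-side, before they count against quota.
  ignoreErrors: [
    "ResizeObserver loop limit exceeded",
    /Non-Error promise rejection captured/,
  ],
  // Ignore errors raised by scripts you do not control.
  denyUrls: [/extensions\//, /^chrome:\/\//, /connect\.facebook\.net/],
});
```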

Scaling beyond 5M errors/month gets expensive. The Business plan's overage rate is 2-3x higher than Team for the same volume of error events. At 20M+ errors a month you will negotiate Enterprise terms, and the math starts to compete with Datadog's flat-fee logic.

Sentry as full APM is a stretch. The performance tracing is good for finding the 95th-percentile slow endpoint inside one service. It is weak for distributed traces across 30 microservices, and the dashboards are not as configurable as Datadog APM. If you already pay for Datadog, do not add Sentry tracing on top; pick one.
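
For that single-service case, custom spans are one call. A sketch assuming the v8-style Node SDK; the data-access stub is hypothetical:

```ts
import * as Sentry from "@sentry/node";

// Hypothetical data-access stub, just for illustration.
const db = { carts: { findByUser: async (id: string) => ({ id, items: [] }) } };

async function loadCart(userId: string) {
  // Wrap the suspect query in a custom span so it shows up in the
  // endpoint's trace waterfall.
  return Sentry.startSpan({ op: "db.query", name: "load cart" }, () =>
    db.carts.findByUser(userId)
  );
}
```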

Logs are still catching up. Sentry Logs shipped in 2025 and is improving fast, but for raw log search at high volume, Loki, Datadog Logs, or CloudWatch are more mature. Use Sentry Logs for logs you want correlated to issues, not for everything.

Self-host has hidden ops cost. More on this below.

Sentry vs the alternatives in one table

| Tool | Best for | Price entry | Where it wins | Where it loses |
| --- | --- | --- | --- | --- |
| Sentry | Web/mobile teams that care about errors first | Free, then $26/mo | Best DX, Seer AI, replay tied to errors | Per-event billing past 5M errors/mo |
| Datadog | Enterprises that want one APM | ~$15/host/mo plus per-feature | Full APM, infra, logs, RUM | Cost balloons fast, complex setup |
| Honeybadger | Pure error tracking on a budget | $26/mo | Predictable pricing, simple | Thin on replay, profiling |
| PostHog | Teams wanting analytics + replay + errors | Free + usage | Replay, experiments, errors in one | Errors are newer, less mature |

If you want a wider comparison across the category, our best error tracking tools roundup covers ten options, including Bugsnag, Rollbar, and Highlight.io.

Self-hosted Sentry and the FSL license

Sentry's source is available on GitHub under the Functional Source License (FSL). FSL is "fair source," not OSI-approved open source. You can deploy it inside your enterprise, fork it, modify it. You cannot resell it as a SaaS product, and you cannot use it to build a direct Sentry competitor. Two years after each commit ships, that commit converts to Apache 2.0 automatically.

The catch: Seer and the other AI/ML features are closed source and unavailable in self-host. So you trade away the best 2026 feature for control.

The infrastructure: PostgreSQL for app state, Redis for cache, Kafka for event ingestion, ClickHouse for the event store, plus Symbolicator if you process native crashes. None of this is exotic, but all of it has to be patched, monitored, and scaled. Most teams that try self-host for cost reasons end up paying more in engineering time than they would have paid for SaaS.

Self-host makes sense in three cases: you have a hard data-residency requirement, you process more than 50M errors/month and Enterprise pricing still hurts, or you already operate ClickHouse and Kafka in production for other reasons. Outside those cases, pay for SaaS.

Who should buy Sentry, who should not

Buy Sentry if:

  • You ship a web or mobile product and care about user-facing errors.
  • Your team is under 50 engineers and you want one tool for errors, replay, and basic perf.
  • DX matters to you (it should).
  • You want AI-assisted triage that actually works on real bugs.

Skip Sentry if:

  • You already pay for Datadog or New Relic full APM. Use their error products.
  • Your error volume is wildly unpredictable and a $5K surprise bill would hurt.
  • You only need uptime and crons. Better Stack or other dedicated status-page tools cost less.
  • You operate in a market where SaaS observability is regulated out (defense, certain financial verticals).

For most startups, the answer is "yes, with a PAYG budget cap and inbound filters set on day one." Engineers who treat Sentry hygiene as a habit (not a fire drill) keep the bill predictable. On Cadence, every engineer is AI-native by default, vetted on Cursor / Claude / Copilot fluency before they unlock bookings, so wiring up Sentry filters and Seer Autofix is a normal week-one task, not a learning curve. If you want the work done without the hire, book a Senior at $1,500/week and the Sentry setup, sampling rules, and CI source-map upload will be in production by Friday.

What to do next

Three concrete moves, in order:

  1. Install the Sentry SDK on your noisiest service. Run it for a week with default sampling and watch the event volume.
  2. Add ignoreErrors, inbound filters, and a release tag in CI. This usually cuts event volume 30-60% on day one; a release-tagging sketch follows this list.
  3. Turn on Seer for your top 20 issues. Watch which Autofix PRs you would actually merge. That tells you whether to expand to your whole org.
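
The release tag from step 2 is a one-line init option once CI exposes the value. A sketch assuming the browser SDK and a SENTRY_RELEASE variable injected by your bundler (both placeholders):

```ts
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Set by CI (e.g. the git SHA) and inlined by the bundler, so
  // crash-free rate is tracked per deploy.
  release: process.env.SENTRY_RELEASE,
  environment: "production",
});
```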

If you want a second opinion on whether Sentry is the right shape for your stack right now, run a 5-minute audit on Ship or Skip and get an honest take, not a sales pitch. Free, no signup.

FAQ

Is Sentry worth the money in 2026?

Yes for any team under 5M errors per month on a web or mobile stack. The SDK quality and Seer AI debugger pay back the cost in engineer-hours saved. Past that volume, look at Datadog APM, dedicated APM tooling, or self-hosting.

Sentry vs Datadog: which should I pick?

Pick Sentry if errors are your first observability problem and you want fast time-to-value and a developer-friendly workflow. Pick Datadog if you already need full APM, infrastructure metrics, log search, and RUM in one bill. Running both is a common but expensive mistake.

Can I self-host Sentry?

Yes. The source is available under the Functional Source License (FSL) and converts to Apache 2.0 two years after each commit. You will run PostgreSQL, Redis, Kafka, and ClickHouse, and you lose the AI features (Seer is closed source). Self-host fits data-residency, ultra-high-volume, or already-on-ClickHouse teams.

How does Sentry pricing work?

Per-event quotas across errors, replays, spans, profiles, cron monitors, uptime checks, and logs. Each event type has its own price. The Team plan is $26/mo with 50K errors and 50 replays included; Business is $80/mo with the same quotas plus advanced features. A PAYG budget on top covers overages so you do not bounce off the cap.

What is Seer in Sentry?

Seer is Sentry's AI debugging agent, generally available in 2026. It scans open issues, scores actionability, runs Autofix to find root causes, opens PRs with patches in GitHub, and answers questions about your codebase. It is strong on clear stack-trace bugs, weak on race conditions and cross-service issues. Worth turning on for triage even if you do not auto-merge.
