Sentry complements observability platforms
This setup shows that error tracking and observability solve different problems, and the best teams usually buy both. Sentry is where developers sort noisy crashes into clean issue groups, inspect stack traces and breadcrumbs, and often connect an issue back to the likely code change. Datadog and New Relic are where the same team asks what happened around the error across services, logs, traces, and infrastructure, especially in larger distributed systems.
-
Sentry was built around the developer triage loop. Its SDK sits inside the app, captures exceptions automatically, groups duplicates, and sends them to a dedicated issue view with stack trace and device context. That is why teams often keep Sentry even after standardizing on a broader observability stack.
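That triage loop can be sketched in plain Python. This is an illustrative stand-in for what an in-app SDK does, not Sentry's actual implementation: intercept exceptions, capture the stack trace plus recent "breadcrumb" events, and collapse duplicates into one issue by fingerprint.

```python
import sys
import traceback
from collections import defaultdict

breadcrumbs = []            # rolling log of recent app events
issues = defaultdict(list)  # fingerprint -> list of captured events

def add_breadcrumb(message):
    breadcrumbs.append(message)
    del breadcrumbs[:-20]   # keep only the 20 most recent

def capture_exception(exc_type, exc, tb):
    frames = traceback.extract_tb(tb)
    # Fingerprint on exception type plus code location, so repeated
    # crashes from the same line collapse into one issue.
    fingerprint = (exc_type.__name__,
                   tuple((f.filename, f.name, f.lineno) for f in frames))
    issues[fingerprint].append({
        "message": str(exc),
        "stack": traceback.format_exception(exc_type, exc, tb),
        "breadcrumbs": list(breadcrumbs),
    })

sys.excepthook = capture_exception  # also catch uncaught exceptions

# The same bug fired twice becomes one issue with two events.
for attempt in range(2):
    add_breadcrumb(f"checkout attempt {attempt}")
    try:
        1 / 0
    except ZeroDivisionError:
        capture_exception(*sys.exc_info())

print(len(issues))                        # one grouped issue
print(len(next(iter(issues.values()))))   # holding two events
```

The real SDK adds device and release context to each event, which is what makes the issue view useful for triage rather than just a crash counter.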
-
Datadog and New Relic fold error tracking into a wider system map. Datadog groups errors from traces and logs using fingerprints based on service, message, and stack frames, then links back to related traces and logs. New Relic Errors Inbox groups errors from APM, browser, mobile, and serverless into one queue with logs and assignment workflows.
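The fingerprinting idea above can be sketched as a hash over service, normalized message, and top stack frames. The normalization rule and frame count here are illustrative assumptions, not Datadog's or New Relic's exact algorithm:

```python
import hashlib
import re

def normalize(message):
    # Strip volatile values (ids, counts) so "timeout for user 42"
    # and "timeout for user 97" land in the same group.
    return re.sub(r"\d+", "<N>", message)

def fingerprint(service, message, frames, top_n=3):
    # Bucket on service + normalized message + top stack frames.
    parts = [service, normalize(message)] + frames[:top_n]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]

a = fingerprint("checkout", "timeout for user 42",
                ["charge_card", "retry", "http_post"])
b = fingerprint("checkout", "timeout for user 97",
                ["charge_card", "retry", "http_post"])
c = fingerprint("search", "timeout for user 42",
                ["charge_card", "retry", "http_post"])
print(a == b)  # True: same service, same normalized message and frames
print(a == c)  # False: different service splits the group
```

Because the fingerprint is derived from trace and log attributes rather than an in-process SDK, these platforms can link each error group back to the surrounding traces and logs for free.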
-
The spending logic is straightforward. Sentry started in small-team error tracking at around $1,500 ARPU, while full observability vendors sell across the engineering org at roughly $50,000 to $100,000 ARPU. That gap explains why Sentry is pushing into performance monitoring and replay, while customers still pair it with a general tool.
Over time, the boundary will narrow as observability suites improve their error workflows and Sentry keeps broadening into APM. The likely end state is not one tool replacing the other overnight. It is a stack where Sentry remains the fastest place to fix code-level failures, while broader platforms remain the system of record for everything happening around them.