Agents, LLMs, vector stores, custom logic—visibility can’t stop at the model call.
Tolerated by 4 million developers
LLM endpoints can fail silently: provider downtime, transient API errors, or rate limits. Imagine your AI-powered search, chat, or summarization just stopping without explanation.
Sentry monitors for these failures and alerts you instantly, so you can fix them before your users are impacted.
When AI features break (wrong prompts, missing inputs, bad responses, and more), Sentry sends you an alert right away and gets you to the line of code, the suspect commit, and the developer who owns it.
With support for providers like OpenAI, the Vercel AI SDK, and most custom setups that use OpenTelemetry, you can fix issues fast, before they hit more users.
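Here's a minimal sketch of what catching those failures can look like, assuming a Node.js app with @sentry/node already initialized; the openai client usage is standard, but the feature tag and summarize function are illustrative:

// Flag transient LLM failures so Sentry can alert on them instead of
// letting them fail silently. Assumes OPENAI_API_KEY is set.
const Sentry = require("@sentry/node");
const OpenAI = require("openai");

const client = new OpenAI();

async function summarize(text) {
  try {
    const res = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: `Summarize: ${text}` }],
    });
    return res.choices[0].message.content;
  } catch (err) {
    // Rate limits, timeouts, and provider outages land here.
    Sentry.captureException(err, { tags: { feature: "summarization" } });
    throw err;
  }
}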
You're already juggling complex prompts, unpredictable model output, and edge cases you didn’t even know existed. Now something’s breaking, and you’re stuck guessing.
Seer, Sentry’s AI-powered debugging agent, analyzes your error data to surface root causes fast (with 94.5% accuracy), so you can spend less time digging and more time fixing. It’s our AI... fixing your AI.
Bugs happen, whether it’s your UI, API, or the code connecting to your LLM. Sentry shows you exactly where things break, with full context and a replay of the user’s session on your frontend.
So when bad data hits your LLM, you know why, and where to fix it.
When something breaks in your AI agent, like a model timing out or a tool failing silently, traditional logs miss the full picture.
Sentry shows the entire agent run: prompts, model calls, tool spans, and errors, linked to the user action that triggered it. You see what broke, why it happened, and how it affected the experience.
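As a rough sketch, assuming @sentry/node v8+ and its Sentry.startSpan API, an agent run can be traced as one root span with a child span per step (callModel and runTool are hypothetical stand-ins for a real model call and tool execution):

const Sentry = require("@sentry/node");

// Hypothetical stand-ins for a real model call and tool execution.
async function callModel(prompt) { return `plan for: ${prompt}`; }
async function runTool(plan) { return `result of: ${plan}`; }

async function runAgent(userQuery) {
  // One root span for the whole agent run, with a child span per step,
  // so the full run shows up as a single connected trace.
  return Sentry.startSpan({ name: "agent.run", op: "ai.run" }, async () => {
    const plan = await Sentry.startSpan(
      { name: "model.plan", op: "ai.chat_completions" },
      () => callModel(userQuery)
    );
    return Sentry.startSpan(
      { name: "tool.search", op: "ai.tool" },
      () => runTool(plan)
    );
  });
}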
Models sometimes return malformed JSON or outputs that don’t match what your app expects. These failures are hard to predict and easy to miss.
Sentry captures the raw model output, shows where it was generated, and traces how it broke downstream. So you can quickly debug the issue and build smarter fallbacks for unpredictable AI behavior.
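A minimal sketch of that pattern, assuming @sentry/node is initialized; attaching the raw output as extra context is an illustrative choice, not a required one:

const Sentry = require("@sentry/node");

function parseModelOutput(raw) {
  try {
    return JSON.parse(raw);
  } catch (err) {
    // Attach the exact payload that broke downstream to the error event.
    Sentry.captureException(err, { extra: { rawModelOutput: raw } });
    return null; // fall back instead of crashing on unpredictable output
  }
}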
LLMs can introduce unpredictable delays: one prompt returns in seconds, another takes much longer due to provider load or network issues.
Sentry shows you how your LLM calls are performing over time, with breakdowns by provider, endpoint, and prompt. It’s easy to spot slowdowns, debug performance issues, and keep your AI features fast and reliable for users.
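For example, wrapping each model call in a span with provider and model attributes makes slow calls visible in traces, assuming @sentry/node v8+; the attribute names here are illustrative:

const Sentry = require("@sentry/node");

async function timedCompletion(client, messages) {
  // The span records the call's duration; attributes enable breakdowns
  // by provider and model.
  return Sentry.startSpan(
    {
      name: "chat.completions",
      op: "ai.chat_completions",
      attributes: { "ai.provider": "openai", "ai.model": "gpt-4o-mini" },
    },
    () => client.chat.completions.create({ model: "gpt-4o-mini", messages })
  );
}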
A single large input or unexpected spike can drive up token usage fast.
Sentry gives you real-time visibility and alerts, so you can catch unusual patterns early and keep LLM costs under control.
Sentry continuously tracks token consumption and LLM-related costs.
If usage patterns shift unexpectedly or costs begin to escalate, Sentry sends immediate alerts so you can investigate, pause, or throttle high-cost activity before it becomes a problem.
Sentry provides granular analytics on token usage and costs at the provider, endpoint, and even individual request level.
You can easily spot which queries, workflows, or features are consuming the most tokens, then dig into the details to optimize prompt design and trim waste.
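As one hedged sketch of how usage data can get into Sentry, token counts from the OpenAI response can be attached to the active span; the attribute names below are illustrative, not an official schema:

const Sentry = require("@sentry/node");

async function trackedCall(client, messages) {
  return Sentry.startSpan(
    { name: "chat.completions", op: "ai.chat_completions" },
    async (span) => {
      const res = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages,
      });
      // Record token usage on the span so it can be aggregated and alerted on.
      span.setAttribute("ai.usage.input_tokens", res.usage.prompt_tokens);
      span.setAttribute("ai.usage.output_tokens", res.usage.completion_tokens);
      return res;
    }
  );
}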
We support every technology (except the ones we don't).
Get started with just a few lines of code.
Grab the Sentry JavaScript SDK:
<script src="https://browser.sentry-cdn.com/<VERSION>/bundle.min.js"></script>
Configure your DSN:
Sentry.init({ dsn: 'https://<key>@sentry.io/<project>' });
That's it. Check out our documentation to ensure you have the latest instructions.
Engineers rely on Sentry to ship code, reporting increased developer productivity and faster incident resolution.
Get started with the only application monitoring platform that empowers developers to fix application problems without compromising on velocity.