# Comparison
Iris offers MCP-native, zero-code observability; LangSmith offers SDK-based instrumentation with a cloud-first architecture. Two different philosophies for monitoring your AI agents.
## TL;DR

Iris plugs into your MCP config with no code changes, stores traces in a local SQLite file you control, and is MIT-licensed. LangSmith instruments your code through an SDK and offers a richer hosted platform (LLM-as-Judge evals, custom dashboards, enterprise features) on paid tiers, with self-hosting limited to enterprise plans.
## Feature Comparison
| Feature | Iris | LangSmith |
|---|---|---|
| Integration method | MCP config (zero code) | SDK imports + @traceable decorators |
| Self-hosting complexity | Single SQLite file | Enterprise-only, license key required |
| Performance overhead | Zero (no SDK in hot path) | Async tracing via SDK in your process |
| Eval rules | 12 built-in + 8 custom types, heuristic (<1ms) | LLM-as-Judge + human review workflows |
| Cost tracking | Per-trace USD cost | Token + latency per trace and tool call |
| MCP support | Protocol-native (IS an MCP server) | A2A & MCP protocol support for deployment |
| License | MIT (fully permissive) | Proprietary platform (SDK is MIT) |
| Pricing | Free + Cloud waitlist | Free tier (5k traces/mo), Plus $39/seat/mo, Enterprise custom |
| Dashboard | Real-time dark-mode UI | Auto-clustering, pattern detection, custom dashboards |
| Framework support | Any MCP-compatible agent | LangChain, OpenAI, Anthropic, Vercel AI, LlamaIndex + more |
| Data retention | Unlimited (your SQLite, your storage) | 14 days (free) / 400 days (paid) |
| Enterprise features | Roadmap (v0.5) | SSO, BYOC, SOC 2, dedicated support |
## Decision Guide

- **Choose Iris** if you want zero-code setup via MCP config, local-first storage with unlimited retention, sub-millisecond heuristic evals, and a fully permissive MIT license.
- **Choose LangSmith** if you need LLM-as-Judge evals with human review workflows, custom dashboards, first-class LangChain integration, or enterprise features (SSO, BYOC, SOC 2) today.
Last verified: March 2026. This comparison is based on publicly available documentation and may not reflect recent changes to LangSmith. We aim to keep this page accurate and fair.
See something outdated or incorrect? Report an inaccuracy — we review and update within 48 hours.
Add Iris to your MCP config. First trace in 60 seconds. No SDK, no signup, no infrastructure.
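The page does not show the exact config entry, but MCP clients share a standard `mcpServers` config shape. A minimal sketch, assuming Iris is distributed as an npm package runnable via `npx` (the `iris` key and `iris-mcp` package name here are illustrative, not confirmed by this page):

```json
{
  "mcpServers": {
    "iris": {
      "command": "npx",
      "args": ["-y", "iris-mcp"]
    }
  }
}
```

Because Iris is itself an MCP server, this one config entry is the entire integration: no SDK imports, no decorators, and traces are written to the local SQLite file described in the table above.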