# Comparison
MCP-native, zero-code observability vs SDK-based instrumentation. Two fundamentally different approaches to understanding your AI agents.
## TL;DR
## Feature Comparison
| Feature | Iris | Langfuse |
|---|---|---|
| Integration method | MCP config (zero code) | SDK imports + @observe decorators |
| Self-hosting complexity | Single SQLite file | PostgreSQL + ClickHouse + Redis + S3 + 2 containers |
| Performance overhead | Zero (no SDK in the hot path) | Latency from 0.1 s up to multiple seconds reported (issue #6331) |
| Eval rules | 12 built-in rules + 8 custom rule types, heuristic (<1 ms) | LLM-as-Judge (powerful but slow and costly) |
| Cost tracking | Per-trace USD cost | Token / cost per user, session, model |
| MCP support | Protocol-native (IS an MCP server) | MCP server for prompt management only |
| License | MIT (fully permissive) | MIT core + commercial enterprise modules |
| Independence | Independent, founder-led | Acquired by ClickHouse (Jan 2026) |
| Dashboard | Real-time dark-mode UI | Customizable multi-dimension dashboards |
| Framework support | Any MCP-compatible agent | 20+ framework integrations |
| Prompt management | Not included | Full versioned prompt management |
| Enterprise features | Roadmap (v0.5) | SOC 2, ISO 27001, HIPAA, SCIM |
## Decision Guide
Last verified: March 2026. This comparison is based on publicly available documentation and may not reflect recent changes to Langfuse. We aim to keep this page accurate and fair.
See something outdated or incorrect? Report an inaccuracy; we review and update within 48 hours.
Add Iris to your MCP config. First trace in 60 seconds. No SDK, no signup, no infrastructure.
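As a rough sketch, the zero-code setup looks like a standard MCP client config entry. The `iris-mcp` package name and the `npx` launcher here are illustrative assumptions, not confirmed specifics; check the install docs for the exact values:

```json
{
  "mcpServers": {
    "iris": {
      "command": "npx",
      "args": ["-y", "iris-mcp"]
    }
  }
}
```

Because the agent talks to Iris over the MCP protocol itself, no SDK import or decorator ever enters your application code.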