AI Observability & MLOps

Langfuse

Open-source LLM engineering platform — trace, evaluate, and debug your AI application in production.

- Rating: 4.7 (3,200 reviews)
- Pricing tier: Free
- Learning curve: Easy
- Implementation: Hours (add SDK wrapper)
- Best for: small, medium, and large teams
Use when

Any team running LLM applications in production: Langfuse makes debugging, cost tracking, and quality evaluation practical once real traffic arrives.

Avoid when

Simple prototyping — adds overhead before you have traffic worth monitoring.

What is Langfuse?

Langfuse provides observability, evaluation, and prompt management for LLM applications. Trace every LLM call, score outputs, run evals, and manage prompt versions. Self-hostable and open-source, making it the privacy-first observability choice for companies that cannot send data to third parties.
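To show what call tracing with latency and cost capture looks like in principle, here is a plain-Python sketch. The price table, the in-memory `TRACES` list, and `fake_completion` are illustrative assumptions, not Langfuse's actual SDK (which provides its own decorator-style instrumentation and sends traces to a backend).

```python
import functools
import time

# Illustrative per-1K-token prices; real prices vary by model (assumption).
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

TRACES = []  # Langfuse ships traces to its backend; a list stands in here.

def trace_llm_call(fn):
    """Record latency, token usage, and estimated cost for each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        text, usage = fn(*args, **kwargs)  # fn returns (text, token usage)
        latency_ms = (time.perf_counter() - start) * 1000
        cost = (usage["input_tokens"] / 1000 * PRICE_PER_1K["input"]
                + usage["output_tokens"] / 1000 * PRICE_PER_1K["output"])
        TRACES.append({"name": fn.__name__, "latency_ms": latency_ms,
                       "usage": usage, "cost_usd": round(cost, 6)})
        return text
    return wrapper

@trace_llm_call
def fake_completion(prompt: str):
    # Stand-in for a real model call (hypothetical).
    return f"echo: {prompt}", {"input_tokens": 12, "output_tokens": 5}
```

Calling `fake_completion("hello")` returns the completion text while appending a trace record carrying the call name, latency, token usage, and an estimated dollar cost under the assumed prices.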

Key features

Full LLM call tracing with latency and cost
Custom evaluation scoring (human + automated)
Prompt versioning and A/B testing
Dataset management for evals
Self-hostable for data sovereignty
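The custom evaluation scoring listed above can be sketched as a scorer run over a dataset. The dataset, `exact_match_scorer`, and `run_eval` helper are invented for illustration and are not Langfuse API calls; Langfuse supports both automated scorers like this and human annotation.

```python
def exact_match_scorer(output: str, expected: str) -> float:
    """Score 1.0 on an exact (case-insensitive) match, else 0.0."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_eval(dataset, generate, scorer):
    """Run `generate` over each dataset item and attach a score."""
    results = []
    for item in dataset:
        output = generate(item["input"])
        results.append({**item, "output": output,
                        "score": scorer(output, item["expected"])})
    return results

# Tiny hypothetical eval dataset and a canned "model" for demonstration.
dataset = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]
results = run_eval(dataset, lambda q: "4" if q == "2+2" else "paris",
                   exact_match_scorer)
mean_score = sum(r["score"] for r in results) / len(results)
```

Aggregating per-item scores into a mean gives the kind of dataset-level quality number that observability dashboards track across model or prompt changes.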

Integrations

LangChain · LlamaIndex · OpenAI · Anthropic
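Prompt A/B testing (listed under key features) typically relies on stable variant assignment, so a given user always sees the same prompt version. A minimal sketch, assuming two hypothetical prompt versions and a 50/50 split; none of these names are Langfuse APIs:

```python
import hashlib

# Two hypothetical prompt versions under test (assumption, for illustration).
PROMPT_VERSIONS = {
    "v1": "Summarize the following text:\n{text}",
    "v2": "Summarize the text below in three bullet points:\n{text}",
}

def pick_variant(user_id: str, split: float = 0.5) -> str:
    """Stable assignment: hash the user id into a bucket from 0 to 99."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v1" if bucket < split * 100 else "v2"

variant = pick_variant("user-42")
prompt = PROMPT_VERSIONS[variant].format(
    text="Langfuse is an open-source LLM engineering platform.")
```

Hashing rather than random sampling keeps assignments deterministic across sessions, which is what lets downstream evaluation scores be compared per variant.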

Third-party ratings

GitHub: 4.7 (3,200 reviews)
💰 Real-world pricing

No community-reported price data yet for Langfuse.

User Reviews

No user reviews yet.