Pros
Supports multiple programming languages and AI frameworks
Provides deep, granular insights into AI application behavior
Easy to integrate with minimal code changes
Open-source and community-driven
Cons
Requires familiarity with OpenTelemetry concepts for advanced usage
Initial setup might involve configuring environment variables and API keys
Relies on external observability platforms for data visualization and analysis
Key Features
Standardized tracing for AI workflows using OpenTelemetry spans and attributes
Zero-config setup with drop-in instrumentation for minimal code changes
Multi-framework support with 50+ integrations across Python, TypeScript, Java, and C#
Vendor-agnostic, compatible with any OpenTelemetry backend (e.g., Datadog, Grafana, Jaeger)
Rich context capture including prompts, completions, tokens, and model parameters
Production-ready with async support, streaming, error handling, and performance optimization
traceAI is an open-source AI tracing framework built on OpenTelemetry, designed to provide full visibility into AI applications. It captures detailed information about every LLM call, prompt, token count, retrieval step, and agent decision, transforming this data into structured traces. These traces are then sent to any OpenTelemetry-compatible backend, such as Datadog, Grafana, Jaeger, or Future AGI, allowing users to leverage their existing observability tools without needing new vendors or dashboards.
The tool is ideal for developers and teams building AI applications who need deep insights into the performance and behavior of their LLMs, agents, and other AI components. It supports over 50 AI frameworks across Python, TypeScript, Java, and C#, offering zero-config tracing and consistent APIs. By providing rich context including prompts, completions, tokens, model parameters, and tool calls, traceAI helps in debugging, optimizing, and understanding complex AI workflows in production environments.
How does traceAI ensure compatibility with various AI frameworks and programming languages?
traceAI achieves broad compatibility by offering specific instrumentors for over 50 AI frameworks across Python, TypeScript, Java, and C#. These instrumentors provide consistent APIs and are designed for zero-config setup, allowing developers to easily integrate tracing into their existing AI applications regardless of the underlying framework or language.
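Conceptually, an instrumentor works by wrapping the framework client's methods so calls are recorded without any change to application code. The pure-Python sketch below illustrates that drop-in pattern only; `SimpleClient` and `instrument` are hypothetical stand-ins, not traceAI's actual API.

```python
import functools
import time


class SimpleClient:
    """Stand-in for an AI framework client (hypothetical)."""

    def chat(self, prompt):
        return f"echo: {prompt}"


def instrument(client, span_log):
    """Wrap client.chat so each call appends a span-like record."""
    original = client.chat

    @functools.wraps(original)
    def traced(prompt, *args, **kwargs):
        start = time.perf_counter()
        result = original(prompt, *args, **kwargs)
        span_log.append({
            "name": "chat",
            "input": prompt,
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result

    client.chat = traced
    return client


spans = []
client = instrument(SimpleClient(), spans)
client.chat("hello")  # application code calls chat() exactly as before
```

Because the wrapping happens once at setup time, existing call sites need no edits, which is what makes zero-config tracing possible.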
What kind of data does traceAI capture from an AI application, beyond basic LLM calls?
Beyond basic LLM calls, traceAI captures rich contextual data including the full prompt, completion details, token counts, model parameters, tool calls made by agents, and decisions made during agent execution. It also traces retrieval steps in RAG (Retrieval Augmented Generation) systems, providing a comprehensive view of the AI workflow.
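That captured context maps naturally onto span attributes. The dictionary below sketches the kind of key/value pairs a traced LLM call might carry; the attribute names are illustrative examples, not traceAI's exact semantic conventions.

```python
# Illustrative span attributes for a single traced LLM call.
# Keys are examples only, not the exact traceAI attribute names.
llm_span_attributes = {
    "llm.model_name": "gpt-4o-mini",
    "llm.invocation_parameters": {"temperature": 0.2, "max_tokens": 256},
    "llm.input_messages": [{"role": "user", "content": "Summarize the report."}],
    "llm.output_messages": [{"role": "assistant", "content": "The report says..."}],
    "llm.token_count.prompt": 42,
    "llm.token_count.completion": 118,
    "llm.token_count.total": 160,
    # Agent and RAG context travel on the same trace:
    "tool.calls": [{"name": "search_docs", "arguments": {"query": "Q3 revenue"}}],
    "retrieval.documents": [{"id": "doc-7", "score": 0.83}],
}

# Token totals should be internally consistent across the span.
total = (llm_span_attributes["llm.token_count.prompt"]
         + llm_span_attributes["llm.token_count.completion"])
```

Having prompts, completions, token counts, tool calls, and retrieval hits on one structured record is what lets a backend reconstruct the full workflow for a single request.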
Can traceAI be used with a custom OpenTelemetry collector or a self-hosted OpenTelemetry backend?
Yes, traceAI is built on OpenTelemetry and is designed to be vendor-agnostic. This means it can send structured traces to any OpenTelemetry-compatible backend, including custom OpenTelemetry collectors or self-hosted solutions, in addition to popular commercial offerings like Datadog, Grafana, and Jaeger.
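Because the export path is plain OpenTelemetry, pointing traces at a self-hosted collector is typically a matter of the standard OTLP environment variables defined by the OpenTelemetry specification. A sketch, assuming a collector on its default local gRPC port:

```shell
# Standard OpenTelemetry OTLP settings; any OTel-based SDK reads these.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"  # self-hosted collector
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_SERVICE_NAME="my-ai-app"
```

Swapping backends then means changing the endpoint, not the instrumentation code.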
What are the specific environment variables or configuration steps required to get traceAI working with an OpenAI application?
For an OpenAI application, you typically set FI_API_KEY and FI_SECRET_KEY (used for project registration, or for traceAI's own Future AGI backend if you use it) alongside OPENAI_API_KEY. After installing the traceai-openai package, you register a tracer provider and instrument the OpenAI client, which then captures tracing data automatically.
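A minimal sketch of that setup, using the variable and package names cited above; the key values are placeholders, and the exact register/instrument calls should be taken from the traceAI documentation:

```shell
# Install the OpenAI instrumentation package named above.
pip install traceai-openai openai

# Credentials; replace the placeholder values with your own keys.
export FI_API_KEY="your-futureagi-api-key"
export FI_SECRET_KEY="your-futureagi-secret-key"
export OPENAI_API_KEY="your-openai-api-key"
```

With these in place, the remaining steps happen in code: registering the tracer provider and instrumenting the OpenAI client, after which calls are traced without further changes.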