A Ruby library for instrumenting LLM applications using OpenTelemetry and the OpenTelemetry GenAI standards. This library provides easy-to-use tracing capabilities for LLM workflows, agents, API calls, and tool usage.
- OpenTelemetry Integration: Built on top of OpenTelemetry SDK for industry-standard tracing
- GenAI Standards: Follows OpenTelemetry GenAI semantic conventions
- Multiple Span Types: Support for workflow, agent, LLM call, and tool call spans with clean separation of concerns
- Easy Integration: Simple API that can be easily integrated into existing LLM applications
- Flexible Configuration: Support for various OpenTelemetry exporters and configurations
This library implements the official OpenTelemetry GenAI semantic conventions, including:
- Standardized Attribute Names: All attributes use the gen_ai.*prefix as specified in the standard
- Operation Names: Support for standardized operations like chat,embeddings,execute_tool,generate_content, etc.
- System Values: Predefined constants for all major GenAI systems (OpenAI, Anthropic, Google Gemini, AWS Bedrock, etc.)
- Output Types: Standardized output type constants (text,image,json,speech)
- Error Handling: Standardized error attributes (error.type,error.message)
- Conversation Tracking: Support for conversation IDs across multi-turn interactions
Add this line to your application's Gemfile:
gem 'llm_tracer'And then execute:
bundle installOr install it yourself as:
gem install llm_tracerrequire 'llm_tracer'
# Initialize OpenTelemetry (basic console exporter for development)
OpenTelemetry::SDK.configure
# Create a simple workflow span
LlmTracer.workflow(name: "my_workflow", version: "1.0.0") do |span|
  # Your workflow logic here
  puts "Executing workflow: #{span.name}"
end# Trace an LLM API call using standardized attributes
LlmTracer.llm_call(
  model: "gpt-4",
  provider: "OpenAI",
  operation_name: LlmTracer::GenAI::Operations::CHAT,
  system: LlmTracer::GenAI::Systems::OPENAI,
  prompt: "Generate a creative story",
  temperature: 0.8,
  max_tokens: 500,
  conversation_id: "conv_123",
  output_type: LlmTracer::GenAI::OutputTypes::TEXT
) do |span|
  # Make your LLM API call here
  response = make_llm_call(prompt)
  # Add response information to the span
  LlmTracer.tracer.add_llm_response(
    span,
    response_model: "gpt-4",
    response_provider: "OpenAI",
    finish_reason: "stop",
    usage: {
      total_tokens: 150,
      prompt_tokens: 25,
      completion_tokens: 125
    }
  )
end# Trace a tool or function call using standardized attributes
LlmTracer.tool_call(
  name: "weather_api",
  operation_name: LlmTracer::GenAI::Operations::EXECUTE_TOOL,
  system: LlmTracer::GenAI::Systems::OPENAI,
  input: { city: "San Francisco", country: "US" },
  conversation_id: "conv_123"
) do |span|
  # Execute your tool here
  result = weather_api.get_weather(city, country)
  # Add the result to the span
  LlmTracer.tracer.add_tool_call_result(span, output: result)
end# Create an agent using standardized attributes
LlmTracer.agent(
  name: "Customer Support Agent",
  version: "1.0.0",
  operation_name: LlmTracer::GenAI::Operations::CREATE_AGENT,
  system: LlmTracer::GenAI::Systems::OPENAI,
  conversation_id: "conv_123"
) do |agent_span|
  # Agent creation logic
  # Invoke the agent
  LlmTracer.agent(
    name: "Customer Support Agent",
    version: "1.0.0",
    operation_name: LlmTracer::GenAI::Operations::INVOKE_AGENT,
    system: LlmTracer::GenAI::Systems::OPENAI,
    conversation_id: "conv_123"
  ) do |invoke_span|
    # Agent invocation logic
  end
endThe library provides constants for all standardized GenAI operations:
LlmTracer::GenAI::Operations::CHAT              # Chat completion
LlmTracer::GenAI::Operations::CREATE_AGENT      # Create GenAI agent
LlmTracer::GenAI::Operations::EMBEDDINGS        # Embeddings generation
LlmTracer::GenAI::Operations::EXECUTE_TOOL     # Execute a tool
LlmTracer::GenAI::Operations::GENERATE_CONTENT # Multimodal content generation
LlmTracer::GenAI::Operations::INVOKE_AGENT     # Invoke GenAI agent
LlmTracer::GenAI::Operations::TEXT_COMPLETION  # Text completionPredefined constants for all major GenAI systems:
LlmTracer::GenAI::Systems::OPENAI              # OpenAI
LlmTracer::GenAI::Systems::ANTHROPIC           # Anthropic
LlmTracer::GenAI::Systems::GCP_GEMINI          # Google Gemini
LlmTracer::GenAI::Systems::AWS_BEDROCK         # AWS Bedrock
LlmTracer::GenAI::Systems::AZURE_AI_OPENAI     # Azure OpenAI
LlmTracer::GenAI::Systems::COHERE              # Cohere
LlmTracer::GenAI::Systems::GROQ                # Groq
LlmTracer::GenAI::Systems::MISTRAL_AI          # Mistral AI
# ... and many moreConstants for expected output types:
LlmTracer::GenAI::OutputTypes::TEXT   # Plain text
LlmTracer::GenAI::OutputTypes::IMAGE  # Image
LlmTracer::GenAI::OutputTypes::JSON   # JSON object
LlmTracer::GenAI::OutputTypes::SPEECH # SpeechAll attributes follow the official OpenTelemetry GenAI specification:
# Common attributes
"gen_ai.operation.name"      # Operation being performed
"gen_ai.system"              # GenAI system being used
"gen_ai.conversation.id"     # Conversation identifier
"gen_ai.data_source.id"      # Data source identifier
"gen_ai.output.type"         # Expected output type
# Request attributes
"gen_ai.request.model"       # Model being used
"gen_ai.request.provider"    # Provider of the model
# Response attributes
"gen_ai.response.model"      # Response model
"gen_ai.response.provider"   # Response provider
# Error attributes
"error.type"                 # Error type
"error.message"              # Error messageThe library uses a clean architecture with dedicated classes for each span type:
- SpanTypes::WorkflowSpan- Configuration for workflow spans
- SpanTypes::AgentSpan- Configuration for agent spans
- SpanTypes::LlmCallSpan- Configuration for LLM call spans
- SpanTypes::ToolCallSpan- Configuration for tool call spans
Each SpanTypes class handles the configuration, attributes, and span kind for its respective span type, providing clean separation of concerns and making the code more maintainable.
Represents a high-level workflow or business process.
Attributes:
- gen_ai.workflow.name- Name of the workflow
- gen_ai.workflow.version- Version of the workflow
Usage:
LlmTracer.workflow(name: "workflow_name", version: "1.0.0") do |span|
  # Workflow logic
endRepresents an AI agent or component within a workflow.
Attributes:
- gen_ai.agent.name- Name of the agent
- gen_ai.agent.version- Version of the agent
- gen_ai.operation.name- Operation being performed
- gen_ai.system- GenAI system being used
- gen_ai.conversation.id- Conversation identifier
Usage:
LlmTracer.agent(
  name: "agent_name",
  version: "1.0.0",
  operation_name: LlmTracer::GenAI::Operations::CREATE_AGENT,
  system: LlmTracer::GenAI::Systems::OPENAI,
  conversation_id: "conv_123"
) do |span|
  # Agent logic
endRepresents a call to an LLM service.
Attributes:
- gen_ai.request.model- Model being used
- gen_ai.request.provider- Provider of the model
- gen_ai.operation.name- Operation being performed
- gen_ai.system- GenAI system being used
- gen_ai.llm.request.prompt- Prompt sent to the LLM
- gen_ai.llm.request.temperature- Temperature setting
- gen_ai.llm.request.max_tokens- Maximum tokens setting
- gen_ai.conversation.id- Conversation identifier
- gen_ai.output.type- Expected output type
- gen_ai.response.model- Response model
- gen_ai.response.provider- Response provider
- gen_ai.llm.response.finish_reason- Finish reason
- gen_ai.llm.response.usage.*- Token usage information
Usage:
LlmTracer.llm_call(
  model: "gpt-4",
  provider: "OpenAI",
  operation_name: LlmTracer::GenAI::Operations::CHAT,
  system: LlmTracer::GenAI::Systems::OPENAI,
  prompt: "Your prompt here",
  temperature: 0.7,
  max_tokens: 1000,
  conversation_id: "conv_123",
  output_type: LlmTracer::GenAI::OutputTypes::TEXT
) do |span|
  # LLM call logic
endRepresents a call to an external tool or function.
Attributes:
- gen_ai.tool_call.name- Name of the tool
- gen_ai.operation.name- Operation being performed
- gen_ai.system- GenAI system being used
- gen_ai.tool_call.input- Input parameters
- gen_ai.tool_call.output- Output result
- gen_ai.tool_call.error- Error information
- gen_ai.conversation.id- Conversation identifier
Usage:
LlmTracer.tool_call(
  name: "tool_name",
  operation_name: LlmTracer::GenAI::Operations::EXECUTE_TOOL,
  system: LlmTracer::GenAI::Systems::OPENAI,
  input: { param1: "value1" },
  conversation_id: "conv_123"
) do |span|
  # Tool execution logic
endrequire 'opentelemetry/sdk'
OpenTelemetry::SDK.configure do |c|
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
      OpenTelemetry::SDK::Trace::Export::ConsoleSpanExporter.new
    )
  )
endrequire 'opentelemetry/exporter/otlp'
OpenTelemetry::SDK.configure do |c|
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
      OpenTelemetry::Exporter::OTLP::Exporter.new(
        endpoint: "http://localhost:4317"
      )
    )
  )
endOpenTelemetry::SDK.configure do |c|
  c.resource = OpenTelemetry::SDK::Resources::Resource.create(
    OpenTelemetry::SemanticConventions::Resource::SERVICE_NAME => "my_llm_app",
    OpenTelemetry::SemanticConventions::Resource::SERVICE_VERSION => "1.0.0",
    OpenTelemetry::SemanticConventions::Resource::DEPLOYMENT_ENVIRONMENT => "production"
  )
  # Add span processors...
endclass OpenAIClient
  def initialize(api_key)
    @api_key = api_key
  end
  def chat_completion(messages, model: "gpt-4", temperature: 0.7)
    LlmTracer.llm_call(
      model: model,
      provider: "openai",
      prompt: messages.map { |m| "#{m[:role]}: #{m[:content]}" }.join("\n"),
      temperature: temperature
    ) do |span|
      # Make actual API call
      response = make_api_call(messages, model, temperature)
      # Add response info to span
      LlmTracer.tracer.add_llm_response(
        span,
        response_model: response[:model],
        response_provider: "openai",
        finish_reason: response[:finish_reason],
        usage: response[:usage]
      )
      response
    end
  end
endclass AnthropicClient
  def messages(prompt, model: "claude-3-sonnet")
    LlmTracer.llm_call(
      model: model,
      provider: "anthropic",
      prompt: prompt
    ) do |span|
      # Make actual API call
      response = make_api_call(prompt, model)
      # Add response info to span
      LlmTracer.tracer.add_llm_response(
        span,
        response_model: response[:model],
        response_provider: "anthropic",
        finish_reason: response[:stop_reason],
        usage: response[:usage]
      )
      response
    end
  end
endYou can add custom attributes to any span:
LlmTracer.workflow(name: "custom_workflow") do |span|
  span.set_attribute("business.customer_id", "12345")
  span.set_attribute("business.priority", "high")
  span.set_attribute("custom.metric", 42.5)
endLlmTracer.llm_call(model: "gpt-4", provider: "openai") do |span|
  begin
    response = make_llm_call()
    # Process response
  rescue => e
    span.set_attribute("error", true)
    span.set_attribute("error.message", e.message)
    span.set_attribute("error.type", e.class.name)
    raise
  end
endSpans automatically inherit parent context, creating a trace hierarchy:
LlmTracer.workflow(name: "parent_workflow") do |workflow_span|
  # This span is a child of the workflow span
  LlmTracer.agent(name: "child_agent") do |agent_span|
    # This span is a child of the agent span
    LlmTracer.llm_call(model: "gpt-4", provider: "openai") do |llm_span|
      # This span is a child of the agent span
    end
  end
endSee the examples/ directory for complete working examples:
- basic_usage.rb- Basic usage examples
- llm_provider_integration.rb- Integration with LLM providers
- configuration.rb- Various configuration options
After checking out the repo, run bin/setup to install dependencies. Then, run rake spec to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.
Bug reports and pull requests are welcome on GitHub at https://github.com/chatwoot/llm_tracer. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
The gem is available as open source under the terms of the MIT License.
Everyone interacting in the LlmTracer project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.