Commit c0afc0d

improve instrumentation docs (#1625)
Co-authored-by: Alex Hall <[email protected]>
1 parent f86b702 commit c0afc0d

26 files changed: +208 −78 lines changed

docs/img/logfire-evals-case-trace.png  −444 KB
docs/img/logfire-evals-case.png  −407 KB
docs/img/logfire-evals-overview.png  −342 KB
−218 KB
docs/img/logfire-run-python-code.png  −245 KB
docs/img/logfire-simple-agent.png  94.5 KB
docs/img/logfire-weather-agent.png  −12.3 KB
docs/img/logfire-with-httpx.png  −133 KB
docs/img/logfire-without-httpx.png  −87.9 KB
docs/img/otel-tui-simple.png  494 KB
docs/img/otel-tui-weather.png  618 KB

docs/logfire.md

Lines changed: 154 additions & 50 deletions
@@ -15,7 +15,7 @@ LLM Observability tools that just let you understand how your model is performin
 
 ## Pydantic Logfire
 
-[Pydantic Logfire](https://pydantic.dev/logfire) is an observability platform developed by the team who created and maintain Pydantic and PydanticAI. Logfire aims to let you understand your entire application: Gen AI, classic predictive AI, HTTP traffic, database queries and everything else a modern application needs.
+[Pydantic Logfire](https://pydantic.dev/logfire) is an observability platform developed by the team who created and maintain Pydantic and PydanticAI. Logfire aims to let you understand your entire application: Gen AI, classic predictive AI, HTTP traffic, database queries and everything else a modern application needs, all using OpenTelemetry.
 
 !!! tip "Pydantic Logfire is a commercial product"
     Logfire is a commercially supported, hosted platform with an extremely generous and perpetual [free tier](https://pydantic.dev/pricing/).
@@ -27,15 +27,17 @@ Here's an example showing details of running the [Weather Agent](examples/weathe
 
 ![Weather Agent Logfire](img/logfire-weather-agent.png)
 
+A trace is generated for the agent run, and spans are emitted for each model request and tool call.
+
 ## Using Logfire
 
-To use logfire, you'll need a logfire [account](https://logfire.pydantic.dev), and logfire installed:
+To use Logfire, you'll need a Logfire [account](https://logfire.pydantic.dev), and the Logfire Python SDK installed:
 
 ```bash
 pip/uv-add "pydantic-ai[logfire]"
 ```
 
-Then authenticate your local environment with logfire:
+Then authenticate your local environment with Logfire:
 
 ```bash
 py-cli logfire auth
@@ -49,34 +51,40 @@ py-cli logfire projects new
 
 (Or use an existing project with `logfire projects use`)
 
-Then add logfire to your code:
-
-```python {title="adding_logfire.py"}
-import logfire
-
-logfire.configure()
-```
-
-and enable instrumentation in your agent:
+This will write to a `.logfire` directory in the current working directory, which the Logfire SDK will use for configuration at run time.
 
-```python {title="instrument_agent.py"}
+With that, you can start using Logfire to instrument PydanticAI code:
+
+```python {title="instrument_pydantic_ai.py" hl_lines="1 5 6"}
+import logfire
+
 from pydantic_ai import Agent
 
-agent = Agent('openai:gpt-4o', instrument=True)
-# or instrument all agents to avoid needing to add `instrument=True` to each agent:
-Agent.instrument_all()
+logfire.configure() # (1)!
+logfire.instrument_pydantic_ai() # (2)!
+
+agent = Agent('openai:gpt-4o', instructions='Be concise, reply with one sentence.')
+result = agent.run_sync('Where does "hello world" come from?') # (3)!
+print(result.output)
+"""
+The first known use of "hello, world" was in a 1974 textbook about the C programming language.
+"""
 ```
 
-The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire,
-including how to instrument other libraries like [Pydantic](https://logfire.pydantic.dev/docs/integrations/pydantic/),
-[HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/).
+1. [`logfire.configure()`][logfire.configure] configures the SDK; by default it finds the write token in the `.logfire` directory, but you can also pass a token directly.
+2. [`logfire.instrument_pydantic_ai()`][logfire.Logfire.instrument_pydantic_ai] enables instrumentation of PydanticAI.
+3. Since we've enabled instrumentation, a trace will be generated for each run, with spans emitted for model calls and tool function execution.
+
+_(This example is complete, it can be run "as is")_
 
-Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector.
+Which will display in Logfire thus:
 
-Once you have logfire set up, there are two primary ways it can help you understand your application:
+![Logfire Simple Agent Run](img/logfire-simple-agent.png)
 
-* **Debugging** — Using the live view to see what's happening in your application in real-time.
-* **Monitoring** — Using SQL and dashboards to observe the behavior of your application, Logfire is effectively a SQL database that stores information about how your application is running.
+The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use Logfire,
+including how to instrument other libraries like [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/).
+
+Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector, see [below](#using-opentelemetry).
 
 ### Debugging
 
@@ -90,65 +98,161 @@ We can also query data with SQL in Logfire to monitor the performance of an appl
 
 ![Logfire monitoring PydanticAI](img/logfire-monitoring-pydanticai.png)
 
-### Monitoring HTTPX Requests
+### Monitoring HTTP Requests
 
-In order to monitor HTTPX requests made by models, you can use `logfire`'s [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) integration.
+!!! tip ""F**k you, show me the prompt.""
+    As per Hamel Husain's influential 2024 blog post ["Fuck You, Show Me The Prompt."](https://hamel.dev/blog/posts/prompt/)
+    (bear with the capitalization, the point is valid), it's often useful to be able to view the raw HTTP requests and responses made to model providers.
 
-Instrumentation is as easy as adding the following three lines to your application:
+To observe raw HTTP requests made to model providers, you can use `logfire`'s [HTTPX instrumentation](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) since all provider SDKs use the [HTTPX](https://www.python-httpx.org/) library internally.
 
-```py {title="instrument_httpx.py" test="skip" lint="skip"}
-import logfire
-logfire.configure()
-logfire.instrument_httpx(capture_all=True) # (1)!
+=== "With HTTP instrumentation"
+
+    ```py {title="with_logfire_instrument_httpx.py" hl_lines="7"}
+    import logfire
+
+    from pydantic_ai import Agent
+
+    logfire.configure()
+    logfire.instrument_pydantic_ai()
+    logfire.instrument_httpx(capture_all=True) # (1)!
+    agent = Agent('openai:gpt-4o')
+    result = agent.run_sync('What is the capital of France?')
+    print(result.output)
+    #> Paris
+    ```
+
+    1. See the [`logfire.instrument_httpx` docs][logfire.Logfire.instrument_httpx] for more details; `capture_all=True` means both headers and body are captured for both the request and response.
+
+    ![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png)
+
+=== "Without HTTP instrumentation"
+
+    ```py {title="without_logfire_instrument_httpx.py"}
+    import logfire
+
+    from pydantic_ai import Agent
+
+    logfire.configure()
+    logfire.instrument_pydantic_ai()
+
+    agent = Agent('openai:gpt-4o')
+    result = agent.run_sync('What is the capital of France?')
+    print(result.output)
+    #> Paris
+    ```
+
+    ![Logfire without HTTPX instrumentation](img/logfire-without-httpx.png)
+
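Editor's aside: `capture_all=True` records request and response headers and bodies, and requests to model providers carry your API key in the `Authorization` header. The logfire SDK has its own scrubbing support for this; as a purely hypothetical illustration (not logfire's actual logic) of why captured headers need care before storage:

```python
# Hypothetical helper, not part of the logfire SDK: redact sensitive
# headers before storing captured HTTP traffic, since requests to model
# providers include credentials such as API keys.
SENSITIVE_HEADERS = {'authorization', 'x-api-key', 'cookie', 'set-cookie'}


def redact(headers: dict[str, str]) -> dict[str, str]:
    # Compare case-insensitively: HTTP header names are case-insensitive.
    return {
        name: '[REDACTED]' if name.lower() in SENSITIVE_HEADERS else value
        for name, value in headers.items()
    }


captured = {'Authorization': 'Bearer sk-secret', 'Content-Type': 'application/json'}
print(redact(captured))
#> {'Authorization': '[REDACTED]', 'Content-Type': 'application/json'}
```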
+## Using OpenTelemetry
+
+PydanticAI's instrumentation uses [OpenTelemetry](https://opentelemetry.io/) (OTel), which Logfire is based on.
+
+This means you can debug and monitor PydanticAI with any OpenTelemetry backend.
+
+PydanticAI follows the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/), so while we think you'll have the best experience using the Logfire platform :wink:, you should be able to use any OTel service with GenAI support.
+
+### Logfire with an alternative OTel backend
+
+You can use the Logfire SDK completely freely and send the data to any OpenTelemetry backend.
+
+Here's an example of configuring the Logfire library to send data to the excellent [otel-tui](https://github.com/ymtdzzz/otel-tui) — an open source terminal based OTel backend and viewer (no association with Pydantic).
+
+Run `otel-tui` with docker (see [the otel-tui readme](https://github.com/ymtdzzz/otel-tui) for more instructions):
+
+```txt title="Terminal"
+docker run --rm -it -p 4318:4318 --name otel-tui ymtdzzz/otel-tui:latest
 ```
 
-1. See the [logfire docs](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) for more `httpx` instrumentation details.
+then run,
 
-In particular, this can help you to trace specific requests, responses, and headers:
+```python {title="otel_tui.py" hl_lines="7 8" test="skip"}
+import os
 
-```py {title="instrument_httpx_example.py", test="skip" lint="skip"}
 import logfire
+
 from pydantic_ai import Agent
 
-logfire.configure()
-logfire.instrument_httpx(capture_all=True) # (1)!
+os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318' # (1)!
+logfire.configure(send_to_logfire=False) # (2)!
+logfire.instrument_pydantic_ai()
+logfire.instrument_httpx(capture_all=True)
 
-agent = Agent('openai:gpt-4o', instrument=True)
+agent = Agent('openai:gpt-4o')
 result = agent.run_sync('What is the capital of France?')
 print(result.output)
-# > The capital of France is Paris.
+#> Paris
 ```
 
-1. Capture all of headers, request body, and response body.
+1. Set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable to the URL of your OpenTelemetry backend. If you're using a backend that requires authentication, you may need to set [other environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/). Of course, these can also be set outside the process, e.g. with `export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318`.
+2. We [configure][logfire.configure] Logfire to disable sending data to the Logfire OTel backend itself. If you removed `send_to_logfire=False`, data would be sent to both Logfire and your OpenTelemetry backend.
 
-=== "With `httpx` instrumentation"
+Running the above code will send tracing data to `otel-tui`, which will display like this:
 
-    ![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png)
+![otel tui simple](img/otel-tui-simple.png)
 
-=== "Without `httpx` instrumentation"
+Running the [weather agent](examples/weather-agent.md) example connected to `otel-tui` shows how it can be used to visualise a more complex trace:
 
-    ![Logfire without HTTPX instrumentation](img/logfire-without-httpx.png)
+![otel tui weather agent](img/otel-tui-weather.png)
 
-!!! tip
-    `httpx` instrumentation might be of particular utility if you're using a custom `httpx` client in your model in order to get insights into your custom requests.
+For more information on using the Logfire SDK to send data to alternative backends, see
+[the Logfire documentation](https://logfire.pydantic.dev/docs/how-to-guides/alternative-backends/).
 
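Editor's aside: `OTEL_EXPORTER_OTLP_ENDPOINT` is a base URL — per the OpenTelemetry OTLP exporter specification, OTLP/HTTP exporters append a fixed per-signal path to it (`/v1/traces` for spans, `/v1/metrics`, `/v1/logs`). A minimal sketch of that derivation:

```python
import os

# The base endpoint, as in the otel_tui.py example above.
os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'

# Sketch of how an OTLP/HTTP exporter derives its traces URL from the base:
base = os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'].rstrip('/')
traces_url = f'{base}/v1/traces'
print(traces_url)
#> http://localhost:4318/v1/traces
```

(The spec also defines signal-specific overrides such as `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`, which are used verbatim, without appending a path.)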
-## Using OpenTelemetry
+### OTel without Logfire
 
-PydanticAI's instrumentation uses [OpenTelemetry](https://opentelemetry.io/), which Logfire is based on. You can use the Logfire SDK completely freely and follow the [Alternative backends](https://logfire.pydantic.dev/docs/how-to-guides/alternative-backends/) guide to send the data to any OpenTelemetry collector, such as a self-hosted Jaeger instance. Or you can skip Logfire entirely and use the OpenTelemetry Python SDK directly.
+You can also emit OpenTelemetry data from PydanticAI without using Logfire at all.
+
+To do this, you'll need to install and configure the OpenTelemetry packages you need. To run the following examples, use
+
+```txt title="Terminal"
+uv run \
+  --with 'pydantic-ai-slim[openai]' \
+  --with opentelemetry-sdk \
+  --with opentelemetry-exporter-otlp \
+  raw_otel.py
+```
+
+```python {title="raw_otel.py" test="skip"}
+import os
+
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.trace import set_tracer_provider
+
+from pydantic_ai.agent import Agent
+
+os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'
+exporter = OTLPSpanExporter()
+span_processor = BatchSpanProcessor(exporter)
+tracer_provider = TracerProvider()
+tracer_provider.add_span_processor(span_processor)
+
+set_tracer_provider(tracer_provider)
+
+Agent.instrument_all()
+agent = Agent('openai:gpt-4o')
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> Paris
+```
 
 ## Data format
 
 PydanticAI follows the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/), with one caveat. The semantic conventions specify that messages should be captured as individual events (logs) that are children of the request span. By default, PydanticAI instead collects these events into a JSON array which is set as a single large attribute called `events` on the request span. To change this, use [`InstrumentationSettings(event_mode='logs')`][pydantic_ai.agent.InstrumentationSettings].
 
 ```python {title="instrumentation_settings_event_mode.py"}
-from pydantic_ai import Agent
-from pydantic_ai.agent import InstrumentationSettings
+import logfire
 
-instrumentation_settings = InstrumentationSettings(event_mode='logs')
+from pydantic_ai import Agent
 
-agent = Agent('openai:gpt-4o', instrument=instrumentation_settings)
-# or instrument all agents:
-Agent.instrument_all(instrumentation_settings)
+logfire.configure()
+logfire.instrument_pydantic_ai(event_mode='logs')
+agent = Agent('openai:gpt-4o')
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> Paris
 ```
 
 For now, this won't look as good in the Logfire UI, but we're working on it.
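Editor's aside: the `event_mode` caveat in the hunk above is easiest to see in data. A stdlib-only sketch of the two shapes — the field names are loose approximations of the GenAI semantic conventions, not the exact attributes PydanticAI emits:

```python
import json

# Illustrative message events for one model request (field names are rough
# approximations of the OTel GenAI semantic conventions).
events = [
    {'event.name': 'gen_ai.user.message', 'content': 'What is the capital of France?'},
    {'event.name': 'gen_ai.choice', 'message': {'content': 'Paris'}},
]

# Default mode: all events are serialized into ONE large JSON-array attribute
# called `events` on the request span.
span_attributes = {'events': json.dumps(events)}

# event_mode='logs': each event would instead be emitted as its own log
# record, a child of the request span (sketched here as separate dicts).
log_records = [{'body': event} for event in events]

# Same information either way, just packaged differently:
assert json.loads(span_attributes['events']) == [r['body'] for r in log_records]
```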

docs/troubleshooting.md

Lines changed: 1 addition & 1 deletion

@@ -24,4 +24,4 @@ If you're running into issues with setting the API key for your model, visit the
 
 You can use custom `httpx` clients in your models in order to access specific requests, responses, and headers at runtime.
 
-It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-httpx-requests) to monitor the above.
+It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-http-requests) to monitor the above.

examples/pydantic_ai_examples/chat_app.py

Lines changed: 2 additions & 1 deletion

@@ -38,8 +38,9 @@
 
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
+logfire.instrument_pydantic_ai()
 
-agent = Agent('openai:gpt-4o', instrument=True)
+agent = Agent('openai:gpt-4o')
 THIS_DIR = Path(__file__).parent
 
examples/pydantic_ai_examples/flight_booking.py

Lines changed: 1 addition & 1 deletion

@@ -17,6 +17,7 @@
 
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
+logfire.instrument_pydantic_ai()
 
 
 class FlightDetails(BaseModel):
@@ -49,7 +50,6 @@ class Deps:
     system_prompt=(
         'Your job is to find the cheapest flight for the user on the given date. '
     ),
-    instrument=True,
 )
 

examples/pydantic_ai_examples/pydantic_model.py

Lines changed: 2 additions & 1 deletion

@@ -14,6 +14,7 @@
 
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
+logfire.instrument_pydantic_ai()
 
 
 class MyModel(BaseModel):
@@ -23,7 +24,7 @@ class MyModel(BaseModel):
 
 model = os.getenv('PYDANTIC_AI_MODEL', 'openai:gpt-4o')
 print(f'Using model: {model}')
-agent = Agent(model, output_type=MyModel, instrument=True)
+agent = Agent(model, output_type=MyModel)
 
 if __name__ == '__main__':
     result = agent.run_sync('The windy city in the US of A.')

examples/pydantic_ai_examples/question_graph.py

Lines changed: 2 additions & 1 deletion

@@ -25,8 +25,9 @@
 
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
+logfire.instrument_pydantic_ai()
 
-ask_agent = Agent('openai:gpt-4o', output_type=str, instrument=True)
+ask_agent = Agent('openai:gpt-4o', output_type=str)
 
 
 @dataclass

examples/pydantic_ai_examples/rag.py

Lines changed: 2 additions & 1 deletion

@@ -40,6 +40,7 @@
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
 logfire.instrument_asyncpg()
+logfire.instrument_pydantic_ai()
 
 
 @dataclass
@@ -48,7 +49,7 @@ class Deps:
     pool: asyncpg.Pool
 
 
-agent = Agent('openai:gpt-4o', deps_type=Deps, instrument=True)
+agent = Agent('openai:gpt-4o', deps_type=Deps)
 
 
 @agent.tool

examples/pydantic_ai_examples/roulette_wheel.py

Lines changed: 0 additions & 1 deletion

@@ -28,7 +28,6 @@ class Deps:
     system_prompt=(
         'Use the `roulette_wheel` function to determine if the customer has won based on the number they bet on.'
     ),
-    instrument=True,
 )
 
examples/pydantic_ai_examples/sql_gen.py

Lines changed: 1 addition & 1 deletion

@@ -30,6 +30,7 @@
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
 logfire.instrument_asyncpg()
+logfire.instrument_pydantic_ai()
 
 DB_SCHEMA = """
 CREATE TABLE records (
@@ -96,7 +97,6 @@ class InvalidRequest(BaseModel):
     # Type ignore while we wait for PEP-0747, nonetheless unions will work fine everywhere else
     output_type=Response,  # type: ignore
     deps_type=Deps,
-    instrument=True,
 )
 
examples/pydantic_ai_examples/stream_markdown.py

Lines changed: 2 additions & 1 deletion

@@ -20,8 +20,9 @@
 
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
+logfire.instrument_pydantic_ai()
 
-agent = Agent(instrument=True)
+agent = Agent()
 
 # models to try, and the appropriate env var
 models: list[tuple[KnownModelName, str]] = [
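Editor's aside: every example diff above passes `send_to_logfire='if-token-present'`. A stdlib sketch of the idea — illustrative logic only, not the logfire SDK's implementation, and the token value is made up:

```python
def should_send(send_to_logfire, token):
    """Sketch of 'if-token-present': export only when a write token exists.

    Illustrative only; the real decision lives inside logfire.configure().
    """
    if send_to_logfire == 'if-token-present':
        return token is not None
    return bool(send_to_logfire)


# Examples keep working (without exporting anything) for users with no token:
assert should_send('if-token-present', None) is False
# ...and transparently start exporting once a token has been stored:
assert should_send('if-token-present', 'pylf-demo-token') is True
```

This is why the examples run "as is" for readers who have never set up a Logfire account.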
