|`gen_ai.request.available_tools`| string | optional | List of dictionaries describing the available tools. **[0]**|`"[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"`|
|`gen_ai.request.frequency_penalty`| float | optional | Model configuration parameter. |`0.5`|
|`gen_ai.request.max_tokens`| int | optional | Model configuration parameter. |`500`|
|`gen_ai.request.messages`| string | optional | List of dictionaries describing the messages (prompts) sent to the LLM. **[0]**, **[1]**|`"[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"system\", \"content\": [{...}]}]"`|
|`gen_ai.request.presence_penalty`| float | optional | Model configuration parameter. |`0.5`|
|`gen_ai.request.temperature`| float | optional | Model configuration parameter. |`0.1`|
|`gen_ai.request.top_p`| float | optional | Model configuration parameter. |`0.7`|
|`gen_ai.response.tool_calls`| string | optional | The tool calls in the model’s response. **[0]**|`"[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"`|
|`gen_ai.response.text`| string | optional | The text representation of the model’s responses. **[0]**|`"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"`|
|`gen_ai.usage.input_tokens.cached`| int | optional | The number of cached tokens used in the AI input (prompt). |`50`|
|`gen_ai.usage.input_tokens`| int | optional | The number of tokens used in the AI input (prompt). |`10`|
|`gen_ai.usage.output_tokens.reasoning`| int | optional | The number of tokens used for reasoning. |`30`|
|`gen_ai.usage.output_tokens`| int | optional | The number of tokens used in the AI response. |`100`|
|`gen_ai.usage.total_tokens`| int | optional | The total number of tokens used to process the prompt (input and output). |`190`|

**[0]:** As span attributes only allow primitive data types (like `int`, `float`, `boolean`, `string`), this needs to be a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]`, but rather the string `"[{\"foo\": \"bar\"}]"`.

**[1]:** Each item in the list of messages has the format `{role:"", content:""}`, where `role` can be `"user"`, `"assistant"`, or `"system"`, and `content` can either be a string or a list of dictionaries.
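The stringification described in **[0]** can be sketched as follows. This is an illustrative snippet, not SDK source; the `span.set_data` call in the final comment is shown only as an assumed way such an attribute might be attached:

```python
import json

# Messages in their natural form: a list of dictionaries.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Paris?"},
]

# Span attributes only accept primitive types, so serialize the
# list of dictionaries to a single JSON string.
stringified = json.dumps(messages)

# `stringified` is now a plain `str`, safe to set as a span attribute, e.g.:
# span.set_data("gen_ai.request.messages", stringified)
```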
This span represents a request to an AI model or service that generates a response.
Additional attributes on the span:
| Data Attribute | Type | Requirement Level | Description | Example |
| --- | --- | --- | --- | --- |
|`gen_ai.request.available_tools`| string | optional | List of dictionaries describing the available tools. **[0]**|`"[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"`|
|`gen_ai.request.frequency_penalty`| float | optional | Model configuration parameter. |`0.5`|
|`gen_ai.request.max_tokens`| int | optional | Model configuration parameter. |`500`|
|`gen_ai.request.messages`| string | optional | List of dictionaries describing the messages (prompts) sent to the LLM. **[0]**, **[1]**|`"[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"system\", \"content\": [{...}]}]"`|
|`gen_ai.request.presence_penalty`| float | optional | Model configuration parameter. |`0.5`|
|`gen_ai.request.temperature`| float | optional | Model configuration parameter. |`0.1`|
|`gen_ai.request.top_p`| float | optional | Model configuration parameter. |`0.7`|
|`gen_ai.response.tool_calls`| string | optional | The tool calls in the model’s response. **[0]**|`"[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"`|
|`gen_ai.response.text`| string | optional | The text representation of the model’s responses. **[0]**|`"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"`|
|`gen_ai.usage.input_tokens.cached`| int | optional | The number of cached tokens used in the AI input (prompt). |`50`|
|`gen_ai.usage.input_tokens`| int | optional | The number of tokens used in the AI input (prompt). |`10`|
|`gen_ai.usage.output_tokens.reasoning`| int | optional | The number of tokens used for reasoning. |`30`|
|`gen_ai.usage.output_tokens`| int | optional | The number of tokens used in the AI response. |`100`|
|`gen_ai.usage.total_tokens`| int | optional | The total number of tokens used to process the prompt (input and output). |`190`|

**[0]:** As span attributes only allow primitive data types, this needs to be a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]`, but rather the string `"[{\"foo\": \"bar\"}]"`.

**[1]:** Each item in the list has the format `{role:"", content:""}`, where `role` can be `"user"`, `"assistant"`, or `"system"`, and `content` can either be a string or a list of dictionaries.
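Going the other way, a consumer of these attributes can recover the structured data with a standard JSON parse. The attribute value below is a hypothetical example, not output captured from a real span:

```python
import json

# A stringified tool-call attribute as it might appear on a span
# (hypothetical values for illustration).
tool_calls_attr = (
    '[{"name": "random_number", "type": "function_call", '
    '"arguments": "{\\"max\\": 10}"}]'
)

# Parse the string back into a list of dictionaries.
tool_calls = json.loads(tool_calls_attr)

# The nested "arguments" field is itself a JSON string and needs
# a second parse to become a dictionary.
arguments = json.loads(tool_calls[0]["arguments"])
```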
Some attributes are common to all AI Agents spans:
**[0]:** Well-defined values for the data attribute `gen_ai.system`: