
Commit 58f01a0

reformat

1 parent 5238db7

File tree

1 file changed: +32 additions, −32 deletions

develop-docs/sdk/telemetry/traces/modules/ai-agents.mdx

Lines changed: 32 additions & 32 deletions
@@ -22,22 +22,22 @@ Describes AI agent invocation.
Additional attributes on the span:

| Data Attribute                         | Type   | Requirement Level | Description                                                                              | Example |
| :------------------------------------- | :----- | :---------------- | :--------------------------------------------------------------------------------------- | :------ |
| `gen_ai.request.available_tools`       | string | optional          | List of dictionaries describing the available tools. **[0]**                             | `"[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"` |
| `gen_ai.request.frequency_penalty`     | float  | optional          | Model configuration parameter.                                                           | `0.5`   |
| `gen_ai.request.max_tokens`            | int    | optional          | Model configuration parameter.                                                           | `500`   |
| `gen_ai.request.messages`              | string | optional          | List of dictionaries describing the messages (prompts) sent to the LLM. **[0]**, **[1]** | `"[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"system\", \"content\": [{...}]}]"` |
| `gen_ai.request.presence_penalty`      | float  | optional          | Model configuration parameter.                                                           | `0.5`   |
| `gen_ai.request.temperature`           | float  | optional          | Model configuration parameter.                                                           | `0.1`   |
| `gen_ai.request.top_p`                 | float  | optional          | Model configuration parameter.                                                           | `0.7`   |
| `gen_ai.response.tool_calls`           | string | optional          | The tool calls in the model’s response. **[0]**                                          | `"[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"` |
| `gen_ai.response.text`                 | string | optional          | The text representation of the model’s responses. **[0]**                                | `"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"` |
| `gen_ai.usage.input_tokens.cached`     | int    | optional          | The number of cached tokens used in the AI input (prompt).                               | `50`    |
| `gen_ai.usage.input_tokens`            | int    | optional          | The number of tokens used in the AI input (prompt).                                      | `10`    |
| `gen_ai.usage.output_tokens.reasoning` | int    | optional          | The number of tokens used for reasoning.                                                 | `30`    |
| `gen_ai.usage.output_tokens`           | int    | optional          | The number of tokens used in the AI response.                                            | `100`   |
| `gen_ai.usage.total_tokens`            | int    | optional          | The total number of tokens used to process the prompt (input and output).                | `190`   |
- **[0]:** As span attributes only allow primitive data types (like `int`, `float`, `boolean`, `string`), this needs to be a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `"[{\"foo\": \"bar\"}]"`.
- **[1]:** Each item in the list of messages has the format `{role:"", content:""}` where `role` can be `"user"`, `"assistant"`, or `"system"`, and `content` can either be a string or a list of dictionaries.
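The stringification rule in **[0]** can be sketched in Python with `json.dumps`; the `messages` list below is a made-up example in the `{role, content}` format from **[1]**, and the `span.set_data(...)` call mentioned in the comment is only an illustration of where such a string would typically go, not a prescribed API:

```python
import json

# Example messages in the {role, content} format from note [1].
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Paris?"},
]

# Span attributes only allow primitive types (note [0]), so the list
# of dictionaries must be serialized to a single JSON string before
# it is set as the attribute value.
messages_attr = json.dumps(messages)

# messages_attr is now a plain string such as
# '[{"role": "system", "content": "..."}, ...]', suitable for
# something like span.set_data("gen_ai.request.messages", messages_attr)
# in an SDK that exposes such a setter.
```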
@@ -52,22 +52,22 @@ This span represents a request to an AI model or service that generates a respon
Additional attributes on the span:

| Data Attribute                         | Type   | Requirement Level | Description                                                                              | Example |
| :------------------------------------- | :----- | :---------------- | :--------------------------------------------------------------------------------------- | :------ |
| `gen_ai.request.available_tools`       | string | optional          | List of dictionaries describing the available tools. **[0]**                             | `"[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"` |
| `gen_ai.request.frequency_penalty`     | float  | optional          | Model configuration parameter.                                                           | `0.5`   |
| `gen_ai.request.max_tokens`            | int    | optional          | Model configuration parameter.                                                           | `500`   |
| `gen_ai.request.messages`              | string | optional          | List of dictionaries describing the messages (prompts) sent to the LLM. **[0]**, **[1]** | `"[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"system\", \"content\": [{...}]}]"` |
| `gen_ai.request.presence_penalty`      | float  | optional          | Model configuration parameter.                                                           | `0.5`   |
| `gen_ai.request.temperature`           | float  | optional          | Model configuration parameter.                                                           | `0.1`   |
| `gen_ai.request.top_p`                 | float  | optional          | Model configuration parameter.                                                           | `0.7`   |
| `gen_ai.response.tool_calls`           | string | optional          | The tool calls in the model’s response. **[0]**                                          | `"[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"` |
| `gen_ai.response.text`                 | string | optional          | The text representation of the model’s responses. **[0]**                                | `"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"` |
| `gen_ai.usage.input_tokens.cached`     | int    | optional          | The number of cached tokens used in the AI input (prompt).                               | `50`    |
| `gen_ai.usage.input_tokens`            | int    | optional          | The number of tokens used in the AI input (prompt).                                      | `10`    |
| `gen_ai.usage.output_tokens.reasoning` | int    | optional          | The number of tokens used for reasoning.                                                 | `30`    |
| `gen_ai.usage.output_tokens`           | int    | optional          | The number of tokens used in the AI response.                                            | `100`   |
| `gen_ai.usage.total_tokens`            | int    | optional          | The total number of tokens used to process the prompt (input and output).                | `190`   |
- **[0]:** As span attributes only allow primitive data types, this needs to be a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `"[{\"foo\": \"bar\"}]"`.
- **[1]:** Each item in the list has the format `{role:"", content:""}` where `role` can be `"user"`, `"assistant"`, or `"system"`, and `content` can either be a string or a list of dictionaries.
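As a sketch of how the `gen_ai.usage.*` attributes fit together: the `usage` dict below is a hypothetical client response (the numbers mirror the example column in the table above; the key names are invented), and the total is taken as reported by the client rather than recomputed, since what the provider counts into the total can vary:

```python
# Hypothetical token-usage numbers as an LLM client might report them.
usage = {
    "input_tokens": 10,
    "cached_input_tokens": 50,
    "output_tokens": 100,
    "reasoning_tokens": 30,
    "total_tokens": 190,
}

# Map the reported numbers onto the gen_ai.usage.* span attributes
# from the table above. All values are plain ints, so they can be
# set directly as span attributes.
attributes = {
    "gen_ai.usage.input_tokens": usage["input_tokens"],
    "gen_ai.usage.input_tokens.cached": usage["cached_input_tokens"],
    "gen_ai.usage.output_tokens": usage["output_tokens"],
    "gen_ai.usage.output_tokens.reasoning": usage["reasoning_tokens"],
    # Use the provider-reported total instead of recomputing it.
    "gen_ai.usage.total_tokens": usage["total_tokens"],
}
```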
@@ -112,8 +112,8 @@ Some attributes are common to all AI Agents spans:
**[0]** Well defined values for data attribute `gen_ai.system`:

| Value               | Description        |
| :------------------ | :----------------- |
| `"anthropic"`       | Anthropic          |
| `"aws.bedrock"`     | AWS Bedrock        |
| `"az.ai.inference"` | Azure AI Inference |
@@ -132,8 +132,8 @@ Some attributes are common to all AI Agents spans:
**[1]** Well defined values for data attribute `gen_ai.operation.name`:

| Value                | Description                                               |
| :------------------- | :-------------------------------------------------------- |
| `"chat"`             | Chat completion operation such as OpenAI Chat API         |
| `"create_agent"`     | Create GenAI agent                                        |
| `"embeddings"`       | Embeddings operation such as OpenAI Create embeddings API |
