ADK supports various types of tools:
* **Google Search**: Allows the agent to perform web searches using Google Search, compatible with Gemini 2 models (see the sketch after this list).
* **Code Execution**: Enables the agent to execute code using the `built_in_code_execution` tool, typically with Gemini 2 models, for calculations or data manipulation.
* **Vertex AI Search**: Uses Google Cloud Vertex AI Search for agents to search across private, configured data stores.
* **GKE Code Executor (`GkeCodeExecutor`)**: Provides a secure and scalable method for running LLM-generated code by leveraging a gVisor-sandboxed GKE environment. It creates ephemeral, isolated Kubernetes Jobs for each execution request (a configuration sketch follows this list).
* **Limitations**: Currently, each root agent or single agent supports only one built-in tool, and no other tools of any type can be used in the same agent. Built-in tools are also not supported within sub-agents.
* **Third-Party Tools**: Integrates tools from other AI Agent frameworks like CrewAI and LangChain, enabling faster development and reuse of existing tools.
* **LangChain Tools**: Uses the `LangchainTool` wrapper to integrate tools from the LangChain ecosystem (e.g., the Tavily search tool), as sketched below.
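To make the built-in tool pattern concrete, here is a minimal sketch attaching Google Search to an agent. `google_search` comes from `google.adk.tools`; the agent name and model id are illustrative placeholders:

```python
from google.adk.agents import LlmAgent
from google.adk.tools import google_search

# A root agent grounded with Google Search. Per the limitation noted above,
# no other tools can be attached alongside a built-in tool.
search_agent = LlmAgent(
    name="search_assistant",   # placeholder name
    model="gemini-2.0-flash",  # assumed Gemini 2 model id
    instruction="Answer the user's questions, searching the web when helpful.",
    tools=[google_search],
)
```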
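The GKE executor is wired in through the agent's `code_executor` field rather than `tools`. A rough sketch, assuming `GkeCodeExecutor` is importable from `google.adk.code_executors` and accepts a Kubernetes `namespace` argument (both assumptions; consult the ADK reference for the exact constructor):

```python
from google.adk.agents import LlmAgent
from google.adk.code_executors import GkeCodeExecutor  # assumed import path

coding_agent = LlmAgent(
    name="gke_coder",
    model="gemini-2.0-flash",
    instruction="Write and execute Python code to solve the user's task.",
    # Each execution request runs as an ephemeral, gVisor-sandboxed
    # Kubernetes Job in this namespace (parameter name assumed).
    code_executor=GkeCodeExecutor(namespace="agent-sandbox"),
)
```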
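For the third-party path, a minimal sketch wrapping LangChain's Tavily search tool with `LangchainTool`; it assumes the `langchain_community` package is installed and a Tavily API key is configured in the environment:

```python
from google.adk.agents import LlmAgent
from google.adk.tools.langchain_tool import LangchainTool
from langchain_community.tools import TavilySearchResults

# Wrap the LangChain tool so ADK can invoke it like any native tool.
adk_tavily = LangchainTool(tool=TavilySearchResults(max_results=5))

research_agent = LlmAgent(
    name="researcher",
    model="gemini-2.0-flash",
    instruction="Research the user's question on the web.",
    tools=[adk_tavily],
)
```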
Meaningful, multi-turn conversations require agents to understand context. ADK manages this through Session, State, and Memory.
* **State (`session.state`)**: A dictionary or Map within each Session for storing and updating dynamic details needed during the conversation. It holds serializable key-value pairs, and its persistence depends on the SessionService. State can be organized using prefixes: no prefix for session-specific state, `user:` for user-specific state across sessions, `app:` for app-wide state, and `temp:` for temporary state that is not persisted. State should be updated by adding an Event to the session history via `session_service.append_event()`, either through `output_key` for agent text responses or `EventActions.state_delta` for complex updates.
* **Memory**: A searchable store of information that can span multiple past sessions or include external data sources. MemoryService defines the interface for managing this long-term knowledge, handling ingestion of session information (`add_session_to_memory`) and searching (`search_memory`). Implementations include InMemoryMemoryService (in-memory, non-persistent) and VertexAiRagMemoryService (persistent, leverages a Vertex AI RAG Corpus).
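A minimal sketch of this interface using the non-persistent implementation; the app name, user id, and query are placeholders, and the `session` object is assumed to come from a SessionService:

```python
from google.adk.memory import InMemoryMemoryService

memory_service = InMemoryMemoryService()

async def remember_and_recall(session):
    # Ingest a completed session into long-term memory,
    # then search it from a later conversation.
    await memory_service.add_session_to_memory(session)
    response = await memory_service.search_memory(
        app_name="my_app", user_id="user-1", query="project deadline"
    )
    for entry in response.memories:
        print(entry)
```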
#### **Accessing State in Instructions**
`LlmAgent` instructions can directly inject session state values using `{key}` templating. The framework replaces the placeholder with the value from `session.state` before sending the instruction to the LLM; a short sketch follows the notes below.
* **Syntax**: `{key}` for required keys, `{key?}` for optional keys.
* **Bypassing Injection**: To use literal `{{` and `}}`, provide the instruction as a function (an `InstructionProvider`) instead of a string. The `InstructionProvider` receives a `ReadonlyContext` object.
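A short sketch of both styles, assuming session state holds a `user_name` key and an optional `tone` key (placeholder names, as is the model id):

```python
from google.adk.agents import LlmAgent
from google.adk.agents.readonly_context import ReadonlyContext

# Templated string: {user_name} must exist in state, {tone?} may be absent.
greeter = LlmAgent(
    name="greeter",
    model="gemini-2.0-flash",
    instruction="Greet {user_name} in a {tone?} tone.",
)

# InstructionProvider: the returned string is used verbatim, so braces
# are never treated as state placeholders.
def literal_instruction(ctx: ReadonlyContext) -> str:
    return 'Always answer with a JSON object shaped like {"answer": "..."}.'

json_agent = LlmAgent(
    name="json_agent",
    model="gemini-2.0-flash",
    instruction=literal_instruction,
)
```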
#### **Updating State**
State should be updated as part of an `Event` to ensure tracking and persistence.
1. **`output_key`**: The simplest method for an `LlmAgent`. The agent's final text response is automatically saved to `session.state[output_key]`.
2. **`EventActions.state_delta`**: For complex updates, manually construct a dictionary of changes and assign it to the `state_delta` of an `EventActions` object when creating an `Event`.
3. **`CallbackContext` or `ToolContext`**: The recommended method within callbacks and tools. Directly modify the `state` attribute on the provided context object (e.g., `tool_context.state['my_key'] = 'new_value'`). The framework automatically captures these changes and includes them in the event's `state_delta`.
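A sketch of all three methods; the model id, state keys, and the surrounding `session_service` and `session` objects are illustrative assumptions:

```python
from google.adk.agents import LlmAgent
from google.adk.events import Event, EventActions
from google.adk.tools import ToolContext

# 1. output_key: the agent's final text reply lands in session.state["capital"].
capital_agent = LlmAgent(
    name="capital_finder",
    model="gemini-2.0-flash",
    instruction="Name only the capital of {country}.",
    output_key="capital",
)

# 2. EventActions.state_delta: manual, multi-key updates appended as an Event.
async def mark_done(session_service, session):
    event = Event(
        invocation_id="inv-123",  # placeholder id
        author="system",
        actions=EventActions(
            state_delta={"task_status": "done", "app:run_count": 2}
        ),
    )
    await session_service.append_event(session, event)

# 3. Context objects: mutations inside a tool are captured automatically
#    into the resulting event's state_delta.
def save_preference(preference: str, tool_context: ToolContext) -> dict:
    """Stores a user-scoped preference in session state."""
    tool_context.state["user:preference"] = preference
    return {"status": "saved"}
```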
#### **Events**
Events are the fundamental units of information flow within ADK, representing every significant occurrence during an agent's interaction lifecycle. An Event is an immutable record capturing user messages, agent replies, tool requests, tool results, state changes, control signals, and errors. Events are central for communication, signaling state/artifact changes, controlling flow, and providing history. Events can be identified by `event.author` (e.g., 'user', 'AgentName'), `event.content` (text, tool call, tool result), and `event.partial` for streaming output. Key information can be extracted from `event.content.parts[0].text` for text, `event.get_function_calls()` for tool calls, and `event.get_function_responses()` for tool results. The `event.actions` object signals changes and side effects, including `state_delta`, `artifact_delta`, `transfer_to_agent`, `escalate`, and `skip_summarization`. `event.is_final_response()` is a helper to identify complete, user-facing responses.
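For example, a consumer of the event stream might inspect events like this; the `runner` and `session` objects are assumed to exist already (e.g., created via `Runner` and an `InMemorySessionService`):

```python
from google.genai import types

async def chat(runner, session):
    message = types.Content(role="user", parts=[types.Part(text="Hello")])
    async for event in runner.run_async(
        user_id="user-1", session_id=session.id, new_message=message
    ):
        if event.get_function_calls():
            print(f"{event.author} requested tool:",
                  event.get_function_calls()[0].name)
        if event.actions and event.actions.state_delta:
            print("state changed:", event.actions.state_delta)
        if event.is_final_response() and event.content and event.content.parts:
            print(f"{event.author}: {event.content.parts[0].text}")
```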
ADK agents can be deployed to various environments based on production needs or custom requirements.
### **7. Evaluation and Safety**
#### **Callbacks**
Callbacks are functions that hook into an agent's execution lifecycle, allowing for observation, customization, and control. They are associated with an agent at creation.
* **Context Objects**: Callbacks receive `CallbackContext` or `ToolContext`, providing access to session state and runtime information.
* **Control Flow**:
* **`return None` (or `Optional.empty()` in Java)**: Allows the default ADK behavior to proceed.
* **`return <Specific Object>`**: Overrides the default behavior. For example, returning an `LlmResponse` from `before_model_callback` skips the LLM call and uses the returned object as the response. Returning a `dict` from `before_tool_callback` skips the tool execution and uses the dictionary as the tool's result. This is the core mechanism for implementing guardrails and custom logic.
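A sketch of a guardrail built on this mechanism; the keyword policy and model id are placeholders, while the callback signature follows the ADK Python API:

```python
from typing import Optional

from google.adk.agents import LlmAgent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest, LlmResponse
from google.genai import types

def block_forbidden_topics(
    callback_context: CallbackContext, llm_request: LlmRequest
) -> Optional[LlmResponse]:
    """Skips the LLM call when the latest message mentions a banned keyword."""
    last_text = ""
    if llm_request.contents and llm_request.contents[-1].parts:
        last_text = llm_request.contents[-1].parts[0].text or ""
    if "forbidden" in last_text.lower():  # placeholder policy check
        # Returning an LlmResponse overrides the model call entirely.
        return LlmResponse(
            content=types.Content(
                role="model",
                parts=[types.Part(text="Sorry, I can't help with that topic.")],
            )
        )
    return None  # proceed with the normal LLM call

guarded_agent = LlmAgent(
    name="guarded_agent",
    model="gemini-2.0-flash",
    instruction="Answer the user's questions.",
    before_model_callback=block_forbidden_topics,
)
```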
---
#### **Why Evaluate Agents?**
Evaluating agents is crucial for ensuring they operate safely, securely, and align with brand values. Traditional software testing is insufficient due to the probabilistic nature of LLM agents. Evaluation involves assessing the quality of both the final output and the agent's trajectory (sequence of steps).