@jgordley jgordley commented Oct 1, 2025

This PR adds LangMem-style tool factories that give the agent tools for saving events and searching memories at runtime.

It is modeled on the LangMem tool factories, with the ability to inject the base store at runtime and to use the `actor_id` and `session_id` from the runtime config so that memories are saved and searched according to that config.

Reference implementation: https://github.com/langchain-ai/langmem/blob/main/src/langmem/knowledge/tools.py

# Create the memory save tool using the factory.
# The actor ID and session ID are resolved under the hood at runtime,
# so events are saved to the actor and session of the current invocation.
save_events_to_memory_tool = create_store_event_tool(
    name="save_events_to_memory",
)

# Create the memory search tool using the factory.
# "{actor_id}" is a placeholder filled in from the runtime config; it maps
# to the /facts/{actorId} namespace we defined in AgentCore Memory.
retrieve_past_conversation_facts_tool = create_search_memory_tool(
    namespace=("facts", "{actor_id}"),
    instructions="Retrieve facts and user preferences about the user that might be helpful in answering vague questions",
    name="get_past_conversation_facts",
)

Comes with a sample notebook.
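
The factory pattern described above can be sketched as follows. All names here are illustrative, not the PR's actual implementation: the factory captures static configuration (a namespace template with placeholders, a tool name, instructions) and returns a tool that resolves the placeholders from the runtime config on each call.

```python
# Minimal sketch of a runtime-scoped tool factory (hypothetical names).
from typing import Callable, Dict, Tuple


def create_search_memory_tool(
    namespace: Tuple[str, ...],
    instructions: str,
    name: str,
) -> Callable[[str, Dict[str, str]], dict]:
    def search_tool(query: str, runtime_config: Dict[str, str]) -> dict:
        # Resolve placeholders like "{actor_id}" against the runtime config,
        # so the same tool instance is scoped per actor at invocation time.
        resolved = tuple(part.format(**runtime_config) for part in namespace)
        # A real tool would query the injected store here; we return the
        # resolved call parameters to show the scoping behavior.
        return {"tool": name, "namespace": resolved, "query": query}

    search_tool.__name__ = name
    search_tool.__doc__ = instructions
    return search_tool


tool = create_search_memory_tool(
    namespace=("facts", "{actor_id}"),
    instructions="Retrieve facts about the user",
    name="get_past_conversation_facts",
)
print(tool("favorite color", {"actor_id": "user-123"}))
# → {'tool': 'get_past_conversation_facts', 'namespace': ('facts', 'user-123'), 'query': 'favorite color'}
```

The key design point, per the description, is that the agent author only supplies the template; the actor and session identity never appear in the tool call itself.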

@bergjaak

My thinking is: we designed our system so that CreateEvent should always be called for all LLM messages, and the decision to extract a memory record from a message belongs to our service backend. It is more straightforward for the customer if we simply tell them "store all messages." If we take that stance, it implies we should not offer a tool for calling CreateEvent, since such a tool would nudge customers toward the anti-pattern of not storing all events.
