Replies: 1 comment 1 reply
-
It sounds like you're dealing with an issue where the LLM handles sensitive parameters unpredictably. To fix this, consider separating the handling of sensitive parameters from the LLM's responsibilities. Instead of relying on the LLM to pass these critical params, set up a mechanism where they are injected into the tool calls programmatically after the LLM determines the intent or the tool to use. This way, you keep control over the sensitive data and only rely on the LLM for the less critical parts of the request.
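A minimal, framework-agnostic sketch of that pattern in Python (the `SENSITIVE_CONTEXT` dict, the `inject_sensitive` decorator, and the `fetch_orders` tool are all hypothetical names for illustration — in a real Agno/MCP setup the trusted values would come from your request's Context block, and you would hook this in wherever your framework dispatches tool calls):

```python
import functools

# Hypothetical trusted values; in a real app these come from the request's
# Context block, never from the model.
SENSITIVE_CONTEXT = {"tenant_id": "acme-corp", "api_key": "secret-123"}

def inject_sensitive(*param_names):
    """Wrap a tool so the listed params are filled from trusted context.

    Whatever the LLM supplies for these params is discarded and overwritten,
    so the model only ever controls the non-sensitive arguments.
    """
    def decorator(tool_fn):
        @functools.wraps(tool_fn)
        def wrapper(**kwargs):
            for name in param_names:
                kwargs[name] = SENSITIVE_CONTEXT[name]  # trusted source wins
            return tool_fn(**kwargs)
        return wrapper
    return decorator

@inject_sensitive("tenant_id", "api_key")
def fetch_orders(query: str, tenant_id: str = "", api_key: str = ""):
    # Illustrative tool body; a real tool would call your backend here.
    return f"orders for {tenant_id!r} matching {query!r}"

# Even if the model hallucinates a tenant_id, the injected value overrides it.
print(fetch_orders(query="pending", tenant_id="llm-hallucinated"))
```

A further hardening step is to remove the sensitive parameters from the JSON schema the LLM sees, so the model is never asked to produce them at all.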
-
I'm using a multi-agent setup (AgentOS → Team → Agents → MCP tools).
Each request contains a Context block with some mandatory params that must be passed in every tool call.
Current Approach
Example Request
Question
Has anyone handled this scenario, where sensitive params are passed via context but the LLM handles them unreliably?
What's the best way to guarantee these params are preserved correctly without depending on the LLM?
Does Agno provide a way to pass these sensitive params manually, rather than depending on the LLM for them, while still letting the LLM construct the non-sensitive args itself?