feat(llm): pass llm params directly #1387
Conversation
Pull Request Overview
This PR migrates from using context managers for LLM parameter management to passing parameters directly to the llm_call function. The change leverages LangChain's universal .bind() method to pass parameters like temperature and max_tokens directly to LLM models without temporarily modifying their state.
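The idea can be sketched with a stub model in place of a real LangChain class. All names below (`StubLLM`, the exact `llm_call` body) are illustrative stand-ins, not the actual nemoguardrails implementation; the point is that `.bind()` returns a new runnable carrying the per-call parameters while the shared model instance is never modified:

```python
class StubLLM:
    """Stands in for a LangChain chat model; .bind() mirrors Runnable.bind()."""

    def __init__(self, **params):
        self.params = dict(params)

    def bind(self, **kwargs):
        # Like LangChain's Runnable.bind(): return a NEW object carrying the
        # extra kwargs; the original model is left untouched.
        return StubLLM(**{**self.params, **kwargs})

    def invoke(self, prompt):
        return f"{prompt} [params={sorted(self.params.items())}]"


def llm_call(llm, prompt, llm_params=None):
    # Sketch of the new signature: if per-call params are given, bind them
    # instead of temporarily mutating the shared model instance.
    if llm_params:
        llm = llm.bind(**llm_params)
    return llm.invoke(prompt)


model = StubLLM(temperature=0.7)
out = llm_call(model, "hello", llm_params={"temperature": 0.0, "max_tokens": 10})
# The shared model keeps its original parameters after the call.
```

With a real LangChain model the same pattern would be `llm.bind(temperature=0.0, max_tokens=10).ainvoke(...)`, which is what makes the change safe under concurrent calls.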
Key changes:
- Added `llm_params` parameter to the `llm_call` function for direct parameter passing
- Replaced all `with llm_params(...)` context manager usage with direct parameter passing
- Updated tests to cover the new parameter passing approach
Reviewed Changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| `nemoguardrails/actions/llm/utils.py` | Added `llm_params` parameter to `llm_call` and implemented LLM binding |
| `tests/test_tool_calling_utils.py` | Added comprehensive tests for the new parameter passing functionality |
| `tests/test_llm_params_e2e.py` | New end-to-end tests for LLM parameter functionality with real providers |
| `tests/test_llm_params.py` | Added migration tests comparing the context manager with the direct parameter approach |
| Various action files | Updated all LLM calls to use direct parameter passing instead of context managers |
| `docs/user-guides/advanced/prompt-customization.md` | Updated documentation example to show the new parameter passing syntax |
tgasser-nv
left a comment
Looks good, though 4k LOC is too large for a single PR.
I'm a little confused about a few things:
- Why did we use a context-manager to pass a dict of LLM parameters in the first place? Normally they're used to make sure we close files/DB connections so we don't forget.
- Does a context-manager break some Langchain functionality?
- Can you add some local integration tests to make sure this works calling tools with production LLMs?
- feat(llm): add llm_params option to llm_call. Extend `llm_call` to accept an optional `llm_params` dictionary for passing configuration parameters (e.g., temperature, max_tokens) to the language model. This enables more flexible control over LLM behavior during calls.
- refactor(llm): replace llm_params context manager with argument. Update all usages of the `llm_params` context manager to pass `llm_params` as an argument to `llm_call` instead. This simplifies parameter handling and improves code clarity for LLM calls.
- docs: clarify prompt customization and llm_params usage; update LLMChain config usage.
- tests: add unit and e2e tests; fix failing tests.
The context manager was a hack to provide early support for running LLM calls on the same LangChain model with different parameters. I guess the binding support in LangChain did not exist back then and @drazvan used a context manager to add different parameters to each call. This worked fine when calls were serial, but I think the mechanism was breaking when we implemented parallel rails. At that point I realized that LangChain supports binding specifically for running different generations with different parameters, by binding them to a call. And I talked with @Pouyanpi about doing this cleanly now, with the binding support from LangChain. Hope this helps.
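The failure mode described above can be reproduced deterministically with stubs (all class and function names here are illustrative, not the real nemoguardrails code): a context manager that patches a shared model leaks parameters between interleaved calls, while `.bind()`-style copies cannot interfere with each other.

```python
from contextlib import contextmanager

class SharedLLM:
    """Stand-in for a model instance shared by all rails."""
    def __init__(self):
        self.params = {"temperature": 0.7}
    def bind(self, **kwargs):
        # New approach: each caller gets its own copy; no shared mutation.
        clone = SharedLLM()
        clone.params = {**self.params, **kwargs}
        return clone

@contextmanager
def llm_params_cm(llm, **kwargs):
    # Old approach: temporarily patch the shared instance, restore on exit.
    saved = dict(llm.params)
    llm.params.update(kwargs)
    try:
        yield llm
    finally:
        llm.params = saved

llm = SharedLLM()

# Simulate two "parallel" rails whose context managers interleave:
cm_a = llm_params_cm(llm, temperature=0.0)
cm_b = llm_params_cm(llm, temperature=1.0)
cm_a.__enter__()
cm_b.__enter__()
temp_seen_by_a = llm.params["temperature"]  # rail A now sees rail B's value
cm_b.__exit__(None, None, None)
cm_a.__exit__(None, None, None)

# With binding, each rail holds an independent parameterization:
bound_a = llm.bind(temperature=0.0)
bound_b = llm.bind(temperature=1.0)
```

Rail A observes `temperature == 1.0` inside its own context manager, which is exactly the cross-talk that serial execution happened to hide.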
trebedea
left a comment
Looks good!
Description
langchain-community models support the `.bind()` method:

langchain-core (contains the Runnable interface with .bind())
├── langchain-openai (inherits .bind())
├── langchain-community (inherits .bind())
├── langchain-anthropic (inherits .bind())
└── langchain-* (all inherit .bind()?)
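The inheritance claim in the diagram can be sketched with stand-in classes (these are not the real langchain-core types, just an illustration of the structure): `.bind()` is defined once on the base runnable class, so every provider package's model class gets it for free.

```python
class RunnableBase:
    """Plays the role of langchain_core.runnables.Runnable."""
    def bind(self, **kwargs):
        # Returns a wrapper carrying the bound kwargs, analogous to
        # LangChain's RunnableBinding.
        return BoundRunnable(self, kwargs)

class BoundRunnable(RunnableBase):
    def __init__(self, inner, kwargs):
        self.inner = inner
        self.kwargs = kwargs

class OpenAIModel(RunnableBase):     # like a langchain-openai chat model
    pass

class AnthropicModel(RunnableBase):  # like a langchain-anthropic chat model
    pass

# Neither subclass defines bind(); both inherit it from the base class.
bound = OpenAIModel().bind(max_tokens=5)
```

This is why the PR can rely on `.bind()` uniformly across providers: in real LangChain it lives on `Runnable` in langchain-core, which the provider packages subclass.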
Related Issue(s)
#1408
Checklist