Conversation


@Pouyanpi Pouyanpi commented Sep 10, 2025

Description

langchain-community models support the .bind() method

  • All models inherit from the langchain-core Runnable interface
  • .bind() is universally available across all LangChain packages

langchain-core (contains the Runnable interface with .bind())
├── langchain-openai (inherits .bind())
├── langchain-community (inherits .bind())
├── langchain-anthropic (inherits .bind())
└── langchain-* (all inherit .bind())
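
Because .bind() lives on the core Runnable interface, any of these models accepts per-call parameters without being mutated. A minimal sketch (the concrete model class and parameter values here are illustrative assumptions, not part of this PR):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# .bind() returns a new RunnableBinding carrying the kwargs;
# the original `llm` object is left untouched.
bound = llm.bind(temperature=0.1, max_tokens=256)

result = bound.invoke("Say hello in one word.")
```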

Related Issue(s)

#1408

Checklist

  • I've read the CONTRIBUTING guidelines.
  • I've updated the documentation if applicable.
  • I've added tests if applicable.
  • @mentions of the person or team responsible for reviewing proposed changes.

@Pouyanpi Pouyanpi changed the title Feat/llm params feat(llm): pass llm params directly Sep 10, 2025
@Pouyanpi Pouyanpi self-assigned this Sep 10, 2025
@Pouyanpi Pouyanpi added the enhancement New feature or request label Sep 10, 2025
@Pouyanpi Pouyanpi added this to the v0.17.0 milestone Sep 10, 2025
@Pouyanpi Pouyanpi marked this pull request as ready for review September 15, 2025 06:46
@Pouyanpi Pouyanpi force-pushed the feat/tool-calling-input branch from 0358cd7 to 3240dc9 on September 15, 2025 09:35
@Pouyanpi Pouyanpi force-pushed the feat/tool-calling-input branch 2 times, most recently from 21e33e2 to 2f57ec4 on September 15, 2025 09:46
@Pouyanpi Pouyanpi force-pushed the feat/tool-calling-input branch from 2f57ec4 to ed234d6 on September 15, 2025 09:54
@Pouyanpi Pouyanpi force-pushed the feat/llm-params branch 2 times, most recently from d25c548 to ef88f7f on September 15, 2025 09:55
@Pouyanpi Pouyanpi force-pushed the feat/tool-calling-input branch from ed234d6 to 4c34032 on September 15, 2025 10:05
@Pouyanpi Pouyanpi requested a review from Copilot September 15, 2025 10:36
Copilot AI left a comment

Pull Request Overview

This PR migrates from using context managers for LLM parameter management to passing parameters directly to the llm_call function. The change leverages LangChain's universal .bind() method to pass parameters like temperature and max_tokens directly to LLM models without temporarily modifying their state.

Key changes:

  • Added llm_params parameter to llm_call function for direct parameter passing
  • Replaced all uses of the with llm_params(...) context manager with direct parameter passing (see the sketch below)
  • Updated tests to cover the new parameter passing approach
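
In code, the migration reads roughly as follows (a hedged sketch: the exact llm_call signature lives in nemoguardrails/actions/llm/utils.py, and llm and prompt are placeholder names):

```python
# Before: the llm_params context manager temporarily mutated the shared
# model object for the duration of the call.
#
# with llm_params(llm, temperature=0.0, max_tokens=100):
#     result = await llm_call(llm, prompt)

# After: parameters travel with the individual call; under the hood the
# PR applies LangChain's .bind(), so the shared model is never mutated.
result = await llm_call(
    llm,
    prompt,
    llm_params={"temperature": 0.0, "max_tokens": 100},
)
```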

Reviewed Changes

Copilot reviewed 21 out of 21 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| nemoguardrails/actions/llm/utils.py | Added the llm_params parameter to llm_call and implemented LLM binding |
| tests/test_tool_calling_utils.py | Added comprehensive tests for the new parameter-passing functionality |
| tests/test_llm_params_e2e.py | New end-to-end tests for LLM parameter functionality with real providers |
| tests/test_llm_params.py | Added migration tests comparing the context-manager and direct-parameter approaches |
| Various action files | Updated all LLM calls to use direct parameter passing instead of context managers |
| docs/user-guides/advanced/prompt-customization.md | Updated the documentation example to show the new parameter-passing syntax |


@Pouyanpi Pouyanpi force-pushed the feat/tool-calling-input branch from 4c34032 to c8ff064 on September 15, 2025 11:01
@tgasser-nv tgasser-nv left a comment

Looks good, but 4k LOC is too large for a single PR.

I'm a little confused about a few things:

  • Why did we use a context-manager to pass a dict of LLM parameters in the first place? Normally they're used to make sure we close files/DB connections so we don't forget.
  • Does a context-manager break some Langchain functionality?
  • Can you add some local integration tests to make sure this works calling tools with production LLMs?

@Pouyanpi Pouyanpi force-pushed the feat/tool-calling-input branch from c8ff064 to 5792bea on September 22, 2025 09:11
Base automatically changed from feat/tool-calling-input to develop September 22, 2025 09:39
feat(llm): add llm_params option to llm_call

Extend llm_call to accept an optional llm_params dictionary for passing
configuration parameters (e.g., temperature, max_tokens) to the language
model. This enables more flexible control over LLM behavior during calls.

refactor(llm): replace llm_params context manager with argument

Update all usages of the llm_params context manager to pass llm_params as
an argument to llm_call instead. This simplifies parameter handling and
improves code clarity for LLM calls.

docs: clarify prompt customization and llm_params usage

update LLMChain config usage

add unit and e2e tests

fix failing tests
@github-actions commented

Documentation preview

https://nvidia-nemo.github.io/Guardrails/review/pr-1387

@trebedea commented

> Looks good, but 4k LOC is too large for a single PR.
>
> I'm a little confused about a few things:
>
> • Why did we use a context-manager to pass a dict of LLM parameters in the first place? Normally they're used to make sure we close files/DB connections so we don't forget.
> • Does a context-manager break some Langchain functionality?
> • Can you add some local integration tests to make sure this works calling tools with production LLMs?

The context manager was a hack to provide early support for running LLM calls for the same LangChain model with different parameters. I guess the binding support in LangChain did not exist back then, and @drazvan used a context manager to add different parameters to each call. This worked fine when calls were serial, but I think the mechanism was breaking when we implemented parallel rails.

At that point I realized that LangChain supports binding precisely for this use case: running different generations with different parameters by binding them to each call. So I talked with @Pouyanpi about doing this cleanly now, with the binding support from LangChain.

Hope this helps.
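
To make the serial-versus-parallel point concrete, here is a minimal, hypothetical sketch (plain asyncio, not NeMo Guardrails code) of why mutating a shared model object breaks once two rails run concurrently:

```python
import asyncio

class SharedLLM:
    """Stand-in for a shared model whose attributes a context manager mutates."""
    temperature = 0.0

async def rail(llm: SharedLLM, temperature: float) -> float:
    llm.temperature = temperature  # context-manager style: mutate shared state
    await asyncio.sleep(0)         # yield control; the other rail runs here
    return llm.temperature         # may observe the other rail's setting

async def main() -> None:
    llm = SharedLLM()
    # Both rails can read 0.9: the first rail's temperature was clobbered.
    print(await asyncio.gather(rail(llm, 0.1), rail(llm, 0.9)))

asyncio.run(main())
```

With .bind(), each call instead carries its own parameters in a lightweight wrapper, so there is no shared state to clobber.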

@trebedea trebedea left a comment

Looks good!

@Pouyanpi Pouyanpi merged commit 55c9641 into develop Sep 23, 2025
10 checks passed
@Pouyanpi Pouyanpi deleted the feat/llm-params branch September 23, 2025 16:15
Pouyanpi added a commit that referenced this pull request Sep 26, 2025

Pouyanpi added a commit that referenced this pull request Oct 1, 2025

tgasser-nv pushed a commit that referenced this pull request Oct 14, 2025

tgasser-nv pushed a commit that referenced this pull request Oct 14, 2025

tgasser-nv pushed a commit that referenced this pull request Oct 14, 2025

tgasser-nv pushed a commit that referenced this pull request Oct 28, 2025

Labels

enhancement New feature or request

5 participants