
Conversation

@Pouyanpi
Collaborator

Description

When a main LLM model object is provided directly to the LLMRails constructor while the YAML config has an empty models list, the system throws an IndexError:

File "/nemoguardrails/llm/prompts.py", line 142, in get_task_model
    return _models[0]
           ~~~~~~~^^^
IndexError: list index out of range

Root Cause

The get_task_model function in prompts.py was attempting to access the first element of the _models list without checking if it was empty. This occurred when:

  1. An LLM was provided via the constructor (not in config)
  2. The config had an empty models list (models: [])
  3. The system tried to find a model for prompt selection
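The failing path can be sketched as follows. This is a simplified stand-in for the pre-fix lookup in nemoguardrails/llm/prompts.py, not the actual code: the real get_task_model has more filtering logic, but the failure mode is the same.

```python
# Simplified stand-in for the pre-fix lookup (illustrative names, not the
# actual nemoguardrails implementation).

def get_task_model_buggy(models, model_type="main"):
    # Keep only models whose type matches; with "models: []" this is empty.
    _models = [m for m in models if m.get("type") == model_type]
    return _models[0]  # raises IndexError when _models is empty

try:
    get_task_model_buggy([])  # empty models list from the YAML config
except IndexError as exc:
    print(f"IndexError: {exc}")
```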

Solution

Implemented a tactical fix by adding a safety check before accessing the list:

  • Check that the _models list is not empty before accessing _models[0]
  • Return None when no matching models are found
  • The existing code already handles a None return gracefully by defaulting to the "unknown" model
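A minimal sketch of the check, under the same simplifying assumptions as above (the real get_task_model in nemoguardrails/llm/prompts.py does more filtering than shown here):

```python
from typing import Optional

def get_task_model(models: list, model_type: str = "main") -> Optional[dict]:
    """Return the first model matching model_type, or None if there is none."""
    _models = [m for m in models if m.get("type") == model_type]
    if _models:
        return _models[0]
    # No matching model: return None instead of indexing an empty list.
    return None

# Callers fall back to the "unknown" model when no model is found.
model = get_task_model([])
name = model["model"] if model else "unknown"
```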

Notes

This is a tactical/band-aid fix, as discussed in the bug report.

@Pouyanpi Pouyanpi added this to the v0.16.0 milestone Aug 15, 2025
@Pouyanpi Pouyanpi requested a review from Copilot August 15, 2025 09:50
@Pouyanpi Pouyanpi self-assigned this Aug 15, 2025
@Pouyanpi Pouyanpi added the bug Something isn't working label Aug 15, 2025

Copilot AI left a comment


Pull Request Overview

This PR fixes an IndexError that occurred when an LLM was provided via the constructor but the config had an empty models list. The fix adds a safety check to prevent accessing an empty list.

  • Added a safety check in get_task_model() to prevent IndexError when models list is empty
  • Added comprehensive test coverage for the edge case scenarios
  • Ensured the function gracefully returns None when no models are found

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

File Description
nemoguardrails/llm/prompts.py Added safety check to prevent IndexError when accessing empty models list
tests/test_llmrails.py Added integration test for LLMRails constructor with empty models config
tests/test_llm_task_manager.py Added unit tests for get_task_model function edge cases
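The edge-case tests might look like the following hedged sketch; the test names and the inlined helper are illustrative, not the actual tests added in tests/test_llm_task_manager.py:

```python
# Illustrative edge-case tests; a self-contained stand-in for get_task_model
# is inlined so the sketch runs on its own.

def get_task_model(models, model_type="main"):
    _models = [m for m in models if m.get("type") == model_type]
    return _models[0] if _models else None

def test_empty_models_returns_none():
    # The scenario from the bug: "models: []" in the YAML config.
    assert get_task_model([]) is None

def test_matching_model_returned():
    models = [{"type": "main", "model": "gpt-4"}]
    assert get_task_model(models) == models[0]
```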


@Pouyanpi Pouyanpi changed the title fix(prompts): prevent IndexError when LLM provided via constructor wi… fix(prompts): prevent IndexError when LLM provided via constructor with empty models config Aug 15, 2025
fix(prompts): prevent IndexError when LLM provided via constructor with empty models config

- Add check in get_task_model to handle empty _models list gracefully
- Return None instead of throwing IndexError when no models match
- Add comprehensive test coverage for various model configuration scenarios

Fixes the issue where providing an LLM object directly to LLMRails constructor
would fail if the YAML config had an empty models list.
@Pouyanpi Pouyanpi force-pushed the fix/index-error-prompts branch from 7dd02d7 to 68b5d24 Compare August 15, 2025 09:53
@codecov-commenter

codecov-commenter commented Aug 15, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 70.70%. Comparing base (52ac7ed) to head (3d89353).
⚠️ Report is 4 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1334      +/-   ##
===========================================
+ Coverage    70.63%   70.70%   +0.07%     
===========================================
  Files          161      161              
  Lines        16304    16313       +9     
===========================================
+ Hits         11516    11534      +18     
+ Misses        4788     4779       -9     
Flag Coverage Δ
python 70.70% <100.00%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
nemoguardrails/llm/prompts.py 91.76% <100.00%> (+0.09%) ⬆️

... and 2 files with indirect coverage changes


Collaborator

@tgasser-nv tgasser-nv left a comment


Looks good, a couple of nits to address before merging.

Longer-term, having a class/function to resolve all the potential conflicts between the ways we can initialize the App and other LLMs would help simplify this code. I'd prefer to raise exceptions when we don't have a correct, complete config, rather than passing None around and having to deal with fallback cases at inference time.

@Pouyanpi Pouyanpi merged commit 533ef13 into develop Aug 19, 2025
17 checks passed
@Pouyanpi Pouyanpi deleted the fix/index-error-prompts branch August 19, 2025 15:42
Pouyanpi added a commit that referenced this pull request Oct 1, 2025
fix(prompts): prevent IndexError when LLM provided via constructor with empty models config (#1334)

* fix(prompts): prevent IndexError when LLM provided via constructor with empty models config

- Add check in get_task_model to handle empty _models list gracefully
- Return None instead of throwing IndexError when no models match
- Add comprehensive test coverage for various model configuration scenarios

Fixes the issue where providing an LLM object directly to LLMRails constructor
would fail if the YAML config had an empty models list.
