fix(prompts): prevent IndexError when LLM provided via constructor with empty models config #1334
Conversation
Pull Request Overview
This PR fixes an IndexError that occurred when an LLM was provided via the constructor but the config had an empty models list. The fix adds a safety check to prevent accessing an empty list.
- Added a safety check in get_task_model() to prevent IndexError when the models list is empty
- Added comprehensive test coverage for the edge-case scenarios
- Ensured the function gracefully returns None when no models are found
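The guarded lookup can be sketched roughly as follows. The Model class, the list-based config, and the matching logic are assumptions for illustration, not the actual nemoguardrails implementation:

```python
from typing import Optional


class Model:
    """Stand-in for a configured model entry (fields are assumptions)."""

    def __init__(self, type: str, engine: str = "openai"):
        self.type = type
        self.engine = engine


def get_task_model(models: list, model_type: str = "main") -> Optional[Model]:
    """Return the model matching model_type, or None if none is configured."""
    matching = [m for m in models if m.type == model_type]
    # Safety check: with models: [] in the YAML config, `matching` is
    # empty and matching[0] would raise IndexError; return None instead.
    if not matching:
        return None
    return matching[0]


print(get_task_model([], "main"))  # prints None instead of raising
```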
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| nemoguardrails/llm/prompts.py | Added safety check to prevent IndexError when accessing empty models list |
| tests/test_llmrails.py | Added integration test for LLMRails constructor with empty models config |
| tests/test_llm_task_manager.py | Added unit tests for get_task_model function edge cases |
fix(prompts): prevent IndexError when LLM provided via constructor with empty models config
- Add check in get_task_model to handle empty _models list gracefully
- Return None instead of throwing IndexError when no models match
- Add comprehensive test coverage for various model configuration scenarios

Fixes the issue where providing an LLM object directly to the LLMRails constructor would fail if the YAML config had an empty models list.
Codecov Report
✅ All modified and coverable lines are covered by tests.
@@ Coverage Diff @@
## develop #1334 +/- ##
===========================================
+ Coverage 70.63% 70.70% +0.07%
===========================================
Files 161 161
Lines 16304 16313 +9
===========================================
+ Hits 11516 11534 +18
+ Misses 4788 4779 -9
Flags with carried forward coverage won't be shown.
tgasser-nv left a comment
Looks good, a couple of nits to address before merging.
Longer-term, having a class/function to resolve all the potential conflicts between the ways we can initialize the app and other LLMs would help simplify this code. I'd prefer to raise exceptions when we don't have a correct, complete config, rather than passing None around and having to deal with fallback cases at inference time.
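The reviewer's preferred fail-fast alternative could look roughly like this; the exception and function names are hypothetical, not part of the nemoguardrails codebase:

```python
class IncompleteConfigError(Exception):
    """Raised when the resolved configuration has no usable models."""


def validate_models(models: list) -> None:
    # Fail fast at initialization, rather than returning None and
    # handling fallback cases at inference time.
    if not models:
        raise IncompleteConfigError(
            "No models configured: add a 'main' model to the YAML config "
            "or pass an LLM instance to the constructor."
        )


validate_models([{"type": "main", "engine": "openai"}])  # passes silently
```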
Description
When providing a main LLM model object directly to the LLMRails constructor while the YAML config has an empty models list, the system would throw an IndexError.
Root Cause
The get_task_model function in prompts.py attempted to access the first element of the _models list without checking whether it was empty. This occurred when the config contained models: [].
Solution
Implemented a tactical fix by adding a safety check before accessing the list:
- Check that the _models list is not empty before accessing _models[0]
- Return None when no matching models are found
- Callers handle the None return gracefully by defaulting to the "unknown" model
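The graceful fallback on the caller side can be sketched as follows; the function name and the attribute access are illustrative assumptions, not the actual nemoguardrails code:

```python
from typing import Optional


def model_name_for_task(task_model: Optional[object]) -> str:
    # get_task_model now returns None when the config has models: [],
    # so callers default to "unknown" rather than raising IndexError.
    if task_model is None:
        return "unknown"
    return getattr(task_model, "engine", "unknown")


print(model_name_for_task(None))  # prints "unknown"
```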
This is a tactical/bandaid fix as discussed in the bug report.