Add ability to choose LLM prompt template for value generation and flow generation #903
base: develop
Conversation
) -> str:
    """Render a template using the provided context information.

    :param template_str: The template to render.
    :param context: The context for rendering the prompt.
    :param events: The history of events so far.
    :param out_variables: If not None the dict will be populated with variables set in the template
I might be wrong, but maybe:

Suggested change:
- :param out_variables: If not None the dict will be populated with variables set in the template
+ :param out_variables: If not None, the dict will be populated with the variables that were set in the template after rendering. This allows the caller to access the values of the variables used in the rendered template.
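For reference, a minimal Jinja2 sketch (not taken from this PR's implementation) of how variables set with {% set %} at the top level of a template can be read back after rendering, which is roughly the kind of information out_variables is meant to expose:

from jinja2 import Environment

# Illustration only: top-level {% set %} assignments are exported by Jinja2 and can
# be collected after rendering; this is not necessarily how the PR implements it.
env = Environment()
template = env.from_string(
    "a random bird name{% set template = 'non_existent_template' %}"
)

# make_module renders the template and exposes exported variables as attributes.
module = template.make_module(vars={})
rendered = str(module)  # -> "a random bird name"

# Collect the variables set in the template, similar to what out_variables holds.
out_variables = {
    name: getattr(module, name) for name in dir(module) if not name.startswith("_")
}
# out_variables == {"template": "non_existent_template"}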
@sklinglernv what do you think of adding a test case for a non-existing template, so that we can test the behavior when out_variables is missing expected keys in value generation?
For example, something like:
colang_content="""
flow main
  match UtteranceUserActionFinished(final_transcript="hi")
  $test = ..."a random bird name{{% set template = 'non_existent_template' %}}"
  await UtteranceBotAction(script=$test)
"""
We were using llm_calls in other tests, with something like:

info = chat.app.explain()
assert info.llm_calls[0].prompt == expected_prompt

as in test_embeddings_only_user_messages.py (which works for Colang 2.0) and in test_general_instructions.py. I think something is not allowing you to use it here, right?
Thank you @sklinglernv, it looks good 👍🏻. Let me know what you think about my suggestions.
Description
Adds the ability to select an LLM prompt template for value generation and for flow generation by setting a Jinja variable:

{% set template = "<name of template here>" %}

This allows for greater flexibility and for selecting LLM prompts based on the intended use case. It can be used with the following config.yaml:
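A hypothetical sketch of how this could look end to end (the prompt task name my_custom_value_prompt and its content are invented for illustration; the exact way a named template is declared in config.yaml is defined by this PR, not by this sketch):

# Hypothetical illustration of the feature described above; the custom prompt
# name and its YAML declaration are assumptions, not taken from the PR.
from nemoguardrails import RailsConfig

config = RailsConfig.from_content(
    yaml_content="""
    colang_version: "2.x"

    prompts:
      # Assumed: a custom prompt template that the Jinja directive can select.
      - task: my_custom_value_prompt
        content: |
          (custom prompt text for value generation goes here)
    """,
    colang_content="""
    flow main
      match UtteranceUserActionFinished(final_transcript="hi")
      # Select the custom template for this value generation.
      $name = ..."a random bird name{{% set template = 'my_custom_value_prompt' %}}"
      await UtteranceBotAction(script=$name)
    """,
)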
Related Issue(s)
Checklist