Add ability to choose LLM prompt template for value generation and flow generation #903

Open
wants to merge 3 commits into develop
Conversation

sklinglernv (Collaborator)

Description

Adds the ability to select an LLM prompt template for value generation and for flow generation by setting a Jinja variable {% set template = "<name of template here>" %}. This allows for greater flexibility and for selecting LLM prompts based on the intended use case.

This can be used like this:

import core

flow generate antonym
  """
  {% set template = "generate_antonym" %}
  bot say "lucky"
  """
  ...

flow main
  when user said "bird"
    $test = ..."a random bird name{{% set template = 'repeat' %}}"
    await bot say $test
  or when user said "antonym"
    generate antonym

with the following config.yaml:

colang_version: "2.x"

models:
    - type: main
      engine: openai
      model: gpt-3.5-turbo-instruct

prompts:
    - task: generate_antonym
      models:
          - openai/gpt-3.5-turbo
          - openai/gpt-4
      messages:
          - type: user
            content: |-
                Generate the antonym of the bot expression below. Use the syntax: bot say "<antonym goes here>".
          - type: user
            content: |-
                YOUR TASK:
                {{ flow_nld }}

    - task: repeat
      models:
          - openai/gpt-3.5-turbo
          - openai/gpt-4
      messages:
          - type: system
            content: |
                You are a value generation bot that needs to generate a value for the ${{ var_name }} variable based on instructions from the user.
                Be very precise and always pick the most suitable variable type (e.g. double quotes for strings). Only generate the value and do not provide any additional response.
          - type: user
            content: |
                {{ instructions }} three times
                Assign the generated value to:
                ${{ var_name }} =
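
With this config in place the rails can be run as usual. A rough usage sketch (assuming the flows and the config.yaml above live in a ./config directory):

    from nemoguardrails import LLMRails, RailsConfig

    # Load the Colang flows and config.yaml from ./config.
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)

    # "bird" exercises value generation with the 'repeat' template,
    # "antonym" exercises flow generation with the 'generate_antonym' template.
    response = rails.generate(messages=[{"role": "user", "content": "bird"}])
    print(response["content"])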

Related Issue(s)

Checklist

  • I've read the CONTRIBUTING guidelines.
  • I've updated the documentation if applicable.
  • I've added tests if applicable.
  • @mentions of the person or team responsible for reviewing proposed changes.

    ) -> str:
        """Render a template using the provided context information.

        :param template_str: The template to render.
        :param context: The context for rendering the prompt.
        :param events: The history of events so far.
        :param out_variables: If not None the dict will be populated with variables set in the template
Collaborator

I might be wrong, but maybe

Suggested change
:param out_variables: If not None the dict will be populated with variables set in the template
:param out_variables: If not None, the dict will be populated with the variables that were set in the template after rendering. This allows the caller to access the values of the variables used in the rendered template.
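
For context, one way a parameter like out_variables could be populated with Jinja2 (just a sketch of the general mechanism, not necessarily how this PR implements it) is to evaluate the template as a module and collect its exported top-level variables:

    from jinja2 import Environment

    def render_with_out_variables(template_str, context, out_variables=None):
        env = Environment()
        template = env.from_string(template_str)
        # make_module() evaluates the template; top-level {% set %} assignments
        # become exported attributes of the resulting module.
        module = template.make_module(vars=context)
        if out_variables is not None:
            out_variables.update(
                {k: v for k, v in vars(module).items() if not k.startswith("_")}
            )
        # str(module) yields the rendered template output.
        return str(module)

With the NLD from the example above, out_variables would then contain {"template": "repeat"}.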

Collaborator

@sklinglernv, what do you think about adding a test case for a non-existent template, so that we can test the behavior when out_variables is missing expected keys in value generation?

For example, something like:

        colang_content="""
        flow main
          match UtteranceUserActionFinished(final_transcript="hi")
          $test = ..."a random bird name{{% set template = 'non_existent_template' %}}"
          await UtteranceBotAction(script=$test)
        """

Collaborator

We were using llm_calls in other tests, with something like

    info = chat.app.explain()
    assert info.llm_calls[0].prompt == expected_prompt

as in test_embeddings_only_user_messages.py (which works for Colang 2.0) and in test_general_instructions.py.

I think something is not allowing you to use it here, right?

Collaborator

@Pouyanpi left a comment

Thank you @sklinglernv, it looks good 👍🏻. Let me know what you think about my suggestions.

@Pouyanpi added this to the v0.12.0 milestone on Jan 14, 2025.