
Converted state management from file saving to prompt engine #114


Open
wants to merge 5 commits into main
Conversation

@abhimasand (Contributor) commented Jul 30, 2022

This PR replaces the file-based config method with prompt-engine. This has two major benefits: easy state management and faster reload times.

Prompt-engine support has significantly changed the implementation of the items below:

  1. The txt config files have been converted to YAML.
    1.1. This allows model parameters to be changed easily, without worrying about unintended space characters in the config causing issues.
    1.2. Additionally, the config is no longer included in the OpenAI prediction string when querying; it is only used for constructing the prompt.
    1.3. Prompts can be shared via the YAML files directly, and if there is a problem in the config declaration, the YAML decoder will flag it up front while loading the file (a sample config is sketched below).
  2. Global variables have been replaced by passing values directly to the functions that need them, which streamlines the code. Some logic was moved between functions to enable this, for example the shell prefix.
  3. Readline-based parsing of custom input parameters, like the 0.2 in "set temperature 0.2", has been converted to regex (see the sketch after this list).
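As a rough illustration of point 1, the YAML config might look something like this (the key names are illustrative assumptions, not necessarily the exact prompt-engine schema):

# Illustrative model config in YAML; actual keys depend on the prompt-engine schema
model-config:
  engine: code-davinci-002
  temperature: 0.3
  max_tokens: 300

A malformed declaration in a file like this is reported by the YAML decoder at load time, which is the early-warning benefit described in 1.3.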
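For point 3, a minimal sketch of the regex-based parameter extraction (the pattern is illustrative; the PR's actual pattern may differ):

import re

# Pull the numeric value out of commands like "set temperature 0.2"
match = re.match(r"set temperature (\d+(?:\.\d+)?)", "set temperature 0.2")
if match:
    temperature = float(match.group(1))  # -> 0.2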

@abhimasand changed the title from "Abhishek/prompt engine" to "Converted state management from file saving to prompt engine" on Jul 30, 2022
@abhimasand (Contributor, Author)

Tested on Windows and zsh; still to be tested on bash.

@ryanvolum (Contributor)

@codehruv, @dluc, mind taking a look at @abhimasand's PR? It makes some fairly major changes - all look positive to me (code cleanup, factoring in prompt-engine, etc.).

else:
    entry = sys.stdin.read()
# Remove extraneous newlines and hashtag from the input string
@dluc (Collaborator) commented Aug 8, 2022

is this required by the dependency on prompt-engine, or is it a separate fix? could you expand the comment to explain why these chars are not allowed? By removing \n isn't there a risk of concatenating two words?

@dluc (Collaborator)

note: if this is a requirement of prompt-engine, shouldn't this be done by the lib?

@abhimasand (Contributor, Author)

In the terminal, when the input is captured, it includes the hashtag before the input and the newline added when you press Enter. For example, if you input "# whats my ip" and press Enter, there will be a newline at the end, which we don't need. Prompt-engine handles everything else, so all we need is "whats my ip".
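A minimal sketch of that cleanup (the exact strip logic in the PR may differ):

import sys

entry = sys.stdin.read()
# Drop the leading "#" marker and the trailing newline from pressing Enter;
# prompt-engine only needs the query text itself, e.g. "whats my ip"
entry = entry.lstrip("#").strip()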

command_result, prompt_file = get_command_result(entry, prompt_file)
try:
    command_result, prompt_generator = get_command_result(entry, prompt_generator)
except Exception as e:
@dluc (Collaborator)

what are the scenarios potentially leading to an exception? Is it ok to swallow the exception and continue, or should the code stop after showing the error?

@abhimasand (Contributor, Author)

Errors will mostly happen in cases where the terminal / command line is unable to execute the command. If the command fails because it is incorrect, I believe it's ok to print the error, or at most print a generic reply such as "There was an error in executing the command".
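A sketch of the generic-reply variant suggested here (the message text is illustrative):

try:
    command_result, prompt_generator = get_command_result(entry, prompt_generator)
except Exception:
    # Swallow the details and show a generic failure message instead
    print("There was an error in executing the command")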

try:
    command_result, prompt_generator = get_command_result(entry, prompt_generator)
except Exception as e:
    print(str(e))

# if input is not a command, then query Codex, otherwise exit command has been run successfully
@dluc (Collaborator)

could you revisit this comment? I'm not sure it reflects the logic below; it's quite hard to understand the rationale for deciding when to exit - thanks

if shell == "zsh":
    return '#!/bin/zsh\n\n'
elif shell == "bash":
    return '#!/bin/bash\n\n'
@dluc (Collaborator)

I don't think this is the most appropriate way (same for zsh). I think the code should use #!/usr/bin/env bash

@abhimasand (Contributor, Author)

The above if condition is the same as in the previous implementation of codex-cli. I can change that, but we'll have to run tests with the updated prefix.
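For reference, the env-based variant suggested above would look like this (untested against the rest of the PR, as noted):

if shell == "zsh":
    return '#!/usr/bin/env zsh\n\n'
elif shell == "bash":
    return '#!/usr/bin/env bash\n\n'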

Interaction class is used to store the model config to be used in the prompt engine
"""
def __init__(self, **kwargs):
    self.engine = kwargs['engine'] if ('engine' in kwargs and kwargs['engine']) else 'code-davinci-002'
@dluc (Collaborator) commented Aug 8, 2022

is code-davinci-002 globally available? I think there was some discussion about helping users detect available models, e.g. by scanning the OpenAI API

@dluc (Collaborator)

for instance, we could opt for not having a default, requiring an explicit configuration - wdyt?

@abhimasand (Contributor, Author)

code-davinci-002 is globally available. We can definitely add another command to list the available models for the user. I think it has been implemented in some other repo, but it was not present in codex-cli. It should come in a separate PR, though.
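A minimal sketch of what such a command could do, assuming the pre-1.0 openai Python package (Engine.list is the legacy listing call; field names may differ in newer SDKs):

import openai

# Print the engines available to the configured API key (legacy v0.x API)
for engine in openai.Engine.list().data:
    print(engine.id)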


# read in the engine name from openaiapirc
config = configparser.ConfigParser()
config.read(API_KEYS_LOCATION)
ENGINE = config['openai']['engine'].strip('"').strip("'")
engine = config['openai']['engine'].strip('"').strip("'")
@dluc (Collaborator)

is strip still required now that values come from a yaml file?

@abhimasand (Contributor, Author)

Although the engine is present in the yaml file, this is about the local openaiapirc file that gets created. It can override the yaml file, and for that the strip is needed.
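A sketch of that precedence, reusing configparser and the API_KEYS_LOCATION constant from the snippet above (the actual merge logic in the PR may differ):

import configparser

# The YAML config supplies the default engine; openaiapirc, if present, overrides it.
config = configparser.ConfigParser()
config.read(API_KEYS_LOCATION)
if 'openai' in config and config['openai'].get('engine'):
    # Hand-edited openaiapirc values may be quoted, hence the strip
    engine = config['openai']['engine'].strip('"').strip("'")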

@dluc (Collaborator) commented Aug 8, 2022

Overall it seems to work; however, I have the impression that the PR includes fixes and refactoring not needed by the move to prompt-engine, which makes the review harder. If that's the case, I would have preferred splitting the PR in two: one to clean up and one to migrate (or vice versa).

@dluc (Collaborator) commented Aug 8, 2022

wrt the migration from TXT to YAML, also considering #112, I think there was value in the simplicity of the TXT files, which were quite easy to edit, while YAML will cause some friction for users looking to customize the examples. Not sure if we have it; is there some documentation helping with this task, e.g. "how to add more examples" and "how to include custom suggestions"?
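Such documentation could show something as simple as this hypothetical snippet (key names are illustrative; the real ones come from the prompt-engine YAML schema):

# Hypothetical: adding a custom example to the YAML config
examples:
  - input: "whats my ip"
    response: "curl ifconfig.me"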

@abhimasand marked this pull request as ready for review on September 16, 2022