Converted state management from file saving to prompt engine #114
Conversation
Tested on Windows and zsh; to be tested on bash.
@codehruv, @dluc, mind taking a look at @abhimasand's PR? It makes some fairly major changes - all look positive to me (code cleanup, factoring in prompt-engine, etc.).
else:
    entry = sys.stdin.read()
    # Remove extraneous newlines and the hashtag from the input string
Is this required by the dependency on prompt-engine, or is it a separate fix? Could you expand the comment to explain why these characters are not allowed? By removing \n, isn't there a risk of concatenating two words?
note: if this is a requirement of prompt-engine, shouldn't this be done by the lib?
In the terminal, the captured input includes the hashtag before the query and the newline added when you press Enter. For example, if you type "# whats my ip" and press Enter, there is a newline at the end, which is something we don't need. Prompt-engine handles everything else, so we only need "whats my ip".
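For context, the cleanup being discussed amounts to something like this (a sketch of the intent, not the exact PR code):

import sys

# The shell hands over the raw captured line, e.g. "# whats my ip\n".
entry = sys.stdin.read()
# Drop the leading hashtag and surrounding whitespace/newlines so that
# only the natural-language query ("whats my ip") reaches prompt-engine.
entry = entry.strip().lstrip('#').strip()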
- command_result, prompt_file = get_command_result(entry, prompt_file)
+ try:
+     command_result, prompt_generator = get_command_result(entry, prompt_generator)
+ except Exception as e:
What are the scenarios potentially leading to an exception? Is it OK to swallow the exception and continue, or should the code stop after showing the error?
Errors will mostly happen in cases where the terminal / command line is unable to execute the command. If the command fails because it's incorrect, I believe it's OK to print the error, or at most a generic reply such as "There was an error in executing the command".
try:
    command_result, prompt_generator = get_command_result(entry, prompt_generator)
except Exception as e:
    print(str(e))

# if input is not a command, then query Codex, otherwise exit command has been run successfully
Could you revisit this comment? I'm not sure it reflects the logic below, and it's quite hard to understand the rationale for deciding when to exit - thanks.
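For readers following along, the decision being questioned is roughly this (a hypothetical restructuring to make the rationale explicit; get_command_result returning None for non-command input and the codex_query helper are assumptions based on the surrounding snippets):

import sys

command_result, prompt_generator = get_command_result(entry, prompt_generator)

if command_result is None:
    # The input was not a built-in command: treat it as a natural-language
    # query and send it to Codex.
    codex_query(entry, prompt_generator)
else:
    # A built-in command was recognized and has already run, so there is
    # nothing to ask Codex; exit successfully.
    sys.exit(0)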
if shell == "zsh":
    return '#!/bin/zsh\n\n'
elif shell == "bash":
    return '#!/bin/bash\n\n'
I don't think this is the most appropriate shebang (same for zsh). I think the code should use #!/usr/bin/env bash
The above if condition is the same as in the previous implementation of codex-cli. I can change it, but we'll have to rerun tests with the updated prefix.
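A minimal sketch of the suggested change (the get_shell_prefix function name is assumed; only the shebang strings differ from the snippet above):

def get_shell_prefix(shell: str) -> str:
    # Resolve the interpreter via env so the script works regardless of
    # where bash/zsh is actually installed (e.g. Homebrew paths on macOS).
    if shell == "zsh":
        return '#!/usr/bin/env zsh\n\n'
    elif shell == "bash":
        return '#!/usr/bin/env bash\n\n'
    return ''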
Interaction class is used to store the model config to be used in the prompt engine
"""
def __init__(self, **kwargs):
    self.engine = kwargs['engine'] if ('engine' in kwargs and kwargs['engine']) else 'code-davinci-002'
Is code-davinci-002 globally available? I think there was some discussion about helping users detect available models, e.g. by scanning the OpenAI API.
for instance, we could opt for not having a default, requiring an explicit configuration - wdyt?
code-davinci-002 is globally available. We can definitely add another command to list the available models to the user. I think it has been implemented in some other repo, but was not present in codex-cli. It should come in a separate PR, though.
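For reference, such a command could be a thin wrapper over the OpenAI API; a sketch using the pre-1.0 openai Python client (assumes the API key is already configured):

import openai

def list_available_models():
    # Ask the API which models this account can use; availability of
    # models like code-davinci-002 can differ between accounts.
    return sorted(model.id for model in openai.Model.list().data)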
# read in the engine name from openaiapirc
config = configparser.ConfigParser()
config.read(API_KEYS_LOCATION)
- ENGINE = config['openai']['engine'].strip('"').strip("'")
+ engine = config['openai']['engine'].strip('"').strip("'")
Is strip still required now that values come from a YAML file?
Although the engine is present in the YAML file, this is the local openaiapirc file that gets created. It can override the YAML file, and for that the strip is needed.
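A sketch of the override order described here (the YAML path is assumed for illustration; API_KEYS_LOCATION is the constant from the snippet above):

import configparser
import yaml

# Default engine comes from the shipped YAML config (path assumed).
with open('prompt_config.yaml') as f:
    engine = yaml.safe_load(f).get('engine', 'code-davinci-002')

# The locally generated openaiapirc, if present, overrides the YAML value.
# Users may quote the value there, hence stripping surrounding quotes.
config = configparser.ConfigParser()
if config.read(API_KEYS_LOCATION) and config.has_option('openai', 'engine'):
    engine = config['openai']['engine'].strip('"').strip("'")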
Overall it seems to work; however, I have the impression that the PR includes fixes and refactoring not needed by the move to prompt-engine, which makes the review harder. If that's the case, I would have preferred splitting the PR in two, one to clean up and one to migrate (or vice versa).
Regarding the migration from TXT to YAML, also considering #112: I think there was value in the simplicity of TXT files, which were quite easy to edit, while YAML will cause some friction for users looking to customize the examples. Not sure if we have it already, but is there some documentation helping with this task, e.g. "how to add more examples" and "how to include custom suggestions"?
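For illustration, adding a custom example would presumably be an edit along these lines (the key names below are assumptions for illustration only; the actual schema is defined by prompt-engine, so any docs should point at its spec):

# Hypothetical excerpt of the prompt-engine YAML config.
examples:
  - input: whats my ip
    response: curl ifconfig.me
  - input: list files modified in the last hour
    response: find . -mmin -60 -type f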
This PR focuses on replacing the file-based config method with prompt-engine. This has two major benefits: easier state management and faster reload times.
Prompt-engine support substantially changes the implementation of the items below:
1.1. This allows model parameters to be changed easily, without unintended space characters in the config causing issues.
1.2. Additionally, the config is no longer included in the OpenAI prediction string when querying; it is only used for constructing the prompt.
1.3. Prompts can be shared via the YAML files directly, and if there is a problem in the config declaration, the YAML decoding process will flag it while loading the file, as sketched below.
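On point 1.3, the early failure with PyYAML would look roughly like this (a minimal sketch; the file name is assumed):

import yaml

try:
    with open('prompt_config.yaml') as f:
        config = yaml.safe_load(f)
except yaml.YAMLError as e:
    # A malformed config is reported at load time, before any prompt is
    # built or any API request is made.
    raise SystemExit(f'Invalid YAML config: {e}')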