Adapting Prompts Per Model: Should Protocols Account for It? #180
adi-miller asked this question in Q&A (unanswered)

Has anyone encountered the need to adjust prompts based on the model being used?

Background: my team conducts rigorous evaluations of response quality, and we often need to adjust our prompts when switching between models. In other words, there is some coupling between the model and the prompt.

If others face a similar challenge, would it make sense to include an indication of the client's model in the protocol, allowing the server to adjust its prompts accordingly?

Replies: 2 comments
- This is potentially similar to Sampling, where modelPreferences is supported?
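For context, a sampling request lets the server express model preferences as hints rather than naming an exact model, and the client makes the final choice. Below is a minimal sketch of the request parameters (the shape follows the MCP spec; the `createMessage` helper shown in the comment is an assumption about the TypeScript SDK and may differ by version):

```ts
// Sketch: a server-initiated sampling/createMessage request that hints at a
// model family instead of hard-coding one. The client ultimately picks the model.
const samplingParams = {
  messages: [
    {
      role: "user" as const,
      content: { type: "text" as const, text: "Summarize the attached report." },
    },
  ],
  // modelPreferences are advisory: hints are name hints the client may honor,
  // and the priorities (0..1) trade off capability, speed, and cost.
  modelPreferences: {
    hints: [{ name: "claude-3-5-sonnet" }, { name: "gpt-4o" }],
    intelligencePriority: 0.8,
    speedPriority: 0.3,
    costPriority: 0.2,
  },
  systemPrompt: "You are a concise technical summarizer.",
  maxTokens: 500,
};

// Assumed usage with the TypeScript SDK (helper name is an assumption):
// const result = await server.createMessage(samplingParams);
```

Note that this solves the inverse problem (server influencing which model the client uses), not the server learning which model the client already has.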
- Could you accomplish this using a prompt with input arguments? In the server providing the model-specific prompts, you could have the client pass in a model parameter, and the server could have some logic to select the most appropriate prompt. Depending on the exact flow, I think you could use sampling in a similar way, either in combination with a dynamic prompt or as a system prompt included with the sampling request.
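To make that concrete, here is a minimal sketch assuming the high-level `McpServer.prompt()` registration API from `@modelcontextprotocol/sdk`; the prompt name, argument names, and template strings are illustrative, not part of the protocol, and transport setup is omitted:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "prompt-adapter", version: "0.1.0" });

// Hypothetical per-model prompt variants maintained by the server.
const SUMMARIZE_TEMPLATES: Record<string, string> = {
  claude: "Summarize the document below in three short paragraphs.",
  gpt: "You are a summarization assistant. Produce a three-paragraph summary.",
};

// Sketch: a prompt that takes the client's model as an ordinary string argument
// and returns the variant tuned for that model, falling back to a default.
server.prompt(
  "summarize",
  { model: z.string().optional(), document: z.string() },
  ({ model, document }) => {
    const key = Object.keys(SUMMARIZE_TEMPLATES).find((k) =>
      (model ?? "").toLowerCase().includes(k)
    );
    const instructions = key
      ? SUMMARIZE_TEMPLATES[key]
      : "Summarize the document below.";
    return {
      messages: [
        {
          role: "user",
          content: { type: "text", text: `${instructions}\n\n${document}` },
        },
      ],
    };
  }
);
```

With something like this, the client would call `prompts/get` with arguments such as `{ "model": "claude-3-5-sonnet", "document": "..." }`, so the per-model selection logic stays on the server without any protocol change.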