Context tokens are returned by the generate endpoint. Allow supplying these as an input to additional generations so that responses can span multiple generations.
I figured out how to do it manually.

First, capture the context from the final response:
```cpp
inline nlohmann::json Keepcontext;

if (response.as_json()["done"] == true) {
    prompt_AI.busy = false;
    if (response.as_json().contains("context")) {
        Keepcontext = response.as_json()["context"];
    }
}
```
Then put it into the request for the next generation:
```cpp
ollama::request request(ollama::message_type::generation);
if (!Keepcontext.empty()) {
    request["context"] = Keepcontext;
}
```