
problem with output_log_probs #632

Open
Alireza3242 opened this issue Oct 28, 2024 · 3 comments
Labels: need more info · question (Further information is requested)

@Alireza3242

I run Triton with the TensorRT-LLM backend, but when I send a long text to the LLM, Triton returns a long array of zeros named output_log_probs with every token. If my text is longer than a certain length, the request no longer works correctly.

Can you add an option to config.pbtxt that prevents output_log_probs from being sent?

@hello-11 (Collaborator)

Could you share more details about your question?

@Alireza3242 (Author)

Alireza3242 commented Nov 4, 2024

@hello-11
For example, I send this request:

curl -X POST my_ip:8000/v2/models/ensemble/generate_stream -d '{"text_input": "hello!, How are you?", "max_tokens":1024, "temperature":0.1, "top_p":0.9, "top_k":1, "repetition_penalty":1.15, "stream":true}'

The answer is:

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":""}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":"\n"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":""}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":"model"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":"\n"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":"I"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":" am"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":" doing"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":" well"}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":","}

data: {"context_logits":0.0,"cum_log_probs":0.0,"generation_logits":0.0,"model_name":"ensemble","model_version":"1","output_log_probs":[0.0,0.0,0.0,0.0,0.0,0.0,0.0],"sequence_end":false,"sequence_id":0,"sequence_start":false,"text_output":" thank"}
...

I don't want to receive output_log_probs.

See line 375 in this file:
https://github.com/triton-inference-server/tensorrtllm_backend/blob/v0.14.0/all_models/inflight_batcher_llm/tensorrt_llm/1/model.py

@byshiue (Collaborator)

byshiue commented Nov 19, 2024

Since the Triton server does not support excluding specific outputs from the response, you will receive output_log_probs if you use curl to send the request.

There are two ways to avoid this:

  1. [Suggested] Refer to inflight_batcher_llm/client/inflight_batcher_llm_client.py or another client example. In these examples, we send the request, receive the results, and parse only the required outputs from the response (see the sketch after this list).
  2. Modify config.pbtxt to remove the output_log_probs output.
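
For reference, here is a minimal sketch of option 1 using the tritonclient gRPC API: the client requests only text_output, so output_log_probs is never returned. The tensor names (text_input, max_tokens), the model name "ensemble", and the endpoint localhost:8001 are assumptions based on the default ensemble config; adjust them to your deployment.

```python
# Minimal sketch, assuming the default ensemble tensor names text_input / max_tokens
# and a gRPC endpoint on localhost:8001; adjust to your deployment.
import numpy as np
import tritonclient.grpc as grpcclient
from tritonclient.utils import np_to_triton_dtype

client = grpcclient.InferenceServerClient("localhost:8001")

# Build the inputs expected by the ensemble model.
text = np.array([["hello!, How are you?"]], dtype=object)
max_tokens = np.array([[1024]], dtype=np.int32)

inputs = []
for name, value in [("text_input", text), ("max_tokens", max_tokens)]:
    inp = grpcclient.InferInput(name, list(value.shape), np_to_triton_dtype(value.dtype))
    inp.set_data_from_numpy(value)
    inputs.append(inp)

# Request only the output we care about; output_log_probs is simply not requested.
outputs = [grpcclient.InferRequestedOutput("text_output")]

result = client.infer("ensemble", inputs, outputs=outputs)
print(result.as_numpy("text_output"))
```

This sketch uses a synchronous (non-streaming) request; for streaming you would use the decoupled gRPC stream API as shown in the referenced inflight_batcher_llm_client.py example, but the idea is the same: only the outputs named in the request are parsed.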

@byshiue added the question label on Nov 19, 2024
@byshiue self-assigned this on Nov 19, 2024