
Inference with Your Fine-tuned Model

After fine-tuning, you can load and run the new model with ONNX Runtime GenAI.

Install ORT GenAI SDK

pip install numpy -U

pip install onnxruntime-genai
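
To confirm the installation succeeded, you can import the package and print its version. This is a quick sanity check; it assumes the wheel exposes `__version__`, which recent releases do:

import onnxruntime_genai as og

# If this import succeeds, the ORT GenAI SDK is installed correctly.
print(og.__version__)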

Run Inference

Use the following code to perform inference with your fine-tuned model:

import onnxruntime_genai as og

# Load the fine-tuned ONNX model and its tokenizer.
model = og.Model('Your onnx model folder location')
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

# Generation settings: cap the output length and keep sampling fairly deterministic.
search_options = {"max_length": 1024, "temperature": 0.3}

params = og.GeneratorParams(model)
params.try_use_cuda_graph_with_max_batch_size(1)
params.set_search_options(**search_options)

# Wrap the user message in the chat-format special tokens the model was trained with.
prompt = "<|user|>Who are you not allowed to marry in the UK?<|end|><|assistant|>"
input_tokens = tokenizer.encode(prompt)
params.input_ids = input_tokens

generator = og.Generator(model, params)

# Stream tokens to stdout as they are generated.
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end='', flush=True)
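
To reuse this flow for arbitrary questions, you can wrap the generation loop in a small helper. This is a minimal sketch built on the same API calls as above; the `chat` function name is illustrative, not part of the SDK:

def chat(model, tokenizer, question, max_length=1024, temperature=0.3):
    """Generate a reply for a single user question (illustrative helper)."""
    params = og.GeneratorParams(model)
    params.set_search_options(max_length=max_length, temperature=temperature)
    # Apply the same chat-format special tokens used above.
    params.input_ids = tokenizer.encode(f"<|user|>{question}<|end|><|assistant|>")

    generator = og.Generator(model, params)
    answer_tokens = []
    while not generator.is_done():
        generator.compute_logits()
        generator.generate_next_token()
        answer_tokens.append(generator.get_next_tokens()[0])
    # Decode the collected token ids into the full answer string.
    return tokenizer.decode(answer_tokens)

print(chat(model, tokenizer, "Who are you not allowed to marry in the UK?"))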

Testing Your Result

Run the code above and check that the generated output reflects your fine-tuning data.
