
Commit

update docs for Yi-VL
merrymercy committed Feb 1, 2024
1 parent 8644253 commit 03e04b2
Showing 2 changed files with 6 additions and 7 deletions.
4 changes: 3 additions & 1 deletion README.md
@@ -357,9 +357,11 @@ python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port
 - Llama
 - Mistral
 - Mixtral
+- Qwen / Qwen 2
 - LLaVA
   - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
-- Qwen / Qwen 2
+- Yi-VL
+  - see [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
 - AWQ quantization
 
 ## Benchmark And Performance
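
For context, once a vision-language server such as the LLaVA one above is running, it can be queried from SGLang's frontend by pointing the default backend at the HTTP endpoint. The following is a minimal sketch, assuming the launch command above on port 30000; the image path and question are placeholder values, not part of this commit:

    import sglang as sgl

    @sgl.function
    def image_qa(s, image_path, question):
        # One chat turn containing an image and a text question,
        # followed by a generated answer.
        s += sgl.user(sgl.image(image_path) + question)
        s += sgl.assistant(sgl.gen("answer"))

    # Point the frontend at the running server.
    sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

    state = image_qa.run(
        image_path="images/cat.jpeg",  # placeholder image
        question="What is this?",
        max_new_tokens=64,
    )
    print(state["answer"])
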
9 changes: 3 additions & 6 deletions examples/quick_start/srt_example_yi_vl.py
@@ -46,12 +46,9 @@ def batch():


 if __name__ == "__main__":
-    runtime = sgl.Runtime(model_path="BabyChou/Yi-VL-6B",
-                          tokenizer_path="BabyChou/Yi-VL-6B")
+    runtime = sgl.Runtime(model_path="BabyChou/Yi-VL-6B")
+    # runtime = sgl.Runtime(model_path="BabyChou/Yi-VL-34B")
     sgl.set_default_backend(runtime)
-    # Or you can use API models
-    # sgl.set_default_backend(sgl.OpenAI("gpt-4-vision-preview"))
-    # sgl.set_default_backend(sgl.VertexAI("gemini-pro-vision"))
 
     # Run a single request
     print("\n========== single ==========\n")
@@ -65,4 +62,4 @@ def batch():
     print("\n========== batch ==========\n")
     batch()
 
-    runtime.shutdown()
+    runtime.shutdown()
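
The hunk headers above mention a batch() helper defined earlier in srt_example_yi_vl.py; its body is not part of this diff. By analogy with SGLang's other quick-start scripts, it presumably submits several image/question pairs in a single run_batch call, roughly like the sketch below (the image paths and the image_qa function from the earlier sketch are assumptions, not code from this commit):

    def batch():
        # Submit multiple image-QA requests at once; the runtime batches them.
        states = image_qa.run_batch(
            [
                {"image_path": "images/cat.jpeg", "question": "What is this?"},
                {"image_path": "images/dog.jpeg", "question": "What is this?"},
            ],
            max_new_tokens=64,
        )
        for s in states:
            print(s["answer"])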
