May I ask about the dataset format in this repository? It seems a bit different from the format provided on the official Qwen2-VL site, and instead looks the same as the Qwen-VL format. Also, which reference did this fine-tuning code follow? It differs somewhat from the official code. Finally, it looks like the LoRA weights are not merged back into the base model after fine-tuning. Thanks in advance for clarifying.
Hi, thanks for your feedback on the code:

The code was written from scratch. For fine-tuning, data in the Qwen-VL format still works with Qwen2-VL; you can also take a look at Qwen2-VL-2B-LaTexOCR, where the code has been further streamlined and improved, and the fine-tuning works well there too.

Inference does already load the fine-tuned LoRA model. See these two lines: the logic is to first load the original pretrained model, and then attach the LoRA-trained weights on top of it.

    model = Qwen2VLForConditionalGeneration.from_pretrained(
        "./Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
    )
    model = PeftModel.from_pretrained(model, model_id="./output/Qwen2-VL-2B/checkpoint-62", config=config)
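On the last point of the original question: the LoRA weights are indeed kept as a separate adapter rather than merged into the base checkpoint. If a standalone merged model is wanted, a minimal sketch using peft's `merge_and_unload` might look like the following (the paths are the illustrative ones used in this thread, and the exported directory name is an assumption):

```python
# Sketch: fold a LoRA adapter into the base Qwen2-VL weights and save
# a standalone checkpoint. Paths follow the ones used in this thread.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import PeftModel

base = Qwen2VLForConditionalGeneration.from_pretrained(
    "./Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "./output/Qwen2-VL-2B/checkpoint-62")

# merge_and_unload adds the LoRA deltas into the base weights and
# returns a plain transformers model with no PEFT wrappers attached.
merged = model.merge_and_unload()
merged.save_pretrained("./output/Qwen2-VL-2B-merged")

# The processor/tokenizer is unchanged by LoRA fine-tuning; save it
# alongside so the merged directory can be loaded on its own.
processor = AutoProcessor.from_pretrained("./Qwen/Qwen2-VL-2B-Instruct")
processor.save_pretrained("./output/Qwen2-VL-2B-merged")
```

The merged directory can then be loaded directly with `Qwen2VLForConditionalGeneration.from_pretrained` and used without peft at inference time.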
Hi, after this LoRA fine-tuning, how can the model be served via a web interface?