
Error when setting up the web demo #540

Open

RoninBang opened this issue Dec 12, 2024 · 6 comments
@RoninBang

In basic_demo I launched a web demo with the following command:
python web_demo.py --from_pretrained ../cogagent-chat-hf --version chat --bf16
The cogagent-chat-hf directory is the model I downloaded from Hugging Face,
but it raised this error:

(visual-llm) root@ubuntu:~/visual-LLM/CogVLM/basic_demo# python web_demo.py --from_pretrained ../cogagent-chat-hf --version chat --bf16
[2024-12-12 11:42:00,560] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
Please build and install Nvidia apex package with option '--cuda_ext' according to https://github.com/NVIDIA/apex#from-source .
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /root/visual-LLM/CogVLM/basic_demo/web_demo.py:234 in <module> │
│ │
│ 231 │ rank = int(os.environ.get('RANK', 0)) │
│ 232 │ world_size = int(os.environ.get('WORLD_SIZE', 1)) │
│ 233 │ args = parser.parse_args() │
│ ❱ 234 │ main(args) │
│ 235 │
│ │
│ /root/visual-LLM/CogVLM/basic_demo/web_demo.py:165 in main │
│ │
│ 162 │
│ 163 def main(args): │
│ 164 │ global model, image_processor, cross_image_processor, text_processor_infer, is_groun │
│ ❱ 165 │ model, image_processor, cross_image_processor, text_processor_infer = load_model(arg │
│ 166 │ is_grounding = 'grounding' in args.from_pretrained │
│ 167 │ │
│ 168 │ gr.close_all() │
│ │
│ /root/visual-LLM/CogVLM/basic_demo/web_demo.py:65 in load_model │
│ │
│ 62 from sat.quantization.kernels import quantize │
│ 63 │
│ 64 def load_model(args): │
│ ❱ 65 │ model, model_args = AutoModel.from_pretrained( │
│ 66 │ │ args.from_pretrained, │
│ 67 │ │ args=argparse.Namespace( │
│ 68 │ │ deepspeed=None, │
│ │
│ /root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/sat/model/base_model.py:342 in │
│ from_pretrained │
│ │
│ 339 │ @classmethod
│ 340 │ def from_pretrained(cls, name, args=None, *, home_path=None, url=None, prefix='', bu │
│ 341 │ │ if build_only or 'model_parallel_size' not in overwrite_args: │
│ ❱ 342 │ │ │ return cls.from_pretrained_base(name, args=args, home_path=home_path, url=ur │
│ 343 │ │ else: │
│ 344 │ │ │ new_model_parallel_size = overwrite_args['model_parallel_size'] │
│ 345 │ │ │ if new_model_parallel_size != 1 or new_model_parallel_size == 1 and args.mod │
│ │
│ /root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/sat/model/base_model.py:323 in │
│ from_pretrained_base │
│ │
│ 320 │ │ │ null_args = True │
│ 321 │ │ else: │
│ 322 │ │ │ null_args = False │
│ ❱ 323 │ │ args = update_args_with_file(args, path=os.path.join(model_path, 'model_config.j │
│ 324 │ │ args = overwrite_args_by_dict(args, overwrite_args=overwrite_args) │
│ 325 │ │ if not hasattr(args, 'model_class'): │
│ 326 │ │ │ raise ValueError('model_config.json must have key "model_class" for AutoMode │
│ │
│ /root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/sat/arguments.py:469 in │
│ update_args_with_file │
│ │
│ 466 │
│ 467 │
│ 468 def update_args_with_file(args, path): │
│ ❱ 469 │ with open(path, 'r', encoding='utf-8') as f: │
│ 470 │ │ config = json.load(f) │
│ 471 │ # expand relative path │
│ 472 │ folder = os.path.dirname(path) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
FileNotFoundError: [Errno 2] No such file or directory: '../cogagent-chat-hf/model_config.json'

I don't know what to do. Also, can you tell me what the composite_demo directory is for and how to use the files inside it? I didn't see any usage instructions in the README.

@MachineDora

The project doesn't make this clear: web_demo can only load SAT-format models, so you need to download the SAT version of the model.
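
A quick way to see the mismatch: the traceback shows SAT's AutoModel.from_pretrained looking for model_config.json, but Hugging Face-format checkpoints ship a config.json instead. Listing the downloaded directory confirms it (a sketch; the exact file names in the repo are an assumption):

ls ../cogagent-chat-hf
# HF layout has config.json, tokenizer files, *.safetensors, but no model_config.json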

@MachineDora

composite_demo is a web demo that combines three models: CogVLM, CogAgent-chat, and CogAgent-agent. Running python main.py should be enough (but see the note below).
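
Note: main.py here is a Streamlit app, so launching it with the bare interpreter produces the "missing ScriptRunContext" warning that shows up later in this thread; running it through Streamlit is likely the intended way:

streamlit run main.py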

@MachineDora

The SAT version of the model is here: https://hf-mirror.com/THUDM/CogAgent
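
One possible way to pull the SAT weights from that mirror (a sketch; the repo id comes from the link above, while the HF_ENDPOINT usage and the local directory name are assumptions, and the repo may ship zipped SAT checkpoints that need extracting before --from_pretrained can point at them):

HF_ENDPOINT=https://hf-mirror.com huggingface-cli download THUDM/CogAgent --local-dir ./cogagent-sat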

@RoninBang
Author

Thanks, I can use web_demo now. But when I run main.py inside composite_demo, I get this error:

(visual-llm) root@ubuntu:~/visual-LLM/CogVLM/composite_demo# python main.py
2024-12-13 15:46:56.978 WARNING streamlit.runtime.scriptrunner_utils.script_run_context: Thread 'MainThread': missing ScriptRunContext! This warning can be ignored when running in bare mode.
Traceback (most recent call last):
File "/root/visual-LLM/CogVLM/composite_demo/main.py", line 42, in
import demo_chat_cogvlm, demo_agent_cogagent, demo_chat_cogagent
File "/root/visual-LLM/CogVLM/composite_demo/demo_chat_cogvlm.py", line 8, in
from client import get_client
File "/root/visual-LLM/CogVLM/composite_demo/client.py", line 11, in
from huggingface_hub.inference._text_generation import TextGenerationStreamResponse, Token
ModuleNotFoundError: No module named 'huggingface_hub.inference._text_generation'

@MachineDora

(Quoting the error report above.)

You probably haven't run pip install huggingface_hub.
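
If huggingface_hub is installed but the import still fails, a likely cause is the version: huggingface_hub.inference._text_generation is a private module that newer releases removed. Pinning an older release is one possible workaround (the exact version bound is an assumption, not something confirmed in this thread):

pip install "huggingface_hub<0.23"
# older releases still expose huggingface_hub.inference._text_generation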

@RoninBang
Author

(visual-llm) root@ubuntu:~/visual-LLM/CogVLM/composite_demo# streamlit run main.py

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://172.16.3.154:8501

[screenshot: Snipaste_2024-12-13_16-38-14]

When I run main.py in composite_demo, the page keeps loading forever and my GPU is never used. I don't know what's going on.
[screenshot]
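
One plausible reading, given that client.py imports text-generation streaming types from huggingface_hub: composite_demo streams from a separately hosted inference endpoint rather than loading weights locally, so if no such server is running the page spins forever and the local GPU stays idle. A quick reachability check (host and port are placeholder assumptions; use whatever endpoint client.py is configured with):

curl -s http://127.0.0.1:8080
# no response here would mean the demo has nothing to stream from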
