Replies: 2 comments 6 replies
@1049451037 @zRzRzRzRzRzRzR Could you please help answer this? Thanks~
The error message you posted doesn't show anything useful……
Hello, my question is a bit involved.
I want to set up an environment in Docker on my local machine for LoRA finetuning cogvlm-base-490, then package the Docker image and upload it to a cloud server. Since my machine only has a single 24G GPU, I can't run the 17B open-source model. So I want to build just a small-scale model with randomly initialized weights and LoRA finetune it, to verify that the environment is configured correctly.
The changes I made are as follows:
The model_config.json is shown below; changes relative to the open-source cogvlm-base-490 model are marked:
```json
{
    "model_class": "CogVLMModel",
    "tokenizer_type": "vicuna-7b-v1.5",
    "num_layers": 2,             # 32 => 2
    "hidden_size": 128,          # 4096 => 128
    "num_attention_heads": 2,    # 32 => 2
    "vocab_size": 32000,
    "layernorm_order": "pre",
    "model_parallel_size": 1,
    "max_sequence_length": 4096,
    "use_bias": false,
    "inner_hidden_size": 11008,
    "image_length": 1225,
    "image_size": 490,
    "eva_args": {
        "model_class": "EVA2CLIPModel",
        "num_layers": 3,         # 63 => 3
        "hidden_size": 1792,
        "num_attention_heads": 16,
        "vocab_size": 1,
        "layernorm_order": "post",
        "model_parallel_size": 1,
        "max_sequence_length": 1226,
        "inner_hidden_size": 15360,
        "use_final_layernorm": false,
        "layernorm_epsilon": 1e-06,
        "row_parallel_linear_final_bias": false,
        "image_size": [490, 490],
        "pre_len": 1,
        "post_len": 0,
        "in_channels": 3,
        "patch_size": 14
    },
    "bos_token_id": 1,
    "eos_token_id": 2,
    "pad_token_id": 0
}
```
The launch command is as follows:
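When shrinking a transformer config like the one above, one easy mistake is picking a `hidden_size` that no longer divides evenly by `num_attention_heads`. A quick sanity check over the shrunk fields (the check logic below is my own suggestion, not part of CogVLM/SAT; only the field names come from the config in this post):

```python
# Sanity-check the shrunk config: hidden_size must split evenly
# across attention heads, for both the LLM trunk and the EVA vision tower.
config = {
    "num_layers": 2,           # 32 => 2
    "hidden_size": 128,        # 4096 => 128
    "num_attention_heads": 2,  # 32 => 2
    "eva_args": {
        "num_layers": 3,       # 63 => 3
        "hidden_size": 1792,
        "num_attention_heads": 16,
    },
}

def check_heads(cfg, name):
    h, n = cfg["hidden_size"], cfg["num_attention_heads"]
    assert h % n == 0, f"{name}: hidden_size {h} not divisible by {n} heads"
    print(f"{name}: head_dim = {h // n}")

check_heads(config, "trunk")              # head_dim = 64
check_heads(config["eva_args"], "eva")    # head_dim = 112
```

Both shrunk values here pass, so the head-dimension constraint at least is not the cause of the crash.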
However, the job crashed after just one step, with the following error:
What could be causing this? Is there anything else I need to change?
Thanks~
Attached is my pip environment: