
Error when setting up the web demo example #540


Description

@RoninBang

In basic_demo, I used the following command to launch the web demo:
python web_demo.py --from_pretrained ../cogagent-chat-hf --version chat --bf16
The cogagent-chat-hf directory is the model I downloaded from Hugging Face.
But it reported the following error:

(visual-llm) root@ubuntu:~/visual-LLM/CogVLM/basic_demo# python web_demo.py --from_pretrained ../cogagent-chat-hf --version chat --bf16
[2024-12-12 11:42:00,560] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
Please build and install Nvidia apex package with option '--cuda_ext' according to https://github.com/NVIDIA/apex#from-source .
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /root/visual-LLM/CogVLM/basic_demo/web_demo.py:234 in │
│ │
│ 231 │ rank = int(os.environ.get('RANK', 0)) │
│ 232 │ world_size = int(os.environ.get('WORLD_SIZE', 1)) │
│ 233 │ args = parser.parse_args() │
│ ❱ 234 │ main(args) │
│ 235 │
│ │
│ /root/visual-LLM/CogVLM/basic_demo/web_demo.py:165 in main │
│ │
│ 162 │
│ 163 def main(args): │
│ 164 │ global model, image_processor, cross_image_processor, text_processor_infer, is_groun │
│ ❱ 165 │ model, image_processor, cross_image_processor, text_processor_infer = load_model(arg │
│ 166 │ is_grounding = 'grounding' in args.from_pretrained │
│ 167 │ │
│ 168 │ gr.close_all() │
│ │
│ /root/visual-LLM/CogVLM/basic_demo/web_demo.py:65 in load_model │
│ │
│ 62 from sat.quantization.kernels import quantize │
│ 63 │
│ 64 def load_model(args): │
│ ❱ 65 │ model, model_args = AutoModel.from_pretrained( │
│ 66 │ │ args.from_pretrained, │
│ 67 │ │ args=argparse.Namespace( │
│ 68 │ │ deepspeed=None, │
│ │
│ /root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/sat/model/base_model.py:342 in │
│ from_pretrained │
│ │
│ 339 │ @classmethod
│ 340 │ def from_pretrained(cls, name, args=None, *, home_path=None, url=None, prefix='', bu │
│ 341 │ │ if build_only or 'model_parallel_size' not in overwrite_args: │
│ ❱ 342 │ │ │ return cls.from_pretrained_base(name, args=args, home_path=home_path, url=ur │
│ 343 │ │ else: │
│ 344 │ │ │ new_model_parallel_size = overwrite_args['model_parallel_size'] │
│ 345 │ │ │ if new_model_parallel_size != 1 or new_model_parallel_size == 1 and args.mod │
│ │
│ /root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/sat/model/base_model.py:323 in │
│ from_pretrained_base │
│ │
│ 320 │ │ │ null_args = True │
│ 321 │ │ else: │
│ 322 │ │ │ null_args = False │
│ ❱ 323 │ │ args = update_args_with_file(args, path=os.path.join(model_path, 'model_config.j │
│ 324 │ │ args = overwrite_args_by_dict(args, overwrite_args=overwrite_args) │
│ 325 │ │ if not hasattr(args, 'model_class'): │
│ 326 │ │ │ raise ValueError('model_config.json must have key "model_class" for AutoMode │
│ │
│ /root/anaconda3/envs/visual-llm/lib/python3.10/site-packages/sat/arguments.py:469 in │
│ update_args_with_file │
│ │
│ 466 │
│ 467 │
│ 468 def update_args_with_file(args, path): │
│ ❱ 469 │ with open(path, 'r', encoding='utf-8') as f: │
│ 470 │ │ config = json.load(f) │
│ 471 │ # expand relative path │
│ 472 │ folder = os.path.dirname(path) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
FileNotFoundError: [Errno 2] No such file or directory: '../cogagent-chat-hf/model_config.json'

I'm not sure what to do about this. Also, could you tell me what the composite_demo directory is for and how to use the files in it? I didn't find any usage instructions in the README.
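
My guess, which may be wrong, is that basic_demo/web_demo.py loads the checkpoint through sat (SwissArmyTransformer), which looks for model_config.json in the checkpoint directory, while the cogagent-chat-hf directory I downloaded is in HuggingFace format and only contains config.json. Below is a small check one could run to see which layout a local checkpoint directory has (describe_checkpoint is just an illustrative helper, not part of the repo):

import os
import sys

def describe_checkpoint(path: str) -> str:
    # SAT checkpoints ship model_config.json; HuggingFace checkpoints ship config.json.
    if os.path.isfile(os.path.join(path, "model_config.json")):
        return "looks like a SAT-format checkpoint (what web_demo.py expects)"
    if os.path.isfile(os.path.join(path, "config.json")):
        return "looks like a HuggingFace-format checkpoint (load it with transformers instead)"
    return "no model_config.json or config.json found"

if __name__ == "__main__":
    ckpt = sys.argv[1] if len(sys.argv) > 1 else "../cogagent-chat-hf"
    print(ckpt, "->", describe_checkpoint(ckpt))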
