
[SLM] Add support for InternLM architecture #1835

Merged: 18 commits, Feb 28, 2024
fix pylint issue
tlopex authored Feb 27, 2024
commit 83a41b9000a282a0f92a3d174f9b9cd76344c329
6 changes: 3 additions & 3 deletions python/mlc_chat/model/model_preset.py
@@ -447,12 +447,12 @@
         "use_cache": True,
         "vocab_size": 125696,
     },
-    "internlm": {
+    "internlm": {
         "architectures": ["InternLMForCausalLM"],
         "auto_map": {
             "AutoConfig": "configuration_internlm.InternLMConfig",
             "AutoModel": "modeling_internlm.InternLMForCausalLM",
-            "AutoModelForCausalLM": "modeling_internlm.InternLMForCausalLM"
+            "AutoModelForCausalLM": "modeling_internlm.InternLMForCausalLM",
         },
         "bias": True,
         "bos_token_id": 1,
@@ -471,7 +471,7 @@
         "torch_dtype": "float16",
         "transformers_version": "4.33.2",
         "use_cache": True,
-        "vocab_size": 103168
+        "vocab_size": 103168,
     },
# TODO(mlc-team): enable the model presets when stablized.
# "gemma_2b": {
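The change above only adds trailing commas to the multi-line literals in the `internlm` model preset, a style that linters and formatters commonly enforce for multi-line collections. As a minimal sketch (field values copied from the diff; the `MODEL_PRESETS` name and surrounding structure are assumptions, not the full file), the corrected entry looks like:

```python
# Sketch of the corrected "internlm" preset entry from model_preset.py.
# Only the fields visible in the diff are included; the trailing comma
# after each final element is what the lint fix adds.
MODEL_PRESETS = {
    "internlm": {
        "architectures": ["InternLMForCausalLM"],
        "auto_map": {
            "AutoConfig": "configuration_internlm.InternLMConfig",
            "AutoModel": "modeling_internlm.InternLMForCausalLM",
            "AutoModelForCausalLM": "modeling_internlm.InternLMForCausalLM",
        },
        "bias": True,
        "bos_token_id": 1,
        "torch_dtype": "float16",
        "transformers_version": "4.33.2",
        "use_cache": True,
        "vocab_size": 103168,
    },
}

# Trailing commas are a no-op at runtime; the dict parses identically.
print(MODEL_PRESETS["internlm"]["vocab_size"])  # → 103168
```

Note that trailing commas have no runtime effect in Python; the fix is purely stylistic, which is why the diff changes no values.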