
Error when running MegaForCausalLM example code in Docs #22974

Closed
Tylersuard opened this issue Apr 24, 2023 · 5 comments
@Tylersuard
Contributor

Tylersuard commented Apr 24, 2023

System Info

Most recent version of Transformers from GitHub, on Google Colab

Who can help?

@ArthurZucker @younesbelkada

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

This is the example code from the documentation for MegaForCausalLM (https://huggingface.co/docs/transformers/main/model_doc/mega):

from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig
import torch

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False
model = MegaForCausalLM.from_pretrained("mnaylor/mega-base-wikitext", config=config)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

prediction_logits = outputs.logits

After installing Transformers from source, when I run the above code snippet on Colab, I get this error:

RuntimeError: Error(s) in loading state_dict for MegaForCausalLM:
size mismatch for mega.layers.0.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.0.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.0.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.0.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
size mismatch for mega.layers.1.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.1.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.1.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.1.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
size mismatch for mega.layers.2.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.2.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.2.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.2.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
size mismatch for mega.layers.3.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.3.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.3.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]).
size mismatch for mega.layers.3.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.

Expected behavior

The pretrained model should load all weights without error.

@Tylersuard Tylersuard changed the title Error when running example code in Docs Error when running MegaForCausalLM example code in Docs Apr 24, 2023
@ArthurZucker
Collaborator

ArthurZucker commented Apr 25, 2023

Hey! Thanks for reporting! This is because the default value of the bidirectional configuration argument is True. Setting it to False reduces the size of the EMA matrix, so it no longer matches the checkpoint. If you still want to use the checkpoint, passing ignore_mismatched_sizes=True will let you initialize the model.
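
For reference, a minimal sketch of that workaround (same checkpoint and config as in the reproduction snippet above; note that the mismatched EMA gate weights are randomly re-initialized rather than loaded from the checkpoint):

from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False  # halves the EMA gate dimension relative to the checkpoint

# ignore_mismatched_sizes=True skips the mismatched EMA gate weights and
# re-initializes them instead of raising the size-mismatch error above.
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext",
    config=config,
    ignore_mismatched_sizes=True,
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits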

@ArthurZucker ArthurZucker self-assigned this Apr 25, 2023
@Tylersuard
Contributor Author

Thank you for your response. When I set ignore_mismatched_sizes=True, the code works. However, the example code in the docs is still incorrect.

@amyeroberts
Collaborator

@Tylersuard Yep, you're right! Would you like to open a PR to update the docs, so you get the git contribution credit for spotting it?

@Tylersuard
Contributor Author

@amyeroberts Absolutely!

@Tylersuard
Contributor Author

Ok! I just opened the PR here: #23382
