I would be grateful to know whether there is any option to use a pretrained model. `main.py` doesn't contain any code that loads the model from a checkpoint, and `README.md` doesn't mention it either.
It is not really clear how to use the provided checkpoints. Usually the model is a `torch.nn.Module`, so I can load it like this:

```python
import torch

model = build_model(cfg, arguments, args.local_rank, args.distributed)
state_dict = torch.load("/content/visual-genome/checkpoints/faster_rcnn_ckpt.pth", map_location="cpu")
model.load_state_dict(state_dict)
```
But here `SceneGraphGeneration` is not a `torch.nn.Module`, so the snippet above does not apply to it directly. I would be happy to know whether there is any way to use this in a real project.
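In case it clarifies what I am after, this is the workaround I was going to try: pull the underlying network out of the wrapper and load the weights into that, with `strict=False` so key mismatches are reported instead of raising. The attribute name (`scene_parser`) and the checkpoint layout (weights possibly nested under a `"model"` or `"state_dict"` key, with a `"module."` prefix left by `DistributedDataParallel`) are guesses on my part, not something I found documented in the repo:

```python
import torch

# Build the wrapper the same way main.py does.
generator = build_model(cfg, arguments, args.local_rank, args.distributed)

ckpt = torch.load(
    "/content/visual-genome/checkpoints/faster_rcnn_ckpt.pth",
    map_location="cpu",
)

# Checkpoints are often dicts that nest the weights under "model" or
# "state_dict"; print the top-level keys first to see what is inside.
print(list(ckpt.keys())[:10])
state_dict = ckpt.get("model", ckpt.get("state_dict", ckpt))

# Strip a possible "module." prefix left over from DistributedDataParallel.
state_dict = {k[len("module."):] if k.startswith("module.") else k: v
              for k, v in state_dict.items()}

# `scene_parser` is my guess at the attribute that holds the actual nn.Module.
missing, unexpected = generator.scene_parser.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```

If something along these lines is the intended way to use the checkpoints, a short note in `README.md` would already help a lot.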