
[Bug] TypeError: __init__() got an unexpected keyword argument 'num_entity' #126

Closed
bhadreshpsavani opened this issue Aug 24, 2022 · 2 comments


The TorchDrug Knowledge Graph Reasoning tutorial contains the following code:

from torchdrug import models
model = models.NeuralLogicProgramming(num_entity=dataset.num_entity,
                                      num_relation=dataset.num_relation,
                                      hidden_dim=128,
                                      num_step=3,
                                      num_lstm_layer=1)

which raises this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-2a7043191fc0> in <module>
      4                                       hidden_dim=128,
      5                                       num_step=3,
----> 6                                       num_lstm_layer=1)

TypeError: __init__() got an unexpected keyword argument 'num_entity'

Note: it works fine with version 0.1.2.post1.
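In the meantime, pinning that release (pip install torchdrug==0.1.2.post1) is a usable stopgap.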

bhadreshpsavani (Author) commented Aug 24, 2022

If we don't pass num_entity, then executing the training code

optimizer = torch.optim.Adam(task.parameters(), lr=1.0e-2)
solver = core.Engine(task, train_set, valid_set, test_set, optimizer,
                     gpus=[0], batch_size=64)
solver.train(num_epoch=1)

gives the error below:

AttributeError                            Traceback (most recent call last)
<ipython-input-7-a2710d940934> in <module>
      2 solver = core.Engine(task, train_set, valid_set, test_set, optimizer,
      3                      gpus=[0], batch_size=64)
----> 4 solver.train(num_epoch=1)

7 frames
/usr/local/lib/python3.7/dist-packages/torchdrug/core/engine.py in train(self, num_epoch, batch_per_epoch)
    153                     batch = utils.cuda(batch, device=self.device)
    154 
--> 155                 loss, metric = model(batch)
    156                 if not loss.requires_grad:
    157                     raise RuntimeError("Loss doesn't require grad. Did you define any loss in the task?")

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torchdrug/tasks/reasoning.py in forward(self, batch, all_loss, metric)
     83         metric = {}
     84 
---> 85         pred = self.predict(batch, all_loss, metric)
     86         pos_h_index, pos_t_index, pos_r_index = batch.t()
     87 

/usr/local/lib/python3.7/dist-packages/torchdrug/tasks/reasoning.py in predict(self, batch, all_loss, metric)
    158             t_index[:batch_size // 2, 1:] = neg_index[:batch_size // 2]
    159             h_index[batch_size // 2:, 1:] = neg_index[batch_size // 2:]
--> 160             pred = self.model(self.fact_graph, h_index, t_index, r_index, all_loss=all_loss, metric=metric)
    161 
    162         return pred

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torchdrug/models/neurallp.py in forward(self, graph, h_index, t_index, r_index, all_loss, metric)
    106         r_index_set = hr_index_set % graph.num_relation
    107 
--> 108         output = self.get_t_output(graph, h_index_set, r_index_set)
    109 
    110         score = output[t_index, hr_inverse]

<decorator-gen-306> in get_t_output(self, graph, h_index, r_index)

/usr/local/lib/python3.7/dist-packages/torchdrug/utils/decorator.py in wrapper(forward, self, *args, **kwargs)
    135 
    136         if self.training:
--> 137             return forward(self, *args, **kwargs)
    138 
    139         sig = inspect.signature(forward)

/usr/local/lib/python3.7/dist-packages/torchdrug/models/neurallp.py in get_t_output(self, graph, h_index, r_index)
     54 
     55         hidden, hx = self.lstm(query)
---> 56         memory = functional.one_hot(h_index, graph.num_entity).unsqueeze(0)
     57 
     58         for i in range(self.num_step):

AttributeError: 'Graph' object has no attribute 'num_entity'
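
The failing line suggests the cause: get_t_output reads graph.num_entity, but a generic torchdrug data.Graph exposes its node count as num_node (num_entity exists on the knowledge-graph dataset classes, not on Graph). A minimal sketch of the kind of one-line change that would resolve it, assuming the fact graph's nodes are exactly the entities (the actual fix is the commit referenced below):

# torchdrug/models/neurallp.py, inside get_t_output (sketch, not the verbatim patch):
# data.Graph has no num_entity attribute, but it does expose num_node.
memory = functional.one_hot(h_index, graph.num_node).unsqueeze(0)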

KiddoZhu (Member) commented:

Hi! Thanks for pointing this out. This is fixed in c277640.
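
With that fix applied, the tutorial call should only need the relation count. A sketch of the updated constructor, assuming a build that includes c277640 (where the model reads the entity count off the graph at forward time and __init__ no longer takes num_entity):

from torchdrug import models

model = models.NeuralLogicProgramming(num_relation=dataset.num_relation,
                                      hidden_dim=128,
                                      num_step=3,
                                      num_lstm_layer=1)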
