Commit ca9c36b

6638 use np.prod instead of np.product (#6639)

Fixes #6638

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/` folder.

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

1 parent 1aeb04d commit ca9c36b

File tree

2 files changed: +2 −2 lines changed

monai/networks/nets/regressor.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -143,7 +143,7 @@ def _get_layer(
         return layer

     def _get_final_layer(self, in_shape: Sequence[int]):
-        linear = nn.Linear(int(np.product(in_shape)), int(np.product(self.out_shape)))
+        linear = nn.Linear(int(np.prod(in_shape)), int(np.prod(self.out_shape)))
         return nn.Sequential(nn.Flatten(), linear)

     def forward(self, x: torch.Tensor) -> torch.Tensor:
```
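For context on why this one-word change matters: `np.product` was a legacy alias of `np.prod`; it was deprecated in NumPy 1.25 and removed in NumPy 2.0, while `np.prod` computes the same element product. A minimal sketch of the sizing logic above, using a hypothetical `in_shape` (not from the patch):

```python
import numpy as np

# Hypothetical input shape for a regressor head: 4 channels, 8x8 spatial.
in_shape = (4, 8, 8)

# np.prod multiplies all elements; this gives the in_features of the
# final nn.Linear after nn.Flatten collapses the tensor.
flat = int(np.prod(in_shape))
# flat == 256
```

On NumPy >= 2.0 the old spelling `np.product` raises `AttributeError`, which is why the rename is a non-breaking fix rather than a behavior change.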

monai/networks/nets/varautoencoder.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -120,7 +120,7 @@ def __init__(
         for s in strides:
             self.final_size = calculate_out_shape(self.final_size, self.kernel_size, s, padding)  # type: ignore

-        linear_size = int(np.product(self.final_size)) * self.encoded_channels
+        linear_size = int(np.prod(self.final_size)) * self.encoded_channels
         self.mu = nn.Linear(linear_size, self.latent_size)
         self.logvar = nn.Linear(linear_size, self.latent_size)
         self.decodeL = nn.Linear(self.latent_size, linear_size)
```
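The `linear_size` computed here sizes the fully connected layers at the autoencoder bottleneck: the product of the downsampled spatial dimensions times the channel count. A sketch with hypothetical values (the real ones depend on the network's constructor arguments):

```python
import numpy as np

# Hypothetical bottleneck dimensions: spatial size after the strided
# convolution stack, and channels at the encoder output.
final_size = (8, 8)
encoded_channels = 32

# Total features entering the mu/logvar linear layers:
linear_size = int(np.prod(final_size)) * encoded_channels
# linear_size == 2048
```

The `int(...)` wrapper matters because `np.prod` returns a NumPy scalar, and `nn.Linear` expects plain Python integers for its feature counts.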

0 commit comments