Vectorization can give incorrect results if similar costs depend on values not declared as optim/aux vars #511
Comments
Oh. So in the meantime, is there any approach that makes the flag work correctly? I use a similar flag in many cases, but I haven't noticed this issue.
For now, the two easiest workarounds, although neither is ideal, are: (1) turn vectorization off, or (2) register the flag as an auxiliary variable and branch on its tensor inside `error()`, for example:
```python
import torch
import theseus as th


class NoLongerBadCost(th.CostFunction):
    def __init__(self, x: th.Vector, flag: th.Variable, name=None):
        super().__init__(th.ScaleCostWeight(1.0), name=name)
        self.x = x
        self.flag = flag
        self.register_optim_vars(["x"])
        # Registering the flag as an aux var lets vectorization distinguish
        # cost instances that hold different flag values.
        self.register_aux_vars(["flag"])

    def error(self) -> torch.Tensor:
        # Branch on the flag tensor inside error(), rather than on a plain
        # Python attribute set in __init__.
        return torch.where(
            self.flag.tensor,
            torch.ones_like(self.x.tensor),
            torch.zeros_like(self.x.tensor),
        )

    # dim(), jacobians(), and _copy_impl() omitted here for brevity.
```
I'm hoping I can add a fix before the end of the half, although, looking at the code above, I'm now realizing there might not be a one-size-fits-all solution, since the correct logic probably depends on each particular cost function. I'll need to think about this.
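As a minimal usage sketch (the variable names and the boolean flag shape below are assumptions, not from the original thread), the flag is wrapped in a `th.Variable` so that each cost instance carries its own flag tensor:

```python
import torch
import theseus as th

x = th.Vector(1, name="x")
# A (batch_size, 1) boolean tensor wrapped as an aux variable.
flag = th.Variable(torch.ones(1, 1, dtype=torch.bool), name="flag")
cost = NoLongerBadCost(x, flag, name="cost")
```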
Thanks for your reply. For the first point, how do I turn vectorization off?
You can do `layer = th.TheseusLayer(optimizer, vectorize=False)`. If you are using an optimizer w/o a …
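For reference, a minimal sketch of the first workaround; the objective and optimizer below are placeholders rather than the setup from the original repro:

```python
import theseus as th

objective = th.Objective()
# ... add cost functions to the objective ...
optimizer = th.GaussNewton(objective, max_iterations=10)

# Turning vectorization off avoids the incorrect batching of "similar" costs.
layer = th.TheseusLayer(optimizer, vectorize=False)
```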
🐛 Bug
Steps to Reproduce
Here is a simple repro
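(The repro snippet itself did not survive extraction. The sketch below is a reconstruction of the kind of setup that triggers the issue, based on the title and the fixed cost shown above; the class name, driver code, and shapes are assumptions.)

```python
import torch
import theseus as th


class BadCost(th.CostFunction):
    # The flag is a plain Python value that is NOT registered as an
    # optim/aux var, so two BadCost instances with different flags look
    # identical ("similar") to the cost-function vectorizer.
    def __init__(self, x: th.Vector, flag: bool, name=None):
        super().__init__(th.ScaleCostWeight(1.0), name=name)
        self.x = x
        self.flag = flag
        self.register_optim_vars(["x"])

    def error(self) -> torch.Tensor:
        offset = 1.0 if self.flag else 0.0
        return self.x.tensor + offset

    def dim(self) -> int:
        return self.x.dof()

    def jacobians(self):
        # d(error)/dx is the identity; dof is 1 in this sketch.
        batch_size = self.x.shape[0]
        return [torch.ones(batch_size, 1, 1)], self.error()

    def _copy_impl(self, new_name=None):
        return BadCost(self.x.copy(), self.flag, name=new_name)


x1 = th.Vector(1, name="x1")
x2 = th.Vector(1, name="x2")
objective = th.Objective()
objective.add(BadCost(x1, flag=True, name="c1"))
objective.add(BadCost(x2, flag=False, name="c2"))
objective.update({"x1": torch.zeros(1, 1), "x2": torch.zeros(1, 1)})

# Non-vectorized evaluation reports the two different errors, as expected.
# The original repro evaluated the objective through theseus's cost-function
# vectorization (the default inside TheseusLayer) and got tensor([[1., 1.]]),
# because both "similar" costs were batched through a single error() call.
print(objective.error())
```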
Expected behavior
The above prints `tensor([[1., 1.]])`, but it should print `tensor([[1., 0.]])`.