Vectorization refactor #205
Conversation
Overall LGTM. The only thing I'm concerned with is making sure that _cached_errs and cached_jacobians in CostFunctionWrapper are cleared if the vars in the corresponding cost functions are updated.
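For illustration, a minimal sketch of the kind of cache invalidation being asked for here. The wrapper internals and the clear_cache hook below are assumptions made for the example, not the PR's actual implementation:

```python
# Illustrative sketch only; attribute and method names are assumptions based
# on the review comment above, not the PR's actual code.
from typing import List, Optional

import torch


class _CostFunctionWrapper:
    def __init__(self, cost_fn):
        self.cost_fn = cost_fn
        # Cached outputs of the vectorized evaluation.
        self._cached_errs: Optional[torch.Tensor] = None
        self._cached_jacobians: Optional[List[torch.Tensor]] = None

    def clear_cache(self) -> None:
        # Invalidate cached values so the next evaluation recomputes them.
        self._cached_errs = None
        self._cached_jacobians = None

    def error(self) -> torch.Tensor:
        # Fall back to the wrapped cost function when no cached value is set.
        if self._cached_errs is None:
            self._cached_errs = self.cost_fn.error()
        return self._cached_errs
```

The concern above is that whenever variables referenced by a cost function are updated, something like clear_cache() would need to run on the corresponding wrapper so stale errors and Jacobians are never reused.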
Nice work on the refactor @luisenp and of course nice work on the original features @fantaosha! Reviewed everything so far and left comments.
theseus/core/theseus_function.py (outdated)
@@ -64,6 +64,16 @@ def register_aux_vars(self, aux_var_names: Sequence[str]):
         for name in aux_var_names:
             self.register_aux_var(name)

+    def register_vars(self, vars: Iterable[Variable], is_optim_vars: bool = False):
+        for var_ in vars:
Nit: trailing underscore.
Good catch. Changed.
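For reference, a minimal sketch of what a register_vars helper along these lines could look like. Only the signature appears in the diff above, so the body and the internal list names are assumptions (and a later commit renames the kwarg):

```python
# Sketch only: the list attributes and the dispatch below are assumptions;
# the diff shows just the method signature.
from typing import Iterable


class TheseusFunction:
    def __init__(self):
        self._optim_vars = []
        self._aux_vars = []

    def register_vars(self, vars: Iterable, is_optim_vars: bool = False) -> None:
        # Register each variable as either an optimization or auxiliary variable.
        for var in vars:
            if is_optim_vars:
                self._optim_vars.append(var)
            else:
                self._aux_vars.append(var)
```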
@@ -80,7 +80,7 @@ def ee_pose_err_fn(optim_vars, aux_vars):
     max_iterations=15,
     step_size=0.5,
 )
-theseus_optim = th.TheseusLayer(optimizer)
+theseus_optim = th.TheseusLayer(optimizer, vectorize=False)
True?
Oh, forgot about these two. Added.
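To illustrate the resolution, a hedged usage sketch with vectorize=True. The stand-in objective, error function, and optimizer class below are assumptions (the diff only shows the optimizer's keyword arguments), and attribute names such as .tensor may differ across theseus versions:

```python
import torch
import theseus as th

# Hypothetical minimal objective standing in for the IK objective in the example.
x = th.Vector(2, name="x")
target = th.Variable(torch.zeros(1, 2), name="target")

def err_fn(optim_vars, aux_vars):
    # Simple residual: x - target.
    return optim_vars[0].tensor - aux_vars[0].tensor

objective = th.Objective()
objective.add(
    th.AutoDiffCostFunction(
        [x], err_fn, 2, aux_vars=[target], cost_weight=th.ScaleCostWeight(1.0)
    )
)
optimizer = th.GaussNewton(objective, max_iterations=15, step_size=0.5)

# The point of the exchange above: the examples/tests should opt in to vectorization.
theseus_optim = th.TheseusLayer(optimizer, vectorize=True)
sol, info = theseus_optim.forward({"x": torch.ones(1, 2), "target": torch.zeros(1, 2)})
```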
* Created a wrapper cost function class that combines the aux vars for a cost function and its weight.
* Disabled support for optimization variables in cost weights.
* Changed Objective to iterate over CFWrapper if available, and TheseusLayer to create them at init time.
* Added a Vectorizer class and moved CFWrappers there.
* Renamed vectorizer as Vectorize, added logic to replace the Objective iterator, and added it to TheseusLayer.
* Added a CostFunctionSchema -> List[CostFunction] mapping to use for vectorization grouping (a rough sketch follows after this list).
* _CostFunctionWrapper is now meant to just store a cached value coming from vectorization. Otherwise it works just like the cost function it wraps.
* Added code to automatically compute shared vars in Vectorize.
* Changed vectorized costs construction to ensure that their weight is also a copy.
* Implemented the main cost function vectorization logic.
* Fixed a bug that was causing detached gradients.
* Fixed an invalid check in theseus end-to-end unit tests.
* Added a unit test for schema and shared var computation.
* Added a test to check that computed vectorized errors are correct.
* Moved the vectorization update call to the base linearization class.
* Changed code to allow batch_size > 1 in shared variables.
* Fixed a unit test and added a call to Objective.update() in update_vectorization() if batch_size is None.
* Added a new private iterator for vectorized costs.
* Replaced _register_vars_in_list with TheseusFunction.register_vars.
* Renamed the vectorize_cost_fns kwarg as vectorize.
* Added license headers.
* Small refactor.
* Fixed a bug that was preventing vectorized costs from working with to(). End-to-end tests now run with vectorize=True.
* Renamed the private Objective cost function iterator to _get_iterator().
* Renamed the kwarg in register_vars.
* Set vectorize=True for inverse kinematics and backward tests.
* Removed lingering comments.

Co-authored-by: Taosha Fan <6612911+fantaosha@users.noreply.github.com>
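A rough sketch of the schema-based grouping idea mentioned in the list above; the actual CostFunctionSchema format and the Vectorize internals in the PR may differ. The idea is that cost functions sharing a schema (same class, same variable types) can be evaluated in one batched call:

```python
# Illustrative sketch only; the schema string format below is an assumption.
from collections import defaultdict
from typing import Dict, List

CostFunctionSchema = str  # assumed: a hashable key describing a cost function "shape"


def _schema(cost_fn) -> CostFunctionSchema:
    # Cost functions with the same class and the same variable types can share
    # one vectorized (batched) evaluation.
    optim = ",".join(type(v).__name__ for v in cost_fn.optim_vars)
    aux = ",".join(type(v).__name__ for v in cost_fn.aux_vars)
    return f"{type(cost_fn).__name__}|optim:{optim}|aux:{aux}"


def group_for_vectorization(cost_fns) -> Dict[CostFunctionSchema, List]:
    # Bucket cost functions by schema; each bucket is a candidate for batching.
    groups: Dict[CostFunctionSchema, List] = defaultdict(list)
    for cf in cost_fns:
        groups[_schema(cf)].append(cf)
    return groups
```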
Refactored the vectorization logic. I did some quick tests on motion planning and tactile pose estimation, and it seems to be working.
Some TODOs left:
retract_and_update_variables in that one as well)