The split_datasets function is only used internally in the data.tracking module - it would clean things up a bit to define the function there rather than at the package level (i.e. in data/__init__.py).
I think this should be possible without affecting the user API, so hopefully no deprecations etc. Opening here in case there are any objections or I've missed something obvious!
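To illustrate why the user API can stay intact: the function can live in the submodule, with the package `__init__` re-exporting it for backwards compatibility. This is a hypothetical sketch — the splitter body below is a stand-in, not the actual deepcell-tf implementation, and the module paths just mirror the names mentioned in the issue.

```python
# Hypothetical contents of data/tracking.py — an illustrative stand-in
# for split_datasets, not the real deepcell-tf code.
import random

def split_datasets(records, train_frac=0.8, seed=0):
    """Shuffle a list of records and split it into (train, val)."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(train_frac * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# data/__init__.py would then only need a re-export to keep
# `from data import split_datasets` working for existing users:
# from .tracking import split_datasets  # noqa: F401

train, val = split_datasets(list(range(10)), train_frac=0.8)
```

With the re-export in place, both `data.split_datasets` and `data.tracking.split_datasets` resolve to the same function, so no deprecation cycle is required.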
I think there was originally an intention to implement other datasets, in which case the split_datasets function would be used in other modules. However, I don't think we are even using the prepare_dataset function that calls split_datasets in the tracking module. We have moved towards defining stable dataset splits in the data registry, as opposed to splitting the data right before training.
I'm fine with moving split_datasets into data.tracking, but I think we should make a note that some of this code is no longer in use.
Yeah there's certainly a larger discussion to be had about which data loading/generating functionality in deepcell-tf is still in use with the current pipelines, and what (if any) may be candidates for deprecation. Thanks for the feedback!
rossbar added a commit to rossbar/deepcell-tf that referenced this issue on Jan 11, 2023.