v0.6.0beta (Pre-release)

What's Changed
This release comes with a Docker image based on PyTorch v1.13.1 and pytorch_lightning 1.9.3. The image is available on Docker Hub: `visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3`
Docker + version + pip install

- The Docker image comes with a default `aloception` user.
- You can now check the version of the package you are using with `aloscene.__version__`, `alonet.__version__`, and `alodataset.__version__`. All of them are currently linked to the same version, `v0.6.0beta`.
- You can install the packages from pip.
Docker with Aloception user

When running the new Docker image, it is recommended to map your home directory into the container:

```
-e LOCAL_USER_ID=$(id -u) -v /home/YOUR_USER/:/home/aloception/
```
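Putting the flags together, a full invocation might look like the following sketch. Only the image tag and the two mappings above come from this release; the `--gpus`, `-it`, and `--rm` flags and the mount layout are assumptions to adapt to your setup:

```shell
# Hypothetical full docker run call (flags other than -e/-v are illustrative)
docker run --gpus all -it --rm \
    -e LOCAL_USER_ID=$(id -u) \
    -v /home/YOUR_USER/:/home/aloception/ \
    visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3
```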
Pip Install

The setup.py now works. If you are not planning to change or update Aloception, you can install it from Git with the following command from within the Docker image (it is not pre-installed by default):

```
pip install git+https://github.com/Visual-Behavior/aloception-oss.git@v0.6.0beta
```

If you are planning to change Aloception, you can install it in editable mode from the `aloception-oss` folder:

```
pip install -e .
```
Features & fixes

- Fix bug 1: the `on_train_batch_end` hook used by `MetricsCallback` and `run_pl_training` no longer requires `dataloader_idx`.
- The `FitLoop` object of pytorch-lightning no longer has the public property `should_accumulate` since version 1.5.0.
- `run_pl_training`: pytorch-lightning changed the initialization method of `Trainer`, especially for multi-GPU training.
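The hook change above boils down to a signature update. The sketch below uses stand-in classes (the real `MetricsCallback` lives in alonet and inherits from `pytorch_lightning.Callback`; the bookkeeping body is hypothetical):

```python
class Callback:
    """Minimal stand-in for pytorch_lightning.Callback."""


class MetricsCallback(Callback):
    # Recent pytorch-lightning calls this hook without dataloader_idx,
    # so the override must drop that parameter as well.
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        self.last_batch_idx = batch_idx  # hypothetical bookkeeping


cb = MetricsCallback()
# Lightning would pass real objects here; None is enough for the sketch.
cb.on_train_batch_end(None, None, outputs=None, batch=None, batch_idx=3)
```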
- New feature 1: structured directory for logging and checkpoints during training.
- New feature 2: a config file `alonet_config.json`, created in `~/.aloception`, now defines the default directories for saving logs and checkpoints during training. If the file does not exist, the user can create it during the first training.
- New feature 3: you can also use paths different from those in `alonet_config.json` by passing `--log_save_dir path_to_dir` and `--cp_save_dir path_to_dir`.
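A minimal sketch of how such a config lookup could work. Only the file location `~/.aloception/alonet_config.json` comes from the release notes; the key names `log_save_dir` and `cp_save_dir` are assumptions mirroring the CLI flags, not alonet's actual schema:

```python
import json
import os

# Hypothetical reader for the default-directory config file.
def get_save_dirs(config_path=None):
    if config_path is None:
        config_path = os.path.expanduser("~/.aloception/alonet_config.json")
    with open(config_path) as f:
        config = json.load(f)
    # Key names are assumed here, mirroring --log_save_dir / --cp_save_dir.
    return config["log_save_dir"], config["cp_save_dir"]
```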
- Fix unit tests: mostly removed warnings & put back oriented Boxes2D with CUDA (now automatically built into the Docker image).
- Fix setup.py.
- Fix bug X: fix ZeroDivision error in metrics.
- New feature: add precision and recall.
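In the spirit of the two entries above, a sketch of precision/recall with an explicit zero-division guard. The function name and signature are illustrative, not alonet's API:

```python
# Hypothetical metric helper: guard the denominators that can be zero
# (no predicted positives, or no ground-truth positives).
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
```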
- Fix bug X: `depth.encode_absolute` has a dimension bug in torch 1.13. #337
  - How to fix: remove the `unsqueeze` in `encode_absolute`.
  - Result after fixing:
```python
>>> from aloscene import Depth
>>> import torch
>>> depth = Depth(torch.zeros((1, 128, 128)), is_absolute=False)
>>> depth.encode_absolute()
tensor(
        scale=1
        shift=0
        distortion=1.0
        is_planar=True
        is_absolute=True
        projection=pinhole
        [[[100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          ...,
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.]]])
>>> depth.encode_absolute(keep_negative=True)
tensor(
        scale=1
        shift=0
        distortion=1.0
        is_planar=True
        is_absolute=True
        projection=pinhole
        [[[100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          ...,
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.],
          [100000000., 100000000., 100000000.,  ..., 100000000.,
           100000000., 100000000.]]])
>>>
```
Introducing base classes for datamodules and train pipelines (inspired by the BaseDataset class). @thibo73800

- New feature 1: `BaseDataModule` class. My motivation for this class is that I kept reusing code solutions from other projects, such as the arguments, the aug/no-aug `train_transform` structure, etc. This created quite a bit of copy-pasting, which is undesirable. My view for this class is that in the future, when creating a DataModule for a project, we inherit from `BaseDataModule` and implement only the transforms and the setup. It acts as a wrapper around the PyTorch Lightning DataModule class, providing all Aloception users with a common code base.
- New feature 2: `BaseLightningModule`. Same motivation, but for training pipelines. This time the often-reused bits are again the arguments, the optimizers, the run functions, etc. When inheriting, the user needs to implement the model and the criterion. The user is of course free to write their own functions in the child class for more complex cases.
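The intended inheritance pattern can be sketched as follows. `BaseDataModule` here is a minimal stand-in (the real one wraps `pytorch_lightning.LightningDataModule`), and all method bodies and names below are illustrative:

```python
class BaseDataModule:
    """Stand-in base class: shared arguments and plumbing live here."""

    def __init__(self, batch_size=8):
        self.batch_size = batch_size  # example of a shared argument

    def train_transform(self, frame):
        raise NotImplementedError  # project-specific: augmentations

    def setup(self, stage=None):
        raise NotImplementedError  # project-specific: dataset creation


class MyProjectDataModule(BaseDataModule):
    # Child classes implement only the transforms and the setup.
    def train_transform(self, frame):
        return frame  # augmentations would go here

    def setup(self, stage=None):
        self.train_set = list(range(100))  # hypothetical dataset
```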
Logs
- Merge pull request #325 from Visual-Behavior/bv0.5.0-beta by @thibo73800 in #326
- fix incompatibilites lightning1.9 by @anhtu293 in #328
- Issue 56: logging and checkpoint directories by @anhtu293 in #301
- Torch v2 by @thibo73800 in #331
- Fix unit test & setup.py by @thibo73800 in #332
- Revert "Fix unit test & setup.py" by @thibo73800 in #335
- Fixe unit test, fixe setup.py by @thibo73800 in #336
- fix metrics by @Data-Iab in #330
- Fix `Depth.encode_absolute` by @anhtu293 in #339
- Generic datamodules by @Dee61298 in #341
- Aloception v0.6.0dev by @thibo73800 in #340
- Fix per size MAP by @Aurelien-VB in #347
- adapt torch1.13 by @anhtu293 in #346
- Add warning when augmentations fail by @Aurelien-VB in #348
- fix : remove exportation arg (not supported anymore) by @Data-Iab in #344
- fix zero div by @Data-Iab in #343
- Dev by @thibo73800 in #349
Full Changelog: v0.5.1...v0.6.0beta