
v0.6.0beta

Pre-release
@thibo73800 thibo73800 released this 06 Apr 13:33
· 18 commits to master since this release

What's Changed

This release comes with a Docker image based on PyTorch v1.13.1 and pytorch_lightning 1.9.3. The image is available on Docker Hub as visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3

Docker + version + pip install

  • The Docker image comes with a default aloception user.
  • You can now check the version of the package you are using with aloscene.__version__, alonet.__version__, and alodataset.__version__. All of them are currently linked to the same version, v0.6.0beta.
  • You can install the package with pip.

Docker with Aloception user

When running the new Docker image, it is recommended to map your home directory into the container, for example:

docker run -e LOCAL_USER_ID=$(id -u) -v /home/YOUR_USER/:/home/aloception/ visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3

Pip Install

The setup.py now works. If you do not plan to modify Aloception, you can install it from Git inside the Docker container (it is not pre-installed by default):

pip install git+https://github.com/Visual-Behavior/aloception-oss.git@v0.6.0beta

If you plan to modify Aloception, you can install it in editable mode from the aloception-oss folder with the following command:

pip install -e .

Features & fixes

  • Fix bug 1: MetricsCallback and run_pl_training
  • The on_train_batch_end hook no longer requires dataloader_idx.
  • The FitLoop object of pytorch-lightning no longer has the public property should_accumulate since version 1.5.0.
  • run_pl_training: pytorch-lightning changed the initialization method of Trainer, especially for multi-GPU training.
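Code that must support both old and new pytorch-lightning releases can branch on the installed version. The helper below is a hypothetical sketch (not part of aloception) showing one way to do that for the removed should_accumulate property:

```python
# Hypothetical compatibility shim (not part of aloception): pytorch-lightning's
# FitLoop lost its public `should_accumulate` property in v1.5.0, so code that
# supports several versions can branch on the installed version string.

def version_tuple(version: str) -> tuple:
    """Parse an 'X.Y.Z' version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split(".")[:3])

def has_public_should_accumulate(pl_version: str) -> bool:
    """True when the installed pytorch-lightning still exposes
    FitLoop.should_accumulate as a public property (i.e. before 1.5.0)."""
    return version_tuple(pl_version) < (1, 5, 0)
```

For example, `has_public_should_accumulate("1.9.3")` is False for the pytorch-lightning version shipped in this release's Docker image.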

  • New feature 1: structured directory for logging and checkpoints during training
  • New feature 2: A config file alonet_config.json, created in ~/.aloception, now defines the default directories for saving logs and checkpoints during training. If the file does not exist, the user can create it during the first training run.
  • New feature 3: Paths other than those in alonet_config.json can be used by passing --log_save_dir path_to_dir and --cp_save_dir path_to_dir.
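The create-if-missing behaviour described above can be sketched as follows. The key names ("log_save_dir", "checkpoint_save_dir") are assumptions for illustration only; the actual schema of alonet_config.json is defined by alonet:

```python
import json
from pathlib import Path

def ensure_config(base_dir: Path) -> dict:
    """Create alonet_config.json under base_dir if it is missing, then return
    its contents. Key names here are illustrative assumptions, not the real
    alonet schema."""
    config_path = base_dir / "alonet_config.json"
    default_config = {
        "log_save_dir": str(base_dir / "logs"),
        "checkpoint_save_dir": str(base_dir / "checkpoints"),
    }
    base_dir.mkdir(parents=True, exist_ok=True)
    # Only write defaults on first use, mirroring the first-training behaviour.
    if not config_path.exists():
        config_path.write_text(json.dumps(default_config, indent=2))
    return json.loads(config_path.read_text())

# In aloception the file lives at ~/.aloception/alonet_config.json:
# config = ensure_config(Path.home() / ".aloception")
```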

Fix unit tests: mostly removed warnings and put back oriented Boxes2D with CUDA (now automatically built into the Docker image).
Fix setup.py.



  • Fix bug X: fix ZeroDivisionError in metrics.

  • New feature: add precision and recall.
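As a hedged illustration (not the actual aloception metrics API), precision and recall can be computed from true-positive, false-positive, and false-negative counts, with the divisions guarded so that empty inputs no longer raise ZeroDivisionError:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Return (precision, recall). The divisions are guarded: when there are
    no predictions (tp + fp == 0) or no targets (tp + fn == 0), the metric
    is defined as 0.0 instead of raising ZeroDivisionError."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
```

For example, `precision_recall(8, 2, 2)` returns `(0.8, 0.8)`, and `precision_recall(0, 0, 0)` returns `(0.0, 0.0)` rather than raising.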

  • Fix bug X : depth.encode_absolute has a dimension bug in torch 1.13. #337

  • How to fix: remove the unsqueeze in encode_absolute
  • Result after fixing:
>>> from aloscene import Depth
>>> import torch
>>> depth = Depth(torch.zeros((1, 128, 128)), is_absolute=False)
>>> depth.encode_absolute()
tensor(
	scale=1
	shift=0
	distortion=1.0
	is_planar=True
	is_absolute=True
	projection=pinhole
	[[[100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         ...,
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.]]])
>>> depth.encode_absolute(keep_negative=True)
tensor(
	scale=1
	shift=0
	distortion=1.0
	is_planar=True
	is_absolute=True
	projection=pinhole
	[[[100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         ...,
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.]]])


Introducing base classes for datamodules and train pipelines (inspired by the BaseDataset class).
@thibo73800

  • New feature 1 : BaseDataModule class
    My motivation for this class is that I kept reusing code from other projects, such as the arguments, the aug/no-aug train_transform structure, etc. This led to a lot of copy-pasting, which is undesirable. My view is that in the future, when creating a DataModule for a project, we inherit from the BaseDataModule class and implement only the transforms and the setup. It acts as a wrapper around the PyTorch Lightning DataModule class, providing all aloception users with a common code base.

  • New feature 2 : BaseLightningModule

Same motivation, but for training pipelines. This time, the often-reused bits are again the arguments, the optimizers, the run functions, etc. When inheriting, the user only needs to implement the model and the criterion. The user is of course free to write their own functions in the child class for more complex cases.
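The inheritance pattern described above can be sketched in plain Python. All names and method signatures below are assumptions for illustration; the real classes wrap pytorch_lightning's LightningDataModule and LightningModule:

```python
from abc import ABC, abstractmethod

class BaseDataModule(ABC):
    """Sketch: shared arguments and boilerplate live in the base class;
    subclasses provide only the transforms and the dataset setup."""

    @abstractmethod
    def train_transform(self, frame):
        ...

    @abstractmethod
    def setup(self, stage=None):
        ...

class BaseLightningModule(ABC):
    """Sketch: optimizers, run functions, etc. live in the base class;
    subclasses provide only the model and the criterion."""

    @abstractmethod
    def build_model(self):
        ...

    @abstractmethod
    def build_criterion(self):
        ...

class MyDataModule(BaseDataModule):
    """Hypothetical project-specific subclass."""

    def train_transform(self, frame):
        # Project-specific augmentations would go here.
        return frame

    def setup(self, stage=None):
        # Project-specific dataset creation would go here.
        self.train_set = []
```

The abstract methods enforce the contract: a subclass that forgets to implement `setup` (or `build_model`) cannot be instantiated.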


Logs

Full Changelog: v0.5.1...v0.6.0beta