Releases: Visual-Behavior/aloception-oss
v0.6.0beta
What's Changed
This release comes with a Docker image based on PyTorch v1.13.1 and pytorch_lightning 1.9.3. The image is available on Docker Hub: `visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3`
Docker + version + pip install
- The Docker image comes with a default `aloception` user.
- You can now check the version of the package you are using with `aloscene.__version__`, `alonet.__version__`, and `alodataset.__version__`. All of them are currently linked to the same version, `v0.6.0beta`.
- You can install the packages from pip.
Docker with Aloception user
When running the new Docker image, it is recommended to map your home directory into the container and to pass your user id:
-e LOCAL_USER_ID=$(id -u) -v /home/YOUR_USER/:/home/aloception/
Pip Install
The `setup.py` now works. If you are not planning to change or update Aloception, you can install it from Git using the following command from inside the Docker image (it is not pre-installed by default):
pip install git+https://github.com/Visual-Behavior/aloception-oss.git@v0.6.0beta
If you are planning to change Aloception, you can install it in editable mode from the `aloception-oss` folder with the following command:
pip install -e .
Features & fixes
- Fix bug 1: the `on_train_batch_end` hook of `MetricsCallback` and `run_pl_training` doesn't require `dataloader_idx` anymore.
- The `FitLoop` object of pytorch-lightning no longer has the public property `should_accumulate` since version 1.5.0.
- `run_pl_training`: pytorch-lightning changed the initialization method of `Trainer`, especially for multi-GPU training.
- New feature 1: Structured directory for logging and checkpoints during training.
- New feature 2: There is now a config file `alonet_config.json` created in `~/.aloception` which defines the default directories for saving logs and checkpoints during training. If the file does not exist, the user can create it during the first training.
- New feature 3: We can also use different paths for logging and checkpoints than those defined in `alonet_config.json` by using `--log_save_dir path_to_dir` and `--cp_save_dir path_to_dir`. A hedged sketch of such a config file is shown below.
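For illustration, a minimal sketch of such a config file written from Python; the exact key names are assumptions inferred from the `--log_save_dir`/`--cp_save_dir` flags above, not confirmed by the release notes:

```python
import json
import os

# Key names are assumptions inferred from the CLI flags above.
config = {
    "log_save_dir": "/data/experiments/logs",
    "cp_save_dir": "/data/experiments/checkpoints",
}
config_path = os.path.expanduser("~/.aloception/alonet_config.json")
os.makedirs(os.path.dirname(config_path), exist_ok=True)
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```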
Fix unit tests: mostly removed warnings & put back oriented Boxes2D with CUDA (now automatically built into the Docker image).
Fix setup.py.
- Fix bug X: fix ZeroDivision error in metrics.
- New feature: add precision and recall.
- Fix bug X: `depth.encode_absolute` has a dimension bug in `torch1.13`. #337
- How to fix: remove `unsqueeze` in `encode_absolute`
- Result after fixing
>>> from aloscene import Depth
>>> import torch
>>> depth = Depth(torch.zeros((1, 128, 128)), is_absolute=False)
>>> depth.encode_absolute()
tensor(
scale=1
shift=0
distortion=1.0
is_planar=True
is_absolute=True
projection=pinhole
[[[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
...,
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.]]])
>>> depth.encode_absolute(keep_negative=True)
tensor(
scale=1
shift=0
distortion=1.0
is_planar=True
is_absolute=True
projection=pinhole
[[[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
...,
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.],
[100000000., 100000000., 100000000., ..., 100000000.,
100000000., 100000000.]]])
>>>
Introducing base classes for datamodules and train pipelines (inspired by the BaseDataset class).
@thibo73800
- New feature 1: `BaseDataModule` class
My motivation for this class is that I kept reusing code solutions from other projects, such as the arguments, the aug/no-aug train_transform structure, etc. This created quite a bit of Ctrl+C/Ctrl+V, which is undesirable. My view for this class is that in the future, when creating a DataModule for a project, we inherit from the `BaseDataModule` class and implement only the transforms and the setup. It acts as a wrapper around the PyTorch Lightning DataModule class, to provide all aloception users with a common code base.
- New feature 2: `BaseLightningModule`
Same motivation, but for training pipelines. This time, the often-reused bits are again the arguments, the optimizers, the run functions, etc. When inheriting, the user needs to implement the model and the criterion. The user is of course free to write their own functions in the child class for more complex cases. A hedged sketch is shown below.
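For illustration only, a minimal sketch of how a project could inherit from these base classes. The `train_transform` and `setup` hooks follow the description above; the import path and the `build_model`/`build_criterion` names are hypothetical placeholders:

```python
# Hedged sketch: import path and hook names are assumptions, not the documented API.
from alonet.common import BaseDataModule, BaseLightningModule


class MyDataModule(BaseDataModule):
    def train_transform(self, frame):
        # Only the transforms are implemented; arguments and loaders
        # are inherited from BaseDataModule.
        return frame.norm_minmax_sym()

    def setup(self, stage=None):
        self.train_dataset = ...  # build the project dataset here


class MyTrainingPipeline(BaseLightningModule):
    def build_model(self):       # hypothetical hook name
        return ...

    def build_criterion(self):   # hypothetical hook name
        return ...
```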
Logs
- Merge pull request #325 from Visual-Behavior/bv0.5.0-beta by @thibo73800 in #326
- fix incompatibilites lightning1.9 by @anhtu293 in #328
- Issue 56: logging and checkpoint directories by @anhtu293 in #301
- Torch v2 by @thibo73800 in #331
- Fix unit test & setup.py by @thibo73800 in #332
- Revert "Fix unit test & setup.py" by @thibo73800 in #335
- Fixe unit test, fixe setup.py by @thibo73800 in #336
- fix metrics by @Data-Iab in #330
- Fix `Depth.encode_absolute` by @anhtu293 in #339
- Generic datamodules by @Dee61298 in #341
- Aloception v0.6.0dev by @thibo73800 in #340
- Fix per size MAP by @Aurelien-VB in #347
- adapt torch1.13 by @anhtu293 in #346
- Add warning when augmentations fail by @Aurelien-VB in #348
- fix : remove exportation arg (not supported anymore) by @Data-Iab in #344
- fix zero div by @Data-Iab in #343
- Dev by @thibo73800 in #349
Full Changelog: v0.5.1...v0.6.0beta
v0.5.1
What's Changed
- Fix bug 1: the `on_train_batch_end` hook of `MetricsCallback` and `run_pl_training` doesn't require `dataloader_idx` anymore.
- The `FitLoop` object of pytorch-lightning no longer has the public property `should_accumulate` since version 1.5.0.
- `run_pl_training`: pytorch-lightning changed the initialization method of `Trainer`, especially for multi-GPU training.
v0.5.0
- Fix bug #243: AugTensors can be called without logging the UserWarning.
- Add the `append_labels` method to `BoundingBoxes3D`.
- New feature 1: Description
from aloscene import BoundingBoxes3D, Labels

box3d = BoundingBoxes3D([[10, 10, 400, 80, 46, 18, 1]])
label = Labels([0])  # one class label for the single box
box3d.append_labels(label)
When label names exist, they are displayed next to the 2D bounding box instead of the IDs.
- New feature: #249 Dataset-from-directory iterator. Main use: calibrating TRT engines.
path1 = "PATH/TO/DIR/WITH/IMAGES1"
path2 = "PATH/TO/DIR/WITH/IMAGES2"
dataset = FromDirectoryDataset(dirs={"right": [path1, path1], "left": [path2, path2]}, slice=[0.2, 0.3])
# Will return dictionary {"right": img1, "left": img2}
sample1 = dataset[0]
- Fix bug #256:
- Reduce the memory size required to export and run TRT engines.
- Raise a runtime error when a precision is not optimized on a device.
- Fix bug: #15
>>> frame = aloscene.Frame(np.random.uniform(0, 1, (3, 50, 100)), names=("C", "H", "W"))
>>> frame = aloscene.Frame.batch_list([frame, frame.clone()])
>>> frame = frame.temporal()
>>> print(f"names: {frame.names}\nshape: {frame.shape}")
names: ('B', 'T', 'C', 'H', 'W')
shape: torch.Size([2, 1, 3, 50, 100])
- New feature 1: The sampler for the train loader can be constructed before calling the method:
sampler = torch.utils.data.RandomSampler(dataset, replacement=True)
loader = dataset.train_loader(sampler=sampler)
- New feature 2: Sampler kwargs can be given to the `train_loader` method:
sampler = torch.utils.data.RandomSampler
loader = dataset.train_loader(sampler=sampler, sampler_kwargs={"replacement":True})
- Check if the requested normalization is supported.
- Set the `mean_std` property to `resnet_rgb_mean_std` at instantiation when using `normalization="resnet"`. A minimal example follows.
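A minimal example of the behavior described above, assuming aloception is installed:

```python
import torch
import aloscene

# With normalization="resnet", the mean_std property is set to
# resnet_rgb_mean_std at instantiation.
frame = aloscene.Frame(torch.rand(3, 64, 64), normalization="resnet")
print(frame.normalization)  # "resnet"
print(frame.mean_std)       # the resnet RGB mean/std pair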
- New feature 1: When manipulating aug tensors, `_saved_names` could accumulate `None` values, which prevented proper concatenation. Note that this might not be an issue once we have a good merging policy for the different properties within aug tensors.
- ***Fix bug #271***: Duplicated function
- Remove `alonet.common.weights.vb_fodler`.
- Add a `create_if_not_found` option to `alonet.common.pl_helpers.vb_folder`: if `.aloception` does not exist in `home`, mkdir is called.
- Used the keyword arguments `**kwargs` to allow using different `padding_mode` & `fill` values.
- Updated the docstring.
Closes #22
This PR improves the black square title displayed on views.
Issue: #12
- Add a parameter to activate/deactivate the title (defaults to True).
Test code (with `add_title=False`):
from aloscene import Frame
from alodataset import CocoBaseDataset

coco_dataset = CocoBaseDataset(sample=False, img_folder="test2017")

# checking if regular getitem works
stuff = coco_dataset.getitem(0)
stuff.get_view(add_title=False).render()

# check if the dataloader works
for f, frames in enumerate(coco_dataset.train_loader(batch_size=2)):
    frames = Frame.batch_list(frames)
    frames.get_view().render()
    if f > 1:
        break
- Improve title rendering: Use the existing `put_adapative_cv2_text()` to add the title in the Renderer class. Improve the function to adapt the text size to the frame size, decreasing it if the text is too long.
Test code:
from aloscene import Frame
from alodataset import CocoBaseDataset

coco_dataset = CocoBaseDataset(sample=False, img_folder="test2017")

# checking if regular getitem works
stuff = coco_dataset.getitem(0)
stuff.get_view(add_title=True).render()

# check if the dataloader works
for f, frames in enumerate(coco_dataset.train_loader(batch_size=2)):
    frames = Frame.batch_list(frames)
    frames.get_view().render()
    if f > 1:
        break
- New feature 1: Support for pytorch 1.13
`__torch_function__` was about to no longer be supported as an instance method; switching to a classmethod was required. The current implementation seems to still be compatible with pytorch 1.10. Note that this change touches the most important/breakable part of the aug tensor pipeline. A minimal sketch of the pattern is shown below.
Open discussion: Should we update the doc to make pytorch 1.13 the default? I think not before checking pytorch lightning support.
By the way: c8ed7f6b1cdfeeec369447517af7321349df1e25
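For context, a minimal generic sketch of the classmethod pattern that newer PyTorch versions expect on `Tensor` subclasses; this is an illustration, not aloception's actual implementation:

```python
import torch


class MyAugmentedTensor(torch.Tensor):
    # Recent PyTorch releases require __torch_function__ to be a classmethod
    # on Tensor subclasses; the old instance-method form is no longer supported.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Delegate to the default implementation; a real subclass would also
        # propagate its extra attributes here.
        return super().__torch_function__(func, types, args, kwargs)
```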
All the named labels are displayed next to BoundingBoxes2D.
- Labels for BoundingBoxes2D: render labels next to BoundingBoxes2D. FIX #221
import numpy as np
from aloscene import Frame, BoundingBoxes2D, Labels
frame = Frame(np.zeros((3, 100, 500)), normalization="01")
label = Labels([0, 1, 0])
label2 = Labels([0, 0, 1], labels_names=["red", "green"])
box = BoundingBoxes2D([[100, 20, 400, 80], [200, 40, 400, 80], [100, 40, 300, 80]], boxes_format="xyxy", absolute=True, frame_size=frame.HW)
box.append_labels(label, name="label")
box.append_labels(label2, name="label2")
frame.append_boxes2d(box)
print(box)
frame.get_view().render()
- First addition of issue #23: being able to pad the tensor to the next multiple of `multiple`
- unittest
Minimal example
>>> x = aloscene.Frame(torch.rand(1, 10, 10), names=('C', 'H', 'W'))
>>> x.pad(multiple=8).shape
torch.Size([1, 16, 16])
>>> x.pad(multiple=10).shape
torch.Size([1, 10, 10])
>>> x.pad(multiple=32).shape
torch.Size([1, 32, 32])
Fix the merging of tensors to allow `torch.cat` to accept a `tuple` of `AugmentedTensor`.
- New feature: #260
# While overriding the exporter class
def __init__(self, dynamic: bool = False, **kwargs):
    if dynamic:
        # Keys are input names. Lists are the indexes of the axes that we want to set as dynamic.
        self.dynamic_axes = {"input1": [1, 2, 3], "input2": [1, 2, 3]}
    # ....
- New feature #269 : Flexible onnx version
- ***New feature 1 #85***: Now we can load a model directly from `run_id` without passing `load_training` and a Lightning module. `weights` has the highest priority; if `weights is None`, loading the model from `run_id` is used.
- We can choose to load the best checkpoint or the last checkpoint.
from alonet.common.weights import load_weights
from alonet.detr import DetrR50

model = DetrR50(num_classes=91, weights="detr-r50")
# load from a downloaded weight
load_weights(model, weights="detr-r50")
# load from a .pth file
load_weights(model, weights="~/.aloception/weights/detr-r50/detr-r50.pth")
# load from run_id
load_weights(model, run_id="your_run_id_here", project_run_id="your_project_run_id")
- New feature 1: Rotate a frame around a custom center
The torchvision Rotate transform supports passing a "center" argument to rotate around a given center (and not only around the image center). I added this functionality to our Rotate alotransform.
from alodataset.coco_base_dataset import CocoBaseDataset
from alodataset.transforms import Rotate
coco_dataset = CocoBaseDataset(sample=True)
x = coco_dataset[0]
angle = 15.0
x = Rotate(angle, center=[650, 0])(x)
x.get_view().render()
- Fix getitem on AugmentedTensor with an augmented tensor as the mask.
- Reset names only if the tensors aren't linearized (as in the bbox unit test); otherwise, downgrade to a classic tensor.
- Fix bug 309: Wrong display of 3D boxes on padded images
The error was in the camera_calib code, where two variables were misplaced.
New feature 1: Added wandb hyperparameter logging
The hyperparameters are now logged by default in wandb. The config of the experiment can be viewed in wandb => Overview => Config.
- Mean_std_norm no longer uses resnet normalization and can be used for custom normalization:
Before, mean_std_norm used _resnet_mean_std by default, therefore you could only use resnet normalization. Now you can use any custom normalization.
import torch
import aloscene

x = torch.rand(3, 600, 600)
x = aloscene.Frame(x, mean_std=((0.333, 0.333, 0.333), (0.333, 0.333, 0.333)))
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)
x = x.mean_std_norm(mean=(0.440, 0.220, 0.880), std=(0.333, 0.333, 0.333), name="my_norm")
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)
Output:
normalization of x: 255
mean_std of x: ((0.333, 0.333, 0.333), (0.333, 0.333, 0.333))
normalization of x: my_norm
mean_std of x: ((0.44, 0.22, 0.88), (0.333, 0.333, 0.333))
- Conversion from mean_std_norm to minmax_sym and from minmax_sym to mean_std_norm
Added this conversion, which raised an Exception before.
import torch
import aloscene

x = torch.rand(3, 600, 600)
x = aloscene.Frame(x, mean_std=((0.333, 0.333, 0.333), (0.333, 0.333, 0.333)))
x = x.norm_minmax_sym()
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)
x = x.mean_std_norm(mean=(0.333, 0.333, 0.333), std=(0.333, 0.333, 0.333), name="custom")  # used to raise an Exception
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)
Output:
normalization of x: minmax_sym
mean_std of x: None
normalization of x: custom
mean_std of x: ((0.333, 0.333, 0.333), (0.333, 0.333, 0.333))
- Docker: New docker image with pytorch 1.13 support & pytorch lightning 1.9.
- Changed transforms back to p=1.0: transformations used to be applied automatically on all frames. Last month we introduced a new parameter to ran...
v0.4.0
What's Changed
New feature: a `create_calibrator` function that creates a calibrator from a name and kwargs has been added to `alonet.torch2trt`, to optimize imports in TensorRT scripts.
from alonet.torch2trt import create_calibrator, DataBatchStreamer
cache_file = "calib.bin"
data_streamer = DataBatchStreamer(...)
calibrator = create_calibrator("minmax", data_streamer, cache_file)
- Fix bug: `aloscene.read_image` is now supported on the Jetson NX.
- Fix bug: Change the way an augmented tensor is represented: clearer separation between properties, with a new separator and without unnecessary ones.
- Fix bug: The default value (10) for calibration batches did not allow the use of the whole calibration dataset. The default value has been changed to `None`.
- Fix bug: Change all the links to the documentation due to the renaming of the repository.
- New feature: Added a `setup.py` to facilitate the installation of aloception.
- New feature: Kitti Dataset (Stereo, Flow, Scene Flow, Depth, Odometry, Object, Tracking, Road, Semantics)
Kitti Depth: How to use Kitti Depth
from alodataset import KittiDepth  # import path assumed
from aloscene import Frame

date = "2011_09_26"
idsOfDrives = [
    "0001",  # sample from training subset
    "0002",  # sample from validation subset
]
custom_drives = {date: idsOfDrives}
kitti_ds = KittiDepth(
    subset="all",
    return_depth=True,
    custom_drives=custom_drives,
)
for f, frames in enumerate(kitti_ds.train_loader(batch_size=2)):
    frames = Frame.batch_list(frames)
Kitti Semantic: The semantic class
dataset = KittiSemanticDataset()
obj = dataset.getitem(0)
obj.get_view().render()
How to use the remaining tasks of the dataset. Dataset class list: `KittiStereoFlow2012`, `KittiStereoFlowSFlow2015`, `KittiOdometryDataset`, `KittiObjectDataset`, `KittiTrackingDataset`, `KittiRoadDataset`
dataset = DATASET_CLASS(right_frame=False)
obj = dataset.getitem(0)
obj["left"].get_view().render()
Scene Flow dimensions: fixed an error with the shape of the occlusion mask.
- Fix bug: Fix the calibrator import issue. All TensorRT and prod packages are now optional.
by @thibo73800 in #215
Support for the kumler_bauer projection in coords2rtheta
- New feature : Add support for the kumler_bauer projection in coords2rtheta
coords2rtheta(..., distortion=(0.25, 0.45), projection="kumler_bauer")
coords2rtheta(..., distortion=0.25, projection="equidistant") # API doesn't change for other projections
- New feature: Add the WoodScape dataset.
WoodScapeDataset:
from alodataset import WoodScapeDataset

woodscape = WoodScapeDataset(
    labels=[],
    cameras=[],
    fragment=1.,
)
frame = woodscape[222]
frame.get_view().render()
WoodScapeSplitDataset: WoodScape dataset with train and validation fractions.
from alodataset import WoodScapeSplitDataset, Split

woodscapeTrain = WoodScapeSplitDataset(split=Split.TRAIN)
frame = woodscapeTrain[222]
frame.get_view().render()
- Fix bug: kumler-bauer projection support for `aloscene.Depth`
- `as_planar`, `as_euclidean`: assert error because "kumler_bauer" was missing from the verification condition.
- `as_points3d`: the computation of the distorted focal length for kumler_bauer was missing.
- New feature: Better handling of the distortion coefficient for the equidistant projection: both `float` and `list` are accepted.
# Python code snippet showing how to use it.
import torch
from aloscene import Depth
x = torch.rand(size=(1, 1, 20, 20))
depth1 = Depth(x, projection="equidistant", distortion=[0.5])
depth2 = Depth(x, projection="equidistant", distortion=0.5)
- New feature: 3 different implementations of focus blur augmentation
from alodataset.transforms import RandomFocusBlur, RandomFocusBlurV2, RandomFocusBlurV3
import aloscene
import torch

frame = aloscene.Frame(torch.rand((3, 300, 300)))
blurred_frame1 = RandomFocusBlur()(frame)
blurred_frame2 = RandomFocusBlurV2()(frame)
blurred_frame3 = RandomFocusBlurV3()(frame)
- New feature: Motion blur augmentation from optical flow
## Motion blur from RAFT-flow
import torch
import aloscene
from alodataset.transforms import RandomFlowMotionBlur  # import path assumed
from alonet.raft.raft import RAFT

flow_model = RAFT(weights="raft-things")
flow_model = flow_model.eval()
frame_t0_t1 = aloscene.Frame(torch.ones((2, 3, 300, 300)), names=tuple("TCHW"))
frame_t0 = frame_t0_t1[0]
frame_t1 = frame_t0_t1[1]
blurred_t1 = RandomFlowMotionBlur(flow_model=flow_model)(frame_t1, p_frame=frame_t0)
blurred_t1.get_view().render()
## Motion blur from ground truth optical flow
flow = aloscene.Flow(torch.ones((2, 300, 300)))
blurred_t1 = RandomFlowMotionBlur()(frame_t1, flow=flow)
blurred_t1.get_view().render()
- New feature : Random corner masking augmentation
from alodataset.transforms import RandomCornersMask
import aloscene
import torch
frame = aloscene.Frame(torch.ones((3, 300, 300)))
randomly_masked_frame = RandomCornersMask()(frame)
- Fix bug: `CameraIntrinsic` initialization with a shape of 4x4 is now possible using `__init__`. A hedged example follows.
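A minimal sketch of that fix, assuming `CameraIntrinsic` is importable from `aloscene`:

```python
import torch
from aloscene import CameraIntrinsic  # import path assumed

# A 4x4 intrinsic matrix can now be passed directly to __init__.
intrinsic = CameraIntrinsic(torch.eye(4))
```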
- Fix bug: Fix DETR export to ONNX & TRT.
- New feature: Added a title to frames displayed with get_view()
Added a "title" argument to the get_view() method to directly set a title for your display.
frames.get_view(title="test").render()
- Typing by @Ardorax in #212
- Warning gcc/g++ version for cuda ops by @Aurelien-VB in #230
- fix euclidean depth when as_point3d, better handle points behind camera by @anhtu293 in #311
- Delete github actions by @Ardorax in #313
Full Changelog: v0.3.0...v0.4.0
v0.3.0
What's Changed
Features
"The shortest distance between two points is a straight line." - Archimedes
As Archimedes said, knowing the distance (straight line) between the camera and a point is as important as knowing the planar depth. Therefore, it's convenient to have methods that convert between them.
What's new?
- Handle negative points in encode_absolute: For wide-range cameras (FoV > 180), it's possible to have points whose planar depth is smaller than 0 (points behind the camera). To keep these points instead of clipping them at 0, pass keep_negative=True as an argument.
- Depth to distance with as_distance(): Convert depth to distance. Only pinhole cameras and linear equidistant cameras are supported at this time.
- Distance to depth with as_depth(): Convert distance to depth. Only pinhole cameras and linear equidistant cameras are supported at this time.
- It is possible to create a tensor of Distance by passing is_distance=True at initialization.
- Support functions in depth_utils.
Update
- Changed the terms to avoid confusion: "euclidean depth" for distance and "planar depth" for usual depth.
- as_distance() becomes as_euclidean()
- as_depth() becomes as_planar()
Archimedes's quote now becomes: the shortest "euclidean depth" between two points is a straight line. A minimal sketch of the conversion API follows.
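A minimal sketch of the round trip described above, assuming a pinhole `Depth` tensor built as in the other examples in these notes:

```python
import torch
from aloscene import Depth

# Planar depth -> euclidean distance, and back.
depth = Depth(torch.rand(1, 128, 128), projection="pinhole")
distance = depth.as_euclidean()  # formerly as_distance()
planar = distance.as_planar()    # formerly as_depth()
```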
New feature
- Add `projection` and `distortion` as new properties of `SpatialAugmentedTensor` so that other tensor types can inherit them. Two projection models are supported: `pinhole` and `equidistant`. The default values are `pinhole` and `1.0` for distortion, so nothing changes at initialization when working on a "pinhole" image. Only `aloscene.Depth` supports distortion and the equidistant projection at this time.
- `Depth.as_points3d` now supports the equidistant model with distortion. If no projection model and distortion are specified in the arguments, `as_points3d` uses the `projection` and `distortion` properties.
- `Depth.as_planar` and `Depth.as_euclidean` now use the `projection` and `distortion` properties if no projection model and distortion are specified in the arguments.
- `Depth.__get_view__` now has a color legend if `legend` is set to `True`.
- New 💥:
- TensorRT engines can now be built with int8 precision using Post-Training Quantization.
- 4 calibrators are available for quantization: `MinMaxCalibrator`, `LegacyCalibrator`, `EntropyCalibrator` and `EntropyCalibrator2`.
- Added a QuantizedModel interface to convert a model into a quantized model for Quantization-Aware Training.
- Fixed 🔧:
- The adapt-graph option is removed; we now just adapt the graph once it's exported from torch to ONNX.
- New ⭐:
- A `profiling_verbosity` option is added to the TRTEngineBuilder to better inspect the details of each node when calling the `tensorrt.EngineInspector`.
- Some quantization-related arguments are added to the `BaseTRTExporter`.
- `RandomDownScale`: transform to randomly downscale between the original and a minimum frame size.
- `RandomDownScaleCrop`: a composed transform to randomly downscale, then crop. A hedged sketch follows.
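For illustration, a sketch of how these transforms could be applied, in the same style as the other transform examples in these notes; the constructor parameter names are assumptions, only the transform names come from the items above:

```python
import torch
import aloscene
from alodataset.transforms import RandomDownScale, RandomDownScaleCrop  # names from the note above

frame = aloscene.Frame(torch.rand((3, 300, 300)))
# Parameter names below are hypothetical placeholders.
downscaled = RandomDownScale(frame_size=(150, 150))(frame)
downscaled_cropped = RandomDownScaleCrop(size=(100, 100))(frame)
```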
New 🌱
- An engine's inputs/outputs can share the same space on the GPU for faster execution. Hosts with shared memory can be retrieved with the `outputs_to_cpu` argument and can be updated using the `inputs_to_gpu` argument. A hedged sketch follows.
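Purely for illustration, a sketch of how these flags might be wired into the `TRTExecutor` seen elsewhere in these notes; the `execute` method name and its signature are assumptions, only the two flag names come from the note above:

```python
# Hypothetical sketch: `execute` and its signature are assumed; only the
# inputs_to_gpu / outputs_to_cpu flag names come from the release note.
from alonet.torch2trt import TRTExecutor  # import path assumed

engine = TRTExecutor("model.engine")
outputs = engine.execute(
    inputs,
    inputs_to_gpu=True,    # update the shared host buffers on the GPU side
    outputs_to_cpu=False,  # keep outputs in the shared GPU space
)
```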
Dynamic Cropping
- Possibility to crop an image to a smaller fixed-size image at the position we want. The crop position can be passed via the `center` argument, which can be `float` or `int`.
- If the crop is outside the image border, an error is triggered.
New ⭐ :
- Depth evaluation metrics are added to alonet metrics.
- Lvis Dataset + Coco update + minor fixes by @thibo73800 in #196
- `CocoDetectionDataset` can now use a given `ann_file` when loaded.
- `CocoPanopticDataset` can now use `ignore_classes` to ignore some classes when loading the panoptic annotations.
- In `DetrCriterion`, interpolation is an option that can be changed with `upscale_interpolate`.
- Lvis Dataset based on `CocoDetectionDataset` with a different annotation file.
# Create three gray frames, display them on two rows (2 on first rows, 1 on 2nd row)
import numpy as np
import aloscene
arrays = [np.full((3, 600, 650), 100), np.full((3, 500, 500), 50), np.full((3, 500, 800), 200)]
frames = [aloscene.Frame(arr) for arr in arrays]
views = [[frames[0].get_view(), frames[1].get_view()], [frames[2].get_view()]]
aloscene.render(views, renderer="matplotlib")
Create scene flow by calling the class with a file path, a tensor or an ndarray.
If you have the optical flow, the depth at times T and T+1 and the camera intrinsics, you can create scene flow with the class method `from_optical_flow`. It handles the creation of the occlusion mask if some parameters have one. A hedged sketch follows.
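A rough sketch of that class method; the class name `SceneFlow`, the argument order, and the intrinsic construction are assumptions, only `from_optical_flow` comes from the note above:

```python
# Hypothetical sketch: class name and argument order are assumptions.
import torch
import aloscene

flow = aloscene.Flow(torch.ones((2, 300, 300)))       # optical flow from T to T+1
depth_t0 = aloscene.Depth(torch.rand((1, 300, 300)))  # depth at time T
depth_t1 = aloscene.Depth(torch.rand((1, 300, 300)))  # depth at time T+1
cam_intrinsic = aloscene.CameraIntrinsic(torch.eye(3))  # minimal intrinsic, construction assumed
scene_flow = aloscene.SceneFlow.from_optical_flow(flow, depth_t0, depth_t1, cam_intrinsic)
```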
- GitHub action that automatically launches the unit tests on every commit or pull request to the master branch.
Fix
- fix depth absolute/inverse assertion by @Data-Iab in #167
- Fixed some issues by @Dee61298 in #171
- better colorbar position by @anhtu293 in #178
- Check if depth is planar before projecting to 3d points by @anhtu293 in #177
- Merge dataset weights by @jsalotti in #175
- update arg name by @anhtu293 in #179
- Fix package prod dependencies by @thibo73800 in #181
- remove tracing assertion by @Data-Iab in #182
- clip low values of depth before conversion to disp by @jsalotti in #180
- Pass arguments to RandomScale and RandomCrop in ResizeCropTransform by @anhtu293 in #189
- add execution context failed creation exception by @Data-Iab in #190
- fix: AugmentedTensor clone method by @jsalotti in #191
- bugfix: close plt figure by @jsalotti in #192
- fix masking dimension mismatch by @Data-Iab in #194
- ignore same_on_sequence when no time dimension by @jsalotti in #200
- RealisticNoise default values by @jsalotti in #199
- allow for non integer principal point coordinates by @jsalotti in #202
- check disp_format and clamp if necessary by @jsalotti in #203
`GLOBAL_COLOR_SET_CLASS` will automatically adjust its size to give a random color to each object class.
Full Changelog: v0.2.1...v0.3.0
v0.2.1
What's Changed
- fix tracing assertion by @Data-Iab in #166
Check if the tracing attribute exists before checking whether it's set to True.
- camera calib by @thibo73800 in #168
Add a new method for getting the distance from one pose to another:
pose.distance_with(other_pose)
Set the default names of extrinsic to (None, None)
- depth encode inverse by @thibo73800 in #169
- Make inverse False by default when creating a Depth tensor.
- scale and shift are not required; they're optional.
Full Changelog: v0.2.0...v0.2.1
v0.2.0
What's Changed
- Update README.md by @thibo73800 in #156
- Fix model none on BaseTRTExporter by @thibo73800 in #158
BaseTRTExporter can now be created from a None model. This is useful if one wants to only export from an onnx file.
Two methods added to Depth:
- encode_inverse: invert depth with a given scale and shift.
- encode_absolute: undo the encode_inverse changes.
One method added to AugTensor:
- to_squeezed_numpy: as its name indicates, converts to a squeezed numpy array. A minimal sketch of the three methods follows.
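A minimal sketch of these methods; scale and shift are left at their defaults (they are optional, per v0.2.1):

```python
import torch
from aloscene import Depth

depth = Depth(torch.rand(1, 64, 64))
inverse = depth.encode_inverse()      # invert depth with the given scale and shift
absolute = inverse.encode_absolute()  # undo the encode_inverse changes
array = absolute.to_squeezed_numpy()  # squeezed numpy copy of the tensor
```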
- fix project run_id default by @thibo73800 in #161
Fix: when using load_training without loading the common argparse, no_run_id was used in load_training. A default value is now used if the value is not set in the args.
The ratios of width and height were not exact in the resize method of the intrinsic matrix.
- add batch(dim) and temporal(dim) by @thibo73800 in #162
In this merge request:
It is now possible to do
tensor.temporal(dim=1) # where dim can be 0 or 1
and
tensor.batch(dim=1) # where dim can be 0 or 1
TODO: Check the unit tests again to verify that everything is correct.
Add a mergeable pose label to the Frame object.
It can be used as such:
P = aloscene.Pose(cam_pos)
Pose directly inherits from CameraExtrinsic but usually refers to the global world coordinates.
Fix noisy pose to propagate the normalization and to use the device properly.
Add aloscene.render()
You can now directly render a list of views using aloscene.render():
aloscene.render(views)
Here is an example that adds views and records a video:
views = []
# Run DFM on side cameras
for frames in data_loader:
    # Build a list of views
    for frames_side in frames:
        output = model.inference(model(frames))
        views.append(output.get_view())
    # render the list
    aloscene.render(views, record_file="model_outputs.mp4")
# Save the final video
aloscene.save_renderer()
batch list from aloscene
Instead of doing
SpatialAugmentedTensor.batch_list(tensors)
or
tensors[0].batch_list(tensors)
You can now do:
aloscene.batch_list(tensors)
Compute the translation between two poses/extrinsics:
ref.pose.translation_with(src.pose)
Full Changelog: v0.1.0...v0.2.0
v0.1.0
What's Changed
- add version by @thibo73800 in #154
- Trt export from onnx by @thibo73800 in #155
- add the `skip_adapt_graph` option to not adapt the graph before exporting to TensorRT.
- Fix an issue when calling `TRTExecutor()` without an engine; it's now `TRTExecutor(stream=cuda.Stream())`.
- Automatically adapt the graph by default: handle clip operations + simplify the onnx graph. It is not mandatory to override this method anymore. This change will not affect the current trt exporter since the `adapt_graph` method is supposed to be overridden.
Full Changelog: v0.0.1...v0.1.0
v0.0.1
What's Changed
- Requirements vbfolder update by @LucBourrat1 in #5
- update README for docs by @LucBourrat1 in #6
- Prevent a known issue with camera parameters by @thibo73800 in #40
- use randomsampler in train_dataloader by @jsalotti in #37
- #45 fix tensorboard logger by @ragier in #46
- 3 cocodetection mask implement by @Johansmm in #36
- 1 update samples by @Johansmm in #41
- 50 train with samples by @Johansmm in #58
- Prevent crop outside of the spatial size by @thibo73800 in #34
- tuto getting started: modif typo + better frame slicing example by @LucBourrat1 in #33
- 35 doc getting started datasets by @thibo73800 in #59
- alotransform: probabilistic same_on_* by @jsalotti in #61
- Readmes by @thibo73800 in #63
- Batch list improvment by @thibo73800 in #62
- Frame with labels by @thibo73800 in #64
- 47 training your model alonet by @thibo73800 in #65
- Run from run_ID by @LucBourrat1 in #68
- About augmented tensor by @thibo73800 in #70
- Detr def arch by @thibo73800 in #69
- sample download progress bar and skip user prompt by @Johansmm in #81
- Frame api by @thibo73800 in #82
- update load weight and load train functions by @Johansmm in #80
- remove the need to instantiate boxes by @Johansmm in #84
- Coco panoptic dataset by @Johansmm in #77
- Fixe weights is None for detr & Deformable by @thibo73800 in #87
- Points2D by @thibo73800 in #72
- sample download progress bar and skip user prompt by @Johansmm in #90
- fix bug in load_training by @Johansmm in #91
- 88 load weights finetune models by @Johansmm in #92
- new colormaps and clipping option for disp view by @jsalotti in #95
- 42 development panoptic module by @Johansmm in #73
- fix bug load png image masks by @Johansmm in #99
- Fix compute pq metric mask2id by @Johansmm in #101
- 43 panoptic quality metrics by @Johansmm in #89
- fix stric mode by @Johansmm in #102
- 44 panoptic docs by @Johansmm in #94
- Quick fix log mask and panoptic by @Johansmm in #105
- Points2D & Boxes2D : Full pad support + new augmentations by @thibo73800 in #93
- temporal base metrics by @thibo73800 in #110
- 86 coco panoptic and masks doc by @Johansmm in #96
- Try to import and raise error only if use with instruction by @thibo73800 in #114
- Augmented tensor: labels renamed by Child + default args when adding one node by @thibo73800 in #111
- Fix augmented tensor on resize with Tensor label + done 57 issue by @thibo73800 in #83
- Depth disp pts3d by @thibo73800 in #118
- Fixe crop with fit & absolute boses by @thibo73800 in #117
- 113 unknown weights by @Johansmm in #115
- 119 depth on frame by @Johansmm in #120
- 98 deformable panoptic head by @Johansmm in #123
- Fix apply on child by @Johansmm in #127
- Fix boxes display by @thibo73800 in #124
- Serving ready test by @Johansmm in #129
- 79 coco detection splitmixin by @Johansmm in #122
- Raft refacto by @jsalotti in #128
- Serving ready by @Johansmm in #130
- fix: resize camera calib by @jsalotti in #132
- update threshold by @Johansmm in #133
- Trt export profiling by @jsalotti in #134
- include verbose in scope names by @Johansmm in #138
- Compatibility with embed systems by @Johansmm in #143
- fix append occlusion and title view by @Johansmm in #141
- fix labels get_view error when frame is None by @anhtu293 in #139
- fix log_image for tensorboard logger by @jsalotti in #140
- Fix crop on boxes and pts2d when using absolute position by @thibo73800 in #136
- Panoptic2trt by @Johansmm in #145
- add_rotation by @LucBourrat1 in #144
- Panoptic2trt fix by @Johansmm in #147
- Fix CameraIntrinsic last diag element by @jsalotti in #146
- bugfix: make depth label mergeable by @jsalotti in #149
- Depth.get_view(): fix normalization and add reverse cmap feature by @jsalotti in #150
- Deformable panoptic by @Johansmm in #148
- 142: Load best model instead of last one by @Johansmm in #151
- Optional tensort by @thibo73800 in #152
- save method on renderer by @thibo73800 in #153
New Contributors
- @LucBourrat1 made their first contribution in #5
- @thibo73800 made their first contribution in #40
- @jsalotti made their first contribution in #37
- @ragier made their first contribution in #46
- @Johansmm made their first contribution in #36
- @anhtu293 made their first contribution in #139
Full Changelog: https://github.com/Visual-Behavior/aloception/commits/v0.0.1