Commit ca14945

Author: Zhijian Liu

Add pre-commit for continuous integration
1 parent 63a67ed commit ca14945

File tree

113 files changed (+2929, −3180 lines)


.github/workflows/formatter.yml

Lines changed: 0 additions & 34 deletions
This file was deleted.

.github/workflows/pre-commit.yml

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+name: pre-commit
+
+on:
+  pull_request:
+  push:
+    branches: [master]
+
+jobs:
+  pre-commit:
+    runs-on: ubuntu-latest
+    steps:
+      - run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends clang-format
+      - uses: actions/checkout@v2
+      - uses: actions/setup-python@v2
+      - uses: pre-commit/action@v2.0.3

.gitignore

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 .vscode/
 build/
-*.pyc
+*.pyc

(When a removed and an added line look identical, as here, the change is whitespace-only — trailing whitespace or a missing end-of-file newline fixed by the new hooks; the diff view hides the invisible difference. The same pattern recurs in several files below.)

.pre-commit-config.yaml

Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.0.1
+    hooks:
+      - id: trailing-whitespace
+        name: (Common) Remove trailing whitespaces
+      - id: mixed-line-ending
+        name: (Common) Fix mixed line ending
+        args: ['--fix=lf']
+      - id: end-of-file-fixer
+        name: (Common) Remove extra EOF newlines
+      - id: check-merge-conflict
+        name: (Common) Check for merge conflicts
+      - id: requirements-txt-fixer
+        name: (Common) Sort "requirements.txt"
+      - id: fix-encoding-pragma
+        name: (Python) Remove encoding pragmas
+        args: ['--remove']
+      - id: double-quote-string-fixer
+        name: (Python) Fix double-quoted strings
+      - id: debug-statements
+        name: (Python) Check for debugger imports
+      - id: check-json
+        name: (JSON) Check syntax
+      - id: check-yaml
+        name: (YAML) Check syntax
+      - id: check-toml
+        name: (TOML) Check syntax
+  - repo: https://github.com/asottile/pyupgrade
+    rev: v2.19.4
+    hooks:
+      - id: pyupgrade
+        name: (Python) Update syntax for newer versions
+        args: ['--py36-plus']
+  - repo: https://github.com/google/yapf
+    rev: v0.31.0
+    hooks:
+      - id: yapf
+        name: (Python) Format with yapf
+  - repo: https://github.com/pycqa/isort
+    rev: 5.8.0
+    hooks:
+      - id: isort
+        name: (Python) Sort imports with isort
+  - repo: https://github.com/pycqa/flake8
+    rev: 3.9.2
+    hooks:
+      - id: flake8
+        name: (Python) Check with flake8
+        additional_dependencies: [flake8-bugbear, flake8-comprehensions, flake8-docstrings, flake8-executable, flake8-quotes]
+  - repo: https://github.com/pre-commit/mirrors-mypy
+    rev: v0.902
+    hooks:
+      - id: mypy
+        name: (Python) Check with mypy
+        additional_dependencies: [tokenize-rt]
+  - repo: local
+    hooks:
+      - id: clang-format
+        name: (C/C++/CUDA) Format with clang-format
+        entry: clang-format -style=google -i
+        language: system
+        files: \.(h\+\+|h|hh|hxx|hpp|cuh|c|cc|cpp|cu|c\+\+|cxx|tpp|txx)$
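
For a sense of what the Python hooks above actually do to a file, here is an illustrative sketch (not part of the commit); each comment pairs a hook from the config with the rewrite it performs.

```python
# Illustrative only: rewrites that the hooks configured above would apply.

# double-quote-string-fixer rewrites "hello" to 'hello' when no escaping is needed.
greeting = 'hello'  # was: greeting = "hello"

# pyupgrade --py36-plus rewrites '{}'.format(x)-style calls into f-strings.
package = 'torchsparse'
message = f'checking {package}'  # was: message = 'checking {}'.format(package)

# fix-encoding-pragma --remove deletes a leading '# -*- coding: utf-8 -*-' line,
# which is redundant in Python 3 where UTF-8 is the default source encoding.

# debug-statements fails the commit if a debugger import such as
# 'import pdb' is left in the file.
print(greeting, message)
```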

LICENSE

Lines changed: 1 addition & 1 deletion
@@ -46,4 +46,4 @@ SOFTWARE.

 Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
 Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
-of the code.
+of the code.

(Another whitespace-only end-of-file fix.)

README.md

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@

 ## Overview

-We release `torchsparse`, a high-performance computing library for efficient 3D sparse convolution. This library aims at accelerating sparse computation in 3D, in particular the Sparse Convolution operation.
+We release `torchsparse`, a high-performance computing library for efficient 3D sparse convolution. This library aims at accelerating sparse computation in 3D, in particular the Sparse Convolution operation.

 <img src="https://hanlab.mit.edu/projects/spvnas/figures/sparseconv_illustration.gif" width="1080">

@@ -52,7 +52,7 @@ inds, labels, inverse_map = sparse_quantize(pc, feat, labels, return_index=True,

 where `pc`, `feat`, `labels` corresponds to point cloud (coordinates, should be integer), feature and ground-truth. The `inds` denotes unique indices in the point cloud coordinates, and `inverse_map` denotes the unique index each point is corresponding to. The `inverse map` is used to restore full point cloud prediction from downsampled prediction.

-To combine a list of `SparseTensor`s to a batch, you may want to use the `torchsparse.utils.sparse_collate_fn` function.
+To combine a list of `SparseTensor`s to a batch, you may want to use the `torchsparse.utils.sparse_collate_fn` function.

 Detailed results are given in [SemanticKITTI dataset preprocessing code](https://github.com/mit-han-lab/e3d/blob/master/spvnas/core/datasets/semantic_kitti.py) in our [SPVNAS](https://github.com/mit-han-lab/e3d) project.

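A minimal sketch of the quantization step this README paragraph describes, using the new import path introduced by this commit. The input arrays are made up for illustration, and the call mirrors the signature shown in the README above; treat the exact signature as subject to your installed torchsparse version.

```python
import numpy as np

from torchsparse.utils.quantize import sparse_quantize

# Hypothetical inputs: integer voxel coordinates, per-point features, labels.
pc = np.random.randint(0, 100, size=(1000, 3))
feat = np.random.randn(1000, 4).astype(np.float32)
labels = np.random.randint(0, 10, size=1000)

# `inds` keeps one point per occupied voxel; `inverse_map` maps every original
# point back to its voxel, which is how a downsampled prediction is restored
# to the full point cloud.
inds, labels, inverse_map = sparse_quantize(pc, feat, labels,
                                            return_index=True,
                                            return_inverse=True)

# Batching a list of samples is then done with sparse_collate_fn
# (see examples/example.py in this commit).
```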
@@ -99,7 +99,7 @@ In this example, `sphash` is the function converting integer coordinates to hash

 ### Dummy Training Example

-We here provides an entire training example with dummy input [here](examples/example.py). In this example, we cover
+We here provides an entire training example with dummy input [here](examples/example.py). In this example, we cover

 - How we start from point cloud data and convert it to SparseTensor format;
 - How we can implement SparseTensor batching;
@@ -109,7 +109,7 @@ You are also welcomed to check out our [SPVNAS](https://github.com/mit-han-lab/e

 ### Mixed Precision (float16) Support

-Mixed precision training is supported via `torch.cuda.amp.autocast` and `torch.cuda.amp.GradScaler`. Enabling mixed precision training can speed up training and reduce GPU memory usage. By wrapping your training code in a `torch.cuda.amp.autocast` block, feature tensors will automatically be converted to float16 if possible. See [here](examples/example.py) for a complete example.
+Mixed precision training is supported via `torch.cuda.amp.autocast` and `torch.cuda.amp.GradScaler`. Enabling mixed precision training can speed up training and reduce GPU memory usage. By wrapping your training code in a `torch.cuda.amp.autocast` block, feature tensors will automatically be converted to float16 if possible. See [here](examples/example.py) for a complete example.

 ## Speed Comparison Between torchsparse and MinkowskiEngine
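
The pattern this README paragraph describes, as a self-contained sketch. A plain dense model stands in for the sparse network so the snippet runs anywhere; the same autocast/GradScaler wrapping applies to the torchsparse loop in examples/example.py.

```python
import torch
import torch.nn as nn

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
use_amp = device != 'cpu'  # autocast/GradScaler only take effect on CUDA

model = nn.Linear(4, 10).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

inputs = torch.randn(8, 4, device=device)
targets = torch.randint(0, 10, (8,), device=device)

# Inside the autocast block, eligible ops run in float16 automatically.
with torch.cuda.amp.autocast(enabled=use_amp):
    loss = criterion(model(inputs), targets)

optimizer.zero_grad()
scaler.scale(loss).backward()  # scale the loss to avoid float16 gradient underflow
scaler.step(optimizer)         # unscales gradients, then steps the optimizer
scaler.update()
```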

examples/example.py

Lines changed: 9 additions & 7 deletions
@@ -1,11 +1,13 @@
+import argparse
+
 import numpy as np
 import torch
 import torch.nn as nn
-import torchsparse
+
 import torchsparse.nn as spnn
 from torchsparse import SparseTensor
-from torchsparse.utils import sparse_collate_fn, sparse_quantize
-import argparse
+from torchsparse.utils.collate import sparse_collate_fn
+from torchsparse.utils.quantize import sparse_quantize


 def generate_random_point_cloud(size=100000, voxel_size=0.2):
@@ -35,7 +37,7 @@ def generate_batched_random_point_clouds(size=100000,
                                          voxel_size=0.2,
                                          batch_size=2):
     batch = []
-    for i in range(batch_size):
+    for _ in range(batch_size):
         batch.append(generate_random_point_cloud(size, voxel_size))
     return sparse_collate_fn(batch)

@@ -45,7 +47,7 @@ def dummy_train(device, mixed=False):
         spnn.Conv3d(4, 32, kernel_size=3, stride=1), spnn.BatchNorm(32),
         spnn.ReLU(True), spnn.Conv3d(32, 64, kernel_size=2, stride=2),
         spnn.BatchNorm(64), spnn.ReLU(True),
-        spnn.Conv3d(64, 64, kernel_size=2, stride=2, transpose=True),
+        spnn.Conv3d(64, 64, kernel_size=2, stride=2, transposed=True),
         spnn.BatchNorm(64), spnn.ReLU(True),
         spnn.Conv3d(64, 32, kernel_size=3, stride=1), spnn.BatchNorm(32),
         spnn.ReLU(True), spnn.Conv3d(32, 10, kernel_size=1)).to(device)
@@ -71,12 +73,12 @@ def dummy_train(device, mixed=False):

 if __name__ == '__main__':
     parser = argparse.ArgumentParser()
-    parser.add_argument("--mixed", action="store_true")
+    parser.add_argument('--mixed', action='store_true')
     args = parser.parse_args()

     # set seeds for reproducibility
     np.random.seed(2021)
     torch.manual_seed(2021)

     device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
-    dummy_train(device, args.mixed)
+    dummy_train(device, args.mixed)
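
For downstream code, the two user-visible changes in this file are worth calling out: the utility imports move from `torchsparse.utils` to dedicated submodules, and the `Conv3d` keyword is renamed from `transpose` to `transposed`. A minimal before/after sketch, taken directly from the diff above:

```python
# Before this commit:
#   from torchsparse.utils import sparse_collate_fn, sparse_quantize
#   spnn.Conv3d(64, 64, kernel_size=2, stride=2, transpose=True)

# After this commit:
import torchsparse.nn as spnn
from torchsparse.utils.collate import sparse_collate_fn
from torchsparse.utils.quantize import sparse_quantize

conv = spnn.Conv3d(64, 64, kernel_size=2, stride=2, transposed=True)
```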

examples/performance.py

Lines changed: 22 additions & 18 deletions
@@ -3,10 +3,14 @@
 import numpy as np
 import torch
 import torch.autograd.profiler as profiler
+import torch.cuda
 import torch.nn as nn
+import torch.optim
+
 import torchsparse.nn as spnn
 from torchsparse import SparseTensor
-from torchsparse.utils import sparse_collate_fn, sparse_quantize
+from torchsparse.utils.collate import sparse_collate_fn
+from torchsparse.utils.quantize import sparse_quantize


 def generate_random_point_cloud(size=100000, voxel_size=0.2):
@@ -36,7 +40,7 @@ def generate_batched_random_point_clouds(size=100000,
                                          voxel_size=0.2,
                                          batch_size=2):
     batch = []
-    for i in range(batch_size):
+    for _ in range(batch_size):
         batch.append(generate_random_point_cloud(size, voxel_size))
     return sparse_collate_fn(batch)

@@ -47,19 +51,19 @@ def dummy_train_3x3(device):
         spnn.Conv3d(32, 64, kernel_size=3, stride=1),
         spnn.Conv3d(64, 128, kernel_size=3, stride=1),
         spnn.Conv3d(128, 256, kernel_size=3, stride=1),
-        spnn.Conv3d(256, 128, kernel_size=3, stride=1, transpose=True),
-        spnn.Conv3d(128, 64, kernel_size=3, stride=1, transpose=True),
-        spnn.Conv3d(64, 32, kernel_size=3, stride=1, transpose=True),
-        spnn.Conv3d(32, 10, kernel_size=3, stride=1, transpose=True),
+        spnn.Conv3d(256, 128, kernel_size=3, stride=1, transposed=True),
+        spnn.Conv3d(128, 64, kernel_size=3, stride=1, transposed=True),
+        spnn.Conv3d(64, 32, kernel_size=3, stride=1, transposed=True),
+        spnn.Conv3d(32, 10, kernel_size=3, stride=1, transposed=True),
     ).to(device)
     optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
     criterion = nn.CrossEntropyLoss().to(device)

     print('Starting dummy_train_3x3...')
     time = datetime.now()
     with profiler.profile(profile_memory=True, use_cuda=True) as prof:
-        with profiler.record_function("model_inference"):
-            for i in range(10):
+        with profiler.record_function('model_inference'):
+            for _ in range(10):
                 feed_dict = generate_batched_random_point_clouds()
                 inputs = feed_dict['lidar'].to(device)
                 targets = feed_dict['targets'].F.to(device).long()
@@ -69,8 +73,8 @@ def dummy_train_3x3(device):
                 loss.backward()
                 optimizer.step()
                 # print('[step %d] loss = %f.'%(i, loss.item()))
-    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
-    prof.export_chrome_trace("trace_dummy_3x3.json")
+    print(prof.key_averages().table(sort_by='cuda_time_total', row_limit=10))
+    prof.export_chrome_trace('trace_dummy_3x3.json')

     time = datetime.now() - time
     print('Finished dummy_train_3x3 in ', time)
@@ -82,19 +86,19 @@ def dummy_train_3x1(device):
         spnn.Conv3d(32, 64, kernel_size=(1, 3, 3), stride=1),
         spnn.Conv3d(64, 128, kernel_size=(3, 1, 3), stride=1),
         spnn.Conv3d(128, 256, kernel_size=(1, 3, 3), stride=1),
-        spnn.Conv3d(256, 128, kernel_size=(3, 1, 3), stride=1, transpose=True),
-        spnn.Conv3d(128, 64, kernel_size=(1, 3, 3), stride=1, transpose=True),
-        spnn.Conv3d(64, 32, kernel_size=(3, 1, 3), stride=1, transpose=True),
-        spnn.Conv3d(32, 10, kernel_size=(1, 3, 3), stride=1, transpose=True),
+        spnn.Conv3d(256, 128, kernel_size=(3, 1, 3), stride=1, transposed=True),
+        spnn.Conv3d(128, 64, kernel_size=(1, 3, 3), stride=1, transposed=True),
+        spnn.Conv3d(64, 32, kernel_size=(3, 1, 3), stride=1, transposed=True),
+        spnn.Conv3d(32, 10, kernel_size=(1, 3, 3), stride=1, transposed=True),
     ).to(device)
     optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
     criterion = nn.CrossEntropyLoss().to(device)

     print('Starting dummy_train_3x1 ...')
     time = datetime.now()
     with profiler.profile(profile_memory=True, use_cuda=True) as prof:
-        with profiler.record_function("model_inference"):
-            for i in range(10):
+        with profiler.record_function('model_inference'):
+            for _ in range(10):
                 feed_dict = generate_batched_random_point_clouds()
                 inputs = feed_dict['lidar'].to(device)
                 targets = feed_dict['targets'].F.to(device).long()
@@ -104,8 +108,8 @@ def dummy_train_3x1(device):
                 loss.backward()
                 optimizer.step()
                 # print('[step %d] loss = %f.'%(i, loss.item()))
-    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
-    prof.export_chrome_trace("trace_dummy_3x1.json")
+    print(prof.key_averages().table(sort_by='cuda_time_total', row_limit=10))
+    prof.export_chrome_trace('trace_dummy_3x1.json')

     time = datetime.now() - time
     print('Finished dummy_train_3x1 in ', time)
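
The profiling pattern used in both functions, reduced to a standalone sketch (with `use_cuda` omitted so it also runs on CPU); the exported JSON trace can be opened at chrome://tracing in a Chromium-based browser.

```python
import torch
import torch.autograd.profiler as profiler

x = torch.randn(1024, 1024)

# record_function labels a region so it shows up as a named block in the trace.
with profiler.profile(profile_memory=True) as prof:
    with profiler.record_function('matmul'):
        y = x @ x

print(prof.key_averages().table(sort_by='cpu_time_total', row_limit=5))
prof.export_chrome_trace('trace_example.json')
```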

setup.cfg

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+[yapf]
+based_on_style = google
+spaces_around_power_operator = true
+split_before_arithmetic_operator = true
+split_before_logical_operator = true
+split_before_bitwise_operator = true
+
+[isort]
+known_first_party = torchsparse, torchsparse.backend
+
+[flake8]
+select = B, C, E, F, P, T4, W, B9
+ignore = E501, E722, W503
+per-file-ignores =
+    __init__.py: F401, F403
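
As an illustration of the `known_first_party` setting (a hypothetical file, not from the commit): isort sorts imports into standard-library, third-party, and first-party groups, and this setting pins `torchsparse` to the last group — exactly the grouping visible in the reformatted examples above.

```python
import argparse  # standard library

import numpy as np  # third party
import torch

import torchsparse  # first party, per setup.cfg
import torchsparse.nn as spnn
```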
