
Commit b96f221

add document
lint
lint
save
save
add more case
save error
lint
lint
commit
do
lint
save
fix lint
wrap it back as func
lint
save
remove dead comment
fix style
fix lint
Update src/relay/pass/partial_eval.cc (x6) Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
address review feedback
PE now handles free vars; as a result, preserving functions is now trivial.
test
add basic test, implement pretty printing for generic function
test
lint
fix segfault
save
save
do
test
fix another error
address comment
commit
save
address review feedback
add test for invalidate, fix error in lookup
rename cont to body
fix error and add regression test
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
fix error, add test case
fix lint
remove extra line
fix some error
pe
commit
save
save
save
save
save (pe/dce broken)

[DOCKER] Pin flatbuffers checkout to the last release tag (apache#2823). (apache#2879)
[Relay][Text Format] Reverse CallNode Print Order (apache#2882)
[NNPACK] Modernize test (apache#2868)
[Relay] Add list update to prelude (apache#2866)
Add missing sgx includes (apache#2878)
Fix setting up hints for getaddrinfo (apache#2872)
[ARITH] RewriteSimplifier: improved cmp simplification (apache#2851)
do (apache#2883)

[RELAY][Frontend][TF] decompile tf control flow (apache#2830)
* decompile tf control flow
* Add docs
* remove import relay
* move tests under tensorflow frontend
* minor fix

Enhance upsample operator to adapt onnx opset version 9 (apache#2840)
Use version invariant rustfmt (apache#2886)

[Relay][Op] Add group conv2d dispatch to topi function (apache#2870)
* [Relay][Op] Add group conv2d dispatch to topi function
* Rerun tests

[Apps] [howto_deploy] fix cxx-flags order and build directory (apache#2888)
fix prelu, now can use on 2d input and add one test (apache#2875)

Add dense schedules to __init__ for cpu (apache#2855)
* Add dense schedules to __init__ for cpu
* Add documentation for topi::shape
* Add additional imports to topi CPU __init__.

[TESTS] Improve script robustness (apache#2893)
A number of test scripts use the '|| exit 1' idiom. This has two issues: first, process exit codes are defined to be in the range 0-255; second, and more importantly, the idiom is fragile because it requires that every possible failure point be explicitly coded. This patch removes the idiom in favour of "set -e", as used in the docker scripts, as a more robust mechanism to ensure that script failures are always caught and propagated by default. (See the shell sketch after this message.)

[Relay] Fix name of bias in testing.mlp (apache#2892)
winograd_nnpack (apache#2721)

[Relay] Fix Relay ARM CPU depthwise spatial pack schedule alter op layout issue. (apache#2861)
* Fix Relay ARM CPU spatial pack depthwise alter op layout issue.
* Update tune_relay_arm.py

[TESTS] Improve script robustness (set -u) (apache#2896)
Adopt the "set -u" idiom from the docker scripts as a mechanism to improve future robustness.

[DOCKER] Upgrade ci-cpu to latest v0.50 (apache#2901)
Allow linking against MKLML (apache#2902)
[COMMUNITY] ASF mentors (apache#2906)

[Relay] Allow converting keras.layers.Sequential (apache#2842)
* Allow converting keras.layers.Sequential
* Use existing new_var function
* Only update expr when missing
* Add test

[Relay] clean up hd, change tl (apache#2917)
Turn on USE_SORT by default (apache#2916)
[TEST] Cache test data (apache#2921)
Unified error handling in NNVM and Relay frontends (apache#2828)
add support for mxnet smooth_l1 (apache#2905)
[Relay] Add support for TupleGetItem in op fusion (apache#2914)

[Relay, TOPI] Deformable conv2d (apache#2908)
* [Relay, TOPI] Add deformable conv2d
* Moved to op level2
* Fix lint
* Moved to level2 & bug fix
* Update comments
* Disabled flaky test of conv2d

TVM debugresult dump to Chrome Tracing (apache#2922)

[Relay] add test for second order ad (apache#2754)
* do second order
* add comment
* better name
* use tvm assert all close
* refire ci

Revert "[Relay] add test for second order ad (apache#2754)" (apache#2926)
This reverts commit f5ca991.

[Tutorial] Cache the test data in tutorial (apache#2923)
[AUTOTVM] Refactor measure build func (apache#2927)

Fix intersect of modular set (apache#2904)
Fix comment bugs and code style

[Relay, OpFusion] Fix handling TupleGetItem for nested tuples (apache#2929)
Consistent result of DetectLinearEquation() when an empty vars is passed (apache#2860)

[FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay. (apache#2850)
* [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay.
* test cases
* ci error

Outdated renaming for flatten in ONNX converter (apache#2843)

[FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models. (apache#2864)
* [FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models.
* review comments

Fix vcvtph2ps codegen (apache#2925)
Port changes
More fixes
save
save
Changes to schedules and mxnet importer
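As an editor's illustration of the rationale behind apache#2893 and apache#2896 (a minimal sketch with hypothetical script names, not part of the commit):

    # before: fragile -- every failure point must be coded by hand, and a
    # forgotten "|| exit 1" lets an error pass silently
    ./build_step.sh || exit 1
    ./test_step.sh              # missing "|| exit 1": a failure here is lost

    # after: "set -e" aborts on the first failing command, and "set -u"
    # makes expanding an unset variable an error instead of an empty string
    set -e
    set -u
    ./build_step.sh
    ./test_step.sh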
1 parent: 46f0b67

File tree: 159 files changed (+6,396 / −1,324 lines)


CONTRIBUTORS.md

Lines changed: 12 additions & 1 deletion

@@ -1,10 +1,21 @@
 TVM Contributors
 ================
-TVM adopts the Apache style model and governs by merit. We believe that it is important to create an inclusive community where everyone can use,
+TVM adopts the Apache way and governs by merit. We believe that it is important to create an inclusive community where everyone can use,
 contribute to, and influence the direction of the project. We actively invite contributors who have earned the merit to be part of the development community.
 
 See the [community structure document](http://docs.tvm.ai/contribute/community.html) for the explanation of community structure and contribution guidelines.
 
+## Mentors
+
+TVM is now part of the Apache Incubator.
+We are fortunate to have the following mentors.
+
+- Markus Weimer @markusweimer
+- Sebastian Schelter @sscdotopen
+- Byung-Gon Chun @bgchun
+- Henry Saputra @hsaputra
+- Timothy Chen @tnachen
+- Furkan KAMACI @kamaci
 
 ## Committers
 

Jenkinsfile

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@
 //
 ci_lint = "tvmai/ci-lint:v0.50"
 ci_gpu = "tvmai/ci-gpu:v0.51"
-ci_cpu = "tvmai/ci-cpu:v0.41"
+ci_cpu = "tvmai/ci-cpu:v0.50"
 ci_i386 = "tvmai/ci-i386:v0.50"
 
 // tvm libraries

apps/howto_deploy/Makefile

Lines changed: 1 addition & 1 deletion

@@ -31,4 +31,4 @@ lib/cpp_deploy_pack: cpp_deploy.cc lib/test_addone_sys.o lib/libtvm_runtime_pack
 # Deploy using pre-built libtvm_runtime.so
 lib/cpp_deploy_normal: cpp_deploy.cc lib/test_addone_sys.o
 	@mkdir -p $(@D)
-	$(CXX) $(PKG_CFLAGS) -o $@ $^ $(PKG_LDFLAGS) -ltvm_runtime
+	$(CXX) $(PKG_CFLAGS) -o $@ $^ -ltvm_runtime $(PKG_LDFLAGS)
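A hedged aside on why the reordering matters (an editor's sketch; the contents of PKG_LDFLAGS below are hypothetical): traditional Unix linkers scan libraries left to right and only extract symbols that are undefined at that point, so a library's own dependencies must appear after it on the command line. With static archives in particular:

    # assume PKG_LDFLAGS = -L../../build -ldl -pthread (hypothetical)
    # old order: libdl is scanned before libtvm_runtime introduces its
    # undefined references (e.g. dlopen), which can then go unresolved
    g++ cpp_deploy.o -L../../build -ldl -pthread -ltvm_runtime
    # new order: libtvm_runtime comes first, and -ldl can satisfy
    # whatever it left undefined
    g++ cpp_deploy.o -ltvm_runtime -L../../build -ldl -pthread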

apps/howto_deploy/run_example.sh

Lines changed: 2 additions & 2 deletions

@@ -3,8 +3,8 @@ echo "Build the libraries.."
 mkdir -p lib
 make
 echo "Run the example"
-export LD_LIBRARY_PATH=../../lib:${LD_LIBRARY_PATH}
-export DYLD_LIBRARY_PATH=../../lib:${DYLD_LIBRARY_PATH}
+export LD_LIBRARY_PATH=../../build:${LD_LIBRARY_PATH}
+export DYLD_LIBRARY_PATH=../../build:${DYLD_LIBRARY_PATH}
 
 echo "Run the deployment with all in one packed library..."
 lib/cpp_deploy_pack
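The loader paths now point at the CMake build directory rather than the old lib/ output. For context, a minimal sketch of the flow that places libtvm_runtime.so under build/ (assuming the standard TVM CMake build; this is not shown by the diff itself):

    # from the TVM repository root
    mkdir -p build && cd build
    cp ../cmake/config.cmake .   # adjust options such as USE_SORT here
    cmake ..
    make runtime -j4             # emits build/libtvm_runtime.so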

cmake/config.cmake

Lines changed: 1 addition & 1 deletion

@@ -127,7 +127,7 @@ set(USE_MPS OFF)
 set(USE_ROCBLAS OFF)
 
 # Whether use contrib sort
-set(USE_SORT OFF)
+set(USE_SORT ON)
 
 # Build ANTLR parser for Relay text format
 set(USE_ANTLR OFF)

cmake/modules/SGX.cmake

Lines changed: 2 additions & 0 deletions

@@ -48,4 +48,6 @@ if(NOT USE_SGX STREQUAL "OFF")
     -L${USE_SGX}/lib64 -l${_urts_lib}
     -L${RUST_SGX_SDK}/sgx_ustdc -lsgx_ustdc)
   list(APPEND RUNTIME_SRCS ${RUNTIME_SGX_SRCS})
+
+  include_directories(${RUST_SGX_SDK}/edl ${RUST_SGX_SDK}/common)
 endif()

cmake/modules/contrib/BLAS.cmake

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ elseif(USE_BLAS STREQUAL "mkl")
   if(NOT IS_DIRECTORY ${USE_MKL_PATH})
     set(USE_MKL_PATH /opt/intel/mkl)
   endif()
-  find_library(BLAS_LIBRARY mkl_rt ${USE_MKL_PATH}/lib/ ${USE_MKL_PATH}/lib/intel64)
+  find_library(BLAS_LIBRARY NAMES mkl_rt mklml_gnu HINTS ${USE_MKL_PATH}/lib/ ${USE_MKL_PATH}/lib/intel64)
   include_directories(${USE_MKL_PATH}/include)
   list(APPEND TVM_RUNTIME_LINKER_LIBS ${BLAS_LIBRARY})
   list(APPEND RUNTIME_SRCS ${CBLAS_CONTRIB_SRC})

docker/install/ubuntu_install_rust.sh

Lines changed: 3 additions & 5 deletions

@@ -9,12 +9,10 @@ apt-get update && apt-get install -y --no-install-recommends curl
 export RUSTUP_HOME=/opt/rust
 export CARGO_HOME=/opt/rust
 # this rustc is one supported by the installed version of rust-sgx-sdk
-curl -s -S -L https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain nightly-2019-01-28
+curl -s -S -L https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain nightly-2019-03-24
 . $CARGO_HOME/env
-rustup component add rust-src
-cargo install sccache
-cargo install rustfmt-nightly --version 1.0.1 --force
-cargo install xargo
+rustup component add rustfmt
+cargo install sccache --no-default-features
 
 # make rust usable by all users
 chmod -R a+w /opt/rust
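On "version invariant rustfmt" (apache#2886), a brief hedged sketch of the resulting usage: the rustup-managed component always matches the pinned toolchain, so nothing has to track a separately versioned rustfmt-nightly crate:

    # the component follows the active toolchain (nightly-2019-03-24 above)
    rustup component add rustfmt
    cargo fmt -- --check   # how a lint step would typically invoke it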

docker/install/ubuntu_install_tflite.sh

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ set -u
 set -o pipefail
 
 # Download, build and install flatbuffers
-git clone --depth=1 --recursive https://github.com/google/flatbuffers.git
+git clone --branch=v1.10.0 --depth=1 --recursive https://github.com/google/flatbuffers.git
 cd flatbuffers
 cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release
 make install -j8

include/tvm/relay/attrs/nn.h

Lines changed: 79 additions & 0 deletions

@@ -155,6 +155,24 @@ struct Conv2DWinogradAttrs : public tvm::AttrsNode<Conv2DWinogradAttrs> {
   }
 };
 
+/*! \brief Attributes used in winograd weight transformation operators */
+struct Conv2DWinogradNNPACKWeightTransformAttrs
+    : public tvm::AttrsNode<Conv2DWinogradNNPACKWeightTransformAttrs> {
+  int convolution_algorithm;
+  DataType out_dtype;
+
+  TVM_DECLARE_ATTRS(Conv2DWinogradNNPACKWeightTransformAttrs,
+                    "relay.attrs.Conv2DWinogradNNPACKWeightTransformAttrs") {
+    TVM_ATTR_FIELD(convolution_algorithm)
+        .describe(
+            "The convolution algorithm for Winograd NNPACK. "
+            "E.g. tvm.contrib.nnpack.ConvolutionAlgorithm.WT_8x8 for WT_8x8, "
+            "tvm.contrib.nnpack.ConvolutionAlgorithm.WT_8x8_FP16 for WT_8x8_FP16");
+    TVM_ATTR_FIELD(out_dtype)
+        .set_default(NullValue<DataType>())
+        .describe("Output data type, set to explicit type under mixed precision setting");
+  }
+};
 
 /*! \brief Attributes used in softmax operators */
 struct SoftmaxAttrs : public tvm::AttrsNode<SoftmaxAttrs> {

@@ -438,6 +456,67 @@ struct L2NormalizeAttrs : public tvm::AttrsNode<L2NormalizeAttrs> {
   }
 };
 
+
+/*! \brief Attributes for DeformableConv2D operator */
+struct DeformableConv2DAttrs : public tvm::AttrsNode<DeformableConv2DAttrs> {
+  Array<IndexExpr> strides;
+  Array<IndexExpr> padding;
+  Array<IndexExpr> dilation;
+  int deformable_groups;
+  int groups;
+  IndexExpr channels;
+  Array<IndexExpr> kernel_size;
+  std::string data_layout;
+  std::string kernel_layout;
+  std::string out_layout;
+  DataType out_dtype;
+
+  TVM_DECLARE_ATTRS(DeformableConv2DAttrs, "relay.attrs.DeformableConv2DAttrs") {
+    TVM_ATTR_FIELD(strides).set_default(Array<IndexExpr>({1, 1}))
+        .describe("Specifies the strides of the convolution.");
+    TVM_ATTR_FIELD(padding).set_default(Array<IndexExpr>({0, 0}))
+        .describe("If padding is non-zero, then the input is implicitly zero-padded "
+                  "on both sides for padding number of points");
+    TVM_ATTR_FIELD(dilation).set_default(Array<IndexExpr>({1, 1}))
+        .describe("Specifies the dilation rate to use for dilated convolution.");
+    TVM_ATTR_FIELD(deformable_groups).set_default(1)
+        .describe("Controls the connections between inputs and offsets. "
+                  "Input channels are partitioned into multiple deformable groups. Offsets "
+                  "are shared across input channels in the same deformable group.");
+    TVM_ATTR_FIELD(groups).set_default(1)
+        .describe("Controls the connections between inputs and outputs. "
+                  "At groups=1, all inputs are convolved to all outputs. "
+                  "At groups=2, the operation becomes equivalent to having two convolution "
+                  "layers side by side, each seeing half the input channels, and producing "
+                  "half the output channels, and both subsequently concatenated.");
+    TVM_ATTR_FIELD(channels)
+        .describe("The number of output channels in the convolution. "
+                  "If it is not set, inferred by shape of the weight.")
+        .set_default(NullValue<IndexExpr>());
+    TVM_ATTR_FIELD(kernel_size)
+        .describe("Specifies the dimensions of the convolution window.")
+        .set_default(NullValue<Array<IndexExpr> >());
+    TVM_ATTR_FIELD(data_layout).set_default("NCHW")
+        .describe("Dimension ordering of input data. Can be 'NCHW', 'NHWC', etc. "
+                  "'N', 'C', 'H', 'W' stands for batch, channel, height, and width "
+                  "dimensions respectively. Convolution is applied on the 'H' and "
+                  "'W' dimensions.");
+    TVM_ATTR_FIELD(kernel_layout).set_default("OIHW")
+        .describe("Dimension ordering of weight. Can be 'OIHW', 'OIHW16o16i', etc. "
+                  "'O', 'I', 'H', 'W' stands for num_filter, input_channel, height, and width "
+                  "dimensions respectively.");
+    TVM_ATTR_FIELD(out_layout).set_default("")
+        .describe("Dimension ordering of output. Can be 'NCHW', 'NHWC', etc. "
+                  "'N', 'C', 'H', 'W' stands for batch, channel, height, and width "
+                  "dimensions respectively. Default to be same as input layout.");
+
+    // use 0 bits to indicate none.
+    TVM_ATTR_FIELD(out_dtype)
+        .set_default(NullValue<DataType>())
+        .describe("Output data type, set to explicit type under mixed precision setting");
+  }
+};
+
 }  // namespace relay
 }  // namespace tvm
 #endif  // TVM_RELAY_ATTRS_NN_H_
