
Commit 303a471

zhiics, SWu, inadob, jwfromm, and vmiheer authored
Merge with Apache/incubator-tvm (#71)
* Change upstream url
* Fix bias_add gradient (apache#4516): a change caused collapse_sum_like to reject implicit dimension broadcasting for the bias_add gradient, so switch to explicit sum reduction on the non-bias axis dimensions. Includes a lint fix.
* [Bugfix][Frontend][TFLite] Fix wrong function call in TANH tests (apache#4517): replace sigmoid() with tanh() in the tests for TANH.
* Fix extra reshape parameter bug (apache#4524)
* Use the best tuner possible (apache#4397): add a comment noting the availability of better tuners; fix typos and wording.
* [ir] Use DataType instead of Type for readability, since Type has been deprecated (apache#4513)
* Add bfloat16 typeflag support (apache#4525)
* Fix KeyError caused by an empty config (apache#4520)
* Fix ONNX shape dtype (apache#4528)
* Fix crash issue in the tsim backend (apache#4527)
* Replace the deprecated PIL with pillow, a fork of PIL (apache#4533). Change-Id: If2075df5475505f2da87dae7145af5a7ab83d8a4
* [Relay] External codegen (apache#4482)
* Update legacy places from nnvm to relay (apache#4535): prepares the current mainline for removal of the nnvm compiler dependency; also removes the legacy stage.
* Implement 1d deconvolution (apache#4476)
* [relay][op] Add the expand op (from ONNX) to the Relay frontend (apache#4483): add Expand to onnx.py, with tests that check the values themselves rather than shapes only.
* [TOPI] Allow batch matmul to be fused into injective ops (apache#4537)
* [TOPI] Fix the nms max_output_size loop (apache#4541): one of the loops in hybrid_nms used for the max_output_size reordering was incorrectly designated as parallel, resulting in incorrect behaviour; this patch changes it to a serial loop. Change-Id: I97184f5887f5f028d8ab339fa2808eb7630a4017
* [DOCS] Mention the Ninja build system in install/from_source.rst (apache#4554)
* [PYTHON][FFI] Cythonize NDArray.copyto and the shape property (apache#4549)
* VM external codegen (apache#4544)
* [COMMUNITY] @cchung100m -> reviewer (apache#4557)
* [VTA] Improved virtual memory mapping (apache#4545); update virtual_memory.cc
* [IR] Fix style in ir_mutator and ir_visitor (apache#4561)
* [RUNTIME][VULKAN] Fix a compiler warning (apache#4559)
* [REFACTOR][DTYPE] Isolate dtype to the runtime (apache#4560): dtype.h -> runtime/data_type.h. Changes:
  - Rename all old references of tvm::Type to DataType
  - ExprNode.type -> ExprNode.dtype and Expr.type() -> Expr.dtype()
  - Move Expr-related functions to expr_operator
  - DataType::min() -> min_value(DataType) and DataType::max() -> max_value(DataType)
  - Move the type constructors Int, UInt, Float, Handle, and Bool into DataType: Int(bits) -> DataType::Int(bits), UInt(bits) -> DataType::UInt(bits)
* Support a standardized runtime module (apache#4532)
* [Relay][Frontend][ONNX] Support auto_pad in Conv and ConvTranspose (apache#4563)
* [TEST] Remove nnvm-related code from topi and the test scripts (apache#4562); remove the docs dependency
* [Relay] Add max_pool3d to Relay and the TF converter (apache#4551)
* Remove nnvm (apache#4565)
* [VTA][Chisel] End-to-end inference with Chisel VTA (apache#4574); update TensorAlu.scala
* Remove an unnecessary cast to int32 (apache#4573)
* Fix the llvm-enabled build by adding missing intrinsics headers (apache#4575)
* [DEPRECATION] Remove the NNVM compiler (apache#4571)
* [Relay/Topi][Op] Add native DepthToSpace and SpaceToDepth operators (apache#4566): subpixel operators added to topi and tested, depth_to_space and space_to_depth attributes and ops working in Relay, an NHWC shape bug fixed, and DCR/CDR modes added to the depth_to_space operator.
* [DOC] Fix the doc in api.py (apache#4580)
* [DEPRECATION] Clean up legacy Verilog support (apache#4576): removes the leftover code for the experimental legacy Verilog support; the new hardware backend path is now supported by VTA via TSIM.
* [RUNTIME] Remove the extension VTable in favor of the unified Object system (apache#4578). Before the unified object protocol, additional extension objects were passed around by declaring a type as an extension type. The old extension mechanism required types to register their constructor and deleter in a VTable, and did not enjoy the self-contained deletion property of the new Object system. This PR upgrades the extension example to use the new object system and removes the old extension VTable. Note that the register_extension function on the Python side continues to work when the passed argument does not require explicit container copy/deletion, which covers the current use cases of the extension mechanism.
* Some Windows and MSVC fixes (apache#4569): fix Python exception creation on Windows, better string conversion for MSVC, and a C++ style fix.
* [NEWS] Add the v0.6 release (apache#4558); remove the link prefix and fix an issue number.
* [DOCS] Fix typos in the autotvm tutorial (apache#4585)
* [Quantization, Calibrate] Fix context creation when current_target is explicitly set (apache#4582)
* [Container] Fix the mismatched NDArray SaveDLTensor declaration and implementation signatures (apache#4586)
* [TOPI][AutoTVM] NHWC conv2d templates (spatial pack) for ARM (apache#3859): since some frontends (TFLite, for example) use NHWC as the default layout, enable NHWC schedule templates in TOPI and AutoTVM.
* [FIX][TOPI][X86] Schedule dense pack (apache#4539)
* [Relay] Convert Layout pass (apache#4335)
* [Relay][AlterLayout] Broadcast with scalar shape (apache#4577)
* [TOPI] Add a 3D upsampling op (apache#4584): fix lint issues, change align_corners to coordinate_transformation_mode, fix resize3d half_pixel, and clean up trilinear_resize3d_python and its docs.
* [Runtime] Add the necessary const qualifier for the NDArray container of parameters (apache#4590)
* [autotvm] Fix typos in comments (apache#4591)
* Fix the tf.compat.v1 issue for TF versions <= 1.12 (apache#4593)
* [FRONTEND][TF] conv2d_transpose 'SAME' support for kernels larger than 1x1 (apache#4484), revised per review comments, with additional fallback workarounds to make all tests pass.
* [GraphRuntime] Support parameter out in the graph runtime debug (apache#4598)
* [Perf] Add CublasLt extern support for better Igemm performance (apache#4550)
* Fix codegenc (apache#4597)
* [REFACTOR][RUNTIME] Update NDArray to use the unified Object system (apache#4581). Previously NDArray had its own object reference counting mechanism; this PR migrates NDArray to the unified object protocol while keeping its calling convention intact: NDArray still has its own type_code, and its handle is still DLTensor compatible. To achieve this, a minimum of runtime type detection was added in TVMArgValue and RetValue, performed only when the corresponding type is a base type (ObjectRef) that could also refer to an NDArray. As a result, even a base ObjectRef that refers to an NDArray has its type_code translated correctly as kNDArrayContainer, while assigning a non-base type (say, Expr) known at compile time to be incompatible with NDArray incurs no runtime type detection. The PR also adopts the object protocol for NDArray sub-classing, removes the legacy NDArray subclass protocol, and updates the examples in apps/extension accordingly. Making NDArray an Object brings all the benefits of the object system; for example, the Array container can now store NDArrays.
* [Relay][Convert Layout] Handle batch norm layout changes (apache#4600)
* [relay][refactor] Cache Op::Get in passes to reduce lookup overhead (apache#4594): refactor to use the IsOp utility.
* Update dmlc_tvm_commit_id.txt
* Temporarily disable one test_batch_norm unit test to check CI, then re-enable it.

Co-authored-by: SWu <SWu@users.noreply.github.com>
Co-authored-by: Ina Dobreva <55383260+inadob@users.noreply.github.com>
Co-authored-by: Josh Fromm <jwfromm@uw.edu>
Co-authored-by: miheer vaidya <v.miheer@gmail.com>
Co-authored-by: Liang ZOU <liang.d.zou@gmail.com>
Co-authored-by: YixinBao <yixin.bao@intel.com>
Co-authored-by: Cody Yu <comaniac0422@gmail.com>
Co-authored-by: masahi <masahi129@gmail.com>
Co-authored-by: Liangfu Chen <liangfu.chen@icloud.com>
Co-authored-by: lhutton1 <35535092+lhutton1@users.noreply.github.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Alex Gladkov <gladkov_alex@yahoo.com>
Co-authored-by: Takato Yamada <tkclimb0911@gmail.com>
Co-authored-by: Haichen Shen <shenhaichen@gmail.com>
Co-authored-by: mbarrett97 <55580676+mbarrett97@users.noreply.github.com>
Co-authored-by: Hideto Ueno <uenoku.tokotoko@gmail.com>
Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Co-authored-by: Zhao Wu <wuzhaozju@gmail.com>
Co-authored-by: Neo Chien <cchung100m@cs.ccu.edu.tw>
Co-authored-by: Yong Wu <55wuyong@163.com>
Co-authored-by: Dmitri Makarov <dmakarov@users.noreply.github.com>
Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
Co-authored-by: kice <wslikerqs@gmail.com>
Co-authored-by: Yizhi Liu <liuyizhi@apache.org>
Co-authored-by: Wang Yucheng <wyc91543@163.com>
Co-authored-by: 王振华(Zhenhua WANG) <i@jackwish.net>
Co-authored-by: deepIgnorance <zhengsizemax@outlook.com>
Co-authored-by: Animesh Jain <anijain@umich.edu>
Co-authored-by: optima2005 <56945758+optima2005@users.noreply.github.com>
Co-authored-by: zhuochen <zhuochen@outlook.com>
Co-authored-by: Leyuan Wang <laurawly@gmail.com>
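The bias_add gradient fix (apache#4516) swaps collapse_sum_like for an explicit sum reduction over the non-bias axes. A minimal NumPy sketch of that reduction follows; the function name and shapes are illustrative, not TVM's actual implementation:

```python
import numpy as np

def bias_add_grad(out_grad, bias_axis):
    """Reduce an upstream gradient to the shape of the bias vector.

    The bias is broadcast along every axis except `bias_axis` in the
    forward pass, so its gradient is the sum over all other axes.
    """
    reduce_axes = tuple(i for i in range(out_grad.ndim) if i != bias_axis)
    return out_grad.sum(axis=reduce_axes)

# A (2, 3, 4) upstream gradient with the bias on axis 1 reduces to shape (3,);
# each bias element accumulates over 2 * 4 = 8 broadcast positions.
g = np.ones((2, 3, 4))
print(bias_add_grad(g, 1))  # -> [8. 8. 8.]
```

Summing explicitly avoids relying on collapse_sum_like to infer the implicit broadcast, which is what the original code depended on.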
1 parent cfde295 commit 303a471
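Among the operator changes merged above, the DepthToSpace operator (apache#4566) and its two channel orderings can be sketched in NumPy. This is an illustrative reference implementation following the common ONNX-style definition (where the second mode is spelled CRD), not TVM's TOPI code:

```python
import numpy as np

def depth_to_space(x, block, mode="DCR"):
    """Rearrange channel blocks into spatial blocks (NCHW layout).

    DCR mode splits channels as (block, block, C_out); CRD mode splits
    them as (C_out, block, block). Both yield (N, C_out, H*block, W*block).
    """
    n, c, h, w = x.shape
    c_out = c // (block * block)
    if mode == "DCR":
        t = x.reshape(n, block, block, c_out, h, w)
        t = t.transpose(0, 3, 4, 1, 5, 2)  # -> (n, c_out, h, bi, w, bj)
    else:  # CRD
        t = x.reshape(n, c_out, block, block, h, w)
        t = t.transpose(0, 1, 4, 2, 5, 3)  # -> (n, c_out, h, bi, w, bj)
    return t.reshape(n, c_out, h * block, w * block)

# Four channels collapse into one channel at twice the spatial resolution.
x = np.arange(16).reshape(1, 4, 2, 2)
print(depth_to_space(x, 2).shape)  # -> (1, 1, 4, 4)
```

SpaceToDepth is the inverse rearrangement, and the NHWC variant mentioned in the commit permutes the same six axes in a different order.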

File tree

606 files changed: +11007 additions, -37973 deletions


CMakeLists.txt

Lines changed: 5 additions & 33 deletions
@@ -119,9 +119,8 @@ else(MSVC)
 endif(MSVC)
 
 # add source group
-FILE(GLOB_RECURSE GROUP_SOURCE "src/*.cc" "nnvm/src/*.cc")
-FILE(GLOB_RECURSE GROUP_INCLUDE "src/*.h" "include/*.h"
-                  "nnvm/src/*.h" "nnvm/include/*.h")
+FILE(GLOB_RECURSE GROUP_SOURCE "src/*.cc")
+FILE(GLOB_RECURSE GROUP_INCLUDE "src/*.h" "include/*.h")
 assign_source_group("Source" ${GROUP_SOURCE})
 assign_source_group("Include" ${GROUP_INCLUDE})
 
@@ -170,19 +169,6 @@ endif(USE_VM_PROFILER)
 file(GLOB DATATYPE_SRCS src/codegen/datatype/*.cc)
 list(APPEND COMPILER_SRCS ${DATATYPE_SRCS})
 
-if(NOT MSVC)
-  file(GLOB COMPILER_VERILOG_SRCS src/codegen/verilog/*.cc)
-  list(APPEND COMPILER_SRCS ${COMPILER_VERILOG_SRCS})
-endif()
-
-file(GLOB_RECURSE NNVM_COMPILER_SRCS
-    nnvm/src/c_api/*.cc
-    nnvm/src/core/*.cc
-    nnvm/src/pass/*.cc
-    nnvm/src/compiler/*.cc
-    nnvm/src/top/*.cc
-    )
-
 file(GLOB TOPI_SRCS
     topi/src/*.cc
     )
@@ -255,6 +241,8 @@ include(cmake/modules/LLVM.cmake)
 include(cmake/modules/Micro.cmake)
 include(cmake/modules/ANTLR.cmake)
 include(cmake/modules/contrib/BLAS.cmake)
+include(cmake/modules/contrib/CODEGENC.cmake)
+include(cmake/modules/contrib/DNNL.cmake)
 include(cmake/modules/contrib/Random.cmake)
 include(cmake/modules/contrib/MicroStandaloneRuntime.cmake)
 include(cmake/modules/contrib/Sort.cmake)
@@ -295,7 +283,6 @@ if(NOT USE_SGX STREQUAL "OFF")
   add_dependencies(tvm_runtime sgx_edl tvm_t)
   install(TARGETS tvm_t ARCHIVE DESTINATION lib${LIB_SUFFIX})
 endif()
-add_library(nnvm_compiler SHARED ${NNVM_COMPILER_SRCS})
 
 if(USE_THREADS)
   message(STATUS "Build with thread support...")
@@ -305,14 +292,11 @@ if(USE_THREADS)
   target_link_libraries(tvm Threads::Threads)
   target_link_libraries(tvm_topi Threads::Threads)
   target_link_libraries(tvm_runtime Threads::Threads)
-  target_link_libraries(nnvm_compiler Threads::Threads)
 endif(USE_THREADS)
 
 target_link_libraries(tvm ${TVM_LINKER_LIBS} ${TVM_RUNTIME_LINKER_LIBS})
 target_link_libraries(tvm_topi tvm ${TVM_LINKER_LIBS} ${TVM_RUNTIME_LINKER_LIBS})
 target_link_libraries(tvm_runtime ${TVM_RUNTIME_LINKER_LIBS})
-target_link_libraries(tvm_runtime_static ${TVM_RUNTIME_LINKER_LIBS})
-target_link_libraries(nnvm_compiler tvm)
 
 if (HIDE_PRIVATE_SYMBOLS AND NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
   set(HIDE_SYMBOLS_LINKER_FLAGS "-Wl,--exclude-libs,ALL")
@@ -322,7 +306,6 @@ if (HIDE_PRIVATE_SYMBOLS AND NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
   target_link_libraries(tvm ${HIDE_SYMBOLS_LINKER_FLAGS})
   target_link_libraries(tvm_topi ${HIDE_SYMBOLS_LINKER_FLAGS})
   target_link_libraries(tvm_runtime ${HIDE_SYMBOLS_LINKER_FLAGS})
-  target_link_libraries(nnvm_compiler ${HIDE_SYMBOLS_LINKER_FLAGS})
 endif()
 
 # Related headers
@@ -332,10 +315,7 @@ target_include_directories(
 target_include_directories(
   tvm_topi
   PUBLIC "topi/include")
-target_include_directories(
-  nnvm_compiler
-  PUBLIC "nnvm/include"
-  PUBLIC "topi/include")
 
 # Tests
 set(TEST_EXECS "")
@@ -374,8 +354,6 @@ add_custom_target(runtime DEPENDS tvm_runtime)
 install(TARGETS tvm DESTINATION lib${LIB_SUFFIX})
 install(TARGETS tvm_topi DESTINATION lib${LIB_SUFFIX})
 install(TARGETS tvm_runtime DESTINATION lib${LIB_SUFFIX})
-install(TARGETS tvm_runtime_static DESTINATION lib${LIB_SUFFIX})
-install(TARGETS nnvm_compiler DESTINATION lib${LIB_SUFFIX})
 
 if (INSTALL_DEV)
   install(
@@ -398,11 +376,6 @@ if (INSTALL_DEV)
     FILES_MATCHING
     PATTERN "*.h"
     )
-  install(
-    DIRECTORY "nnvm/include/." DESTINATION "include"
-    FILES_MATCHING
-    PATTERN "*.h"
-    )
 else(INSTALL_DEV)
   install(
     DIRECTORY "include/tvm/runtime/." DESTINATION "include/tvm/runtime"
@@ -415,5 +388,4 @@ endif(INSTALL_DEV)
 if(MSVC)
   target_compile_definitions(tvm PRIVATE -DTVM_EXPORTS)
   target_compile_definitions(tvm_runtime PRIVATE -DTVM_EXPORTS)
-  target_compile_definitions(nnvm_compiler PRIVATE -DNNVM_EXPORTS)
 endif()

CONTRIBUTORS.md

Lines changed: 1 addition & 1 deletion
@@ -69,6 +69,7 @@ We do encourage everyone to work anything they are interested in.
 - [Liangfu Chen](https://github.com/liangfu): @liangfu
 - [Wei Chen](https://github.com/wweic): @wweic
 - [Zhi Chen](https://github.com/zhiics): @zhiics
+- [Neo Chien](https://github.com/cchung100m): @cchung100m
 - [Meghan Cowan](https://github.com/cowanmeg): @cowanmeg
 - [Balint Cristian](https://github.com/cbalint13): @cbalint13
 - [Sergei Grechanik](https://github.com/sgrechanik-h): @sgrechanik-h
@@ -120,4 +121,3 @@ We do encourage everyone to work anything they are interested in.
 - [Cody Hao Yu](https://github.com/comaniac)
 - [Chris Nuernberger](https://github.com/cnuernber)
 - [Shoubhik Bhattacharya](https://github.com/shoubhik)
-- [Neo Chien](https://github.com/cchung100m)

Jenkinsfile

Lines changed: 5 additions & 4 deletions
@@ -57,7 +57,7 @@ tvm_multilib = "build/libtvm.so, " +
     "build/libvta_tsim.so, " +
     "build/libvta_fsim.so, " +
     "build/libtvm_topi.so, " +
-    "build/libnnvm_compiler.so, " + tvm_runtime
+    tvm_runtime
 
 // command to start a docker container
 docker_run = 'docker/bash.sh'
@@ -309,14 +309,15 @@ stage('Integration Test') {
       }
     }
   },
-  'legacy: GPU': {
+  'docs: GPU': {
     node('GPU') {
-      ws(per_exec_ws("tvm/legacy-python-gpu")) {
+      ws(per_exec_ws("tvm/docs-python-gpu")) {
         init_git()
         unpack_lib('gpu', tvm_multilib)
         timeout(time: max_time, unit: 'MINUTES') {
-          sh "${docker_run} ${ci_gpu} ./tests/scripts/task_python_legacy.sh"
+          sh "${docker_run} ${ci_gpu} ./tests/scripts/task_python_docs.sh"
         }
+        pack_lib('mydocs', 'docs.tgz')
       }
     }
   }

Makefile

Lines changed: 0 additions & 2 deletions
@@ -69,14 +69,12 @@ build/libtvm_web_runtime.js: build/libtvm_web_runtime.bc
 cpplint:
 	python3 3rdparty/dmlc-core/scripts/lint.py vta cpp vta/include vta/src
 	python3 3rdparty/dmlc-core/scripts/lint.py topi cpp topi/include;
-	python3 3rdparty/dmlc-core/scripts/lint.py nnvm cpp nnvm/include nnvm/src;
 	python3 3rdparty/dmlc-core/scripts/lint.py tvm cpp include src \
 	 examples/extension/src examples/graph_executor/src
 
 pylint:
 	python3 -m pylint python/tvm --rcfile=$(ROOTDIR)/tests/lint/pylintrc
 	python3 -m pylint topi/python/topi --rcfile=$(ROOTDIR)/tests/lint/pylintrc
-	python3 -m pylint nnvm/python/nnvm --rcfile=$(ROOTDIR)/tests/lint/pylintrc
 	python3 -m pylint vta/python/vta --rcfile=$(ROOTDIR)/tests/lint/pylintrc
 
 jnilint:
