This repository has been archived by the owner on Apr 18, 2024. It is now read-only.

Merge with main branch from TVM official repo (#3)
* [ETHOSN] Remove the compiler library from the runtime link (#10334)

Due to some restructuring of the Ethos(TM)-N driver library it is no
longer necessary to link the compiler library (AKA Support library)
into the runtime.

* [Hexagon] Export `ir_lower_vtcm_pass` function in the init file (#10330)

* [runtime] Add Metadata classes for AOTExecutor (#10282)

* Add new Metadata classes and base implementation.

 * These were autogenerated in the original PR, but checking them in
   as plain code until we can revisit the auto-generator approach.

* address masa comments

* Add documentation per Manupa's comments, and move kMetadataVersion namespace.

* remove get_name function, used for debugging

* clang-format

* [ONNX] only broadcast matmul if the shape has changed (#10321)

* [ONNX] only broadcast matmul if the shape has changed

* fix copy-pasta mistake

* [TIR] Tir constants integration into compilation pipeline (#8509)

* [TIR] Introduce tir.allocate_const to TIR

This PR adds a non-scalar constant representation to TIR. It is used to
express constants (i.e., parameters) directly in TIR instead of bypassing
TIR, as has been done until now.

Change-Id: Id3afc4d7197260cb43ecde60f05ccbce3fc42430

Co-authored-by: Giuseppe Rossini <giuseppe.rossini@arm.com>
Change-Id: Id4a09a637c9c1fd7d49989c6c10f474a78569e18

* [TIR] Integrate tir constant nodes in compilation pipeline

This PR integrates tir.allocate_const to the compilation pipeline to support --link-params.

Change-Id: Ic8d0cb75d596299fcae7078b304598afbf0c5494

Co-authored-by: Giuseppe Rossini <giuseppe.rossini@arm.com>
Change-Id: Id98cc682bbfacfe75c4d8b260fd41658f1f196b2

* [TIR] tir.const extraction

This commit implements an amendment to the tir.constant RFC,
centralizing storage of constant data within the IRModule.
Please note that data and irmod_storage_idx are not mutually exclusive;
furthermore, irmod_storage_idx is valid only immediately after a
prim func is added to the module or updated within it.
If a prim func is outside the module scope, the index becomes
meaningless. irmod_storage_idx is also excluded from the hash
calculation of the tir.constant node.

Change-Id: I40742ed580468b0252ea3fec02184cba65e20871

* unit test fixed

Change-Id: Ied2186554d4cbad44b2346216c8be92449e55732

* cmsis-nn codegen fix

Now handles the case where function parameters arrive as constants

Change-Id: I5874e182e34ef94e23048eaf3c61b01a56d91131

* Fixes for unittests

Change-Id: I5b82ee3f80337155706b5470973f494a301b5d90

* Rebasing tests fixes

Change-Id: I94ac87907081bab53c1dd1ab2db106ae057b4b19

* Linter: added method param description

Change-Id: I2f8c4c8d244b74c794abaa6079c46cc593ffcbdb

* Printing removal fix

This patch removes forgotten print in fuse_ops

Change-Id: I4bb5934f3b4cd5fde19d36a8e3319aae136bce8a

* Bugfix

Fixed concurrent map update bug here

Change-Id: Ifec3bf5030086d9079b9e493096f17dfd82297ec

* Reworked logic so as not to introduce an empty constant list into module attrs

Change-Id: I082c85b3b4b70c218f0d714f5613ef6e178bd020

* Added support for tir builtin::tvm_access_ptr

This fixes the unit tests in tests/python/integration/test_arm_mprofile_dsp.py

Change-Id: I10919f301ef9ddc3fd87f0e1a8414e9a52fc7938

* Unit test fix

Fixes unit tests in torch frontend

Change-Id: I6c179834f93dd202605d1ce5a7f07d987b9dc469

* Addressed requested changes

Addressed changes requested upstream

Change-Id: I741e52b89eb285732c23b1ac7ff277e757a088c3

* Namespace usage changed to conform to an earlier C++ standard

Change-Id: I1b29238cfe2a6bedb525f4f823a3a540f631d836

* Bugfix

Change-Id: I57a44b714b307278a243817ec2864e53ad31366b

* updated IRModuleNode::ExtractPrimFuncConstants

Updated IRModuleNode::ExtractPrimFuncConstants as requested upstream.

Change-Id: I35db0145fb5827efd0445ce665d0c99465274016

* Minor changes

fixed a typo
renamed ExtractPrimFuncConstants to ExtractConstants
removed getters/setters from FuseMutator and added a parameterized
constructor

Change-Id: Ib2326805781779b88c963a8642ff683c8755956e

* Moved LinkedParam/LinkedParamNode

Moved LinkedParam/LinkedParamNode from tvm::tir namespace to tvm
namespace

Change-Id: Ie3f0303bd4f7890c6d680268c91f2051977bc7f4

* Addressed upstream comments

Changed BindParams argument to Array<NDArray>
Removed 'name' argument from te.const
Switched to in-depth comparison of NDArrays in constant de-duplication
Removed extra final comma from NDArrayToTIR
Changed return type of ConstantAllocationSize to int64_t
Made link_param a tvm.testing.parameter for test_fuse_take and test_fuse_gather_nd

Change-Id: I4285099cc63756aa5ebe91a5bd207d4135499b41

* Removed unnecessary forward declaration

+linter

Change-Id: I2a6c0d1f97773aeb1ae3f458da252a22079ccdb1

* The constant extractor is now a separate pass

Change-Id: Ia4adca9d3315b26fbdc006ef7c115900c081e303

* Added forgotten file + unit test fix

Change-Id: Ice305f4fefd13fe95e97574e6d63ffeb664621df

* Changed to IRModule pass

Refactored ExtractPrimFuncConstants to IRModule pass.
deDup -> DeDup
Refactored logic of Applicator supplementary class

Change-Id: I6c120d175eb6790ba90f176c4f856bde8f0c7c94

* bugfix after rebasing

Change-Id: Ie3ee6ea2479476a30f486baef74f20070f117942

* -v -> -vv to have more debug information

Change-Id: I12c63731663b9c9ea574b9ed5cb17311ba3cf701

Co-authored-by: Giuseppe Rossini <giuseppe.rossini@arm.com>

* Simple workaround for PyTorch symbol crash problem in meta schedule test (#10342)

* Simple workaround for PyTorch symbol crash problem in meta schedule test

* workaround for CI

* add reading of nRF5340 DK product ID to determine which COM port to use (#10304)

* [ARM_CPU] Conv2d int8 intrinsic for cortex-A72 (#10310)

* [ARM_CPU] Conv2d int8 intrinsic for cortex-A72

Add an intrinsic that performs a dot product of 8 4-element vectors at
once. Also conditionally inline fused operators into the main
convolution loop depending on convolution size (small convolutions get
no inlining). Performance improves by ~20% on MobileNet on a Raspberry
Pi 4, with a ~30% improvement for the individual convolutions.

* ignore incorrect lints

* fixup fstring

* revert changes to conv2d_NCHWc (not int8)

* remove error check, apparently tests rely on it

* refactor alter op layout
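The semantics of that dot-product step can be sketched in plain Python (an illustration of what the intrinsic computes per call, not the actual Arm assembly or the TVM tensorize code; all names here are made up):

```python
def dot_8x4(a_rows, b_rows):
    """Compute 8 independent 4-element dot products, as the intrinsic
    does in one call: each pair of int8 rows accumulates into an int32 lane."""
    return [
        sum(x * y for x, y in zip(row_a, row_b))
        for row_a, row_b in zip(a_rows, b_rows)
    ]

a = [[1, 2, 3, 4]] * 8
b = [[5, 6, 7, 8]] * 8
print(dot_8x4(a, b))  # each lane: 1*5 + 2*6 + 3*7 + 4*8 = 70
```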

* [CI][Hexagon] Add Hexagon Tests to pipeline (#10302)

* Add hexagon tests to CI Hexagon

* Fix CRT libs

* cleanup and fix Jenkins

* Address @areusch comments

* [TIR] Misc minor updates (#10335)

* [CUBLAS] Fix cublas batch matmul strategy plevel (#10351)

* [CI] Re-introduce redirect follow and update hash for Boost download (#10343)

Looks like we did need the redirect in (#10247), otherwise you get a
blank redirect response and `tar` doesn't like that very much:

```
tar: This does not look like a tar archive

gzip: stdin: unexpected end of file
```

* Add per channel quantization to QLinearConv and fix related bugs (#10354)

* [CI] Fix Flaky Test `test_task_scheduler_gradient` (#10360)

* [CI] Fix Flaky Test `test_task_scheduler_gradient`

Fixes the flaky test mentioned in #10356 by increasing the `chain_rule` factor to avoid small gradients.

* Retrigger CI.

* [TOPI] VNNI support for batch matmul (#10332)

* add test

* compute added

* schedule works

* reuse dense_vnni schedule

* try an alternative approach to scheduling layout transform

* introduce a tunable knob to decide if compute_root

* check transpose condition

* support s8 + s8 input

* pylint

* [TIR] TIR Schedule Misc Update (#10341)

* tir schedule misc update

* Trigger Build

* [AOT] BugFix of workspace calculation (#10337)

Following an investigation from #10022,
it turns out the workspace calculation
currently assumes a single lowered
PrimFunc is produced per primitive
Relay function.

However, the exception turned out to
be the CMSIS-NN codegen, which produces
multiple calls/PrimFuncs in place of a
single call to a single Relay PrimFunc.

This commit changes the workspace
calculation to be done on the lowered
IRModule.

Additionally, it changes the test utils
to not generate any stack allocator code
when USMP is used, making the tests more
strict.

This change also removes the confusing
"run_model", which has semantics identical
to "__tvm_main__" in TIR.

* [runtime] Improved log information with function signature (#10326)

This PR introduces a function signature printer in the `TypedPackedFunc` part, so that the log information in `detail::unpack_call` is more complete. It allows users to obtain the original function signature when `detail::unpack_call` fails.

* refactored GraphProto.from_onnx into smaller functions (#10267)

* refactored GraphProto.from_onnx into smaller functions

* black formatted file

* removed line that does not seem to make sense. Is there a purpose that I missed?

* just to trigger CI pipeline

* [skip ci] Fix onnx frontend lint (#10363)

This was broken in #10267, not sure how that commit passed CI (maybe some logic to figure out the PR diff in pylint is broken).

Co-authored-by: driazati <driazati@users.noreply.github.com>

* [COMMUNITY] csullivan -> Committer (#10364)

* [BUGFIX][ARITH] Fix FloorMod Simplifier (#10336)

* fix canonical simplifier

* improve comments

* [Lint] Fix Pylint Issues (#10358)

* [TIR][Transform] Relax the LoopPartition restriction that the intersection of all conditions cannot be none. (#10340)

Co-authored-by: sqing <qing.siqi@intellif.com>

* [ETHOSN] Improved identification of driver library version (#10285)

* [ETHOSN] Stricter data type conversion checks (#10271)

The 21.11 update for the Ethos(TM)-N driver is slightly more strict in
accepting various operator attributes.

* [microNPU][4] Add the cascader Proposal generator (#9959)

* [microNPU][4] Add the cascader Proposal generator

The Proposal generator takes optimal Plans and combines
them to find optimal 'Proposals' - sets of disjoint
Plans that cover every Part in a CascaderGraph. It
ultimately produces a Pareto-frontier of 'optimal'
Proposals in terms of estimated cycles and memory usage.

Change-Id: Id42099819a596496a5769bae22f08eeb75ec69b6

* Fixes

Change-Id: I4f5f2a298bd3bb379c7c8d179150358923b0dd66

* [Runtime][Pipeline Executor] multiple threads management and the data forwarding notification mechanism. (#10234)

* [Runtime][Pipeline Executor] multiple threads management and the
data forwarding notification mechanism.

In this patch we create worker threads for each runtime of the pipeline;
the threads are terminated once the runtime class is destroyed.

We also add a notification mechanism, derived from the 'binding
configuration' of the runtime, to forward data notifications.

* address review comments.

* address review comments.

* fix typo.

* fix typo.

* trigger build.

* address review comments.

* address review comments.

* address review comments.

* address review comments.

* [Hexagon] RPC server/client for simulator (#10361)

This is the C++ code for running Hexagon code on simulator via the
RPC mechanism. It is intended to be integrated into the current
HexagonLauncher, although the integration will require further changes
to the launcher python code.

The final goal is to be able to run the same file.py on either
hardware or simulator without needing to edit the python file, but
simply by changing the configuration of the execution platform
(i.e. something like --execute-on=simulator on the command line or
in an environment variable). The exact details are still to be
determined.

* [TIR, Relay] improve bfloat16 support (#10112)

* update AMP table to enable ResNet50 conversion

* add runtime datatype dispatch for BFloat16

* skip asserts for uint16 for bf16 compatibility

* add bf16 cast for the unary intrinsic operators

* enable "bf16<-->fp32<-->any dtype" casting

* support inconsistent input for bf16 BIOP legalize

* add treatments for bfloat16 in if statements

* add bfloat16 dtype casts in binary OP

* delete unnecessary treatments for bfloat16

* add test for bfloat16 building

* code style

* restore the modifications in .gitignore

* restore the changes to AMP lists

* fix typos

* fix lint errors

* fix typo

* [ci] Check more events before pinging reviewers (#10208)

* [ci] Check more events before pinging reviewers

This was missing some events before (reviews without comments, PR updated from a draft -> ready for review) so these were being ignored when finding the latest event. This PR adds them and restructures the code a bit to make it more clear what is happening for each PR. This addresses some of the issues from #9983

* fix tests

Co-authored-by: driazati <driazati@users.noreply.github.com>

* Lower cache_read and cache_write to Hexagon DMA via tensorize (#10365)

* Lower cache_read and cache_write to Hexagon DMA via tensorize

* rework test to be compatible with launcher

* remove cpu device api mem_copy implementation and test

* [microNPU] adding more tests with USMP (#10362)

Adding a few tests to confirm memory usage
with and without USMP.

- Supports a toggle to disable storage_rewrite.
- There is a slight change to tir_to_cs_translator to add the index of
  Load nodes associated with NpuAddressRange objects.

* [RELAY] [VIRTUALDEVICE] Change syntax for device planning and store parameter virtual devices in virtual_device_ field (#10352)

* Store function param virtual devices in virtual_device_ field

Fix test_annotation.py and change result_virtual_device to virtual_device

* Change plan devices tests to use the new syntax for function parameters

* Fix free var problem

* Fix attribute parsing if there is a virtual device; most device planning tests pass

* fixed lambda lifting

* Debugging high order functions -- right now FunctionOnDevice and Bind are mutually recursive. This needs to not be the case.

* tests pass, woot

* Remove FunctionOnDevice from device planner

* Don't use MaybeFunctionOnDevice in VM compiler

* Remove MaybeFunctionOnDevice from lambda lifter

* Delete FunctionOnDevice and MaybeFunctionOnDevice!

* Remove GetFunctionResultVirtualDevice

* Remove GetFunctionParamVirtualDevice

* lint

* lint

* Python formatting

* Remove FunctionOnDevice python test

* Fix bug in binds & debug output

* Fix text printer

* lint

* Remove function on device from fold constant tests

* Mark nits

* Revert behavior of bind

* clean up debug

* Make ExprBinder public interface and use instead of Bind

* Fix lambda lift

* This is broken but not sure how to fix

* passes all device planning tests yay!

* Add substitution helper and use in device planner

* Remove unnecessary check

* Respond to comments

* Update comment

* [VirtualMachine] new method allowing to set one input tensor by its index or name (#10293)

* set_input_with_index was implemented for VM

* clean code

* add getInputIndexFromName. add function descriptions. lint fix

* fix lint

* moved the check that the number of parameter names matches the number of assigned devices into the VMFunction constructor

* add GetVMFunctionWithName to Executable API

* clean code

* add SetInputWithName (set_input_with_name) to VM API

* merged SetInputWithIndex and SetInputWithName into SetOneInputTensor (set_one_input) in the VM API; the original methods were removed

* fix lint

* some fixes after review

* add set_one_input method to python API of VirtualMachine

* pytests for set_input and set_one_input methods of VirtualMachine were implemented and checked

* CI restart

* construct a simple model for pytests via Relay instead of ONNX tools (needed for correct CI)

Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>

* [Hexagon] Replace strlen in constant initialization with sizeof (#10381)

Strlen is not constexpr everywhere, so replace it with sizeof.
In C++ sizeof("string") works fine, since "string" has type
"const char [...]".

* check to avoid crash in opt_level=0 vm build (#10347)

* [DOCS] Add how to contribute TVM docs with images. (#10287)

* [MetaSchedule] Update Tuning Interfaces. (#10367)

This PR is a further improvement of the meta schedule project (apache/tvm#8473).

Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
Co-authored-by: Hongyi Jin <3231950289@qq.com>
Co-authored-by: Wuwei Lin <wuwei@apache.org>
Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>

* [Bugfix][TVMScript] Convert BufferSlice to BufferLoad when used as range/loop start and end (#10370)

A quick fix for the parser issue mentioned in #10327.
Ranges and loops require `start` and `stop` to be PrimExpr; however, `BufferSlice` is not always scalar, so it is not a `PrimExpr`.
This PR performs the transformation.

* [FIX,PROFILING] Add extra precision to numbers when serializing to json (#10392)

Numbers were serialized with too little precision when serializing
profiling reports to JSON. Deserialization could then round the
number differently than if the full precision were available.

Fixes #10382.
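The failure mode can be reproduced in a few lines of Python (a hedged illustration of the general precision problem, not TVM's actual C++ serializer):

```python
import json

x = 1.0 / 3.0

# Serializing with only six significant digits loses information:
# the parsed value no longer equals the original double.
lossy = json.loads('{"v": %.6g}' % x)["v"]

# Serializing the full precision (as Python's json does by default)
# round-trips exactly.
exact = json.loads(json.dumps({"v": x}))["v"]

print(lossy == x, exact == x)  # False True
```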

* Fix pylint error. (#10394)

pylint complains about parser.py and test_vm.py; just fix it.

* meta schedule misc update (#10389)

* Fix tvmc run error message when inputs aren't found. (#10017)

* [Runtime][PipelineExecutor] Polish the name and comments of variable. (#10395)

Polish comments and variable name

* Enable groups argument for conv2d_transpose on the cudnn backend (#10396)

* wip

* reset conv2d_transpose topi conv_mode to 1

* fix for 'Error: identifier “hfabs” is undefined'

* address @masahi's comments in pytorch test_forward

Co-authored-by: Masahiro Masuda <masahi129@gmail.com>

* Fixed a bug in the convert_fully_connected() function (#10371)

If we need to change the output shape, we must convert the output_shape tuple to a list before making the change.
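The underlying pitfall is Python tuple immutability; a minimal sketch with illustrative values (not the actual frontend code):

```python
output_shape = (1, 10)

# Tuples are immutable, so in-place assignment fails.
try:
    output_shape[1] = 20
except TypeError:
    pass  # 'tuple' object does not support item assignment

# Convert to a list, change the entry, and convert back.
as_list = list(output_shape)
as_list[1] = 20
output_shape = tuple(as_list)
print(output_shape)  # (1, 20)
```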

* [TensorIR] Renormalize split pattern (#10401)

* [MetaSchedule] Arithmetic analysis (#10403)

This PR changes the normal form of the affine detector and supports a single var predicate. It also enhances ModularSet detector to enable floor mod patterns.

* Add @slow decorator to run tests on `main` (#10057)

* Add @slow decorator to run tests on `main`

This adds the infrastructure discussed in https://discuss.tvm.apache.org/t/rfc-ci-skip-slow-tests-on-prs/11910, but without affecting any tests. As we investigate reasons behind [slow tests](https://gist.github.com/driazati/e009f09ff44c6bc91c4d95a8e17fd6f1) in CI, this decorator will allow us to move these to run only on `main` and not PRs after checking with all concerned parties.

* cleanup

Co-authored-by: driazati <driazati@users.noreply.github.com>
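A minimal sketch of what such a decorator might look like (assuming skipping via unittest.SkipTest and a hypothetical BRANCH_NAME environment variable; the real TVM test-infra implementation may differ):

```python
import functools
import os
import unittest


def slow(func):
    """Run the decorated test only when CI reports we are on main."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if os.environ.get("BRANCH_NAME") != "main":
            raise unittest.SkipTest("slow test: runs only on main")
        return func(*args, **kwargs)
    return wrapper


@slow
def test_expensive_model():
    pass  # imagine a long-running end-to-end test here
```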

* [microTVM] Zephyr: refactor _find_openocd_serial_port (#10346)

Refactor _find_openocd_serial_port() into a generic USB serial port
finder, since runners other than openocd use it (e.g. the jlink runner).

Also, instead of using redundant hardcoded values in the BOARD_USB_FIND_KW
dict, use idVendor and idProduct from boards.json. And rather than using
the 'usb' module to first find the serial number of the port and then
passing it to the 'serial' module to obtain the port path, search for the
port path directly via the 'serial' module using the serial number (if
provided) or the idVendor and idProduct values taken from boards.json.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>

* [microTVM][RVM] Skip USB device attach if device is already attached (#8737)

* [microTVM][RVM] Skip USB device attach if device is already attached

Currently, when the VirtualBox provider is selected, if the base-box-tool.py
'test' command is used while a VM is already running with the USB device
needed to perform the tests already attached to it, the command fails
because it blindly tries to attach the USB device again without checking
whether the device is already attached.

The failure can be reproduced by first running a VM for testing (the
tests need to fail and leave the VM running):

$ ./base-box-tool.py --provider virtualbox test --microtvm-board=stm32f746g_disco

then one tries to re-run the tests without building the whole VM again:

$ ./base-box-tool.py --provider virtualbox test --skip-build zephyr --microtvm-board=stm32f746g_disco

This commit fixes that error by checking and properly skipping the USB
device attach if it's already attached to the VirtualBox VM.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>

* areusch review: Use --machinereadable for the output

Use 'showvminfo --machinereadable' output to parse for more robustness
to updates in VBoxManage.

* Realize the function op during forward rewrite (#10410)

* [ci][1/2] Shard `frontend: GPU` job into 2 jobs (#10413)

This is the longest individual CI job by about an hour, meaning everything else is usually done and waiting on this job for a while before the entire build completes. This PR breaks it up into two roughly equal jobs (based on timings in https://ci.tlcpack.ai/job/tvm/job/main/2623/testReport/, both should take about 90 minutes). If capacity is available, this means CI jobs could potentially take 1 hour less. If not available, besides an insignificant queueing delay this PR has no effect.

This is a two part PR since the Jenkinsfile changes cannot be bundled in this PR, so they will need to be in a follow up.

cc @areusch

Co-authored-by: driazati <driazati@users.noreply.github.com>

* RelayViz Graphviz renderer (#10400)

Following apache/tvm#10085, this PR adds a
graphviz backend. It requires python `graphviz` package and `dot`
executable in the PATH, similar to `tedd.py`.

This implementation is much like a porting of `visualize` function in
https://tvm.apache.org/2020/07/14/bert-pytorch-tvm, except that
`node_attr_dict` is replaced with a callback `get_node_attr`.

`get_node_attr` can be used, for example, to emphasize a set of nodes.
It might be useful if we encounter problems in inference
and want to find nodes with certain types and attributes.

An example is provided in
https://github.com/chiwwang/tvm/blob/graphviz_renderer_example/test_viz.py

Its outputs are (conv2d with NCHW layout is green-colored):
https://github.com/chiwwang/tvm/blob/graphviz_renderer_example/mod_with_subgraph.pdf
https://github.com/chiwwang/tvm/blob/graphviz_renderer_example/mod_wo_subgraph.pdf

* [Runtime][ThreadPool]Refactor affinity function and support CPU affinity list setting. (#9802)

* [Runtime][ThreadPool] Refactor affinity function and support CPU affinity list setting.

Issue:
1. There are multiple affinity functions using "LINUX" and "ANDROID" macro
checks, and the multiple checks make the logic complex to maintain and
change.

2. The current [Runtime][ThreadPool] logic assumes all CPU resources are
available for a single backend runtime to do the data-flow computation. But
that assumption may not hold when a user runs multiple tasks on the system
and does not want the TVM task to exhaust all CPU resources, or when a user
runs multiple TVM backend runtimes on the system and each backend runtime
should use a different CPU affinity setting to achieve the best performance.

Solution:
1. Refactor the affinity functions to move the "LINUX" and "ANDROID" checks
into one function.

2. Introduce a new CPU AffinityMode type named "kSpecify"; by using
"kSpecify" and the function "tvm::runtime::threading::Configure", a user
can specify the CPU list for the CPU affinity of a backend runtime.

This solution reuses the existing per-thread thread pool logic of
[Runtime][ThreadPool] that creates a worker thread pool for the current
thread, which can run a particular runtime. For a multiple-runtime use
case, the user can first launch multiple threads, then call
"tvm::runtime::threading::Configure" with a CPU list to create the TVM
data-flow worker thread pool; after doing this, the execution of the
multiple runtimes on the multiple threads will use different CPU resource
lists.

* fix windows build issue.

* fix build issue.

* fix build issue.

* fix windows build issue.

* fix plint issue

* polish comments.

* address review comments.

* address review comments.

* address review comments.

* address review comments.

Co-authored-by: hua jiang <hua.jiang@xilinx.com>
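The notion of giving each runtime its own CPU list can be illustrated with the operating system's affinity API in Python (Linux-only os.sched_setaffinity; this shows only the underlying OS concept, not the new tvm::runtime::threading::Configure interface):

```python
import os

# The set of CPUs this process is currently allowed to run on.
allowed = os.sched_getaffinity(0)

# Pin the process to a single CPU from that set, the way one backend
# runtime could be given its own disjoint CPU list.
one_cpu = {min(allowed)}
os.sched_setaffinity(0, one_cpu)
print(os.sched_getaffinity(0) == one_cpu)  # True

# Restore the original affinity so later work can use all CPUs again.
os.sched_setaffinity(0, allowed)
```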

* [CI][1/2] Update the Python version of pyxir (#10406)

Currently the CMake file for pyxir is looking for things in Python3.6,
so it needs to be upgraded to use 3.7 now that we have moved to use 3.7.
Otherwise the build fails when the docker images are updated since the
3.6 can't find the pyxir packages which have moved to 3.7.

Additionally, there seems to be a problem with the newer version of
setuptools installing the pyxir libraries, so these versions are
reverted to the previous ones as a workaround.

Note that this has to be done in two patches for the changes to go
through the current CI, this patch downgrades the pip and setuptools
versions.

* Modify debug output (#10372)

1. Modify debug output to make it more readable
2. Replace magic number with a variable `error_ct_threshold`
3. Add function to set error counter threshold externally for debug purposes

* Fix relative include path (#10402)

* [ci][2/2] Shard `frontend: GPU` job into 2 jobs (#10414)

* [TensorIR] Update VerifyGPU (#10405)

* update VerifyGPU

* address comments

* [Bugfix][Arith] Fix TryFuseIter (#10427)

* Lily -> Committer (#10417)

* Add group_conv2d_transpose_nchw to CUDA backend (#10423)

* add group_conv2d_transpose_nchw to CUDA backend

* simplify significantly, just add groups argument to conv2d_transpose_nchw

* [MISC] Add missing Type2Str and remove compile warnings (#10430)

* [MISC] Add missing Type2Str and remove compile warnings

* fix lint

* [cleanup] Log compile errors for AOT tests (#10214)

* [cleanup] Log compile errors for AOT tests

See #10213

* Update tests/python/relay/aot/aot_test_utils.py

* removed the encode of msg that is already str

Co-authored-by: lhutton1 <luke.hutton@arm.com>

Co-authored-by: driazati <driazati@users.noreply.github.com>
Co-authored-by: Manupa Karunaratne <manupa.karunaratne@arm.com>
Co-authored-by: lhutton1 <luke.hutton@arm.com>

* [skip ci][CI][Fix] Fixing lint (#10445)

A linting issue was introduced in #10423, fixing this up.

Change-Id: I06c518194e30dcaa755005f06b8b7280c237d386

* [CMSIS-NN] enable USMP with CMSIS-NN (#10224)

This commit mainly enables USMP
with the CMSIS-NN codegen.

In order to do that, CMSIS-NN functions needed
to contain BufferMaps. This commit adds the
necessary BufferMaps as well.

All the tests are modified to run with USMP,
while the network tests run both with and
without USMP.

* Fix pylint complaints for some files. (#10433)

* Fix an Uninitialized Variable warning. (#10436)

There is an 'Uninitialized Variable' warning in the build process; just fix it.

* [Frontend][TFLite] Added broadcasting to prelu alpha. (#10435)

* Update prelu test cases

* Add broadcasting to prelu alpha

* [Relay] Fix shape func for strided slice (#10418)

* fix dyn strided slice

* add tests

* remove stuff

* jostle ci

* jostle ci

* jostle

* [skip-ci][COMMUNITY] leandron to PMC (#10448)

* [Hexagon] Allow execution on target or simulator from HexagonLauncher (#10454)

Setting ANDROID_SERIAL_NUMBER=simulator will execute the tests on the
simulator instead of a hardware device.

This patch also introduces an environment variable HEXAGON_RPC_LIB_DIR
to specify the location of the hexagon_api binaries. If unset, the
code will look for the binaries in the same way as before this patch.

* [microNPU][5] Convert Proposals to te.Schedules (#10062)

* [microNPU][5] Convert Proposals to te.Schedules

Change-Id: I6771578f1007b8fea02e2dec7d0c797a6ef6aa5e

* Fixes

Change-Id: Id062ca7793656be4e870ac48ba41a34aa83276d2

* Fix test

Change-Id: Ib0fd55b99459c26425e1805df19d12367244e1b0

* hot fix (#10464)

* [ci] Add workflow to cc teams (#10322)

As discussed in https://discuss.tvm.apache.org/t/rfc-remove-codeowners/12095/2?u=driazati, this adds a mechanism to auto-tag people based on PR/issue titles and labels. This should improve visibility across the project and make it easy for interested people to subscribe to various topics.

Details on usage will be posted in the relevant issue: #10317

Co-authored-by: driazati <driazati@users.noreply.github.com>

* just a typo fixed (#10442)

* minor typo fixed

* to trigger CI

* to trigger CI

* fixed formatting issues

* black formatted file

* [runtime] AOTExecutor implementation and c target code-generator (#10283)

* Add memory pools to Metadata classes.

* Move ShapeToJSON to utils.

* Track returned TensorType from AOTExecutorCodegen.

* Support calling Relay functions with Tuple.

* Expand supported TIR calling conventions to work with C++ runtime.

* Rename MetadataModule to ConstLoaderModule.

* Add runtime AOT executor module.

* Add AOT code-generation.

* Add a runtime Module to mux between .text Metadata and live Metadata.

* Move launch_param to namespace

* Add test of c++ AOT.

* Fix incongruity between kTvmRuntimeCrt constant

* Expand ExecutorCodegenMetadata to include AOT runtime metadata.

* commit cpp test

* Make Metadata compile under C.

* Ignore ephemeral metadata_module export_model_library_format.

 * This module does not need to be exported, since it is merely a C++
   wrapper around get_c_metadata, and get_metadata is not used in C.

* address manupa, kparzysz, masahi comments.

* further address comments

* clang and python format

* Fix broken test

* Address lingering comments from masahi, kparzysz

* [Runtime][ThreadPool] Handle the default value of affinity mode. (#10434)

* [Runtime][ThreadPool] Handle the default value of affinity mode and a
corner case of the function 'SetMaxConcurrency'.

 1. Handle the default value of affinity mode.
 2. After calling 'SetMaxConcurrency' with a non-zero value, calling it
    again with a zero value could not correctly set max_concurrency
    to zero. Use new logic to fix this issue.

* address review comments.

* polish the warning message.

* [Relay] Fix output dtype for conv2d wgrad when the original one is void (#10459)

* [Relay] Fix output dtype for conv2d wgrad when the original one is void

* fix cpplint

* also add out dtype information to dgrad

* also use out_dtype for wgrad

* remove redundant import

* [skip ci][ci] Remove -i from lint scripts (#10469)

This was changed in #8509 to run without checking the file formatting, which led to pylint errors like those we saw on `main` in apache/tvm@0c836b7.

Co-authored-by: driazati <driazati@users.noreply.github.com>

* Modify Jenkinsfile to prevent builds from triggering on branch indexing (#10432)

Co-authored-by: Noah <nkontur@octoml.ai>

* [skip ci][ci] Skip actions on forks (#10468)

* [ci] Use available CPUs in builds (#10359)

* [ci] Use sccache in builds

* trigger ci

* update

Co-authored-by: driazati <driazati@users.noreply.github.com>

* [ci] Fix slow test script permissions (#10457)

This is failing silently, e.g.: https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-10359/4/pipeline

cc @areusch

Co-authored-by: driazati <driazati@users.noreply.github.com>

* [runtime][Hexagon] AOTExecutor implementation for C Codegen (#10311)

* Hexagon AOT tests work

* fix and address comments

* [microTVM] Zephyr: add B-U585I-IOT02A board support (#10416)

* [MetaSchedule] Fix Cyclic Dependency in PyClass Family (#10368)

Following the design of module_pass, we developed a mechanism, a decorator named derived_obj, to systematically allow derivation from TVM objects in pure Python that can be passed into any language, without cyclic dependency. This PR introduces the new mechanism to all PyClasses in meta schedule.

* [Hotfix] Black format (#10482)

* [MetaSchedule] Keep Task / Trial / Iter / Postproc Number Consistent in Log (#10478)

This PR fixes some inconsistencies in log printing and makes sure all numbers start from zero for tasks, trials, iters, and postprocs. This should make debugging easier if any task or trial goes wrong in the future.

* [Torch] fix torch version check (#10481)

The old code compared version strings lexicographically, so the check "1.10.2" greater than "1.5.0" evaluated to false; this fixes the comparison.
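
The root cause is easy to reproduce: comparing version strings compares them character by character, which misorders multi-digit components. A common fix, assuming plain dotted numeric versions (real torch versions may carry suffixes like `+cu113`, which would need extra stripping):

```python
# Lexicographic string comparison gets versions wrong:
print("1.10.2" > "1.5.0")  # False, because '1' < '5' at the third component


def version_tuple(v):
    """Parse a plain dotted numeric version into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))


# Tuple comparison orders components numerically:
print(version_tuple("1.10.2") > version_tuple("1.5.0"))  # True
```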

* [microNPU] Remove unused code from testing infra (#10462)

Removing some legacy code from infra.py that is not called by anything.

* [MetaSchedule] Enable AutoTVM-style template-based search space (#10461)

* [MetaSchedule] Enable AutoTVM-style template-based search space

* Fix lint

* suppress mypy

* [MetaSchedule] update misc parts (#10444)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* [Arith] Handle mod/floormod in modular set analysis (#10453)
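
The idea of modular set analysis can be illustrated with a small sketch (a hypothetical helper, not TVM's actual `ModularSet` API): when a value is known to have the form `coeff * k + base`, a `floormod` by some constant can sometimes be folded to a constant.

```python
def fold_floormod(coeff, base, m):
    """Return the constant value of (coeff*k + base) % m when it is
    independent of k, else None. Assumes floor-mod semantics, which is
    what Python's % operator implements."""
    if coeff % m == 0:
        # coeff*k contributes nothing modulo m, so the result is fixed.
        return base % m
    return None


print(fold_floormod(8, 4, 4))  # 0: (8k + 4) % 4 is always 0
print(fold_floormod(6, 1, 4))  # None: (6k + 1) % 4 depends on k
```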

* Correctly enable architecture extensions in CMSIS-NN Zephyr Demo (#10458)

* Correctly enable architecture extensions in CMSIS-NN Zephyr Demo

Without `CONFIG_FPU` set, the correct architecture extensions weren't being applied, which meant the buffer sizes didn't necessarily match up. This corrects it so that they align.

* Fix memory allocation in demo

The stack allocator forcibly aligns memory by trimming parts of it, which can leave too little memory available, and the CMSIS-NN integration uses more stack than the demo with pure TVM operators (we should look to reduce some of our stack usage).

Co-authored-by: Leo-arm <Leo.Blonk@arm.com>
Co-authored-by: Masahiro Masuda <masahi129@gmail.com>
Co-authored-by: Andrew Reusch <areusch@gmail.com>
Co-authored-by: Matthew Brookhart <mbrookhart@octoml.ai>
Co-authored-by: Dmitriy Smirnov <dmitriy.smirnov@arm.com>
Co-authored-by: Giuseppe Rossini <giuseppe.rossini@arm.com>
Co-authored-by: Alan MacDonald <alanmacd@users.noreply.github.com>
Co-authored-by: Tristan Konolige <tkonolige@octoml.ai>
Co-authored-by: Mehrdad Hessar <mhessar@octoml.ai>
Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
Co-authored-by: Christopher Sidebottom <chris.sidebottom@arm.com>
Co-authored-by: Sevin F. Varoglu <sfvaroglu@octoml.ai>
Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
Co-authored-by: Hongyi Jin <3231950289@qq.com>
Co-authored-by: Manupa Karunaratne <manupa.karunaratne@arm.com>
Co-authored-by: Yaxing Cai <yaxingca@usc.edu>
Co-authored-by: SebastianBoblestETAS <73823717+SebastianBoblestETAS@users.noreply.github.com>
Co-authored-by: David Riazati <9407960+driazati@users.noreply.github.com>
Co-authored-by: driazati <driazati@users.noreply.github.com>
Co-authored-by: Ziheng Jiang <ziheng@apache.org>
Co-authored-by: Jinkun Lin <lazycal12@gmail.com>
Co-authored-by: Qiang Zhang <johnson9009@163.com>
Co-authored-by: albert qing <2628869@qq.com>
Co-authored-by: sqing <qing.siqi@intellif.com>
Co-authored-by: Matthew Barrett <55580676+mbaret@users.noreply.github.com>
Co-authored-by: Hua Jiang <huaj@xilinx.com>
Co-authored-by: Krzysztof Parzyszek <kparzysz@quicinc.com>
Co-authored-by: Youlei Yang <youlei.yang@intel.com>
Co-authored-by: Adam Straw <astraw@octoml.ai>
Co-authored-by: Lily Orth-Smith <lilyorthsmith@gmail.com>
Co-authored-by: Valery Chernov <black.chervi@gmail.com>
Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: wrongtest <wrongtest0@gmail.com>
Co-authored-by: Christian Convey <cconvey@octoml.ai>
Co-authored-by: Zihao Ye <expye@outlook.com>
Co-authored-by: Hans Brouwer <hans@brouwer.work>
Co-authored-by: Ophir Frish <ophir.frish@arm.com>
Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
Co-authored-by: Gustavo Romero <gromero@users.noreply.github.com>
Co-authored-by: chiwwang <84191062+chiwwang@users.noreply.github.com>
Co-authored-by: hua jiang <hua.jiang@xilinx.com>
Co-authored-by: Elen Kalda <elen.kalda@arm.com>
Co-authored-by: Kirill Snezhko <4477094+argrento@users.noreply.github.com>
Co-authored-by: Ben Greiner <code@bnavigator.de>
Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Co-authored-by: Haichen Shen <shenhaichen@gmail.com>
Co-authored-by: Cody Yu <comaniac0422@gmail.com>
Co-authored-by: lhutton1 <luke.hutton@arm.com>
Co-authored-by: blackkker <823036806@qq.com>
Co-authored-by: AndrewZhaoLuo <andrew.zhao.luo@gmail.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Sebastian Boblest <sebastian.boblest@etas.com>
Co-authored-by: Noah Kontur <35545508+konturn@users.noreply.github.com>
Co-authored-by: Noah <nkontur@octoml.ai>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: yogurfrul <yogur89@163.com>
Co-authored-by: Wuwei Lin <wuwei@apache.org>
Co-authored-by: Christopher Sidebottom <christopher.sidebottom@arm.com>
Showing 387 changed files with 17,404 additions and 3,515 deletions.
1 change: 1 addition & 0 deletions .github/workflows/cc_bot.yml
@@ -32,6 +32,7 @@ concurrency:

jobs:
cc-reviewers:
if: github.repository == 'apache/tvm'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
1 change: 1 addition & 0 deletions .github/workflows/ping_reviewers.yml
@@ -11,6 +11,7 @@ concurrency:

jobs:
ping:
if: github.repository == 'apache/tvm'
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
48 changes: 48 additions & 0 deletions .github/workflows/tag_teams.yml
@@ -0,0 +1,48 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

# GH actions.
# We use it to cover windows and mac builds
# Jenkins is still the primary CI

name: Teams

on:
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target
pull_request_target:
types: [opened, reopened, edited, ready_for_review, labeled]
issues:
types: [opened, edited, reopened, labeled]

concurrency:
group: Teams-${{ github.event.pull_request.number }}-${{ github.event.issue.number }}
cancel-in-progress: true

jobs:
tag-teams:
if: github.repository == 'apache/tvm'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Tag people from relevant teams
env:
PR: ${{ toJson(github.event.pull_request) }}
ISSUE: ${{ toJson(github.event.issue) }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -eux
python tests/scripts/github_tag_teams.py || echo failed
1 change: 1 addition & 0 deletions .github/workflows/update_last_successful_branch.yml
@@ -32,6 +32,7 @@ concurrency:

jobs:
update-last-successful-branch:
if: github.repository == 'apache/tvm'
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
8 changes: 8 additions & 0 deletions CMakeLists.txt
@@ -39,6 +39,7 @@ tvm_option(USE_LLVM "Build with LLVM, can be set to specific llvm-config path" O
tvm_option(USE_STACKVM_RUNTIME "Include stackvm into the runtime" OFF)
tvm_option(USE_GRAPH_EXECUTOR "Build with tiny graph executor" ON)
tvm_option(USE_GRAPH_EXECUTOR_CUDA_GRAPH "Build with tiny graph executor with CUDA Graph for GPUs" OFF)
tvm_option(USE_AOT_EXECUTOR "Build with AOT executor" ON)
tvm_option(USE_PROFILER "Build profiler for the VM and graph executor" ON)
tvm_option(USE_OPENMP "Build with OpenMP thread pool implementation" OFF)
tvm_option(USE_RELAY_DEBUG "Building Relay in debug mode..." OFF)
@@ -399,6 +400,13 @@ if(USE_PROFILER)
list(APPEND RUNTIME_SRCS ${RUNTIME_VM_PROFILER_SRCS})
endif(USE_PROFILER)

if(USE_AOT_EXECUTOR)
message(STATUS "Build with AOT Executor support...")
file(GLOB RUNTIME_AOT_EXECUTOR_SRCS src/runtime/aot_executor/*.cc)
list(APPEND RUNTIME_SRCS ${RUNTIME_AOT_EXECUTOR_SRCS})

endif(USE_AOT_EXECUTOR)

# Enable ctest if gtest is available
if(USE_GTEST)
# Check env var for backward compatibility. A better way to specify package
6 changes: 4 additions & 2 deletions CONTRIBUTORS.md
@@ -55,7 +55,8 @@ We do encourage everyone to work anything they are interested in.
- [Thierry Moreau](https://github.com/tmoreau89) (PMC): @tmoreau89 - vta
- [Kazutaka Morita](https://github.com/kazum): @kazum - frontends, opencl
- [Trevor Morris](https://github.com/trevor-m): @trevor-m - byoc, compiler
- [Leandro Nunes](https://github.com/leandron): @leandron - tvmc
- [Leandro Nunes](https://github.com/leandron) (PMC): @leandron - tvmc
- [Lily Orth-Smith](https://github.com/electriclilies): @electriclilies - relay
- [Krzysztof Parzyszek](https://github.com/kparzysz-quic): @kparzysz-quic - hexagon, llvm
- [Andrew Reusch](https://github.com/areusch): (PMC) @areusch - runtime, microTVM
- [Jared Roesch](https://github.com/jroesch) (PMC): @jroesch - relay
@@ -64,6 +65,7 @@ We do encourage everyone to work anything they are interested in.
- [Christopher Sidebottom](https://github.com/Mousius): @Mousius - arm, ethos-u, relay
- [Junru Shao](https://github.com/junrushao1994) (PMC): @junrushao1994 - relay, compiler
- [Haichen Shen](https://github.com/icemelon) (PMC): @icemelon - relay, topi
- [Chris Sullivan](https://github.com/csullivan): @csullivan - amd backend
- [Siva Rama Krishna Reddy](https://github.com/srkreddy1238): @srkreddy1238 - frontends, golang
- [Zhixun Tan](https://github.com/phisiart): @phisiart - opengl, web
- [Andrew Tulloch](https://github.com/ajtulloch): @ajtulloch - topi, compiler, runtime
@@ -106,7 +108,7 @@ We do encourage everyone to work anything they are interested in.
- [Manupa Karunaratne](https://github.com/manupa-arm): @manupa-arm
- [Marisa Kirisame](https://github.com/MarisaKirisame): @MarisaKirisame
- [Tristan Konolige](https://github.com/tkonolige): @tkonolige
- [Ruihang Lai](https://github.com/MasterJH5574): @MasterJH5574
- [Ruihang Lai](https://github.com/MasterJH5574): @MasterJH5574
- [Wuwei Lin](https://github.com/vinx13): @vinx13
- [Andrew Liu](https://github.com/hypercubestart): @hypercubestart
- [Henry Liu](https://github.com/optima2005): @optima2005
115 changes: 99 additions & 16 deletions Jenkinsfile
@@ -45,13 +45,14 @@
import org.jenkinsci.plugins.pipeline.modeldefinition.Utils

// NOTE: these lines are scanned by docker/dev_common.sh. Please update the regex as needed. -->
ci_lint = "tlcpack/ci-lint:v0.68"
ci_gpu = "tlcpack/ci-gpu:v0.81"
ci_cpu = "tlcpack/ci-cpu:v0.81"
ci_wasm = "tlcpack/ci-wasm:v0.71"
ci_i386 = "tlcpack/ci-i386:v0.74"
ci_qemu = "tlcpack/ci-qemu:v0.10"
ci_arm = "tlcpack/ci-arm:v0.07"
ci_lint = 'tlcpack/ci-lint:v0.68'
ci_gpu = 'tlcpack/ci-gpu:v0.81'
ci_cpu = 'tlcpack/ci-cpu:v0.81'
ci_wasm = 'tlcpack/ci-wasm:v0.71'
ci_i386 = 'tlcpack/ci-i386:v0.74'
ci_qemu = 'tlcpack/ci-qemu:v0.10'
ci_arm = 'tlcpack/ci-arm:v0.07'
ci_hexagon = 'tlcpack/ci-hexagon:v0.01'
// <--- End of regex-scanned config.

// Parameters to allow overriding (in Jenkins UI), the images
@@ -104,6 +105,21 @@ def init_git() {
}
}

def should_skip_slow_tests(pr_number) {
withCredentials([string(
credentialsId: 'tvm-bot-jenkins-reader',
variable: 'GITHUB_TOKEN',
)]) {
// Exit code of 1 means run slow tests, exit code of 0 means skip slow tests
result = sh (
returnStatus: true,
script: "./tests/scripts/should_run_slow_tests.py --pr '${pr_number}'",
label: 'Check if CI should run slow tests',
)
}
return result == 0
}

def cancel_previous_build() {
// cancel previous build if it is not on main.
if (env.BRANCH_NAME != 'main') {
@@ -131,6 +147,14 @@ def should_skip_ci(pr_number) {
return git_skip_ci_code == 0
}

// skips builds from branch indexing; sourced from https://www.jvt.me/posts/2020/02/23/jenkins-multibranch-skip-branch-index/
// execute this before anything else, including requesting any time on an agent
if (currentBuild.getBuildCauses().toString().contains('BranchIndexingCause')) {
print "INFO: Build skipped due to trigger being Branch Indexing"
currentBuild.result = 'ABORTED' // optional, gives a better hint to the user that it's been skipped, rather than the default which shows it's successful
return
}

cancel_previous_build()

stage('Prepare') {
@@ -168,6 +192,7 @@ stage('Sanity Check') {
label: 'Check for docs only changes',
)
skip_ci = should_skip_ci(env.CHANGE_ID)
skip_slow_tests = should_skip_slow_tests(env.CHANGE_ID)
sh (
script: "${docker_run} ${ci_lint} ./tests/scripts/task_lint.sh",
label: 'Run lint',
@@ -177,6 +202,7 @@
}
}


// Run make. First try to do an incremental make from a previous workspace in hope to
// accelerate the compilation. If something is wrong, clean the workspace and then
// build from scratch.
@@ -237,13 +263,13 @@ def python_unittest(image) {
def fsim_test(image) {
sh (
script: "${docker_run} ${image} ./tests/scripts/task_python_vta_fsim.sh",
label: 'Run VTA tests in FSIM ',
label: 'Run VTA tests in FSIM',
)
}

def cmake_build(image, path, make_flag) {
sh (
script: "${docker_run} ${image} ./tests/scripts/task_build.sh ${path} ${make_flag}",
script: "${docker_run} ${image} ./tests/scripts/task_build.py --num-executors ${CI_NUM_EXECUTORS} --sccache-bucket tvm-sccache-prod",
label: 'Run cmake build',
)
}
@@ -256,6 +282,9 @@ def cpp_unittest(image) {
}

stage('Build') {
environment {
SKIP_SLOW_TESTS = "${skip_slow_tests}"
}
parallel 'BUILD: GPU': {
if (!skip_ci) {
node('GPUBUILD') {
@@ -286,7 +315,7 @@ stage('Build') {
ci_setup(ci_cpu)
// sh "${docker_run} ${ci_cpu} ./tests/scripts/task_golang.sh"
// TODO(@jroesch): need to resolve CI issue will turn back on in follow up patch
sh (script: "${docker_run} ${ci_cpu} ./tests/scripts/task_rust.sh", label: "Rust build and test")
sh (script: "${docker_run} ${ci_cpu} ./tests/scripts/task_rust.sh", label: 'Rust build and test')
}
}
}
@@ -381,10 +410,41 @@ stage('Build') {
} else {
Utils.markStageSkippedForConditional('BUILD: QEMU')
}
},
'BUILD: Hexagon': {
if (!skip_ci && is_docs_only_build != 1) {
node('CPU') {
ws(per_exec_ws('tvm/build-hexagon')) {
init_git()
sh (
script: "${docker_run} ${ci_hexagon} ./tests/scripts/task_config_build_hexagon.sh",
label: 'Create Hexagon cmake config',
)
try {
make(ci_hexagon, 'build', '-j2')
sh (
script: "${docker_run} ${ci_hexagon} ./tests/scripts/task_build_hexagon_api.sh",
label: 'Build Hexagon API',
)
sh (
script: "${docker_run} ${ci_hexagon} ./tests/scripts/task_python_hexagon.sh",
label: 'Run Hexagon tests',
)
} finally {
junit 'build/pytest-results/*.xml'
}
}
}
} else {
Utils.markStageSkippedForConditional('BUILD: Hexagon')
}
}
}

stage('Test') {
environment {
SKIP_SLOW_TESTS = "${skip_slow_tests}"
}
parallel 'unittest: GPU': {
if (!skip_ci && is_docs_only_build != 1) {
node('TensorCore') {
@@ -442,7 +502,7 @@ stage('Test') {
'unittest: CPU': {
if (!skip_ci && is_docs_only_build != 1) {
node('CPU') {
ws(per_exec_ws("tvm/ut-python-cpu")) {
ws(per_exec_ws('tvm/ut-python-cpu')) {
try {
init_git()
unpack_lib('cpu', tvm_multilib_tsim)
@@ -452,7 +512,7 @@
fsim_test(ci_cpu)
sh (
script: "${docker_run} ${ci_cpu} ./tests/scripts/task_python_vta_tsim.sh",
label: "Run VTA tests in TSIM",
label: 'Run VTA tests in TSIM',
)
}
} finally {
@@ -537,7 +597,7 @@ stage('Test') {
Utils.markStageSkippedForConditional('topi: GPU')
}
},
'frontend: GPU': {
'frontend: GPU 1': {
if (!skip_ci && is_docs_only_build != 1) {
node('GPU') {
ws(per_exec_ws('tvm/frontend-python-gpu')) {
@@ -547,8 +607,31 @@
timeout(time: max_time, unit: 'MINUTES') {
ci_setup(ci_gpu)
sh (
script: "${docker_run} ${ci_gpu} ./tests/scripts/task_python_frontend.sh",
label: 'Run Python frontend tests',
script: "${docker_run} ${ci_gpu} ./tests/scripts/task_python_frontend.sh 1",
label: 'Run Python frontend tests (shard 1)',
)
}
} finally {
junit 'build/pytest-results/*.xml'
}
}
}
} else {
Utils.markStageSkippedForConditional('frontend: GPU 1')
}
},
'frontend: GPU 2': {
if (!skip_ci && is_docs_only_build != 1) {
node('GPU') {
ws(per_exec_ws('tvm/frontend-python-gpu')) {
try {
init_git()
unpack_lib('gpu', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
ci_setup(ci_gpu)
sh (
script: "${docker_run} ${ci_gpu} ./tests/scripts/task_python_frontend.sh 2",
label: 'Run Python frontend tests (shard 2)',
)
}
} finally {
@@ -557,7 +640,7 @@
}
}
} else {
Utils.markStageSkippedForConditional('frontend: GPU')
Utils.markStageSkippedForConditional('frontend: GPU 2')
}
},
'frontend: CPU': {

0 comments on commit b2482d0
