
Commit

Merge remote-tracking branch 'origin/master' into thiagofc/fix-orttrainer-on-ortmodule-dev-branch
Thiago Crepaldi committed Feb 1, 2021
2 parents 891181d + 5b69cbe commit 6b890c2
Showing 968 changed files with 45,436 additions and 17,135 deletions.
4 changes: 4 additions & 0 deletions .flake8
@@ -3,3 +3,7 @@ max-line-length = 120
per-file-ignores =
__init__.py:F401
format = [flake8 PEP8 ERROR] %(path)s:%(row)d:%(col)d: %(code)s %(text)s
# We generally exclude using cmake/flake8.cmake. If something needs to be excluded here
# The exclude value/s need to be on a newline otherwise it doesn't work (at least on Windows)
# exclude =
# ./onnxruntime/core/flatbuffers/ort_flatbuffers_py
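For illustration, an uncommented exclude block would follow the comment above and put each value on its own line — a hypothetical sketch only, mirroring the commented-out path:
```
# hypothetical: each excluded path on its own line, as the comment requires
exclude =
    ./onnxruntime/core/flatbuffers/ort_flatbuffers_py
```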
19 changes: 10 additions & 9 deletions .gitmodules
@@ -13,9 +13,6 @@
[submodule "cmake/external/date"]
path = cmake/external/date
url = https://github.com/HowardHinnant/date.git
[submodule "cmake/external/gemmlowp"]
path = cmake/external/gemmlowp
url = https://github.com/google/gemmlowp.git
[submodule "cmake/external/nsync"]
path = cmake/external/nsync
url = https://github.com/google/nsync
@@ -25,9 +22,6 @@
[submodule "cmake/external/eigen"]
path = cmake/external/eigen
url = https://gitlab.com/libeigen/eigen.git
[submodule "cmake/external/horovod"]
path = cmake/external/horovod
url = https://github.com/horovod/horovod.git
[submodule "cmake/external/cxxopts"]
path = cmake/external/cxxopts
url = https://github.com/jarro2783/cxxopts.git
@@ -62,9 +56,16 @@
[submodule "cmake/external/SafeInt/safeint"]
path = cmake/external/SafeInt/safeint
url = https://github.com/dcleblanc/SafeInt.git
[submodule "cmake/external/onnx-tensorrt"]
path = cmake/external/onnx-tensorrt
url = https://github.com/onnx/onnx-tensorrt.git
[submodule "cmake/external/optional-lite"]
path = cmake/external/optional-lite
url = https://github.com/martinmoene/optional-lite.git
[submodule "cmake/external/mp11"]
path = cmake/external/mp11
url = https://github.com/boostorg/mp11.git
[submodule "cmake/external/coremltools"]
path = cmake/external/coremltools
url = https://github.com/apple/coremltools.git
[submodule "cmake/external/onnx-tensorrt"]
path = cmake/external/onnx-tensorrt
url = https://github.com/onnx/onnx-tensorrt.git
branch = 7.1
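After a submodule change like this (gemmlowp and horovod removed; optional-lite, mp11, and coremltools added; onnx-tensorrt pinned to branch 7.1), an existing checkout is typically re-synced with standard git commands — a general note, not part of the diff:
```
# point existing submodules at the updated URLs in .gitmodules
git submodule sync --recursive
# fetch newly added submodules and update the rest
git submodule update --init --recursive
```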
92 changes: 71 additions & 21 deletions BUILD.md
@@ -37,8 +37,12 @@
* [iOS](#iOS)

**[Training](#Training)**
* [Baseline CPU](#baseline-cpu)
* Training Enabled Execution Providers
  * [NVIDIA CUDA](#cuda-training)
  * [ROCM](#ROCM)
  * [Intel DNNL/MKL-ML](#dnnl-training)

***
# Inferencing
## Start: Baseline CPU

@@ -133,6 +137,7 @@ GCC 4.x and below are not supported.
|**Use OpenMP**|--use_openmp|OpenMP will parallelize some of the code for potential performance improvements. This is not recommended for single-threaded execution.|
|**Build using parallel processing**|--parallel|This is strongly recommended to speed up the build.|
|**Build Shared Library**|--build_shared_lib||
|**Enable Training support**|--enable_training||
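As a minimal sketch, several of the flags above are commonly combined in a single invocation (the combination is illustrative, not from the diff):
```
# parallel release build producing a shared library with OpenMP enabled
./build.sh --config RelWithDebInfo --parallel --build_shared_lib --use_openmp
```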
### APIs and Language Bindings
|API|Command|Additional details|
@@ -339,7 +344,7 @@ Note that DNNL is built as a [shared provider library](#Execution-Provider-Share
To build for Intel GPU, replace dnnl_opencl_root with the path of the Intel SDK for OpenCL Applications.
##### Windows
`.\build.bat --use_dnnl --dnnl_gpu_runtime ocl --dnnl_opencl_root "c:\program files (x86)\intelswtools\sw_dev_tools\opencl\sdk"`
@@ -353,21 +358,21 @@ To build for Intel GPU, replace dnnl_opencl_root with the path of the Intel SDK
See more information on the OpenVINO Execution Provider [here](./docs/execution_providers/OpenVINO-ExecutionProvider.md).
#### Prerequisites
1. Install the Intel<sup>®</sup> Distribution of OpenVINO<sup>TM</sup> Toolkit **Release 2021.1** for the appropriate OS and target hardware :
1. Install the Intel<sup>®</sup> Distribution of OpenVINO<sup>TM</sup> Toolkit **Release 2021.2** for the appropriate OS and target hardware:
* [Linux - CPU, GPU, VPU, VAD-M](https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-linux)
* [Linux - FPGA](https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-linux-fpga)
* [Windows - CPU, GPU, VPU, VAD-M](https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-windows).
Follow [documentation](https://docs.openvinotoolkit.org/2021.1/index.html) for detailed instructions.
Follow [documentation](https://docs.openvinotoolkit.org/2021.2/index.html) for detailed instructions.
*2021.1 is the recommended OpenVINO version. [OpenVINO 2020.2](https://docs.openvinotoolkit.org/2020.2/index.html) is minimal OpenVINO version requirement.*
*The minimum ubuntu version to support 2021.1 is 18.04.*
*2021.2 is the recommended OpenVINO version. [OpenVINO 2020.3](https://docs.openvinotoolkit.org/2020.3/index.html) is the minimum OpenVINO version requirement.*
*The minimum Ubuntu version to support 2021.2 is 18.04.*
2. Configure the target hardware with specific follow on instructions:
* To configure Intel<sup>®</sup> Processor Graphics(GPU) please follow these instructions: [Windows](https://docs.openvinotoolkit.org/2021.1/openvino_docs_install_guides_installing_openvino_windows.html#Install-GPU), [Linux](https://docs.openvinotoolkit.org/2021.1/openvino_docs_install_guides_installing_openvino_linux.html#additional-GPU-steps)
* To configure Intel<sup>®</sup> Movidius<sup>TM</sup> USB, please follow this getting started guide: [Linux](https://docs.openvinotoolkit.org/2021.1/openvino_docs_install_guides_installing_openvino_linux.html#additional-NCS-steps)
* To configure Intel<sup>®</sup> Vision Accelerator Design based on 8 Movidius<sup>TM</sup> MyriadX VPUs, please follow this configuration guide: [Windows](https://docs.openvinotoolkit.org/2021.1/openvino_docs_install_guides_installing_openvino_windows.html#hddl-myriad), [Linux](https://docs.openvinotoolkit.org/2021.1/openvino_docs_install_guides_installing_openvino_linux.html#install-VPU). Follow steps 3 and 4 to complete the configuration.
* To configure Intel<sup>®</sup> Vision Accelerator Design with an Intel<sup>®</sup> Arria<sup>®</sup> 10 FPGA, please follow this configuration guide: [Linux](https://docs.openvinotoolkit.org/2021.1/openvino_docs_install_guides_installing_openvino_linux_fpga.html)
* To configure Intel<sup>®</sup> Processor Graphics (GPU), please follow these instructions: [Windows](https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_windows.html#Install-GPU), [Linux](https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_linux.html#additional-GPU-steps)
* To configure Intel<sup>®</sup> Movidius<sup>TM</sup> USB, please follow this getting started guide: [Linux](https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_linux.html#additional-NCS-steps)
* To configure Intel<sup>®</sup> Vision Accelerator Design based on 8 Movidius<sup>TM</sup> MyriadX VPUs, please follow this configuration guide: [Windows](https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_windows.html#hddl-myriad), [Linux](https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_linux.html#install-VPU). Follow steps 3 and 4 to complete the configuration.
* To configure Intel<sup>®</sup> Vision Accelerator Design with an Intel<sup>®</sup> Arria<sup>®</sup> 10 FPGA, please follow this configuration guide: [Linux](https://docs.openvinotoolkit.org/2021.2/openvino_docs_install_guides_installing_openvino_linux_fpga.html)
3. Initialize the OpenVINO environment by running the setupvars script as shown below:
* For Linux run:
@@ -591,11 +596,13 @@ See more information on the ArmNN Execution Provider [here](./docs/execution_pro
source /opt/fsl-imx-xwayland/4.*/environment-setup-aarch64-poky-linux
alias cmake="/usr/bin/cmake -DCMAKE_TOOLCHAIN_FILE=$OECORE_NATIVE_SYSROOT/usr/share/cmake/OEToolchainConfig.cmake"
```
* See [Build ARM](#ARM) below for information on building for ARM devices
#### Build Instructions
```
./build.sh --use_armnn

```
The Relu operator is set by default to use the CPU execution provider for better performance. To use the ArmNN implementation, build with the --armnn_relu flag:
```
./build.sh --use_armnn --armnn_relu
```
@@ -606,9 +613,10 @@
The Batch Normalization operator is set by default to use the CPU execution provider. To use the ArmNN implementation, build with the --armnn_bn flag:
```
./build.sh --use_armnn --armnn_bn
```
To use a library outside the normal environment you can set a custom path by using --armnn_home and --armnn_libs tags that defines the path to the ArmNN home directory and the build directory respectively.
To use a library outside the normal environment you can set a custom path by providing the --armnn_home and --armnn_libs parameters to define the path to the ArmNN home directory and build directory respectively.
The ARM Compute Library home directory and build directory must also be available, and can be specified if needed using --acl_home and --acl_libs respectively.
```
./build.sh --use_armnn --armnn_home /path/to/ComputeLibrary --armnn_libs /path/to/build
./build.sh --use_armnn --armnn_home /path/to/armnn --armnn_libs /path/to/armnn/build --acl_home /path/to/ComputeLibrary --acl_libs /path/to/acl/build
```
---
@@ -728,14 +736,17 @@ ORT_DEBUG_NODE_IO_DUMP_DATA_TO_FILES=1
---
## Architectures
### x86
### [64-bit x86](https://en.wikipedia.org/wiki/X86-64) (also known as x86_64 or AMD64)
This is the default.
### 32-bit x86
#### Build Instructions
##### Windows
* add `--x86` argument when launching `.\build.bat`
##### Linux
* Must be built on an x86 OS
* add --x86 argument to build.sh
(Not officially supported)
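For instance, the invocations implied by the bullets above would be (illustrative):
```
# Windows, from a developer prompt:  .\build.bat --x86
# Linux (not officially supported), on an x86 host:
./build.sh --x86
```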
---
@@ -1175,8 +1186,32 @@ Dockerfile instructions are available [here](./dockerfiles#migraphx)
***

# Training
## CUDA
### Prerequisites

## Baseline CPU

### Build Instructions
To build ORT with training support, add the `--enable_training` flag to the build command.

All other build options are the same for training as they are for inferencing.

#### Windows
```
.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --enable_training
```

The default Windows CMake generator is Visual Studio 2017, but you can also use the newer Visual Studio 2019 by passing `--cmake_generator "Visual Studio 16 2019"` to `.\build.bat`.
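For instance, a Windows training build targeting the VS 2019 generator might combine the pieces above as follows (flag combination illustrative):
```
.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --enable_training --cmake_generator "Visual Studio 16 2019"
```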


#### Linux/macOS
```
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --enable_training
```

## Training Enabled Execution Providers

### <a id="cuda-training">CUDA</a>
#### Prerequisites

The default NVIDIA GPU build requires CUDA runtime libraries installed on the system:

@@ -1188,7 +1223,7 @@ The default NVIDIA GPU build requires CUDA runtime libraries installed on the sy

These dependency versions should reflect what is in [Dockerfile.training](./dockerfiles/Dockerfile.training).

### Build instructions
#### Build instructions

1. Checkout this code repo with `git clone https://github.com/microsoft/onnxruntime`

Expand All @@ -1210,8 +1245,8 @@ These dependency versions should reflect what is in [Dockerfile.training](./dock
This produces the .whl file in `./build/Linux/RelWithDebInfo/dist` for ONNX Runtime Training.
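The collapsed build step above presumably parallels the ROCm command later in this section; a hedged sketch only, with illustrative CUDA paths that are not part of the diff:
```
# assumed CUDA training build; adjust --cuda_home/--cudnn_home to the local install
./build.sh --config RelWithDebInfo --enable_training --build_wheel \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/local/cuda
```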
## ROCM
### Prerequisites
### ROCM
#### Prerequisites
The default AMD GPU build requires the ROCM software toolkit installed on the system:
@@ -1230,4 +1265,19 @@ These dependency versions should reflect what is in [Dockerfile.training](./dock
* Change to the ONNX Runtime repo base folder: `cd onnxruntime`
* Run `./build.sh --config RelWithDebInfo --enable_training --build_wheel --use_rocm --rocm_home /opt/rocm --nccl_home /opt/rocm --mpi_home <location for openmpi>`
This produces the .whl file in `./build/Linux/RelWithDebInfo/dist` for ONNX Runtime Training.
### <a id="dnnl-training">DNNL and MKLML</a>
#### Build Instructions
##### Linux
`./build.sh --enable_training --use_dnnl`
##### Windows
`.\build.bat --enable_training --use_dnnl`
Add `--build_wheel` to build the ONNX Runtime wheel.
This will produce a .whl file in `build/Linux/RelWithDebInfo/dist` for ONNX Runtime Training.
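Putting those pieces together, a wheel-producing DNNL training build would look like this (illustrative combination of the flags above):
```
# Linux DNNL training build that also produces the Python wheel
./build.sh --enable_training --use_dnnl --build_wheel
```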
16 changes: 7 additions & 9 deletions CONTRIBUTING.md
@@ -1,7 +1,12 @@
# Contributing

We're always looking for your help to fix bugs and improve the product. Create a pull request and we'll be happy to take a look.
Start by reading the [Engineering Design](./docs/InferenceHighLevelDesign.md). You can find the doxygen generated documentation [here](https://microsoft.github.io/onnxruntime/).
We're always looking for your help to improve the product (bug fixes, new features, documentation, etc.).

## Contributing a code change
* Start by reading the [Engineering Design](./docs/InferenceHighLevelDesign.md). More documentation can be found in the [docs folder](./docs/) and [here](https://microsoft.github.io/onnxruntime/).
* If your change is non-trivial or introduces new public facing APIs (discussed in more detail below) please use the [feature request issue template](https://github.com/microsoft/onnxruntime/issues/new?template=feature_request.md) to discuss it with the team and get consensus on the basic design and direction first. For all other changes, you can directly create a pull request (PR) and we'll be happy to take a look.
* Make sure your PR adheres to the [PR Guidelines](./docs/PR_Guidelines.md) and [Coding Conventions and Standards](./docs/Coding_Conventions_and_Standards.md) established by the team.
* If you're unsure about any of the above and want to contribute, you're welcome to [start a discussion](https://github.com/microsoft/onnxruntime/discussions) with the team.

## Proposing new public APIs

@@ -47,13 +52,6 @@ For making changes to the Windows Machine Learning WinRT API, please label your
* Note: After creating a pull request, you might not see a build getting triggered right away. One of the
onnxruntime team members will trigger the build for you.

## Coding guidelines

Please see [Coding Conventions and Standards](./docs/Coding_Conventions_and_Standards.md)

## Guidelines for creating a good PR (pull request)
[PR Guidelines](./docs/PR_Guidelines.md)

## Licensing guidelines

This project welcomes contributions and suggestions. Most contributions require you to
15 changes: 9 additions & 6 deletions NuGet.config
@@ -1,10 +1,13 @@
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<solution>
<add key="disableSourceControlIntegration" value="true" />
</solution>
<packageSources>
<add key="NuGet Official" value="https://api.nuget.org/v3/index.json" />
<add key="onnxruntime_public" value="https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime_public/nuget/v3/index.json" />
</packageSources>
</solution>
<packageSources>
<clear />
<add key="NuGet Official" value="https://api.nuget.org/v3/index.json" />
</packageSources>
<disabledPackageSources>
<clear />
</disabledPackageSources>
</configuration>
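With the `<clear />` elements in place, a restore consults only the single nuget.org feed; a typical invocation (standard .NET CLI, not part of the diff) is:
```
# restore using the repo's NuGet.config; only nuget.org is consulted
dotnet restore
```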
2 changes: 1 addition & 1 deletion README.md
@@ -130,7 +130,7 @@ For production scenarios, it's strongly recommended to build only from an [offic

|API|Supported Versions|Samples|
|---|---|---|
[Python](https://aka.ms/onnxruntime-python)| 3.5, 3.6, 3.7, 3.8 (3.8 excludes Win GPU and Linux ARM)<br>[Python Dev Notes](./docs/Python_Dev_Notes.md)| [Samples](./samples#python)|
[Python](https://aka.ms/onnxruntime-python)| 3.6, 3.7, 3.8, 3.9 (3.8/3.9 excludes Win GPU and Linux ARM)<br>[Python Dev Notes](./docs/Python_Dev_Notes.md)| [Samples](./samples#python)|
|[C#](docs/CSharp_API.md)| | [Samples](./samples#C)|
|[C++](./include/onnxruntime/core/session/onnxruntime_cxx_api.h)| |[Samples](./samples#CC)|
|[C](docs/C_API.md)| | [Samples](./samples#CC)|