Add scripts/documentation for VSCode setup with a docker dev image.
* Forks a subset of my shell functions into docker_shell_funcs.sh, specifically needed to create docker images that run as yourself.
* Extends the readme with the three command bootstrap to get a dev container running.
* Step by step instructions for configuring VSCode for Intellisense in either npcomp or LLVM.
* Changes LLVM config options to enable tests. This setup is now suitable for upstream changes as well without rebuilding.
stellaraccident committed Oct 8, 2020
1 parent ddc2e9d commit 51d5124
Showing 6 changed files with 197 additions and 82 deletions.
136 changes: 69 additions & 67 deletions README.md
@@ -116,93 +116,95 @@ ninja check-npcomp
export PYTHONPATH="$(realpath python):$(realpath build/python)"
```

### PyTorch 1.3 - ATen pseudo-device type dispatch
### PyTorch Frontend Build (manual docker instructions)

The currently functional approach to PyTorch integration uses an ATen pseudo
device for program capture. It is activated by including the PyTorch cmake
path and setting `-DNPCOMP_ENABLE_TORCH_TYPE_DISPATCH=ON`. This approach has a
very fragile dependency on a specific PyTorch revision in the ~1.3 era and
currently must be built via the docker image in `docker/pytorch-1.3`.
Create docker image (or follow your own preferences):

We are migrating to newer approaches that build with more recent PyTorch
versions, but these are not yet functional (see below).
* Mount the (host) source directory to `/src/mlir-npcomp` (in the container).
* Mount the `/build` directory (in the container) appropriately for your case.

Docker container setup:
```shell
docker build docker/pytorch-1.6 --tag local/npcomp:build-pytorch-1.6
docker volume create npcomp-build
```

Shell into docker image:

```shell
# One of the maintainers does periodically push new images. To use one of these,
# skip the build step and use:
# BUILD_IMAGE_TAG="stellaraccident/npcomp:build-pytorch-1.3"
# Since we are not planning to support this branch long term, this process is
# entirely ad-hoc at present and geared for project maintainers and build bots
# to be able to make progress.
# See https://hub.docker.com/repository/docker/stellaraccident/npcomp
BUILD_IMAGE_TAG="local/npcomp:build-pytorch-1.3"

# Build the docker image (rebuilds PyTorch, so takes quite some time).
docker build docker/pytorch-1.3 --tag $BUILD_IMAGE_TAG

# Docker workflow (or use your own preferences).
# Create a volume for npcomp build artifacts.
docker volume create npcomp-pytorch-1.3-build

# Run the container, mounting /npcomp to the source directory and the volume
# above to the /build directory. The source directory is mounted read-only to
# avoid the container putting root owned files there.
# Replace `$HOME/src/mlir-npcomp` with an appropriate path to where the project
# is checked out.
docker run \
--mount type=bind,source=$HOME/src/mlir-npcomp,target=/npcomp,readonly \
--mount source=npcomp-pytorch-1.3-build,target=/build \
--rm -it $BUILD_IMAGE_TAG /bin/bash
--mount type=bind,source=$HOME/src/mlir-npcomp,target=/src/mlir-npcomp \
--mount source=npcomp-build,target=/build \
--rm -it local/npcomp:build-pytorch-1.6 /bin/bash
```

Build/test npcomp (from within docker image):

```shell
# From within the docker image.
# Install MLIR and configure project.
cd /npcomp
BUILD_DIR=/build ./build_tools/install_mlir.sh
BUILD_DIR=/build ./build_tools/cmake_configure.sh \
-DCMAKE_PREFIX_PATH=/opt/conda/lib/python3.6/site-packages/torch/share/cmake \
-DNPCOMP_ENABLE_TORCH_TYPE_DISPATCH=ON

# Build.
cd /build
ninja
ninja check-npcomp
ninja check-frontends-pytorch
cd /src/mlir-npcomp
./build_tools/install_mlir.sh
./build_tools/cmake_configure.sh
cmake --build /build/npcomp --target check-npcomp check-frontends-pytorch
```
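
The build scripts resolve their directories from environment variables that the dev image pre-sets (per the `ENV` lines added to `docker/pytorch-1.6/Dockerfile` in this commit). A minimal sketch of that resolution, with values assumed from the Dockerfile:

```shell
# Sketch: how the build scripts pick their directories from the environment.
# Variable names/values mirror the ENV lines in docker/pytorch-1.6/Dockerfile.
td=/src/mlir-npcomp                       # repo root inside the container
NPCOMP_BUILD_DIR=/build/npcomp
LLVM_BUILD_DIR=/build/llvm-build
LLVM_INSTALL_DIR=/build/llvm-install

build_dir="${NPCOMP_BUILD_DIR:-$td/build}"            # falls back to <repo>/build
build_mlir="${LLVM_BUILD_DIR-$build_dir/build-mlir}"
install_mlir="${LLVM_INSTALL_DIR-$build_dir/install-mlir}"
echo "$build_dir"      # /build/npcomp
echo "$build_mlir"     # /build/llvm-build
echo "$install_mlir"   # /build/llvm-install
```

Unsetting these variables outside the container makes the same scripts fall back to in-tree `build/` directories.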

### PyTorch 1.6+ - Graph API <-> MLIR
### VSCode with a Docker Dev Image

Note: This variant is not yet complete in any useful way.

Create docker image (or follow your own preferences):
#### Start a docker dev container based on our image

* Map the source directory to `/npcomp`
* Map the `/build` directory appropriately for your case.
Assumes that mlir-npcomp is checked out locally under `~/src/mlir-npcomp`.
See `docker_shell_funcs.sh` for commands to modify if different.

```shell
BUILD_IMAGE_TAG="local/npcomp:build-pytorch-1.6"
docker build docker/pytorch-1.6 --tag $BUILD_IMAGE_TAG
docker volume create npcomp-pytorch-1.6-build
# Build/start the container.
source ./build_tools/docker_shell_funcs.sh
npcomp_docker_build # Only needed first time/on updates to docker files.
npcomp_docker_start
```

Shell into docker image:

```shell
docker run \
--mount type=bind,source=$HOME/src/mlir-npcomp,target=/npcomp,readonly \
--mount source=npcomp-pytorch-1.6-build,target=/build \
--rm -it $BUILD_IMAGE_TAG /bin/bash
# Get an interactive shell to the container and initial build.
npcomp_docker_login
```

Build/test npcomp (from within docker image):

```shell
# From within the docker image.
cd /npcomp
./build_tools/install_mlir.sh
./build_tools/cmake_configure.sh
cmake --build /build --target check-npcomp check-frontends-pytorch
# Stop the container (when done).
npcomp_docker_stop
```

### Configure VSCode:

Attach to your running container: open the Docker extension tab (left panel), right-click the container name, and select "Attach Visual Studio Code".

Install extensions in container:
* CMake Tools
* C/C++
* C++ Intellisense

#### Add workspace folders:

* `mlir-npcomp` source folder
* `external/llvm-project` source folder

#### Configure general settings:

`Ctrl-Shift-P` > `Preferences: Open Settings (UI)`

* For `mlir-npcomp` folder:
* `Cmake: Build directory`: `/build/npcomp`
* Uncheck `Cmake: Configure On Edit` and `Cmake: Configure on Open`
* For `llvm-project` folder:
* `Cmake: Build directory`: `/build/llvm-build`
* Uncheck `Cmake: Configure On Edit` and `Cmake: Configure on Open`
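
These UI settings correspond to per-folder workspace settings along the lines of the following sketch (setting names assumed from the CMake Tools extension; shown here for the `mlir-npcomp` folder):

```json
{
  "cmake.buildDirectory": "/build/npcomp",
  "cmake.configureOnEdit": false,
  "cmake.configureOnOpen": false
}
```

For the `llvm-project` folder, use `/build/llvm-build` as the build directory instead.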

#### Configure Intellisense:

`Ctrl-Shift-P` > `C/C++: Edit Configurations (UI)`

* Open C/C++ config (for each project folder):
* Under Advanced, Compile Commands:
* set `/build/npcomp/compile_commands.json` for mlir-npcomp
* set `/build/llvm-build/compile_commands.json` for llvm-project
* Open a C++ file, wait a few seconds, and verify that you get code completion
(press Ctrl-Space).
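
The same Intellisense configuration can also be written directly into each folder's `.vscode/c_cpp_properties.json` (a sketch; field names from the C/C++ extension, shown for the `mlir-npcomp` folder):

```json
{
  "configurations": [
    {
      "name": "Linux",
      "compileCommands": "/build/npcomp/compile_commands.json"
    }
  ],
  "version": 4
}
```

Use `/build/llvm-build/compile_commands.json` for the `llvm-project` folder.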

Make sure to save your workspace (prefer a local folder with the "Use Local" button)!
6 changes: 3 additions & 3 deletions build_tools/cmake_configure.sh
@@ -11,9 +11,9 @@ set -e

# Setup directories.
td="$(realpath $(dirname $0)/..)"
build_dir="$(realpath "${BUILD_DIR:-$td/build}")"
install_mlir="$build_dir/install-mlir"
build_mlir="$build_dir/build-mlir"
build_dir="$(realpath "${NPCOMP_BUILD_DIR:-$td/build}")"
build_mlir="${LLVM_BUILD_DIR-$build_dir/build-mlir}"
install_mlir="${LLVM_INSTALL_DIR-$build_dir/install-mlir}"
declare -a extra_opts

if ! [ -d "$install_mlir/include/mlir" ]; then
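Note the two spellings of default expansion in the hunk above: `:-` for the npcomp build dir, bare `-` for the LLVM dirs. The latter lets a caller deliberately override with an empty value. A small sketch of the difference:

```shell
# The two default forms differ when a variable is set but empty:
#   ${VAR:-default}  -> default if VAR is unset OR empty
#   ${VAR-default}   -> default only if VAR is unset
unset UNSET_VAR
EMPTY_VAR=""
a="${UNSET_VAR:-fallback}"   # fallback
b="${EMPTY_VAR:-fallback}"   # fallback (empty counts as missing)
c="${EMPTY_VAR-fallback}"    # "" (the variable is set, so no fallback)
echo "a=$a b=$b c=$c"
```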
55 changes: 55 additions & 0 deletions build_tools/docker_shell_funcs.sh
@@ -0,0 +1,55 @@
# Build the docker images for npcomp:
# npcomp:build-pytorch-1.6
# me/npcomp:build-pytorch-1.6 (additional dev packages and current user)
function npcomp_docker_build() {
if ! [ -f "docker/pytorch-1.6/Dockerfile" ]; then
echo "Please run from the mlir-npcomp/ source directory..."
return 1
fi
echo "Building out of $(pwd)..."
docker build docker/pytorch-1.6 --tag npcomp:build-pytorch-1.6
npcomp_docker_build_for_me npcomp:build-pytorch-1.6
}

# Start a container named "npcomp" in the background with the current-user
# dev image built above.
function npcomp_docker_start() {
local host_src_dir="${1-$HOME/src/mlir-npcomp}"
if ! [ -d "$host_src_dir" ]; then
echo "mlir-npcomp source directory not found:"
echo "Pass path to host source directory as argument (default=$host_src_dir)."
return 1
fi
docker volume create npcomp-build
docker run -d --rm --name npcomp \
--mount source=npcomp-build,target=/build \
--mount type=bind,source=$host_src_dir,target=/src/mlir-npcomp \
me/npcomp:build-pytorch-1.6 tail -f /dev/null
}

# Stop the container named "npcomp".
function npcomp_docker_stop() {
docker stop npcomp
}

# Get an interactive bash shell to the "npcomp" container.
function npcomp_docker_login() {
docker exec -it npcomp /bin/bash
}

### Implementation helpers below.
# From a root image, build an image just for me, hard-coded with a user
# matching the host user and a home directory that mirrors that on the host.
function npcomp_docker_build_for_me() {
local root_image="$1"
echo "
FROM $root_image
USER root
RUN apt install -y sudo byobu git procps lsb-release
RUN addgroup --gid $(id -g $USER) $USER
RUN mkdir -p $(dirname $HOME) && useradd -m -d $HOME --gid $(id -g $USER) --uid $(id -u $USER) $USER
RUN echo '$USER ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
USER $USER
" | docker build --tag me/${root_image} -
}
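
The per-user image trick above just templates a small Dockerfile from the host user's identity and pipes it to `docker build -`. A standalone sketch of the string being generated (user/uid/home values are hypothetical; `docker build` itself is not invoked here):

```shell
# Template a derived Dockerfile for a hypothetical host user "dev" (uid/gid 1000).
root_image="npcomp:build-pytorch-1.6"
user=dev; uid=1000; gid=1000; home=/home/dev
dockerfile="
FROM $root_image
USER root
RUN addgroup --gid $gid $user
RUN useradd -m -d $home --gid $gid --uid $uid $user
RUN echo '$user ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
USER $user
"
echo "$dockerfile"
```

The real function substitutes `$(id -u)`, `$(id -g)`, and `$HOME` so that files created in the bind-mounted source tree are owned by the host user rather than root.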
14 changes: 6 additions & 8 deletions build_tools/install_mlir.sh
@@ -5,7 +5,9 @@
# BUILD_DIR=/build ./build_tools/install_mlir.sh
set -e
td="$(realpath $(dirname $0)/..)"
build_dir="$(realpath "${BUILD_DIR:-$td/build}")"
build_dir="$(realpath "${NPCOMP_BUILD_DIR:-$td/build}")"
build_mlir="${LLVM_BUILD_DIR-$build_dir/build-mlir}"
install_mlir="${LLVM_INSTALL_DIR-$build_dir/install-mlir}"

# Find LLVM source (assumes it is adjacent to this directory).
LLVM_SRC_DIR="$(realpath "${LLVM_SRC_DIR:-$td/external/llvm-project}")"
@@ -15,10 +17,7 @@ if ! [ -f "$LLVM_SRC_DIR/llvm/CMakeLists.txt" ]; then
exit 1
fi
echo "Using LLVM source dir: $LLVM_SRC_DIR"
echo "Build directory: $build_dir"
# Setup directories.
build_mlir="$build_dir/build-mlir"
install_mlir="$build_dir/install-mlir"
echo "Building MLIR in $build_mlir"
echo "Install MLIR to $install_mlir"
mkdir -p "$build_mlir"
@@ -32,16 +31,15 @@ set -x
cmake -GNinja \
"-H$LLVM_SRC_DIR/llvm" \
"-B$build_mlir" \
-DCMAKE_EXPORT_COMPILE_COMMANDS=TRUE \
-DLLVM_INSTALL_UTILS=ON \
-DLLVM_ENABLE_PROJECTS=mlir \
-DLLVM_TARGETS_TO_BUILD="X86;AArch64;ARM" \
-DLLVM_TARGETS_TO_BUILD="X86" \
-DLLVM_INCLUDE_TOOLS=ON \
-DLLVM_BUILD_TOOLS=OFF \
-DLLVM_INCLUDE_TESTS=OFF \
"-DCMAKE_INSTALL_PREFIX=$install_mlir" \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DLLVM_ENABLE_ASSERTIONS=On \
-DLLVM_ENABLE_RTTI=On \
"-DMLIR_BINDINGS_PYTHON_ENABLED=ON"
-DMLIR_BINDINGS_PYTHON_ENABLED=ON

cmake --build "$build_mlir" --target install
8 changes: 4 additions & 4 deletions docker/pytorch-1.6/Dockerfile
@@ -17,16 +17,16 @@ RUN ln -s /usr/bin/llvm-symbolizer-10 /usr/bin/llvm-symbolizer
RUN pip3 install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
RUN ln -s /usr/local/lib/python3.8/dist-packages/torch /pytorch

# Other dev tools here (avoids forking large images when changed).
RUN apt install -y byobu

# Build configuration
RUN mkdir /ccache && ln -s /usr/bin/ccache /ccache/clang && ln -s /usr/bin/ccache /ccache/clang++
RUN mkdir /build && chmod a+rw /build /ccache
ENV PATH "/ccache:${PATH}"
ENV CC clang
ENV CXX clang++
# Binary distributions of torch force CXX11 ABI 0 :(
ENV CXXFLAGS "-D_GLIBCXX_USE_CXX11_ABI=0"
ENV LDFLAGS "-fuse-ld=/usr/bin/ld.lld"
ENV CMAKE_PREFIX_PATH /pytorch/share/cmake
ENV BUILD_DIR /build
ENV LLVM_BUILD_DIR /build/llvm-build
ENV LLVM_INSTALL_DIR /build/llvm-install
ENV NPCOMP_BUILD_DIR /build/npcomp
60 changes: 60 additions & 0 deletions docs/pytorch13_build.md
@@ -0,0 +1,60 @@
# Deprecated PyTorch 1.3 based build

These instructions are retained for the transition. Refer to the top-level README for up-to-date instructions.

### PyTorch 1.3 - ATen pseudo-device type dispatch

The currently functional approach to PyTorch integration uses an ATen pseudo
device for program capture. It is activated by including the PyTorch cmake
path and setting `-DNPCOMP_ENABLE_TORCH_TYPE_DISPATCH=ON`. This approach has a
very fragile dependency on a specific PyTorch revision in the ~1.3 era and
currently must be built via the docker image in `docker/pytorch-1.3`.

We are migrating to newer approaches that build with more recent PyTorch
versions, but these are not yet functional (see below).

Docker container setup:

```shell
# One of the maintainers does periodically push new images. To use one of these,
# skip the build step and use:
# BUILD_IMAGE_TAG="stellaraccident/npcomp:build-pytorch-1.3"
# Since we are not planning to support this branch long term, this process is
# entirely ad-hoc at present and geared for project maintainers and build bots
# to be able to make progress.
# See https://hub.docker.com/repository/docker/stellaraccident/npcomp
BUILD_IMAGE_TAG="local/npcomp:build-pytorch-1.3"

# Build the docker image (rebuilds PyTorch, so takes quite some time).
docker build docker/pytorch-1.3 --tag $BUILD_IMAGE_TAG

# Docker workflow (or use your own preferences).
# Create a volume for npcomp build artifacts.
docker volume create npcomp-pytorch-1.3-build

# Run the container, mounting /npcomp to the source directory and the volume
# above to the /build directory. The source directory is mounted read-only to
# avoid the container putting root owned files there.
# Replace `$HOME/src/mlir-npcomp` with an appropriate path to where the project
# is checked out.
docker run \
--mount type=bind,source=$HOME/src/mlir-npcomp,target=/npcomp,readonly \
--mount source=npcomp-pytorch-1.3-build,target=/build \
--rm -it $BUILD_IMAGE_TAG /bin/bash
```

```shell
# From within the docker image.
# Install MLIR and configure project.
cd /npcomp
BUILD_DIR=/build ./build_tools/install_mlir.sh
BUILD_DIR=/build ./build_tools/cmake_configure.sh \
-DCMAKE_PREFIX_PATH=/opt/conda/lib/python3.6/site-packages/torch/share/cmake \
-DNPCOMP_ENABLE_TORCH_TYPE_DISPATCH=ON

# Build.
cd /build
ninja
ninja check-npcomp
ninja check-frontends-pytorch
```
