Collective Knowledge repository for optimising Caffe-based designs

CK-Caffe is an open framework for collaborative and reproducible optimisation of convolutional neural networks. It is based on the Caffe framework from the Berkeley Vision and Learning Center (BVLC) and the Collective Knowledge framework from the cTuning Foundation. In essence, CK-Caffe is a suite of convenient wrappers for building, evaluating and optimising the performance of Caffe.
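
For orientation, the whole workflow boils down to a handful of CK commands. The sketch below simply gathers commands that appear in the installation and usage sections later in this README; it assumes that CK itself is already installed (see 'Installing CK' below):

$ ck pull repo:ck-caffe --url=https://github.com/dividiti/ck-caffe
$ ck install package:imagenet-2012-val-lmdb-256
$ ck run program:caffe --env.CK_CAFFE_BATCH_SIZE=1 --env.CK_CAFFE_ITERATIONS=10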

As outlined in our vision, we invite the community to collaboratively design and optimise convolutional neural networks to meet the performance, accuracy and cost requirements for deployment on a range of form factors - from sensors to self-driving cars. To this end, CK-Caffe leverages the key capabilities of CK to crowdsource experimentation across diverse platforms, CNN designs, optimisation options, and so on; to exchange experimental data in a flexible JSON-based format; and to apply leading-edge predictive analytics to extract valuable insights from the experimental data.

Examples

Compare accuracy of 4 CNNs

In this Jupyter notebook, we compare the Top-1 and Top-5 accuracy of 4 CNNs on the ImageNet validation set (50,000 images).

We have thus independently verified that on this data set SqueezeNet matches (and even slightly exceeds) the accuracy of AlexNet.

The experimental data is stored in the main CK-Caffe repository under 'experiment'.

Compare performance of 4 CNNs on Chromebook 2

This notebook investigates the effects of varying the batch size on inference performance:

  • across the same 4 CNNs;
  • with 4 BLAS libraries:
    • [CPU] OpenBLAS 0.2.18 (one thread per core);
    • [GPU] clBLAS 2.4 (OpenCL 1.1 compliant);
    • [GPU] CLBlast dev (35623cd > 0.8.0);
    • [GPU] CLBlast dev (35623cd > 0.8.0) with Mali-optimized overlay (641bb07);
  • on the Samsung Chromebook 2 platform:
    • [CPU] quad-core ARM Cortex-A15 (@ 1900 MHz);
    • [GPU] quad-core ARM Mali-T628 (@ 600 MHz);
    • [GPU] OpenCL driver 6.0 (r6p0); OpenCL standard 1.1.

Finally, this notebook compares the best performance per image across the CNNs and BLAS libraries. When using OpenBLAS, SqueezeNet 1.1 is 2 times faster than SqueezeNet 1.0 and 2.4 times faster than AlexNet, broadly in line with expectations set by the SqueezeNet paper.

When using OpenCL BLAS libraries, however, SqueezeNet 1.0 is not necessarily faster than AlexNet, despite roughly 500 times reduction in the weights' size. This suggests that an optimal network design for a given task may depend on the software stack as well as on the hardware platform. Moreover, design choices may well shift over time, as software matures and new hardware becomes available. That's why we believe it is necessary to leverage community effort to collectively grow design and optimisation knowledge.

The experimental data and visualisation notebooks are stored in a separate repository which can be obtained as follows:

ck pull repo:ck-caffe-explore-batch-size-chromebook2 \
    --url=https://github.com/dividiti/ck-caffe-explore-batch-size-chromebook2.git
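
Once pulled, the repository should appear under '$CK_REPOS' (the CK repositories directory set up in the CK installation section below). Assuming Jupyter is installed as described in the dependency sections, one way to browse the notebooks locally is the following (the exact location of the notebooks inside the repository may differ):

$ cd $CK_REPOS/ck-caffe-explore-batch-size-chromebook2
$ jupyter notebook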

Contributors

License

  • BSD (3 clause)

Status

Under development.

Installing CK-Caffe

Before installing CK-Caffe on the target system, several libraries and programs should be installed. So far, instructions are available for the following Linux flavours: Ubuntu and Gentoo (see the sections below).

Conventions

In this guide, shell commands prefixed with '$' should be run as user, whereas commands prefixed with '#' should be run as root (or as user with 'sudo').

For example, to install the 'pip' package manager and then Jupyter on Ubuntu, run as root:

# apt install python-pip
# pip install jupyter

or as user:

$ sudo apt install python-pip
$ sudo -H pip install jupyter

Installing CK-Caffe dependencies on Ubuntu

We recommend installing the dependencies via 'apt install' (for standard Ubuntu packages) or 'pip install' (for standard Python packages, which are typically more recent than those available via 'apt install'). This can be done simply by opening a Linux shell and copying and pasting the commands from the cells below.

Installing core CK dependencies

Collective Knowledge has only two dependencies: Python (2.x or 3.x) and Git, which can be installed as follows:

# apt install  \
    python-dev \
    git

Installing common dependencies

Some CK packages and Caffe require common Linux utilities (e.g. make, cmake, wget), which can be installed as follows:

# apt install \
    coreutils \
    build-essential \
    make \
    cmake \
    wget \
    python-pip

Installing Caffe dependencies

The BVLC Caffe framework has quite a few dependencies. If you've already run Caffe on your machine, it's likely that you've already satisfied all of them. If not, however, you can easily install them all in one go as follows:

# apt install \
    libboost-all-dev \
    libgflags-dev \
    libgoogle-glog-dev \
    libhdf5-serial-dev \
    liblmdb-dev \
    libleveldb-dev \
    libprotobuf-dev \
    protobuf-compiler \
    libsnappy-dev \
    libopencv-dev
# pip install \
    protobuf

Installing optional dependencies

# apt install \
    libatlas-base-dev
# pip install \
    jupyter \
    pandas numpy scipy matplotlib \
    scikit-image scikit-learn \
    pyyaml

Checking all dependencies

You can check all the dependencies on an Ubuntu system by running this notebook. (View the output of this notebook on an Odroid XU3 board here.)
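
If you prefer a quick command-line spot-check instead, the following rough sketch queries a few of the packages and Python modules installed above (the selection is only an example; adjust it to your setup):

$ dpkg -s cmake git libboost-all-dev libprotobuf-dev | grep -E '^(Package|Status)'
$ python -c "import google.protobuf, numpy, scipy, matplotlib, yaml; print('Python deps OK')"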

Installing CK

Please proceed to installing CK.

Installing CK-Caffe dependencies on Gentoo

We recommend installing the dependencies via 'emerge' (for standard Gentoo packages) or 'pip install' (for standard Python packages, which are typically more recent than those available via 'emerge'). This can be done simply by opening a Linux shell and copying and pasting the commands from the cells below.

Installing core CK dependencies

Collective Knowledge has only two dependencies: Python (2.x or 3.x) and Git, which can be installed as follows:

# emerge  \
    dev-lang/python \
    dev-vcs/git

Installing common dependencies

Some CK packages and Caffe require common Linux utilities (e.g. make, cmake, wget), which can be installed as follows:

# emerge \
    sys-devel/gcc \
    sys-devel/make \
    dev-util/cmake \
    net-misc/wget \
    dev-python/pip

Installing Caffe dependencies

The BVLC Caffe framework has quite a few dependencies. If you've already run Caffe on your machine, it's likely that you've already satisfied all of them. If not, however, you can easily install them all in one go as follows:

# emerge \
    dev-libs/boost \
    dev-util/boost-build \
    dev-cpp/gflags \
    dev-cpp/glog \
    sci-libs/hdf5 \
    dev-db/lmdb \
    dev-libs/leveldb \
    dev-libs/protobuf \
    app-arch/snappy \
    media-libs/opencv
# pip install \
    protobuf

Installing optional dependencies

# emerge \
    sci-libs/atlas
# pip install \
    jupyter \
    pandas numpy scipy matplotlib \
    scikit-image scikit-learn \
    pyyaml

Installing CK

Please proceed to installing CK.

Installing CK

Clone CK from GitHub into e.g. '$HOME/CK':

$ git clone https://github.com/ctuning/ck.git $HOME/CK

Add the following to your '$HOME/.bashrc' and then run 'source ~/.bashrc':

# Collective Knowledge.
export CK_ROOT=${HOME}/CK
export CK_REPOS=${HOME}/CK_REPOS
export CK_TOOLS=${HOME}/CK_TOOLS
export PATH=${HOME}/CK/bin:$PATH
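
As a quick sanity check, you can confirm in a new shell that the variables defined above are picked up and that the 'ck' command (shipped in 'bin/' of the cloned repository) is found on the PATH:

$ source ~/.bashrc
$ echo $CK_ROOT $CK_REPOS $CK_TOOLS
$ which ck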

Install the Python interface to CK:

$ cd $HOME/CK && sudo python setup.py install

Test that both the command line and Python interfaces work:

$ ck version
V1.7.4dev
$ python -c "import ck.kernel as ck; print (ck.__version__)"
V1.7.4dev

Installing CK-Caffe

We are now ready to install and run CK-Caffe:

$ ck pull repo:ck-caffe --url=https://github.com/dividiti/ck-caffe
$ ck run program:caffe

Sample run

TBD

Misc hints

Creating dataset subsets

The ILSVRC2012 validation dataset contains 50,000 images. For quick experiments, you can create a subset of this dataset as follows:

$ ck install package:imagenet-2012-val-lmdb-256

When prompted, enter the number of images to convert to LMDB, say, N = 100. The first N images will be taken.
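
To double-check the resulting subset, you can look for the LMDB directory and its size under '$CK_TOOLS', where CK typically installs packages (the name pattern below is only a guess at the directory name):

$ find "$CK_TOOLS" -type d -name '*lmdb*' -exec du -sh {} \;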

Setting environment variables

To set environment variables for running the program, use e.g.:

$ ck run program:caffe --env.CK_CAFFE_BATCH_SIZE=1 --env.CK_CAFFE_ITERATIONS=10
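
These flags also make it easy to script simple explorations. A minimal sketch of a batch-size sweep, reusing only the two variables shown above, could look like this (depending on your setup, 'ck run' may still ask interactive questions, e.g. which model or dataset to use):

$ for bs in 1 2 4 8 16; do
    ck run program:caffe --env.CK_CAFFE_BATCH_SIZE=$bs --env.CK_CAFFE_ITERATIONS=10
  done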
