Flashlight: Fast, Flexible Machine Learning in C++

Quickstart | Installation | Documentation


Flashlight is a fast, flexible machine learning library written entirely in C++ from the Facebook AI Research Speech team and the creators of Torch and Deep Speech. Its core features include:

  • Just-in-time kernel compilation with modern C++, powered by the ArrayFire tensor library.
  • CUDA, CPU, and OpenCL (coming soon) backends for GPU and CPU training.
  • An emphasis on efficiency and scale.

Native support in C++ and simple extensibility make Flashlight a powerful research framework that's hackable to its core and enables fast iteration on new experimental setups and algorithms without sacrificing performance. In a single repository, Flashlight provides applications for research across multiple domains.

Project Layout

Flashlight is broken down into a few parts:

  • flashlight/lib contains kernels and standalone utilities for sequence losses, beam search decoding, text processing, and more.
  • flashlight/fl is the core neural network library using the ArrayFire tensor library.
  • flashlight/app contains applications of the core library to machine learning across domains.
  • flashlight/ext contains extensions on top of Flashlight and ArrayFire that are useful across applications.

Quickstart

First, install Flashlight and link it to your own project.

Sequential forms a sequence of Flashlight Modules for chaining computation.

Implementing a simple convnet is easy.
#include <flashlight/fl/flashlight.h>

Sequential model;

model.add(View(af::dim4(IM_DIM, IM_DIM, 1, -1)));
model.add(Conv2D(
    1 /* input channels */,
    32 /* output channels */,
    5 /* kernel width */,
    5 /* kernel height */,
    1 /* stride x */,
    1 /* stride y */,
    PaddingMode::SAME /* padding mode */,
    PaddingMode::SAME /* padding mode */));
model.add(ReLU());
model.add(Pool2D(
    2 /* kernel width */,
    2 /* kernel height */,
    2 /* stride x */,
    2 /* stride y */));
model.add(Conv2D(32, 64, 5, 5, 1, 1, PaddingMode::SAME, PaddingMode::SAME));
model.add(ReLU());
model.add(Pool2D(2, 2, 2, 2));
model.add(View(af::dim4(7 * 7 * 64, -1)));
model.add(Linear(7 * 7 * 64, 1024));
model.add(ReLU());
model.add(Dropout(0.5));
model.add(Linear(1024, 10));
model.add(LogSoftmax());

Performing forward and backward computation is straightforward:

auto output = model.forward(input);
auto loss = categoricalCrossEntropy(output, target);
loss.backward();

See the MNIST example for a full tutorial including a training loop and dataset abstractions.
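A full training loop wraps an optimizer around the forward/backward step above. A minimal sketch, assuming fl::SGDOptimizer and an illustrative learning rate (see the MNIST example for a complete, tested loop):

```cpp
// Sketch of one training step; the learning rate is illustrative.
auto opt = SGDOptimizer(model.params(), 0.01 /* learning rate */);

opt.zeroGrad();  // clear gradients from the previous step
auto output = model.forward(input);
auto loss = categoricalCrossEntropy(output, target);
loss.backward();  // populate gradients for the model parameters
opt.step();       // apply the SGD update
```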

Variable is the base Flashlight tensor type that operates on ArrayFire arrays. Tape-based automatic differentiation in Flashlight is simple and works as you'd expect.

Autograd Example
auto A = Variable(af::randu(1000, 1000), true /* calcGrad */);
auto B = 2.0 * A;
auto C = 1.0 + B;
auto D = log(C);
D.backward(); // populates A.grad() along with gradients for B, C, and D.

Installation

Requirements

At minimum, compilation requires:

  • A C++ compiler with good C++14 support (e.g. gcc/g++ >= 5)
  • CMake -- version 3.10 or later, and make
  • A Unix-ish operating system. We're currently exploring experimental support on Windows.

Building

Flashlight is most easily built and installed with vcpkg. Only the CUDA backend is currently supported with vcpkg. First, install CUDA >= 9.2, cuDNN, and NCCL. Then, after installing vcpkg, install the libraries and core with:

./vcpkg install flashlight-cuda

To see the features available for installation, run ./vcpkg search flashlight-cuda. Integrating Flashlight into your own project is simple thanks to vcpkg's well-supported CMake toolchain integration. OpenCL support in vcpkg is coming soon.
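For CMake consumers, linking typically amounts to a find_package plus a target_link_libraries call. This fragment is a sketch; the flashlight package and flashlight::flashlight target names are assumptions that may vary across releases:

```cmake
# Hypothetical consumer CMakeLists.txt -- package/target names may vary by release.
cmake_minimum_required(VERSION 3.10)
project(myproject LANGUAGES CXX)

find_package(flashlight CONFIG REQUIRED)

add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE flashlight::flashlight)
```

Configure with -DCMAKE_TOOLCHAIN_FILE pointing at vcpkg's toolchain file so find_package can locate the vcpkg-installed package.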

In-Source Build

To build your clone of Flashlight from source using vcpkg and CMake, first install dependencies:

# Dependencies by component:
#   flashlight libraries:           cuda intel-mkl fftw3 cub kenlm
#   flashlight neural net library:  arrayfire[cuda] cudnn nccl openmpi cereal
#   application libraries:          gflags glog
#   asr application:                libsndfile
#   imgclass application:           stb
#   tests (optional):               gtest
./vcpkg install \
    cuda intel-mkl fftw3 cub kenlm \
    arrayfire[cuda] cudnn nccl openmpi cereal \
    gflags glog libsndfile stb gtest

Clone the repository:

git clone https://github.com/facebookresearch/flashlight.git && cd flashlight
mkdir -p build && cd build

Then, build from source using vcpkg's CMake toolchain:

cmake .. \
    -DFL_BACKEND=CUDA \
    -DCMAKE_TOOLCHAIN_FILE=[path to your vcpkg clone]/scripts/buildsystems/vcpkg.cmake
make -j$(nproc)

To build a subset of Flashlight's features, see the installation options in the documentation.

Building from Source

Instructions to build fully from source can be found in the documentation.

Contributing and Contact

Contact: vineelkpratap@fb.com, awni@fb.com, jacobkahn@fb.com, qiantong@fb.com, antares@fb.com, padentomasello@fb.com, jcai@fb.com, gab@fb.com, vitaliy888@fb.com, locronan@fb.com

Flashlight is under active development. See CONTRIBUTING for more on how to help out.

Acknowledgments

Some of Flashlight's code is derived from arrayfire-ml.

License

Flashlight is under a BSD license. See LICENSE for more information.
