[Documentation] Programming Model, Kernel Programming guide #1388

Merged
merged 4 commits on Mar 27, 2024
26 changes: 26 additions & 0 deletions docs/source/bibliography.bib
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@


@techreport{scott70,
author = {Dana Scott},
institution = {OUCL},
month = {November},
number = {PRG02},
pages = {30},
title = {Outline of a Mathematical Theory of Computation},
year = {1970}
}

@article{PLOTKIN20043,
abstract = {We review the origins of structural operational semantics. The main publication `A Structural Approach to Operational Semantics,' also known as the `Aarhus Notes,' appeared in 1981 [G.D. Plotkin, A structural approach to operational semantics, DAIMI FN-19, Computer Science Department, Aarhus University, 1981]. The development of the ideas dates back to the early 1970s, involving many people and building on previous work on programming languages and logic. The former included abstract syntax, the SECD machine, and the abstract interpreting machines of the Vienna school; the latter included the λ-calculus and formal systems. The initial development of structural operational semantics was for simple functional languages, more or less variations of the λ-calculus; after that the ideas were gradually extended to include languages with parallel features, such as Milner's CCS. This experience set the ground for a more systematic exposition, the subject of an invited course of lectures at Aarhus University; some of these appeared in print as the 1981 Notes. We discuss the content of these lectures and some related considerations such as `small state' versus `grand state,' structural versus compositional semantics, the influence of the Scott–Strachey approach to denotational semantics, the treatment of recursion and jumps, and static semantics. We next discuss relations with other work and some immediate further development. We conclude with an account of an old, previously unpublished, idea: an alternative, perhaps more readable, graphical presentation of systems of rules for operational semantics.},
author = {Gordon D Plotkin},
doi = {10.1016/j.jlap.2004.03.009},
issn = {1567-8326},
journal = {The Journal of Logic and Algebraic Programming},
keywords = {Semantics of programming languages, (Structural) operational semantics, Structural induction, (Labelled) transition systems, λ-calculus, Concurrency, Big step semantics, Small-step semantics, Abstract machines, Static semantics},
note = {Structural Operational Semantics},
pages = {3-15},
title = {The origins of structural operational semantics},
url = {https://www.sciencedirect.com/science/article/pii/S1567832604000268},
volume = {60-61},
year = {2004}
}
3 changes: 3 additions & 0 deletions docs/source/conf.py
Original file line number Diff line number Diff line change
Expand Up @@ -31,8 +31,11 @@
"sphinxcontrib.googleanalytics",
"myst_parser",
"autoapi.extension",
"sphinxcontrib.bibtex",
]

bibtex_bibfiles = ["bibliography.bib"]

# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
templates_path = []
Expand Down
8 changes: 7 additions & 1 deletion docs/source/ext_links.txt
Original file line number Diff line number Diff line change
Expand Up @@ -2,7 +2,7 @@
**********************************************************
THESE ARE EXTERNAL PROJECT LINKS USED IN THE DOCUMENTATION
**********************************************************

.. _math: https://docs.python.org/3/library/math.html
.. _NumPy*: https://numpy.org/
.. _Numba*: https://numba.pydata.org/
.. _numba-dpex: https://github.com/IntelPython/numba-dpex
Expand All @@ -14,6 +14,7 @@
.. _SYCL*: https://www.khronos.org/sycl/
.. _dpctl: https://intelpython.github.io/dpctl/latest/index.html
.. _Data Parallel Control: https://intelpython.github.io/dpctl/latest/index.html
.. _DLPack: https://dmlc.github.io/dlpack/latest/
.. _Dpnp: https://intelpython.github.io/dpnp/
.. _dpnp: https://intelpython.github.io/dpnp/
.. _Data Parallel Extension for Numpy*: https://intelpython.github.io/dpnp/
Expand All @@ -28,3 +29,8 @@
.. _oneDPL: https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-library.html#gs.5izf63
.. _UXL: https://uxlfoundation.org/
.. _oneAPI GPU optimization guide: https://www.intel.com/content/www/us/en/docs/oneapi/optimization-guide-gpu/2024-0/general-purpose-computing-on-gpu.html
.. _dpctl.tensor.usm_ndarray: https://intelpython.github.io/dpctl/latest/docfiles/dpctl/usm_ndarray.html#dpctl.tensor.usm_ndarray
.. _dpnp.ndarray: https://intelpython.github.io/dpnp/reference/ndarray.html

.. _Dispatcher: https://numba.readthedocs.io/en/stable/reference/jit-compilation.html#dispatcher-objects
.. _Unboxes: https://numba.readthedocs.io/en/stable/extending/interval-example.html#boxing-and-unboxing
130 changes: 84 additions & 46 deletions docs/source/overview.rst
Original file line number Diff line number Diff line change
Expand Up @@ -6,33 +6,38 @@ Overview

Data Parallel Extension for Numba* (`numba-dpex`_) is a free and open-source
LLVM-based code generator for portable accelerator programming in Python. The
code generator implements a new pseudo-kernel programming domain-specific
language (DSL) called `KAPI` that is modeled after the C++ DSL `SYCL*`_. The
SYCL language is an open standard developed under the Unified Acceleration
Foundation (`UXL`_) as a vendor-agnostic way of programming different types of
data-parallel hardware such as multi-core CPUs, GPUs, and FPGAs. Numba-dpex and
KAPI aim to bring the same vendor-agnostic and standard-compliant programming
model to Python.
code generator implements a new kernel programming API (kapi) in pure Python
that is modeled after the API of the C++ embedded domain-specific language
(eDSL) `SYCL*`_. The SYCL eDSL is an open standard developed under the Unified
Acceleration Foundation (`UXL`_) as a vendor-agnostic way of programming
different types of data-parallel hardware such as multi-core CPUs, GPUs, and
FPGAs. Numba-dpex and kapi aim to bring the same vendor-agnostic and
standard-compliant programming model to Python.

Numba-dpex is built on top of the open-source `Numba*`_ JIT compiler that
implements a CPython bytecode parser and code generator to lower the bytecode to
LLVM IR. The Numba* compiler is able to compile a large sub-set of Python and
most of the NumPy library. Numba-dpex uses Numba*'s tooling to implement the
parsing and typing support for the data types and functions defined in the KAPI
DSL. A custom code generator is then used to lower KAPI to a form of LLVM IR
that includes special LLVM instructions that define a low-level data-parallel
kernel API. Thus, a function defined in KAPI is compiled to a data-parallel
kernel that can run on different types of hardware. Currently, compilation of
KAPI is possible for x86 CPU devices, Intel Gen9 integrated GPUs, Intel UHD
integrated GPUs, and Intel discrete GPUs.


The following example shows a pairwise distance matrix computation in KAPI.
LLVM intermediate representation (IR). The Numba* compiler is able to compile a
large sub-set of Python and most of the NumPy library. Numba-dpex uses Numba*'s
tooling to implement the parsing and the typing support for the data types and
functions defined in kapi. A custom code generator is also introduced to lower
kapi functions to a form of LLVM IR that defines a low-level data-parallel
kernel. Thus, a function written in kapi, although purely sequential when
executed in Python, can be compiled to an actual data-parallel kernel that can
run on different types of hardware. Compilation of kapi is possible for x86
CPU devices, Intel Gen9 integrated GPUs, Intel UHD integrated GPUs, and Intel
discrete GPUs.

The following example presents a pairwise distance matrix computation as written
in kapi. A detailed description of the API and all relevant concepts is
provided elsewhere in the documentation; for now, the example introduces the
core tenet of the programming model.

.. code-block:: python
:linenos:

from numba_dpex import kernel_api as kapi
import math
import dpnp


def pairwise_distance_kernel(item: kapi.Item, data, distance):
Expand All @@ -49,41 +54,74 @@ The following example shows a pairwise distance matrix computation in KAPI.
distance[j, i] = math.sqrt(d)


Skipping over much of the language details, at a high-level the
``pairwise_distance_kernel`` can be viewed as a data-parallel function that gets
executed individually by a set of "work items". That is, each work item runs the
same function for a subset of the elements of the input ``data`` and
``distance`` arrays. For programmers familiar with the CUDA or OpenCL languages,
it is the same programming model that is referred to as Single Program Multiple
Data (SPMD). As Python has no concept of a work item, the KAPI function itself is
sequential and needs to be compiled to convert it into a parallel version. The
next example shows the changes to the original script to compile and run the
data = dpnp.random.ranf((10000, 3), device="gpu")
dist = dpnp.empty(shape=(data.shape[0], data.shape[0]), device="gpu")
exec_range = kapi.Range(data.shape[0], data.shape[0])
kapi.call_kernel(kernel(pairwise_distance_kernel), exec_range, data, dist)

The ``pairwise_distance_kernel`` function conceptually defines a data-parallel
function to be executed individually by a set of "work items". That is, each
work item runs the function for a subset of the elements of the input ``data``
and ``distance`` arrays. The ``item`` argument passed to the function identifies
the work item that is executing a specific instance of the function. The set of
work items is defined by the ``exec_range`` object and the ``call_kernel`` call
instructs every work item in ``exec_range`` to execute
``pairwise_distance_kernel`` for a specific subset of the data.
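
Because a kapi function executes sequentially in plain Python, the work-item
model can be illustrated with a toy stand-in for ``call_kernel``. The sketch
below is not the numba-dpex implementation: ``sequential_call_kernel`` and
``MockItem`` are hypothetical helpers that only mimic, in pure Python, what
``kapi.call_kernel`` does conceptually with a 2-D range.

```python
import math


class MockItem:
    """Hypothetical stand-in for kapi.Item exposing only get_id()."""

    def __init__(self, ids):
        self._ids = ids

    def get_id(self, dim):
        return self._ids[dim]


def sequential_call_kernel(kernel_fn, exec_range, *args):
    """Toy stand-in for kapi.call_kernel: invoke the kernel once per
    work item of a 2-D execution range, one work item after another."""
    for i in range(exec_range[0]):
        for j in range(exec_range[1]):
            kernel_fn(MockItem((i, j)), *args)


def pairwise_distance_kernel(item, data, distance):
    # Same logic as the kapi example: one (i, j) pair per work item.
    i = item.get_id(0)
    j = item.get_id(1)
    d = 0.0
    for k in range(len(data[0])):
        tmp = data[i][k] - data[j][k]
        d += tmp * tmp
    distance[j][i] = math.sqrt(d)


points = [[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]
n = len(points)
dist = [[0.0] * n for _ in range(n)]
sequential_call_kernel(pairwise_distance_kernel, (n, n), points, dist)
```

Running the sketch fills ``dist`` with the symmetric distance matrix (e.g.
``dist[0][1] == 5.0``), which is the same observable result a compiled,
genuinely parallel execution of the kernel produces.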

The logical abstraction exposed by kapi is referred to as the Single Program
Multiple Data (SPMD) programming model. CUDA or OpenCL programmers will
recognize the programming model exposed by kapi as similar to the one in those
languages. However, as Python has no concept of a work item, a kapi function
executes sequentially when invoked from Python. To convert it into a true
data-parallel function, the function has to first be compiled using numba-dpex.
The next example shows the changes to the original script to compile and run the
``pairwise_distance_kernel`` in parallel.

.. code-block:: python
:linenos:
:emphasize-lines: 7, 25

import numba_dpex as dpex

from numba_dpex import kernel, call_kernel
from numba_dpex import kernel_api as kapi
import math
import dpnp


@dpex.kernel
def pairwise_distance_kernel(item: kapi.Item, data, distance):
i = item.get_id(0)
j = item.get_id(1)

data_dims = data.shape[1]

d = data.dtype.type(0.0)
for k in range(data_dims):
tmp = data[i, k] - data[j, k]
d += tmp * tmp

distance[j, i] = math.sqrt(d)


data = dpnp.random.ranf((10000, 3), device="gpu")
distance = dpnp.empty(shape=(data.shape[0], data.shape[0]), device="gpu")
dist = dpnp.empty(shape=(data.shape[0], data.shape[0]), device="gpu")
exec_range = kapi.Range(data.shape[0], data.shape[0])
call_kernel(kernel(pairwise_distance_kernel), exec_range, data, distance)

To compile a KAPI function into a data-parallel kernel and run it on a device,
three things need to be done: allocate the arguments to the function on the
device where the function is to execute, compile the function by applying a
numba-dpex decorator, and `launch` or execute the compiled kernel on the device.
dpex.call_kernel(pairwise_distance_kernel, exec_range, data, dist)

Allocating arrays or scalars to be passed to a compiled KAPI function is not
done directly in numba-dpex. Instead, numba-dpex supports passing in
To compile a kapi function, the ``call_kernel`` function from kapi has to be
replaced with the one provided by ``numba_dpex``, and the ``kernel`` decorator
has to be added to the kapi function. The actual device for which the function
is compiled and on which it executes is determined by the input arguments to
``call_kernel``. Allocating the input arguments to be passed to a compiled kapi
function is not done by numba-dpex. Instead, numba-dpex supports passing in
tensors/ndarrays created using either the `dpnp`_ NumPy drop-in replacement
library or the `dpctl`_ SYCL-based Python Array API library. To trigger
compilation, the ``numba_dpex.kernel`` decorator has to be used, and finally to
launch a compiled kernel the ``numba_dpex.call_kernel`` function should be
invoked.

For a more detailed description about programming with numba-dpex, refer
to the :doc:`programming_model`, :doc:`user_guide/index` and the
:doc:`autoapi/index` sections of the documentation. To set up numba-dpex and try
it out refer to the :doc:`getting_started` section.
library or the `dpctl`_ SYCL-based Python Array API library. The objects
allocated by these libraries encode the device information for that allocation.
Numba-dpex extracts the information and uses it to compile a kernel for that
specific device and then executes the compiled kernel on it.
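
This "compute follows data" dispatch can be sketched with a hypothetical
helper. ``FakeUsmArray`` and ``infer_execution_device`` below are illustrative
only and not part of the numba-dpex API; they show the device-agreement rule
the text describes, not how numba-dpex implements it.

```python
class FakeUsmArray:
    """Illustrative stand-in for a dpnp.ndarray or usm_ndarray that
    carries the device on which its memory was allocated."""

    def __init__(self, shape, device):
        self.shape = shape
        self.device = device


def infer_execution_device(*args):
    """Sketch of compute-follows-data: every array argument must have
    been allocated on the same device; that device is where the kernel
    gets compiled and then executed."""
    devices = {arg.device for arg in args}
    if len(devices) != 1:
        raise ValueError(f"arguments live on different devices: {sorted(devices)}")
    return devices.pop()


data = FakeUsmArray((10000, 3), device="gpu")
dist = FakeUsmArray((10000, 10000), device="gpu")
target = infer_execution_device(data, dist)  # "gpu"
```

Mixing arguments allocated on different devices raises an error instead of
silently picking one, mirroring the strictness of compute-follows-data
semantics.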

For a more detailed description about programming with numba-dpex, refer to the
:doc:`programming_model`, :doc:`user_guide/index` and the :doc:`autoapi/index`
sections of the documentation. To set up numba-dpex and try it out refer to the
:doc:`getting_started` section.