Keras v3 Support #1116

Open · wants to merge 18 commits into base: main
14 changes: 11 additions & 3 deletions docs/frontend/keras.rst
@@ -1,11 +1,19 @@
================
Keras and QKeras
Keras and its quantized variants
================

Keras and the quantization library QKeras are well supported in ``hls4ml``. Currently, the Keras v2 (``tf.keras``) is the preferred version, and the future versions of ``hls4ml`` will expand support for Keras v3. The frontend is based on the parsing the serialized json representation of the model.
Keras and the quantization library QKeras are well supported in ``hls4ml``. Both Keras v2 (``tf.keras``) and the newer Keras v3 are supported. While the Keras v2 support is based on parsing the serialized JSON representation of the model, the Keras v3 support uses direct model inspection.

Currently, ``hls4ml`` can parse most Keras layers, including core layers, convolutional layers, pooling layers, recurrent layers, merging/reshaping layers and activation layers, implemented either via sequential or functional API. Notably missing are the attention and normalization layers. The equivalent QKeras API and quantizers are also supported. The ``Lambda`` layers don't save their state in the serialized format and are thus impossible to parse. In this case, the ``Lambda`` layers can be implemented as custom layers and parsed via the :ref:`Extension API`.
Currently, ``hls4ml`` can parse most Keras layers, including core layers, convolutional layers, pooling layers, recurrent layers, merging/reshaping layers and activation layers, implemented via either the Sequential or Functional API. Notably missing are the attention and normalization layers. ``Lambda`` layers don't save their state in the serialized format and are thus impossible to parse; instead, they can be implemented as custom layers and parsed via the :ref:`Extension API`.

The ``data_format='channels_first'`` parameter of Keras layers is supported, but not extensively tested. All HLS implementations in ``hls4ml`` are based on ``channels_last`` data format and need to be converted to that format before the HLS code can be emitted. We encourage users of ``channels_first`` to report their experiences to developers on GitHub.


* `QKeras <https://github.com/fastmachinelearning/qkeras>`_
The equivalent QKeras API and its quantizers are also supported by ``hls4ml``. QKeras is not compatible with Keras v3.
* `HGQ <https://github.com/calad0i/HGQ>`_
The equivalent HGQ API is also supported. HGQ is not compatible with Keras v3. See `advanced/HGQ <../advanced/hgq.html>`__ for more information.
* `HGQ2 <https://github.com/calad0i/HGQ2>`_
HGQ2 is based on Keras v3. Its support in ``hls4ml`` is currently under development.

The ``hls4ml`` development team is currently exploring options for a QKeras alternative and will provide a drop-in replacement API compatible with Keras v3.
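
For reference, converting a parsed Keras model to an HLS project uses the same high-level API for both Keras v2 and v3. The sketch below uses a small hypothetical Sequential model and a hypothetical output directory; ``config_from_keras_model`` and ``convert_from_keras_model`` are the standard ``hls4ml`` entry points.

import keras
import hls4ml

# Hypothetical toy model; any supported Sequential/Functional model is handled the same way.
model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(4, activation='softmax'),
])

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='my-hls-project',  # hypothetical path
    backend='Vitis',
)
hls_model.compile()  # builds the C simulation library; needs a C++ compiler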
20 changes: 16 additions & 4 deletions docs/intro/setup.rst
@@ -37,14 +37,26 @@ version can be installed directly from ``git``:
Dependencies
============

The ``hls4ml`` library requires python 3.10 or later, and depends on a number of Python packages and external tools for synthesis and simulation. Python dependencies are automatically managed
by ``pip`` or ``conda``.
.. note::
As of version 1.1.0, all frontend-specific conversion packages are optional. Only install the packages you need.

* `TensorFlow <https://pypi.org/project/tensorflow/>`_ (version 2.8 to 2.14) and `QKeras <https://pypi.org/project/qkeras/>`_ are required by the Keras converter. One may want to install newer versions of QKeras from GitHub. Newer versions of TensorFlow can be used, but QKeras and hl4ml do not currently support Keras v3.
The ``hls4ml`` library requires Python 3.10 or later, and depends on a number of Python packages and external tools for synthesis and simulation. Python dependencies are automatically managed by ``pip`` or ``conda``.

The following Python packages are all optional and are only required if you intend to use the corresponding converter. Only install the packages you need; a quick environment check is sketched after this list.

* `Keras <https://pypi.org/project/keras/>`_ is required by the Keras converter.
* `TensorFlow <https://pypi.org/project/tensorflow/>`_ (version 2.8 to 2.14) is required by the Keras v2 converter (Keras v2 is included in TensorFlow).
* `Keras <https://pypi.org/project/keras/>`_ 3.0 or above is required by the Keras v3 converter. Keras v3 supports multiple backends for training and inference, and the conversion is not tied to any specific backend. Note that Keras v3 may **not** coexist with Keras v2 in the same Python environment.

* `ONNX <https://pypi.org/project/onnx/>`_ (version 1.4.0 and newer) is required by the ONNX converter.

* `PyTorch <https://pytorch.org/get-started>`_ package is optional. If not installed, the PyTorch converter will not be available.
* `PyTorch <https://pytorch.org/get-started>`_ is required by the PyTorch converter.

* Quantization support
* `QKeras <https://github.com/fastmachinelearning/qkeras>`_: Based on Keras v2. See `frontend/keras <../frontend/keras.html>`_ for more details.
* `HGQ <https://github.com/calad0i/HGQ>`_: Based on Keras v2. See `advanced/HGQ <../advanced/hgq.html>`_ for more details.
* `Brevitas <https://xilinx.github.io/brevitas/>`_: Based on PyTorch. See `frontend/pytorch <../frontend/pytorch.html>`_ for more details.
* `QONNX <https://github.com/fastmachinelearning/qonnx>`_: Based on ONNX. See `frontend/onnx <../frontend/onnx.html>`_ for more details.
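
Since every frontend package above is optional, a quick way to see which converters the current environment can serve is a standalone check like the one below (not part of ``hls4ml``); note that PyTorch installs as the ``torch`` module.

import importlib.util

# Standalone sketch: probe for the optional converter dependencies listed above.
optional_packages = ['keras', 'tensorflow', 'torch', 'onnx', 'qkeras', 'qonnx']
for pkg in optional_packages:
    status = 'installed' if importlib.util.find_spec(pkg) else 'missing'
    print(f'{pkg}: {status}')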

Running C simulation from Python requires a C++11-compatible compiler. On Linux, a GCC C++ compiler ``g++`` is required; any version shipped with a recent
Linux distribution should work. On MacOS, the *clang*-based ``g++`` is enough. For the oneAPI backend, one must have oneAPI installed, along with the FPGA compiler,
14 changes: 11 additions & 3 deletions hls4ml/backends/fpga/fpga_backend.py
@@ -914,7 +914,7 @@ def generate_conv2d_line_buffer_fn(
return generated_code

@staticmethod
def permute_config_gen(name: str, shape: tuple[int, ...], perm: tuple[int, ...]):
def transpose_config_gen(name: str, shape: tuple[int, ...], perm: tuple[int, ...]):
"""
Generate new shape and perm_strides for a permute operation. Operates by mapping the output index
to the input index by:
@@ -933,12 +933,20 @@ def permute_config_gen(name: str, shape: tuple[int, ...], perm: tuple[int, ...])
perm (tuple[int, ...]): The permutation of the dimensions.

Returns:
(new_shape, perm_strides) (tuple, tuple): the output shape and permutation strides.
dict: Dictionary containing the configuration.
"""
new_shape = tuple(shape[i] for i in perm)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]
perm_strides = tuple(int(strides[i]) for i in perm)
return (new_shape, perm_strides)
return dict(
dims=len(shape),
N=math.prod(shape),
from_shape=', '.join(str(x) for x in shape),
perm=', '.join(str(x) for x in perm),
perm_strides=', '.join(str(x) for x in perm_strides),
to_shape=', '.join(str(x) for x in new_shape),
config_name=name,
)

@model_optimizer()
def write_hls(self, model):
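
To make the renamed ``transpose_config_gen`` concrete, the standalone numpy sketch below (not backend code) redoes its stride math for a hypothetical ``shape=(2, 3, 4)`` and ``perm=(2, 0, 1)``: ``new_shape`` reorders the dimensions, and ``perm_strides`` are the input's row-major strides reordered by ``perm``, so that accumulating them over an output index yields the flat input index.

import numpy as np

# Standalone check of the stride math used by transpose_config_gen.
shape = (2, 3, 4)
perm = (2, 0, 1)

new_shape = tuple(shape[i] for i in perm)             # (4, 2, 3)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]  # row-major strides: (12, 4, 1)
perm_strides = tuple(int(strides[i]) for i in perm)   # (1, 12, 4)

data = np.arange(np.prod(shape)).reshape(shape)
flat_in = data.reshape(-1)
out = np.empty(new_shape, dtype=data.dtype)
for out_idx in np.ndindex(*new_shape):
    # flat input index of this output element, via the permuted strides
    out[out_idx] = flat_in[sum(i * s for i, s in zip(out_idx, perm_strides))]

assert np.array_equal(out, np.transpose(data, perm))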
12 changes: 2 additions & 10 deletions hls4ml/backends/oneapi/passes/reshaping_templates.py
@@ -185,16 +185,8 @@ def format(self, node):
perm = tuple(node.get_attr('perm'))
name = f'config{node.index}'

new_shape, perm_strides = node.model.config.backend.permute_config_gen(name, shape, perm)
return transpose_config_template.format(
dims=len(shape),
N=int(np.prod(shape)),
from_shape=', '.join(str(x) for x in shape),
perm=', '.join(str(x) for x in perm),
perm_strides=', '.join(str(x) for x in perm_strides),
to_shape=', '.join(str(x) for x in new_shape),
config_name=name,
)
conf = node.model.config.backend.transpose_config_gen(name, shape, perm)
return transpose_config_template.format(**conf)


class TransposeFunctionTemplate(FunctionCallTemplate):
108 changes: 108 additions & 0 deletions hls4ml/backends/vivado/passes/einsum.py
@@ -0,0 +1,108 @@
from math import ceil

from hls4ml.backends.backend import get_backend
from hls4ml.backends.template import FunctionCallTemplate, LayerConfigTemplate
from hls4ml.model.layers import Einsum

from .reshaping_templates import transpose_config_template

# Einsum template

einsum_config_template = '''
struct config{index} {{
typedef config{index}_tpose_inp0 tpose_inp0_conf;
typedef config{index}_tpose_inp1 tpose_inp1_conf;
typedef config{index}_tpose_out tpose_out_conf;

typedef {accum_t.name} accum_t;

// Layer Sizes
static const unsigned n_free0 = {n_free0};
static const unsigned n_free1 = {n_free1};
static const unsigned n_contract = {n_contract};
static const unsigned n_inplace = {n_inplace};

// Resource reuse info
static const unsigned io_type = nnet::{iotype};
static const unsigned strategy = nnet::{strategy};
static const unsigned reuse_factor = {reuse_factor};
static const unsigned multiplier_limit = {multiplier_limit};
static const bool store_weights_in_bram = false; // NOT USED

template <class x_T, class y_T>
using product = nnet::product::{product_type}<x_T, y_T>;
}};
'''

einsum_function_template = 'nnet::einsum<{input0_t}, {input1_t}, {output_t}, {config}>({input0}, {input1}, {output});'

einsum_include_list = ['nnet_utils/nnet_einsum.h']


class EinsumConfigTemplate(LayerConfigTemplate):
def __init__(self):
super().__init__(Einsum)
self.template = einsum_config_template

def format(self, node: Einsum):
default_params = self._default_config_params(node)

strategy = node.attributes.attributes['strategy']
io_type = node.model.config.get_config_value('IOType')

assert io_type == 'io_parallel', 'Einsum layer only supports io_parallel for now'
assert strategy.lower() == 'latency', 'Einsum layer only supports Latency strategy for now'

# EinsumDense config
params = default_params.copy()
params['strategy'] = strategy
params['n_free0'] = node.attributes.attributes['n_free0']
params['n_free1'] = node.attributes.attributes['n_free1']
params['n_contract'] = node.attributes.attributes['n_contract']
params['n_inplace'] = node.attributes.attributes['n_inplace']
inp0_t = node.get_input_variable(node.inputs[0]).type.precision
inp1_t = node.get_input_variable(node.inputs[1]).type.precision
params['product_type'] = get_backend('vivado').product_type(inp0_t, inp1_t)

total_mults = params['n_free0'] * params['n_free1'] * params['n_contract'] * params['n_inplace']
params['multiplier_limit'] = ceil(total_mults / params['reuse_factor'])

einsum_conf = self.template.format(**params)

# inp/out transpose config
inp0_shape = node.attributes.attributes['inp0_shape']
inp1_shape = node.attributes.attributes['inp1_shape']
out_interpert_shape = node.attributes.attributes['out_interpert_shape']
inp0_tpose_idxs = node.attributes.attributes['inp0_tpose_idxs']
inp1_tpose_idxs = node.attributes.attributes['inp1_tpose_idxs']
out_tpose_idxs = node.attributes.attributes['out_tpose_idxs']
tpose_inp0_conf_name = f'config{node.index}_tpose_inp0'
tpose_inp1_conf_name = f'config{node.index}_tpose_inp1'
tpose_out_conf_name = f'config{node.index}_tpose_out'

conf = node.model.config.backend.transpose_config_gen(tpose_inp0_conf_name, inp0_shape, inp0_tpose_idxs)
inp0_tpose_conf = transpose_config_template.format(**conf)
conf = node.model.config.backend.transpose_config_gen(tpose_inp1_conf_name, inp1_shape, inp1_tpose_idxs)
inp1_tpose_conf = transpose_config_template.format(**conf)
conf = node.model.config.backend.transpose_config_gen(tpose_out_conf_name, out_interpert_shape, out_tpose_idxs)
out_tpose_conf = transpose_config_template.format(**conf)

return '\n\n'.join((inp0_tpose_conf, inp1_tpose_conf, out_tpose_conf, einsum_conf))


class EinsumFunctionTemplate(FunctionCallTemplate):
def __init__(self):
super().__init__(Einsum, include_header=einsum_include_list)
self.template = einsum_function_template

def format(self, node: Einsum):
params = {}
params['config'] = f'config{node.index}'
params['input0_t'] = node.get_input_variable(node.inputs[0]).type.name
params['input1_t'] = node.get_input_variable(node.inputs[1]).type.name
params['output_t'] = node.get_output_variable().type.name
params['input0'] = node.get_input_variable(node.inputs[0]).name
params['input1'] = node.get_input_variable(node.inputs[1]).name
params['output'] = node.get_output_variable().name
return self.template.format(**params)
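
The layer sizes in the Einsum config come from grouping the einsum dimensions into in-place (batch-like), free and contracted sets; for an equation such as ``'bij,bjk->bik'`` that grouping would plausibly be ``b -> n_inplace``, ``i -> n_free0``, ``k -> n_free1`` and ``j -> n_contract`` (the actual derivation lives in the Einsum layer parsing, outside this diff). The ``multiplier_limit`` computed in ``EinsumConfigTemplate.format`` is then just the total multiplication count divided by the reuse factor, rounded up, as in this sketch with hypothetical sizes:

from math import ceil

# Hypothetical layer sizes; mirrors the multiplier_limit computation above.
n_inplace, n_free0, n_free1, n_contract = 8, 16, 32, 64
reuse_factor = 4

total_mults = n_free0 * n_free1 * n_contract * n_inplace
multiplier_limit = ceil(total_mults / reuse_factor)
print(total_mults, multiplier_limit)  # 262144 65536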
147 changes: 147 additions & 0 deletions hls4ml/backends/vivado/passes/einsum_dense.py
@@ -0,0 +1,147 @@
from hls4ml.backends.backend import get_backend
from hls4ml.backends.template import FunctionCallTemplate, LayerConfigTemplate
from hls4ml.model.layers import EinsumDense

from .reshaping_templates import transpose_config_template

# Shared Dense template

dense_config_template = '''struct config{index}_dense : nnet::dense_config {{
static const unsigned n_in = {n_in};
static const unsigned n_out = {n_out};
static const unsigned reuse_factor = {reuse};
static const unsigned strategy = nnet::{strategy};
static const unsigned n_zeros = {nzeros};
static const unsigned multiplier_limit = DIV_ROUNDUP(n_in * n_out, reuse_factor) - n_zeros / reuse_factor;
typedef {accum_t.name} accum_t;
typedef {bias_t.name} bias_t;
typedef {weight_t.name} weight_t;
template<class data_T, class res_T, class CONFIG_T>
using kernel = nnet::{dense_function}<data_T, res_T, CONFIG_T>;
template<class x_T, class y_T>
using product = nnet::product::{product_type}<x_T, y_T>;
}};\n'''

# EinsumDense template

einsum_dense_config_template = '''
struct config{index} {{
typedef config{index}_tpose_inp tpose_inp_conf;
typedef config{index}_tpose_out tpose_out_conf;
{kernel_config};

typedef {accum_t.name} accum_t;
typedef {bias_t.name} bias_t;

// Layer Sizes
static const unsigned n_free_data = {n_free_data};
static const unsigned n_free_kernel = {n_free_kernel};
static const unsigned n_contract = {n_contract};
static const unsigned n_inplace = {n_inplace};

// Resource reuse info
static const unsigned io_type = nnet::{iotype};
static const unsigned strategy = nnet::{strategy};
static const unsigned reuse_factor = {reuse_factor};
static const unsigned parallelization_factor = {parallelization_factor}; // Only useful when n_inplace > 1
static const bool store_weights_in_bram = false; // NOT USED
}};
'''

einsum_dense_function_template = 'nnet::einsum_dense<{input_t}, {output_t}, {config}>({input}, {output}, {w}, {b});'
einsum_dense_da_function_template = 'nnet::einsum_dense<{input_t}, {output_t}, {config}>({input}, {output}, {b});'

einsum_dense_include_list = ['nnet_utils/nnet_einsum_dense.h', 'nnet_utils/nnet_dense.h']


class EinsumDenseConfigTemplate(LayerConfigTemplate):
def __init__(self):
super().__init__(EinsumDense)
self.template = einsum_dense_config_template
self.dense_template = dense_config_template

def dense_config(self, node: EinsumDense):
dense_params = self._default_config_params(node)
strategy = node.attributes['strategy']
dense_params['strategy'] = strategy
dense_params['n_in'] = node.attributes.attributes['n_contract']
dense_params['n_out'] = node.attributes.attributes['n_free_kernel']
if node.attributes.attributes['n_inplace'] == 1:
dense_params['nzeros'] = node.get_weights('weight').nzeros # type: ignore
else:
dense_params['nzeros'] = '-1; // Not making sense when kernels are switching'
dense_params['product_type'] = get_backend('vivado').product_type(
node.get_input_variable().type.precision, node.get_weights('weight').type.precision # type: ignore
)

dense_params['dense_function'] = 'DenseLatency' # Latency only for now

dense_config = self.dense_template.format(**dense_params)
return dense_config

def format(self, node: EinsumDense):
default_params = self._default_config_params(node)

strategy = node.attributes['strategy']
io_type = node.model.config.get_config_value('IOType')

assert io_type == 'io_parallel', 'EinsumDense layer only supports io_parallel for now'

# EinsumDense config
params = default_params.copy()
params['strategy'] = strategy
params['n_free_data'] = node.attributes.attributes['n_free_data']
params['n_free_kernel'] = node.attributes.attributes['n_free_kernel']
params['n_contract'] = node.attributes.attributes['n_contract']
params['n_inplace'] = node.attributes.attributes['n_inplace']
if strategy.lower() == 'latency':
params['kernel_config'] = f'typedef config{node.index}_dense dense_conf'
else:
assert strategy.lower() == 'distributed_arithmetic', 'EinsumDense layer only supports Latency and distributed_arithmetic strategies'
inp_t = node.get_input_variable().type.name
result_t = node.get_output_variable().type.name
index = node.index
conf = f'constexpr static auto da_kernel = nnet::einsum_dense{index}_da_kernel<{inp_t}, {result_t}>'
params['kernel_config'] = conf
pf = node.attributes.attributes['parallelization_factor']
if pf < 0:
pf = params['n_inplace']
params['parallelization_factor'] = pf

einsum_conf = self.template.format(**params)

# inp/out transpose config
inp_shape = node.attributes.attributes['inp_shape']
out_interpert_shape = node.attributes.attributes['out_interpert_shape']
inp_tpose_idxs = node.attributes.attributes['inp_tpose_idxs']
out_tpose_idxs = node.attributes.attributes['out_tpose_idxs']
tpose_inp_conf_name = f'config{node.index}_tpose_inp'
tpose_out_conf_name = f'config{node.index}_tpose_out'

conf = node.model.config.backend.transpose_config_gen(tpose_inp_conf_name, inp_shape, inp_tpose_idxs)
inp_tpose_conf = transpose_config_template.format(**conf)
conf = node.model.config.backend.transpose_config_gen(tpose_out_conf_name, out_interpert_shape, out_tpose_idxs)
out_tpose_conf = transpose_config_template.format(**conf)

if strategy.lower() == 'distributed_arithmetic':
return '\n\n'.join((inp_tpose_conf, out_tpose_conf, einsum_conf))

dense_config = self.dense_config(node)
return '\n\n'.join((inp_tpose_conf, out_tpose_conf, dense_config, einsum_conf))


class EinsumDenseFunctionTemplate(FunctionCallTemplate):
def __init__(self):
super().__init__(EinsumDense, include_header=einsum_dense_include_list)
self.template = einsum_dense_function_template

def format(self, node):
params = self._default_function_params(node)
params['b'] = node.get_weights('bias').name

strategy = node.attributes['strategy']
if strategy == 'distributed_arithmetic':
return einsum_dense_da_function_template.format(**params)

params['w'] = node.get_weights('weight').name
return einsum_dense_function_template.format(**params)
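
For orientation, the contraction described by the EinsumDense config can be sketched in numpy with hypothetical sizes: the input is transposed into ``(n_inplace, n_free_data, n_contract)`` and each in-place slice is multiplied by its own ``(n_contract, n_free_kernel)`` kernel. This is only an illustration of the assumed data layout; the real implementation is the HLS kernel in ``nnet_einsum_dense.h``.

import numpy as np

# Illustration only (hypothetical sizes): one dense kernel per in-place slice.
n_inplace, n_free_data, n_contract, n_free_kernel = 4, 8, 16, 32

data = np.random.rand(n_inplace, n_free_data, n_contract)      # transposed input
kernel = np.random.rand(n_inplace, n_contract, n_free_kernel)  # per-slice kernels

# Contract over n_contract independently for each in-place slice.
result = np.einsum('pfc,pck->pfk', data, kernel)
print(result.shape)  # (4, 8, 32)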