Releases: apple/coremltools
coremltools 4.0b2
What's New
- Improved documentation available at http://coremltools.readme.io.
- New converter path to directly convert PyTorch models without going through ONNX.
- Enhanced TensorFlow 2 conversion support, which now includes dynamic control flow and LSTM layers, as well as several popular models and architectures, including Transformers such as GPT and BERT variants.
- New unified conversion API `ct.convert()` for converting PyTorch and TensorFlow (including `tf.keras`) models.
- New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations.
- New utilities to configure inputs while converting from PyTorch and TensorFlow, using `ct.convert()` with `ct.ImageType()`, `ct.ClassifierConfig()`, etc. See details: https://coremltools.readme.io/docs/neural-network-conversion.
- The onnx-coreml converter has moved under coremltools and can be accessed as `ct.converters.onnx.convert()`.
Deprecations
- Deprecated the following:
  - The `NeuralNetworkShaper` class.
  - The `get_allowed_shape_ranges()` and `can_allow_multiple_input_shapes()` methods.
  - The `visualize_spec()` method of the `MLModel` class.
  - `quantize_spec_weights()`; use the `quantize_weights()` method instead.
  - `get_custom_layer_names()`, `replace_custom_layer_name()`, and `has_custom_layer()`; these have been moved to internal methods.
- Added deprecation warnings for `convert_neural_network_weights_to_fp16()` and `convert_neural_network_spec_weights_to_fp16()`; they will be removed in the next major release. Use the `quantize_weights()` method instead. See https://coremltools.readme.io/docs/quantization for details.
Known Issues
- The latest version of PyTorch tested to work with the converter is torch 1.5.0.
- TensorFlow 2 model conversion is supported only for models with a single concrete function.
- Conversion of TensorFlow and PyTorch models with quantized weights is currently not supported.
- `coremltools.utils.rename_feature` does not work correctly when renaming the output feature of a neural network classifier model.
- The `leaky_relu` layer has not yet been added to the PyTorch converter, although it is supported in MIL and the TensorFlow converters.
coremltools 4.0b1
What's New
- New documentation available at http://coremltools.readme.io.
- New converter path to directly convert PyTorch models without going through ONNX.
- Enhanced TensorFlow 2 conversion support, which now includes dynamic control flow and LSTM layers, as well as several popular models and architectures, including Transformers such as GPT and BERT variants.
- New unified conversion API `ct.convert()` for converting PyTorch and TensorFlow (including `tf.keras`) models.
- New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations.
- New utilities to configure inputs while converting from PyTorch and TensorFlow, using `ct.convert()` with `ct.ImageType()`, `ct.ClassifierConfig()`, etc. See details: https://coremltools.readme.io/docs/neural-network-conversion.
- The onnx-coreml converter has moved under coremltools and can be accessed as `ct.converters.onnx.convert()`.
Deprecations
- Deprecated the following:
  - The `NeuralNetworkShaper` class.
  - The `get_allowed_shape_ranges()` and `can_allow_multiple_input_shapes()` methods.
  - The `visualize_spec()` method of the `MLModel` class.
  - `quantize_spec_weights()`; use the `quantize_weights()` method instead.
  - `get_custom_layer_names()`, `replace_custom_layer_name()`, and `has_custom_layer()`; these have been moved to internal methods.
- Added deprecation warnings for `convert_neural_network_weights_to_fp16()` and `convert_neural_network_spec_weights_to_fp16()`; they will be removed in the next major release. Use the `quantize_weights()` method instead. See https://coremltools.readme.io/docs/quantization for details.
Known Issues
- TensorFlow 2 model conversion is supported only for models with a single concrete function.
- Conversion of TensorFlow and PyTorch models with quantized weights is currently not supported.
- `coremltools.utils.rename_feature` does not work correctly when renaming the output feature of a neural network classifier model.
- The `leaky_relu` layer has not yet been added to the PyTorch converter, although it is supported in MIL and the TensorFlow converters.
coremltools 3.4
- Added support for the `tf.einsum` op.
- Bug fixes in image pre-processing error handling, the quantization function for the `embeddingND` layer, and conversion of the `tf.stack` op.
- Updated the transpose removal mlmodel pass.
- Fixed an import statement to support scikit-learn >= 0.21 (@sapieneptus).
- Added deprecation warnings for the `NeuralNetworkShaper` class and the `visualize_spec` and `quantize_spec_weights` methods.
- Made internal a few functions that were unintentionally exposed in the public API, by prepending an underscore to their names. The original methods still work but emit deprecation warnings.
coremltools 3.3
Release Notes
Bug Fixes
- Add support for converting Softplus layer in coremltools.
- Fix in gelu and layer norm fusion pass.
- Simplified build & CI setup.
- Fixed critical numpy
coremltools 3.2
This release includes new op conversion supports, bug fixes, and improved graph optimization passes.
Install or upgrade to the latest coremltools with `pip install --upgrade coremltools`.
More details can be found in neural-network-guide.md.
coremltools 3.1
Changes:
- Add support for TensorFlow 2.x file formats (.h5, SavedModel, and concrete functions).
- Add support for several new ops, such as `AddV2` and `FusedBatchNormV3`.
- Bug fixes in the TensorFlow converter's op fusion graph pass.
Known Issues:
- `tf.keras` model conversion is supported only with TensorFlow 2.
- Currently, there are issues when invoking TensorFlow 2.x model conversion in Python 2.x.
- Currently, there are issues when converting `tf.keras` graphs that contain recurrent layers.
coremltools 3.0
We are very excited about the release of coremltools 3, and for these Core ML release notes to become a fixture as the issues resolved and features added grow. This document gives an overview of the features and issues resolved in the most recent release. The issues can also be found on the project boards of each respective repository (for example, coremltools); the labels indicate the type of issue.
In addition to the features and improvements introduced in this release, there have been some changes within the repository. There are now issue templates to help specify whether an issue is a bug, feature request, or question, and to help us triage quickly. There is also a new document, contributing.md, which contains guidelines for community engagement.
coremltools 3.0
We are happy to announce the official release of coremltools 3 which aligns with Core ML 3. It includes a new version of the .mlmodel specification (version 4) which brings with it support for:
- Updatable models - Neural Network and KNN
- More dynamic and expressive neural networks - approx. 100 more layers added compared to Core ML 2
- Dynamic control flows
- Nearest neighbor classifiers
- Recommenders
- Linked models
- Sound analysis preprocessing
- Runtime adjustable parameters for on-device update
This version of coremltools also includes a new converter path for TensorFlow models. The tfcoreml converter has been updated to include this new path, which converts to specification version 4 and can handle control flow and cyclic TensorFlow graphs.
Control flow example can be found here.
Updatable Models
Core ML 3 supports an on-device update of models. Version 4 of the .mlmodel
specification can encapsulate all the necessary parameters for a model update. Nearest neighbor, neural networks and pipeline models can all be made updatable.
Updatable neural networks support the training of convolution and fully connected layer weights (with back-propagation through many other layer types). Categorical cross-entropy and mean squared error losses are available, along with stochastic gradient descent and Adam optimizers.
See examples of how to convert and create updatable models.
See the MLUpdateTask API reference for how to update a model from within an app.
Neural Networks
- Support for new layers in Core ML 3 added to the `NeuralNetworkBuilder`
- Exact rank mapping of multi dimensional array inputs
- Control Flow related layers (branch, loop, range, etc.)
- Element-wise unary layers (ceil, floor, sin, cos, gelu, etc.)
- Element-wise binary layers with broadcasting (addBroadcastable, multiplyBroadcastable, etc)
- Tensor manipulation layers (gather, scatter, tile, reverse, etc.)
- Shape manipulation layers (squeeze, expandDims, getShape, etc.)
- Tensor creation layers (fillDynamic, randomNormal, etc.)
- Reduction layers (reduceMean, reduceMax, etc.)
- Masking / Selection Layers (whereNonZero, lowerTriangular, etc.)
- Normalization layers (layerNormalization)
- For a full list of supported layers in Core ML 3, check out Core ML specification documentation or NeuralNetwork.proto.
- Support conversion of recurrent networks from TensorFlow
coremltools 3.0 beta 6 release
Merge pull request #444 from aseemw/dev/coremltools_3_0_release 3.0b6 release
coremltools 3.0b beta release
This is the first beta release of coremltools 3 which aligns with the preview of Core ML 3. It includes a new version of the .mlmodel specification which brings with it support for:
- Updatable models
- More dynamic and expressive neural networks
- Nearest neighbor classifiers
- Recommenders
- Linked models
- Sound analysis preprocessing
- Runtime adjustable parameters
This release also enhances and introduces the following converters and utilities:
- Keras converter
- Adds support for converting training details using respect_trainable flag
- Scikit converter
- Nearest neighbor classifier conversion
- NeuralNetworkBuilder
- Support for all new layers introduced in CoreML 3
- Support for adding update details such as marking layers updatable, specifying a loss function and providing an optimizer
- KNearestNeighborsClassifierBuilder (new)
- Newly added to support simple programmatic construction of nearest neighbor classifiers
- TensorFlow (new)
- A new TensorFlow converter with improved graph transformation capabilities and support for version 4 of the .mlmodel specification
- This is used by the new tfcoreml beta converter package as well. Try it out with `pip install tfcoreml==0.4.0b1`
This release also adds Python 3.7 support for coremltools
Updatable Models
Core ML 3 supports on-device update of models. Version 4 of the .mlmodel specification can encapsulate all the necessary parameters for a model update. Nearest neighbor, neural networks and pipeline models can all be made updatable.
Updatable neural networks support training of convolution and fully connected layer weights (with back-propagation through many other layer types). Categorical cross-entropy and mean squared error losses are available, along with stochastic gradient descent and Adam optimizers.
See examples of how to convert and create updatable models
See the MLUpdateTask API reference for how to update a model from within an app.
Neural Networks
- Support for new layers in Core ML 3 added to the NeuralNetworkBuilder
- Exact rank mapping of multi dimensional array inputs
- Control Flow related layers (branch, loop, range, etc.)
- Element-wise unary layers (ceil, floor, sin, cos, gelu, etc.)
- Element-wise binary layers with broadcasting (addBroadcastable, multiplyBroadcastable, etc)
- Tensor manipulation layers (gather, scatter, tile, reverse, etc.)
- Shape manipulation layers (squeeze, expandDims, getShape, etc.)
- Tensor creation layers (fillDynamic, randomNormal, etc.)
- Reduction layers (reduceMean, reduceMax, etc.)
- Masking / Selection Layers (whereNonZero, lowerTriangular, etc.)
- Normalization layers (layerNormalization)
- For a full list of supported layers in Core ML 3, check out the Core ML specification documentation (NeuralNetwork.proto).
- Support conversion of recurrent networks from TensorFlow
Known Issues
coremltools 3.0b1
- Converting a Keras model that uses mean squared error for the loss function will not create a valid model. A workaround is to set respect_trainable to False (the default) when converting and then manually add the loss function.
Core ML 3 Developer Beta 1
- The default number of epochs encoded in the model is not respected and may run for 0 epochs, immediately returning without training.
- Workaround: Explicitly supply epochs via MLModelConfiguration updateParameters using MLParameterKey.epochs, even if you want to use the default value encoded in the model.
- Loss returned by the Adam optimizer is not correct.
- Some updatable pipeline models containing a static neural network sub-model can intermittently fail to update with the error: “Attempting to hash an MLFeatureValue that is not an image or multi array”. This error will surface in task.error as part of MLUpdateContext passed to the provided completion handler.
- Workaround: Retry model update by creating a new update task with the same training data.
- Some of the new neural network layers may result in an error when the model is run on a non-CPU compute device.
- Workaround: restrict computation to CPU with MLModelConfiguration computeUnits
- Enumerated shape flexibility, when used with Neural network inputs with 'exact_rank' mapping (i.e. rank 5 disabled), may result in an error during prediction.
- Workaround: use range shape flexibility
coremltools 2.1.0
Merge pull request #322 from aseemw/dev/release_2.1 Update version to 2.1