Releases: keras-team/keras
Keras Release 2.7.0 RC1
Cherry-picked the documentation update for functional model slicing.
Keras Release 2.7.0 RC0
Remove temporary monitoring now that the underlying perf issue is resolved.
Keras Release 2.6.0
Keras 2.6.0 is the first release of the TensorFlow implementation of Keras in the present repo.
The code under tensorflow/python/keras is considered legacy and will be removed in future releases (TF 2.7 or later). If you import tensorflow.python.keras, please update your code to the public tf.keras API instead.
The API endpoints for tf.keras stay unchanged, but they are now backed by the keras PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
For detailed release notes about tf.keras behavior changes, please refer to the TensorFlow release notes.
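A minimal sketch of the import migration (the layer used here is just illustrative):

```python
# Legacy (to be removed in TF 2.7 or later):
# from tensorflow.python.keras import layers

# Preferred public API:
from tensorflow import keras
from tensorflow.keras import layers  # e.g. layers.Dense, layers.Conv2D

model = keras.Sequential([layers.Dense(10, activation="softmax")])
```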
Keras Release 2.6.0 RC3
Keras 2.6.0 RC3 fixes a security issue with loading Keras models via YAML, which could allow arbitrary code execution.
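If your workflow serialized model architectures to YAML, a minimal sketch of switching to JSON serialization instead (the model here is arbitrary, and this is only one possible migration):

```python
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(4, input_shape=(8,))])

# JSON round-trip instead of to_yaml()/model_from_yaml()
config = model.to_json()
restored = keras.models.model_from_json(config)
```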
Keras Release 2.6.0 RC2
Keras 2.6.0 RC2 is a minor bug-fix release.
- Fix TextVectorization layer with output_sequence_length on unknown input shapes (see the sketch after this list).
- Output int64 by default from Discretization layer.
- Fix serialization of Hashing layer.
- Add a more explicit error message for optimizer instance type checking.
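A minimal sketch of the TextVectorization usage touched by the first fix (the vocabulary and sequence length are arbitrary; depending on your TF version the layer may live under tf.keras.layers.experimental.preprocessing):

```python
import tensorflow as tf

# Pad or truncate every sample to a fixed length, even when the input
# shape is not fully known ahead of time.
vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=6)
vectorizer.adapt(tf.constant(["the quick brown fox", "jumps over the lazy dog"]))

print(vectorizer(tf.constant(["the fox jumps"])))  # shape (1, 6)
```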
Keras Release 2.6.0 RC1
Keras 2.6.0 RC1 is a minor bug-fix release.
- Pin the Protobuf version to 3.9.2, which is the same version used by TensorFlow.
Keras Release 2.6.0 RC0
Keras 2.6.0 is the first release of the TensorFlow implementation of Keras in the present repo.
The code under tensorflow/python/keras is considered legacy and will be removed in future releases (TF 2.7 or later). If you import tensorflow.python.keras, please update your code to the public tf.keras API instead.
The API endpoints for tf.keras stay unchanged, but they are now backed by the keras PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
For detailed release notes about tf.keras behavior changes, please refer to the TensorFlow release notes.
Keras 2.4.0
As previously announced, we have discontinued multi-backend Keras to refocus exclusively on the TensorFlow implementation of Keras.
In the future, we will develop the TensorFlow implementation of Keras in the present repo, at keras-team/keras. For the time being, it is being developed in tensorflow/tensorflow and distributed as tensorflow.keras. At that point, the keras package on PyPI will be the same as tf.keras.
This release (2.4.0) simply redirects all APIs in the standalone keras package to point to tf.keras. This helps address user confusion regarding differences and incompatibilities between tf.keras and the standalone keras package. There is now only one Keras: tf.keras.
- Note that this release may be breaking for some workflows when going from Keras 2.3.1 to 2.4.0. Test before upgrading.
- Note that we still recommend that you import Keras as from tensorflow import keras, rather than import keras, for the time being (see the sketch after this list).
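A minimal sketch of what the redirect means in practice, assuming both the standalone keras 2.4.0 package and TensorFlow 2.x are installed:

```python
import keras                      # the standalone PyPI package (2.4.0)
from tensorflow import keras as tf_keras

# With the 2.4.0 redirect, the standalone package simply re-exports tf.keras,
# so a class obtained either way is expected to be the same implementation.
print(keras.__version__)
print(keras.Model, tf_keras.Model)
```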
Keras 2.3.1
Keras 2.3.1 is a minor bug-fix release. In particular, it fixes an issue with using Keras models across multiple threads.
Changes
- Bug fixes
- Documentation fixes
- No API changes
- No breaking changes
Keras 2.3.0
Keras 2.3.0 is the first release of multi-backend Keras that supports TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14 and 1.13, as well as Theano and CNTK.
This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. However, note that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.
This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.
Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported.
API changes
- Add size(x) to the backend API.
- add_metric method added to Layer / Model (used in a similar way as add_loss, but for metrics), as well as the metrics property (see the first sketch after this list).
- Variables set as attributes of a Layer are now tracked in layer.weights (including layer.trainable_weights or layer.non_trainable_weights as appropriate).
- Layers set as attributes of a Layer are now tracked (so the weights/metrics/losses/etc. of a sublayer are tracked by parent layers). This behavior already existed for Model specifically and is now extended to all Layer subclasses.
- Introduce class-based losses (inheriting from the Loss base class). This enables losses to be parameterized via constructor arguments (see the second sketch after this list). Loss classes added: MeanSquaredError, MeanAbsoluteError, MeanAbsolutePercentageError, MeanSquaredLogarithmicError, BinaryCrossentropy, CategoricalCrossentropy, SparseCategoricalCrossentropy, Hinge, SquaredHinge, CategoricalHinge, Poisson, LogCosh, KLDivergence, Huber.
- Introduce class-based metrics (inheriting from the Metric base class). This enables metrics to be stateful (e.g. required for supporting AUC) and to be parameterized via constructor arguments. Metric classes added: Accuracy, MeanSquaredError, Hinge, CategoricalHinge, SquaredHinge, FalsePositives, TruePositives, FalseNegatives, TrueNegatives, BinaryAccuracy, CategoricalAccuracy, TopKCategoricalAccuracy, LogCoshError, Poisson, KLDivergence, CosineSimilarity, MeanAbsoluteError, MeanAbsolutePercentageError, MeanSquaredLogarithmicError, RootMeanSquaredError, BinaryCrossentropy, CategoricalCrossentropy, Precision, Recall, AUC, SparseCategoricalAccuracy, SparseTopKCategoricalAccuracy, SparseCategoricalCrossentropy.
- Add reset_metrics argument to train_on_batch and test_on_batch. Set this to False to maintain metric state across different batches when writing lower-level training/evaluation loops; if True (the default), the metric values reported as output of the method call are for the current batch only (see the third sketch after this list).
- Add model.reset_metrics() method to Model. Use this at the start of an epoch to clear metric state when writing lower-level training/evaluation loops.
- Rename lr to learning_rate for all optimizers.
- Deprecate argument decay for all optimizers. For learning rate decay, use LearningRateSchedule objects in tf.keras.
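First, a minimal sketch of add_metric and automatic weight tracking in a subclassed layer. The layer and metric names are illustrative, and the imports use tf.keras (which this release recommends); the aggregation argument may be optional in newer versions:

```python
import tensorflow as tf
from tensorflow import keras

class DenseWithMeanMetric(keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        # A sublayer set as an attribute: its weights are tracked by the parent.
        self.dense = keras.layers.Dense(units)

    def call(self, inputs):
        outputs = self.dense(inputs)
        # Logged and aggregated like add_loss, but as a metric.
        self.add_metric(tf.reduce_mean(outputs), name="activation_mean", aggregation="mean")
        return outputs

layer = DenseWithMeanMetric(8)
_ = layer(tf.zeros((2, 4)))
print([w.name for w in layer.weights])  # the Dense kernel and bias are tracked
print(layer.metrics)
```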
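Second, a sketch of class-based losses and metrics parameterized via constructor arguments, together with the learning_rate spelling (the model and values are arbitrary):

```python
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),   # formerly lr=1e-3
    loss=keras.losses.Huber(delta=1.5),                    # parameterized via constructor
    metrics=[keras.metrics.MeanAbsoluteError(),
             keras.metrics.RootMeanSquaredError()],        # stateful metric objects
)
```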
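Third, a sketch of a lower-level loop using reset_metrics and model.reset_metrics() (the model, data shapes, and batch size are arbitrary):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

for epoch in range(2):
    model.reset_metrics()  # clear accumulated metric state at the start of the epoch
    for start in range(0, len(x), 8):
        # reset_metrics=False accumulates metric values across batches.
        loss, mae = model.train_on_batch(x[start:start + 8], y[start:start + 8],
                                         reset_metrics=False)
    print(f"epoch {epoch}: running MAE = {mae:.4f}")
```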
Breaking changes
- TensorBoard callback:
  - batch_size argument is deprecated (ignored) when used with TF 2.0
  - write_grads is deprecated (ignored) when used with TF 2.0
  - embeddings_freq, embeddings_layer_names, embeddings_metadata, embeddings_data are deprecated (ignored) when used with TF 2.0
- Change loss aggregation mechanism to sum over batch size. This may change reported loss values if you were using sample weighting or class weighting. You can achieve the old behavior by making sure your sample weights sum to 1 for each batch.
- Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass metrics=['acc'], your metric will be reported under the string "acc", not "accuracy"; conversely, metrics=['accuracy'] will be reported under the string "accuracy"). See the sketch after this list.
- Change default recurrent activation to sigmoid (from hard_sigmoid) in all RNN layers.
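A minimal sketch of the metric-naming behavior (the model and data are arbitrary):

```python
import numpy as np
from tensorflow import keras

x = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))

model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))])

# Metrics are reported under exactly the string you pass.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["acc"])
history = model.fit(x, y, epochs=1, verbose=0)
print(history.history.keys())  # includes 'acc', not 'accuracy'
```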