Getting up to date with Keras (#1)
* First pass.

* 2nd pass.

* 3rd pass

* 4th pass

* Revert backend rnn.

* 5th pass

* Quick fixes

* Simplification

* Update Travis config to test on TF 2

* Fix some syntax error

* Fix travis issues again

* Fixes

* Unit test fixes

* Small fix.

* Tf 2: fix optimizer weights naming collision issue (keras-team#12466)

* rem get_session() call from multi gpu utils

* tiny fix

* optimizer fixes

* fix tempfile

* rem os.remove

* rem tmpdir

* ws fix

* Fix docstring of util function.

* Fix docstring style

* TF-2: Remove get_session() call in multi_gpu_utils.py (keras-team#12465)

* rem get_session() call from multi gpu utils

* tiny fix

* Fix in_top_k

* Simplify bidirectional test

* Move TensorBoard callback to v2 -- still need to fix some tests.

* Small fixes.

* Fix v1 tests.

* Fix PEP8.

* Fix initializers and update ops.

* Disable test for TF print

* Fix gradient test failure.

* Fix test_model_with_external_loss

* Small backend simplification

* Fix convrnn tests.

* Fix identity init

* Remove irrelevant tests.

* Fix conv2d_transpose

* Fix PEP8

* Disable multiprocessing tests.

* Fix tests.

* Fix conv_rnn bug with cntk/theano

* Fix TF1 test

* Adding Loss, LossFunctionWrapper, MeanSquaredError classes. (keras-team#12859)

* Adding Loss, LossFunctionWrapper, MeanSquaredError classes.

* Fixing formatting issues.

* Adding arguments list to MeanSquaredError.

* Fix abstract method

* Adding MeanAbsoluteError, MeanAbsolutePercentageError, MeanSquaredLogarithmicError and BinaryCrossentropy loss classes. (keras-team#12894)

* Adding MeanAbsoluteError loss

* Adding Binary Crossentropy loss.

* Adding MeanAbsolutePercentageError loss.

* Adding MeanSquaredLogarithmicError loss.

* Adding #Arguments for some classes.

* remove print statements

* Update image preprocessing.

* Update applications.

* Remove ResNeXt networks (bug) and add tests.

* Adding CategoricalCrossentropy and SparseCategoricalCrossentropy Loss classes (keras-team#12903)

* Add categorical crossentropy loss.

* Add sparse categorical crossentropy loss.

* Adding support for `Loss` instances in model compile. (keras-team#12915)
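
A minimal usage sketch (the model definition is illustrative; it assumes the `MeanSquaredError` loss class added earlier in this series):

```python
from keras import layers, losses, models

model = models.Sequential([layers.Dense(1, input_shape=(4,))])
# compile() now also accepts a Loss instance, not just a string or function identifier.
model.compile(optimizer='sgd', loss=losses.MeanSquaredError())
```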

* Updating training utils.

* training changes for supporting Loss instances.

* Add weight broadcasting.

* Saving test

* Add correctness test.

* Fix a number of tests.

* Remove outdated test

* Fix PEP8

* Change defaults of GRU and LSTM layers.

* Rename lr to learning_rate in optimizers
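
For illustration (mirroring the examples/cifar10_cnn.py change further down in this diff):

```python
from keras import optimizers

opt = optimizers.RMSprop(learning_rate=0.0001)  # previously: optimizers.rmsprop(lr=0.0001)
```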

* Add missing loss classes

* Fix tests.

* Removing @symbolic from a few TF backend ops.

* Creating helper function for broadcast weights.

* Adding Metric class.

* Adding Metric class.

* Adding sample weight unit test.

* Adding Mean metric class and unit tests.

* Addressed comments.

* Adding control dependency op.

* Adding no-op control dependencies to Theano and CNTK backends.

* Fixing tests for TF1.

* Fix doc string.

* Fix backend issues.

* Adding MeanMetricWrapper class.

* Adding MeanSquaredError metric.

* Framework changes for metrics part 1

* Metrics framework changes part 2

* Adding metrics correctness test.

* Fix integration tests

* Remove references to ResNeXt from docs.

* Prepare 2.2.5 release.

* Fix sklearn wrapper unit test in Python 3?

* Fix sklearn regressor test?

* Some Theano fixes.

* Make metrics compatible with Theano

* Theano fixes

* Fix

* Fixes

* Recompute steps_per_epoch after each epoch in training_generator (keras-team#13037)

* pep8 config in setup.cfg (keras-team#13196)

* Theano fixes

* Update optimizers for TF2. (keras-team#13246)

1. Remove epsilon and decay for all optimizers.
2. Add iterations to the weight list of RMSprop, Adagrad, Adadelta, and Nadam.
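
A minimal sketch of what this means in practice (treat the exact weight ordering as an assumption based on the note above):

```python
from keras import optimizers

opt = optimizers.Adagrad(learning_rate=0.01)
# Once the optimizer has created its variables (e.g. after model.fit), its weight
# list is expected to start with the iterations counter, and epsilon/decay are no
# longer tracked as optimizer attributes.
```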

* Fix results tracking for metrics in multi-output case

* sync changes to _TfDeviceCaptureOp (keras-team#13255)

* sync changes to _TfDeviceCaptureOp

* adjust for wider compatibility

* Documentation for `array_to_img`, `img_to_array` and `save_img` under `preprocessing.image` keras-team#12711 (keras-team#13252)

* Add docstring of `save_img()` in `keras/preprocessing/image.py`

Signed-off-by: Karel Ha <mathemage@gmail.com>

* Add docstring of `img_to_array()` in `keras/preprocessing/image.py`

Signed-off-by: Karel Ha <mathemage@gmail.com>

* Add docstring of `array_to_img()` in `keras/preprocessing/image.py`

Signed-off-by: Karel Ha <mathemage@gmail.com>
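
A small usage sketch of the three documented helpers (the file name is hypothetical; Pillow must be installed):

```python
import numpy as np
from keras.preprocessing.image import array_to_img, img_to_array, save_img

x = np.random.rand(32, 32, 3)   # float array in 'channels_last' layout
img = array_to_img(x)           # NumPy array -> PIL Image
arr = img_to_array(img)         # PIL Image -> NumPy array, shape (32, 32, 3)
save_img('example.png', arr)    # write the array to disk as an image file
```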

* Add metric API changes (keras-team#13256)

* Add metric API changes part 1

* Add metric API changes part 2

* Update metric __call__ calls to update state and result calls

* Changing metric call for output loss metric to update_state and result calls.
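
A minimal usage sketch (assuming stateful metric instances are accepted by `compile()`, which is what this series enables; the model itself is illustrative):

```python
from keras import layers, metrics, models

model = models.Sequential([layers.Dense(1, activation='sigmoid', input_shape=(4,))])
# A Metric object keeps running state via update_state()/result() across batches,
# instead of being averaged per batch like a plain metric function.
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=[metrics.BinaryAccuracy()])
```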

* Fix metrics support in Theano

* Fix PEP8

* Introduces fixes for tensor equality in TF 2.0

* Minor fixes

* Improve exception testing in test_training

* Improve test syntax

* Remove outdated tests

* Only run label smoothing logic when necessary

* Fix PEP8

* Update CI to run on TF 1.14 for TF1

* Disable a backend test for CNTK

* Fix docs test

* CNTK fixes

* Disable test that hangs Travis

* Disabled flaky cntk test

* Reduce test flakiness

* Disable CNTK SGD test

* Disable test causing Travis to hang

* Disable flaky CNTK test

* Disable test that hangs Travis

* Disable a couple more multiprocessing tests

* Add ability for Layer to track sublayer attributes

* Add support for layer attribute tracking (loss, updates, metrics) in layer subclasses, and standalone weight tracking
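
A minimal sketch of what attribute tracking enables (assuming the subclassing behavior described above; the layer itself is illustrative):

```python
from keras.layers import Dense, Input, Layer
from keras.models import Model

class Block(Layer):
    def __init__(self, **kwargs):
        super(Block, self).__init__(**kwargs)
        self.dense = Dense(4)  # sublayer assigned as a plain attribute

    def call(self, inputs):
        return self.dense(inputs)

    def compute_output_shape(self, input_shape):
        return input_shape[:-1] + (4,)

inputs = Input(shape=(8,))
model = Model(inputs, Block()(inputs))
# The sublayer's kernel and bias are picked up by the parent layer automatically.
print(len(model.trainable_weights))  # expected: 2
```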

* Fix theano backend

* Fix PEP8

* Adding accuracy metric classes. (keras-team#13265)

* Adding Accuracy metric class.

* Adding BinaryAccuracy metric class.

* Adding CategoricalAccuracy metric class.

* Adding SparseCategoricalAccuracy metric class.

* Adding TopK Accuracy metric classes.

* Fixed review comments

* Add metrics Hinge, SquaredHinge, CategoricalHinge

* Add label conversion to hinge losses

* Adding LogCosh, Poisson, KLDivergence, crossentropy metrics. (keras-team#13271)

* Adding LogCosh, Poisson, KLDivergence metrics.

* Adding crossentropy metrics.

* Add metrics CosineSimilarity, MeanAbsoluteError, MeanAbsolutePercentageError, MeanSquaredError, MeanSquaredLogarithmicError, RootMeanSquaredError.

* Reverse sign of cosine_similarity metric

* Adding TruePositives, TrueNegatives, FalsePositives, FalseNegatives metric classes. (keras-team#13280)

* Adding FalsePositive metric class.

* Adding TruePositives, TrueNegatives, FalseNegatives metric classes.

* Adding AUC, SensitivityAtSpecificity metrics. (keras-team#13289)

* Adding AUC, SensitivityAtSpecificity metrics.

* Fixed failing test.

* Add SpecificityAtSensitivity metric. (keras-team#13294)

* Add SpecificityAtSensitivity metric.

* Fixing some lint issues.

* Fixing some lint issues.

* Adding Precision, Recall, Mean IoU part 1.

* Adding Precision, Recall, Mean IoU part 1.

* Add MeanIoU metric.

* Add MeanIoU metric.

* Fix metrics reporting / accumulation with fit_generator and evaluate_generator.

* Remove deprecated example script

* Remove deprecated example, fix conv filter example

* Update examples

* Fix some bugs

* Update examples

* Addressed PR comments.

* Fix Theano tests.

* Disable top_k metrics for TF1

* Reenable Precision and Recall with TF1

* Fix Py2 tests

* Skip metric tests for CNTK

* Fix py2 test

* Fix py2 test

* Update coverage threshold

* Add back CPU to multi_gpu_utils available devices

* Using K.is_tensor and K.is_variable (keras-team#13307)

is_tensor_or_variable(x) is undefined and replaced by
K.is_tensor(x) or K.is_variable(x).

Fixes keras-team#13306.
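
A minimal sketch of the replacement (the helper name below simply mirrors the undefined one):

```python
from keras import backend as K

def is_tensor_or_variable(x):
    # explicit backend checks instead of the undefined helper
    return K.is_tensor(x) or K.is_variable(x)
```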

* Update lstm_seq2seq.py (from 22% to 87% acc) (keras-team#13269)

* keras-team#13266 Update lstm_seq2seq.py (from 22% to 87% acc)

I added code to apply one-hot encoding to the end of the sentences in encoder_input_data, decoder_input_data, and decoder_target_data, and added an accuracy metric for model training. The original code reaches 22% accuracy; the proposed code reaches 87% validation accuracy.
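
A self-contained sketch of the idea with toy data (names loosely follow lstm_seq2seq.py; the exact padding scheme used in the script is an assumption):

```python
import numpy as np

texts = ['hi', 'go']
token_index = {' ': 0, 'g': 1, 'h': 2, 'i': 3, 'o': 4}
max_len = 4

# One-hot encode each character, then fill the remaining timesteps with the space
# token so the end of each sentence is encoded instead of being left all-zero.
encoder_input_data = np.zeros((len(texts), max_len, len(token_index)), dtype='float32')
for i, text in enumerate(texts):
    for t, ch in enumerate(text):
        encoder_input_data[i, t, token_index[ch]] = 1.
    encoder_input_data[i, t + 1:, token_index[' ']] = 1.
```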

* Update lstm_seq2seq.py

I updated the code according to PEP 8.

* Update lstm_seq2seq.py

Remove whitespace according to PEP 8.

* Update babi_rnn.py (keras-team#13263)

Change line 82 so the regexp tokenizes as described in the comment above it; otherwise it raises AttributeError: 'NoneType' object has no attribute 'strip'.
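
For reference, the fixed tokenizer as a standalone snippet (it matches the examples/babi_rnn.py diff further down):

```python
import re

def tokenize(sent):
    # With r'(\W+)?' the group is optional, so re.split can return None entries
    # and None.strip() raises the AttributeError above; a required group avoids that.
    return [x.strip() for x in re.split(r'(\W+)', sent) if x.strip()]

print(tokenize('Bob dropped the apple. Where is the apple?'))
# ['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
```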

* typo fixed (keras-team#13230)

* Correct the DepthwiseConv2d docstrings - output shape (keras-team#13225)

* fix in "Layer.compute_output_shape" description (keras-team#13210)

* Added batch_normalization in the numpy backend. (keras-team#11556)

* Finished the functions.

* Started doing the test function.

* Added the batch_normalization operation to the numpy backend.

* forgot an argument

* Complete the docs by adding data to multi-input/output example (keras-team#12775)

* Complete the docs by adding data to multi-input/output example

* Add seed for reproducibility

* Fix Travis SSL issue.

* keras-team#13239 Improved documentation for EarlyStopping/ReduceLROnPlateau, take validation_freq into account. (keras-team#13240)

* Added messages about the future of multi-backend Keras. (keras-team#13315)

* Added comments about the future of Keras.

* Changed msg.

* Fix sequence timeout deadlock (keras-team#13322)

* Add a test for deadlock after sequence worker timeout

* Call task_done even if the task timed out

* catch dead worker warning

* fix line length

* Increase deadlock detection timeout to prevent flakiness

* Fix deprecation warnings related to TF v1

* Update README

* Add a link to the metrics document (keras-team#13334)

A link to the metrics document (/metrics) was missing from the 'Compilation' section. Added one, formatted like the other documented arguments.

* Fix thread safety issue

* Correct spelling mistake (keras-team#13339)

* fix keras-team#13341 math_ops to K (keras-team#13342)

* Fix encoding error (keras-team#13355)

* Add utf-8 encoding

* Fix PEP8 error

* Fix PEP8 error

* Fix issue where the disable_tracking decorator obfuscates layer constructors.

* Fix yaml version compat issue

* Update local.py docstrings (keras-team#13373)

* Update local.py

The stride value refers to the `stride` argument, and the dilation rate to the `dilation_rate` argument, of the Conv1D function. It is more explicit to express these as code than as the words 'stride value' and 'dilation value'. At the very least, both `stride` and `dilation_rate` should be written as code, not only the dilation rate as in the previous documentation.

* Update local.py

mark as code snippet

* Only return the image as a Jupyter Image if the extension is not pdf (keras-team#13383). (keras-team#13384)

keras/utils/vis_utils.py

* Fix file leak in CSVLogger (keras-team#13378)

* Fix file leak in CSVLogger

* Update callbacks.py

* fix: `recurrent_activation` parameter's docstring (keras-team#13401)

* typo_fix (keras-team#13395)

* Prepare 2.3.1 release

* Added the default activation of convolutional LSTM in the docs. (keras-team#13409)

* Small refactors on the keras.utils module (keras-team#13388)

* Use .format calls for string interpolation on utils

* Use generators over listcomps whenever possible to save memory

* Bumped tf2 version to 2.0.0 (keras-team#13412)

* Change `batch_size` descriptions to proper ones (keras-team#13422)

* Change `batch_size` descriptions to proper ones

Since no gradients are updated during `evaluate` and `predict`, changed their `batch_size` docstrings from `"Number of samples per gradient update"` to `"Number of samples per evaluation step"` and `"Number of samples to be predicted at once"`. (The sentence in `fit` remains unchanged.)

I hope this fix propagates to the related auto-generated documents as well.

* Correct `callbacks` description docstrings

Corrected `callbacks` description docstrings in `evaluate_generator` and `predict_generator`: "List of callbacks to apply during training" -> "- during evaluation", "- during prediction".

* Update autogen.py (keras-team#13426)

fix duplicate module name for callbacks module

* Update io_utils.py (keras-team#13429)

I just fixed Numpy -> NumPy in HDF5Matrix class.

* Update pooling.py (keras-team#13467)

* Update pooling.py

Added 'Integer' to the `pool_size` description of `MaxPooling3D`.

* Update pooling.py

Added 'Integer' to the `strides` and `pool_size` descriptions of the 3D layers, and added "If None, it will default to `pool_size`." for consistency with the 1D and 2D layer explanations.

* Update pooling.py

`channels_first` -> `"channels_first"`
`channels_last` -> `"channels_last"`
"channels_last" -> `"channels_last"`

* Update core.py (keras-team#13472)

`channels_first` -> `'channels_first'`
`channels_last`, "channels_last" -> `'channels_last'`


data_format='channels_first' -> `data_format='channels_first'`
data_format='channels_last' -> `data_format='channels_last'`

* Fix h5py group naming while model saving (keras-team#13477)

* Update np_utils.py (keras-team#13481)

* Fix too many values to unpack error (keras-team#13511)

* fix too many values to unpack error

In the example scripts lstm_seq2seq_restore.py and lstm_seq2seq.py, parsing the data with line.split("\t") returns 3 values rather than 2; a simple modification fixes it.
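
A sketch of the fix (the sample line is hypothetical; variable names follow the example scripts):

```python
# Each line now carries three tab-separated fields (input, target, attribution),
# so unpack three values and discard the third instead of unpacking into two names.
line = 'Go.\tVa !\tsome attribution text'
input_text, target_text, _ = line.split('\t')
print(input_text, target_text)  # Go. Va !
```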

* add blank space around operator

Co-authored-by: François Chollet <francois.chollet@gmail.com>
Co-authored-by: Fariz Rahman <farizrahman4u@gmail.com>
Co-authored-by: Pavithra Vijay <psv@google.com>
Co-authored-by: Victor Kovryzhkin <vik.kovrizhkin@gmail.com>
Co-authored-by: Philip May <eniak.info@gmail.com>
Co-authored-by: tanzhenyu <tanzheny@google.com>
Co-authored-by: Taylor Robie <taylorrobie@google.com>
Co-authored-by: Karel Ha <mathemage@gmail.com>
Co-authored-by: Sebastian Höffner <info@sebastian-hoeffner.de>
Co-authored-by: tykimos <adam.tykim@gmail.com>
Co-authored-by: Kostas <kvogiat@gmail.com>
Co-authored-by: Arnout Devos <arnoutdev@gmail.com>
Co-authored-by: Keunwoo Choi <gnuchoi+github@gmail.com>
Co-authored-by: Alexander Ivanov <avi2011class@yandex.ru>
Co-authored-by: Gabriel de Marmiesse <gabrieldemarmiesse@gmail.com>
Co-authored-by: Bharat Raghunathan <bharatraghunthan9767@gmail.com>
Co-authored-by: Hendrik Schreiber <hs@tagtraum.com>
Co-authored-by: Andrey Zakharevich <andreyzakharevich@gmail.com>
Co-authored-by: Shiv Dhar <shivdhar@gmail.com>
Co-authored-by: djstrong <djstrong@gmail.com>
Co-authored-by: fuzzythecat <fuzzy0427@gmail.com>
Co-authored-by: Naruu <esara2021@gmail.com>
Co-authored-by: ftesser <fabio.tesser@gmail.com>
Co-authored-by: Gregory Morse <gregory.morse@live.com>
Co-authored-by: Andrew Naguib <andrew@fci.helwan.edu.eg>
Co-authored-by: Haifeng Jin <jhfjhfj1@gmail.com>
Co-authored-by: Michelle Vivita <mvivita88@gmail.com>
Co-authored-by: Elton Viana <eltonvs@outlook.com>
Co-authored-by: Junyoung Kim <Junyoung.JK.Kim@gmail.com>
Co-authored-by: Denny-Hwang <48212469+Denny-Hwang@users.noreply.github.com>
Co-authored-by: Thibault Buhet <38053590+Tbuhet@users.noreply.github.com>
Co-authored-by: xemcerk <lisislzx@sina.com>
1 parent c10d249 commit 2d1e944
Showing 111 changed files with 11,321 additions and 4,457 deletions.
2 changes: 1 addition & 1 deletion .coveragerc
@@ -11,7 +11,7 @@ exclude_lines =
# Don't complain if legacy support codes are not performed:
if original_keras_version == '1':

fail_under = 87
fail_under = 86
show_missing = True
omit =
keras/applications/*
28 changes: 18 additions & 10 deletions .travis.yml
@@ -6,15 +6,19 @@ cache:
matrix:
include:
- python: 3.6
env: KERAS_BACKEND=tensorflow TEST_MODE=INTEGRATION_TESTS PIL=Pillow
env: KERAS_BACKEND=tensorflow MODE=INTEGRATION_TESTS PIL=Pillow
- python: 3.6
env: KERAS_BACKEND=tensorflow TEST_MODE=PEP8_DOC PIL=Pillow
env: KERAS_BACKEND=tensorflow MODE=PEP8_DOC PIL=Pillow
- python: 3.6
env: KERAS_BACKEND=tensorflow TEST_MODE=API
env: KERAS_BACKEND=tensorflow MODE=API
- python: 2.7
env: KERAS_BACKEND=tensorflow
env: KERAS_BACKEND=tensorflow MODE=TF1
- python: 3.6
env: KERAS_BACKEND=tensorflow
env: KERAS_BACKEND=tensorflow MODE=TF1
- python: 2.7
env: KERAS_BACKEND=tensorflow MODE=TF2
- python: 3.6
env: KERAS_BACKEND=tensorflow MODE=TF2
- python: 2.7
env: KERAS_BACKEND=theano THEANO_FLAGS=optimizer=fast_compile MKL="mkl mkl-service" RUN_ONLY_BACKEND_TESTS=1
- python: 3.6
@@ -55,10 +59,14 @@ install:
- pip install -e .[tests] --progress-bar off

# install TensorFlow (CPU version).
- pip install tensorflow==1.13.1 --progress-bar off
- if [[ "$MODE" == "TF2" ]]; then
pip install tensorflow==2.0.0 --progress-bar off;
else
pip install tensorflow==1.14.0 --progress-bar off;
fi

# install cntk
- if [[ "$KERAS_BACKEND" == "cntk" ]] || [[ "$TEST_MODE" == "PEP8_DOC" ]] || [[ "$TEST_MODE" == "API" ]]; then
- if [[ "$KERAS_BACKEND" == "cntk" ]] || [[ "$MODE" == "PEP8_DOC" ]] || [[ "$MODE" == "API" ]]; then
./.travis/install_cntk.sh;
fi

@@ -81,11 +89,11 @@ script:
# set up keras backend
- sed -i -e 's/"backend":[[:space:]]*"[^"]*/"backend":\ "'$KERAS_BACKEND'/g' ~/.keras/keras.json;
- echo -e "Running tests with the following config:\n$(cat ~/.keras/keras.json)"
- if [[ "$TEST_MODE" == "INTEGRATION_TESTS" ]]; then
- if [[ "$MODE" == "INTEGRATION_TESTS" ]]; then
PYTHONPATH=$PWD:$PYTHONPATH py.test tests/integration_tests;
elif [[ "$TEST_MODE" == "PEP8_DOC" ]]; then
elif [[ "$MODE" == "PEP8_DOC" ]]; then
PYTHONPATH=$PWD:$PYTHONPATH py.test --pep8 -m pep8 -n0 && py.test tests/docs;
elif [[ "$TEST_MODE" == "API" ]]; then
elif [[ "$MODE" == "API" ]]; then
PYTHONPATH=$PWD:$PYTHONPATH pip install git+git://www.github.com/keras-team/keras.git && python update_api.py && pip install -e .[tests] --progress-bar off && py.test tests/test_api.py;
elif [[ "$RUN_ONLY_BACKEND_TESTS" == "1" ]]; then
PYTHONPATH=$PWD:$PYTHONPATH py.test tests/keras/backend/;
8 changes: 6 additions & 2 deletions CONTRIBUTING.md
@@ -23,13 +23,13 @@ The more information you provide, the easier it is for us to validate that there

## Requesting a Feature

You can also use Github issues to request features you would like to see in Keras, or changes in the Keras API.
You can also use [TensorFlow GitHub issues](https://github.com/tensorflow/tensorflow/issues) to request features you would like to see in Keras, or changes in the Keras API.

1. Provide a clear and detailed explanation of the feature you want and why it's important to add. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on library for Keras. It is crucial for Keras to avoid bloating the API and codebase.

2. Provide code snippets demonstrating the API you have in mind and illustrating the use cases of your feature. Of course, you don't need to write any real code at this point!

3. After discussing the feature you may choose to attempt a Pull Request. If you're at all able, start writing some code. We always have more work to do than time to do it. If you can write some code then that will speed the process along.
3. After discussing the feature you may choose to attempt a Pull Request on tf.keras. If you're at all able, start writing some code. We always have more work to do than time to do it. If you can write some code then that will speed the process along.


---
@@ -45,6 +45,10 @@ You can also use Github issues to request features you would like to see in Kera

**Where should I submit my pull request?**

#### Note:

We are no longer adding new features to multi-backend Keras (we only fix bugs), as we are refocusing development efforts on tf.keras. If you are still interested in submitting a feature pull request, please direct it to tf.keras in the TensorFlow repository instead.

1. **Keras improvements and bugfixes** go to the [Keras `master` branch](https://github.com/keras-team/keras/tree/master).
2. **Experimental new features** such as layers and datasets go to [keras-contrib](https://github.com/farizrahman4u/keras-contrib). Unless it is a new feature listed in [Requests for Contributions](https://github.com/keras-team/keras/projects/1), in which case it belongs in core Keras. If you think your feature belongs in core Keras, you can submit a design doc to explain your feature and argue for it (see explanations below).

3 changes: 3 additions & 0 deletions PULL_REQUEST_TEMPLATE.md
@@ -1,6 +1,9 @@
<!--
Please make sure you've read and understood our contributing guidelines;
https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md
Note:
We are no longer adding new features to multi-backend Keras (we only fix bugs), as we are refocusing development efforts on tf.keras. If you are still interested in submitting a feature pull request, please direct it to tf.keras in the TensorFlow repository instead.
-->

### Summary
14 changes: 14 additions & 0 deletions README.md
@@ -22,6 +22,20 @@ Keras is compatible with: __Python 2.7-3.6__.

------------------

## Multi-backend Keras and tf.keras:

**At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to `tf.keras` in TensorFlow 2.0**. `tf.keras` is better maintained and has better integration with TensorFlow features (eager execution, distribution support, and more).

Keras 2.2.5 was the last release of Keras implementing the 2.2.* API. It was the last release to only support TensorFlow 1 (as well as Theano and CNTK).

The current release is Keras 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by `tf.keras`.

Bugs present in multi-backend Keras will only be fixed until April 2020 (as part of minor releases).

For more information about the future of Keras, see [the Keras meeting notes](http://bit.ly/keras-meeting-notes).


------------------

## Guiding principles

19 changes: 11 additions & 8 deletions docs/autogen.py
@@ -33,9 +33,9 @@
def get_function_signature(function, method=True):
wrapped = getattr(function, '_original_function', None)
if wrapped is None:
signature = inspect.getargspec(function)
signature = inspect.getfullargspec(function)
else:
signature = inspect.getargspec(wrapped)
signature = inspect.getfullargspec(wrapped)
defaults = signature.defaults
if method:
args = signature.args[1:]
@@ -84,6 +84,8 @@ def post_process_signature(signature):
signature = 'keras.utils.' + '.'.join(parts[3:])
if parts[1] == 'backend':
signature = 'keras.backend.' + '.'.join(parts[3:])
if parts[1] == 'callbacks':
signature = 'keras.callbacks.' + '.'.join(parts[3:])
return signature


@@ -269,7 +271,7 @@ def add_np_implementation(function, docstring):


def read_file(path):
with open(path) as f:
with open(path, encoding='utf-8') as f:
return f.read()


@@ -326,7 +328,7 @@ def get_module_docstring(filepath):
Also finds the line at which the docstring ends.
"""
co = compile(open(filepath).read(), filepath, 'exec')
co = compile(open(filepath, encoding='utf-8').read(), filepath, 'exec')
if co.co_consts and isinstance(co.co_consts[0], six.string_types):
docstring = co.co_consts[0]
else:
@@ -347,8 +349,9 @@ def copy_examples(examples_dir, destination_dir):
module_path = os.path.join(examples_dir, file)
docstring, starting_line = get_module_docstring(module_path)
destination_file = os.path.join(destination_dir, file[:-2] + 'md')
with open(destination_file, 'w+') as f_out, \
open(os.path.join(examples_dir, file), 'r+') as f_in:
with open(destination_file, 'w+', encoding='utf-8') as f_out, \
open(os.path.join(examples_dir, file),
'r+', encoding='utf-8') as f_in:

f_out.write(docstring + '\n\n')

@@ -391,7 +394,7 @@ def generate(sources_dir):
readme = read_file(os.path.join(str(keras_dir), 'README.md'))
index = read_file(os.path.join(template_dir, 'index.md'))
index = index.replace('{{autogenerated}}', readme[readme.find('##'):])
with open(os.path.join(sources_dir, 'index.md'), 'w') as f:
with open(os.path.join(sources_dir, 'index.md'), 'w', encoding='utf-8') as f:
f.write(index)

print('Generating docs for Keras %s.' % keras.__version__)
@@ -457,7 +460,7 @@ def generate(sources_dir):
subdir = os.path.dirname(path)
if not os.path.exists(subdir):
os.makedirs(subdir)
with open(path, 'w') as f:
with open(path, 'w', encoding='utf-8') as f:
f.write(mkdown)

shutil.copyfile(os.path.join(str(keras_dir), 'CONTRIBUTING.md'),
2 changes: 0 additions & 2 deletions docs/mkdocs.yml
@@ -73,8 +73,6 @@ nav:
- Baby RNN: examples/babi_rnn.md
- Baby MemNN: examples/babi_memnn.md
- CIFAR-10 CNN: examples/cifar10_cnn.md
- CIFAR-10 CNN-Capsule: examples/cifar10_cnn_capsule.md
- CIFAR-10 CNN with augmentation (TF): examples/cifar10_cnn_tfaugment2d.md
- CIFAR-10 ResNet: examples/cifar10_resnet.md
- Convolution filter visualization: examples/conv_filter_visualization.md
- Convolutional LSTM: examples/conv_lstm.md
10 changes: 2 additions & 8 deletions docs/templates/applications.md
@@ -12,7 +12,7 @@ Weights are downloaded automatically when instantiating a model. They are stored
- [Xception](#xception)
- [VGG16](#vgg16)
- [VGG19](#vgg19)
- [ResNet, ResNetV2, ResNeXt](#resnet)
- [ResNet, ResNetV2](#resnet)
- [InceptionV3](#inceptionv3)
- [InceptionResNetV2](#inceptionresnetv2)
- [MobileNet](#mobilenet)
@@ -181,8 +181,6 @@ model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=T
| [ResNet50V2](#resnet) | 98 MB | 0.760 | 0.930 | 25,613,800 | - |
| [ResNet101V2](#resnet) | 171 MB | 0.772 | 0.938 | 44,675,560 | - |
| [ResNet152V2](#resnet) | 232 MB | 0.780 | 0.942 | 60,380,648 | - |
| [ResNeXt50](#resnet) | 96 MB | 0.777 | 0.938 | 25,097,128 | - |
| [ResNeXt101](#resnet) | 170 MB | 0.787 | 0.943 | 44,315,560 | - |
| [InceptionV3](#inceptionv3) | 92 MB | 0.779 | 0.937 | 23,851,784 | 159 |
| [InceptionResNetV2](#inceptionresnetv2) | 215 MB | 0.803 | 0.953 | 55,873,736 | 572 |
| [MobileNet](#mobilenet) | 16 MB | 0.704 | 0.895 | 4,253,864 | 88 |
@@ -377,12 +375,10 @@ keras.applications.resnet.ResNet152(include_top=True, weights='imagenet', input_
keras.applications.resnet_v2.ResNet50V2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet_v2.ResNet101V2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet_v2.ResNet152V2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnext.ResNeXt50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnext.ResNeXt101(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```


ResNet, ResNetV2, ResNeXt models, with weights pre-trained on ImageNet.
ResNet, ResNetV2 models, with weights pre-trained on ImageNet.

These models can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

@@ -424,15 +420,13 @@ A Keras `Model` instance.

- `ResNet`: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
- `ResNetV2`: [Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)
- `ResNeXt`: [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431)

### License

These weights are ported from the following:

- `ResNet`: [The original repository of Kaiming He](https://github.com/KaimingHe/deep-residual-networks) under the [MIT license](https://github.com/KaimingHe/deep-residual-networks/blob/master/LICENSE).
- `ResNetV2`: [Facebook](https://github.com/facebook/fb.resnet.torch) under the [BSD license](https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE).
- `ResNeXt`: [Facebook AI Research](https://github.com/facebookresearch/ResNeXt) under the [BSD license](https://github.com/facebookresearch/ResNeXt/blob/master/LICENSE).

-----

19 changes: 17 additions & 2 deletions docs/templates/getting-started/functional-api-guide.md
@@ -85,6 +85,8 @@ The integers will be between 1 and 10,000 (a vocabulary of 10,000 words) and the
```python
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
import numpy as np
np.random.seed(0) # Set a random seed for reproducibility

# Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
# Note that we can name any layer by passing it a "name" argument.
@@ -138,7 +140,11 @@ model.compile(optimizer='rmsprop', loss='binary_crossentropy',
We can train the model by passing it lists of input arrays and target arrays:

```python
model.fit([headline_data, additional_data], [labels, labels],
headline_data = np.round(np.abs(np.random.rand(12, 100) * 100))
additional_data = np.random.randn(12, 5)
headline_labels = np.random.randn(12, 1)
additional_labels = np.random.randn(12, 1)
model.fit([headline_data, additional_data], [headline_labels, additional_labels],
epochs=50, batch_size=32)
```

@@ -152,10 +158,19 @@ model.compile(optimizer='rmsprop',

# And trained it via:
model.fit({'main_input': headline_data, 'aux_input': additional_data},
{'main_output': labels, 'aux_output': labels},
{'main_output': headline_labels, 'aux_output': additional_labels},
epochs=50, batch_size=32)
```

To use the model for inference, use
```python
model.predict({'main_input': headline_data, 'aux_input': additional_data})
```
or alternatively,
```python
pred = model.predict([headline_data, additional_data])
```

-----

## Shared layers
2 changes: 1 addition & 1 deletion docs/templates/getting-started/sequential-model-guide.md
@@ -52,7 +52,7 @@ Before training a model, you need to configure the learning process, which is do

- An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the `Optimizer` class. See: [optimizers](/optimizers).
- A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function (such as `categorical_crossentropy` or `mse`), or it can be an objective function. See: [losses](/losses).
- A list of metrics. For any classification problem you will want to set this to `metrics=['accuracy']`. A metric could be the string identifier of an existing metric or a custom metric function.
- A list of metrics. For any classification problem you will want to set this to `metrics=['accuracy']`. A metric could be the string identifier of an existing metric or a custom metric function. See: [metrics](/metrics).

```python
# For a multi-class classification problem
2 changes: 1 addition & 1 deletion examples/babi_rnn.py
@@ -79,7 +79,7 @@ def tokenize(sent):
>>> tokenize('Bob dropped the apple. Where is the apple?')
['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
'''
return [x.strip() for x in re.split(r'(\W+)?', sent) if x.strip()]
return [x.strip() for x in re.split(r'(\W+)', sent) if x.strip()]


def parse_stories(lines, only_supporting=False):
2 changes: 1 addition & 1 deletion examples/cifar10_cnn.py
@@ -56,7 +56,7 @@
model.add(Activation('softmax'))

# initiate RMSprop optimizer
opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6)
opt = keras.optimizers.RMSprop(learning_rate=0.0001, decay=1e-6)

# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',