Add fine-tuning hparams to collection documentation
PiperOrigin-RevId: 335597568
1 parent ad11a2e · commit 44c0216
Showing 1,537 changed files with 110,105 additions and 0 deletions.
@@ -0,0 +1,10 @@
# Publisher android-studio

Android Studio

[![Icon URL]](https://www.gstatic.com/aihub/tfhub/publisher_logos/android-studio/logo.png)

## [developer.android.com/studio](https://developer.android.com/studio)

Android Studio provides the fastest tools for building apps on every type of
Android device.
123 changes: 123 additions & 0 deletions
assets/docs/android-studio/collections/ml-model-binding/1.md
@@ -0,0 +1,123 @@
# Collection android-studio/ml-model-binding/1

Collection of TFLite models that can be used with Android Studio ML Model
Binding.

<!-- module-type: image-classification -->
<!-- module-type: image-style-transfer -->

## Overview

Use TensorFlow Lite models from TF Hub with ML Model Binding in Android Studio
version 4.1 or later. ML Model Binding makes it easy for you to directly import
`.tflite` model files and use them in your projects. Learn more about how to
[import TFLite models in Android Studio](https://developer.android.com/studio/write/mlmodelbinding).

Follow these steps to use a model in your Android Studio project:

1. Install [Android Studio 4.1 Beta 1 or later](https://developer.android.com/studio/preview).
2. Download the `.tflite` model file from the model details page. Pick a model
   with metadata if one is available.
3. In Android Studio, open the TensorFlow Lite model import dialog from the
   File menu: **File > New > Other > TensorFlow Lite Model**. Select the
   `.tflite` model file that you downloaded.
4. Click **Finish**.

This imports the model file into your project and places it in the **ml/**
folder. Clicking the model file in your project opens the model viewer, which
shows the following:

* **Model:** High-level description of the model
* **Tensors:** Description of input and output tensors
* **Sample code:** Example of how to interface with the model in your app
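
If you want to double-check a model's tensors before importing it, the
downloaded file can also be inspected with the TensorFlow Lite Python
interpreter. The sketch below is only an illustration; the file name
`model.tflite` simply stands in for whatever you downloaded in step 2.

```python
import tensorflow as tf

# Load the downloaded .tflite file and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

# Print the same input/output tensor information that the Android Studio
# model viewer summarizes.
for detail in interpreter.get_input_details():
  print('input :', detail['name'], detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
  print('output:', detail['name'], detail['shape'], detail['dtype'])
```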

## Models

<!-- A list of models in the collection -->
<!-- (https://tfhub.dev/agripredict/lite-model/disease-classification/1) -->
<!-- (https://tfhub.dev/bohemian-visual-recognition-alliance/lite-model/models/mushroom-identification_v1/1) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/birds_V1/2) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/birds_V1/3) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/food_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/insects_V1/2) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/insects_V1/3) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/plants_V1/2) -->
<!-- (https://tfhub.dev/google/lite-model/aiy/vision/classifier/plants_V1/3) -->
<!-- (https://tfhub.dev/google/lite-model/cropnet/classifier/cassava_disease_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/prediction/1) -->
<!-- (https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/transfer/1) -->
<!-- (https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1) -->
<!-- (https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/landmarks_classifier_africa_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/landmarks_classifier_asia_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/landmarks_classifier_europe_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/landmarks_classifier_north_america_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/landmarks_classifier_oceania_antarctica_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/landmarks_classifier_south_america_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/popular_us_products_V1/1) -->
<!-- (https://tfhub.dev/google/lite-model/on_device_vision/classifier/popular_wine_V1/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/densenet/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite0/fp32/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite0/int8/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite1/fp32/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite1/int8/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite2/fp32/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite2/int8/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite3/fp32/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite3/int8/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite4/fp32/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/efficientnet/lite4/int8/2) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_resnet_v2/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_v1_quant/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_v2_quant/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_v3/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_v3_quant/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_v4/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/inception_v4_quant/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_0.50_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_0.75_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_1.0_128/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_1.0_160/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_1.0_192/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_1.0_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_1.0_96/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mnasnet_1.3_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_128/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_128_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_160/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_160_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_192/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_192_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.25_224_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_128/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_128_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_160/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_160_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_192/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_192_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.50_224_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_128/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_128_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_160/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_160_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_192/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_192_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_224_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_128/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_128_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_160/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_160_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_192/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_192_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_224_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224_quantized/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/nasnet/large/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/nasnet/mobile/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/resnet_v2_101/1/metadata/1) -->
<!-- (https://tfhub.dev/tensorflow/lite-model/squeezenet/1/metadata/1) -->
@@ -0,0 +1,11 @@
# Publisher deepmind
DeepMind

[![Icon URL]](https://www.gstatic.com/aihub/deepmind_logo_120.png)

## [www.deepmind.com](http://www.deepmind.com)

DeepMind is a leader in cutting-edge AI research and its application for
positive impact. The TensorFlow Hub modules we openly publish here are intended
to allow everyone to benefit from our research and encourage others to apply our
work to solve real-world problems.
@@ -0,0 +1,58 @@
# Module deepmind/bigbigan-resnet50/1
Unsupervised BigBiGAN image generation & representation learning model trained
on ImageNet with a smaller (ResNet-50) encoder architecture.

<!-- dataset: ImageNet (ILSVRC-2012-CLS) -->
<!-- module-type: image-generator -->
<!-- network-architecture: BigBiGAN -->
<!-- fine-tunable: false -->
<!-- format: hub -->

[![Open Colab notebook]](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/bigbigan_with_tf_hub.ipynb)

## Overview

This is the unsupervised *BigBiGAN* image generator and representation learning
model described in [1], corresponding to the penultimate row of Table 1
("ResNet (↑ Encoder LR)") and the "BigBiGAN / ResNet-50" rows of
Table 2.

#### Example use
```python
import tensorflow as tf       # TF1-style graph-mode code (uses tf.placeholder).
import tensorflow_hub as hub

# Load BigBiGAN module.
module = hub.Module('https://tfhub.dev/deepmind/bigbigan-resnet50/1')

# Sample a batch of 8 random latent vectors (z) from the Gaussian prior. Then
# call the generator on the latent samples to generate a batch of images with
# shape [8, 128, 128, 3] and range [-1, 1].
z = tf.random.normal([8, 120])  # latent samples
gen_samples = module(z, signature='generate')

# Given a batch of 256x256 RGB images in range [-1, 1], call the encoder to
# compute predicted latents z and other features (e.g. for use in downstream
# recognition tasks).
images = tf.placeholder(tf.float32, shape=[None, 256, 256, 3])
features = module(images, signature='encode', as_dict=True)

# Get the predicted latent sample `z_sample` from the dict of features.
# Other available features include `avepool_feat` and `bn_crelu_feat`, used in
# the representation learning results.
z_sample = features['z_sample']  # shape [?, 120]

# Compute reconstructions of the input `images` by passing the encoder's output
# `z_sample` back through the generator. Note that raw generator outputs are
# half the resolution of encoder inputs (128x128). To get upsampled generator
# outputs matching the encoder input resolution (256x256), instead use:
# recons = module(z_sample, signature='generate', as_dict=True)['upsampled']
recons = module(z_sample, signature='generate')  # shape [?, 128, 128, 3]
```
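
The snippet above only builds the graph. A minimal sketch of running it in a
TF1 session is shown below; `image_batch` is just a stand-in for your own
preprocessed images with values in [-1, 1].

```python
import numpy as np

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())

  # Unconditional samples from the prior: shape [8, 128, 128, 3], range [-1, 1].
  prior_samples = sess.run(gen_samples)

  # Encode and reconstruct a batch of images (random data here, for illustration).
  image_batch = np.random.uniform(-1.0, 1.0, size=[4, 256, 256, 3]).astype(np.float32)
  latents, recon_batch = sess.run([z_sample, recons],
                                  feed_dict={images: image_batch})
  print(latents.shape, recon_batch.shape)  # (4, 120) (4, 128, 128, 3)
```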

See the [Colab notebook demo](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/bigbigan_with_tf_hub.ipynb)
for more detailed example use.

## References

[1] Jeff Donahue and Karen Simonyan.
[Large Scale Adversarial Representation Learning](https://arxiv.org/abs/1907.02544).
*arXiv:1907.02544*, 2019.
@@ -0,0 +1,58 @@
# Module deepmind/bigbigan-revnet50x4/1
Unsupervised BigBiGAN image generation & representation learning model trained
on ImageNet with a larger (RevNet-50x4) encoder architecture.

<!-- dataset: ImageNet (ILSVRC-2012-CLS) -->
<!-- module-type: image-generator -->
<!-- network-architecture: BigBiGAN -->
<!-- fine-tunable: false -->
<!-- format: hub -->

[![Open Colab notebook]](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/bigbigan_with_tf_hub.ipynb)

## Overview

This is the unsupervised *BigBiGAN* image generator and representation learning
model described in [1], corresponding to the last row of Table 1
("RevNet x4 (↑ Encoder LR)") and the "BigBiGAN / RevNet-50 x4" rows of
Table 2.

#### Example use
```python
import tensorflow as tf       # TF1-style graph-mode code (uses tf.placeholder).
import tensorflow_hub as hub

# Load BigBiGAN module.
module = hub.Module('https://tfhub.dev/deepmind/bigbigan-revnet50x4/1')

# Sample a batch of 8 random latent vectors (z) from the Gaussian prior. Then
# call the generator on the latent samples to generate a batch of images with
# shape [8, 128, 128, 3] and range [-1, 1].
z = tf.random.normal([8, 120])  # latent samples
gen_samples = module(z, signature='generate')

# Given a batch of 256x256 RGB images in range [-1, 1], call the encoder to
# compute predicted latents z and other features (e.g. for use in downstream
# recognition tasks).
images = tf.placeholder(tf.float32, shape=[None, 256, 256, 3])
features = module(images, signature='encode', as_dict=True)

# Get the predicted latent sample `z_sample` from the dict of features.
# Other available features include `avepool_feat` and `bn_crelu_feat`, used in
# the representation learning results.
z_sample = features['z_sample']  # shape [?, 120]

# Compute reconstructions of the input `images` by passing the encoder's output
# `z_sample` back through the generator. Note that raw generator outputs are
# half the resolution of encoder inputs (128x128). To get upsampled generator
# outputs matching the encoder input resolution (256x256), instead use:
# recons = module(z_sample, signature='generate', as_dict=True)['upsampled']
recons = module(z_sample, signature='generate')  # shape [?, 128, 128, 3]
```
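
Beyond `z_sample`, the encoder's pooled features can feed a downstream
classifier, as in the paper's representation-learning evaluation. The sketch
below is illustrative only: the linear probe, `num_classes`, and the label
placeholder are not part of the module.

```python
# Pooled encoder features from the dict returned by the 'encode' signature.
avepool_feat = features['avepool_feat']

# Hypothetical linear probe trained on top of the frozen encoder features.
num_classes = 10
logits = tf.layers.dense(avepool_feat, num_classes, name='linear_probe')
labels = tf.placeholder(tf.int32, shape=[None])
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Train only the probe's variables; the BigBiGAN encoder stays frozen
# (the module is not fine-tunable).
train_op = tf.train.AdamOptimizer(1e-4).minimize(
    loss, var_list=tf.trainable_variables('linear_probe'))
```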

See the [Colab notebook demo](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/bigbigan_with_tf_hub.ipynb)
for more detailed example use.

## References

[1] Jeff Donahue and Karen Simonyan.
[Large Scale Adversarial Representation Learning](https://arxiv.org/abs/1907.02544).
*arXiv:1907.02544*, 2019.
@@ -0,0 +1,48 @@
# Module deepmind/biggan-128/1
BigGAN image generator trained on 128x128 ImageNet.

<!-- dataset: ImageNet (ILSVRC-2012-CLS) -->
<!-- module-type: image-generator -->
<!-- network-architecture: BigGAN -->
<!-- fine-tunable: false -->
<!-- format: hub -->

[![Open Colab notebook]](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb)

## Overview

This is the 128x128 *BigGAN* image generator described in [1], corresponding to
Row 3 in Table 2 (res. 128).

#### Example use
```python
import tensorflow as tf       # TF1-style graph-mode code.
import tensorflow_hub as hub

# Load BigGAN 128 module.
module = hub.Module('https://tfhub.dev/deepmind/biggan-128/1')

# Sample random noise (z) and ImageNet label (y) inputs.
batch_size = 8
truncation = 0.5  # scalar truncation value in [0.02, 1.0]
z = truncation * tf.random.truncated_normal([batch_size, 120])  # noise sample
y_index = tf.random.uniform([batch_size], maxval=1000, dtype=tf.int32)
y = tf.one_hot(y_index, 1000)  # one-hot ImageNet label

# Call BigGAN on a dict of the inputs to generate a batch of images with shape
# [8, 128, 128, 3] and range [-1, 1].
samples = module(dict(y=y, z=z, truncation=truncation))
```
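
The call above only defines the graph. A minimal sketch of actually drawing
samples and converting them to displayable uint8 images follows; the
[-1, 1] to [0, 255] mapping is the usual convention, not something prescribed
by the module.

```python
import numpy as np

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sample_batch = sess.run(samples)  # float32, shape [8, 128, 128, 3], in [-1, 1]

# Map [-1, 1] to [0, 255] for viewing or saving.
images_uint8 = np.clip((sample_batch + 1.0) * 127.5, 0, 255).astype(np.uint8)
print(images_uint8.shape, images_uint8.dtype)  # (8, 128, 128, 3) uint8
```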

#### Note from the authors

This work was conducted to advance the state of the art in
generative adversarial networks for image generation.
We are releasing the pre-trained generator to allow our work to be
verified, which is standard practice in academia.
It does not include the discriminator to minimize the potential for
exploitation.

## References

[1] Andrew Brock, Jeff Donahue, and Karen Simonyan.
[Large Scale GAN Training for High Fidelity Natural Image Synthesis](https://arxiv.org/abs/1809.11096).
*arXiv:1809.11096*, 2018.
@@ -0,0 +1,59 @@
# Module deepmind/biggan-128/2
BigGAN image generator trained on 128x128 ImageNet.

<!-- dataset: ImageNet (ILSVRC-2012-CLS) -->
<!-- module-type: image-generator -->
<!-- network-architecture: BigGAN -->
<!-- fine-tunable: false -->
<!-- format: hub -->

[![Open Colab notebook]](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb)

## Overview

This is the 128x128 *BigGAN* image generator described in [1], corresponding to
Row 3 in Table 2 (res. 128).

#### Example use
```python
import tensorflow as tf       # TF1-style graph-mode code.
import tensorflow_hub as hub

# Load BigGAN 128 module.
module = hub.Module('https://tfhub.dev/deepmind/biggan-128/2')

# Sample random noise (z) and ImageNet label (y) inputs.
batch_size = 8
truncation = 0.5  # scalar truncation value in [0.02, 1.0]
z = truncation * tf.random.truncated_normal([batch_size, 120])  # noise sample
y_index = tf.random.uniform([batch_size], maxval=1000, dtype=tf.int32)
y = tf.one_hot(y_index, 1000)  # one-hot ImageNet label

# Call BigGAN on a dict of the inputs to generate a batch of images with shape
# [8, 128, 128, 3] and range [-1, 1].
samples = module(dict(y=y, z=z, truncation=truncation))
```
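
To compare several truncation values for the same underlying noise draw, you
can build one output per value and evaluate them in a single session run. The
sketch below is illustrative; the particular truncation values are arbitrary.

```python
# One sampling op per truncation value, sharing the same noise and labels.
z_raw = tf.random.truncated_normal([batch_size, 120])
outputs = {t: module(dict(y=y, z=t * z_raw, truncation=t))
           for t in (0.2, 0.5, 1.0)}

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # Within one run call, z_raw and y are each evaluated once and shared, so the
  # three batches differ only in truncation.
  results = sess.run(outputs)  # maps truncation value -> [8, 128, 128, 3] array
```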

#### Note from the authors

This work was conducted to advance the state of the art in
generative adversarial networks for image generation.
We are releasing the pre-trained generator to allow our work to be
verified, which is standard practice in academia.
It does not include the discriminator to minimize the potential for
exploitation.

## Changelog

#### Version 1

* Initial release.

#### Version 2

* Fixed a race condition that caused the batch statistics for the previous
  truncation value to be used on the first run call with a new truncation
  value.

## References

[1] Andrew Brock, Jeff Donahue, and Karen Simonyan.
[Large Scale GAN Training for High Fidelity Natural Image Synthesis](https://arxiv.org/abs/1809.11096).
*arXiv:1809.11096*, 2018.