# tfc.EntropyBottleneck
+
+ <table class="tfo-notebook-buttons tfo-api" align="left">
+
+ <td>
+   <a target="_blank" href="https://github.com/tensorflow/compression/tree/master/tensorflow_compression/python/layers/entropy_models.py">
+     <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
+     View source on GitHub
+   </a>
+ </td></table>
+
+
## Class `EntropyBottleneck`

Entropy bottleneck layer.
@@ -63,19 +75,9 @@ Inherits From: [`EntropyModel`](../tfc/EntropyModel.md)
### Aliases:

- * Class `tfc.EntropyBottleneck`
* Class `tfc.python.layers.entropy_models.EntropyBottleneck`


-
-
- <table class="tfo-github-link" align="left">
-   <a target="_blank" href="https://github.com/tensorflow/compression/tree/master/tensorflow_compression/python/layers/entropy_models.py">
-     <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
-     View source on GitHub
-   </a>
- </table>
-
<!-- Placeholder for "Used in" -->


This layer models the entropy of the tensor passing through it. During
@@ -117,64 +119,6 @@ which are only significant for compression and decompression. To use the
compression feature, the auxiliary loss must be minimized during or after
training. After that, the update op must be executed at least once.
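As a back-of-the-envelope illustration of the rate term such an entropy layer provides (a NumPy sketch with made-up likelihood values, not the library's API), the per-element likelihoods returned during training convert to an estimated code length in bits via the negative base-2 log-likelihood:

```python
import numpy as np

# Hypothetical per-element likelihoods as an entropy model might return
# during training (values are made up for illustration).
likelihoods = np.array([0.5, 0.25, 0.125, 0.5])

# Estimated code length in bits: negative log-likelihood, base 2.
# Minimizing this term during training minimizes the expected bit rate.
rate_bits = -np.sum(np.log2(likelihoods))

print(rate_bits)  # → 7.0 (i.e. 1 + 2 + 3 + 1 bits)
```

In practice this rate term is added to the training loss alongside the layer's auxiliary loss mentioned above.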
- #### Arguments:
-
-
- * <b>`init_scale`</b>: Float. A scaling factor determining the initial width of the
-   probability densities. This should be chosen big enough so that the
-   range of values of the layer inputs roughly falls within the interval
-   [`-init_scale`, `init_scale`] at the beginning of training.
- * <b>`filters`</b>: An iterable of ints, giving the number of filters at each layer of
-   the density model. Generally, the more filters and layers, the more
-   expressive is the density model in terms of modeling more complicated
-   distributions of the layer inputs. For details, refer to the paper
-   referenced above. The default is `[3, 3, 3]`, which should be sufficient
-   for most practical purposes.
- * <b>`tail_mass`</b>: Float, between 0 and 1. The bottleneck layer automatically
-   determines the range of input values based on their frequency of
-   occurrence. Values occurring in the tails of the distributions will not be
-   encoded with range coding, but using a Golomb-like code. `tail_mass`
-   determines the amount of probability mass in the tails which will be
-   Golomb-coded. For example, the default value of `2 ** -8` means that on
-   average, one 256th of all values will use the Golomb code.
- * <b>`likelihood_bound`</b>: Float. If positive, the returned likelihood values are
-   ensured to be greater than or equal to this value. This prevents very
-   large gradients with a typical entropy loss (defaults to 1e-9).
- * <b>`range_coder_precision`</b>: Integer, between 1 and 16. The precision of the range
-   coder used for compression and decompression. This trades off computation
-   speed with compression efficiency, where 16 is the slowest but most
-   efficient setting. Choosing lower values may increase the average
-   codelength slightly compared to the estimated entropies.
- * <b>`data_format`</b>: Either `'channels_first'` or `'channels_last'` (default).
- * <b>`trainable`</b>: Boolean. Whether the layer should be trained.
- * <b>`name`</b>: String. The name of the layer.
- * <b>`dtype`</b>: `DType` of the layer's inputs, parameters, returned likelihoods, and
-   outputs during training. Default of `None` means to use the type of the
-   first input.
-
- Read-only properties:
-   init_scale: See above.
-   filters: See above.
-   tail_mass: See above.
-   likelihood_bound: See above.
-   range_coder_precision: See above.
-   data_format: See above.
-   name: String. See above.
-   dtype: See above.
-   trainable_variables: List of trainable variables.
-   non_trainable_variables: List of non-trainable variables.
-   variables: List of all variables of this layer, trainable and non-trainable.
-   updates: List of update ops of this layer.
-   losses: List of losses added by this layer. Always contains exactly one
-     auxiliary loss, which must be added to the training loss.
-
- #### Mutable properties:
-
-
- * <b>`trainable`</b>: Boolean. Whether the layer should be trained.
- * <b>`input_spec`</b>: Optional `InputSpec` object specifying the constraints on inputs
-   that can be accepted by the layer.
-
<h2 id="__init__"><code>__init__</code></h2>

<a target="_blank" href="https://github.com/tensorflow/compression/tree/master/tensorflow_compression/python/layers/entropy_models.py">View source</a>
@@ -188,8 +132,24 @@ __init__(
)
```

+ Initializer.


+ #### Arguments:
+
+
+ * <b>`init_scale`</b>: Float. A scaling factor determining the initial width of the
+   probability densities. This should be chosen big enough so that the
+   range of values of the layer inputs roughly falls within the interval
+   [`-init_scale`, `init_scale`] at the beginning of training.
+ * <b>`filters`</b>: An iterable of ints, giving the number of filters at each layer
+   of the density model. Generally, the more filters and layers, the more
+   expressive is the density model in terms of modeling more complicated
+   distributions of the layer inputs. For details, refer to the paper
+   referenced above. The default is `[3, 3, 3]`, which should be sufficient
+   for most practical purposes.
+ * <b>`data_format`</b>: Either `'channels_first'` or `'channels_last'` (default).
+ * <b>`**kwargs`</b>: Other keyword arguments passed to superclass (`EntropyModel`).
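To make the `init_scale` guidance above concrete, here is a small NumPy sketch (the batch values and the chosen scale are hypothetical, not from the library) of checking that a batch of layer inputs roughly falls within the documented interval at the start of training:

```python
import numpy as np

init_scale = 10.0  # example value; choose it to cover the input range

# Hypothetical layer inputs at the beginning of training.
rng = np.random.default_rng(0)
inputs = rng.normal(loc=0.0, scale=3.0, size=(4, 16, 16, 8))

# The docs advise choosing init_scale big enough that the input values
# roughly fall within [-init_scale, init_scale]; measure that fraction.
inside = np.mean(np.abs(inputs) <= init_scale)
print(f"fraction inside [-{init_scale}, {init_scale}]: {inside:.3f}")
```

If the measured fraction is well below 1, a larger `init_scale` would likely be a better starting point for training.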