
Commit 7657a2b

A. Unique TensorFlower authored and tensorflower-gardener committed
Update generated Python Op docs.
Change: 120988643
1 parent 9eaef1e commit 7657a2b


6 files changed: +317 -102 lines changed


tensorflow/g3doc/api_docs/python/contrib.layers.md

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ The variable creation is compatible with `tf.variable_scope` and so can be
 reused with `tf.variable_scope` or `tf.make_template`.

 Most of the details of variable creation can be controlled by specifying the
-initializers (`weight_init` and `bias_init`) and which in collections to place
+initializers (`weight_init` and `bias_init`) and in which collections to place
 the created variables (`weight_collections` and `bias_collections`; note that
 the variables are always added to the `VARIABLES` collection). The output of
 the layer can be placed in custom collections using `output_collections`.
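To make the corrected sentence concrete, here is a hedged sketch of how the initializer and collection arguments might be passed to one of this module's layers. The layer name and the positional output-size argument are assumptions about this era's `tf.contrib.layers` API; only the keyword arguments named in the paragraph above are taken from the docs.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 32])

# Choose the initializers and the collections the created variables join;
# the variables are always added to the VARIABLES collection as well, and
# the layer output is placed in the requested output collection.
y = tf.contrib.layers.fully_connected(
    x, 64,
    weight_init=tf.truncated_normal_initializer(stddev=0.1),
    bias_init=tf.constant_initializer(0.0),
    weight_collections=["my_weights"],
    bias_collections=["my_biases"],
    output_collections=["layer_outputs"])
```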

tensorflow/g3doc/api_docs/python/control_flow_ops.md

Lines changed: 14 additions & 0 deletions
@@ -744,6 +744,14 @@ Asserts that the given condition is true.
 If `condition` evaluates to false, print the list of tensors in `data`.
 `summarize` determines how many entries of the tensors to print.

+NOTE: To ensure that Assert executes, one usually attaches a dependency:
+
+```python
+# Ensure the maximum element of x is less than or equal to 1.
+assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
+x = tf.with_dependencies([assert_op], x)
+```
+
 ##### Args:


@@ -752,6 +760,12 @@ If `condition` evaluates to false, print the list of tensors in `data`.
 * <b>`summarize`</b>: Print this many entries of each tensor.
 * <b>`name`</b>: A name for this operation (optional).

+##### Returns:
+
+
+* <b>`assert_op`</b>: An `Operation` that, when executed, raises a
+  `tf.errors.InvalidArgumentError` if `condition` is not true.
+

 - - -

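A runnable sketch of the NOTE added above (the placeholder shape and feed values are made up): the dependency can also be attached with `tf.control_dependencies` instead of `tf.with_dependencies`.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])

# The assertion only runs when something that depends on it is evaluated.
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
with tf.control_dependencies([assert_op]):
    x_checked = tf.identity(x)

with tf.Session() as sess:
    print(sess.run(x_checked, feed_dict={x: [0.2, 0.9]}))   # passes
    # sess.run(x_checked, feed_dict={x: [1.5]}) would raise
    # tf.errors.InvalidArgumentError, as described under Returns.
```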

tensorflow/g3doc/api_docs/python/framework.md

Lines changed: 154 additions & 0 deletions
@@ -719,6 +719,9 @@ a collection several times. This function makes sure that duplicates in
 `names` are ignored, but it will not check for pre-existing membership of
 `value` in any of the collections in `names`.

+`names` can be any iterable, but if `names` is a string, it is treated as a
+single collection name.
+
 ##### Args:


@@ -2498,6 +2501,157 @@ registered with a smaller priority than `G`.


 ## Other Functions and Classes
+- - -
+
+### `class tf.DeviceSpec` {#DeviceSpec}
+
+Represents a (possibly partial) specification for a TensorFlow device.
+
+`DeviceSpec`s are used throughout TensorFlow to describe where state is stored
+and computations occur. Using `DeviceSpec` allows you to parse device spec
+strings to verify their validity, and to merge or compose them programmatically.
+
+Example:
+```python
+# Place the operations on device "GPU:0" in the "ps" job.
+device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
+with tf.device(device_spec):
+  # Both my_var and squared_var will be placed on /job:ps/device:GPU:0.
+  my_var = tf.Variable(..., name="my_variable")
+  squared_var = tf.square(my_var)
+```
+
+If a `DeviceSpec` is partially specified, it will be merged with other
+`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec`
+components defined in inner scopes take precedence over those defined in
+outer scopes.
+
+```python
+with tf.device(DeviceSpec(job="train")):
+  with tf.device(DeviceSpec(job="ps", device_type="GPU", device_index=0)):
+    # Nodes created here will be assigned to /job:ps/device:GPU:0.
+  with tf.device(DeviceSpec(device_type="GPU", device_index=1)):
+    # Nodes created here will be assigned to /job:train/device:GPU:1.
+```
+
+A `DeviceSpec` consists of 5 components, each of which is optionally
+specified:
+
+* Job: The job name.
+* Replica: The replica index.
+* Task: The task index.
+* Device type: The device type string (e.g. "CPU" or "GPU").
+* Device index: The device index.
+
+- - -
+
+#### `tf.DeviceSpec.__init__(job=None, replica=None, task=None, device_type=None, device_index=None)` {#DeviceSpec.__init__}
+
+Create a new `DeviceSpec` object.
+
+##### Args:
+
+
+* <b>`job`</b>: string. Optional job name.
+* <b>`replica`</b>: int. Optional replica index.
+* <b>`task`</b>: int. Optional task index.
+* <b>`device_type`</b>: Optional device type string (e.g. "CPU" or "GPU").
+* <b>`device_index`</b>: int. Optional device index. If left
+  unspecified, device represents 'any' device_index.
+
+
+- - -
+
+#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string}
+
+Construct a `DeviceSpec` from a string.
+
+##### Args:
+
+
+* <b>`spec`</b>: a string of the form
+  /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
+  or
+  /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
+  as CPU and GPU are mutually exclusive.
+  All entries are optional.
+
+##### Returns:
+
+  A `DeviceSpec`.
+
+
+- - -
+
+#### `tf.DeviceSpec.job` {#DeviceSpec.job}
+
+
+
+- - -
+
+#### `tf.DeviceSpec.merge_from(dev)` {#DeviceSpec.merge_from}
+
+Merge the properties of `dev` into this `DeviceSpec`.
+
+##### Args:
+
+
+* <b>`dev`</b>: a `DeviceSpec`.
+
+
+- - -
+
+#### `tf.DeviceSpec.parse_from_string(spec)` {#DeviceSpec.parse_from_string}
+
+Parse a `DeviceSpec` name into its components.
+
+##### Args:
+
+
+* <b>`spec`</b>: a string of the form
+  /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
+  or
+  /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
+  as CPU and GPU are mutually exclusive.
+  All entries are optional.
+
+##### Returns:
+
+  The `DeviceSpec`.
+
+##### Raises:
+
+
+* <b>`ValueError`</b>: if the spec was not valid.
+
+
+- - -
+
+#### `tf.DeviceSpec.replica` {#DeviceSpec.replica}
+
+
+
+- - -
+
+#### `tf.DeviceSpec.task` {#DeviceSpec.task}
+
+
+
+- - -
+
+#### `tf.DeviceSpec.to_string()` {#DeviceSpec.to_string}
+
+Return a string representation of this `DeviceSpec`.
+
+##### Returns:
+
+  a string of the form
+  /job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>.
+
+
 - - -

 ### `class tf.bytes` {#bytes}
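A short sketch of the `DeviceSpec` methods documented above; the job, replica, and device values are made up.

```python
import tensorflow as tf

# Start from a partially specified device string.
spec = tf.DeviceSpec.from_string("/job:train/replica:0")

# Merge in the device component from another partial spec.
spec.merge_from(tf.DeviceSpec(device_type="GPU", device_index=1))

# Unset components (here, task) are simply omitted from the string form.
print(spec.to_string())  # /job:train/replica:0/device:GPU:1

# A DeviceSpec can be passed to tf.device like a device string.
with tf.device(spec):
  v = tf.Variable(tf.zeros([2]), name="v")
```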

tensorflow/g3doc/api_docs/python/index.md

Lines changed: 4 additions & 2 deletions
@@ -10,6 +10,7 @@
 * [`convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor)
 * [`convert_to_tensor_or_indexed_slices`](../../api_docs/python/framework.md#convert_to_tensor_or_indexed_slices)
 * [`device`](../../api_docs/python/framework.md#device)
+* [`DeviceSpec`](../../api_docs/python/framework.md#DeviceSpec)
 * [`Dimension`](../../api_docs/python/framework.md#Dimension)
 * [`DType`](../../api_docs/python/framework.md#DType)
 * [`get_collection`](../../api_docs/python/framework.md#get_collection)
@@ -70,9 +71,11 @@
 * [`constant_initializer`](../../api_docs/python/state_ops.md#constant_initializer)
 * [`count_up_to`](../../api_docs/python/state_ops.md#count_up_to)
 * [`device`](../../api_docs/python/state_ops.md#device)
+* [`export_meta_graph`](../../api_docs/python/state_ops.md#export_meta_graph)
 * [`get_checkpoint_state`](../../api_docs/python/state_ops.md#get_checkpoint_state)
 * [`get_variable`](../../api_docs/python/state_ops.md#get_variable)
 * [`get_variable_scope`](../../api_docs/python/state_ops.md#get_variable_scope)
+* [`import_meta_graph`](../../api_docs/python/state_ops.md#import_meta_graph)
 * [`IndexedSlices`](../../api_docs/python/state_ops.md#IndexedSlices)
 * [`initialize_all_variables`](../../api_docs/python/state_ops.md#initialize_all_variables)
 * [`initialize_local_variables`](../../api_docs/python/state_ops.md#initialize_local_variables)
@@ -454,6 +457,7 @@
 * [`AdamOptimizer`](../../api_docs/python/train.md#AdamOptimizer)
 * [`add_queue_runner`](../../api_docs/python/train.md#add_queue_runner)
 * [`AggregationMethod`](../../api_docs/python/train.md#AggregationMethod)
+* [`audio_summary`](../../api_docs/python/train.md#audio_summary)
 * [`clip_by_average_norm`](../../api_docs/python/train.md#clip_by_average_norm)
 * [`clip_by_global_norm`](../../api_docs/python/train.md#clip_by_global_norm)
 * [`clip_by_norm`](../../api_docs/python/train.md#clip_by_norm)
@@ -462,7 +466,6 @@
 * [`Coordinator`](../../api_docs/python/train.md#Coordinator)
 * [`exponential_decay`](../../api_docs/python/train.md#exponential_decay)
 * [`ExponentialMovingAverage`](../../api_docs/python/train.md#ExponentialMovingAverage)
-* [`export_meta_graph`](../../api_docs/python/train.md#export_meta_graph)
 * [`FtrlOptimizer`](../../api_docs/python/train.md#FtrlOptimizer)
 * [`generate_checkpoint_state_proto`](../../api_docs/python/train.md#generate_checkpoint_state_proto)
 * [`global_norm`](../../api_docs/python/train.md#global_norm)
@@ -471,7 +474,6 @@
 * [`gradients`](../../api_docs/python/train.md#gradients)
 * [`histogram_summary`](../../api_docs/python/train.md#histogram_summary)
 * [`image_summary`](../../api_docs/python/train.md#image_summary)
-* [`import_meta_graph`](../../api_docs/python/train.md#import_meta_graph)
 * [`LooperThread`](../../api_docs/python/train.md#LooperThread)
 * [`merge_all_summaries`](../../api_docs/python/train.md#merge_all_summaries)
 * [`merge_summary`](../../api_docs/python/train.md#merge_summary)

tensorflow/g3doc/api_docs/python/state_ops.md

Lines changed: 99 additions & 0 deletions
@@ -1988,3 +1988,102 @@ The `Graph` that contains the values, indices, and shape tensors.



+
+
+## Exporting and Importing Meta Graphs
+
+- - -
+
+### `tf.train.export_meta_graph(filename=None, meta_info_def=None, graph_def=None, saver_def=None, collection_list=None, as_text=False)` {#export_meta_graph}
+
+Returns a `MetaGraphDef` proto. Optionally writes it to `filename`.
+
+This function exports the graph, saver, and collection objects into a
+`MetaGraphDef` protocol buffer with the intention of it being imported
+at a later time or location to restart training, run inference, or be
+a subgraph.
+
+##### Args:
+
+
+* <b>`filename`</b>: Optional filename, including the path, for writing the
+  generated `MetaGraphDef` protocol buffer.
+* <b>`meta_info_def`</b>: `MetaInfoDef` protocol buffer.
+* <b>`graph_def`</b>: `GraphDef` protocol buffer.
+* <b>`saver_def`</b>: `SaverDef` protocol buffer.
+* <b>`collection_list`</b>: List of string keys to collect.
+* <b>`as_text`</b>: If `True`, writes the `MetaGraphDef` as an ASCII proto.
+
+##### Returns:
+
+  A `MetaGraphDef` proto.
+
+
+- - -
+
+### `tf.train.import_meta_graph(meta_graph_or_file)` {#import_meta_graph}
+
+Recreates a Graph saved in a `MetaGraphDef` proto.
+
+This function takes a `MetaGraphDef` protocol buffer as input. If
+the argument is a file containing a `MetaGraphDef` protocol buffer,
+it constructs a protocol buffer from the file content. The function
+then adds all the nodes from the `graph_def` field to the
+current graph, recreates all the collections, and returns a saver
+constructed from the `saver_def` field.
+
+In combination with `export_meta_graph()`, this function can be used to
+
+* Serialize a graph along with other Python objects such as `QueueRunner`
+  and `Variable` into a `MetaGraphDef`.
+
+* Restart training from a saved graph and checkpoints.
+
+* Run inference from a saved graph and checkpoints.
+
+```Python
+...
+# Create a saver.
+saver = tf.train.Saver(...variables...)
+# Remember the training_op we want to run by adding it to a collection.
+tf.add_to_collection('train_op', train_op)
+sess = tf.Session()
+for step in xrange(1000000):
+    sess.run(train_op)
+    if step % 1000 == 0:
+        # Saves checkpoint, which by default also exports a meta_graph
+        # named 'my-model-global_step.meta'.
+        saver.save(sess, 'my-model', global_step=step)
+```
+
+Later we can continue training from this saved `meta_graph` without building
+the model from scratch.
+
+```Python
+with tf.Session() as sess:
+  new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
+  new_saver.restore(sess, 'my-save-dir/my-model-10000')
+  # tf.get_collection() returns a list. In this example we only want the
+  # first one.
+  train_op = tf.get_collection('train_op')[0]
+  for step in xrange(1000000):
+    sess.run(train_op)
+```
+
+NOTE: Restarting training from a saved `meta_graph` only works if the
+device assignments have not changed.
+
+##### Args:
+
+
+* <b>`meta_graph_or_file`</b>: `MetaGraphDef` protocol buffer or filename (including
+  the path) containing a `MetaGraphDef`.
+
+##### Returns:
+
+  A saver constructed from `saver_def` in `MetaGraphDef` or None.
+
+  A None value is returned if no variables exist in the `MetaGraphDef`
+  (i.e., there are no variables to restore).
+
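A small standalone sketch of `export_meta_graph` using the arguments documented above; the variable, collection name, and output path are made up, and it is assumed the call picks up the default graph when `graph_def` is not passed.

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([10]), name="v")
tf.add_to_collection("my_vars", v)

# Write a MetaGraphDef restricted to the listed collections as a
# human-readable text proto, and also get it back as a Python object.
meta_graph_def = tf.train.export_meta_graph(
    filename="/tmp/my_model.meta",
    collection_list=["my_vars"],
    as_text=True)

# The same file can later be loaded with:
#   saver = tf.train.import_meta_graph("/tmp/my_model.meta")
```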
