From 93062d9096a057db85158bbda21d525c9efe9f56 Mon Sep 17 00:00:00 2001
From: "A. Unique TensorFlower"
Date: Tue, 6 Dec 2016 13:46:07 -0800
Subject: [PATCH] Update generated Python Op docs.

Change: 141220667
---
 tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md | 2 +-
 .../tf.contrib.legacy_seq2seq.sequence_loss_by_example.md  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md b/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md
index 43c051d9ec9610..2350982acfa2f6 100644
--- a/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md
+++ b/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md
@@ -529,7 +529,7 @@ Weighted cross-entropy loss for a sequence of logits (per example).
 * `weights`: List of 1D batch-sized float-Tensors of the same length as logits.
 * `average_across_timesteps`: If set, divide the returned cost by the total
   label weight.
-* `softmax_loss_function`: Function (inputs-batch, labels-batch) -> loss-batch
+* `softmax_loss_function`: Function (labels-batch, inputs-batch) -> loss-batch
   to be used instead of the standard softmax (the default if this is None).
 * `name`: Optional name for this operation, default: "sequence_loss_by_example".

diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md
index fdb38a45eed878..a7b6c99c9a9f46 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md
@@ -10,7 +10,7 @@ Weighted cross-entropy loss for a sequence of logits (per example).
 * `weights`: List of 1D batch-sized float-Tensors of the same length as logits.
 * `average_across_timesteps`: If set, divide the returned cost by the total
   label weight.
-* `softmax_loss_function`: Function (inputs-batch, labels-batch) -> loss-batch
+* `softmax_loss_function`: Function (labels-batch, inputs-batch) -> loss-batch
   to be used instead of the standard softmax (the default if this is None).
 * `name`: Optional name for this operation, default: "sequence_loss_by_example".
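
--
The change above corrects the documented callback signature of
`softmax_loss_function` in `sequence_loss_by_example` from
(inputs-batch, labels-batch) to (labels-batch, inputs-batch): labels
first, logits second. As a minimal sketch of a caller written against
the corrected order -- assuming the TF 1.x-era API in which
`tf.contrib.legacy_seq2seq` exists; the shapes and tensor values below
are arbitrary toy choices, not from the patch:

import tensorflow as tf

def custom_softmax_loss(labels, logits):
    # Argument order matches the corrected docs: labels-batch first,
    # inputs-batch (logits) second, returning a per-example loss-batch.
    return tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)

# Toy shapes, purely for illustration.
batch_size, num_symbols, num_steps = 4, 10, 3
logits = [tf.random_normal([batch_size, num_symbols])
          for _ in range(num_steps)]
targets = [tf.zeros([batch_size], dtype=tf.int32)
           for _ in range(num_steps)]
weights = [tf.ones([batch_size]) for _ in range(num_steps)]

# Per-example weighted cross-entropy over the sequence; shape [batch_size].
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
    logits, targets, weights, softmax_loss_function=custom_softmax_loss)

A caller still passing (logits, labels) in the pre-patch order would
feed integer labels where float logits are expected, so the corrected
documentation matters in practice, not just cosmetically.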