Link to Universal Sentence Encoder paper from its TF-hub module page.
PiperOrigin-RevId: 192634552
TensorFlow Hub Authors authored and vbardiovskyg committed Apr 13, 2018
1 parent d7edc53 commit b203945
Showing 1 changed file with 9 additions and 1 deletion.
10 changes: 9 additions & 1 deletion docs/modules/google/universal-sentence-encoder/1.md
@@ -22,7 +22,8 @@ semantic similarity, and the results can be seen in the [example notebook](https
To learn more about text embeddings, refer to the [TensorFlow Embeddings](https://www.tensorflow.org/programmers_guide/embedding)
documentation. Our encoder differs from word-level embedding models in that we
train on a number of natural language prediction tasks that require modeling the
meaning of word sequences rather than just individual words.
meaning of word sequences rather than just individual words. Details are
available in the paper "Universal Sentence Encoder" [1].

#### Example use
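
The usage snippet itself is collapsed in this diff. As a minimal sketch of loading the module with the TF 1.x `hub.Module` API (the module handle below is assumed from this doc's path on tfhub.dev, not taken from the collapsed lines):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load the encoder by its handle (assumed from this doc's path).
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/1")

# The module maps a batch of raw strings to 512-dimensional embeddings.
embeddings = embed([
    "The quick brown fox jumps over the lazy dog.",
    "I am a sentence for which I would like to get its embedding."])

with tf.Session() as session:
  session.run([tf.global_variables_initializer(), tf.tables_initializer()])
  print(session.run(embeddings))
```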

@@ -66,3 +67,10 @@ that can embed sentences. The Universal Sentence Encoder was partly trained with
custom text classification tasks in mind. It can be trained to perform a wide
variety of classification tasks often with a very small amount of labeled
examples.
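
As a hedged illustration of this classification use case (not code from this page; the feature key, data, and hyperparameters are made up), the encoder can back a small `tf.estimator` classifier via `hub.text_embedding_column`:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Toy labeled data; any (sentence, label) pairs work.
train_sentences = np.array([
    "What a wonderful film!",
    "The plot made no sense at all."])
train_labels = np.array([1, 0])

# Feed raw strings through the encoder as a feature column.
# trainable=True would fine-tune the encoder weights as well.
embedding_column = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/universal-sentence-encoder/1",
    trainable=False)

estimator = tf.estimator.DNNClassifier(
    hidden_units=[64],
    feature_columns=[embedding_column],
    n_classes=2)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"sentence": train_sentences}, y=train_labels,
    shuffle=True, num_epochs=None)

estimator.train(input_fn=train_input_fn, steps=100)
```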

## References

[1] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco,
Rhomni St. John, Noah Constant, Mario Guajardo-Céspedes, Steve Yuan, Chris Tar,
Yun-Hsuan Sung, Brian Strope, Ray Kurzweil. [Universal Sentence Encoder](https://arxiv.org/abs/1803.11175).
arXiv:1803.11175, 2018.
