\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    citecolor=blue
}

\section{A Neural Probabilistic Language Model} % (fold)

    \textbf{Goal of the paper:}
    Knowing the basic structure of a sentence, we should be able to create a new sentence by replacing parts of the old sentence with interchangeable elements.

    \textbf{Challenges:}
    \begin{itemize}
        \item
        The main computational bottleneck is evaluating the activations of the output layer, since the prediction must be normalized over the entire vocabulary (see the sketch below).
    \end{itemize}

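    As a rough illustration (the notation here is assumed, not copied from the notes above): the next-word distribution is normalized over the whole vocabulary $V$, so computing the output activations costs time proportional to $|V|$ for every prediction:
    % Assumed notation: y_i is the output-layer activation for word i;
    % the normalizing sum runs over the entire vocabulary V.
    \[
        P(w_t = i \mid w_{t-n+1}, \dots, w_{t-1})
            = \frac{e^{y_i}}{\sum_{j \in V} e^{y_j}},
        \qquad \mathrm{cost} \propto |V|.
    \]
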
    \textbf{Optimizations:}
    \begin{itemize}
        \item
        Data-parallel processing (different processors working on different subsets of the data) with asynchronous updates to parameters held in shared memory; a toy sketch follows this list.
    \end{itemize}

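    A minimal, hypothetical sketch of that pattern (not the paper's implementation; the squared-error objective and all names are assumptions): each worker sweeps its own shard of the data and applies lock-free gradient updates to a shared parameter vector.
\begin{verbatim}
# Toy sketch of data-parallel, asynchronous updates to shared parameters.
# Threads keep the example short; the paper's setting is multiple
# processors sharing memory, not Python threads.
import threading
import numpy as np

DIM = 10
w = np.zeros(DIM)                        # shared parameter vector

def worker(shard, lr=0.01, epochs=5):
    for _ in range(epochs):
        for x, y in shard:
            grad = (w @ x - y) * x       # gradient of a squared-error loss
            w[:] = w - lr * grad         # asynchronous update, no locking

rng = np.random.default_rng(0)
data = [(rng.standard_normal(DIM), float(rng.standard_normal()))
        for _ in range(400)]
shards = [data[i::4] for i in range(4)]  # one subset of data per worker
threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("learned parameters:", w)
\end{verbatim}
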
% section a_neural_probabilistic_language_model (end)


\section{A Hierarchical Neural Autoencoder for Paragraphs and Documents} % (fold)
\label{sec:a_hierarchical_neural_autoencoder_for_paragraphs_and_documents}

    \begin{itemize}
        \item
        Attempts to build a paragraph embedding from the underlying word and sentence embeddings, and then decodes that embedding in an attempt to reconstruct the original paragraph (formalized below).
        \item
        For this to work, the embedded representation must preserve the syntactic, semantic, and discourse-related properties of the text.
        \item
        A hierarchical LSTM is used to preserve sentence structure.
    \end{itemize}

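    In notation of our own (not taken from the paper), the hierarchy can be summarized as follows: a word-level LSTM maps each sentence $s$ to an embedding $e_s$, a sentence-level LSTM maps the sequence of sentence embeddings to a paragraph embedding $e_p$, and the decoder reverses both steps to reconstruct the paragraph.
    % Assumed notation: w^s_1, ..., w^s_{n_s} are the words of sentence s,
    % and s_1, ..., s_m are the sentences of paragraph p.
    \[
        e_{s} = \mathrm{LSTM}_{word}(w^{s}_{1}, \dots, w^{s}_{n_s}),
        \qquad
        e_{p} = \mathrm{LSTM}_{sent}(e_{s_1}, \dots, e_{s_m}),
        \qquad
        \hat{p} = \mathrm{decode}(e_{p}).
    \]
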
    \textbf{Implementation:}
    \begin{itemize}
        \item
        One LSTM layer converts the words of each sentence into a vector representation of that sentence; a second LSTM layer then converts the sequence of sentence vectors into a paragraph representation (see the code sketch after this list).
        \item
        Parameters are estimated by maximizing the likelihood of the outputs given the inputs, as in standard sequence-to-sequence models.
        \item
        The likelihoods of the constituent output words are computed with softmax functions.
        \item
        Attention models built on the hierarchical autoencoder could be useful for dialogue systems, since the model explicitly captures discourse structure.
    \end{itemize}

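    A minimal encoder-side sketch of the two-level architecture, assuming PyTorch and made-up dimensions (an illustration of the idea, not the authors' code):
\begin{verbatim}
# Hypothetical two-level (hierarchical) LSTM encoder:
# words -> sentence embeddings -> paragraph embedding.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.sent_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, paragraph):
        # paragraph: list of sentences, each a LongTensor of word ids
        sent_embs = []
        for sent in paragraph:
            emb = self.embed(sent).unsqueeze(0)     # (1, words, emb_dim)
            _, (h, _) = self.word_lstm(emb)         # final hidden state
            sent_embs.append(h[-1])                 # (1, hidden_dim)
        sents = torch.stack(sent_embs, dim=1)       # (1, sents, hidden_dim)
        _, (h, _) = self.sent_lstm(sents)
        return h[-1]                                # paragraph embedding

# Tiny usage example with a made-up three-sentence paragraph.
enc = HierarchicalEncoder(vocab_size=1000)
paragraph = [torch.tensor([4, 8, 15]),
             torch.tensor([16, 23]),
             torch.tensor([42, 7, 9, 2])]
print(enc(paragraph).shape)                         # torch.Size([1, 128])
\end{verbatim}
    A matching decoder would run the two LSTMs in the reverse direction (paragraph embedding to sentence states to word predictions), trained with the softmax likelihood described above.
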
% section a_hierarchical_neural_autoencoder_for_paragraphs_and_documents (end)


\newpage

\bibliographystyle{unsrt}