% project-report/cs698_project_report.tex
% section goal (end)

\section{A Primer of Neural Net Models for NLP} % (fold)
\label{sec:a_primer_of_neural_net_models_for_nlp}
\section{A Neural Probabilistic Language Model} % (fold)
\label{sec:a_neural_probabilistic_language_model}

\textbf{Background:}
\begin{itemize}
\item
We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences.
This ensures that semantically similar words end up with nearly equivalent feature vectors, called learned distributed feature vectors.
\item
A challenge with modeling discrete variables such as sentence structure, as opposed to continuous values, is that a continuous-valued function can be assumed to have some form of local smoothness, whereas the same assumption cannot be made for discrete functions.
\item
N-gram models try to achieve a statistical modeling of language by calculating the conditional probability of each possible word that can follow the $n-1$ preceding words.
\item
New sequences of words can be generated by effectively gluing together popular combinations, i.e., $n$-grams with very high frequency counts.
\end{itemize}
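The count-and-glue idea in the bullets above can be sketched with a toy bigram model. This is a minimal illustration, not the paper's method: the corpus, function names, and the greedy "most frequent continuation" rule are assumptions made for the example.

```python
from collections import Counter, defaultdict

def train_ngram(tokens, n=2):
    """Estimate P(w_t | w_{t-n+1}, ..., w_{t-1}) by relative frequency of counts."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context, word = tuple(tokens[i:i + n - 1]), tokens[i + n - 1]
        counts[context][word] += 1
    # Normalize raw counts into conditional probabilities per context.
    return {ctx: {w: c / sum(ws.values()) for w, c in ws.items()}
            for ctx, ws in counts.items()}

def generate(model, start, length):
    """Greedily glue on the most probable continuation (assumes a bigram model)."""
    out = list(start)
    for _ in range(length):
        dist = model.get((out[-1],))
        if not dist:  # unseen context: no continuation available
            break
        out.append(max(dist, key=dist.get))
    return out

corpus = "the cat sat on the mat the cat ran".split()
model = train_ngram(corpus, n=2)
print(model[("the",)])              # {'cat': 0.666..., 'mat': 0.333...}
print(generate(model, ["the"], 3))  # ['the', 'cat', 'sat', 'on']
```

Because the model only ever sees contexts present in the training counts, it cannot generalize to unseen word sequences; this is exactly the sparsity problem the distributed representation is meant to address.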
\textbf{Goal of the paper:}
Knowing the basic structure of a sentence, we should be able to create a new sentence by replacing parts of the old sentence with interchangeable elements.
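Swapping interchangeable elements can be illustrated with a minimal sketch. The 2-d vectors below are fabricated stand-ins for the learned distributed feature vectors, and the words and sentence are invented for the example; nothing here comes from the paper itself.

```python
# Hypothetical 2-d feature vectors standing in for learned distributed
# representations; the values are made up for illustration.
vectors = {
    "cat": (0.9, 0.1),
    "dog": (0.85, 0.15),
    "walking": (0.1, 0.9),
    "running": (0.15, 0.85),
}

def nearest(word):
    """Return the closest other word under squared Euclidean distance."""
    vx, vy = vectors[word]
    return min((w for w in vectors if w != word),
               key=lambda w: (vectors[w][0] - vx) ** 2 + (vectors[w][1] - vy) ** 2)

# Keep the sentence frame, substitute each known word with its nearest neighbor.
sentence = ["the", "cat", "was", "walking", "in", "the", "room"]
swapped = [nearest(w) if w in vectors else w for w in sentence]
print(" ".join(swapped))  # the dog was running in the room
```

Because nearby vectors mark words as interchangeable, one observed sentence implicitly tells the model about the whole neighborhood of sentences reachable by such substitutions.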