Natural Language Processing Specialization Link.
This specialization is a series of 4 courses offered by DeepLearning.AI that focuses on cutting-edge NLP techniques.
Course 1: NLP with Classification and Vector Spaces Link
Objective: Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies, and translate words.
Learn to extract features from text into numerical vectors, then build a sentiment analysis model for tweets using logistic regression!
Assignment: Logistic Regression Link.
- Sentiment analysis of tweets with Logistic Regression.
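A minimal sketch of the idea: train logistic regression with stochastic gradient descent on toy features. The features and data here are invented for illustration; the actual assignment derives the positive/negative word counts from a frequency dictionary built over the tweet corpus.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=1000):
    # Stochastic gradient descent on the binary cross-entropy loss
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            err = pred - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w

# Toy features per tweet: [bias, positive-word count, negative-word count]
X = [[1, 3, 0], [1, 2, 1], [1, 0, 2], [1, 1, 3]]
y = [1, 1, 0, 0]
w = train_logistic(X, y)

def predict(x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)))
```

A prediction above 0.5 is read as positive sentiment, below 0.5 as negative.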
Learn the theory behind Bayes' rule for conditional probabilities, then apply it toward building a Naive Bayes model for sentiment analysis
Assignment: Naive Bayes Link
- Sentiment analysis of tweets with Naive Bayes model.
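A sketch of the Naive Bayes classifier with Laplace smoothing, on invented toy documents (the assignment works on tokenized tweets):

```python
from collections import Counter
import math

def train_nb(docs, labels):
    # docs: list of token lists; labels: 1 (positive) or 0 (negative)
    pos, neg = Counter(), Counter()
    for doc, lab in zip(docs, labels):
        (pos if lab else neg).update(doc)
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    V = len(vocab)
    # Laplace-smoothed log-likelihood ratio for each word
    loglik = {w: math.log((pos[w] + 1) / (n_pos + V))
                 - math.log((neg[w] + 1) / (n_neg + V)) for w in vocab}
    logprior = math.log(labels.count(1) / labels.count(0))
    return logprior, loglik

def predict_nb(logprior, loglik, doc):
    # Positive if the summed log-likelihood ratios plus prior exceed 0
    score = logprior + sum(loglik.get(w, 0.0) for w in doc)
    return 1 if score > 0 else 0
```

Working in log space avoids the numeric underflow that multiplying many small probabilities would cause.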
Understand vector spaces and ways to utilize Euclidean distance and cosine similarity to find similar and dissimilar items with embeddings and PCA.
Assignment: Vector Space Model & PCA Link.
- Utilize vector spaces and word embeddings to build a function that predicts the capital city given the country name.
- The project also includes building a function that computes PCA.
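The capital-prediction trick is vector arithmetic plus cosine similarity; a sketch with made-up 2-d embeddings (real word vectors have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 2-d embeddings, just to illustrate the arithmetic
emb = {
    "France": [1.0, 2.0], "Paris":  [1.1, 2.9],
    "Italy":  [1.2, 1.8], "Rome":   [1.3, 2.7],
    "Spain":  [0.9, 1.5], "Madrid": [1.0, 2.4],
}

def get_capital(country1, capital1, country2, emb):
    # capital2 ≈ capital1 - country1 + country2 ("king - man + woman" style)
    target = [c - a + b for c, a, b in
              zip(emb[capital1], emb[country1], emb[country2])]
    candidates = {w: v for w, v in emb.items()
                  if w not in (country1, capital1, country2)}
    return max(candidates, key=lambda w: cosine_similarity(candidates[w], target))
```

The answer is whichever remaining word vector lies closest (by cosine similarity) to the analogy's target point.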
Learn to transform word vectors and assign them to subsets using locality sensitive hashing, in order to perform machine translation and document search.
Assignment: Machine Translation and LSH (Locality-Sensitive Hashing) Link
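LSH with random hyperplanes derives a hash from the signs of a vector's dot products with random planes, so nearby vectors tend to land in the same bucket; a toy sketch:

```python
import random

def make_planes(dim, n_planes, seed=0):
    # Each plane is represented by a random normal vector through the origin
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def hash_vector(v, planes):
    # One bit per plane: 1 if v lies on the positive side, else 0
    bits = 0
    for p in planes:
        dot = sum(x * y for x, y in zip(v, p))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

planes = make_planes(dim=3, n_planes=8)
```

Searching only within a query's bucket turns nearest-neighbour lookup into an approximate but much faster operation, which is what makes document search over large corpora practical.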
Learn about autocorrect, minimum edit distance, and dynamic programming, then build a spellchecker to correct misspelled words!
Assignment: Autocorrection Link.
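Minimum edit distance via dynamic programming: `D[i][j]` holds the cheapest way to turn the first `i` characters of the source into the first `j` of the target. The costs below (insert/delete 1, substitute 2) are one common convention:

```python
def min_edit_distance(source, target, ins_cost=1, del_cost=1, sub_cost=2):
    m, n = len(source), len(target)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # deleting everything from source
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):          # inserting everything into target
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            r = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + del_cost,      # delete
                          D[i][j - 1] + ins_cost,      # insert
                          D[i - 1][j - 1] + r)         # substitute / match
    return D[m][n]
```

A spellchecker ranks candidate corrections by this distance (often combined with word frequencies).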
Learn about Markov and hidden Markov models as well as the Viterbi algorithm to create part-of-speech (POS) tags for a Wall Street Journal text corpus.
Assignment: POS Tagging Link.
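The Viterbi algorithm finds the most probable tag sequence in time proportional to sentence length times tags squared; a sketch on an invented two-tag toy model (all probabilities below are made up):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s]: best log-probability of any path ending in state s at time t
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], 1e-12))
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s].get(obs[t], 1e-12)))
            back[t][s] = prev
    # Trace the best path backwards from the most likely final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ["NN", "VB"]
start_p = {"NN": 0.7, "VB": 0.3}
trans_p = {"NN": {"NN": 0.3, "VB": 0.7}, "VB": {"NN": 0.6, "VB": 0.4}}
emit_p = {"NN": {"dogs": 0.5, "run": 0.1}, "VB": {"dogs": 0.1, "run": 0.6}}
```

Log probabilities are used so long sentences don't underflow to zero.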
Learn about how N-gram language models work by calculating sequence probabilities, then build an autocomplete language model using a text corpus from Twitter!
Assignment: Autocomplete Link.
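A bigram version of the idea, with add-k (Laplace) smoothing for the probabilities; the corpus here is a toy stand-in for the Twitter data:

```python
from collections import Counter, defaultdict

def build_bigram_counts(corpus):
    # Count word -> next-word frequencies, with sentence boundary tokens
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            counts[prev][word] += 1
    return counts

def next_word_prob(counts, prev_word, word, k=1.0):
    # Add-k smoothed bigram probability P(word | prev_word)
    vocab = set(counts) | {w for c in counts.values() for w in c}
    follow = counts[prev_word]
    return (follow[word] + k) / (sum(follow.values()) + k * len(vocab))

def autocomplete(counts, prev_word):
    # Suggest the most frequent continuation observed in the corpus
    return counts[prev_word].most_common(1)[0][0]
```

Smoothing keeps unseen bigrams from getting probability zero, which matters when scoring whole sequences.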
Learn methods to build word embeddings from scratch.
Assignment: Word Embeddings Link.
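A compact NumPy sketch of a CBOW model, the architecture this course uses: predict each center word from the average of its context vectors. Dimensions, learning rate, and the toy corpus are invented, and the full softmax output (no negative sampling) keeps the code short:

```python
import numpy as np

def train_cbow(corpus, dim=8, window=1, lr=0.05, epochs=100, seed=0):
    vocab = sorted({w for s in corpus for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    rng = np.random.default_rng(seed)
    W_in = rng.normal(0, 0.1, (len(vocab), dim))    # context embeddings
    W_out = rng.normal(0, 0.1, (dim, len(vocab)))   # output projection
    for _ in range(epochs):
        for s in corpus:
            for t, center in enumerate(s):
                ctx = [idx[s[j]] for j in range(max(0, t - window),
                                                min(len(s), t + window + 1))
                       if j != t]
                if not ctx:
                    continue
                h = W_in[ctx].mean(axis=0)               # hidden layer
                z = h @ W_out
                p = np.exp(z - z.max()); p /= p.sum()    # softmax
                err = p.copy(); err[idx[center]] -= 1.0  # dLoss/dz
                grad_h = W_out @ err                     # backprop to hidden
                W_out -= lr * np.outer(h, err)
                W_in[ctx] -= lr * grad_h / len(ctx)
    return {w: W_in[idx[w]] for w in vocab}
```

After training, the rows of the input matrix serve as the word embeddings; words that appear in similar contexts end up with similar vectors.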
