This is NLP_with_MentoringProgram with DeepNLP.
Most of the material comes from implementing NLP papers with PyTorch.
- Using the Naver sentiment movie corpus v1.0
- Hyper-parameters were selected arbitrarily.
| Model | Train ACC (120,000) | Validation ACC (30,000) | Test ACC (50,000) |
|---|---|---|---|
| Baseline (Feed-Forward) | 92.33% | - | 81.29% |
| SenCNN | 92.22% | 86.81% | 86.48% |
| SenCNN (Ryan) | 92.53% | - | 82.99% |
| SenCNN (SM) | 92.30% | 84.98% | 84.42% |
| SenCNN (JMKIM) | 94.56% | 86.27% | 85.85% |
| CharCNN | - | - | - |
| ConvRec | - | - | - |
| VDCNN | - | - | - |
| SAN | - | - | - |
- Convolutional Neural Networks for Sentence Classification (SenCNN; see the sketch after this list)
- Character-level Convolutional Networks for Text Classification (CharCNN)
- Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers (ConvRec)
- Very Deep Convolutional Networks for Text Classification (VDCNN)
- A Structured Self-attentive Sentence Embedding (SAN)
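
Below is a minimal sketch of the SenCNN idea (Kim, 2014): parallel 1-D convolutions over word embeddings with max-over-time pooling. The vocabulary size, embedding dimension, filter widths, and dropout rate here are illustrative assumptions, not the exact hyper-parameters behind the numbers in the table above.

```python
import torch
import torch.nn as nn


class SenCNN(nn.Module):
    """Minimal CNN for sentence classification (Kim, 2014 style).

    All sizes below are illustrative assumptions, not the repo's
    actual hyper-parameters.
    """

    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One convolution per filter width (3, 4, 5), 100 feature maps each.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5)]
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(3 * 100, num_classes)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        x = self.embedding(tokens)        # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)             # (batch, embed_dim, seq_len)
        # Max-over-time pooling per filter width, then concatenate.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        feats = torch.cat(pooled, dim=1)  # (batch, 300)
        return self.fc(self.dropout(feats))


# Usage: classify a batch of two padded token-id sequences.
logits = SenCNN()(torch.randint(0, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 2])
```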
- Using the Question_pair dataset from songys
- Hyper-parameters were selected arbitrarily.
- Most of these approaches are evaluated on the Stanford SNLI corpus.
- Learning Sentence Similarity with Siamese Recurrent Architectures (see the sketch after this list)
- Fine-Tuned LM-Pretrained Transformer
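
A minimal sketch of the Siamese recurrent idea (MaLSTM-style): one shared LSTM encodes both sentences, and similarity is the exponentiated negative Manhattan distance between the final hidden states, giving a score in (0, 1]. All sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SiameseLSTM(nn.Module):
    """Siamese recurrent similarity sketch; sizes are assumptions."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # The SAME LSTM encodes both sentences (shared weights).
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)

    def encode(self, tokens):              # tokens: (batch, seq_len)
        _, (h, _) = self.lstm(self.embedding(tokens))
        return h[-1]                       # final hidden state: (batch, hidden)

    def forward(self, a, b):
        # Manhattan distance between encodings, squashed to (0, 1].
        return torch.exp(-torch.abs(self.encode(a) - self.encode(b)).sum(dim=1))


# Usage: score two question pairs (lengths may differ between sides).
model = SiameseLSTM()
sim = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 10000, (2, 15)))
print(sim)  # two similarity scores in (0, 1]
```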
- Effective Approaches to Attention-based Neural Machine Translation
- Attention Is All You Need (see the attention sketch after this list)
- Bi-directional attention flow for machine comprehension
- Deep contextualized word representations
- Improving Language Understanding by Generative Pre-Training
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Language Models are Unsupervised Multitask Learners
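
As a common reference point for the attention-based papers above, here is a minimal scaled dot-product attention function from "Attention Is All You Need"; the tensor shapes in the usage example are illustrative.

```python
import math

import torch


def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    q, k, v: (batch, seq_len, d_k). mask: optional boolean tensor,
    True where attention is allowed. Shapes are illustrative.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, q_len, k_len)
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v


# Usage: self-attention over a batch of 2 sequences of length 5.
x = torch.randn(2, 5, 16)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # torch.Size([2, 5, 16])
```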
- Using the Naver nlp-challenge corpus for NER
- Hyper-parameters were selected arbitrarily.
- Bidirectional LSTM-CRF Models for Sequence Tagging (see the sketch after this list)
- End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF
- Neural Architectures for Named Entity Recognition
- Character-Aware Neural Language Models
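
A minimal BiLSTM-CRF tagger sketch for NER, assuming the third-party pytorch-crf package (`pip install pytorch-crf`) for the CRF layer; the vocabulary size, tag-set size, and hidden sizes are placeholders, not the repo's actual configuration.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party: pip install pytorch-crf


class BiLSTMCRF(nn.Module):
    """BiLSTM-CRF sketch for sequence tagging; sizes are placeholders."""

    def __init__(self, vocab_size=10000, num_tags=9, embed_dim=128, hidden=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.emission = nn.Linear(2 * hidden, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def emissions(self, tokens):                 # tokens: (batch, seq_len)
        out, _ = self.lstm(self.embedding(tokens))
        return self.emission(out)                # (batch, seq_len, num_tags)

    def loss(self, tokens, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        return -self.crf(self.emissions(tokens), tags, mask=mask)

    def decode(self, tokens, mask):
        # Viterbi decoding: best tag sequence per sentence.
        return self.crf.decode(self.emissions(tokens), mask=mask)


# Usage: train-step loss and decoding on a toy batch.
model = BiLSTMCRF()
tokens = torch.randint(0, 10000, (2, 7))
tags = torch.randint(0, 9, (2, 7))
mask = torch.ones(2, 7, dtype=torch.bool)
print(model.loss(tokens, tags, mask))   # scalar NLL
print(model.decode(tokens, mask))       # two lists of 7 tag ids
```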