About this project
The main motivations for this project are as follows:
1. Maintain an up-to-date learning resource that integrates important information related to NLP research, such as:
   - state-of-the-art results
   - emerging concepts and applications
   - new benchmark datasets
   - code/dataset releases
   - etc.
2. Create a friendly and open resource to help guide researchers and anyone interested in learning about modern techniques applied to NLP
3. A collaborative project where expert researchers can suggest changes (e.g., incorporate SOTA results) based on their recent findings and experimental results
Table of Contents
Introduction
Distributed Representation
Contextualized Word Embeddings
RNN for word-level classification
RNN for sentence-level classification
Parallelized Attention: The Transformer
Deep Reinforced Models and Deep Unsupervised Learning
Reinforcement learning for sequence generation
Unsupervised sentence representation learning
Performance of Different Models on Different NLP Tasks