Natural Language Processing (NLP) is the endeavor of transferring human-level understanding of human language and interaction into a machine.
Human interactions carry enormous amounts of information, spanning diverse topics, tones, words, and emotions. Because interpretation varies widely across geographical locations and cultures, this results in a massive amount of data.

Problem with conventional Machine Learning Models
Unstructured and unlabeled data are difficult for machines to process, demanding extensive effort and a variety of models. They also require greater computational power, more time, and handcrafted features built through careful analysis.
Conventional machine learning models such as Naïve Bayes and decision trees, combined with representations like bag-of-words (BOW) and n-grams, were used to tackle these problems, but with unremarkable results.

Why Deep Learning Models?
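To make the conventional baseline concrete, here is a minimal sketch of the bag-of-words plus Naïve Bayes approach mentioned above: a multinomial Naïve Bayes classifier with Laplace smoothing, written from scratch in pure Python. The toy training sentences and labels are invented for illustration only.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train multinomial Naive Bayes on (text, label) pairs using BOW counts."""
    class_tokens = defaultdict(list)
    for text, label in docs:
        class_tokens[label].append(text.lower().split())
    vocab = {w for toks_list in class_tokens.values()
             for toks in toks_list for w in toks}
    total_docs = sum(len(v) for v in class_tokens.values())
    priors, word_counts = {}, {}
    for label, toks_list in class_tokens.items():
        priors[label] = len(toks_list) / total_docs
        word_counts[label] = Counter(w for toks in toks_list for w in toks)
    return priors, word_counts, vocab

def predict(model, text):
    """Pick the label maximizing log P(label) + sum log P(word | label)."""
    priors, word_counts, vocab = model
    best_label, best_lp = None, float("-inf")
    for label in priors:
        # Laplace smoothing: add 1 to every word count.
        denom = sum(word_counts[label].values()) + len(vocab)
        lp = math.log(priors[label])
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Invented toy data, for illustration only.
docs = [
    ("great movie loved it", "pos"),
    ("wonderful and great acting", "pos"),
    ("terrible movie hated it", "neg"),
    ("boring and awful plot", "neg"),
]
model = train_nb(docs)
print(predict(model, "loved it great"))  # → pos
```

The point of the sketch is the limitation the survey alludes to: every feature is a raw word count that must be engineered by hand, and the model sees no word order or context, which is precisely what deep learning representations address.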
State-of-the-art deep learning models attempt to overcome the complications mentioned above. They learn an in-depth representation of language, identify meaningful information in a text for further processing, and construct features at multiple levels.
This survey discusses the state-of-the-art deep learning models for performing several fundamental NLP tasks, the benchmark datasets used for these tasks, the evaluation metrics for assessing the models, and provides a comparative study of the different models.

Link: https://medium.com/swlh/natural-language-processing-advancements-by-deep-learning-a-survey-b53032c4b5bc