I was looking for all the models developed for NLP from word2vec to the present. I am thinking of writing a detailed article on the timeline of NLP models. Please help me here.
It would be great if answers were given as:
Model name, year published, link to the paper, and a little summary of the model.
This repository contains landmark research papers in Natural Language Processing that came out in this century.
Efficient Estimation of Word Representations in Vector Space (word2vec, 2013), Google
Distributed Representations of Words and Phrases and their Compositionality (2013), Google
Distributed Representations of Sentences and Documents (doc2vec, 2014), Google
Enriching Word Vectors with Subword Information (fastText, 2016), Facebook
Bag of Tricks for Efficient Text Classification (fastText, 2016), Facebook
Hierarchical Probabilistic Neural Network Language Model (Morin & Bengio, 2005)
A Scalable Hierarchical Distributed Language Model (Mnih & Hinton, 2008)
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018), Google
Language Models are Unsupervised Multitask Learners (GPT-2, 2019), OpenAI
Wav2Letter (2016), Facebook
Misspelling Oblivious Word Embeddings (2019), Facebook
Refer to this repo: https://github.com/Akshat4112/NLP-research-papers
I am developing a website which will recommend recipes to visitors based on their data. I am collecting data from their profile, website activity, and Facebook.
Currently I have data like [username/userId, rating of recipes, age, gender, type (veg/non-veg), cuisine (Italian/Chinese, etc.)]. Based on these features, I want to recommend new recipes which they have not visited.
I have implemented the ALS (alternating least squares) Spark algorithm. For this we have to prepare a CSV which contains [userId, recipeId, rating] columns, then train on this data and create the model by tuning parameters like lambda, rank, and iterations. The model generates recommendations via PySpark:
model.recommendProducts(userId, numberOfRecommendations)
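For reference, the core of that flow in MLlib looks roughly like this (a sketch: sc is an existing SparkContext, ratings_rows holds (userId, recipeId, rating) tuples parsed from the CSV, and the parameter values are illustrative):

from pyspark.mllib.recommendation import ALS, Rating

# Build an RDD of Rating objects from the parsed CSV rows.
ratings = sc.parallelize([Rating(int(u), int(r), float(s))
                          for (u, r, s) in ratings_rows])

# Train ALS with illustrative rank / iterations / regularization values.
model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)

# Top-5 recipe recommendations for one user.
recs = model.recommendProducts(userId, 5)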
The ALS algorithm accepts only three features: userId, recipeId, and rating. I am unable to include more features (like type, cuisine, gender, etc.) beyond the ones I mentioned above. I want to include those features, then train the model and generate recommendations.
Is there any other algorithm in which I can include the above parameters and generate recommendations?
Any help would be appreciated. Thanks.
Yes, there are a couple of other algorithms. For your case, I would suggest the Naive Bayes algorithm.
https://en.wikipedia.org/wiki/Naive_Bayes_classifier
Since you are working on a web application, I guess a JS solution would come in handy for you.
(simple) https://www.npmjs.com/package/bayes
or for example:
(a bit more powerful) https://www.npmjs.com/package/naivebayesclassifier
There are algorithms called recommender systems in machine learning. Among these are content-based recommender systems, mainly used to recommend products/movies based on customer reviews. You can apply the same approach, using customer reviews, to recommend recipes (see the sketch after these links). For a better understanding of this algorithm, refer to these links:
https://www.youtube.com/watch?v=Bv6VkpvEeRw&list=PL0Smm0jPm9WcCsYvbhPCdizqNKps69W4Z&index=97
https://www.youtube.com/watch?v=2uxXPzm-7FY
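As a concrete illustration of the content-based idea, here is a minimal Python sketch; the recipe attribute strings and the liked-recipes list are placeholders:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# Represent each recipe by its attribute text (toy data).
recipes = {
    'r1': 'veg Italian pasta',
    'r2': 'nonveg Chinese noodles',
    'r3': 'veg Chinese stir fry',
}
user_liked = ['r1']   # recipes this user rated highly

ids = list(recipes)
vec = TfidfVectorizer()
X = vec.fit_transform(recipes[i] for i in ids)

# User profile = mean vector of the recipes the user liked.
profile = np.asarray(X[[ids.index(r) for r in user_liked]].mean(axis=0))

# Rank unseen recipes by cosine similarity to the profile.
scores = cosine_similarity(profile, X).ravel()
for rid, s in sorted(zip(ids, scores), key=lambda t: -t[1]):
    if rid not in user_liked:
        print(rid, round(s, 3))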
You can go with powerful classification algorithms like:
-> SVM: works very well if you have a large number of attributes.
-> Logistic Regression: if you have a huge amount of customer data.
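As a rough illustration of that framing, here is a hedged sketch using scikit-learn; the column names, the toy data, and the like/dislike threshold are all assumptions, not part of your ALS pipeline:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy user/recipe feature rows with ratings.
df = pd.DataFrame({
    'gender':  ['M', 'F', 'F', 'M'],
    'type':    ['veg', 'nonveg', 'veg', 'veg'],
    'cuisine': ['Italian', 'Chinese', 'Italian', 'Chinese'],
    'rating':  [5, 2, 4, 1],
})
df['liked'] = (df['rating'] >= 4).astype(int)   # assumed threshold

# One-hot encode the categorical features, then classify.
pre = ColumnTransformer([
    ('onehot', OneHotEncoder(handle_unknown='ignore'),
     ['gender', 'type', 'cuisine']),
])
model = Pipeline([('pre', pre), ('clf', LogisticRegression())])
model.fit(df[['gender', 'type', 'cuisine']], df['liked'])

# Recommend the recipes with the highest predicted like-probability.
print(model.predict_proba(df[['gender', 'type', 'cuisine']])[:, 1])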
You are looking for recommender systems using algorithms like collaborative filtering. I would suggest you go through Prof. Andrew Ng's short videos on the collaborative filtering algorithm, low-rank matrix factorization, and building recommender systems. They are part of Coursera's Machine Learning course offered by Stanford University.
The course link:
https://www.coursera.org/learn/machine-learning
You can check week 9 for the content related to recommender systems.
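If you'd like to see the core idea from those videos in code, here is a toy low-rank matrix factorization trained by gradient descent; the rating matrix, rank, and hyperparameters are purely illustrative:

import numpy as np

R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5]], dtype=float)   # 0 = unrated
mask = R > 0
k, lr, reg = 2, 0.01, 0.1               # rank, step size, regularization

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(2000):
    E = mask * (R - U @ V.T)    # error on observed entries only
    U += lr * (E @ V - reg * U)
    V += lr * (E.T @ U - reg * V)

print(np.round(U @ V.T, 2))    # predicted ratings fill in the zeros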
I'm learning machine learning with TensorFlow. I've worked with some datasets like the Iris flower data and Boston housing data, but all of those values were floats.
Now I'm looking for a dataset whose values are in string format, to practice on. Can you give me some suggestions?
Thanks
Here are just two easy places to start:
The TensorFlow website has three very good tutorials dealing with word embeddings, language modeling, and sequence-to-sequence models. I don't have enough reputation to link them directly, but you can easily find them there. They provide you with some TensorFlow code for dealing with human language.
Moreover, if you want to build a model from scratch and you only need the dataset, try the NLTK corpora. They are easy to download directly from code.
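For example (a minimal sketch, assuming nltk is installed):

import nltk
nltk.download('movie_reviews')          # one-off corpus download
from nltk.corpus import movie_reviews
print(movie_reviews.words()[:10])       # raw string tokens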
Facebook's ParlAI project lists a good number of datasets for Natural Language Processing tasks.
The IMDB reviews dataset is also a classic example, as are Amazon reviews for sentiment analysis. If you take a look at the kernels posted on Kaggle, you'll get a lot of insight about the dataset and the task.
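If you want string-valued data directly in TensorFlow, the tensorflow-datasets package (an assumption: you may need to pip install tensorflow-datasets first) can serve the IMDB reviews as raw text:

import tensorflow_datasets as tfds

train = tfds.load('imdb_reviews', split='train')
for example in train.take(1):
    # Each example carries the raw review text plus a 0/1 label.
    print(example['text'].numpy()[:80], example['label'].numpy())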
Dear all, I am working on a project in which I have to categorize research papers into their appropriate fields using the titles of the papers. For example, if the phrase "computer network" occurs somewhere in the title, then the paper should be tagged as related to the concept "computer network". I have 3 million titles of research papers, so I want to know how I should start. I have tried to use tf-idf but could not get useful results. Does someone know of a library to do this task easily? Kindly suggest one. I shall be thankful.
If you don't know the categories in advance, then it's not classification but clustering. Basically, you need to do the following:
Select algorithm.
Select and extract features.
Apply algorithm to features.
Quite simple. You only need to choose the combination of algorithm and features that fits your case best.
When talking about clustering, there are several popular choices. K-means is considered one of the best and has an enormous number of implementations, even in libraries not specialized in ML. Another popular choice is the Expectation-Maximization (EM) algorithm. Both of them, however, require an initial guess about the number of clusters. If you can't estimate the number of clusters even approximately, other algorithms, such as hierarchical clustering or DBSCAN, may work better for you (see the discussion here).
As for features, the words themselves normally work fine for clustering by topic. Just tokenize your text, then normalize and vectorize the words (see this if you don't know what that all means). A short sketch follows.
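Here is a minimal sketch of that tokenize/vectorize/cluster pipeline using scikit-learn; the titles and the number of clusters are illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

titles = [
    'a survey of computer network protocols',
    'deep learning for image recognition',
    'routing in wireless computer networks',
    'convolutional networks for vision tasks',
]

# TfidfVectorizer tokenizes, lowercases and weights words in one step.
X = TfidfVectorizer(stop_words='english').fit_transform(titles)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster id per title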
Some useful links:
Clustering text documents using k-means
NLTK clustering package
Statistical Machine Learning for Text Classification with scikit-learn and NLTK
Note: all links in this answer are about Python, since it has really powerful and convenient tools for this kind of task, but if you have another language of preference, you will most probably be able to find similar libraries for it too.
For Python, I would recommend NLTK (Natural Language Toolkit), as it has some great tools for converting your raw documents into features you can feed to a machine learning algorithm. To start out, you can try a simple word-frequency model (bag of words) and later move on to more complex feature-extraction methods (string kernels). You can then classify the data with SVMs (Support Vector Machines), for example using LibSVM, a widely used SVM package; a rough sketch follows.
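Here is a short bag-of-words-plus-SVM sketch using scikit-learn's LinearSVC (which wraps liblinear rather than LibSVM, but the idea is the same); the titles and labels are placeholders:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

titles = ['a survey of computer networks',
          'secure routing in computer networks',
          'gene expression in yeast',
          'protein folding simulation']
labels = ['networks', 'networks', 'biology', 'biology']

# Bag-of-words counts feeding a linear SVM.
clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(titles, labels)
print(clf.predict(['network traffic analysis']))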
Given that you do not know the number of categories in advance, you could use a tool called OntoGen. The tool takes a set of texts, does some text mining, and tries to discover clusters of documents. It is a semi-supervised tool, so you must guide the process a little, but it does wonders. The final product of the process is an ontology of topics.
I encourage you to give it a try.
Are there any Artificial Intelligence algorithms which can be applied to improve document clustering results? The clustering algorithm can be hierarchical or any other.
Thank you
The Wikipedia article on document clustering includes a link to a 2007 paper by Nicholas Andrews and Edward Fox from Virginia Tech called "Recent Developments in Document Clustering". I'm not sure specifically what you would class as an "Artificial Intelligence algorithm", but scanning the paper's contents shows that they look at vector space models, extensions to k-means, generative algorithms, spectral clustering, dimensionality reduction, phrase-based models, and comparative analysis. It's a pretty mathematically dense treatment, but they are careful to include references for the algorithms they discuss.
Clustering is indeed a type of problem in the AI domain, and if you want to go one level down, you may say it is in the machine learning field. In this sense AI does not improve document clustering, but solves it! Dumbledad mentions some basic alternatives, but the type of data you have in each case may be better treated with a different algorithm. There are a lot of k-means-based approaches to the problem; careful seeding is needed in such cases. Spherical k-means (search for the paper by Dhillon) is a simple and standard approach (a rough sketch follows below). Other extensions are k-synthetic prototypes.
Subspace clustering is also worth a try, and in general, if you want to go further than the "document clustering" literature, look for "clustering in high-dimensional and sparse data spaces".
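A note on the spherical k-means suggestion above: it can be approximated by L2-normalizing TF-IDF vectors and running ordinary k-means, since Euclidean distance on unit vectors is monotone in cosine distance. A minimal sketch, with illustrative documents:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = ['stock markets fell sharply today',
        'the team won the championship game',
        'investors fear rising interest rates',
        'the striker scored twice in the final']

# TfidfVectorizer already L2-normalizes by default; the explicit
# normalize() just makes the unit-length intent visible.
X = normalize(TfidfVectorizer().fit_transform(docs))
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))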
I am trying to implement a naive Bayesian approach to find the topic of a given document or stream of words. Is there a naive Bayesian approach that I might be able to look up for this?
Also, I am trying to improve my dictionary as I go along. Initially, I have a bunch of words that map to topics (hard-coded). As words other than the ones already mapped start occurring, I want to add them to the mappings based on their occurrences, thereby improving the dictionary and learning new words that map to topics, as well as updating the probabilities of existing words.
How should I go about doing this? Is my approach the right one?
Which programming language would be best suited for the implementation?
Existing Implementations of Naive Bayes
You would probably be better off just using one of the existing packages that supports document classification using naive Bayes, e.g.:
Python - To do this using the Python-based Natural Language Toolkit (NLTK), see the Document Classification section in the freely available NLTK book (a short sketch follows this list).
Ruby - If Ruby is more of your thing, you can use the Classifier gem. Here's sample code that detects whether Family Guy quotes are funny or not-funny.
Perl - Perl has the Algorithm::NaiveBayes module, complete with a sample usage snippet in the package synopsis.
C# - C# programmers can use nBayes. The project's home page has sample code for a simple spam/not-spam classifier.
Java - Java folks have Classifier4J. You can see a training and scoring code snippet here.
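For the Python route, here is a short sketch in the spirit of the NLTK book's document-classification chapter, using the bundled movie_reviews corpus as stand-in data:

import random
import nltk
from nltk.corpus import movie_reviews

nltk.download('movie_reviews')   # one-off corpus download

docs = [(list(movie_reviews.words(fid)), cat)
        for cat in movie_reviews.categories()
        for fid in movie_reviews.fileids(cat)]
random.shuffle(docs)

# Use presence of the 500 most frequent words as boolean features.
freq = nltk.FreqDist(w.lower() for w in movie_reviews.words())
top_words = [w for w, _ in freq.most_common(500)]

def features(words):
    ws = set(words)
    return {'contains({})'.format(w): (w in ws) for w in top_words}

featuresets = [(features(d), c) for d, c in docs]
classifier = nltk.NaiveBayesClassifier.train(featuresets[100:])
print(nltk.classify.accuracy(classifier, featuresets[:100]))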
Bootstrapping Classification from Keywords
It sounds like you want to start with a set of keywords that are known to cue for certain topics and then use those keywords to bootstrap a classifier.
This is a reasonably clever idea. Take a look at the paper Text Classification by Bootstrapping with Keywords, EM and Shrinkage by McCallum and Nigam (1999). Following this approach, they were able to improve classification accuracy from the 45% they got by using hard-coded keywords alone to 66% using a bootstrapped naive Bayes classifier. For their data, the latter is close to human levels of agreement, as people agreed with each other about document labels 72% of the time.
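To make the bootstrapping idea concrete, here is a minimal sketch loosely following that recipe; the keyword lists, documents, and single relabeling pass are all illustrative simplifications of the EM procedure in the paper:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

keywords = {                       # hard-coded seed keywords (assumed)
    'sports':   ['game', 'team', 'score'],
    'politics': ['election', 'senate', 'vote'],
}
docs = [
    'the team won the game with a late score',
    'the senate vote follows a tight election',
    'a rookie joined the team before the game',
]

# Step 1: seed labels via crude substring keyword matching.
seed_texts, seed_labels = [], []
for d in docs:
    for topic, words in keywords.items():
        if any(w in d.lower() for w in words):
            seed_texts.append(d)
            seed_labels.append(topic)
            break

# Step 2: train naive Bayes on the keyword-labeled seed set.
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(seed_texts), seed_labels)

# Step 3: relabel everything with the classifier; iterating this
# step (retrain on the predicted labels) gives the EM-style loop.
print(clf.predict(vec.transform(docs)))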