I want to predict movie gross collections using data that is available before release, e.g., title, actors, director, studio, critic ratings, genre, etc. I found a way to numerically quantify most of these, but could not quantify the title. The title conveys much useful information, such as whether the movie is a sequel, the length of the title, and the sentiment associated with it. How can I extract this information quantitatively from the title?
Bag-of-words (BoW) is the standard way to create text-based features, though I wouldn't recommend it here: movie titles are short and many contain out-of-context words and named entities, which will make your feature vectors even sparser.
You can create a word2vec encoding of each word in the title and then take the mean of those vectors as your title feature. My favorite tools for this are gensim and Facebook's fastText.
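For instance, a minimal gensim sketch of the averaging idea (the toy title corpus and the title_vector helper here are illustrative, not a tuned setup):

```python
import numpy as np
from gensim.models import Word2Vec

# Toy corpus of tokenized titles; in practice you'd train on a large corpus
# or load pretrained vectors instead.
titles = [["fast", "and", "furious"], ["the", "godfather"], ["toy", "story"]]
model = Word2Vec(titles, vector_size=50, min_count=1, epochs=50)

def title_vector(tokens, model):
    """Mean of the word vectors for a title's in-vocabulary tokens."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

feature = title_vector(["toy", "story"], model)  # 50-dim numeric feature
```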
I am trying to build a content-based recommendation model, but I am stuck on which algorithm to choose. My features are user_id, user_age, gender, location, show_id, show_type (e.g., series, movie), show_duration, user_watched_duration, genre, and rating. The model has to predict the top 5 recommended shows for a given user.
The ratio of users to shows is huge: there are only around 10k users, but each user is mapped to roughly 150 shows on average. So the total number of records is about 10k × 150 = 1,500,000.
Now I am confused about which algorithm to use for this scenario. I read that a content-based method is ideal here, but when I checked SVD from the surprise library, it takes only three features as its input Dataset ("user_id", "item_id", "rating") when fitting the model. I also need to fit other features, such as user_watched_duration relative to show_duration, giving preference to shows the user has watched in full. Similarly, I want the model to consider gender and age when recommending: for example, if young men (males under 20) have watched a show and rated it highly, the show should be recommended to other users in that same category.
Can I train an ordinary classification model like KNN for this? I thought of building a sparse matrix with csr_matrix, with rows corresponding to user_id and columns to show_id, and then computing (user_show_matrix.T * user_show_matrix) to get co-watch counts between shows, as in the sketch below. But the problem with this approach is that I cannot map the other features onto it, right?
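Here is roughly what I had in mind (a sketch with made-up IDs):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Made-up label-encoded data: one entry per (user, show) watch event.
user_ids = np.array([0, 0, 1, 2, 2, 2])
show_ids = np.array([0, 1, 1, 0, 1, 2])
n_users, n_shows = user_ids.max() + 1, show_ids.max() + 1

# Binary user x show interaction matrix.
user_show = csr_matrix(
    (np.ones(len(user_ids)), (user_ids, show_ids)), shape=(n_users, n_shows)
)

# show x show matrix: entry (i, j) counts users who watched both show i and j.
cooccurrence = user_show.T @ user_show
print(cooccurrence.toarray())
```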
So please suggest how to proceed. I have already done data cleaning, label-encoded the categorical features, etc. Will I be able to use any classification algorithm for this? I'd appreciate any references to similar approaches. Thank you!
Given a query and a document, I would like to compute a similarity score using Gensim doc2vec.
Each document consists of multiple fields (e.g., main title, author, publisher, etc.).
For training, is it better to concatenate the document fields and treat each row as a unique document or should I split the fields and use them as different training examples?
For inference, should I treat a query like a document? Meaning, should I call the model (trained over the documents) on the query?
The right answer will depend on your data & user behavior, so you'll want to try several variants.
Just to get some initial results, I'd suggest combining all fields into a single 'document', for each potential query-result, and using the (fast-to-train) PV-DBOW mode (dm=0). That will let you start seeing results, doing either some informal assessment or beginning to compile some automatic assessment data (like lists of probe queries & docs that they "should" rank highly).
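As a concrete (hypothetical) starting point in gensim, with made-up fields:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical records: each potential query-result with its fields.
records = [
    {"id": "doc1", "title": "deep learning basics", "author": "john", "publisher": "acme"},
    {"id": "doc2", "title": "bayesian methods", "author": "jane", "publisher": "acme"},
]

# Combine all fields into one 'document' per record, tagged by its id.
corpus = [
    TaggedDocument(
        words=f"{r['title']} {r['author']} {r['publisher']}".split(),
        tags=[r["id"]],
    )
    for r in records
]

# dm=0 selects the fast-to-train PV-DBOW mode.
model = Doc2Vec(corpus, dm=0, vector_size=100, epochs=40, min_count=1)
```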
You could then try testing the idea of making the fields separate docs – either instead-of, or in addition-to, the single-doc approach.
Another option might be to create specialized word-tokens per field. That is, when 'John' appears in the title, you'd actually preprocess it to be 'title:John', and when in author, 'author:John', etc. (This might be in lieu of, or in addition to, the naked original token.) That could enhance the model to also understand the shifting senses of each token, depending on the field.
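A small sketch of that token scheme (field names here are just examples):

```python
def field_tokens(field_name, text, keep_naked=False):
    """Prefix each token with its field: 'John' in the title -> 'title:John'."""
    prefixed = [f"{field_name}:{tok}" for tok in text.split()]
    return prefixed + text.split() if keep_naked else prefixed

words = field_tokens("title", "John Carter") + field_tokens("author", "John Smith")
# ['title:John', 'title:Carter', 'author:John', 'author:Smith']
```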
Then, providing you have enough training data, & choose other model parameters well, your search interface might also preprocess queries similarly, when the user indicates a certain field, and get improved results. (Or maybe not: it's just an idea to be tried.)
In all cases, if you need precise results – exact matches of well-specified user queries – more traditional searches like exact DB matches/greps, or full-text reverse-indexes, will outperform Doc2Vec. But when queries are more approximate, and results need filling-out with near-in-meaning-even-if-not-in-literal-tokens results, a fuzzier vector document representation may be helpful.
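As for the inference question: yes, treat the query like a short document, apply the same preprocessing, and infer a vector with the trained model. Continuing the PV-DBOW sketch above:

```python
# Same preprocessing as the training docs, then infer a vector for the query.
query_tokens = "bayesian methods".split()
query_vec = model.infer_vector(query_tokens)

# Rank the trained document vectors by cosine similarity to the query.
for doc_id, score in model.dv.most_similar([query_vec], topn=5):
    print(doc_id, score)
```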
I am having a hard time understanding the process of building a bag-of-words. This will be a multiclass supervised classification problem in which a webpage or a piece of text is assigned to one of multiple pre-defined categories. The method I am familiar with for building a bag-of-words for a specific category (for example, 'Math') is to collect a lot of webpages related to Math. From there, I would perform some data processing (such as removing stop words and applying TF-IDF) to obtain the bag-of-words for the category 'Math'.
Question: Another method I am thinking of is to instead search Google for something like 'List of terms related to Math' to build my bag-of-words. Is this method okay?
Another question: In the context of this question, do bag-of-words and corpus mean the same thing?
Thank you in advance!
This is not what bag-of-words is. Bag-of-words is the term for a specific way of representing a given document. Namely, a document (paragraph, sentence, webpage) is represented as a mapping of the form
word: how many times this word is present in a document
for example "John likes cats and likes dogs" would be represented as: {john: 1, likes: 2, cats: 1, and: 1, dogs: 1}. This kind of representation can be easily fed into typical ML methods (especially if one assumes that total vocabulary is finite so we end up with numeric vectors).
Note that this is not about "creating a bag-of-words for a category". A category in typical supervised learning consists of multiple documents, and each of them is independently represented as a bag of words.
In particular, this invalidates your final proposal of asking Google for words related to a category; that is not how typical ML methods work. You collect a lot of documents, represent them as bags of words (or something else), and then perform statistical analysis (build a model) to find the best set of rules to discriminate between categories. These rules will usually not be as simple as "if word X is present, this document is related to Y".
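As an illustration of that pipeline, here is a toy sketch with made-up documents and labels and a stand-in classifier:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled documents; in practice you'd collect many webpages per category.
docs = [
    "algebra and calculus problems",
    "football match results and scores",
    "prime numbers and a short proof",
    "basketball playoffs preview",
]
labels = ["Math", "Sports", "Math", "Sports"]

# Represent documents as (TF-IDF weighted) bags of words, then fit a model
# that learns to discriminate between the categories.
clf = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["geometry exercises with numbers"]))
```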
I am building a recommendation system for dishes. Consider a user who eats french fries and rates them a 5. I then want to give a good rating to all the ingredients the dish is made of. In the case of french fries, the linked words should be "fried", "potato", "junk food", "salty", and so on. From the word Tsatsiki I want to extract "Cucumbers", "Yoghurt", "Garlic"; from Yoghurt, "Milk product"; from Cucumbers, "Vegetable"; and so on.
What is this problem called in Natural Language Processing and is there a way to address it?
I have no data at all, and I am thinking of building a web crawler that searches the web for each dish. I would like the approach to be as little ad hoc as possible, and not necessarily limited to English. Is there a way, perhaps within deep learning, to do this? I would like a dish to be linked not only to its ingredients but also to a category: junk food, vegetarian, Italian food, and so on.
This type of problem is called ontology engineering or ontology building. For an example of a large ontology and how it's structured, you might check out something like YAGO. It seems you are going to be building a boutique ontology for food and then overlaying a ratings system. I don't know of any ontologies of exactly the form you're looking for, but there are relevant resources you should take a look at, for example this OWL-based food ontology and this recipe ontology.
Do you have a recipe like this:
Ingredients:
*Cucumbers
*Garlic
*Yoghurt
or like this:
Grate a cucumber or chop it. Add garlic and yoghurt.
If the former, your features have already been extracted. The next step would be to convert each recipe to a vector and recommend other recipes; the simplest way would be (unsupervised) clustering of recipes.
If the latter, I suspect you can get away with a simple rule of thumb. First, use a part-of-speech tagger to extract all the nouns in the recipe. This will extract all the ingredients and a bit more (e.g., kitchen appliances, cutlery, etc.). Then look up the nouns in a database of food ingredients, such as this one.
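For instance, a quick sketch of that rule of thumb with NLTK's tagger:

```python
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

recipe = "Grate a cucumber or chop it. Add garlic and yoghurt."
tagged = nltk.pos_tag(nltk.word_tokenize(recipe))

# Keep the nouns (tags NN, NNS, NNP, NNPS); then filter them against
# an ingredient database to drop appliances, cutlery, etc.
nouns = [word for word, tag in tagged if tag.startswith("NN")]
print(nouns)  # e.g. ['cucumber', 'garlic', 'yoghurt']
```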
It's well known how collaborative filtering (CF) is used for movie, music, and book recommendations. In the paper 'Collaborative Topic Modeling for Recommending Scientific Articles', among other things, the authors show an example of collaborative filtering applied to ~5,500 users and ~17,000 scientific articles. With ~200,000 user-item pairs, the user-article matrix is obviously highly sparse.
What if you do collaborative filtering with matrix factorization for, say, all news articles shared on Twitter? The matrix will be even sparser than in the scientific-articles case, which makes CF less applicable. Of course, we can do some content-aware analysis (taking into account the text of an article), but that's not my focus. Or we can limit the time window (focusing, say, on all news articles shared in the last day or week) to make the user-article matrix denser. Any other ideas on how to fight the fact that the matrix is very sparse? What are the research results in the area of CF for news article recommendations? Thanks a lot in advance!
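To be concrete, by matrix factorization I mean something like this truncated-SVD sketch (random placeholder data at roughly the sparsity above):

```python
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Placeholder user x article matrix at ~0.1% density.
R = sparse_random(5500, 17000, density=0.001, format="csr", random_state=0)

# Rank-k factorization: R ~= U @ diag(s) @ Vt with k latent factors.
U, s, Vt = svds(R, k=50)

# Predicted affinity of user u for article i from the latent factors.
u, i = 0, 0
pred = (U[u] * s) @ Vt[:, i]
```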
You might try using an object-to-object collaborative filter instead of a user-to-object filter. Age out related pairs (and low-incidence pairs) over time since they're largely irrelevant in your use case anyway.
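As a rough sketch of what I mean by object-to-object (item-item) filtering, with placeholder data:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder binary user x article "shared it" matrix.
users = np.array([0, 0, 1, 1, 2])
items = np.array([0, 1, 1, 2, 0])
R = csr_matrix((np.ones(len(users)), (users, items)), shape=(3, 3))

# Object-to-object similarity: compare article columns, not user rows.
item_sim = cosine_similarity(R.T)

# Score articles for a user by similarity to what they already shared,
# masking out items they have seen.
user_vec = R[0].toarray().ravel()
scores = item_sim @ user_vec
scores[user_vec > 0] = -np.inf
print(np.argsort(scores)[::-1])  # articles ranked for user 0
```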
I did some work on the Netflix Prize back in the day, and quickly found that I could significantly outperform the base model with regard to predicting which items were users' favorites. Unfortunately, since it's basically a rank model rather than a scalar predictor, I didn't have RMSE values to compare.
I know this method works because I wrote a production version of this same system. My early tests showed that, given a task wherein 50% of users' top-rated movies were deleted, the object-to-object model correctly predicted (i.e., "replaced") about 16x more of users' actual favorites than a basic slope-one model. Plus the table size is manageable. From there it's easy to include a profitability weight against the sort order, etc. depending on your application.
Hope this helps!