Clustering Based on Multi-Word Similarity [closed] - machine-learning

I am trying to implement clustering for bank transaction data. The dataset contains Vendor and MCC columns, which are strings. There are too many distinct values in those columns, so I want to cluster them using a metric such as cosine similarity on the Vendor or MCC text (for example, 'Hotel A' and 'Hotel B' could end up in the same cluster). I don't think Levenshtein distance is sufficient for this.
I am thinking about finding a corpus for MCC and building a model to find similarity between the words. Is this method a good fit for the problem? If not, how can I handle those columns? If yes, is there a suitable corpus?
Data source: https://data.world/oklahoma/purchase-card-fiscal-year

I've done something similar to this problem using GloVe word embeddings.
One way to cluster a categorical text feature is to convert each unique value into an averaged word vector (after removing stopwords). You can then compare the vectors via cosine similarity and apply a clustering method that works on the resulting similarity matrix. If that approach is too computationally expensive, convert the values to vectors and retrieve only the top-n closest items by cosine similarity.
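To make this concrete, here is a minimal sketch of that approach, assuming gensim (with the pre-trained glove-wiki-gigaword-100 vectors) and scikit-learn are available; the vendor strings and cluster count are made up for illustration:

import numpy as np
import gensim.downloader as api
from sklearn.cluster import AgglomerativeClustering

glove = api.load("glove-wiki-gigaword-100")  # pre-trained vectors (downloads on first use)

def avg_vector(text):
    # Average the GloVe vectors of the in-vocabulary tokens.
    tokens = [t for t in text.lower().split() if t in glove]
    if not tokens:
        return np.zeros(glove.vector_size)
    return np.mean([glove[t] for t in tokens], axis=0)

vendors = ["Hotel A", "Hotel B", "Joe's Diner", "Airport Parking"]
X = np.array([avg_vector(v) for v in vendors])

# Agglomerative clustering can work on cosine distances directly.
# (On scikit-learn < 1.2, use affinity="cosine" instead of metric="cosine".)
labels = AgglomerativeClustering(n_clusters=2, metric="cosine", linkage="average").fit_predict(X)
print(dict(zip(vendors, labels)))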

Related

Text classification for unlabeled data [closed]

I want to classify data into two classes based on the given parameters. My data consists of publications from two different sources, and when comparing dataset1 with dataset2 I want to classify each comparison as "match" or "non-match". The datasets are unlabeled text data with five attributes (id, title, authors, venue, year), so if I apply unsupervised algorithms they will not produce my target classes directly. On the other hand, supervised algorithms need labelled data, which is unavailable and time-consuming to create.
What is the best and easiest method to do that in Python?
The best, easiest, and (AFAIK) optimal method is as follows:
Use a clustering algorithm like K-Means to cluster your data points into 2 clusters.
Now manually examine a few samples from one of the clusters and label it accordingly.
Assume you randomly picked 10 data points from the first cluster and they all fall in the match class. Now all you need to do is label every data point in that cluster as match and every data point in the other cluster as non-match.
This would give you the required classification.
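As a rough sketch of what that could look like in Python (the record fields and the difflib-based similarity features below are illustrative choices, not something fixed by the question):

from difflib import SequenceMatcher
import numpy as np
from sklearn.cluster import KMeans

def pair_features(rec1, rec2):
    # Simple similarity features for a (dataset1, dataset2) record pair.
    sim = lambda a, b: SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return [sim(rec1["title"], rec2["title"]),
            sim(rec1["authors"], rec2["authors"]),
            1.0 if rec1["year"] == rec2["year"] else 0.0]

pairs = [  # toy candidate pairs; in practice, generate these from the two datasets
    ({"title": "Deep Learning", "authors": "Goodfellow, Bengio", "year": 2016},
     {"title": "Deep learning", "authors": "I. Goodfellow, Y. Bengio", "year": 2016}),
    ({"title": "Deep Learning", "authors": "Goodfellow, Bengio", "year": 2016},
     {"title": "Statistics 101", "authors": "Smith", "year": 1999}),
]
X = np.array([pair_features(r1, r2) for r1, r2 in pairs])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Inspect ~10 pairs from one cluster; if they are matches, label that whole
# cluster "match" and the other cluster "non-match".
print(labels)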

Please tell me how to split a numerical column or node in a Decision Tree [closed]

Please tell me how to split a node that has numerical values; suppose, for example, that my parent node is temperature and it has numerical values like 45.20, 33.10, 11.00, etc. How should I split such numerical values? If I had a categorical column like temperature with low and high values, I would send low to the left side and high to the right side. But how should I split the column if it is numeric?
There are discretization methods for converting numerical features into categories, e.g., for use in decision trees. There are many supervised and unsupervised algorithms, from simple binning to information-theoretic approaches like the one Fayyad & Irani proposed. Follow this tutorial to learn how to discretize your features. The algorithm by Fayyad and Irani is explained in this course.
Disclaimer: I am the instructor of that course.
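For intuition, here is a toy sketch of how a tree can choose a numeric threshold: sort the values, try the midpoints between consecutive distinct values, and keep the one with the highest information gain (the entropy-based criterion that Fayyad & Irani build on). The temperature data is made up:

import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_threshold(values, labels):
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    base, best = entropy(y), (None, 0.0)
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        t = (v[i] + v[i - 1]) / 2  # candidate split: midpoint between neighbors
        gain = base - (i * entropy(y[:i]) + (len(y) - i) * entropy(y[i:])) / len(y)
        if gain > best[1]:
            best = (t, gain)
    return best

temps = [45.20, 33.10, 11.00, 27.50, 40.00]
hot = ["yes", "yes", "no", "no", "yes"]
print(best_threshold(temps, hot))  # -> (30.3, ~0.97): split at temperature <= 30.3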

What are the algorithms which could be used to match sentences? [closed]

Let's say we have a list of 50 sentences and an input sentence. How can I choose the sentence from the list that is closest to the input sentence?
I have tried many methods/algorithms, such as averaging the word2vec vector representations of each token of a sentence and then taking the cosine similarity of the resulting vectors.
For example, I want the algorithm to give a high similarity score between "what is the definition of book?" and "please define book".
I am looking for a method (probably a combination of methods) which:
1. captures semantics
2. captures syntax
3. gives different weights to tokens with different roles (e.g., in the first example 'what' and 'is' should get lower weights)
I know this might be a bit general but any suggestion is appreciated.
Thanks,
Amir
Before computing a distance between sentences, you need to clean them. For that:
Lemmatize your words to get the root of each word, so your sentence "what is the definition of book" would become "what be the definition of book".
Delete all prepositions, forms of "to be", and other words that carry no meaning, so "what be the definition of book" becomes "definition book".
Then transform your sentences into numeric vectors using the tf-idf method or word2vec.
Finally, you can compute the distance between your sentences using the cosine between the vectors: if the cosine distance is small, your two sentences are similar.
Hope that helps.
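A minimal sketch of that pipeline with scikit-learn (the stopword removal here uses the built-in English list, and lemmatization is omitted for brevity; a fuller version might run NLTK's WordNetLemmatizer first):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = ["please define book", "how do I cook rice?"]
query = "what is the definition of book?"

vec = TfidfVectorizer(stop_words="english")  # drops 'what', 'is', 'the', ...
X = vec.fit_transform(candidates + [query])

# Last row is the query; compare it against every candidate sentence.
sims = cosine_similarity(X[-1], X[:-1]).ravel()
print(candidates[sims.argmax()], sims)  # highest cosine = closest sentence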
Your sentences are too sparse to compare the two documents directly. Aggressive morphological transformations (such as stemming, lemmatization, etc.) might help some, but will probably fall short given your examples.
What you could do is compare the 'search results' of the two sentences in a large document collection with a number of methods. According to the distributional hypothesis, similar sentences should occur in similar contexts (see the distributional hypothesis, but also Rocchio's algorithm, co-occurrence, and word2vec). Those contexts (when gathered in a smart way) could be large enough to do some comparison (such as cosine similarity).

Data Mining - K nearest neighbor [closed]

This is my homework. I'm not asking you to do my homework here; I need a hint to keep going.
I know what the k-nearest-neighbor algorithm is; however, I have always seen it on graphs, not like this. Can you tell me what I should do? I've been trying to figure out how to start, but I could not. I would appreciate a small hint.
This assignment helps you understand the steps in KNN.
KNN is based on distances: find the K nearest neighbors and then, for a classification problem, take a vote among them.
Your training data can be considered as (x1, x2, y): age and profit are the features (x1, x2), while BUY or NOT BUY is the label/output y.
To apply KNN you need to calculate distances, which are based on the features. Since the two features have different units (years, USD), you should convert them into unit-free features, which is called normalization (part 4.1 in your handout). After that, a feature vector will look like (-0.4, -0.8); the numbers should be between -1 and 0 if the suggested formula in part 4.1 is used.
Then use the normalized features to calculate the distances (Euclidean in the handout) between every training data point and the company you are interested in (normalized as well). This is what part 4.2 requires.
The last step is to pick the K nearest neighbors and decide BUY or NOT BUY from the outputs of those neighbors (a simple majority vote, maybe?).
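Here is a toy sketch of those steps in Python; the sample companies are invented, and the min-max normalization below is only illustrative (your handout's part 4.1 formula may scale differently):

import numpy as np
from collections import Counter

train = np.array([[25, 50.0], [35, 120.0], [45, 80.0], [52, 30.0]])  # (age, profit)
labels = ["BUY", "BUY", "NOT BUY", "NOT BUY"]
query = np.array([40, 100.0])  # the company you are interested in

# Normalize each feature using the training data's min and max.
lo, hi = train.min(axis=0), train.max(axis=0)
norm = lambda x: (x - lo) / (hi - lo)

# Euclidean distances in the normalized feature space.
dists = np.linalg.norm(norm(train) - norm(query), axis=1)
k = 3
nearest = np.argsort(dists)[:k]
vote = Counter(labels[i] for i in nearest).most_common(1)[0][0]
print(nearest, vote)  # majority vote among the k nearest neighbors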

Decision tree learning: Basic idea [closed]

As given in the textbook Machine Learning by Tom M. Mitchell, the first statement about decision trees states that "decision tree learning is a method for approximating discrete-valued functions". Could someone kindly elaborate on this statement, and perhaps justify it with an example? Thanks in advance :)
As a simple example, consider rows of observations with two attributes, where the training data contains a classification (the discrete values) based on a combination of those attributes. The learning phase has to determine which attributes to consider in which order, so that the tree effectively achieves the desired modelling.
For instance, consider a model that will answer "What should I order for dinner?" given the inputs of desired price range, cuisine, and spiciness. The training data will contain your history from a variety of restaurant experiences. The model will have to determine which order of tests is most effective at reaching a good entrée classification: eliminate restaurants based on cuisine first, then consider price, and finally tune the choice according to Scoville units; or perhaps check the spiciness first and start by dumping choices that aren't spicy enough before moving on to the other two factors.
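To make that concrete in code: a fitted decision tree is literally a function from inputs to a discrete set of output values. The integer encodings and menu below are made up for illustration:

from sklearn.tree import DecisionTreeClassifier

# Features: (price_range, cuisine, spiciness) encoded as small integers.
X = [[0, 0, 2], [1, 1, 0], [2, 0, 1], [0, 1, 2], [1, 0, 0]]
y = ["curry", "pasta", "tacos", "curry", "pasta"]  # discrete-valued outputs

tree = DecisionTreeClassifier().fit(X, y)
print(tree.predict([[0, 0, 2]]))  # maps a discrete input to one of the discrete values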
Does that explain what you need?
