What classifier to use when performing unsupervised learning? (image processing)

I am new to machine learning and have a basic question. Since I am weak in the math behind these algorithms, I find this difficult to understand.
When you are given the task of designing a classifier (keep it simple: a two-class classifier) using unsupervised learning (no training samples), how do you decide what type of classifier (linear or non-linear) to use? If we do not know this, then feature selection (which indirectly means knowing what the data set is) becomes very critical.
Am I thinking in the right direction, or is there something big that I don't know? Insight into this topic is greatly appreciated.

Classification is by definition a supervised learning problem: such models require examples of points within given classes to learn how to separate the classes from one another. If you are simply looking for relationships between unlabeled data points, you are solving an unsupervised problem. Look into clustering algorithms; k-means is where a lot of people start.
Hope this helps!
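For concreteness, here is a minimal sketch of the k-means route in Python with scikit-learn; the synthetic two-blob data and the choice of k = 2 are illustrative assumptions, not part of the original question:

```python
# Minimal k-means sketch: group unlabeled points into two clusters.
# The synthetic blobs and k=2 are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate unlabeled 2-D data (we discard the true labels to stay unsupervised).
X, _ = make_blobs(n_samples=300, centers=2, random_state=0)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)  # a cluster id per point, not a class label

print(kmeans.cluster_centers_)  # the two learned centroids
```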

This is a huge problem. Yes, the term "clustering" is the best entry point for googling this, but I understand that you want to train a classifier, where "training" means optimizing an objective function with parameters. The first choice is definitely not a discriminative classifier (such as a linear one), because for those the standard maximum-likelihood (ML) objective does not work without labels. If you absolutely want to use linear classifiers, then you have to tweak the ML objective, or better, use another objective (approximating the classifier risk). An easier choice is to look at generative models instead, such as HMMs, Naive Bayes, Latent Dirichlet Allocation, ... for which the ML objective works without labels.
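As a rough illustration of that last point, here is a sketch of fitting a generative model on unlabeled data by maximum likelihood, using a two-component Gaussian mixture from scikit-learn as a stand-in (the mixture model and the synthetic data are assumptions for the example):

```python
# Sketch: fit a generative model (Gaussian mixture) by maximum likelihood
# on unlabeled data, then use it as a two-"class" assigner.
# The mixture model and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Unlabeled data drawn from two overlapping Gaussians.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(4.0, 1.0, (200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM maximizes likelihood, no labels needed
assignments = gmm.predict(X)       # hard component assignments
posteriors = gmm.predict_proba(X)  # soft responsibilities per component
```

Note that the component ids are arbitrary: without labels the model can only separate the two groups, not name them.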

Related

How to handle imbalanced data in general

I have been working on a case study where the data is highly imbalanced. We have been taught that we can handle imbalanced data by either undersampling the majority class or oversampling the minority class.
I wanted to ask if there is any other way/method that can be used to handle imbalanced data. This question is more conceptual than programming-related.
For example, I was thinking we could put some weight on the minority class (conceptually) to make the model emphasize identifying patterns in the minority class. I don't know how that can be done, but this concept should work in theory.
Feel free to put forward crazy ideas too.
Your weights idea is not far off. This can be done; in fact, most scikit-learn models give you the option to specify class weights. However, this is often not enough for very extreme cases (e.g. a 95%/5% split or more extreme).
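A minimal sketch of that class-weight option (the toy data, the 95%/5% split, and the 10:1 weight ratio are illustrative assumptions):

```python
# Sketch: penalize mistakes on the minority class via class weights.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy data: roughly a 95%/5% split (illustrative assumption).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# Explicit weights make minority errors cost 10x (the ratio is an assumption);
# class_weight="balanced" derives weights from class frequencies instead.
clf = LogisticRegression(class_weight={0: 1, 1: 10}).fit(X, y)
clf_auto = LogisticRegression(class_weight="balanced").fit(X, y)
```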
There are specific oversampling techniques such as SMOTE (and related techniques) which go a step further than classic oversampling and generate synthetic samples based on a k-nearest-neighbors algorithm.
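A minimal SMOTE sketch, assuming the imbalanced-learn package is installed (`pip install imbalanced-learn`); the toy data mirrors the one above:

```python
# Sketch: SMOTE oversampling with the imbalanced-learn package.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# fit_resample returns a rebalanced dataset where synthetic minority samples
# are interpolated between each minority point and its k nearest minority neighbors.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(y.mean(), y_res.mean())  # minority share before vs. after (~0.05 -> 0.5)
```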
If the classes are extremely imbalanced, a "classic" classification approach may not be enough and you might have to look into anomaly detection algorithms.
Just use a strong consistent classifier; see https://arxiv.org/abs/2201.08528
Generally speaking, before diving into technical solutions (under/oversampling, SMOTE, ...), I think you need to consider the business KPI you are predicting and whether there is a proxy that could help reduce the disparity between the classes.
You can also think about models that have weight parameters and can penalize the majority class.
You can check this article, which explains from a conceptual point of view how to deal with imbalanced data in general.

Machine learning classifiers for different dataset parameters

Why does the behaviour of different classifiers differ for different data? Based on what parameters can we decide on a good classifier for a particular dataset? For some datasets naive Bayes gives better accuracy than an SVM classifier, and for other datasets the SVM performs better than naive Bayes. Why is it so? What is the reason?
Those are completely different classifiers. If one classifier were always better than the other, why would you need the "bad" one?
First Google hit about when SVMs are not the best choice:
https://www.quora.com/For-what-kind-of-classification-problems-is-SVM-a-bad-approach
There is no general answer to this question. To understand which classifier to use when, you will need to understand the algorithm behind the classification procedure.
For instance, logistic regression models the log-odds of the class as a linear function of the features. It is generally useful when no single parameter is a uniquely deciding factor but a combined weighting of the factors makes the difference, for instance in text classification.
A decision tree, on the other hand, splits on the parameter that gives the most information. So if you have a set of parameters that is highly correlated with the label, then it makes more sense to use decision-tree-based classifiers.
SVMs work by identifying adequate hyperplanes. They are generally useful when the data cannot be separated in the original space, but projecting it into a higher-dimensional space makes it easy to separate. This is a nice tutorial on SVMs: https://blog.statsbot.co/support-vector-machines-tutorial-c1618e635e93
In short, the only way to learn which classifier will be better in which situation is to understand how they work and then figure out whether they fit your situation.
Another, crude way is to try every classifier and pick the best one, as in the sketch below, but I don't think you are interested in that.
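For completeness, here is what that crude approach might look like: a cross-validation loop over a few classifiers (the synthetic dataset and the model list are illustrative assumptions):

```python
# Crude but honest: cross-validate several classifiers on your dataset and compare.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

for name, clf in [("naive bayes", GaussianNB()),
                  ("svm", SVC()),
                  ("decision tree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```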

When to use supervised or unsupervised learning?

What are the fundamental criteria for choosing supervised or unsupervised learning?
When is one better than the other?
Are there specific cases where you can only use one of them?
Thanks
If you have a labeled dataset you can use both. If you have no labels, you can only use unsupervised learning.
It's not a question of "better"; it's a question of what you want to achieve. E.g. clustering data is usually unsupervised: you want the algorithm to tell you how your data is structured. Categorizing is supervised, since you need to teach your algorithm what is what in order to make predictions on unseen data.
See 1.
On a side note: these are very broad questions. I suggest you familiarize yourself with some ML foundations.
A good podcast, for example: http://ocdevel.com/podcasts/machine-learning
Very good book / notebooks by Jake VanderPlas: http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb
It depends on your needs. If you have a set of existing data including the target values that you wish to predict (labels), then you probably need supervised learning (e.g. is something true or false, or does this data represent a fish, a cat, or a dog? Simply put, you already have examples of right answers and you are just telling the algorithm what to predict). You also need to distinguish whether you need classification or regression. Classification is when you need to categorize the predicted values into given classes (e.g. is it likely that this person develops diabetes, yes or no? In other words, discrete values), and regression is when you need to predict continuous values (1, 2, 4.56, 12.99, 23, etc.). There are many supervised learning algorithms to choose from (k-nearest neighbors, naive Bayes, SVM, ridge, ...).
On the contrary, use unsupervised learning if you don't have the labels (or target values). You are simply trying to identify clusters in the data as it comes (e.g. k-means, DBSCAN, spectral clustering, ...).
So it depends, and there is no exact answer, but generally speaking you need to:
Collect and inspect your data. You need to know your data, and only then decide which way to go or which algorithm will best suit your needs.
Train your algorithm. Be sure to have clean, good data, and bear in mind that in the case of unsupervised learning you can skip this step, as you don't have the target values; you test your algorithm right away.
Test your algorithm. Run it and see how well it behaves. In the case of supervised learning you can use held-out data to evaluate how well your algorithm is doing.
There are many books online about machine learning and many online lectures on the topic as well.
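As a minimal sketch of that collect/train/test workflow in the supervised case (the iris data and the k-NN model are illustrative assumptions):

```python
# Minimal supervised workflow matching the steps above: inspect, train, test.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                     # 1. collect and see your data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier().fit(X_tr, y_tr)          # 2. train
print(clf.score(X_te, y_te))                          # 3. test on held-out data
```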
It depends on the dataset that you have.
If you have a target feature in hand, then you should go for supervised learning. If you don't, then it is an unsupervised problem.
Supervised learning is like teaching the model with examples. Unsupervised learning is mainly used to group similar data, and it plays a major role in feature engineering.
Thank you.

Machine learning algorithm for few samples and features

I intend to build a yes/no classifier. The problem is that the data does not come from me, so I have to work with what I have been given. I have around 150 samples; each sample contains 3 features, which are continuous numeric variables. I know the dataset is quite small. I would like to ask you two questions:
A) What would be the best machine learning algorithm for this? An SVM? A neural network? Everything I have read seems to require a big dataset.
B) I could make the dataset a little bigger by adding some samples that do not contain all the features, only one or two. I have read that you can use sparse vectors in this case; is this possible with every machine learning algorithm? (I have seen them in SVMs.)
Thanks a lot for your help!
My recommendation is to use a simple and straightforward algorithm, like a decision tree or logistic regression, although the ones you refer to should work equally well.
The dataset size shouldn't be a problem, given that you have far more samples than variables. But having more data always helps.
Naive Bayes is a good choice for a situation where there are few training examples. Compared to logistic regression, Ng and Jordan showed that Naive Bayes converges towards its optimum performance faster with fewer training examples. (See section 4 of this book chapter.) Informally speaking, Naive Bayes models a joint probability distribution, which performs better in this situation.
Do not use a decision tree in this situation. Decision trees have a tendency to overfit, a problem that is exacerbated when you have little training data.
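A minimal sketch of the Naive Bayes suggestion on data shaped like the question's (~150 samples, 3 continuous features; the synthetic data is an assumption):

```python
# Sketch: Gaussian Naive Bayes on a small dataset (150 samples, 3 features).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=150, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=5)  # cross-validation matters with little data
print(scores.mean())
```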

Is supervised learning synonymous with classification and unsupervised learning synonymous with clustering?

I am a beginner in machine learning and recently read about supervised and unsupervised machine learning. It looks like supervised learning is synonymous with classification and unsupervised learning is synonymous with clustering; is that so?
No.
Supervised learning is when you know the correct answers (targets). Depending on their type, it might be classification (categorical targets), regression (numerical targets), or learning to rank (ordinal targets) (this list is by no means complete; there might be other types that I have either forgotten or am unaware of).
On the contrary, in the unsupervised learning setting we don't know the correct answers, and we try to infer or learn some structure from the data, be it cluster membership or a low-dimensional approximation (dimensionality reduction; actually, one might think of clustering as an extreme one-dimensional case of dimensionality reduction). Again, this might be far from complete, but the general idea is about hidden structure that we try to discover in the data.
Supervised learning is when you have labeled training data. In other words, you have a well-defined target to optimize your method for.
Typical (supervised) learning tasks are classification and regression: learning to predict categorical values (classification), numerical values (regression), or ranks (learning to rank).
Unsupervised learning is an odd term, because most of the time the methods aren't "learning" anything. What would they learn from? You don't have training data.
There are plenty of unsupervised methods that don't fit the "learning" paradigm well. This includes dimensionality reduction methods such as PCA (which by far predates any "machine learning"; PCA was proposed in 1901, long before the computer!). Many of these are just data-driven statistics (as opposed to parametric statistics). This includes most cluster analysis methods, outlier detection, ... To understand these, it's better to step out of the "learning" mindset. Many people have trouble understanding these approaches because they always think in the "minimize objective function f" mindset common in learning.
Consider for example DBSCAN, one of the most popular clustering algorithms. It does not fit the learning paradigm well. It can nicely be interpreted as a graph-theoretic construct: (density-)connected components. But it doesn't optimize any objective function. It computes the transitive closure of a relation, but there is no function maximized or minimized.
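A minimal usage sketch, assuming scikit-learn's DBSCAN implementation; the two-moons data and the parameter values are illustrative assumptions:

```python
# Sketch: DBSCAN as density-connected components. eps and min_samples
# are the two user parameters; the values here are illustrative assumptions.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
# labels holds one id per connected component; -1 marks noise points.
# Nothing is optimized: the result follows directly from the density graph.
```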
Similarly, APRIORI finds frequent itemsets: combinations of items that occur more than minsupp times, where minsupp is a user parameter. It's an extremely simple definition, but the search space can be painfully large when you have large data. The brute-force approach just doesn't finish in acceptable time, so APRIORI uses a clever search strategy to avoid unnecessary hard-disk accesses, computations, and memory use. But there is no "worse" or "better" result as in learning. Either the result is correct (complete) or not; there is nothing to optimize on the result (only on the algorithm runtime).
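A minimal sketch, assuming the mlxtend implementation of APRIORI (`pip install mlxtend`); the tiny transaction table and the minsupp value are made up for illustration:

```python
# Sketch: frequent itemsets with the mlxtend implementation of APRIORI.
import pandas as pd
from mlxtend.frequent_patterns import apriori

# One-hot encoded transactions: rows are baskets, columns are items.
df = pd.DataFrame({"bread":  [1, 1, 0, 1],
                   "butter": [1, 1, 0, 0],
                   "milk":   [0, 1, 1, 1]}, dtype=bool)

frequent = apriori(df, min_support=0.5, use_colnames=True)  # minsupp = 0.5
print(frequent)  # all itemsets occurring in at least 50% of baskets
```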
Calling these methods "unsupervised learning" squeezes them into a mindset they don't belong in. They are not "learning" anything. Neither optimizes a function, nor uses labels, nor uses any kind of feedback. They just SELECT a certain set of objects from the database: APRIORI selects columns that frequently have a 1 at the same time; DBSCAN selects connected components in a density graph. Either the result is correct, or it is not.
Some (but by far not all) unsupervised methods can be formalized as an optimization problem, at which point they become similar to popular supervised learning approaches. For example, k-means is a minimization problem. PCA is a minimization problem, too; closely related to linear regression, actually. But it is the other way around: many machine learning tasks are transformed into an optimization problem and can be solved with general-purpose statistical tools that just happen to be highly popular in machine learning (e.g. linear programming). All the "learning" part is then wrapped into the way the data is transformed prior to feeding it into the optimizer. And in some cases, like for PCA, a non-iterative way to compute the optimum solution was found (in 1901). So in these cases, you don't need the usual optimization hammer at all.
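To illustrate that non-iterative solution, here is a sketch of PCA computed in closed form via the SVD, with random data standing in for a real dataset:

```python
# Sketch: PCA in closed form via SVD; no iterative optimizer involved.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # placeholder data (assumption)

Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

components = Vt                          # principal directions (rows)
explained_variance = S**2 / (len(X) - 1) # variance along each direction
```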
