I have a pretty simple question. However, I have searched extensively and am unable to find the answer. Is a genetic algorithm considered to be a form of unsupervised learning? I know that the algorithm evolves independently; however, the fitness of each individual in the population is regularly measured (supervised?).
The objective of my algorithm is to optimize a set of heuristic weights via a genetic algorithm.
Thank you for your help!
—
Genetic Algorithms can be used for both supervised and unsupervised learning, e.g.:
Zorana Banković, Slobodan Bojanić, Octavio Nieto, and Atta Badii, "Unsupervised Genetic Algorithm Deployed for Intrusion Detection" (2008).
If you have labeled training data or tagged examples, then you are using supervised training.
From http://en.wikipedia.org/wiki/Unsupervised_learning
In machine learning, the problem of unsupervised learning is that of trying to find hidden structure in unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution. This distinguishes unsupervised learning from supervised learning and reinforcement learning.
From which it's pretty clear that genetic algorithms are not unsupervised, as they are measured against a fitness criterion. Individual mutations may not be supervised, but the system as a whole is supervised, since mutations are either discarded or built upon based on the fitness they give the algorithm.
From http://en.wikipedia.org/wiki/Reinforcement_learning
Reinforcement learning is an area of machine learning inspired by behaviorist psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. The problem, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, statistics, and genetic algorithms.
Which would sort of suggest that genetic algorithms are considered to fall under reinforcement learning.
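To make the role of the fitness function concrete, here is a minimal sketch of a GA optimizing a small set of heuristic weights, as in the question. The evaluate_heuristic function and its target weights are hypothetical stand-ins; in a real application the fitness would come from running your heuristic on actual problem instances. It is this fitness signal that plays the supervisory/reward role discussed above.

```python
import random

# Hypothetical fitness function: scores a weight vector (higher is better).
# In practice this would run your heuristic with the given weights and
# measure how well it performs; the hard-coded target below is only a
# stand-in for illustration.
def evaluate_heuristic(weights):
    target = [0.2, 0.5, 0.3]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, rate=0.1):
    return [w + random.gauss(0, rate) for w in weights]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def genetic_algorithm(pop_size=20, n_weights=3, generations=100):
    population = [[random.random() for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitness signal acts as the "supervision"/reward --
        # individuals are kept or discarded based on their measured fitness.
        population.sort(key=evaluate_heuristic, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: refill the population with mutated offspring.
        children = [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return max(population, key=evaluate_heuristic)

print("best weights:", genetic_algorithm())
```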
—
As per Pedro Domingos in his well-known paper "A Few Useful Things to Know about Machine Learning", he writes that machine learning systems automatically learn programs from data.
But from my experience, we are the ones supplying the algorithms, such as an ANN or SVM, etc.
My question is: how is this "automating automation"?
Could someone shed some light on this with an example?
When you develop a machine learning algorithm, with an ANN or SVM or whatever, you don't tell your program how to solve your problem; you tell it how to learn to solve the problem.
An SVM or ANN is a way to learn a solution to a problem, not a recipe for solving the problem.
So when people say "machine learning systems automatically learn programs from data", they mean that you never programmed a solution to your problem, but rather let the computer learn one.
To quote Wikipedia: "Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed."
https://en.wikipedia.org/wiki/Machine_learning
[Edit]
For example, let's take one of the simplest machine learning algorithms: linear regression in a 2D space.
The aim of this algorithm is to learn a linear function given a dataset of (x, y) pairs, so that when you give your system a new x you get an approximation of what the real y would be.
But when you code a linear regression you never specify the linear function y = ax + b. What you code is a way for the program to deduce it from the dataset.
The linear function y = ax + b is the solution to your problem; the linear regression code is the way you are going to learn that solution.
https://en.wikipedia.org/wiki/Linear_regression
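To make that point concrete, here is a minimal sketch (plain Python, my own illustration rather than the answerer's code): the only thing we write is the procedure for deducing a and b from data; the linear function itself is never hard-coded.

```python
# We never write the solution y = a*x + b ourselves; we write the procedure
# that deduces a and b from the data (closed-form ordinary least squares
# for the 2D case, illustration only).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope a and intercept b from the least-squares formulas.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy dataset roughly following y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)
print(f"learned y = {a:.2f}x + {b:.2f}")  # the "program" learned from data
```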
Machine learning development helps improve business operations as well as business scalability. A number of ML algorithms and artificial intelligence tools have gained tremendous popularity in the business analytics community. The machine learning market has grown thanks to faster and cheaper computational processing, easy availability of data, and affordable data storage.
—
I have already practiced some aspects of machine learning and developed some small projects. Nowadays blogs, articles, and open posts talk about deep learning. I got interested in seeing, practically, what the difference between machine learning and deep learning is, and perhaps in learning the new approaches/techniques called deep learning. I read a few blogs, but conceptually I see that deep learning is a subset of machine learning, and that it's nothing more than neural networks with multiple layers!
I am, however, puzzled as to whether that is the only difference between machine learning and deep learning.
What is the merit of speaking of deep learning rather than machine learning if we only want to talk about neural networks? And if that is the case, why not just call it neural networks, or deep neural networks, to distinguish the classification?
Is there a real difference beyond what I mentioned?
Is there a practical example showing a significant difference that justifies these distinct notions?
Deep learning is a set of ML patterns and tactics for increasing the accuracy of classical ML algorithms such as the MLP, the Naïve Bayes classifier, etc.
One of the earliest and simplest of these tactics is adding hidden layers to increase the network's learning capacity; a more recent one is the convolutional autoencoder.
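As a rough illustration of the "add hidden layers" tactic, here is a minimal sketch comparing a shallow and a deeper MLP. scikit-learn and the synthetic dataset are my assumptions, not something the answer prescribes, and on such a simple task the deeper network will not necessarily win; the point is only that depth is one of the knobs deep learning turns.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; replace with your own task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same algorithm (an MLP); the only change is the number/size of hidden layers.
shallow = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
deeper = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=500, random_state=0)

for name, model in [("shallow", shallow), ("deeper", deeper)]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```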
—
I have learned about several classifiers in machine learning: decision tree, neural network, SVM, Bayesian classifier, k-NN, etc.
Can anyone please help me understand when I should prefer one classifier over another? For example, in which situations (nature of the data set, etc.) should I prefer a decision tree over a neural net, or in which situations might an SVM work better than a Bayesian classifier?
Sorry if this is not a good place to post this question.
Thanks.
This is EXTREMELY related to the nature of the dataset. There are several meta-learning approaches that will tell you which classifier to use, but generally there isn't a golden rule.
If your data is easily separable (it is easy to distinguish entries from different classes), perhaps decision trees or SVMs (with a linear kernel) are good enough. However, if your data needs to be transformed into other [higher-dimensional] spaces, kernel-based classifiers might work well, such as RBF SVMs. SVMs also work better with non-redundant, independent features. When combinations of features are needed, artificial neural networks and Bayesian classifiers work well too.
Yet again, this is highly subjective and strongly depends on your feature set. For instance, having a single feature that is highly correlated with the class might determine which classifier works best. That said, overall, the no-free-lunch theorem says that no classifier is better for everything, but SVMs are generally regarded as the current best bet for binary classification.
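Since there is no golden rule, one practical approach (a sketch, assuming scikit-learn and a built-in dataset as stand-ins for your own data) is simply to cross-validate several candidate classifiers on your dataset and compare:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Built-in dataset as a placeholder for your own feature set.
X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
}

# 5-fold cross-validation gives a rough, dataset-specific ranking.
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```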
—
Recently I learned about the Bayesian linear regression model, but what confuses me is in which situations we should use ordinary linear regression and when to use the Bayesian version. How do the two compare in performance?
And are Bayesian logistic regression and logistic regression the same? I read a paper about using Bayesian probit regression to predict ad CTR, and I wonder why the Bayesian version was used.
In your two cases, linear regression and logistic regression, the Bayesian version carries out the same statistical analysis within the context of Bayesian inference, e.g., Bayesian linear regression.
Per Wikipedia:
This (ordinary linear regression) is a frequentist approach, and it assumes that there are enough measurements to say something meaningful. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes theorem to yield the posterior belief about the parameters.
The usual workflow of a Bayesian analysis (adding the Bayesian flavour):
Figure out the likelihood function of the data.
Choose a prior distribution over all unknown parameters.
Use Bayes theorem to find the posterior distribution over all parameters.
Why Bayesian version? [1]
Bayesian models are more flexible and handle more complex models.
Bayesian model selection is probably superior (BIC/AIC).
Bayesian hierarchical models are easier to extend to many levels.
There are philosophical differences (compared to frequentist analysis).
Bayesian analysis is more accurate in small samples (but may then depend on the priors).
Bayesian models can incorporate prior information.
This hosts some good lecture slides about Bayesian analysis.
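As a rough illustration of the difference (not from the answer above; scikit-learn and the toy data are my assumptions), here is a minimal sketch contrasting ordinary least-squares regression with Bayesian ridge regression, which places priors on the weights and returns predictive uncertainty:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, BayesianRidge

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(15, 1))                   # deliberately small sample
y = 2.0 * X.ravel() + 1.0 + rng.normal(0, 2.0, size=15)

ols = LinearRegression().fit(X, y)                     # frequentist point estimate
bayes = BayesianRidge().fit(X, y)                      # priors over the weights

print("OLS coefficient:     ", ols.coef_[0])
print("Bayesian coefficient:", bayes.coef_[0])

# The Bayesian model also yields an uncertainty estimate for its predictions.
mean, std = bayes.predict([[5.0]], return_std=True)
print(f"Bayesian prediction at x=5: {mean[0]:.2f} +/- {std[0]:.2f}")
```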
—
Where is ANN classification (or regression) better than SVM? Any real-world examples?
There are many applications where they're better, many where they're comparable, and many where they're worse. It also depends on who you ask. It is hard to pin it down to a particular type of data or application.
An example where ANNs, in particular convolutional neural networks, work better than SVMs would be digit classification on MNIST. Another such case is the work of Geoff Hinton's group on speech recognition using Deep Belief Networks.
Recently I read a paper proving a theoretical equivalence between ANNs and SVMs. However, ANNs are usually slower than SVMs.
I am just finishing an out-of-the-box comparison between support vector machines and neural networks on several popular regression and classification datasets. First results, in short: SVMs learn fast and predict slowly, while neural networks learn slowly but predict fast and have very lightweight models. Concerning accuracy/loss, both methods seem to be on par.
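A rough way to reproduce that kind of comparison yourself (a sketch assuming scikit-learn; the original benchmark's datasets and settings are not given) is to time fit and predict for both model families:

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder data; exact timings will vary with hardware,
# dataset size, and hyperparameters.
X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("SVM (RBF)", SVC()),
                    ("MLP", MLPClassifier(max_iter=300, random_state=0))]:
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    t_fit = time.perf_counter() - t0

    t0 = time.perf_counter()
    acc = model.score(X_test, y_test)
    t_pred = time.perf_counter() - t0

    print(f"{name}: fit {t_fit:.2f}s, predict {t_pred:.3f}s, accuracy {acc:.3f}")
```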
It will largely depend, as both have different tradeoffs and design criteria. There has been some work showing the relationship between the two, and some say equivalence, as seen in other answers to this question. Below is another reference which draws links between these two techniques in machine learning:
Ronan Collobert and Samy Bengio. 2004. Links between perceptrons, MLPs and SVMs. In Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04). ACM, New York, NY, USA, 23. DOI: https://doi.org/10.1145/1015330.1015415