Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
What is the difference between Machine Learning and Computer Vision?
I have been studying machine learning for a week now and still don't know what the difference between them is.
Would you use an axe to cut an apple? Even a simple knife is enough for that!
Would you use a sword to sew pyjamas? A short needle is enough for that!
The same applies here.
Computer vision does deal with image recognition too, but you don't need it for a simple face recognition project. That is a basic machine learning project, and implementations are freely available on GitHub and similar sites. So you don't need to learn computer vision specifically to build a face recognition system.
Computer vision is a good field, but machine learning is sufficient for face recognition!
Generally speaking, computer vision is a field that uses machine learning techniques to solve its problems, that is, to make a computer recognise images and identify what is in them!
Machine learning:
Machine learning is the science of making computers learn and act like humans by feeding them data and information, without being explicitly programmed.
Example:
With conventional programming, we write a piece of code that tells the computer, step by step, what to do. With ML we don't do that; the system learns on its own. We provide past data (called labelled data), and the system learns from it during what is known as the training process: we tell the system whether its outcomes are right or wrong, it takes that feedback and corrects itself, and that is how it learns to give the correct output in most cases. It is obviously not 100% correct, but the aim is to be as accurate as possible.
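The feedback loop described above can be sketched in a few lines. Below is a minimal, hypothetical example: a perceptron fed a tiny made-up labelled dataset, nudging its weights whenever its answer is wrong. The data and learning rate are illustrative assumptions, not from the original post.

```python
# Minimal sketch of supervised learning: a perceptron corrects its
# weights whenever its prediction for a labelled example is wrong.
# The data below is made up for illustration.

# Labelled data: (features, correct answer). Points roughly above
# the line x + y = 1 are labelled 1, the rest 0.
data = [((0.0, 0.2), 0), ((0.9, 0.9), 1), ((0.1, 0.4), 0), ((0.8, 0.6), 1)]

w = [0.0, 0.0]   # weights, learned during training
b = 0.0          # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training: feed each example, compare the output with the known
# label, and nudge the weights whenever the system was wrong.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)   # 0 if right, +/-1 if wrong
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error
```

After training, `predict` classifies every example correctly, even though no rule for separating the points was ever written into the code.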
Computer vision:
Computer vision deals with digital images and videos on a computer. It is evolving rapidly day by day, and one of the reasons is deep learning. When we talk about computer vision, the term convolutional neural network (abbreviated CNN) comes to mind, because CNNs are heavily used here. Examples of CNNs in computer vision are face recognition, image classification, etc. A CNN is similar to a basic neural network: it also has learnable parameters such as weights and biases.
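The operation at the heart of a CNN is the convolution, where the kernel values are the learnable weights. A rough sketch, using a made-up 4x4 "image" and a hand-fixed kernel (in a real CNN these nine numbers would be learned during training):

```python
import numpy as np

# Sketch of the convolution inside a CNN layer. The 3x3 kernel plays
# the role of the learnable weights; here it is fixed for illustration
# as a vertical-edge detector.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

def convolve2d(img, k):
    """Valid (no padding) 2D cross-correlation, as used in CNN layers."""
    kh, kw = k.shape
    out_h = img.shape[0] - kh + 1
    out_w = img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d(image, kernel)  # responds where pixels change left-to-right
```

A CNN stacks many such filters and learns their values from data, which is why it works so well for face recognition and image classification.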
Closed 4 years ago.
As per Pedro Domingos in his famous paper "A Few Useful Things to Know about Machine Learning", machine learning systems automatically learn programs from data.
But in my experience we are supplying algorithms like ANNs or SVMs, etc.
My question is: how is this "automating automation"?
Could someone shed some light on this with an example?
When you develop a machine learning algorithm, with an ANN or an SVM or whatever, you don't tell your program how to solve your problem; you tell it how to learn to solve the problem.
An SVM or ANN is a way to learn a solution to a problem, not a way to solve the problem directly.
So when people say "machine learning systems automatically learn programs from data", they mean that you never programmed a solution to your problem, but rather let the computer learn to do so.
To quote Wikipedia: "Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed."
https://en.wikipedia.org/wiki/Machine_learning
[Edit]
For example, let's take one of the simplest machine learning algorithms: linear regression in a 2D space.
The aim of this algorithm is to learn a linear function from a dataset of (x, y) pairs, so that when you give your system a new x, you get an approximation of what the real y would be.
But when you code a linear regression, you never specify the linear function y = ax + b. What you code is a way for the program to deduce it from the dataset.
The linear function y = ax + b is the solution to your problem; the linear regression code is the way you learn that solution.
https://en.wikipedia.org/wiki/Linear_regression
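To make the point concrete, here is a minimal sketch: the code below never states the line, only the procedure for deducing a and b from data. The dataset is hypothetical, sampled from y = 2x + 1, which the program is never told.

```python
import numpy as np

# The "solution" y = a*x + b is never written down; only the procedure
# for deducing a and b from data is. Hypothetical dataset sampled from
# the function y = 2x + 1 (unknown to the program).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Least-squares fit: choose a, b minimising sum((a*x + b - y)^2).
a, b = np.polyfit(x, y, deg=1)

# a and b were learned from the dataset, not specified by us.
y_new = a * 5.0 + b   # approximation of y for an unseen x
```

The program recovers a ≈ 2 and b ≈ 1 purely from the data, which is exactly the sense in which the solution was "learned" rather than programmed.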
Machine learning development helps to improve business operations as well as business scalability. A number of ML algorithms and artificial intelligence tools have gained tremendous popularity in the business analytics community. The machine learning market has grown thanks to faster and cheaper computational processing, the easy availability of data, and affordable data storage.
Closed 5 years ago.
I have already practised some aspects of machine learning and developed some small projects. Nowadays many blogs, articles, and posts talk about deep learning. I am interested in seeing, practically, what the difference between machine learning and deep learning is, and perhaps in learning the new approaches and techniques called deep learning. I have read a few blogs, but conceptually I see that deep learning is a subset of machine learning, and that it is nothing more than neural networks with multiple layers!
I am, however, perplexed: is that really the only difference between machine learning and deep learning?
What is the merit of speaking of deep learning rather than machine learning if we are only talking about neural networks? And if that is the case, why not just call it neural networks, or deep neural networks, to mark the distinction?
Is there a real difference beyond what I mentioned?
Is there any practical example showing a significant difference that justifies these distinct notions?
Deep learning is a set of ML patterns and tactics for increasing the accuracy of classical ML algorithms such as the MLP, the naïve Bayes classifier, etc.
One of the earliest and easiest of these tactics is adding hidden layers to increase a network's learning capacity; one of the more recent is the convolutional autoencoder.
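The capacity gain from a hidden layer can be shown on the classic XOR function, which no single-layer perceptron can compute but a two-layer network can. A small sketch with hand-picked weights (in practice these would be learned, not written by hand):

```python
import numpy as np

# XOR is not linearly separable, so a single-layer perceptron cannot
# compute it. One hidden layer is enough. Weights here are hand-picked
# for illustration; training would normally find them.

def step(z):
    return (z > 0).astype(float)

# Hidden layer: two units computing OR and AND of the two inputs.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # thresholds for OR and AND

# Output layer: OR and not-AND, i.e. XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(x):
    h = step(x @ W1 + b1)     # hidden activations
    return step(h @ W2 + b2)  # network output

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
outputs = xor_net(inputs)     # XOR of each input pair
```

Deep learning takes this idea further: stacking many such layers lets the network represent progressively more complex functions.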
Closed 3 years ago.
I am new to the domain of machine learning, and I have noticed that there are a lot of algorithms and families of algorithms that can be used: SVMs, decision trees, naive Bayes, the perceptron, etc.
That is why I wonder which algorithm one should use for which issue. In other words, which algorithm solves which class of problem?
So my question is: do you know a good website or book that focuses on this algorithm selection problem?
Any help would be appreciated. Thanks in advance.
Horace
Take Andrew Ng's machine learning course on Coursera. It's beautifully put together, explains the differences between different types of ML algorithm, gives advice on when to use each algorithm, and contains material useful for practitioners as well as maths if you want it. I'm in the process of learning machine learning myself, and this has been by far the most useful resource.
(Another piece of advice you might find useful is to consider learning Python. This is based on a mistake I made in not starting to learn Python at an earlier stage, thereby ruling out the many books, web pages, SDKs, etc. that are Python based. As it turns out, Python is pretty easy to pick up and, from my own observations at least, widely used in the machine learning and data science communities.)
scikit-learn.org published this infographic, which can be helpful even when you're not using the sklearn library.
@TooTone: In my opinion, Machine Learning in Action could help the OP decide which technique to use for a particular problem, as the book gives a clear classification of the different ML algorithms with pros, cons, and "works with" for each of them. I do agree the code is somewhat hard to read, especially for people not used to matrix operations: there are years of research condensed into a ten-line Python program, so be prepared for understanding it to take a day (for me, at least).
It is very hard to answer the question "which algorithm for which issue?"
That ability comes with a lot of experience and knowledge, so I suggest you read a few good books about machine learning. The following book would probably be a good starting point.
Machine Learning: A Probabilistic Perspective
Once you have some knowledge about machine learning, you can work on a couple of simple machine learning problems. The Iris flower dataset is a good starting point: it consists of several features for flowers belonging to three Iris species. Initially, develop a simple machine learning model (such as logistic regression) to classify the Iris species, and gradually move to more advanced models such as neural networks.
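The suggested exercise fits in a few lines; a minimal sketch, assuming scikit-learn is installed (the split ratio and random seed are arbitrary choices):

```python
# Minimal version of the suggested exercise: logistic regression on
# the Iris dataset, ships with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # 150 flowers, 4 features, 3 species

# Hold out 30% of the data to measure how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)   # fraction classified correctly
```

Once this baseline works, swapping `LogisticRegression` for a more advanced model is a one-line change, which is a gentle way to explore the algorithm-selection question empirically.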
As a simple starting place, I consider what inputs I have and what outputs I want, which often narrows down the choices. For example, if I have categories rather than numbers, and a target category for each input, decision trees are a good idea. If I have no target, I can only do clustering. If I have numerical inputs and a numerical output, I could use neural networks or other types of regression. I could also use decision trees that generate regression equations. There are further questions to ask after this, but it's a good place to start.
The following DZone Refcard might also be helpful: http://refcardz.dzone.com/refcardz/machine-learning-predictive. But you will have to dig into each algorithm in detail eventually.
Closed 11 years ago.
According to several people on Stack Overflow, Bayesian filtering is better than neural networks for detecting spam.
According to the literature I've read, that shouldn't be the case. Please explain!
There is no mathematical proof or explanation for why applications of neural networks have not been as good at detecting spam as Bayesian filters. This does not mean that neural networks could not produce similar or better results, but the time it would take to tweak a neural network's topology and train it to get even approximately the same results as a Bayesian filter is simply not justified. At the end of the day, people care about results and about minimising the time and effort spent achieving them. When it comes to spam detection, Bayesian filters get you the best results for the least effort and time. If a Bayesian spam filter detects 99% of spam correctly, there is very little incentive to spend a lot of time adjusting neural networks just to eke out an extra 0.5% or so.
"According to the literature I've read, that shouldn't be the case."
That is technically correct. Properly configured, a neural network can get results as good as or better than Bayesian filters, but it's the cost/benefit ratio that makes the difference, and ultimately the trend.
Neural networks mostly work as a black-box approach: you determine your inputs and outputs, and after that, finding a suitable architecture (a two-hidden-layer multilayer perceptron, an RBF network, etc.) is done mostly empirically. There are guidelines for determining an architecture, but they are, well, guidelines.
This is good for some problems, since we, as domain analysts, do not have enough information about the problem itself; the NN's ability to find an answer on its own is a desirable thing.
A Bayesian network, on the other hand, is designed mostly by the domain analyst. Since spam classification is a well-known problem, a domain analyst can tweak the architecture more easily, and a Bayesian network will get better results more easily this way.
Also, most NNs do not cope well with changing features and therefore almost always need to be re-trained, an expensive operation. A Bayesian network, by contrast, may only need its probabilities updated.
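That last point can be sketched concretely: in a naive Bayes spam filter, the entire "model" is word counts, so absorbing a new message only updates probabilities, with no retraining pass. The tiny corpus below is made up for illustration.

```python
import math
from collections import Counter

# Sketch of a naive Bayes spam filter. The "model" is just word
# counts, so adapting to new mail means updating counts, not
# retraining from scratch. The corpus below is made up.
spam_words = Counter()
ham_words = Counter()
counts = {"spam": 0, "ham": 0}

def train(words, label):
    """Absorb one labelled message by updating the counts only."""
    table = spam_words if label == "spam" else ham_words
    table.update(words)
    counts[label] += 1

def classify(words):
    """Pick the label with the higher log posterior probability."""
    scores = {}
    vocab = len(set(spam_words) | set(ham_words))
    for label, table in (("spam", spam_words), ("ham", ham_words)):
        # log prior: how common this class of message is
        score = math.log(counts[label] / (counts["spam"] + counts["ham"]))
        total = sum(table.values())
        for w in words:
            # Laplace smoothing so unseen words don't zero the score
            score += math.log((table[w] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

train(["cheap", "pills", "offer"], "spam")
train(["meeting", "notes", "attached"], "ham")
train(["cheap", "offer", "now"], "spam")

label = classify(["cheap", "offer"])   # -> "spam"
```

Adapting to a new wave of spam is one more `train` call, against the full re-training a typical neural network would need.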
Closed 4 years ago.
I am currently building a neural network library. I have constructed it as an object graph for simplicity. I am wondering if anyone can quantify the performance benefits of moving to an array-based approach. What I have now works very well for building networks of close to arbitrary complexity; regular (backpropagated) networks as well as recurrent networks are supported. I am considering having trained networks "compile" into some "simpler" form such as arrays.
I just wanted to see if anyone out there has practical advice or experience building neural networks that deployed well into production. Is there any benefit to having the final product be array based instead of object-graph based?
P.S. Memory footprint is less important than speed.
People have started using GPGPU techniques in AI, and having your neural net in matrix form lets you leverage the much faster matrix operations of a typical graphics card.
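A small sketch of why the matrix form pays off: a whole layer's worth of per-neuron sums collapses into one matrix product, which optimised linear-algebra code (BLAS on the CPU, or cuBLAS-style libraries on a GPU) executes in a single call. The layer sizes below are arbitrary, chosen for illustration.

```python
import numpy as np

# One layer, two equivalent formulations. Sizes are arbitrary.
rng = np.random.default_rng(0)
inputs = rng.standard_normal(64)          # activations from previous layer
weights = rng.standard_normal((32, 64))   # one row of weights per neuron
biases = rng.standard_normal(32)

# Object-graph style: visit each neuron and sum its weighted inputs.
loop_out = np.array([
    sum(w * x for w, x in zip(weights[i], inputs)) + biases[i]
    for i in range(32)
])

# Array style: the same computation as a single matrix product.
vec_out = weights @ inputs + biases

# Both give the same result, but the matrix form is one call into
# optimised linear-algebra code, and maps directly onto GPU hardware.
```

This is one reason "compiling" a trained object graph into arrays for deployment can be worthwhile: the semantics are unchanged, only the execution strategy.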
This all depends on what language you are using; I assume a C derivative.
In my implementations I've found the object-graph approach far superior. There is some trade-off in speed, but the ease of maintenance outweighs the object lookup calls. It also depends on whether you are optimising for training speed or for solving speed; I assume you are most worried about training speed?
You can always end up micro-optimizing some of the object call issues if need be.
Considering your secondary motive of sub-netting the networks, I think it's even more important to be object based - it makes it much easier to take out portions of the work.
However you implement it, you must never forget:
http://xkcd.com/534/
It's been a while, but I recall that speed is usually only an issue during training of the Neural Network.
I don't have any personal experience writing such a library, but I can link you to some popular open-source projects which you could perhaps learn from. (Personally I would just use one of these existing libraries.)
Fast Artificial Neural Network Library
NeuronDotNet