Can we use Deep learning techniques in binary classification? - machine-learning

Recently, I started reading about deep learning. Typically the weights are pre-trained with an unsupervised RBM, and after that a neural network with many hidden layers is used to address the task.
So my question is: can we use a DNN for a two-class classification problem?
Thanks to the people who are going to respond.

Yes, you can do that with a simple logistic regression on top of your hidden layers (whatever you choose for those, RBMs or autoencoders).

Absolutely Yes!
As mentioned by Thomas, you can use logistic regression as your output layer. Another approach is to use a softmax layer with two classes as your output layer.
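To make the two options concrete, here is a minimal sketch in Keras (an assumption; the question doesn't name a framework). The layer sizes and the 50-dimensional input are placeholders, not recommendations.

    from keras.models import Sequential
    from keras.layers import Dense

    # Option 1: a single sigmoid unit, i.e. logistic regression on top of the hidden layers
    model_sigmoid = Sequential()
    model_sigmoid.add(Dense(64, activation='relu', input_dim=50))   # hidden layer (placeholder size)
    model_sigmoid.add(Dense(1, activation='sigmoid'))               # outputs P(class = 1)
    model_sigmoid.compile(optimizer='adam', loss='binary_crossentropy')

    # Option 2: a softmax output layer with two units, one per class
    model_softmax = Sequential()
    model_softmax.add(Dense(64, activation='relu', input_dim=50))
    model_softmax.add(Dense(2, activation='softmax'))               # outputs [P(class 0), P(class 1)]
    model_softmax.compile(optimizer='adam', loss='sparse_categorical_crossentropy')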
Good Luck!

Related

Binary Classification with Neural Networks?

I have a dataset of the order of MxN. I want to perform a binary classification on this dataset using neural networks. I was looking into recurrent neural networks. Although LSTMs can be used for autoencoders, I am not sure if they can be used for classification (I am trying to do a binary classification). I am very new to neural networks and deep learning models and I am not really sure if there is a way of achieving binary classification with neural networks. I tried a Bernoulli RBM on my dataset, but I am not sure how to use this model to perform classification. I also found Pipeline(), and again I am not sure how to achieve my goal.
Any help would be greatly appreciated.
OK, something doesn't stack up. If you have unlabelled data and you want to classify it, you should take a look at clustering instead, e.g. K-Means (http://scikit-learn.org/stable/modules/clustering.html#k-means).
Regarding classification with LSTMs: you run your input through the RNN layers, take the last output, and feed it into some convolutional / fully-connected layers to take care of classification as you know it.
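A minimal sketch of that pattern, assuming Keras and dummy data of shape (samples, timesteps, features); the layer sizes and training settings are illustrative, not tuned values.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    X = np.random.rand(100, 20, 8)        # 100 dummy sequences, 20 timesteps, 8 features each
    y = np.random.randint(0, 2, 100)      # dummy binary labels

    model = Sequential()
    model.add(LSTM(32, input_shape=(20, 8)))   # keeps only the last output of the sequence
    model.add(Dense(16, activation='relu'))    # fully-connected layer on top of that output
    model.add(Dense(1, activation='sigmoid'))  # sigmoid unit for binary classification
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X, y, epochs=5, batch_size=16)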

Using Caffe to classify "hand-crafted" image features

Does it make any sense to perform feature extraction on images using, e.g., OpenCV, then use Caffe for classification of those features?
I am asking this as opposed to the traditional way of passing the images directly to Caffe, and letting Caffe do the extraction and classification procedures.
Yes, it does make sense, but it may not be the first thing you want to try:
If you have already extracted hand-crafted features that are suitable for your domain, there is a good chance you'll get satisfactory results by using an easier-to-use machine learning tool (e.g. libsvm).
Caffe can be used in many different ways with your features. If they are low-level features (e.g. histograms of oriented gradients), then several convolutional layers may be able to extract the appropriate mid-level features for your problem. You may also use Caffe as an alternative non-linear classifier (instead of an SVM). You have the freedom to try (too) many things, but my advice is to first try a machine learning method with a smaller meta-parameter space, especially if you're new to neural nets and Caffe.
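As an illustration of the "simpler tool first" suggestion, here is a hedged sketch using scikit-learn's SVM on pre-extracted features; the feature matrix is a stand-in for HOG descriptors, not actual OpenCV or Caffe code.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    features = np.random.rand(200, 128)      # stand-in for 200 hand-crafted descriptors (e.g. HOG)
    labels = np.random.randint(0, 2, 200)    # stand-in class labels

    X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.25)
    clf = SVC(kernel='rbf')                  # non-linear classifier, the same role libsvm would play
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))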
Caffe is a tool for training and evaluating deep neural networks. It is quite a versatile tool, allowing for deep convolutional nets as well as other architectures.
Of course it can be used to process pre-computed image features.

Building a Tetris AI using Neuroevolution

I am planning to create a Tetris AI using artificial neural network and train it with genetic algorithm for a project in my high school computer science class. I have a basic understanding of how an ANN works and how to implement it with a genetic algorithm. I have already written a working Neural Network based on this tutorial and I'm currently working on a genetic algorithm.
My questions are:
Which GA model is better for this situation (Tetris), and why?
What should I use as input for the neural network? Currently, the method I'm using is to simply convert the state of the board (the pieces) into a one-dimensional array and feed it into the neural network. Is there a better approach?
What should the size (number of layers, neurons per layer) of the neural network be?
Are there any good sources of information that can help me?
Thank you!
A similar task was already solved by Google, but for all kinds of Atari games: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf.
Carefully read that paper and the related articles too.
This is a reinforcement learning task, in my opinion the hardest kind of task in the ML domain, so there is no short answer to your questions, except that you probably shouldn't use a GA heuristic at all and should rely on reinforcement learning methods instead.
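To make "reinforcement learning methods" slightly more concrete, here is an illustrative sketch of the tabular Q-learning update that DQN generalizes with a neural network; the states, actions, and reward below are made up for the example, and a real Tetris agent would approximate Q with a network rather than a table.

    from collections import defaultdict

    Q = defaultdict(float)        # tabular action-value function, keyed by (state, action)
    alpha, gamma = 0.1, 0.99      # learning rate and discount factor (illustrative values)

    def q_update(state, action, reward, next_state, actions):
        # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    # Made-up transition: moving a piece left cleared a line and earned a reward of 1
    q_update("board_A", "left", 1.0, "board_B", ["left", "right", "rotate", "drop"])
    print(Q[("board_A", "left")])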

How to train an unsupervised neural network such as RBM?

Is this process correct?
Suppose we have a bunch of data such as MNIST.
We just feed all this data (without labels) to the RBM and resample each data point from the trained model.
Then the output can be treated as new data for classification.
Do I understand it correctly?
What is the purpose of using RBM?
You are correct: RBMs are a form of unsupervised learning algorithm that is commonly used to reduce the dimensionality of your feature space. Another common approach is to use autoencoders.
RBMs are trained using the contrastive divergence algorithm. The best overview of this algorithm comes from Geoffrey Hinton who came up with it.
https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf
A great paper about how unsupervised pre-training improves performance can be found at http://jmlr.org/papers/volume11/erhan10a/erhan10a.pdf. The paper shows that unsupervised pre-training provides better generalization and better filters (if using CRBMs).
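Since an earlier question in this thread mentions BernoulliRBM and Pipeline, here is a minimal scikit-learn sketch of the RBM-as-feature-extractor idea; the data and hyperparameters are placeholders (real MNIST would be loaded separately and scaled to [0, 1]).

    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    X = np.random.rand(500, 64)          # stand-in for images scaled to [0, 1]
    y = np.random.randint(0, 2, 500)     # stand-in labels

    model = Pipeline([
        ('rbm', BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),  # unsupervised features
        ('clf', LogisticRegression()),                                           # supervised classifier
    ])
    model.fit(X, y)        # the RBM step ignores y; only the logistic regression uses the labels
    print(model.score(X, y))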

Using Weka for Game Playing

I am doing a project where I have neural networks (or other algorithms) play each other in poker. After each win or loss, I want the neural network (or other algorithm) to update in response to the error of the loss (how this is calculated is unimportant here).
Weka is very nice and I don't want to reinvent the wheel. However, Weka's API seems primarily designed to train from a dataset. Game playing doesn't use a dataset. Rather, the network plays, and then I want it to update itself based on its loss.
Is it possible to use the Weka API to update a network not from a dataset but on one instance at a time, and do this over and over again? Am I thinking about this right?
The other idea I also want to implement is to use a genetic algorithm to update the weights in a neural network, instead of the backpropagation algorithm. As far as I can tell, there is no way to manually specify the weights of a neural network in Weka. This, of course, is vital if using a genetic algorithm for this purpose.
Please help :) Thank you.
Normally, Weka learning algorithms are batch learning algorithms. What you need is an incremental classifier.
From the Weka docs:
Most classifiers need to see all the data before they can be trained, e.g., J48 or SMO. But there are also schemes that can be trained in an incremental fashion, not just in batch mode. All classifiers implementing the weka.classifiers.UpdateableClassifier interface are able to process data in such a way.
See the UpdateableClassifier interface for which classifiers implement it.
Also, you may look at MOA (Massive Online Analysis), a tool closely related to Weka; all of its classifiers are incremental due to the constraints of online learning.
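Weka's UpdateableClassifier interface is Java; purely as an illustration of the same one-instance-at-a-time pattern, here is a scikit-learn sketch using partial_fit. This is an analogy, not Weka code, and the game features and labels below are stand-ins.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier()                       # linear online learner updated per instance
    classes = np.array([0, 1])                  # the full label set must be declared for partial_fit

    for game in range(100):                     # stand-in for repeated games
        x = np.random.rand(1, 10)               # feature vector describing one finished game
        y = np.random.randint(0, 2, 1)          # 0 = loss, 1 = win
        clf.partial_fit(x, y, classes=classes)  # incremental update on a single instance

    print(clf.predict(np.random.rand(1, 10)))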
Weka, as far as I can tell, does not do online learning (which is what you're asking about).
It might be better to investigate using competitive analysis for your game.
You may have to reinvent the wheel here. I don't think it's a bad use of time.
I'm currently implementing a learning classifier system, which is pretty simple. I'd also advise looking into these kinds of algorithms. There is an implementation on the internet, but I still prefer to code my own.
