Can an LSTM problem be expressed as an FFNN one?
LSTM neural networks simply look into the past. But I can also take some (or many) past values and use them as input features of an FFNN.
In this way, could an FFNN replace LSTM networks? Why should I prefer an LSTM over an FFNN if I can take past values and use them as input features?
An LSTM is also a feed-forward computation, but with memory cells and recurrent connections. It is an optimized architecture because it mitigates the vanishing and exploding gradient problems and can handle long-term dependencies. You can of course feed past values into an FFNN by customizing the input layer, and that is a valid neural network architecture, but it is not a replacement for an LSTM.
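To make the "take past values as input features" idea concrete, here is a minimal NumPy sketch (helper name made up for illustration) of turning a series into fixed-window training pairs for an FFNN. Note that the window length must be fixed up front, whereas an LSTM learns what to remember:

```python
import numpy as np

def make_windows(series, window):
    """Turn a 1-D series into (past-window, next-value) training pairs
    so a plain FFNN can be trained on them."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

series = np.arange(10, dtype=float)   # toy series 0..9
X, y = make_windows(series, window=3)
# X[0] is [0, 1, 2] and its target y[0] is 3
```

The FFNN then only ever sees the last `window` values; anything older is invisible to it, which is exactly the limitation the recurrent state of an LSTM avoids.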
Related
I learned that neural networks are only good at answering Yes or No questions, such as "is this a person, is this a car, is this an apple" etc.
But I see examples of ANNs finding matches to faces of people in a crowded place and being used for traditional machine vision applications, such as sub-pixel template matching.
Is this just a product of combining an ANN with traditional matching techniques, such as recognizing which features match a known template using an ANN, and then figuring out where those keypoints are in the image using good old image processing? Or is it possible to get something other than a Yes or a No response from a network?
Yes it is possible to get a range of answers from an Artificial Neural Network. It depends on how you set up your Neurons.
Artificial Neural Networks make decisions by being trained using examples with known solutions, usually thousands of cases where the inputs and expected outputs are known.
They get "trained" by iteratively adjusting each Neuron's weights by comparing its output to the expected output.
Your first layer of Neurons is your inputs. Your last layer is your outputs. If your last layer has 2 Neurons, then you will get one of two outputs.
There is no limit to how many inputs and outputs an Artificial Neural Network can have. Check out these diagrams:
Here is a repository for an Artificial Neural Network I created that predicts the output of an XOR gate. Hope this helps!
Here is a truth table for an XOR gate for clarity:

A  B | A XOR B
0  0 |    0
0  1 |    1
1  0 |    1
1  1 |    0
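As a minimal illustration of the XOR example, here is a 2-2-1 sigmoid network that computes XOR. The weights are hand-picked for this sketch (in a real repository they would be learned by training), and the helper names are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights for a 2-2-1 sigmoid network that computes XOR:
# hidden unit 1 acts like OR, hidden unit 2 like AND, and the output
# computes (OR) AND NOT (AND).
W1 = np.array([[20.0, 20.0],
               [20.0, 20.0]])
b1 = np.array([-10.0, -30.0])   # column 0 ~ OR, column 1 ~ AND
W2 = np.array([20.0, -20.0])
b2 = -10.0

def xor_net(x):
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(np.round(xor_net(X)))   # → [0. 1. 1. 0.]
```

The single-layer perceptron famously cannot represent XOR; one hidden layer of two sigmoid units is enough.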
[UPDATE]
To explicitly answer your question about Image Classification, I believe Artificial Neural Networks are a good approach.
Here is an article I found helpful in understanding the implementation of an Image Classifier. You can also experiment with TensorFlow Playground, which is a GUI Neural Network application and an intuitive way to understand how Neural Networks work.
I understand all the computational steps of training a neural network with gradient descent using forwardprop and backprop, but I'm trying to wrap my head around why they work so much better than logistic regression.
For now all I can think of is:
A) the neural network can learn its own parameters
B) there are many more weights than simple logistic regression thus allowing for more complex hypotheses
Can someone explain why a neural network works so well in general? I am a relative beginner.
Neural Networks can have a large number of free parameters (the weights and biases between interconnected units) and this gives them the flexibility to fit highly complex data (when trained correctly) that other models are too simple to fit. This model complexity brings with it the problems of training such a complex network and ensuring the resultant model generalises to the examples it’s trained on (typically neural networks require large volumes of training data, that other models don't).
Classically, logistic regression has been limited to binary classification using a linear classifier (although multi-class classification can easily be achieved with one-vs-all, one-vs-one approaches etc., and there are kernelised variants of logistic regression that allow for non-linear classification tasks). In general, therefore, logistic regression is typically applied to simpler, linearly separable classification tasks, where small amounts of training data are available.
Models such as logistic regression and linear regression can be thought of as simple multi-layer perceptrons (check out this site for one explanation of how).
To conclude, it’s the model complexity that allows neural nets to solve more complex classification tasks, and to have a broader application (particularly when applied to raw data such as image pixel intensities etc.), but their complexity means that large volumes of training data are required and training them can be a difficult task.
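To make the link between logistic regression and a minimal "network" concrete, here is a sketch of logistic regression written as a single sigmoid neuron trained by gradient descent on cross-entropy. The toy data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression as a one-neuron "network": a single sigmoid unit.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)   # gradient of mean cross-entropy
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

preds = sigmoid(X @ w + b) > 0.5
acc = np.mean(preds == y.astype(bool))   # high on this separable toy set
```

With only 3 free parameters (two weights and a bias), this model can only draw a straight line; the flexibility of a neural net comes from stacking many such units.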
Recently Dr. Naftali Tishby's idea of Information Bottleneck to explain the effectiveness of deep neural networks is making the rounds in the academic circles.
His video explaining the idea (link below) can be rather dense, so I'll try to give a distilled, general form of the core idea to help build intuition.
https://www.youtube.com/watch?v=XL07WEc2TRI
To ground your thinking, visualize the MNIST task of classifying the digit in an image. For this, I am only talking about simple fully-connected neural networks (not the Convolutional NNs typically used for MNIST).
The input to a NN contains information about the output hidden inside of it. Some function is needed to transform the input to the output form. Pretty obvious.
The key difference in thinking needed to build better intuition is to view the input as a signal with "information" in it (I won't go into information theory here). Some of this information is relevant for the task at hand (predicting the output). Think of the output as also a signal carrying a certain amount of "information". The neural network tries to "successively refine" and compress the input signal's information to match the desired output signal. Think of each layer as cutting away the unnecessary parts of the input information, while keeping and/or transforming the output-relevant information along the way through the network.
The fully-connected neural network will transform the input information into a form in the final hidden layer, such that it is linearly separable by the output layer.
This is a very high-level and fundamental interpretation of the NN, and I hope it will help you see it clearer. If there are parts you'd like me to clarify, let me know.
There are other essential pieces in Dr. Tishby's work, such as how minibatch noise helps training, and how the weights of a neural network layer can be seen as doing a random walk within the constraints of the problem.
These parts are a little more detailed, and I'd recommend first toying with neural networks and taking a course on Information Theory to help build your understanding.
Suppose you have a large dataset and want to build a binary classification model for it. You now have the two options you pointed out:
Logistic Regression
Neural Networks (consider an FFNN for now)
Each node in a neural network is associated with an activation function. For example, let's choose the sigmoid, since logistic regression also uses the sigmoid internally to make its decision.
Let's see how the decision boundary of logistic regression looks when applied to the data.
See some of the green spots present in the red boundary?
Now let's see the decision boundary of neural network (Forgive me for using a different color)
Why does this happen? Why is the decision boundary of a neural network so flexible, yielding more accurate results than logistic regression?
The answer to the question you asked, "Why do neural networks work so well?", is the hidden units and hidden layers and their representational power.
Let me put it this way.
You have a logistic regression model and a neural network with, say, 100 neurons, each with a sigmoid activation. Each neuron is then equivalent to one logistic regression.
Now imagine a hundred logistic units trained together to solve one problem versus a single logistic regression model. Because of these hidden layers, the decision boundary expands and yields better results.
While you are experimenting, you can add more neurons and see how the decision boundary changes. A logistic regression is the same as a neural network with a single neuron.
The above is just an example. Neural networks can be trained to learn very complex decision boundaries.
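"A hundred logistic units trained together" can be sketched directly: each hidden unit below is literally a logistic regression over the input, and one final logistic unit combines them. The weights are random just to show the structure and parameter count (names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
n_hidden = 100

# Each hidden unit is itself a logistic regression over the input:
# sigmoid(w . x + b).  The output layer is one more logistic
# regression over the 100 hidden activations.
W1 = rng.normal(size=(2, n_hidden))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=n_hidden)
b2 = 0.0

def forward(x):
    h = sigmoid(x @ W1 + b1)       # 100 "logistic regressions" in parallel
    return sigmoid(h @ W2 + b2)    # combined by a final logistic unit

p = forward(np.array([0.5, -1.0]))          # a probability in (0, 1)
n_params = W1.size + b1.size + W2.size + 1  # 401 parameters vs 3 for plain LR
```

The jump from 3 free parameters to 401 is exactly the extra representational power the answer describes.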
Neural networks allow the person training them to algorithmically discover features, as you pointed out. However, they also allow for very general nonlinearity. If you wish, you can use polynomial terms in logistic regression to achieve some degree of nonlinearity; however, you must decide which terms to use. That is, you must decide a priori which model will work. Neural networks can discover the nonlinear model that is needed.
'Work so well' depends on the concrete scenario. Both of them do essentially the same thing: predicting.
The main difference here is that a neural network can have hidden nodes for concepts, if it's properly set up (not easy), and it uses these to make the final decision.
Linear regression, on the other hand, is based on more obvious facts, not side effects. A neural network should be able to make more accurate predictions than linear regression.
Neural networks excel at a variety of tasks, but to get an understanding of exactly why, it may be easier to take a particular task like classification and dive deeper.
In simple terms, machine learning techniques learn a function to predict which class a particular input belongs to, based on past examples. What sets neural nets apart is their ability to construct functions that can explain even complex patterns in the data. The heart of a neural network is an activation function like ReLU, which allows it to draw some basic classification boundaries like:
Example classification boundaries of ReLUs
By composing hundreds of such ReLUs together, neural networks can create arbitrarily complex classification boundaries, for example:
Composing classification boundaries
The following article tries to explain the intuition behind how neural networks work: https://medium.com/machine-intelligence-report/how-do-neural-networks-work-57d1ab5337ce
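A tiny sketch of how composing ReLUs builds complex shapes: two ReLUs make a ramp, and two ramps make a bump; summing many shifted bumps can approximate arbitrary functions. The function names are made up for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# relu(x) - relu(x - 1) rises from 0 to 1 on [0, 1] and is flat elsewhere.
def ramp(x):
    return relu(x) - relu(x - 1.0)

# Subtracting a shifted ramp gives a "bump" that is 1 on [1, 2],
# ramps up on [0, 1], and ramps back down on [2, 3].
def bump(x):
    return ramp(x) - ramp(x - 2.0)

print(bump(0.5), bump(1.5), bump(4.0))   # 0.5 1.0 0.0
```

Each composition here is just what one more layer of ReLU units computes; a trained network picks the shifts and scales itself.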
Before you step into neural networks, check whether you have assessed all aspects of ordinary regression.
Use this as a guide,
and even before you discard ordinary regression: for curved types of dependencies you should strongly consider kernels with an SVM.
Neural networks are defined by an objective and a loss function. The only process that happens within a neural net is optimizing that objective by reducing the loss function, or error. Backpropagation is what finds the optimized parameters and lets the network reach the desired output.
What is the difference between training an RNN and a simple neural network? Can an RNN be trained with the usual feed-forward and backward method?
Thanks in advance!
The difference is recurrence. An RNN cannot be trained quite as easily: if you try to compute the gradient, you will soon find that to get the gradient at the n'th step you need to "unroll" the network's history over the n-1 previous steps. This technique, known as BPTT (backpropagation through time), is exactly that: a direct application of backpropagation to an RNN. Unfortunately it is both computationally expensive and mathematically challenging (due to vanishing/exploding gradients). People work around this on many levels, for example by introducing specific types of RNN that can be trained efficiently (LSTM, GRU), or by modifying the training procedure (such as gradient clipping). To sum up: theoretically you can do "typical" backprop in the mathematical sense; from a programming perspective it requires more work, since you need to unroll the network through its history, which is computationally expensive and hard to optimize mathematically.
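A minimal scalar sketch of BPTT (illustrative names, not from any library): the backward loop below unrolls the network through time, and the repeated multiplication by `w_h` at each step is exactly where vanishing/exploding gradients come from.

```python
import numpy as np

# Vanilla scalar RNN: h_t = tanh(w_h * h_{t-1} + w_x * x_t), h_0 = 0.
# Loss is 0.5 * (h_T - target)^2 on the final hidden state.
def forward(xs, w_h, w_x):
    hs = [0.0]
    for x in xs:
        hs.append(np.tanh(w_h * hs[-1] + w_x * x))
    return hs

def bptt_grad_wh(xs, w_h, w_x, target):
    hs = forward(xs, w_h, w_x)
    dh = hs[-1] - target                 # dL/dh_T
    grad = 0.0
    # Unroll backwards through every time step ("through time").
    for t in range(len(xs), 0, -1):
        dpre = dh * (1.0 - hs[t] ** 2)   # back through tanh
        grad += dpre * hs[t - 1]         # contribution to dL/dw_h at step t
        dh = dpre * w_h                  # repeated w_h factor: can vanish/explode
    return grad
```

A finite-difference check on the loss confirms this gradient; in a real network the scalars become matrices, but the backward unrolling is the same.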
I'm new to study recurrent neural networks and now confused by the parameters in RNNLib. Specifically, I don't understand the hidden Block, hidden size, input Block, subsample size and stuffs with mdl. In my experience, I just had input vectors, one lstm hidden layer and softmax output layer. Why does the block seem like a matrix?
RNNLib implements a novel type of RNN, the so-called "multidimensional recurrent neural network". The following reference on the RNNLib page explains it: Alex Graves, Santiago Fernández and Jürgen Schmidhuber, "Multidimensional recurrent neural networks", International Conference on Artificial Neural Networks, September 2007, Porto. This extension is designed for processing images, video and so on. As explained in the paper:
"The basic idea of MDRNNs is to replace the single recurrent connection found in standard
RNNs with as many recurrent connections as there are dimensions in the data.
During the forward pass, at each point in the data sequence, the hidden layer of the network
receives both an external input and its own activations from one step back along
all dimensions"
I think that is the reason you have the ability to use multidimensional input. If you want to use RNNLib as a usual one-dimensional RNN, just specify one dimension for the input and the LSTM block.
MDL stands for "Minimum Description Length", a cost function used to approximate Bayesian inference (a method for regularizing NNs). If you want to use it, it's best to read the original references provided on the RNNLib website. Otherwise, I think, it can simply be ignored.
I am trying to train a simple feedforward neural network using a genetic algorithm, however it is proving fairly inefficient because isomorphic neural networks appear different to the genetic algorithm.
It is possible to have multiple neural networks that behave the same way but have their neurons ordered differently, from left to right and across levels. To the genetic algorithm, those networks' genotypes appear completely different, so any attempt at crossover is pointless and the GA ends up only as effective as hill climbing.
Can you recommend a way to normalize the networks so they appear more transparent to the genetic algorithm?
I would call crossover in this context "inefficient", rather than "pointless". One way to address the duplication you mention might be to sort the hidden layer neurons in some canonical order, and use this order during crossover, which might at least reduce the duplication encountered in hidden weight space.
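One possible version of that canonical ordering (a sketch with made-up names): sort hidden neurons by their incoming weight vectors, permuting incoming and outgoing weights together so the network's function is unchanged and isomorphic genotypes collapse to one representative:

```python
import numpy as np

def canonicalise(W1, b1, w2):
    """Sort hidden neurons of a 1-hidden-layer net into a canonical order.
    Reordering hidden units (rows of W1 together with the matching entries
    of b1 and w2) does not change the network's function, so this maps
    isomorphic genotypes to the same representative before crossover."""
    n = len(b1)
    order = sorted(range(n), key=lambda i: (tuple(W1[i]), b1[i]))
    return W1[order], b1[order], w2[order]

# Two genotypes that differ only by hidden-neuron order...
W1 = np.array([[1.0, 2.0], [0.5, -1.0]])
b1 = np.array([0.1, 0.2])
w2 = np.array([3.0, 4.0])
perm = [1, 0]
A = canonicalise(W1, b1, w2)
B = canonicalise(W1[perm], b1[perm], w2[perm])
# ...canonicalise to identical genotypes.
```

The sort key here is arbitrary; any fixed, function-preserving key works, since the point is only that equivalent networks line up their genes before crossover.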
Also, you might fit the output layer weights by a more direct method than genetic algorithms. You don't say what performance metric is being used, but many common measures have fairly straightforward optimizations. So, as an example, you might generate a new hidden layer using genetic operators, then fit the output layer by logistic regression, and have the GA evaluate the total network.