After taking a bunch of online courses and reading many papers, I started playing with neural nets, but to my surprise my network fails to generalize a simple bitwise AND operation.
Inputs:
Inp#1 - randomly generated number between 0-15, scaled down to (0,1)
Inp#2 - 16 bit randomly generated unsigned int scaled down to (0,1)
# Code snippet
int in1 = (int)rand()%16;
int in2 = (int)rand()%(0x0010000);
in[0] = (fann_type)(in1/100.0); // scaled down so we don't have to worry about float round-off
in[1] = (fann_type)(in2/100000.0); // scaled down so we don't have to worry about float round-off
Outputs:
Out#1 = -1 if the bit of inp#2 at the index given by inp#1 is 0, otherwise 1
# Code snippet
int out1 = (in2 & (1<<in1)) ? 1 : -1;
out[0] = (fann_type)out1;
Network: tried many different variations; below are examples
A. 1 hidden layer with 30 neurons,
Activation Function (hidden): sigmoid,
Activation Function (output): sigmoid_symmetric (tanh),
Training method: RPROP
Learning rate: 0.7 (default)
Momentum: 0.0 (default)
RPROP Increase factor: 1.2 (default)
RPROP Decrease factor: 0.5 (default)
RPROP Minimum Step-size: 0 (default)
RPROP Maximum Step-size: 50 (default)
B. 3 hidden layers each having 30 neurons, with the same params as in A
C. tried the same networks with inputs scaled to (-1,1) and tanh used for the hidden layers as well.
Data sets: 5000 samples for training, 5000 for testing and 5000 for validation. Tried even bigger datasets with no success.
# examples from training set
0.040000 0.321600
-1
0.140000 0.625890
1
0.140000 0.039210
-1
0.010000 0.432830
1
0.100000 0.102220
1
Process: the network was trained on the training set while monitoring the MSE on the test data in parallel to avoid possible overfitting.
Libraries: used multiple, but mostly FANN, with fanntool for the GUI.
Any ideas? I can upload the datasets if there is any particular interest.
If I understand your setup, you are trying to do something like:
have a network of architecture 2-X-X-X-1 (where X is the number of hidden units) - thus 2 inputs, one output
model a bitwise function over the inputs
If this is true, it is an extremely peculiar problem and a very bad choice of architecture. Neural networks are not magical hats; they are a very big family of models. What you are trying to do has none of the characteristics expected of a function to be modeled by a NN. It is completely non-smooth in the input, it has lots of discontinuities; it is actually a bunch of if-else clauses.
What should you do? You should express your inputs as bits, so you would have 32 inputs, 16 binary inputs per number; then the network will learn your function without any problems. You encoded the inputs in a very specific manner (by taking their decimal representation) and expected your network to model the decomposition into binary and then an operation on top of it. A NN can learn it, but you might need quite a complex network to achieve such an operation - again, the whole reason is that you gave your network a suboptimal representation and built a very simple network of a kind originally designed to approximate smooth functions.
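For what it's worth, here is a minimal sketch (in Python, independent of FANN) of the 32-bit input encoding suggested above; the helper name encode and the use of random.randrange are just illustrative choices, not part of the original setup.

import random

def encode(in1, in2):
    bits1 = [(in1 >> i) & 1 for i in range(16)]   # index 0-15, padded to 16 bits
    bits2 = [(in2 >> i) & 1 for i in range(16)]   # 16-bit unsigned value
    return bits1 + bits2                          # 32 binary inputs for the network

in1 = random.randrange(16)
in2 = random.randrange(1 << 16)
target = 1 if in2 & (1 << in1) else -1            # same target rule as in the question
print(encode(in1, in2), target)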
I have attempted to program my own LSTM (long short-term memory) neural network. I would like to verify that the basic functionality is working. I have implemented a backpropagation through time (BPTT) algorithm to train a single-cell network.
Should a single cell LSTM network be able to learn a simple sequence, or are more than one cells necessary? The network does not seem to be able to learn a simple sequence such as 1 0 0 0 1 0 0 0 1 0 0 0 1.
I am sending the sequence of 1's and 0's one by one, in order, into the network, and feeding it forward. I record each output for the sequence.
After running the whole sequence through the LSTM cell, I feed the mean error signals back into the cell, saving the weight changes internal to the cell in a separate collection. After running all the errors through one by one and calculating the new weights after each error, I average the new weights together to get the final value for each weight in the cell.
Am I doing something wrong? I would very much appreciate any advice.
Thank you so much!
Having only one cell (one hidden unit) is not a good idea even if you are just testing the correctness of your code. You should try 50 even for such a simple problem. This paper: http://arxiv.org/pdf/1503.04069.pdf gives you very clear gradient rules for updating the parameters. Having said that, there is no need to implement your own LSTM even if your dataset and/or the problem you are working on is new. Picking an existing library (Theano, mxnet, Torch, etc.) and modifying from there is, I think, an easier way, given that it is less error prone and supports GPU computing, which is essential for training LSTMs within a reasonable amount of time.
I haven't tried 1 hidden unit before, but I am sure 2 or 3 hidden units will work for the sequence 0,1,0,1,0,1. It is not necessarily the case that more cells give a better result; training difficulty also increases with the number of cells.
You said you averaged new weights together to get the new weight. Does that mean you run many training sessions and take the average of the trained weights?
There are many possibilities your LSTM did not work, even if you implemented it correctly. The weights are not easy to train by simple gradient descent.
Here are my suggestions for weight optimization.
Use the momentum method for gradient descent (a generic sketch follows this list).
Add some Gaussian noise to your training set to prevent overfitting.
Use adaptive learning rates for each unit.
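To make the momentum suggestion concrete, here is a rough, generic sketch in Python/NumPy; the names, the toy quadratic loss, and the coefficients are assumptions for illustration, not anything LSTM-specific.

import numpy as np

# Generic momentum update for gradient descent (not LSTM-specific);
# the learning rate and momentum coefficient are arbitrary illustrative values.
def momentum_step(weights, grad, velocity, lr=0.01, mu=0.9):
    velocity = mu * velocity - lr * grad   # decaying accumulation of past gradients
    weights = weights + velocity           # move along the accumulated direction
    return weights, velocity

w = np.zeros(4)
v = np.zeros_like(w)
for _ in range(200):
    g = 2 * (w - 1.0)                      # gradient of the toy quadratic loss ||w - 1||^2
    w, v = momentum_step(w, g, v)
print(w)                                   # approaches the minimum at 1.0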
Maybe you can take a look at Coursera's Neural Networks course offered by the University of Toronto, and discuss with people there.
Or you can take a look at other examples on GitHub. For instance:
https://github.com/JANNLab/JANNLab/tree/master/examples/de/jannlab/examples
The best way to test an LSTM implementation (after gradient checking) is to try it out on the toy memory problems described in the original LSTM paper itself.
The best one that I often use is the 'Addition Problem':
We give a sequence of tuples of the form (value, mask). Value is a real valued scalar number between 0 and 1. Mask is a binary value - either 0 or 1.
0.23, 0
0.65, 0
...
0.86, 0
0.13, 1
0.76, 0
...
0.34, 0
0.43, 0
0.12, 1
0.09, 0
..
0.83, 0 -> 0.125
In the entire sequence of such tuples (usually of length 100), only 2 tuples should have the mask set to 1; the rest should have the mask set to 0. The target at the final time step is the average of the two values for which the mask was 1. The outputs at all time steps other than the last one are ignored. The values and the positions of the masks are chosen arbitrarily. Thus, this simple task shows whether your implementation can actually remember things over long periods of time.
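If it helps, here is a small sketch of a data generator for this task in Python/NumPy; the function name and the length-100 default are just illustrative choices.

import numpy as np

# Sketch of a generator for the 'Addition Problem' described above.
def make_addition_example(length=100, rng=np.random.default_rng()):
    values = rng.uniform(0.0, 1.0, size=length)        # real-valued scalars in [0, 1]
    mask = np.zeros(length)
    i, j = rng.choice(length, size=2, replace=False)   # exactly two marked positions
    mask[i] = mask[j] = 1.0
    target = (values[i] + values[j]) / 2.0             # average of the two marked values
    return np.stack([values, mask], axis=1), target    # (length, 2) inputs, scalar target

x, y = make_addition_example()
print(x.shape, y)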
I developed an image processing program that identifies what a number is, given an image of numbers. Each image is 27x27 pixels = 729 pixels. I take each R, G and B value, which means I have 2187 variables from each image (+1 for the intercept = a total of 2188).
I used the below gradient descent formula:
Repeat {
θj = θj − (α/m) ∑ (hθ(x) − y) xj
}
Where θj is the coefficient on variable j; α is the learning rate; hθ(x) is the hypothesis; y is the real value, and xj is the value of variable j. m is the number of training examples. hθ(x) and y are per training example (i.e. that's what the summation sign is for). Further, the hypothesis is defined as:
hθ(x) = 1/(1 + e^-z)
z = θ0 + θ1X1 + θ2X2 + θ3X3 + ... + θnXn
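In vectorized form (NumPy; the toy data and names below are illustrative, not taken from the question), the same update looks like:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One gradient descent step over all m examples at once:
# theta := theta - alpha * X^T (h(X theta) - y) / m
def gradient_step(theta, X, y, alpha):
    m = X.shape[0]                      # number of training examples
    h = sigmoid(X @ theta)              # hypothesis for every example at once
    return theta - alpha * (X.T @ (h - y)) / m

# toy usage: 5 examples, 3 variables (first column of ones is the intercept)
X = np.hstack([np.ones((5, 1)), np.random.rand(5, 2)])
y = np.array([0, 1, 0, 1, 1], dtype=float)
theta = np.zeros(3)
for _ in range(1000):
    theta = gradient_step(theta, X, y, alpha=0.1)
print(theta)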
With this, and 3000 training images, I was able to train my program in just over an hour and when tested on a cross validation set, it was able to identify the correct image ~ 67% of the time.
I wanted to improve that so I decided to attempt a polynomial of degree 2.
However, the number of variables jumps from 2188 to 2,394,766 per image! It takes me an hour just to do 1 step of gradient descent.
So my question is, how is this vast number of variables handled in machine learning? On the one hand, I don't have enough space to even hold that many variables for each training example. On the other hand, I am currently storing 2188 variables per training sample, but I have to perform O(n^2) multiplications just to get the value of each variable multiplied by every other variable (i.e. the degree-2 polynomial terms).
So any suggestions / advice is greatly appreciated.
try to use some dimensionality reduction first (PCA, kernel PCA, or LDA if you are classifying the images) - see the sketch after this list
vectorize your gradient descent - with most math libraries, or in MATLAB etc., it will run much faster
parallelize the algorithm and run it on multiple CPUs (though maybe your library for multiplying vectors already supports parallel computation)
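As a concrete sketch of the first suggestion (this assumes scikit-learn is available; the choice of 100 components and the random stand-in data are arbitrary):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(3000, 2187)        # stand-in for the 3000 training images' pixel features
pca = PCA(n_components=100)           # keep 100 components (arbitrary illustrative choice)
X_reduced = pca.fit_transform(X)      # shape (3000, 100)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())

Polynomial terms built on the reduced features are then far cheaper than on the raw 2187 pixel variables.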
Along with Jirka-x1's answer, I would first say that this is one of the key differences in working with image data as opposed to, say, text data for ML: high dimensionality.
Second... this is a duplicate, see How to approach machine learning problems with high dimensional input space?
The parity function takes a vector of n bits and outputs 1 if the sum is odd and 0 otherwise. This can be viewed as a classification task, where the n input bits are the features.
Is there any machine learning algorithm which would be able to learn this function? Clearly random decision forests would not succeed, since any strict subset of features has no predictive power. Also, I believe no neural network of a fixed depth would succeed, since computing the parity function is not in the complexity class AC0.
Polynomial SVMs can do this.
Encode zeros as 1 and ones as -1.
For n variables (bits), you need a polynomial kernel of degree n.
When the kernel is computed, it also implicitly computes the value x1 * x2 * ... * xn (where xi is the i-th input variable).
If the result is -1, you have an odd number of ones, otherwise you have an even number of ones.
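A quick sketch of this idea with scikit-learn (assuming it is available; n, C, coef0 and training on all 2^n patterns are illustrative choices, and the accuracy may vary with them):

import itertools
import numpy as np
from sklearn.svm import SVC

n = 6
X = np.array(list(itertools.product([0, 1], repeat=n)))
y = X.sum(axis=1) % 2                       # parity labels
X_enc = 1 - 2 * X                           # encode 0 -> +1, 1 -> -1 as described above

clf = SVC(kernel="poly", degree=n, coef0=1.0, C=1e6)
clf.fit(X_enc, y)
print((clf.predict(X_enc) == y).mean())     # accuracy on all 2^n patterns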
If I'm not mistaken, Neural Networks should also be able to compute it. As far as I remember, Neural Networks with 2 hidden layers and sigmoid units are able to learn any arbitrary function.
What about Gaussian Process Classification? You can train your model on the n-dimensional input vectors and the 1-dimensional parity-bit output. Then for any test input you can ask for a prediction. You can check this online book.
http://www.gaussianprocess.org/gpml/chapters/
Chapter 3 addresses the classification problem.
Neural Networks can represent and learn the parity function with a single hidden layer with the same number of neurons as inputs. The fact that the parity function is not in AC0 is a fact about circuits of boolean gates, but multi-layer perceptrons (as commonly used) can have real-valued weights, which makes all the difference.
An example of an explicit solution would be the following, assuming n inputs, n hidden units and a sign activation function (described for example in [1]):
Set all weights in the first layer to 1. This means that the pre-activation before the addition of the bias is the same for all hidden units and equal to the number of 1s in the input.
Set the bias of the first hidden unit to -0.5, the bias of the second hidden unit to -1.5, of the third hidden unit to -2.5, etc. This means that if there is no 1 in the input and the pre-activations before the addition of the bias are 0, the pre-activation after the addition of the bias is negative for all hidden units, and the activation will return 0 for all hidden units. If there is a single 1 in the input, only the pre-activation of the first hidden unit will be positive, and there will therefore be a single hidden unit that sends a 1 to the output. In general, if there are k 1s in the input, the first k hidden units will send a 1 to the output and the rest a 0.
Set the weights that connect the hidden units to the output to +1, -1, +1, -1, etc. This means that if there is no 1 in the input, the output will be 0. If there is a single 1 in the input, the output will be +1. If there are two 1s in the input, the output will again be +1-1=0, and so on.
That solves the parity problem.
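To check the construction, here is a small sketch that wires it up directly (using a 0/1 step in place of the sign activation, matching the 0/1 outputs in the description above) and verifies it against the true parity for every 6-bit input:

import itertools
import numpy as np

def parity_net(bits):
    # first-layer weights all 1, biases -0.5, -1.5, ..., output weights +1, -1, +1, ...
    n = len(bits)
    pre = sum(bits) + np.array([-(k + 0.5) for k in range(n)])  # hidden pre-activations
    h = (pre > 0).astype(float)                                 # step activation: 0 or 1
    w_out = np.array([(-1.0) ** k for k in range(n)])           # alternating output weights
    return int(h @ w_out)                                       # 1 if odd number of 1s, else 0

n = 6
assert all(parity_net(b) == sum(b) % 2
           for b in itertools.product([0, 1], repeat=n))
print("construction verified for all", 2 ** n, "inputs")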
You were asking, however, about machine learning algorithms that can learn this function. According to the section "Parity" in [2], the answer is that, at least for small n, back-propagation on a network with a single hidden layer can learn the function and, in fact, it actually learns a network very similar to the one described above.
[1] Franco, Leonardo, and Sergio A. Cannas. "Generalization properties of modular networks: implementing the parity function." IEEE transactions on neural networks 12.6 (2001): 1306-1313.
[2] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. No. ICS-8506. California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
I am taking my first steps in neural networks, and to do so I am experimenting with a very simple single-layer, single-output perceptron which uses a sigmoidal activation function. I am updating my weights online each time a training example is presented, using:
weights += learningRate * (correct - result) * {input,1}
Here weights is an n-length vector which also contains the weight from the bias neuron (the negative threshold), result is the result computed by the perceptron (and passed through the sigmoid) when given the input, correct is the correct result, and {input,1} is the input augmented with a 1 (the fixed input from the bias neuron). Now, when I try to train the perceptron to perform logical AND, the weights don't converge for a long time; instead they keep growing in a similar way and they maintain a ratio of circa -1.5 with the threshold. For instance, the three weights are, in sequence:
5.067160008240718 5.105631826680446 -7.945513136885797
...
8.40390853077094 8.43890306970281 -12.889540730182592
I would expect the perceptron to stop at 1, 1, -1.5.
Apart from this problem, which looks like it is connected to some missing stopping condition in the learning, if I try to use the identity function as the activation function, I get weight values oscillating around:
0.43601272528257057 0.49092558197172703 -0.23106430854347537
and I obtain similar results with tanh. I can't explain this.
Thank you
Tunnuz
It is because the sigmoid activation function never actually reaches one (or zero), even for very large positive (or negative) inputs. So (correct - result) will always be non-zero, and your weights will always get updated. Try it with the step function as the activation function (i.e. f(x) = 1 for x > 0, f(x) = 0 otherwise).
Your average weight values don't seem right for the identity activation function. It might be that your learning rate is a little high -- try reducing it and see if that reduces the size of the oscillations.
Also, when doing online learning (aka stochastic gradient descent), it is common practice to reduce the learning rate over time so that you converge to a solution. Otherwise your weights will continue to oscillate.
When trying to analyze the behavior of the perceptron, it also helps to look at correct and result.
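As an illustration of the last two points (reducing the learning rate over time and watching correct vs. result), here is a rough sketch of the online update for logical AND; the decay schedule and the initial values are arbitrary assumptions, not a prescription.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=3)                        # two inputs + bias weight
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]    # logical AND

for epoch in range(5000):
    lr = 1.0 / (1.0 + 0.01 * epoch)                            # decaying learning rate
    for x, correct in data:
        x_aug = np.array(x + [1.0])                            # input augmented with the bias input
        result = sigmoid(weights @ x_aug)
        weights += lr * (correct - result) * x_aug
print(weights)                                                 # growth slows as lr shrinks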
Is it a good practice to use sigmoid or tanh output layers in Neural networks directly to estimate probabilities?
i.e. the probability of a given input occurring is the output of the sigmoid function in the NN
EDIT
I wanted to use a neural network to learn and predict the probability of a given input occurring.
You may consider the input as a State1-Action-State2 tuple.
Hence the output of the NN is the probability that State2 happens when applying Action on State1.
I hope that clears things up.
EDIT
When training the NN, I apply a random Action to State1 and observe the resultant State2; then I teach the NN that the input State1-Action-State2 should result in the output 1.0.
First, just a couple of small points on conventional MLP lexicon (might help for internet searches, etc.): 'sigmoid' and 'tanh' are not 'output layers' but functions, usually referred to as "activation functions". The return value of the activation function is indeed the output of each layer, but it is not the output layer itself (nor does it calculate probabilities).
Additionally, your question presents a choice between two "alternatives" ("sigmoid and tanh"), but they are not actually alternatives; rather, the term 'sigmoidal function' is a generic/informal term for a class of functions which includes the hyperbolic tangent ('tanh') that you refer to.
The term 'sigmoidal' is probably due to the characteristic shape of the function--the return (y) values are constrained between two asymptotic values regardless of the x value. The function output is usually normalized so that these two values are -1 and 1 (or 0 and 1). (This output behavior, by the way, is obviously inspired by the biological neuron, which either fires (+1) or doesn't (-1)). A look at the key properties of sigmoidal functions shows why they are ideally suited as activation functions in feed-forward, backpropagating neural networks: (i) real-valued and differentiable, (ii) having exactly one inflection point, and (iii) having a pair of horizontal asymptotes.
In turn, the sigmoidal function is one category of functions used as the activation function (aka "squashing function") in FF neural networks solved using backprop. During training or prediction, the weighted sum of the inputs (for a given layer, one layer at a time) is passed in as an argument to the activation function, which returns the output for that layer. Another group of functions sometimes used as the activation function is the piecewise linear function. The step function is the binary variant of a PLF:
def step_fn(x):
    if x <= 0:
        return 0
    else:
        return 1
(On practical grounds, I doubt the step function is a plausible choice for the activation function, but perhaps it helps understand the purpose of the activation function in NN operation.)
I suppose there is an unlimited number of possible activation functions, but in practice you only see a handful; in fact, just two account for the overwhelming majority of cases (both are sigmoidal). Here they are (in Python) so you can experiment for yourself, given that the primary selection criterion is a practical one:
import math

# logistic function
def sigmoid2(x):
    return 1 / (1 + math.exp(-x))

# hyperbolic tangent
def sigmoid1(x):
    return math.tanh(x)
What are the factors to consider in selecting an activation function?
First, the function has to give the desired behavior (arising from, or as evidenced by, its sigmoidal shape). Second, the function must be differentiable. This is a requirement for backpropagation, the optimization technique used during training to learn the weights of the hidden layers.
For instance, the derivative of the hyperbolic tangent is (in terms of the output, which is how it is usually written):
def dsigmoid(y):
    return 1.0 - y**2
Beyond those two requirements, what makes one function better than another is how efficiently it trains the network--i.e., which one causes convergence (reaching the local minimum error) in the fewest epochs?
#-------- Edit (see OP's comment below) ---------#
I am not quite sure I understood--sometimes it's difficult to communicate the details of a NN without the code, so I should probably just say that it's fine, subject to this proviso: what you want the NN to predict must be the same as the dependent variable used during training. So, for instance, if you train your NN using two states (e.g., 0, 1) as the single dependent variable (which is obviously missing from your testing/production data), then that's what your NN will return when run in "prediction mode" (post-training, or with a competent weight matrix).
You should choose the right loss function to minimize.
The squared error does not lead to the maximum likelihood hypothesis here.
The squared error is derived from a model with Gaussian noise:
P(y|x,h) = k1 * e**-(k2 * (y - h(x))**2)
You estimate the probabilities directly. Your model is:
P(Y=1|x,h) = h(x)
P(Y=0|x,h) = 1 - h(x)
P(Y=1|x,h) is the probability that event Y=1 will happen after seeing x.
The maximum likelihood hypothesis for your model is:
h_max_likelihood = argmax_h product(
h(x)**y * (1-h(x))**(1-y) for x, y in examples)
This leads to the "cross entropy" loss function.
See chapter 6 in Mitchell's Machine Learning for the loss function and its derivation.
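For reference, the resulting binary cross-entropy loss in NumPy (a sketch; the clipping epsilon is only a numerical safeguard):

import numpy as np

def cross_entropy(h, y, eps=1e-12):
    # h: network outputs in (0, 1); y: labels, 0 or 1
    h = np.clip(h, eps, 1 - eps)                       # avoid log(0)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

# predictions close to the labels give a small loss
print(cross_entropy(np.array([0.9, 0.1, 0.8]), np.array([1.0, 0.0, 1.0])))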
There is one problem with this approach: if you have vectors from R^n and your network maps those vectors into the interval [0, 1], it will not be guaranteed that the network represents a valid probability density function, since the integral of the network is not guaranteed to equal 1.
E.g., a neural network could map every input from R^n to 1.0, but that is clearly not a valid probability density.
So the answer to your question is: no, you can't.
However, you can just say that your network never sees "unrealistic" samples and thus ignore this fact. For a discussion of this (and also some more cool information on how to model PDFs with neural networks), see contrastive backprop.