Unable to approximate the sine function using a neural network

I am trying to approximate the sine() function using a neural network I wrote myself. I have tested my neural network on a simple OCR problem already and it worked, but I am having trouble applying it to approximate sine(). My problem is that during training my error converges on exactly 50%, so I'm guessing it's completely random.
I am using one input neuron for the input (0 to PI), and one output neuron for the result. I have a single hidden layer in which I can vary the number of neurons but I'm currently trying around 6-10.
I have a feeling the problem is that I am using the sigmoid transfer function (a requirement in my application), which only outputs between 0 and 1, while the output of sine() is between -1 and 1. To correct this I tried multiplying the output by 2 and then subtracting 1, but that didn't fix the problem. I'm thinking I have to do some kind of conversion somewhere to make this work.
Any ideas?

Use a linear output unit.
Here is a simple example using R:
set.seed(1405)
x <- sort(10*runif(50))                       # 50 random inputs in [0, 10]
y <- sin(x) + 0.2*rnorm(x)                    # noisy sine targets
library(nnet)
nn <- nnet(x, y, size=6, maxit=40, linout=TRUE)  # linout=TRUE gives a linear output unit
plot(x, y)
plot(sin, 0, 10, add=TRUE)                    # true function, for comparison
x1 <- seq(0, 10, by=0.1)
lines(x1, predict(nn, data.frame(x=x1)), col="green")  # the network's fit

When you train the network, you should normalize the target (the sin function) to the range [0,1], then you can keep the sigmoid transfer function.
sin(x) in [-1,1] => 0.5*(sin(x)+1) in [0,1]
Train data:
input   target    target_normalized
-----------------------------------
0       0         0.5
pi/4    0.70711   0.85355
pi/2    1         1
...
Note that we mapped the target before training. Once you train and simulate the network, you can map the output of the net back.
The following MATLAB code illustrates this:
%% input and target
input = linspace(0,4*pi,200);
target = sin(input) + 0.2*randn(size(input));
% mapping
[targetMinMax,mapping] = mapminmax(target,0,1);
%% create network (one hidden layer with 6 nodes)
net = newfit(input, targetMinMax, [6], {'tansig' 'tansig'});
net.trainParam.epochs = 50;
view(net)
%% training
net = init(net); % init
[net,tr] = train(net, input, targetMinMax); % train
output = sim(net, input); % predict
%% view prediction
plot(input, mapminmax('reverse', output, mapping), 'r', 'linewidth',2), hold on
plot(input, target, 'o')
plot(input, sin(input), 'g')
hold off
legend({'predicted' 'target' 'sin()'})
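If MATLAB isn't available, here is a rough NumPy sketch of the same recipe: map the target into [0,1], keep a sigmoid at the output, and map back afterwards (the hidden size, learning rate, and iteration count are arbitrary choices, not tuned values):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(0, np.pi, 200).reshape(-1, 1)   # inputs in [0, pi]
t = 0.5 * (np.sin(x) + 1.0)                     # sin(x) mapped from [-1,1] into [0,1]

H = 10                                          # hidden units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(20000):
    h = sigmoid(x @ W1 + b1)                    # hidden activations
    y = sigmoid(h @ W2 + b2)                    # output stays in (0,1)
    dy = (y - t) * y * (1 - y)                  # backprop: sigmoid' = s*(1-s)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy / len(x); b2 -= lr * dy.mean(axis=0)
    W1 -= lr * x.T @ dh / len(x); b1 -= lr * dh.mean(axis=0)

approx = 2.0 * y - 1.0                          # map the output back to [-1, 1]
print(np.max(np.abs(approx - np.sin(x))))       # rough measure of fit quality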

There is no reason your network shouldn't work, although 6 is definitely on the low side for approximating a sine wave. I'd try at least 10, maybe even 20.
If that doesn't work, then I think you need to give more detail about your system: the learning algorithm (back-propagation?), the learning rate, etc.

I get the same behavior if I use vanilla gradient descent. Try using a different training algorithm.
As far as the Java applet is concerned, I did notice something interesting: it does converge if I use a "bipolar sigmoid" and I start with some non-random weights (such as results from a previous training using a Quadratic function).

Related

LSTM vs. Hidden Layer Training in Tensorflow

I am messing around with LSTMs and have a conceptual question. I created a matrix of bogus data based on the following rules:
For each 1-D list in the matrix:
If the previous element is less than 10, then the next element is the previous one plus 1.
Else, this element is sin(previous element).
This way, it is a sequence that is based quite simply on the previous information. I set up an LSTM to learn the recurrence and ran it to train on the lists one at a time. I have an LSTM layer followed by a fully connected feed-forward layer. It learns the +1 step very easily, but has trouble with the sin step: when the previous element is greater than 10, it seemingly picks a random number between -1 and 1 for the next element. My question is this: is the training only modifying the variables in my fully connected feed-forward layer? Is that why it can't learn the non-linear sin function?
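For concreteness, the bogus data could be generated like this (the start value and sequence length are assumptions; the question doesn't give them):

import math

def make_sequence(start=0.0, length=50):
    seq = [start]
    for _ in range(length - 1):
        prev = seq[-1]
        if prev < 10:
            seq.append(prev + 1)        # the +1 step
        else:
            seq.append(math.sin(prev))  # the sin step
    return seq

print(make_sequence()[:15])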
Here's the code snippet in question:
# Imports for the TensorFlow 0.x-era API used below (these modules moved in later TF versions);
# lstmSize, OS, and the input list x are defined elsewhere in the script.
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

lstm = rnn_cell.LSTMCell(lstmSize)
y_ = tf.placeholder(tf.float32, [None, OS])                  # targets
outputs, state = rnn.rnn(lstm, x, dtype=tf.float32)          # run the LSTM over the sequence
outputs = tf.transpose(outputs, [1, 0, 2])
last = tf.gather(outputs, int(outputs.get_shape()[0]) - 1)   # meant to pick out the last output
weights = tf.Variable(tf.truncated_normal([lstmSize, OS]))
bias = tf.Variable(tf.constant(0.1, shape=[OS]))
y = tf.nn.elu(tf.matmul(last, weights) + bias)               # fully connected layer on top
error = tf.reduce_mean(tf.square(tf.sub(y_, y)))             # mean squared error (tf.sub is pre-1.0 API)
train_step = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(error)
The error and shape organization seems to be correct, at least in the sense that it does learn the +1 step quickly without crashing. Shouldn't the LSTM be able to handle the non-linear sin function? It seems almost trivially easy, so my guess is that I set something up wrong and the LSTM isn't learning anything.

Calculating the error difference in neural network

I am somewhat new to neural networks and I need some help to understand the basics. I am trying to create a single neuron with two inputs, with a bias and an output.
The process is like this:
output = w1 * x + w2 * y + bias * wb
Here x and y are the inputs, w1, w2, and wb are the weights, and the bias is 0.5.
After that the output goes through the sigmoid function.
sout = S(output)
For testing I am trying to make the neuron act as 'and' and 'or' gates.
So my questions are:
To calculate the difference between the target and the output, do I have to run the target (0 or 1) through the sigmoid function as well and then calculate the difference between them?
Or do I just calculate the difference between the target (0 or 1) and the output that comes out of the sigmoid function?
Also, the variation of error in the 'and' and 'or' functions differs as the epochs progress. The 'and' function's error variation is awkward, but the 'or' function's error variation is acceptable. Why is the 'and' function giving such a weird chart, with the error going both up and down?
The or error chart
The and error chart
Thanks
The delta to calculate is the second one you proposed. You pass your input (x, y) through the network and take the difference between the associated output and the target value (0 or 1). This assumes you are attempting to perform a binary classification task where the target value is either 0 or 1.
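To make that concrete, here is a minimal sketch of the computation (function and variable names are illustrative, not from the question):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron_delta(x, y, target, w1, w2, wb, bias=0.5):
    out = sigmoid(w1 * x + w2 * y + bias * wb)  # squash the raw sum
    return target - out                         # raw target vs. squashed output

# one AND-gate training case: inputs (1, 1) should give 1
print(neuron_delta(1, 1, target=1, w1=0.3, w2=0.3, wb=0.1))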

BackPropagation Neuron Network Approach - Design

I am trying to make a digit recognition program. I shall feed a white/black image of a digit and my output layer will fire the corresponding digit (one neuron shall fire, out of the 0 -> 9 neurons in the Output Layer). I finished implementing a two-dimensional BackPropagation Neural Network. My topology sizes are [5][3] -> [3][3] -> 1[10]. So it's one 2-D Input Layer, one 2-D Hidden Layer and one 1-D Output Layer. However I am getting weird and wrong results (Average Error and Output Values).
Debugging at this stage is kind of time consuming. Therefore, I would love to hear if this is the correct design so I continue debugging. Here are the flow steps of my implementation:
Build the Network: One Bias on each Layer except on the Output Layer (No Bias). A Bias's output value is always = 1.0, however its Connections Weights get updated on each pass like all other neurons in the network. All Weights range 0.000 -> 1.000 (no negatives)
Get the input data (0 or 1) and set the nth value as the nth Neuron's Output Value in the input layer.
Feed Forward: On each Neuron 'n' in every Layer (except the Input Layer):
Get the result of SUM (Output Value * Connection Weight) of the connected Neurons from the previous layer towards this nth Neuron.
Get TanHyperbolic - Transfer Function - of this SUM as Results
Set Results as the Output Value of this nth Neuron
Get Results: Take Output Values of Neurons in the Output Layer
BackPropagation:
Calculate Network Error: on the Output Layer, get SUM Neurons' (Target Values - Output Values)^2. Divide this SUM by the size of the Output Layer. Get its SquareRoot as Result. Compute Average Error = (OldAverageError * SmoothingFactor * Result) / (SmoothingFactor + 1.00)
Calculate Output Layer Gradients: for each Output Neuron 'n', nth Gradient = (nth Target Value - nth Output Value) * nth Output Value TanHyperbolic Derivative
Calculate Hidden Layer Gradients: for each Neuron 'n', get SUM (TanHyperbolic Derivative of a weight going from this nth Neuron * Gradient of the destination Neuron) as Results. Assign (Results * this nth Output Value) as the Gradient.
Update all Weights: Starting from the hidden Layer and back to the Input Layer, for nth Neuron: Compute NewDeltaWeight = (NetLearningRate * nth Output Value * nth Gradient + Momentum * OldDeltaWeight). Then assign New Weight as (OldWeight + NewDeltaWeight)
Repeat process.
Here is my attempt for digit number seven. The outputs are Neuron # zero and Neuron # six. Neuron # six should be carrying 1 and Neuron # zero should be carrying 0. In my results, all Neurons other than # six are carrying the same value (# zero is a sample).
Sorry for the long post. If you know this, then you probably know how cool it is and how much there is to fit into a single post. Thank you in advance
Softmax with log-loss is typically used as the output-layer activation function for multiclass problems. Yours is multiclass/multinomial: the 10 possible digits comprise the 10 classes.
So you can try changing your output layer activation function to softmax:
http://en.wikipedia.org/wiki/Softmax_function
Artificial neural networks: In neural network simulations, the softmax function is often implemented at the final layer of a network used for classification. Such networks are then trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression.
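For reference, a numerically stable softmax with log-loss can be sketched in a few lines (plain NumPy, independent of whatever framework is being used; the logits below are made-up values):

import numpy as np

def softmax(z):
    z = z - z.max()               # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def log_loss(probs, true_class):
    return -np.log(probs[true_class])

logits = np.array([0.2, -1.0, 3.0, 0.5, 0.1, 0.0, 2.0, -0.3, 0.4, 0.1])  # one score per digit
probs = softmax(logits)
print(probs.sum())                # 1.0: a proper distribution over the 10 classes
print(log_loss(probs, 6))         # penalty if the true digit is 6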
Let us know what effect that has.

Does it makes any sense that weights and threshold are growing proportionally when training my perceptron?

I am taking my first steps in neural networks, and to do so I am experimenting with a very simple single-layer, single-output perceptron which uses a sigmoidal activation function. I am updating my weights on-line each time a training example is presented, using:
weights += learningRate * (correct - result) * {input,1}
Here weights is an n-length vector that also contains the weight from the bias neuron (the negated threshold), result is the result computed by the perceptron (and passed through the sigmoid) when given the input, correct is the correct result, and {input,1} is the input augmented with a 1 (the fixed input from the bias neuron). Now, when I try to train the perceptron to perform logical AND, the weights don't converge for a long time; instead, they keep growing at similar rates and maintain a ratio of circa -1.5 with the threshold. For instance, the three weights are, in sequence:
5.067160008240718 5.105631826680446 -7.945513136885797
...
8.40390853077094 8.43890306970281 -12.889540730182592
I would expect the perceptron to stop at 1, 1, -1.5.
Apart from this problem, which looks connected to some missing stopping condition in the learning, if I try to use the identity function as the activation function, I get weight values oscillating around:
0.43601272528257057 0.49092558197172703 -0.23106430854347537
and I obtain similar results with tanh. I can't find an explanation for this.
Thank you
Tunnuz
It is because the sigmoid activation function doesn't reach one (or zero) even for very large positive (or negative) inputs. So (correct - result) will always be non-zero, and your weights will always get updated. Try it with the step function as the activation function (i.e. f(x) = 1 for x > 0, f(x) = 0 otherwise).
Your average weight values don't seem right for the identity activation function. It might be that your learning rate is a little high -- try reducing it and see if that reduces the size of the oscillations.
Also, when doing online learning (aka stochastic gradient descent), it is common practice to reduce the learning rate over time so that you converge to a solution. Otherwise your weights will continue to oscillate.
When trying to analyze the behavior of the perceptron, it helps to also look at correct and result.
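Here is a minimal sketch of the suggested fix, training the AND function online with a step activation (the learning rate and epoch count are arbitrary choices):

def step(z):
    return 1.0 if z > 0 else 0.0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0, 0.0]   # w[2] is the bias weight (its input is fixed at 1)
lr = 0.1

for _ in range(100):
    for (x, y), correct in data:
        result = step(w[0] * x + w[1] * y + w[2] * 1)
        for i, v in enumerate((x, y, 1)):
            w[i] += lr * (correct - result) * v

print(w)  # the weights stop changing once (correct - result) is 0 on every example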

What is the role of the bias in neural networks? [closed]

I'm aware of the gradient descent and the back-propagation algorithm. What I don't get is: when is using a bias important and how do you use it?
For example, when mapping the AND function, when I use two inputs and one output, it does not give the correct weights. However, when I use three inputs (one of which is a bias), it gives the correct weights.
I think that biases are almost always helpful. In effect, a bias value allows you to shift the activation function to the left or right, which may be critical for successful learning.
It might help to look at a simple example. Consider this 1-input, 1-output network that has no bias:
The output of the network is computed by multiplying the input (x) by the weight (w0) and passing the result through some kind of activation function (e.g. a sigmoid function.)
Here is the function that this network computes, for various values of w0:
Changing the weight w0 essentially changes the "steepness" of the sigmoid. That's useful, but what if you wanted the network to output 0 when x is 2? Just changing the steepness of the sigmoid won't really work -- you want to be able to shift the entire curve to the right.
That's exactly what the bias allows you to do. If we add a bias to that network, like so:
...then the output of the network becomes sig(w0*x + w1*1.0). Here is what the output of the network looks like for various values of w1:
Having a weight of -5 for w1 shifts the curve to the right, which allows us to have a network that outputs 0 when x is 2.
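A quick numerical check of that shift (w0 = 1 and w1 = -5, as in the example above):

import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

w0, w1 = 1.0, -5.0
for x in (0, 2, 5, 8):
    print(x, round(sig(w0 * x + w1 * 1.0), 3))
# 0 -> 0.007, 2 -> 0.047, 5 -> 0.5, 8 -> 0.953:
# the whole curve is shifted right, so the output is ~0 at x = 2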
A simpler way to understand the bias: it is somewhat similar to the constant b of a linear function
y = ax + b
It allows you to move the line up and down to fit the prediction with the data better.
Without b, the line always goes through the origin (0, 0) and you may get a poorer fit.
Here are some further illustrations showing the result of a simple 2-layer feed forward neural network with and without bias units on a two-variable regression problem. Weights are initialized randomly and standard ReLU activation is used. As the answers before me concluded, without the bias the ReLU-network is not able to deviate from zero at (0,0).
Two different kinds of parameters can be adjusted during the training of an ANN: the weights and the value in the activation functions. This is impractical and it would be easier if only one of the parameters should be adjusted. To cope with this problem a bias neuron is invented. The bias neuron lies in one layer, is connected to all the neurons in the next layer, but none in the previous layer, and it always emits 1. Since the bias neuron emits 1, the weights connected to the bias neuron are added directly to the combined sum of the other weights (equation 2.1), just like the t value in the activation functions. [1]
The reason it's impractical is that you're simultaneously adjusting the weight and the value, so any change to the weight can neutralize a change to the value that was useful for a previous data instance... adding a bias neuron whose emitted value never changes allows you to control the behavior of the layer.
Furthermore the bias allows you to use a single neural net to represent similar cases. Consider the AND boolean function represented by the following neural network:
(source: aihorizon.com)
w0 corresponds to b.
w1 corresponds to x1.
w2 corresponds to x2.
A single perceptron can be used to represent many boolean functions. For example, if we assume boolean values of 1 (true) and -1 (false), then one way to use a two-input perceptron to implement the AND function is to set the weights w0 = -.8, and w1 = w2 = .5. This perceptron can be made to represent the OR function instead by altering the threshold to w0 = -.3. In fact, AND and OR can be viewed as special cases of m-of-n functions: that is, functions where at least m of the n inputs to the perceptron must be true. The OR function corresponds to m = 1 and the AND function to m = n. Any m-of-n function is easily represented using a perceptron by setting all input weights to the same value (e.g., 0.5) and then setting the threshold w0 accordingly.
Perceptrons can represent all of the primitive boolean functions AND, OR, NAND (¬AND), and NOR (¬OR). (Machine Learning, Tom Mitchell)
The threshold is the bias and w0 is the weight associated with the bias/threshold neuron.
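A tiny script can verify the m-of-n construction from the quote (here with 0/1 inputs, which is an assumption on my part; the thresholds are the ones given above):

def perceptron(x1, x2, w0):
    # fires when 0.5*x1 + 0.5*x2 clears the threshold -w0
    return 1 if 0.5 * x1 + 0.5 * x2 + w0 > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        and_out = perceptron(x1, x2, -0.8)  # m = n = 2
        or_out = perceptron(x1, x2, -0.3)   # m = 1
        print(x1, x2, and_out, or_out)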
The bias is not an NN-specific term. It's a generic algebra concept:
Y = M*X + C (straight-line equation)
Now if C (the bias) = 0, the line always passes through the origin (0,0) and depends on only one parameter, the slope M, so we have fewer things to play with.
C, the bias, can take any value and shifts the graph, and is hence able to represent more complex situations.
In logistic regression, the expected value of the target is transformed by a link function to restrict its value to the unit interval. In this way, model predictions can be viewed as probabilities of the primary outcome, as shown:
Sigmoid function on Wikipedia
The sigmoid is the final activation layer in the NN, the one that switches the neuron on and off. Here too the bias has a role to play: it shifts the curve flexibly to help us fit the model.
A layer in a neural network without a bias is nothing more than the multiplication of an input vector with a matrix. (The output vector might be passed through a sigmoid function for normalisation and for use in multi-layered ANN afterwards, but that’s not important.)
This means that you’re using a linear function and thus an input of all zeros will always be mapped to an output of all zeros. This might be a reasonable solution for some systems but in general it is too restrictive.
Using a bias, you're effectively adding another dimension to your input space, which always takes the value one, so you're avoiding an input vector of all zeros. You don't lose any generality by this, because your trained weight matrix need not be surjective, so it can still map to all values previously possible.
2D ANN:
For an ANN mapping two dimensions to one dimension, as in reproducing the AND or the OR (or XOR) function, you can think of the network as doing the following:
On the 2D plane, mark all positions of the input vectors. So, for boolean values, you'd mark (-1,-1), (1,1), (-1,1), (1,-1). What your ANN then does is draw a straight line on the 2D plane, separating the positive output values from the negative ones.
Without bias, this straight line has to go through zero, whereas with bias, you’re free to put it anywhere.
So you'll see that without a bias you're facing a problem with the AND function, since you can't put both (1,-1) and (-1,1) on the negative side. (They are not allowed to be on the line.) The problem is the same for the OR function. With a bias, however, it's easy to draw the line.
Note that the XOR function in that situation can’t be solved even with bias.
When you use ANNs, you rarely know about the internals of the systems you want to learn. Some things cannot be learned without a bias. E.g., have a look at the following data: (0, 1), (1, 1), (2, 1), basically a function that maps any x to 1.
If you have a one layered network (or a linear mapping), you cannot find a solution. However, if you have a bias it's trivial!
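The constant-function example can be checked directly with a least-squares fit (NumPy, for illustration):

import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 1.0, 1.0])

# Without bias: y ~ w*x. The best w still leaves large errors.
w = (x @ y) / (x @ x)
print(w, x * w)            # w = 0.6 -> predictions [0. 0.6 1.2], far from 1,1,1

# With bias: y ~ w*x + b. The exact solution is w = 0, b = 1.
X = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)                # [0. 1.]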
In an ideal setting, a bias could also map all points to the mean of the target points and let the hidden neurons model the differences from that point.
Modification of neuron WEIGHTS alone only serves to manipulate the shape/curvature of your transfer function, and not its equilibrium/zero crossing point.
The introduction of bias neurons allows you to shift the transfer function curve horizontally (left/right) along the input axis while leaving the shape/curvature unaltered.
This will allow the network to produce arbitrary outputs different from the defaults and hence you can customize/shift the input-to-output mapping to suit your particular needs.
See here for graphical explanation:
http://www.heatonresearch.com/wiki/Bias
In a couple of experiments in my master's thesis (e.g. page 59), I found that the bias might be important for the first layer(s), but it seems not to play a big role at the fully connected layers at the end.
This might be highly dependent on the network architecture / dataset.
If you're working with images, you might actually prefer to not use a bias at all. In theory, that way your network will be more independent of data magnitude, i.e. whether the picture is dark or bright and vivid. And the net is going to learn to do its job by studying the relative differences inside your data. Lots of modern neural networks utilize this.
For other data, having biases might be critical. It depends on what type of data you're dealing with. If your information is magnitude-invariant, i.e. if inputting [1,0,0.1] should lead to the same result as inputting [100,0,10], you might be better off without a bias.
Bias determines by how much the angle of your weight vector rotates.
In a two-dimensional chart, the weight and bias help us find the decision boundary of the outputs.
Say we need to build an AND function; the input(p)-output(t) pairs should be
{p=[0,0], t=0}, {p=[1,0], t=0}, {p=[0,1], t=0}, {p=[1,1], t=1}
Now we need to find a decision boundary, and the ideal boundary should be:
See? W is perpendicular to our boundary. Thus, we say W decides the direction of the boundary.
However, it is hard to find the correct W on the first try. Mostly, we choose the initial W randomly. Thus, the first boundary may be this:
Now the boundary is parallel to the y axis.
We want to rotate the boundary. How?
By changing W.
So, we use the learning rule W' = W + P:
W' = W + P is equivalent to W' = W + bP, with b = 1.
Therefore, by changing the value of b (the bias), you can decide the angle between W' and W. That is "the learning rule of ANN".
You could also read Neural Network Design by Martin T. Hagan / Howard B. Demuth / Mark H. Beale, chapter 4, "Perceptron Learning Rule". A sketch of that rule follows.
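Here is a minimal sketch of that perceptron rule, with the bias updated alongside the weights (the specific numbers are arbitrary examples, not taken from the book):

import numpy as np

def hardlim(z):
    # step activation used in the perceptron chapter
    return (z > 0).astype(float)

W = np.array([[1.0, -0.8]])   # initial (random) weight vector
b = np.array([0.0])           # bias

p = np.array([1.0, 2.0])      # one input vector
t = np.array([1.0])           # its target

a = hardlim(W @ p + b)        # current output
e = t - a                     # error (e = t - a)
W = W + np.outer(e, p)        # W' = W + e * p^T: rotates the boundary toward p
b = b + e                     # b' = b + e: shifts the boundary
print(W, b)                   # [[2.  1.2]] [1.]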
In simpler terms, biases allow more and more variations of the weights to be learnt/stored... (side note: sometimes given some threshold). Anyway, more variations mean that biases add a richer representation of the input space to the model's learnt/stored weights. (Better weights can enhance the neural net's guessing power.)
For example, in learning models, the hypothesis/guess is desirably bounded by y=0 or y=1 given some input, say in some classification task... i.e. some y=0 for some x=(1,1) and some y=1 for some x=(0,1). (The condition on the hypothesis/outcome is the threshold mentioned above. Note that my examples set up each input x as a double or 2-valued vector, instead of Nate's single-valued x inputs from some collection X.)
If we ignore the bias, many inputs may end up being represented by a lot of the same weights (i.e. the learnt weights mostly occur close to the origin (0,0)).
The model would then be limited to a poorer set of good weights, instead of the many more good weights it could learn with a bias. (Poorly learnt weights lead to poorer guesses, or a decrease in the neural net's guessing power.)
So, it is optimal that the model learns both close to the origin and in as many places as possible inside the threshold/decision boundary. With the bias we can enable degrees of freedom close to the origin, but not limited to the origin's immediate region.
In neural networks:
Each neuron has a bias.
You can view the bias as a threshold (generally the opposite value of the threshold).
The weighted sum from the input layer plus the bias decides the activation of a neuron.
Bias increases the flexibility of the model.
In the absence of a bias, the neuron may not be activated by considering only the weighted sum from the input layer. If the neuron is not activated, the information from this neuron is not passed through the rest of the neural network.
The value of the bias is learnable.
Effectively, bias = -threshold. You can think of the bias as how easy it is to get the neuron to output a 1: with a really big bias, it's very easy for the neuron to output a 1, but if the bias is very negative, then it's difficult.
In summary: the bias helps control the value at which the activation function will trigger.
Follow this video for more details.
A few more useful links:
geeksforgeeks
towardsdatascience
Expanding on zfy's explanation:
The equation for one input, one neuron, one output should look like:
y = a * x + b * 1 and out = f(y)
where x is the value from the input node and 1 is the value of the bias node;
y can be directly your output or be passed into a function, often a sigmoid function. Also note that the bias could be any constant, but to make everything simpler we always pick 1 (and probably that's so common that zfy did it without showing & explaining it).
Your network is trying to learn coefficients a and b to adapt to your data.
So you can see why adding the element b * 1 allows it to fit better to more data: now you can change both slope and intercept.
If you have more than one input your equation will look like:
y = a0 * x0 + a1 * x1 + ... + aN * 1
Note that the equation still describes a one neuron, one output network; if you have more neurons you just add one dimension to the coefficient matrix, to multiplex the inputs to all nodes and sum back each node contribution.
That you can write in vectorized form as
A = [a0, a1, ..., aN], X = [x0, x1, ..., 1]
Y = A · X^T
i.e. putting the coefficients in one array and (inputs + bias) in another, you have your desired solution as the dot product of the two vectors (you need to transpose X for the shapes to be correct; I wrote X^T for 'X transposed').
So in the end you can also see your bias as is just one more input to represent the part of the output that is actually independent of your input.
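In code, folding the bias in as a constant 1 input looks like this (NumPy; the numbers are arbitrary examples):

import numpy as np

A = np.array([0.7, -1.2, 0.4])   # [a0, a1, b]: the weights plus the bias
X = np.array([2.0, 3.0, 1.0])    # [x0, x1, 1]: the inputs plus the constant 1

y = A @ X                        # dot product = a0*x0 + a1*x1 + b*1
print(y)                         # 0.7*2 - 1.2*3 + 0.4 = -1.8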
To think about it in a simple way: if you have y = w1*x, where y is your output and w1 is the weight, imagine a condition where x = 0; then y = w1*x equals 0.
If you want to update your weight, you compute a change with delw = target - y, where target is your target output. But since x = 0, the output stays 0 no matter how w1 changes, so the update gets you nowhere. So, suppose you can add some extra value: then y = w1*x + w0*1, where bias = 1, and the weight w0 can be adjusted to obtain a correct bias. Consider the example below.
In terms of slope and intercept, the bias is a specific part of the linear equation:
y = mx + b
Check the image:
[image: a line with y-intercept b]
Here b is (0,2).
If you want to increase it to (0,3), how will you do it? By changing the value of b, the bias.
In all the ML books I studied, W is always defined as the connectivity index between two neurons, meaning the higher the connectivity between two neurons, the stronger the signals transmitted from the firing neuron to the target neuron, i.e. Y = w * X. To maintain the biological character of neurons we need to keep 1 >= W >= -1, but in real regression W ends up with |W| >= 1, which contradicts how neurons work.
As a result, I propose W = cos(theta), with 1 >= |cos(theta)|, and Y = a * X = W * X + b, while a = b + W = b + cos(theta), and b is an integer.
Bias acts as our anchor. It's a way for us to have some kind of baseline that we don't go below. In terms of a graph, think of y = mx + b: it's like the y-intercept of this function.
output = (input * weight) + bias, then apply an activation function.
The term bias is used to adjust the final output matrix as the y-intercept does. For instance, in the classic equation, y = mx + c, if c = 0, then the line will always pass through 0. Adding the bias term provides more flexibility and better generalisation to our neural network model.
The bias helps to get a better equation.
Imagine the input and output as a function y = ax + b: you need to put the right line between the input (x) and the output (y) to minimize the global error between each point and the line. If you keep the equation as y = ax, you will have only one parameter to adapt, and even if you find the best a minimizing the global error, it will be kind of far from the wanted value.
You can say the bias makes the equation more flexible to adapt to the best values.
