Is the Bias unit in neural networks always one? - machine-learning

I have been studying neural networks for a couple of weeks and noticed that all guides and documentation either never mention the bias unit or always assume it to be 1.
Is there any reason or case where we would want a bias unit not to be 1, or would want to have it as an adjustable parameter in the network?
Edit: I'm sorry, I'm new to Stack Overflow; I found similar questions here, so I thought this was a good place to ask. Thank you for correcting me.
Edit: When people refer to the bias, they are in most cases referring to the bias weight:
[diagram: bias weight vs. bias unit]
The bias unit is also the reason we get the equation for the bias Δb in back-propagation as:
Δb = ΔY * 1 (the * 1 is just normally left out as it has no effect on the equation)
Hope that clears things up.

This question is better suited for Cross Validated or maybe Data Science (it is not about code at all).
I think you have a misunderstanding: the bias term is a trainable parameter that is also learned and updated during training.
I think I know the source of your confusion (correct me if I'm wrong). In many places, the bias term is incorporated into the input vector x as a constant 1 element.
So if we have the following input:
x = (x1, ..., xn)
the output of some operation can be written as:
y = w1*x1 + ... + wn*xn + b
where the trained parameters are:
w = (w1, ..., wn) and the bias b.
But it can also be written in the following way:
y = w1*x1 + ... + wn*xn + b*1, i.e. y = w'·x' with x' = (x1, ..., xn, 1) and w' = (w1, ..., wn, b).
But despite the constant 1 in the input, since b is still one of the trainable parameters, the bias can still be anything.
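A minimal NumPy sketch of this equivalence (the variable names and values are mine, purely for illustration):

import numpy as np

x = np.array([0.5, -1.2, 3.0])   # input features
w = np.array([0.1, 0.4, -0.2])   # trained weights
b = 0.7                          # trained bias: just another learned parameter

y_explicit = w @ x + b           # output with an explicit bias term

# Equivalent form: append a constant 1 to the input (the "bias unit")
# and absorb the bias into the weight vector.
x_aug = np.append(x, 1.0)        # x' = (x1, ..., xn, 1)
w_aug = np.append(w, b)          # w' = (w1, ..., wn, b)
y_augmented = w_aug @ x_aug

assert np.isclose(y_explicit, y_augmented)
# The bias unit is fixed at 1; the bias b itself can be anything.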

Related

Why does random initialization of weights in neural network work?

The question of why the weights of a neural network cannot be initialized as 0's has been asked plenty of times. The answer is straightforward: zero initial weights would result in all nodes in a layer learning the same thing, hence the symmetry has to be broken.
However, what I fail to comprehend is why initializing the weights to some random numbers close to zero would work. Even more advanced initialization techniques such as Xavier modify only the variance, which remains close to zero. Some answers in the linked question point to the existence of multiple local optima, but I seriously doubt the validity of this argument, for the following reasons:
The (usual) cost function of an individual logistic regression has a unique minimum. Nonetheless, this insight may not generalize to more than one node, so let's forget it for now.
Assume for the sake of argument that multiple local optima exist. Then shouldn't the proper randomization technique be Monte-Carlo-ish-ly over the entire domain of possible weights, rather than some random epsilons about zero? What's stopping the weights from converging to the same values again after a couple of iterations? The only rationale I can think of is that there exists a global maximum at the origin and all local optima are nicely spread 'radially', so that a tiny perturbation in any direction is sufficient to move you down the gradient towards a different local optimum, which is highly improbable.
PS1: I am asking the question here on the main Stack Overflow site because my reference is here.
PS2: The answer to why the variance of the initial weights is scaled this way can be found here. However, it did not address my question of why random initialization would work at all given the possibility of the weights converging to the same values, or rather, of why the weights would 'diverge' to 'learn' different features.
You've hit the main reason: we need the initial weights to differ so that the kernels (nodes) differentiate their learning.
First of all, random initialization doesn't always work; depending on how closely you've tuned your model structure and hyper-parameters, sometimes the model fails to converge; this is obvious from the loss function in the early iterations.
For some applications, there are local minima. However, in practical use, the happy outgrowth of problem complexity is that those minima have very similar accuracy. In short, it doesn't matter which solution we find, so long as we find one. For instance, in image classification (e.g. the ImageNet contest), there are many features useful in identifying photos. As with (simpler) PCA, when we have a set of features that correlate highly with the desired output and with each other, it doesn't matter which set we use. Those features are cognate to the kernels of a CNN.
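To make the symmetry-breaking argument concrete, here is a toy NumPy sketch (my own construction, not from the question) showing that identically initialized hidden units receive identical gradients, while random initialization breaks the tie from the first step:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # toy batch: 8 samples, 3 features
y = rng.normal(size=(8, 1))            # toy regression targets

def hidden_gradient(W1, W2):
    # Gradient of the MSE loss w.r.t. W1 for a one-hidden-layer tanh net.
    H = np.tanh(X @ W1)                # hidden activations, shape (8, 4)
    d_out = 2 * (H @ W2 - y) / len(X)  # dLoss/dOutput for MSE
    d_H = (d_out @ W2.T) * (1 - H**2)  # backprop through tanh
    return X.T @ d_H                   # dLoss/dW1, shape (3, 4)

# Constant init: every hidden unit is identical and receives the same
# gradient column, so the units can never differentiate.
g_const = hidden_gradient(np.full((3, 4), 0.5), np.full((4, 1), 0.5))
print(np.allclose(g_const, g_const[:, [0]]))   # True

# Small random init: the gradient columns differ from the very first step.
g_rand = hidden_gradient(rng.normal(0, 0.01, (3, 4)), rng.normal(0, 0.01, (4, 1)))
print(np.allclose(g_rand, g_rand[:, [0]]))     # False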

Neural network input with exponential decay

Often, to improve learning rates, inputs to a neural network are preprocessed by scaling and shifting to be between -1 and 1. I'm wondering though if that's a good idea with an input whose graph would be exponentially decaying. For instance, if I had an input with integer values 0 to 100 distributed with the majority of inputs being 0 and smaller values being more common than large values, with 99 being very rare.
It seems that scaling and shifting them wouldn't be ideal, since the most common value would then be -1. How is this type of input best dealt with?
Consider that you're using a sigmoid-style activation function which is symmetric around the origin, e.g. f(x) = tanh(x), which maps into (-1, 1) with f(0) = 0.
The trick to speed up convergence is to have the mean of the normalized data set be 0 as well. The choice of activation function is important because you're not only learning weights from the input to the first hidden layer; i.e., normalizing the input is not enough. The input to the second hidden layer/output is learned as well and thus needs to obey the same rule to be consequential. For non-input layers this is done by the activation function. The much-cited Efficient BackProp paper by LeCun summarizes these rules and has some nice explanations as well, which you should look up, because there are other things, like weight and bias initialization, that one should consider too.
In chapter 4.3 he gives a formula to normalize the inputs so that the mean is close to 0 and the standard deviation is 1. If you need more sources, this is a great FAQ as well.
I don't know your application scenario, but if you're using symbolic data and 0-100 is meant to represent percentages, then you could also apply a softmax to the input layer to get better input representations. It's also worth noting that some people prefer scaling to [.1, .9] instead of [0, 1].
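As a hedged sketch of that normalization (the standard zero-mean/unit-variance transform the chapter's formula amounts to, plus an optional log transform for heavy skew, which is my own suggestion rather than from the answer):

import numpy as np

def standardize(x, eps=1e-8):
    # Shift to zero mean, scale to unit standard deviation.
    return (x - x.mean()) / (x.std() + eps)

rng = np.random.default_rng(42)
# Skewed input like the question's: mostly small values, 99 very rare.
raw = np.minimum(rng.exponential(scale=5.0, size=1000), 100.0)

z = standardize(raw)
print(z.mean(), z.std())           # ~0.0 and ~1.0

# For heavily skewed data, compressing the tail first can help:
z_log = standardize(np.log1p(raw))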
Edit: Rewritten to match comments.

Things to try when Neural Network not Converging

One of the most popular questions regarding neural networks seems to be:
Help!! My Neural Network is not converging!!
See here, here, here, here and here.
So, after eliminating any error in the implementation of the network, what are the most common things one should try?
I know that the things to try would vary widely depending on network architecture.
But by tweaking which parameters (learning rate, momentum, initial weights, etc.) and by implementing which new features (windowed momentum?) were you able to overcome similar problems while building your own neural net?
Please give answers which are language-agnostic if possible. This question is intended to give some pointers to people stuck with neural nets that are not converging.
If you are using ReLU activations, you may have a "dying ReLU" problem. In short, under certain conditions, any neuron with a ReLU activation can be subject to a (bias) adjustment that leads to it never being activated ever again. It can be fixed with a "Leaky ReLU" activation, well explained in that article.
For example, I produced a simple MLP (3-layer) network with ReLU output which failed. I provided data it could not possibly fail on, and it still failed. I turned the learning rate way down, and it failed more slowly. It always converged to predicting each class with equal probability. It was all fixed by using a Leaky ReLU instead of standard ReLU.
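For reference, a minimal NumPy sketch of the two activations (the 0.01 negative slope is a common default, not something the answer prescribes):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # The small negative slope keeps a gradient flowing when x < 0,
    # so a unit can recover instead of "dying".
    return np.where(x > 0, x, alpha * x)

x = np.array([-3.0, -0.5, 0.0, 2.0])
print(relu(x))        # [0. 0. 0. 2.]
print(leaky_relu(x))  # [-0.03 -0.005 0. 2.]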
If we are talking about classification tasks, then you should shuffle the examples before training your net. I mean, don't feed your net thousands of examples of class #1 followed by thousands of examples of class #2, etc. If you do that, your net most probably won't converge, but will instead tend to predict the last-trained class.
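In code, shuffling is a one-permutation job (array names here are illustrative); re-shuffle every epoch rather than just once:

import numpy as np

rng = np.random.default_rng(0)
X = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
y = np.array([0] * 5 + [1] * 5)    # sorted by class: a bad feeding order

perm = rng.permutation(len(X))     # same permutation for inputs and labels
X_shuffled, y_shuffled = X[perm], y[perm]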
I had faced this problem while implementing my own back prop neural network. I tried the following:
Implemented momentum (and kept the value at 0.5)
Kept the learning rate at 0.1
Charted the error, weights, input, and output of each and every neuron. Seeing the data as a graph is more helpful in figuring out what is going wrong.
Tried out different activation functions (all sigmoidal), but this did not help me much.
Initialized all weights to random values between -0.5 and 0.5 (My network's output was in the range -1 and 1)
I did not try this, but gradient checking can be helpful as well.
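Gradient checking compares the backprop gradient against a finite-difference estimate; a generic sketch (the quadratic loss is just a stand-in for your network's forward pass):

import numpy as np

def numerical_gradient(f, w, eps=1e-5):
    # Central-difference estimate of df/dw for a scalar loss f(w).
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus.flat[i] += eps
        w_minus.flat[i] -= eps
        grad.flat[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return grad

w = np.array([1.0, -2.0, 3.0])
loss = lambda v: np.sum(v ** 2)    # analytic gradient is 2w
assert np.allclose(numerical_gradient(loss, w), 2 * w, atol=1e-6)
# In a real net, f would run a forward pass and return the loss;
# compare the result against your backprop gradients the same way.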
If the problem is only convergence (not the actual "well-trained network", which is way too broad a problem for SO), then the only thing that can be the problem, once the code is OK, is the training method's parameters. If one uses naive backpropagation, these parameters are the learning rate and momentum. Nothing else matters, since for any initialization and any architecture, a correctly implemented neural network should converge for a good choice of these two parameters (in fact, for momentum = 0 it should converge to some solution too, for a small enough learning rate).
In particular, there is a good heuristic approach called "resilient backprop" (Rprop), which is in fact a nearly parameter-free approach that should (almost) always converge (assuming a correct implementation).
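A hedged sketch of the Rprop update rule (the 1.2/0.5 factors and step-size clamps are the usual textbook constants; this is the variant without weight-backtracking, not any particular library's implementation):

import numpy as np

def rprop_step(w, grad, prev_grad, step):
    # Grow the per-weight step while the gradient sign is stable,
    # shrink it when the sign flips; only the sign drives the update.
    same_sign = grad * prev_grad
    step = np.where(same_sign > 0, np.minimum(step * 1.2, 50.0), step)
    step = np.where(same_sign < 0, np.maximum(step * 0.5, 1e-6), step)
    return w - np.sign(grad) * step, step

# Toy quadratic loss: converges without hand-tuning a learning rate.
w = np.array([5.0, -3.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = 2 * w                   # gradient of sum(w**2)
    w, step = rprop_step(w, grad, prev_grad, step)
    prev_grad = grad
print(w)                           # close to [0, 0]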
After you've tried different meta-parameters (optimization / architecture), the most probable place to look at is THE DATA.
As for myself, to minimize fiddling with meta-parameters, I keep my optimizer automated; Adam is my optimizer of choice.
There are some rules of thumb regarding application vs. architecture... but it's really best to crunch those on your own.
To the point:
In my experience, after you've debugged the net (the easy debugging) and it still doesn't converge, or gets to an undesired local minimum, the usual suspect is the data.
Whether you have contradictory samples or just incorrect ones (outliers), a small amount can make the difference from, say, 0.6 accuracy to (after cleaning) 0.9 accuracy.
A smaller but golden (clean) dataset is much better than a big, slightly dirty one.
With augmentation you can tweak results even further.

Having trouble understanding neural networks

I am trying to use a neural network to solve a problem. I learned about them from the Machine Learning course offered on Coursera, and was happy to find that ruby-fann provides Ruby bindings to the FANN neural network library, so I didn't have to re-invent the airplane.
However, based on what I learned in the class, I'm not really understanding why FANN is giving me such strange output.
I have a set of training data consisting of the results of matches. The player is given a number, their opponent is given a number, and the result is 1 for a win and 0 for a loss. The data is a little noisy because of upsets, but not terribly so. My goal is to find which rating gaps are more prone to upsets - for instance, my intuition tells me that lower-rated matches tend to entail more upsets because the ratings are less accurate.
So I got a training set of about 100 examples. Each example is (rating, delta) => 1/0. So it's a classification problem, but not really one that I think lends itself to a logistic regression-type chart, and a neural network seemed more correct.
My code begins with:
training_data = RubyFann::TrainData.new(:inputs => inputs, :desired_outputs => outputs)
I then set up the neural network with
network = RubyFann::Standard.new(
  :num_inputs => 2,
  :hidden_neurons => [8, 8, 8, 8],
  :num_outputs => 1)
In the class, I learned that a reasonable default is to give each hidden layer the same number of units. Since I don't really know how to work this or what I'm doing yet, I went with the default.
network.train_on_data(training_data, 1000, 1, 0.15)
And then finally, I went through a set of sample input ratings in increments and, at each increment, increased delta until the result switched from being > 0.5 to < 0.5, which I took to be about 0 and about 1, although really they were more like 0.45 and 0.55.
When I ran this once, it gave me 0 for every input. I ran it again twice with the same data and got a decreasing trend of negative numbers and an increasing trend of positive numbers, completely opposite predictions.
I thought maybe I wasn't including enough features, so I added (rating**2 and delta**2). Unfortunately, then I started getting either my starting delta or my maximum delta for every input every time.
I don't really understand why I'm getting such divergent results or what Ruby-FANN is telling me, partly because I don't understand the library but also, I suspect, because I just started learning about neural networks and am missing something big and obvious. Do I not have enough training data, do I need to include more features, what is the problem and how can I either fix it or learn how to do things better?
What about playing a little with the parameters? First, I would highly recommend only two layers; there should be a mathematical proof somewhere that this is enough for many problems. If you have too many neurons, your NN will not have enough epochs to really learn something, so you can also play with the number of epochs as well as with gamma (the learning rate; I think in your case it's 0.15). If you use a slightly bigger value your NN should learn a little faster (don't be afraid to try 0.3 or even 0.7); the right value of gamma usually depends on the weights' interval or on the input normalization.
Your NN shows such different results most probably because each run starts from a new random initialization, so you get a totally different network, and it learns in a different way from the previous one (different weights end up with higher values, so different parts of the NN learn the same things).
I am not familiar with this library; I am just sharing some experiences with NNs. Hope something from these helps.

Why do we have to normalize the input for an artificial neural network? [closed]

Why do we have to normalize the input for a neural network?
I understand that sometimes, when for example the input values are non-numerical, a certain transformation must be performed, but when we have numerical input, why must the numbers be in a certain interval?
What will happen if the data is not normalized?
It's explained well here.
If the input variables are combined linearly, as in an MLP [multilayer perceptron], then it is
rarely strictly necessary to standardize the inputs, at least in theory. The
reason is that any rescaling of an input vector can be effectively undone by
changing the corresponding weights and biases, leaving you with the exact
same outputs as you had before. However, there are a variety of practical
reasons why standardizing the inputs can make training faster and reduce the
chances of getting stuck in local optima. Also, weight decay and Bayesian
estimation can be done more conveniently with standardized inputs.
In neural networks, it is a good idea not just to normalize the data but also to scale it. This is intended for a faster approach to the global minima on the error surface. See the following pictures, taken from Geoffrey Hinton's Coursera course on neural networks:
[pictures of the error surface with and without normalization]
Some inputs to a NN might not have a 'naturally defined' range of values. For example, the average value might be slowly but continuously increasing over time (for example, the number of records in a database).
In such a case, feeding this raw value into your network will not work very well. You would teach your network on values from the lower part of the range, while the actual inputs will be from the higher part of this range (and quite possibly above the range that the network has learned to work with).
You should normalize this value. You could, for example, tell the network by how much the value has changed since the previous input. This increment can usually be defined with high probability within a specific range, which makes it a good input for the network.
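A tiny sketch of that differencing idea (the numbers are made up): feed the change since the previous observation instead of the ever-growing raw value:

import numpy as np

record_counts = np.array([1000.0, 1020.0, 1045.0, 1080.0, 1130.0])

# The raw counts drift upward forever; the increments stay in a
# stable, learnable range.
increments = np.diff(record_counts)
print(increments)                  # [20. 25. 35. 50.]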
There are two reasons why we have to normalize input features before feeding them to a neural network:
Reason 1: If a feature in the dataset is big in scale compared to the others, then this big-scaled feature becomes dominating, and as a result the predictions of the neural network will not be accurate.
Example: In the case of employee data, if we consider age and salary, age will be a two-digit number while salary can be 7 or 8 digits (1 million, etc.). In that case, salary will dominate the prediction of the neural network. But if we normalize those features, the values of both features will lie in the range from 0 to 1.
Reason 2: Forward propagation of neural networks involves the dot product of weights with input features. So, if the values are very high (for image and non-image data), the calculation of the output takes a lot of computation time as well as memory. The same is the case during back-propagation. Consequently, the model converges slowly if the inputs are not normalized.
Example: If we perform image classification, the size of the image will be very huge, as the value of each pixel ranges from 0 to 255. Normalization in this case is very important.
Mentioned below are instances where normalization is very important:
K-Means
K-Nearest-Neighbours
Principal Component Analysis (PCA)
Gradient Descent
When you use unnormalized input features, the loss function is likely to have very elongated valleys. When optimizing with gradient descent, this becomes an issue because the gradient will be steep with respect to some of the parameters. That leads to large oscillations in the search space, as you bounce between the steep slopes. To compensate, you have to stabilize optimization with small learning rates.
Consider features x1 and x2, which range from 0 to 1 and from 0 to 1 million, respectively. It turns out that the ratio of the corresponding parameters (say, w1 and w2) will also be large.
Normalizing tends to make the loss function more symmetrical/spherical. These are easier to optimize because the gradients tend to point towards the global minimum and you can take larger steps.
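A hedged demo of the elongated-valley effect on a toy quadratic loss (the feature scales and learning rates are my own choices): the stable learning rate is limited by the steepest direction, so the shallow direction crawls.

import numpy as np

def steps_to_converge(scales, lr, tol=1e-6, max_steps=100_000):
    # Gradient descent on loss = sum((scale_i * w_i)**2).
    w = np.ones_like(scales)
    for step in range(max_steps):
        w = w - lr * 2 * scales**2 * w
        if np.abs(w).max() < tol:
            return step
    return max_steps

# Unnormalized: scales differ 10x, so lr must fit the steep direction.
print(steps_to_converge(np.array([1.0, 10.0]), lr=0.25 / 10.0**2))  # thousands of steps
# Normalized: equal scales allow a large stable step.
print(steps_to_converge(np.array([1.0, 1.0]), lr=0.25))             # ~20 steps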
Looking at the neural network from the outside, it is just a function that takes some arguments and produces a result. As with all functions, it has a domain (i.e. a set of legal arguments). You have to normalize the values that you want to pass to the neural net in order to make sure it is in the domain. As with all functions, if the arguments are not in the domain, the result is not guaranteed to be appropriate.
The exact behavior of the neural net on arguments outside of the domain depends on the implementation of the neural net. But overall, the result is useless if the arguments are not within the domain.
I believe the answer is dependent on the scenario.
Consider the NN (neural network) as an operator F, so that F(input) = output. In the case where this relation is linear, so that F(A * input) = A * output, you might choose either to leave the input/output unnormalised in their raw forms, or to normalise both to eliminate A. Obviously this linearity assumption is violated in classification tasks, or in nearly any task that outputs a probability, where F(A * input) = 1 * output.
In practice, normalisation allows non-fittable networks to be fittable, which is crucial to experimenters/programmers. Nevertheless, the precise impact of normalisation will depend not only on the network architecture/algorithm, but also on the statistical prior for the input and output.
What's more, NNs are often applied to solve very difficult problems in a black-box fashion, which means the underlying problem may have a very poor statistical formulation, making it hard to evaluate the impact of normalisation and causing the technical advantage (becoming fittable) to dominate over its impact on the statistics.
In statistical sense, normalisation removes variation that is believed to be non-causal in predicting the output, so as to prevent NN from learning this variation as a predictor (NN does not see this variation, hence cannot use it).
The reason normalization is needed is that if you look at how an adaptive step proceeds in one place in the domain of the function, and then simply transport the problem to the equivalent of the same step translated by some large value in some direction in the domain, you get different results. It boils down to the question of adapting a linear piece to a data point: how much should the piece move without turning, and how much should it turn in response to that one training point? It makes no sense to have a different adaptation procedure in different parts of the domain! So normalization is required to reduce the difference in the training result. I haven't got this written up, but you can just look at the math for a simple linear function and how it is trained by one training point in two different places. This problem may have been corrected in some places, but I am not familiar with them. In ALNs, the problem has been corrected, and I can send you a paper if you write to wwarmstrong AT shaw.ca
On a high level, if you observe where normalization/standardization is mostly used, you will notice that anytime magnitude differences are used in the model-building process, it becomes necessary to standardize the inputs, so as to ensure that important inputs with small magnitude don't lose their significance midway through the model-building process.
example:
√((3-1)² + (1000-900)²) ≈ √((1000-900)²)
Here, (3-1) contributes hardly anything to the result, and hence the input corresponding to these values is considered futile by the model.
Consider the following:
Clustering uses Euclidean or other distance measures.
NNs use an optimization algorithm to minimise a cost function (e.g. MSE).
Both the distance measure (clustering) and the cost function (NNs) use magnitude differences in some way, and hence standardization ensures that a magnitude difference doesn't dominate over important input parameters, and that the algorithm works as expected.
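A small sketch of the domination effect from the example above, with assumed (hypothetical) per-feature standard deviations to standardize by:

import numpy as np

a = np.array([3.0, 1000.0])
b = np.array([1.0, 900.0])

print(np.linalg.norm(a - b))                  # ~100.02: the (3-1) term barely matters

feature_std = np.array([1.0, 50.0])           # hypothetical per-feature std devs
print(np.linalg.norm((a - b) / feature_std))  # ~2.83: both features now contribute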
Hidden layers are used in accordance with the complexity of our data. If we have input data which is linearly separable, then we need not use a hidden layer, e.g. for an OR gate; but if we have non-linearly separable data, then we need to use a hidden layer, for example for the XOR logical gate.
The number of nodes taken in any layer is typically chosen by cross-validating against our output.
