Transition matrix of an absorbing higher-order Markov chain

I have an absorbing Markov chain; let's say I have the states
s = {START, C1, C2, END1, END2}
The state START will always be the starting point of the chain; however, it will not be possible to return to this state once you leave it.
I am curious what the transition matrix will look like for an absorbing higher-order Markov chain.
Now imagine I set up the second-order Markov chain transition matrix as follows:
             C1    C2    START    END1    END2
(C1, C1)
(C1, C2)
(C1, START)
(C1, END1)
(C1, END2)
...
What will the row for (C1, START) look like, for instance? It will be zero in every column, but isn't each row required to sum to 1? Do I simply remove this row from the matrix?
And what about (C1, END1): will this row also be zero all across?
The states END1 and END2, on the other hand, are impossible to leave once you are in them, i.e. they are absorbing.
I wonder what the transition matrix looks like for a second- or kth-order Markov chain. I cannot find any good literature on this problem; please point me to some.

The (C1, START) row should be removed from the matrix, because the state (C1, START) doesn't exist in the graph describing the chain. The reason it doesn't exist is simply that the state isn't reachable and hence shouldn't be considered a valid state.
In general, the transition matrix representing a kth-order Markov chain should not contain rows for invalid k-tuples (tuples representing a sequence of states corresponding to an impossible path).
As for the (C1, END1) row, it is not an all-zero row: when you are at END1, your next state is END1 with probability 1. Hence, from (C1, END1) you step into (END1, END1) with probability 1.
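To make this concrete, here is a minimal Python sketch with made-up transition probabilities (state names as in the question): the second-order matrix is indexed by pairs, rows for unreachable pairs such as (C1, START) are simply never created, and the row for (C1, END1) still sums to 1.

    # Hypothetical first-order probabilities; END1 and END2 are absorbing.
    states = ["START", "C1", "C2", "END1", "END2"]
    P = {
        "START": {"C1": 0.6, "C2": 0.4},
        "C1":    {"C1": 0.2, "C2": 0.3, "END1": 0.3, "END2": 0.2},
        "C2":    {"C1": 0.4, "C2": 0.1, "END1": 0.2, "END2": 0.3},
        "END1":  {"END1": 1.0},
        "END2":  {"END2": 1.0},
    }

    # A pair (a, b) is a valid second-order state only if a -> b is possible,
    # so unreachable pairs such as (C1, START) are never created.
    valid_pairs = [(a, b) for a in states for b in P[a]]

    # Row for (a, b): move to (b, c) with the first-order probability P[b][c].
    P2 = {(a, b): {(b, c): p for c, p in P[b].items()} for (a, b) in valid_pairs}

    print(("C1", "START") in P2)  # False: the row is simply absent
    print(P2[("C1", "END1")])     # {('END1', 'END1'): 1.0}, still sums to 1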

Well, if the Markov chain is second order, then your states should be pairs like (C1,C1), (C1,C2), ..., and thus your matrix would look like
              (C1,C1)  (C1,C2)  (C1,START)  ...
(C1,C1)
(C1,C2)
(C1,START)
(C1,END1)
(C1,END2)
...
with many zeros (e.g. the probability of going from (C1,C2) to (C1,C1) is zero, since the second component of the current pair must match the first component of the next pair). In your case, it is a 25x25 matrix.
If you claim the chain is second order, then you need two consecutive states to estimate the next one.
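A small Python example (with made-up training sequences) of what "two consecutive states to estimate the next one" means in practice: count how often each pair of consecutive states is followed by each next state, then normalize each row.

    from collections import Counter, defaultdict

    # Hypothetical observed walks through the chain.
    sequences = [
        ["START", "C1", "C1", "C2", "END1"],
        ["START", "C2", "C1", "END2"],
        ["START", "C1", "C2", "END1"],
    ]

    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            counts[(a, b)][c] += 1

    # Normalize each row so that it sums to 1.
    P2 = {pair: {c: n / sum(nxt.values()) for c, n in nxt.items()}
          for pair, nxt in counts.items()}
    print(P2[("START", "C1")])  # {'C1': 0.5, 'C2': 0.5}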

Related

How to convert a tree-structured feature to a vector feature

I have a data set with several features. One of those features is categorical but has a tree structure on its values. For example, if this categorical feature takes the values a, b, c, d, e, f, g, h, i, j, k, then the following image shows the tree relationship of the values:
The raw feature does not incorporate this relationship (so the feature takes only one column). Now I want to incorporate this relationship, but I still want the feature in vector form.
My solution is to create a binary column for each node. In this example the feature can then be represented by a binary vector of length 11, and the feature value e can be represented as <1,1,0,1,0,0,0,0,0,0,0> (shown below),
where the 1st element indicates the first-level node b; the 2nd element indicates the second-level node a; the 3rd, 4th, 5th, and 6th elements indicate the third-level nodes d, e, g, and j respectively; the 7th element indicates the second-level node c; and the 8th, 9th, 10th, and 11th elements indicate the third-level nodes f, h, i, and k respectively.
The reason I think this would work is that you can recover the tree from this vector representation, so the information is not lost in the transformation.
The main purpose of this transformation is that I want to use some machine learning algorithms on this dataset, so I want the data set to be more informative.
I want to know whether this transformation is valid, and if not, why? And whether there is a better way to do this.
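For concreteness, here is a small Python sketch of the encoding described above (the tree structure in the code is inferred from the level-by-level description, since the original image is missing): one binary column per node, with a value's own node and all of its ancestors set to 1.

    # Root b; children a and c; a's children d, e, g, j; c's children f, h, i, k.
    parent = {
        "b": None, "a": "b", "c": "b",
        "d": "a", "e": "a", "g": "a", "j": "a",
        "f": "c", "h": "c", "i": "c", "k": "c",
    }
    columns = ["b", "a", "d", "e", "g", "j", "c", "f", "h", "i", "k"]

    def encode(value):
        # Walk up the tree, marking the value itself and every ancestor.
        on = set()
        while value is not None:
            on.add(value)
            value = parent[value]
        return [1 if node in on else 0 for node in columns]

    print(encode("e"))  # [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0], as in the question

The mapping is invertible: the encoded value is the deepest node set to 1, which matches the point about being able to recover the tree.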

In the LambdaRank algorithm (in learning to rank), what does |ΔNDCG| mean?

This article describes the LambdaRank algorithm for information retrieval. In formula (8) on page 6, the authors propose to multiply the gradient (lambda) by a term called |ΔNDCG|.
I do understand that this term is the difference between two NDCGs when swapping two elements in the list:
the size of the change in NDCG (|ΔNDCG|) given by swapping the rank positions of U1 and U2 (while leaving the rank positions of all other URLs unchanged)
However, I do not understand which ordered list is considered when swapping U1 and U2. Is it the list ordered by the predictions from the model at the current iteration? Or the list ordered by the ground-truth labels of the documents? Or maybe the list of the predictions from the model at the previous iteration, as suggested by Tie-Yan Liu in his book Learning to Rank for Information Retrieval?
Short answer: It's the list ordered by the predictions from the model at the current iteration.
Let's see why it makes sense.
At each training iteration, we perform the following steps (these steps are standard for all machine learning algorithms, whether for classification, regression, or ranking tasks):
Calculate the scores s[i] = f(x[i]) returned by our model for each document i.
Calculate the gradient of the model's weights, ∂C/∂w, back-propagated from RankNet's cost C. This gradient is the sum of all pairwise gradients ∂C[i, j]/∂w, calculated for each pair of documents (i, j).
Perform a gradient ascent step (i.e. w := w + u * ∂C/∂w, where u is the step size).
In the "Speeding up RankNet" section, the notation λ[i] was introduced for the contribution of each document's computed score (using the model weights at the current iteration) to the overall gradient ∂C/∂w (at the current iteration). If we order our list of documents by the scores from the model at the current iteration, each λ[i] can be thought of as an "arrow" attached to that document, whose sign tells us in which direction, up or down, the document should be moved to increase NDCG. Again, NDCG is computed from the order predicted by our model.
Now, the problem is that the lambdas λ[i, j] for each pair (i, j) contribute equally to the overall gradient. That means the ranking of documents below, let's say, the 100th position is given equal importance to the ranking of the top documents. This is not what we want: we should prioritize having relevant documents at the very top much more than having a correct ranking below the 100th position.
That's why we multiply each of those "arrows" by |ΔNDCG|, to emphasize the ranking at the top of the list more than the ranking at the bottom.
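As a minimal sketch of that computation (with made-up scores and relevance labels, and the usual DCG convention of gain 2^rel - 1 and discount log2(position + 2) for 0-based positions): sort the documents by the current model scores, then measure how much NDCG changes when two positions swap.

    import math

    def dcg(labels):
        # labels are relevance grades, listed in ranked order
        return sum((2 ** rel - 1) / math.log2(pos + 2)
                   for pos, rel in enumerate(labels))

    def delta_ndcg(labels_in_score_order, i, j, ideal_dcg):
        # |ΔNDCG| when the documents at positions i and j swap places
        swapped = list(labels_in_score_order)
        swapped[i], swapped[j] = swapped[j], swapped[i]
        return abs(dcg(swapped) - dcg(labels_in_score_order)) / ideal_dcg

    scores = [2.3, 1.7, 0.4, -0.2]  # current model scores, sorted descending
    labels = [0, 2, 1, 0]           # ground-truth relevance in that order
    ideal = dcg(sorted(labels, reverse=True))
    print(delta_ndcg(labels, 0, 1, ideal))  # a top-of-list swap: large |ΔNDCG|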

A new edge is inserted into a minimum spanning tree

I am trying to find an algorithm for the following question, with one difference:
the edge weights are not distinct.
Give an efficient algorithm to test whether T remains the minimum-cost spanning tree when the new edge is added to G.
In this link there is a solution, but it is not for the variant I described, where
the edge weights are not necessarily distinct:
Updating a Minimum spanning tree when a new edge is inserted
Does anyone have an idea?
Well, the naive approach of just using Prim or Kruskal to find the minimum-cost spanning tree of the new graph, and then seeing which of the two trees has the lower total cost, isn't too bad at O(|E| log |E|).
But we don't need to look at the whole graph.
Suppose your new edge connects vertices A and B. Let C be the parent of A. If B is not a descendant of A, then if A-B has lower cost than A-C, T is no longer the MST, and B should be the new parent of the subtree rooted at A.
If B is a descendant of A, then if A-B is cheaper than the highest-cost edge of T along the path from A to B, T is no longer the MST; that highest-cost edge should be removed, B becomes the root of the newly disconnected component, and it should be added as a child of A.
I believe you may need to check these things a second time, reversing the roles of A and B. The complexity of this is O(log |V|), where the base of the log is the average number of children per node of T. In the case where T is a straight line it is O(|V|); otherwise, I think you could say it is O(log |V|).
First find an MST using one of the existing efficient algorithms.
Now, adding an edge (v, w) creates a cycle in the MST. If the newly added edge has the maximum cost among the edges on that cycle, then the MST remains as it is. If some other edge on the cycle has the maximum cost, then that is the edge to remove to get a tree of lower cost.
So we need an efficient way to find the edge with the maximum cost on the cycle. You can climb from v and w until you reach LCA(v, w) (the least common ancestor of v and w) to find the edge with the maximum cost. This takes linear time in the worst case.
If you are going to answer multiple such queries, then pre-processing the MST is probably better. You can pre-process the MST into a sparse table data structure in O(N lg N) time and then use this data structure to answer max queries in O(lg N) time in the worst case.
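Here is a minimal Python sketch of the single-query check, on a made-up tree: climb from both endpoints until the paths meet, track the maximum edge cost seen, and compare it with the new edge's cost.

    def max_edge_on_path(parent, depth, weight_to_parent, v, w):
        # Climb the deeper endpoint first, then alternate, until the paths meet.
        best = float("-inf")
        while v != w:
            if depth[v] < depth[w]:
                v, w = w, v
            best = max(best, weight_to_parent[v])
            v = parent[v]
        return best

    # Hypothetical MST rooted at 0: edges 0-1 (cost 4), 1-2 (cost 7), 0-3 (cost 5).
    parent = {0: None, 1: 0, 2: 1, 3: 0}
    depth = {0: 0, 1: 1, 2: 2, 3: 1}
    weight_to_parent = {1: 4, 2: 7, 3: 5}

    # New edge (2, 3) with cost 6: the path 2-1-0-3 has a max cost of 7 > 6,
    # so T is no longer minimal (remove the cost-7 edge, add the new one).
    print(max_edge_on_path(parent, depth, weight_to_parent, 2, 3) > 6)  # True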

Log likelihood of a Markov network

I am having trouble understanding the following figure from a Coursera class:
As far as I understand, the equation corresponds to this factor table:
And therefore the likelihood of a sample (a = 0, b = 0, c = 1), for example, would be:
It doesn't look like the graph in any way. Can you please explain the graph to me?
I think you're confusing probability and likelihood.
You have a probability distribution p, parameterised by \theta, which has support on (A, B, C). The probability distribution is a function of A, B, C for fixed \theta. The likelihood function, which is what's being graphed in the figure above, is a function of \theta for fixed A, B, C. It is a function which says how probable the fixed observations are given different values for the parameters.
In popular usage likelihood and probability are synonymous. In technical use they are not.
With the likelihood/probability issue sorted, the likelihood function is telling you that the joint probability of (A, B, C) is the product of pairwise potentials between all connected pairs, in this case (A, B) and (B, C). I{a^1, b^1} is an indicator function which is 1 when a = 1 and b = 1, and zero otherwise. \theta_{a^1, b^1} is the parameter corresponding to this outcome.
If I had to guess (I can't see the whole class), I would say there are four \thetas for each pairwise relationship, representing the four possible states (both 1, both 0, or one of each), and we've just dropped the ones where the corresponding indicator function is zero, so those parameters are irrelevant.
Your derivation of the equation is not correct. The form of the MRF basically says: add together the parameters corresponding to the actual state of each of the pairs, exponentiate, and normalise. The normalising constant is the sum of this unnormalised value over all possible configurations.
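A small numeric sketch of that recipe, with made-up parameter values: sum the pairwise parameters whose indicators fire, exponentiate, and normalise over all eight configurations of (A, B, C).

    import itertools, math

    # Hypothetical pairwise parameters for the factors on (A, B) and (B, C).
    theta_ab = {(0, 0): 0.5, (0, 1): -0.2, (1, 0): 0.1, (1, 1): 1.3}
    theta_bc = {(0, 0): 0.0, (0, 1): 0.8, (1, 0): -0.5, (1, 1): 0.4}

    def unnormalized(a, b, c):
        # Only the parameters whose indicator functions fire contribute.
        return math.exp(theta_ab[(a, b)] + theta_bc[(b, c)])

    # The normalising constant Z sums the unnormalised measure over all states.
    Z = sum(unnormalized(a, b, c)
            for a, b, c in itertools.product([0, 1], repeat=3))
    print(unnormalized(0, 0, 1) / Z)  # P(A=0, B=0, C=1)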

What is the role of the bias in neural networks? [closed]

I'm aware of gradient descent and the back-propagation algorithm. What I don't get is: when is using a bias important, and how do you use it?
For example, when mapping the AND function, if I use two inputs and one output, it does not give the correct weights. However, if I use three inputs (one of which is a bias), it gives the correct weights.
I think that biases are almost always helpful. In effect, a bias value allows you to shift the activation function to the left or right, which may be critical for successful learning.
It might help to look at a simple example. Consider this 1-input, 1-output network that has no bias:
The output of the network is computed by multiplying the input (x) by the weight (w0) and passing the result through some kind of activation function (e.g., a sigmoid function).
Here is the function that this network computes, for various values of w0:
Changing the weight w0 essentially changes the "steepness" of the sigmoid. That's useful, but what if you wanted the network to output 0 when x is 2? Just changing the steepness of the sigmoid won't really work -- you want to be able to shift the entire curve to the right.
That's exactly what the bias allows you to do. If we add a bias to that network, like so:
...then the output of the network becomes sig(w0*x + w1*1.0). Here is what the output of the network looks like for various values of w1:
Having a weight of -5 for w1 shifts the curve to the right, which allows us to have a network that outputs 0 when x is 2.
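A tiny numeric sketch of that shift (weights made up): with output sig(w0*x + w1), making the bias weight w1 more negative slides the curve to the right, so the output at x = 2 approaches 0.

    import math

    def sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    w0 = 1.0  # input weight: controls steepness
    for w1 in (0.0, -2.0, -5.0):  # bias weight: controls horizontal shift
        print(f"w1={w1:+.0f}  sig(w0*2 + w1) = {sig(w0 * 2 + w1):.3f}")
    # w1=-5 gives ~0.047 at x=2: the whole curve has moved to the right.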
A simpler way to understand what the bias is: it is somewhat similar to the constant b of a linear function
y = ax + b
It allows you to move the line up and down to fit the prediction with the data better.
Without b, the line always goes through the origin (0, 0) and you may get a poorer fit.
Here are some further illustrations showing the result of a simple 2-layer feed-forward neural network with and without bias units on a two-variable regression problem. Weights are initialized randomly and a standard ReLU activation is used. As the answers before me concluded, without the bias the ReLU network is not able to deviate from zero at (0,0).
Two different kinds of parameters can be adjusted during the training of an ANN: the weights and the value in the activation functions. This is impractical, and it would be easier if only one of the parameters were adjusted. To cope with this problem the bias neuron was invented. The bias neuron lies in one layer, is connected to all the neurons in the next layer but none in the previous layer, and it always emits 1. Since the bias neuron emits 1, the weights connected to the bias neuron are added directly to the combined sum of the other weights (equation 2.1), just like the t value in the activation functions.[1]
The reason it's impractical is that you would be simultaneously adjusting the weight and the value, so any change to the weight can neutralize a change to the value that was useful for a previous data instance. Adding a bias neuron whose value does not change allows you to control the behavior of the layer.
Furthermore the bias allows you to use a single neural net to represent similar cases. Consider the AND boolean function represented by the following neural network:
(source: aihorizon.com)
w0 corresponds to b.
w1 corresponds to x1.
w2 corresponds to x2.
A single perceptron can be used to represent many boolean functions. For example, if we assume boolean values of 1 (true) and -1 (false), then one way to use a two-input perceptron to implement the AND function is to set the weights w0 = -3, and w1 = w2 = .5. This perceptron can be made to represent the OR function instead by altering the threshold to w0 = -.3. In fact, AND and OR can be viewed as special cases of m-of-n functions: that is, functions where at least m of the n inputs to the perceptron must be true. The OR function corresponds to m = 1 and the AND function to m = n. Any m-of-n function is easily represented using a perceptron by setting all input weights to the same value (e.g., 0.5) and then setting the threshold w0 accordingly. Perceptrons can represent all of the primitive boolean functions AND, OR, NAND (¬AND), and NOR (¬OR). (Machine Learning, Tom Mitchell)
The threshold is the bias and w0 is the weight associated with the bias/threshold neuron.
The bias is not an NN-specific term; it's a generic algebra concept.
Y = M*X + C (straight-line equation)
Now if C (the bias) = 0, the line will always pass through the origin (0,0) and depends on only one parameter, M, the slope, so we have fewer things to play with.
C, which is the bias, can take any value and has the ability to shift the graph, and hence can represent more complex situations.
In a logistic regression, the expected value of the target is transformed by a link function to restrict its value to the unit interval. In this way, model predictions can be viewed as primary outcome probabilities as shown:
Sigmoid function on Wikipedia
This is the final activation layer in the NN map that turns the neuron on or off. Here too the bias has a role to play: it shifts the curve flexibly to help us fit the model.
A layer in a neural network without a bias is nothing more than the multiplication of an input vector with a matrix. (The output vector might be passed through a sigmoid function for normalisation and for use in multi-layered ANNs afterwards, but that's not important.)
This means that you're using a linear function, and thus an input of all zeros will always be mapped to an output of all zeros. This might be a reasonable solution for some systems, but in general it is too restrictive.
Using a bias, you're effectively adding another dimension to your input space, which always takes the value one, so you're avoiding an input vector of all zeros. You don't lose any generality by this, because your trained weight matrix need not be surjective, so it can still map to all values previously possible.
2D ANN:
For an ANN mapping two dimensions to one dimension, as in reproducing the AND or the OR (or XOR) function, you can think of the neural network as doing the following:
On the 2D plane, mark all positions of the input vectors. So, for boolean values, you'd mark (-1,-1), (1,1), (-1,1), (1,-1). What your ANN now does is draw a straight line on the 2D plane, separating the positive output values from the negative ones.
Without a bias, this straight line has to go through zero, whereas with a bias, you're free to put it anywhere.
So you'll see that without a bias you face a problem with the AND function, since you can't put both (1,-1) and (-1,1) on the negative side. (They are not allowed to be on the line.) The problem is the same for the OR function. With a bias, however, it's easy to draw the line.
Note that the XOR function in that situation can't be solved even with a bias.
When you use ANNs, you rarely know the internals of the systems you want to learn, and some things cannot be learned without a bias. E.g., have a look at the following data: (0, 1), (1, 1), (2, 1), basically a function that maps any x to 1.
If you have a one-layered network (or a linear mapping), you cannot find a solution. However, if you have a bias, it's trivial!
In an ideal setting, a bias could also map all points to the mean of the target points and let the hidden neurons model the differences from that point.
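A tiny Python sketch of the AND case with the ±1 inputs above and hand-picked weights: with a bias, the line x1 + x2 = 1.5 isolates (1,1); with the bias forced to 0, any line through the origin assigns opposite signs to (-1,1) and (1,-1), so one of them always lands on the wrong side.

    def perceptron(x1, x2, w1, w2, bias):
        return 1 if w1 * x1 + w2 * x2 + bias > 0 else -1

    # Hand-picked weights: the line x1 + x2 = 1.5 puts only (1,1) on the
    # positive side. With bias = 0 the line must pass through the origin,
    # and no such line classifies all four points correctly.
    for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
        print((x1, x2), perceptron(x1, x2, w1=1.0, w2=1.0, bias=-1.5))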
Modifying the neuron WEIGHTS alone only serves to manipulate the shape/curvature of your transfer function, not its equilibrium/zero-crossing point.
The introduction of bias neurons allows you to shift the transfer function curve horizontally (left/right) along the input axis while leaving the shape/curvature unaltered.
This will allow the network to produce arbitrary outputs different from the defaults and hence you can customize/shift the input-to-output mapping to suit your particular needs.
See here for graphical explanation:
http://www.heatonresearch.com/wiki/Bias
In a couple of experiments in my master's thesis (e.g. page 59), I found that the bias might be important for the first layer(s), but especially at the fully connected layers at the end it seems not to play a big role.
This might be highly dependent on the network architecture / dataset.
If you're working with images, you might actually prefer not to use a bias at all. In theory, that way your network will be more independent of data magnitude, i.e. whether the picture is dark or bright and vivid. And the net is going to learn to do its job by studying the relative differences inside your data. Lots of modern neural networks utilize this.
For other data, having biases might be critical. It depends on what type of data you're dealing with. If your information is magnitude-invariant, so that inputting [1, 0, 0.1] should lead to the same result as inputting [100, 0, 10], you might be better off without a bias.
Bias determines how much your weight vector will rotate.
In a two-dimensional chart, weight and bias can help us find the decision boundary of the outputs.
Say we need to build an AND function; the input(p)-output(t) pairs should be
{p=[0,0], t=0}, {p=[1,0], t=0}, {p=[0,1], t=0}, {p=[1,1], t=1}
Now we need to find a decision boundary, and the ideal boundary should be:
See? W is perpendicular to our boundary. Thus, we say W decides the direction of the boundary.
However, it is hard to find the correct W on the first try. Mostly, we choose the original W value randomly. Thus, the first boundary may be this:
Now the boundary is parallel to the y axis.
We want to rotate the boundary. How?
By changing W.
So we use the learning rule function W' = W + P,
which is equivalent to W' = W + bP with b = 1.
Therefore, by changing the value of b (the bias), you can decide the angle between W' and W. That is "the learning rule of ANN".
You could also read Neural Network Design by Martin T. Hagan, Howard B. Demuth, and Mark H. Beale, chapter 4, "Perceptron Learning Rule".
In simpler terms, biases allow for more and more variations of the weights to be learnt/stored... (side note: sometimes given some threshold). Anyway, more variations mean that biases add a richer representation of the input space to the model's learnt/stored weights. (Better weights can enhance the neural net's guessing power.)
For example, in learning models, the hypothesis/guess is desirably bounded by y=0 or y=1 given some input, say in some classification task... i.e. some y=0 for some x=(1,1) and some y=1 for some x=(0,1). (The condition on the hypothesis/outcome is the threshold I talked about above. Note that my examples set up the inputs X so that each x is a double or 2-valued vector, instead of Nate's single-valued x inputs from some collection X.)
If we ignore the bias, many inputs may end up being represented by a lot of the same weights (i.e. the learnt weights mostly occur close to the origin (0,0)).
The model would then be limited to fewer good weights, instead of the many more good weights it could learn with a bias. (Poorly learnt weights lead to poorer guesses, or a decrease in the neural net's guessing power.)
So it is optimal that the model learns both close to the origin and in as many places as possible inside the threshold/decision boundary. With the bias we enable degrees of freedom close to the origin, but not limited to the origin's immediate region.
In neural networks:
Each neuron has a bias
You can view bias as a threshold (generally the opposite value of the threshold)
The weighted sum from the input layer plus the bias decides the activation of a neuron
Bias increases the flexibility of the model.
In the absence of a bias, the neuron may not be activated by considering only the weighted sum from the input layer. If the neuron is not activated, the information from this neuron is not passed through the rest of the neural network.
The value of bias is learnable.
Effectively, bias = -threshold. You can think of bias as how easy it is to get the neuron to output a 1: with a really big bias, it's very easy for the neuron to output a 1, but if the bias is very negative, then it's difficult.
In summary: bias helps in controlling the value at which the activation function will trigger.
Follow this video for more details.
A few more useful links:
geeksforgeeks
towardsdatascience
Expanding on zfy's explanation:
The equation for one input, one neuron, one output should look like:
y = a * x + b * 1 and out = f(y)
where x is the value from the input node and 1 is the value of the bias node;
y can be your output directly, or it can be passed into a function, often a sigmoid function. Also note that the bias could be any constant, but to make everything simpler we always pick 1 (and probably that is so common that zfy did it without showing and explaining it).
Your network is trying to learn coefficients a and b to adapt to your data.
So you can see why adding the element b * 1 allows it to fit better to more data: now you can change both slope and intercept.
If you have more than one input your equation will look like:
y = a0 * x0 + a1 * x1 + ... + aN * 1
Note that the equation still describes a one neuron, one output network; if you have more neurons you just add one dimension to the coefficient matrix, to multiplex the inputs to all nodes and sum back each node contribution.
You can write that in vectorized format as
A = [a0, a1, ..., aN], X = [x0, x1, ..., 1]
Y = A . X^T
i.e. putting the coefficients in one array and (inputs + bias) in another, you have your desired solution as the dot product of the two vectors (you need to transpose X for the shapes to be correct; I wrote X^T for 'X transposed').
So in the end you can also see the bias as just one more input, one that represents the part of the output that is actually independent of your input.
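Here is a small NumPy sketch of that vectorized form, with made-up numbers: the bias is just one more coefficient, paired with a constant input of 1.

    import numpy as np

    A = np.array([0.7, -1.2, 3.0])  # [a0, a1, bias coefficient]
    x = np.array([2.0, 0.5])        # raw inputs

    X = np.append(x, 1.0)           # (inputs + bias input of 1)
    y = A @ X                       # the dot product A . X^T
    print(y)                        # 0.7*2.0 - 1.2*0.5 + 3.0*1.0 = 3.8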
To think about it in a simple way: if you have y = w1*x, where y is your output and w1 is the weight, imagine a condition where x = 0; then y = w1*x equals 0.
If you want to update your weight, you have to compute the change by delw = target - y, where target is your target output. In this case 'delw' will not change, since y is computed as 0. So, if you can add some extra value, it will help: y = w1*x + w0*1, where bias = 1 and the weight can be adjusted to get a correct bias. Consider the example below.
In terms of a line, slope and intercept are a specific form of linear equation:
y = mx + b
Check the image:
Here b is (0,2).
If you want to increase it to (0,3), how will you do it? By changing the value of b, which is the bias.
In all the ML books I studied, W is always defined as the connectivity index between two neurons: the higher the connectivity between two neurons, the stronger the signals transmitted from the firing neuron to the target neuron, or Y = w * X. To maintain the biological character of neurons we need to keep 1 >= W >= -1, but in real regression W ends up with |W| >= 1, which contradicts how neurons work.
As a result, I propose W = cos(theta), so that 1 >= |cos(theta)|, and Y = a * X = W * X + b, while a = b + W = b + cos(theta), where b is an integer.
Bias acts as our anchor. It's a way for us to have some kind of baseline that we don't go below. In terms of a graph, think of y = mx + b: the bias is like the y-intercept of this function.
The output equals the input times the weight value, plus a bias value, with an activation function then applied to the result.
The term bias is used to adjust the final output matrix just as the y-intercept does. For instance, in the classic equation y = mx + c, if c = 0 the line will always pass through 0. Adding the bias term provides more flexibility and better generalisation for our neural network model.
The bias helps to get a better equation.
Imagine the input and output like a function y = ax + b, where you need to put the right line between the input (x) and the output (y) to minimise the global error between each point and the line. If you keep the equation as y = ax, you will have only one parameter for adaptation, and even if you find the best a minimising the global error, the fit will be rather far from the wanted values.
You can say the bias makes the equation more flexible to adapt to the best values.
