Getting probability of class using naive Bayes - machine-learning

I am trying to classify input into two classes, dino and crypto. Here is the code:
for w, cnt in list(counts.items()):  # counts is a dict mapping each word to its count
    p_word = vocab[w] / sum(vocab.values())
    p_w_given_dino = (word_counts["dino"].get(w, 0.0) + 1) / (sum(word_counts["dino"].values()) + v)
    p_w_given_crypto = (word_counts["crypto"].get(w, 0.0) + 1) / (sum(word_counts["crypto"].values()) + v)
    log_prob_dino += math.log(cnt * p_w_given_dino / p_word)
    log_prob_crypto += math.log(cnt * p_w_given_crypto / p_word)

print("Score(dino) :", math.exp(log_prob_dino + math.log(prior_dino)))
print("Score(crypto):", math.exp(log_prob_crypto + math.log(prior_crypto)))
Another approach is:
prior_dino = priors["dino"] / sum(priors.values())
prior_crypto = priors["crypto"] / sum(priors.values())

for w, cnt in list(counts.items()):
    p_word = vocab[w] / sum(vocab.values())
    p_w_given_dino = (word_counts["dino"].get(w, 0.0) + 1) / (sum(word_counts["dino"].values()) + v)
    p_w_given_crypto = (word_counts["crypto"].get(w, 0.0) + 1) / (sum(word_counts["crypto"].values()) + v)
    prob_dino *= p_w_given_dino
    prob_crypto *= p_w_given_crypto

t_prior_dino = prob_dino * prior_dino
t_prior_crypto = prob_crypto * prior_crypto
On the second approach I got very small values.
Which one is correct, or are both of them correct?

These are completely equivalent approaches. The first one, however, is preferable, as working with logarithms of probabilities makes the whole process more numerically stable. The results should be identical (up to numerical errors).
However, it appears there is an error in the second approach:
prob_dino *= p_w_given_dino
does not use the fact that the word occurs cnt times; it should be something like
prob_dino *= pow(p_w_given_dino, cnt)
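To make the equivalence concrete, here is a minimal, self-contained Python sketch. The names (word_counts, priors, counts, v) mirror the question, but the numbers are invented for illustration, and the shared p_word factor is dropped because it is identical for both classes and does not affect the ranking:

import math

# Toy data, made up purely for illustration (not the asker's real counts).
word_counts = {"dino": {"fossil": 8, "teeth": 4}, "crypto": {"coin": 9, "teeth": 1}}
priors = {"dino": 10, "crypto": 10}
v = 4                                # vocabulary size used for Laplace smoothing
counts = {"teeth": 2, "fossil": 1}   # the document to classify

def score(label, log_space=True):
    prior = priors[label] / sum(priors.values())
    total = sum(word_counts[label].values()) + v
    log_p = math.log(prior)
    for w, cnt in counts.items():
        p_w_given_label = (word_counts[label].get(w, 0.0) + 1) / total
        log_p += cnt * math.log(p_w_given_label)   # same as multiplying p_w_given_label ** cnt
    return log_p if log_space else math.exp(log_p)

for label in ("dino", "crypto"):
    print(label, score(label), score(label, log_space=False))

Both columns give the same class ranking; the log-space column simply avoids the tiny products that made the second approach look "very small".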

Related

How do I optimize this code so that it executes faster (max ~5 mins)?

I am trying to make a program in turtle that creates a Lyapunov fractal. However, as timing with timeit shows, it would take around 3 hours to complete, or 1.5 hours if I compromise on resolution (N).
import turtle as t; from math import log; from timeit import default_timer as dt

t.setup(2000,1000,0); swid=t.window_width(); shei=t.window_height(); t.up(); t.ht(); t.tracer(False); t.colormode(255); t.bgcolor('pink')

def lyapunov(seq,xmin,xmax,ymin,ymax,N,tico):
    truseq=str(bin(seq))[2:]
    for x in range(-swid//2+2,swid//2-2):
        tx=(x*(xmax-xmin))/swid+(xmax+xmin)/2
        if x==-swid//2+2:
            startt=dt()
        for y in range(-shei//2+11,shei//2-1):
            t.goto(x,y); ty=(y*(ymax-ymin))/shei+(ymax+ymin)/2; lex=0; xn=prevxn=0.5
            for n in range(1,N+1):
                if truseq[n%len(truseq)]=='0': rn=tx
                else: rn=ty
                xn=rn*prevxn*(1-prevxn)
                prevxn=xn
                if xn!=1: lex+=(1/N)*log(abs(rn*(1-2*xn)))
            if lex>0: t.pencolor(0,0,min(int(lex*tico),255))
            else: t.pencolor(max(255+int(lex*tico),0),max(255+int(lex*tico),0),0)
            t.dot(size=1); t.update()
        if x==-swid//2+2:
            endt=dt()
            print(f'Estimated total time: {(endt-startt)*(swid-5)} secs')

#Example: lyapunov(2,2.0,4.0,2.0,4.0,10000,100)
I attempted to use yield but I couldn't figure out where it should go.
On my slower machine, I was only able to test with a tiny N (e.g. 10), but I was able to speed up the code about 350 times. (Though the speedup will clearly be lower as N increases.)

There are two problems with your use of update(). The first is that you call it too often: you should outdent it from the y loop to the x loop so it's only called once per vertical pass. Second, the dot() operator forces an automatic update(), so you get no advantage from using tracer(). Replace dot() with some other method of drawing a pixel and you'll get back the advantage of using tracer() and update() (as long as you move update() out of the innermost loop, as noted).
My rework of your code where I tried out these, and other, changes:
from turtle import Screen, Turtle
from math import log
from timeit import default_timer

def lyapunov(seq, xmin, xmax, ymin, ymax, N, tico):
    xdif = xmax - xmin
    ydif = ymax - ymin
    truseq = str(bin(seq))[2:]

    for x in range(2 - swid_2, swid_2 - 2):
        if x == 2 - swid_2:
            startt = default_timer()
        tx = x * xdif / swid + xdif/2

        for y in range(11 - shei_2, shei_2 - 1):
            ty = y * ydif / shei + ydif/2
            lex = 0
            xn = prevxn = 0.5

            for n in range(1, N+1):
                rn = tx if truseq[n % len(truseq)] == '0' else ty
                xn = rn * prevxn * (1 - prevxn)
                prevxn = xn

                if xn != 1:
                    lex += 1/N * log(abs(rn * (1 - xn*2)))

            if lex > 0:
                turtle.pencolor(0, 0, min(int(lex * tico), 255))
            else:
                lex_tico = max(int(lex * tico) + 255, 0)
                turtle.pencolor(lex_tico, lex_tico, 0)

            turtle.goto(x, y)
            turtle.pendown()
            turtle.penup()

        screen.update()

        if x == 2 - swid_2:
            endt = default_timer()
            print(f'Estimated total time: {(endt - startt) * (swid - 5)} secs')

screen = Screen()
screen.setup(2000, 1000, startx=0)
screen.bgcolor('pink')
screen.colormode(255)
screen.tracer(False)

swid = screen.window_width()
shei = screen.window_height()
swid_2 = swid//2
shei_2 = shei//2

turtle = Turtle()
turtle.hideturtle()
turtle.penup()
turtle.setheading(90)

lyapunov(2, 2.0, 4.0, 2.0, 4.0, 10, 100)

screen.exitonclick()

Neural Network for Linear Regression: prediction different every time

I have 200 training examples. I have run linear regression with 6 features on this dataset and it works fine, so I want to run neural networks on it too.
Problem: each time I run the program, the prediction (pred) is different, vastly different!
input_layer_size = 6;
hidden_layer_size = 3;
num_labels = 1;
% Load Training Data
load('capitaldata.mat');
% example size
m = size(X, 1);
% initialize theta
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
% find optimal theta
options = optimset('MaxIter', 50);
% set regularization parameter
lambda = 1;
% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunctionLinear(p, input_layer_size, hidden_layer_size, num_labels, X, y, lambda);
% Now, costFunction is a function that takes in only one argument (the neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), num_labels, (hidden_layer_size + 1));
% test case
test = [18 279 86 59 23 16];
pred = predict(Theta1, Theta2, test);
display(pred);
Functions that are called by the above program:
1) randInitializeWeights.m
function W = randInitializeWeights(L_in, L_out)
W = zeros(L_out, 1 + L_in);
epsilon_init = 0.12;
W = rand(L_out , 1 + L_in) * 2 * epsilon_init - epsilon_init;
end;
2) nnCostFunctionLinear.m should be right since the test result is correct. Let me know if you would like to see it too.
I suspect that the problem is the dataset size, the number of features, or the weight initialization.
Thank you in advance for your help!
As a test, you can seed the random number generator with the same value on every run so that it produces the same sequence of random numbers each time. Search for "random seed" together with the name of the software you are using to find out how to set the seed for the random number generator.
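For illustration only, here is a minimal Python/NumPy sketch of the same idea (your Octave code would seed its own generator instead); it reproduces the randInitializeWeights recipe and shows that fixing the seed makes the "random" weights identical between runs:

import numpy as np

def rand_initialize_weights(l_in, l_out, epsilon_init=0.12):
    # Same recipe as randInitializeWeights.m: uniform values in [-epsilon, +epsilon].
    return np.random.rand(l_out, 1 + l_in) * 2 * epsilon_init - epsilon_init

np.random.seed(42)             # fix the seed ...
w1 = rand_initialize_weights(6, 3)
np.random.seed(42)             # ... reseed with the same value ...
w2 = rand_initialize_weights(6, 3)
print(np.allclose(w1, w2))     # ... and the initial weights are identical: True

If the predictions still vary wildly with a fixed seed removed, the spread you see is simply the optimizer landing in different local minima from different random starting points.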

Lua Separation Steering algorithm groups overlapping rooms into one corner

I'm trying to implement a dungeon generation algorithm (presented here and demoed here) that involves generating a random number of cells that overlap each other. The cells are then pushed apart/separated and connected. The original poster/author described using a Separation Steering Algorithm to distribute the cells uniformly over an area. I haven't had much experience with flocking algorithms and/or separation steering behaviour, so I turned to Google for an explanation (and found this). My implementation (based on the article last mentioned) is as follows:
function pdg:_computeSeparation(_agent)
    local neighbours = 0
    local rtWidth = #self._rooms
    local v =
    {
        x = self._rooms[_agent].startX,
        y = self._rooms[_agent].startY,
        --velocity = 1,
    }

    for i = 1, rtWidth do
        if _agent ~= i then
            local distance = math.dist(self._rooms[_agent].startX,
                                       self._rooms[_agent].startY,
                                       self._rooms[i].startX,
                                       self._rooms[i].startY)
            if distance < 12 then
                --print("Separating agent: ".._agent.." from agent: "..i.."")
                v.x = (v.x + self._rooms[_agent].startX - self._rooms[i].startX) * distance
                v.y = (v.y + self._rooms[_agent].startY - self._rooms[i].startY) * distance
                neighbours = neighbours + 1
            end
        end
    end

    if neighbours == 0 then
        return v
    else
        v.x = v.x / neighbours
        v.y = v.y / neighbours
        v.x = v.x * -1
        v.y = v.y * -1
        pdg:_normalize(v, 1)
        return v
    end
end
self._rooms is a table that contains the original X and Y position of each room in the grid, along with its width and height (endX, endY).
The problem is that, instead of tidily arranging the cells on the grid, it takes the overlapping cells and moves them into an area that goes from 1,1 to distance+2, distance+2 (as seen in my video [youtube]).
I'm trying to understand why this is happening.
In case it's needed, here is where I parse the grid table, then separate and fill the cells after the separation:
function pdg:separate( )
    if #self._rooms > 0 then
        --print("NR ROOMS: "..#self._rooms.."")
        -- reset the map to empty
        for x = 1, self._pdgMapWidth do
            for y = 1, self._pdgMapHeight do
                self._pdgMap[x][y] = 4
            end
        end

        -- now, we separate the rooms
        local numRooms = #self._rooms
        for i = 1, numRooms do
            local v = pdg:_computeSeparation(i)
            -- we adjust the x and y positions of the items in the room table
            self._rooms[i].startX = v.x
            self._rooms[i].startY = v.y
            --self._rooms[i].endX = v.x + self._rooms[i].endX
            --self._rooms[i].endY = v.y + self._rooms[i].endY
        end

        -- we render them again
        for i = 1, numRooms do
            local px = math.abs( math.floor(self._rooms[i].startX) )
            local py = math.abs( math.floor(self._rooms[i].startY) )
            for k = self.rectMinWidth, self._rooms[i].endX do
                for v = self.rectMinHeight, self._rooms[i].endY do
                    print("PX IS AT: "..px.." and k is: "..k.." and their sum is: "..px+k.."")
                    print("PY IS AT: "..py.." and v is: "..v.." and their sum is: "..py+v.."")
                    if k == self.rectMinWidth or
                       v == self.rectMinHeight or
                       k == self._rooms[i].endX or
                       v == self._rooms[i].endY then
                        self._pdgMap[px+k][py+v] = 1
                    else
                        self._pdgMap[px+k][py+v] = 2
                    end
                end
            end
        end
    end
end
I have implemented this generation algorithm as well, and I came across more or less the same issue. All of my rectangles ended up in the top-left corner.
My problem was that I was normalizing velocity vectors with zero length. If you normalize those, you divide by zero, resulting in NaN.
You can fix this by simply checking whether your velocity's length is zero before using it in any further calculations.
I hope this helps!
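The asker's _normalize isn't shown, so this is just a hypothetical Python illustration of the guard described above, using a plain dict as the vector:

import math

def normalize(v, target_length=1.0):
    # Guard against zero-length vectors: normalizing them divides by zero (NaN).
    length = math.hypot(v["x"], v["y"])
    if length == 0:
        return v  # leave the vector untouched instead of producing NaN
    v["x"] = v["x"] / length * target_length
    v["y"] = v["y"] / length * target_length
    return v

print(normalize({"x": 3.0, "y": 4.0}))  # {'x': 0.6, 'y': 0.8}
print(normalize({"x": 0.0, "y": 0.0}))  # unchanged, no NaN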
Uhm, I know it's an old question, but I noticed something that might be useful to somebody, so...
I think there's a problem here:
v.x = (v.x + self._rooms[_agent].startX - self._rooms[i].startX) * distance
v.y = (v.y + self._rooms[_agent].startY - self._rooms[i].startY) * distance
Why do you multiply these expressions by the distance?
"(self._rooms[_agent].startX - self._rooms[i].startX)" already contains the (signed) distance along that axis!
Plus, by multiplying everything by "distance" you also modify the previous results stored in v!
If at least you put the "v.x" outside the brackets, the result would just be larger and the normalize function would fix it, although that's some useless calculation...
By the way I'm pretty sure the code should be like:
v.x = v.x + (self._rooms[_agent].startX - self._rooms[i].startX)
v.y = v.y + (self._rooms[_agent].startY - self._rooms[i].startY)
I'll give an example. Imagine you have your main agent at (0,0) and three neighbours at (0,-2), (-2,0) and (0,2). A separation steering behaviour would push the main agent along the positive X axis, in the normalized direction (1,0).
Let's focus only on the Y component of the result vector.
The math should be something like this:
--Iteration 1
v.y = 0 + ( 0 + 2 )
--Iteration 2
v.y = 2 + ( 0 - 0 )
--Iteration 3
v.y = 2 + ( 0 - 2 )
--Result
v.y = 0
Which is consistent with our theory.
This is what your code does (note that the distance is always 2):
--Iteration 1
v.y = ( 0 + 0 + 2 ) * 2
--Iteration 2
v.y = ( 4 + 0 - 0 ) * 2
--Iteration 3
v.y = ( 8 + 0 - 2 ) * 2
--Result
v.y = 12
And if I got the separation steering behaviour right, this can't be correct.
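For illustration, here is a compact Python sketch (not Lua, and not a drop-in replacement for _computeSeparation) of the accumulation this answer describes. It deliberately starts the accumulator at zero and skips the final *-1 flip of the original code, since (agent - neighbour) already points away from the neighbour; the room coordinates and the neighbour radius are made-up values, and the function returns a direction, not a new position:

import math

def compute_separation(agent, rooms, radius=12):
    # Sum the displacement away from each close neighbour, then average and normalize.
    vx, vy, neighbours = 0.0, 0.0, 0
    ax, ay = rooms[agent]
    for i, (x, y) in enumerate(rooms):
        if i == agent:
            continue
        if math.dist((ax, ay), (x, y)) < radius:
            vx += ax - x          # no multiplication by the distance
            vy += ay - y
            neighbours += 1
    if neighbours == 0:
        return 0.0, 0.0
    vx, vy = vx / neighbours, vy / neighbours
    length = math.hypot(vx, vy)
    return (vx / length, vy / length) if length else (0.0, 0.0)

# The worked example above: agent at (0,0), neighbours at (0,-2), (-2,0) and (0,2).
print(compute_separation(0, [(0, 0), (0, -2), (-2, 0), (0, 2)]))  # (1.0, 0.0)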

Binary clock with Lua, how to remove dots that aren't used?

I have the following binary clock that I grabbed from this wiki article (the one that's for v1.5.*) for the awesome WM:
binClock = wibox.widget.base.make_widget()
binClock.radius = 1.5
binClock.shift = 1.8
binClock.farShift = 2
binClock.border = 1
binClock.lineWidth = 1
binClock.colorActive = beautiful.bg_focus

binClock.fit = function(binClock, width, height)
    local size = math.min(width, height)
    return 6 * 2 * binClock.radius + 5 * binClock.shift + 2 * binClock.farShift + 2 * binClock.border + 2 * binClock.border, size
end

binClock.draw = function(binClock, wibox, cr, width, height)
    local curTime = os.date("*t")
    local column = {}
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.hour), 1, 1))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.hour), 2, 2))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.min), 1, 1))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.min), 2, 2))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.sec), 1, 1))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.sec), 2, 2))))

    local bigColumn = 0
    for i = 0, 5 do
        if math.floor(i / 2) > bigColumn then
            bigColumn = bigColumn + 1
        end
        for j = 0, 3 do
            if string.sub(column[i + 1], j + 1, j + 1) == "0" then
                active = false
            else
                active = true
            end
            binClock:draw_point(cr, bigColumn, i, j, active)
        end
    end
end

binClock.dec_bin = function(binClock, inNum)
    inNum = tonumber(inNum)
    local base, enum, outNum, rem = 2, "01", "", 0
    while inNum > (base - 1) do
        inNum, rem = math.floor(inNum / base), math.fmod(inNum, base)
        outNum = string.sub(enum, rem + 1, rem + 1) .. outNum
    end
    outNum = inNum .. outNum
    return outNum
end

binClock.draw_point = function(binClock, cr, bigColumn, column, row, active)
    cr:arc(binClock.border + column * (2 * binClock.radius + binClock.shift) + bigColumn * binClock.farShift + binClock.radius,
           binClock.border + row * (2 * binClock.radius + binClock.shift) + binClock.radius, 2, 0, 2 * math.pi)
    if active then
        cr:set_source_rgba(0, 0.5, 0, 1)
    else
        cr:set_source_rgba(0.5, 0.5, 0.5, 1)
    end
    cr:fill()
end

binClocktimer = timer { timeout = 1 }
binClocktimer:connect_signal("timeout", function() binClock:emit_signal("widget::updated") end)
binClocktimer:start()
First, if something isn't in Lua by default, that's because this is to be used in the config file for awesome. :)
OK, so what I need is some guidance, actually. I am not very familiar with Lua currently, so some guidance is all I ask so I can learn. :)
OK, so first, this code outputs a normal binary clock, but every column has 4 dots (44,44,44) instead of the 23,34,34 layout a normal binary clock would have. What's controlling that in this code, so that I can play around with it?
Next, what controls the color? Right now it's a gray background and quite a dark green; I want to brighten both of those up.
And what controls the smoothing? Right now it's outputting circles; I'd like to see what it's like for it to output squares instead.
That's all I need help with, if you can point me to the code and some documentation for what I need, that should be more than enough. :)
Also, if somebody would be nice enough to add some comments, that also would be awesome. Don't have to be very detailed comments, but at least to the point where it gives an idea of what each thing does. :)
EDIT:
Found what modifies the colors, so I figured that out. None of the first variables control whether it's a square or circle, BTW. :)
The draw_point function draws the dots.
The two loops in the draw function are what create the output and are where the columns come from. To do a 23/34/34 layout, you would need to modify the inner loop to skip the first X points, based on the counter of the outer loop, I believe.
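To illustrate the skipping logic only (in Python rather than Lua for brevity, and without the awesome/wibox API), the idea could look like this; the bits-per-column list is the standard BCD clock layout, and row 0 is assumed to be the top of a column:

# Standard BCD clock layout: tens-of-hours needs 2 bits, hours 4, tens-of-minutes 3, ...
BITS_PER_COLUMN = [2, 4, 3, 4, 3, 4]
ROWS = 4  # the widget currently draws 4 dots in every column

def visible_dots():
    """Yield (column, row) pairs for the dots that should actually be drawn."""
    for i, bits in enumerate(BITS_PER_COLUMN):
        for j in range(ROWS):
            if j >= ROWS - bits:      # skip the first (ROWS - bits) rows of the column
                yield i, j

for col, row in visible_dots():
    print(col, row)

In the Lua draw function the equivalent change would be a similar per-column check in the inner j loop before calling binClock:draw_point.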

Gradient in continuous regression using a neural network

I'm trying to implement a regression NN that has 3 layers (1 input, 1 hidden and 1 output layer with a continuous result). As a basis I took a classification NN from coursera.org class, but changed the cost function and gradient calculation so as to fit a regression problem (and not a classification one):
My nnCostFunction now is:
function [J grad] = nnCostFunctionLinear(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
m = size(X, 1);
a1 = X;
a1 = [ones(m, 1) a1];
a2 = a1 * Theta1';
a2 = [ones(m, 1) a2];
a3 = a2 * Theta2';
Y = y;
J = 1/(2*m)*sum(sum((a3 - Y).^2))
th1 = Theta1;
th1(:,1) = 0; %set bias = 0 in reg. formula
th2 = Theta2;
th2(:,1) = 0;
t1 = th1.^2;
t2 = th2.^2;
th = sum(sum(t1)) + sum(sum(t2));
th = lambda * th / (2*m);
J = J + th; %regularization
del_3 = a3 - Y;
t1 = del_3'*a2;
Theta2_grad = 2*(t1)/m + lambda*th2/m;
t1 = del_3 * Theta2;
del_2 = t1 .* a2;
del_2 = del_2(:,2:end);
t1 = del_2'*a1;
Theta1_grad = 2*(t1)/m + lambda*th1/m;
grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
Then I use this function in the fmincg algorithm, but in the first iterations fmincg ends its work. I think my gradient is wrong, but I can't find the error.
Can anybody help?
If I understand correctly, your first block of code (shown below) -
m = size(X, 1);
a1 = X;
a1 = [ones(m, 1) a1];
a2 = a1 * Theta1';
a2 = [ones(m, 1) a2];
a3 = a2 * Theta2';
Y = y;
is to get the output a(3) at the output layer.
Ng's slides about NN apply the activation function g (e.g. a sigmoid function) at each layer when calculating a(3), which is different from what your code presents: in the middle/output layer you are not applying the activation function g.
In terms of the cost function J without regularization terms, Ng's slides use a cost based on the log of the hypothesis, so I don't understand why you can compute it using:
J = 1/(2*m)*sum(sum((a3 - Y).^2))
because you are not including the log function at all.
Mikhaill, I've been playing with a NN for continuous regression as well, and had similar issues at some point. The best thing to do here is to test the gradient computation against a numerical calculation before running the model. If that's not correct, fmincg won't be able to train the model. (Btw, I discourage you from using the numerical gradient for training, as the time involved is much bigger.)
Taking into account that you took this idea from Ng's Coursera class, I'll implement a possible solution for you to try, using the same notation for Octave.
% Cost function without regularization.
J = 1/2/m^2*sum((a3-Y).^2);
% In case it's needed, regularization term is added (i.e. for Training).
if (reg==true);
    J=J+lambda/2/m*(sum(sum(Theta1(:,2:end).^2))+sum(sum(Theta2(:,2:end).^2)));
endif;
% Derivatives are computed for layer 2 and 3.
d3=(a3.-Y);
d2=d3*Theta2(:,2:end);
% Theta grad is computed without regularization.
Theta1_grad=(d2'*a1)./m;
Theta2_grad=(d3'*a2)./m;
% Regularization is added to grad computation.
Theta1_grad(:,2:end)=Theta1_grad(:,2:end)+(lambda/m).*Theta1(:,2:end);
Theta2_grad(:,2:end)=Theta2_grad(:,2:end)+(lambda/m).*Theta2(:,2:end);
% Unroll gradients.
grad = [Theta1_grad(:) ; Theta2_grad(:)];
Note that, since you have taken out all the sigmoid activation, the derivative calculation is quite simple and results in a simplification of the original code.
Next steps:
1. Check this code to understand if it makes sense to your problem.
2. Use gradient checking to test the gradient calculation (see the sketch below).
3. Finally, use fmincg and check that you get different results.
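As a language-agnostic illustration of the gradient-checking idea (in Python/NumPy rather than Octave; cost_fn stands in for your unrolled-parameter cost function), a central-difference check might look like this:

import numpy as np

def numerical_gradient(cost_fn, params, eps=1e-4):
    # Central differences: perturb each parameter by +/- eps and compare costs.
    grad = np.zeros_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (cost_fn(params + step) - cost_fn(params - step)) / (2 * eps)
    return grad

# Tiny sanity check on a known function: cost = sum(p^2), analytic gradient = 2p.
params = np.array([0.5, -1.2, 3.0])
num_grad = numerical_gradient(lambda p: np.sum(p ** 2), params)
print(np.max(np.abs(num_grad - 2 * params)))  # should be ~1e-8 or smaller

Run the same comparison against the analytic gradient returned by nnCostFunctionLinear on a small random network; if the two disagree by more than a tiny tolerance, the backprop code is the problem.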
Try including the sigmoid function when computing the second-layer (hidden-layer) values, and avoid the sigmoid when calculating the target (output) value.
function [J grad] = nnCostFunction1(nnParams, ...
inputLayerSize, ...
hiddenLayerSize, ...
numLabels, ...
X, y, lambda)
Theta1 = reshape(nnParams(1:hiddenLayerSize * (inputLayerSize + 1)), ...
hiddenLayerSize, (inputLayerSize + 1));
Theta2 = reshape(nnParams((1 + (hiddenLayerSize * (inputLayerSize + 1))):end), ...
numLabels, (hiddenLayerSize + 1));
Theta1Grad = zeros(size(Theta1));
Theta2Grad = zeros(size(Theta2));
m = size(X,1);
a1 = [ones(m, 1) X]';
z2 = Theta1 * a1;
a2 = sigmoid(z2);
a2 = [ones(1, m); a2];
z3 = Theta2 * a2;
a3 = z3;
Y = y';
r1 = lambda / (2 * m) * sum(sum(Theta1(:, 2:end) .* Theta1(:, 2:end)));
r2 = lambda / (2 * m) * sum(sum(Theta2(:, 2:end) .* Theta2(:, 2:end)));
J = 1 / ( 2 * m ) * (a3 - Y) * (a3 - Y)' + r1 + r2;
delta3 = a3 - Y;
delta2 = (Theta2' * delta3) .* sigmoidGradient([ones(1, m); z2]);
delta2 = delta2(2:end, :);
Theta2Grad = 1 / m * (delta3 * a2');
Theta2Grad(:, 2:end) = Theta2Grad(:, 2:end) + lambda / m * Theta2(:, 2:end);
Theta1Grad = 1 / m * (delta2 * a1');
Theta1Grad(:, 2:end) = Theta1Grad(:, 2:end) + lambda / m * Theta1(:, 2:end);
grad = [Theta1Grad(:) ; Theta2Grad(:)];
end
Normalize the inputs before passing them to nnCostFunction.
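For instance, a minimal z-score normalization sketch (in Python/NumPy for illustration; the feature matrix X here is a made-up stand-in, not the asker's data):

import numpy as np

def feature_normalize(X):
    # Z-score each column: subtract the mean and divide by the standard deviation.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0           # avoid dividing by zero for constant features
    return (X - mu) / sigma, mu, sigma

X = np.random.rand(200, 6) * 100      # made-up 200 x 6 feature matrix
X_norm, mu, sigma = feature_normalize(X)
print(X_norm.mean(axis=0).round(6), X_norm.std(axis=0).round(6))  # ~0 and ~1 per column

Remember to apply the same mu and sigma from the training set to any test example (such as the [18 279 86 59 23 16] row) before calling predict.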
In accordance with the Week 5 Lecture Notes guideline for a linear-output NN, you should make the following changes to the initial code:
Remove num_labels or make it 1 (in reshape() as well)
No need to convert y into a logical matrix
For a2 - replace the sigmoid() function with tanh()
In the d2 calculation - replace sigmoidGradient(z2) with (1-tanh(z2).^2)
Remove the sigmoid from the output layer (a3 = z3)
Replace the cost function in the unregularized portion with a linear one: J = (1/(2*m))*sum((a3-y).^2)
Create predictLinear(): use the predict() function as a basis, replace sigmoid with tanh() for the first-layer hypothesis, remove the second sigmoid for the second-layer hypothesis, remove the line with the max() function, and use the output of the hidden-layer hypothesis as the prediction result
Verify your nnCostFunctionLinear() on the test case from the lecture notes
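The following is a compact Python/NumPy sketch consistent with that list (tanh hidden layer, linear output, squared-error cost with regularization), not the course's reference implementation; the weight shapes follow the Theta1/Theta2 convention from the question, but the data is made up:

import numpy as np

def nn_cost_linear(theta1, theta2, X, y, lam=1.0):
    # theta1: (hidden, n+1), theta2: (1, hidden+1); tanh hidden layer, linear output.
    m = X.shape[0]
    a1 = np.hstack([np.ones((m, 1)), X])
    z2 = a1 @ theta1.T
    a2 = np.hstack([np.ones((m, 1)), np.tanh(z2)])
    a3 = a2 @ theta2.T                      # no sigmoid on the output layer
    J = np.sum((a3 - y) ** 2) / (2 * m)
    J += lam / (2 * m) * (np.sum(theta1[:, 1:] ** 2) + np.sum(theta2[:, 1:] ** 2))

    d3 = (a3 - y) / m                       # derivative of the squared-error term
    d2 = (d3 @ theta2[:, 1:]) * (1 - np.tanh(z2) ** 2)
    theta2_grad = d3.T @ a2
    theta1_grad = d2.T @ a1
    theta2_grad[:, 1:] += lam / m * theta2[:, 1:]
    theta1_grad[:, 1:] += lam / m * theta1[:, 1:]
    return J, theta1_grad, theta2_grad

# Made-up shapes matching the question: 6 inputs, 3 hidden units, 1 output.
rng = np.random.default_rng(0)
theta1 = rng.uniform(-0.12, 0.12, (3, 7))
theta2 = rng.uniform(-0.12, 0.12, (1, 4))
X, y = rng.random((200, 6)), rng.random((200, 1))
print(nn_cost_linear(theta1, theta2, X, y)[0])

The returned gradients should agree with a numerical check like the one sketched earlier before handing the cost function to fmincg.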
