How to find and update levels accordingly based on points? - ruby-on-rails

I am creating a Rails application which is like a game, so it has points and levels. For example: to reach level one the user has to collect at least 100 points, and to reach level two the user has to collect 200 points. The points difference between consecutive levels increases every 10 levels: the difference between level one and level two is 100 points, the difference between level 11 and level 12 is 150 points, and so on. There is no upper bound for levels.
Now my question is: let's say a user's total points is 3150 and just got updated to 3155. What is the optimal way to find the current level and update it if needed?
I can write a solution using a while loop with another loop nested inside it, which runs in O(n^2). I need something better.
I think this code works, but I'm not sure it is the best way to go about it:
def get_level(points)
  diff = 100
  sum = 0
  level = -1
  current_level = 0
  while level.negative?
    10.times do |i|
      current_level += 1
      sum += diff
      if points > sum
        next
      elsif points <= sum
        level = current_level
        break
      end
    end
    diff += 50
  end
  puts level
end

I wrote a get_points function (it is not difficult). Based on it I wrote a get_level function, in which it was necessary to solve a quadratic equation to find the high value and then calculate low.
If you have any questions, let me know.
You can check the output by running the script below.
#!/usr/bin/env python3
import math

def get_points(level):
    high = (level + 1) // 10
    low = (level + 1) % 10
    high_point = 250 * high * high + 750 * high  # (3 + high) * high // 2 * 500
    low_point = (100 + 50 * high) * low
    return low_point + high_point

def get_level(points):
    # quadratic equation
    a = 250
    b = 750
    c = -points
    d = b * b - 4 * a * c
    x = (-b + math.sqrt(d)) / (2 * a)
    high = int(x)
    remainder = points - (250 * high * high + 750 * high)
    low = remainder // (100 + 50 * high)
    level = high * 10 + low
    return level

def main():
    for l in range(0, 40):
        print(f'{l:3d} {get_points(l - 1):5d}..{get_points(l) - 1}')
    for level, (l, r) in (
        (1, (100, 199)),
        (2, (200, 299)),
        (9, (900, 999)),
        (10, (1000, 1149)),
        (11, (1150, 1299)),
        (19, (2350, 2499)),
        (20, (2500, 2699)),
    ):
        for p in range(l, r + 1):  # p in [l, r]
            assert get_level(p) == level, f'{p} {l}'

if __name__ == '__main__':
    main()
Why did you set the values a = 250 and b = 750? Can you explain that to me, please?
Let's write out every 10th level and the difference in points:
lvl - pnt (+delta)
10 - 1000 (+1000 = +100 * 10)
20 - 2500 (+1500 = +150 * 10)
30 - 4500 (+2000 = +200 * 10)
40 - 7000 (+2500 = +250 * 10)
Divide by 500 (10 levels * 50 point difference change) and we get an arithmetic progression starting at 2:
10 - 2 (+2)
20 - 5 (+3)
30 - 9 (+4)
40 - 14 (+5)
Using the arithmetic progression sum formula, the points for level = k * 10 are:
sum(x for x in 2..k+1) * 500 =
(2 + k + 1) * k / 2 * 500 =
(3 + k) * k * 250 =
250 * k * k + 750 * k
Now we have points and want to find the maximum high such that points >= 250 * high^2 + 750 * high, i.e. 250 * high^2 + 750 * high - points <= 0. Since a = 250 is positive, the branches of the parabola are directed upwards. So we find the positive root of the quadratic equation 250 * high^2 + 750 * high - points = 0 and discard the fractional part (that is high = int(x) in the Python script).
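As a quick sanity check (my addition, using the 3155 points from the question), plugging that value into get_level above:
# Worked check of get_level() for 3155 points:
#   d = 750**2 + 4 * 250 * 3155 = 3717500
#   x = (-750 + sqrt(3717500)) / 500 ≈ 2.36  ->  high = 2
#   remainder = 3155 - (250 * 2**2 + 750 * 2) = 655
#   low = 655 // (100 + 50 * 2) = 3
#   level = 2 * 10 + 3 = 23
print(get_level(3155))  # -> 23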

Related

How to calculate F1 Score for Multi-label Classification

I am trying to calculate the F1 score (and accuracy) for my multi-label classification problem. Could you please check whether I'm calculating it correctly? Note that I'm calculating IoU (intersection over union) when the model predicts an object as 1, and marking it as a TP only if the IoU is greater than or equal to 0.5.
GT labels: 14 x 10 x 128
Output: 14 x 10 x 128
where 14 is the batch_size, 10 is the sequence_length, and 128 is the object vector (i.e., 1 if the object at an index belongs to the sequence and 0 otherwise).
import torch  # model, tr_dl and calculate_tp_with_iou are defined elsewhere

def calculate_performance_metrics(total_padded_elements, gt_labels, predicted_labels):
    # check if TP pred objects overlap with TP gt objects
    TP_INDICES = (torch.logical_and(predicted_labels == 1, gt_labels == 1)).nonzero()  # we only want the batch and object indices, i.e. the 0 and 2 indices
    TP = calculate_tp_with_iou()  # details of this don't matter for now
    FP = torch.sum(torch.logical_and(predicted_labels, 1 - gt_labels)).item()
    TN = torch.sum(torch.logical_and(1 - predicted_labels, 1 - gt_labels)).item()
    FN = torch.sum(torch.logical_and(1 - predicted_labels, gt_labels)).item()
    return float(TP), float(FP), float(TN - total_padded_elements), float(FN)

for epoch in range(10):
    EPOCH_TP = EPOCH_FP = EPOCH_TN = EPOCH_FN = EPOCH_PRECISION = EPOCH_RECALL = EPOCH_F1 = 0.
    for inputs, gt_labels, masks in tr_dl:
        outputs = model(inputs)  # out shape: (14, 10, 128)
        # mask shape: (14, 10), so expand it to the shape of the output
        masks = masks[:, :, None].expand_as(outputs)
        pred_labels = (torch.sigmoid(outputs) >= 0.5).float().type(torch.int64)  # predictions above 0.5 become 1, the rest 0
        pred_labels = pred_labels * masks
        gt_labels = (gt_labels * masks).type(torch.int64)
        total_padded_elements = masks.numel() - masks.sum()  # needed to get accurate true negatives
        batch_tp, batch_fp, batch_tn, batch_fn = calculate_performance_metrics(total_padded_elements, gt_labels, pred_labels)
        EPOCH_TP += batch_tp
        EPOCH_FP += batch_fp
        EPOCH_TN += batch_tn
        EPOCH_FN += batch_fn
    EPOCH_ACCURACY = (EPOCH_TP + EPOCH_TN) / (EPOCH_TP + EPOCH_TN + EPOCH_FP + EPOCH_FN)
    if EPOCH_TP + EPOCH_FP > 0:
        EPOCH_PRECISION = EPOCH_TP / (EPOCH_TP + EPOCH_FP)
    if EPOCH_TP + EPOCH_FN > 0:
        EPOCH_RECALL = EPOCH_TP / (EPOCH_TP + EPOCH_FN)
    EPOCH_F1 = (2 * EPOCH_PRECISION * EPOCH_RECALL) / (EPOCH_PRECISION + EPOCH_RECALL)
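As an aside (my addition, not part of the original question): if the IoU gating is dropped, the micro-averaged F1 over the masked positions can be cross-checked with scikit-learn, assuming pred_labels, gt_labels and masks are the tensors built above:
# Hedged cross-check sketch: micro-averaged F1 without the IoU condition.
from sklearn.metrics import f1_score

valid = masks.flatten().bool()                     # drop padded positions
y_true = gt_labels.flatten()[valid].cpu().numpy()
y_pred = pred_labels.flatten()[valid].cpu().numpy()
print(f1_score(y_true, y_pred, average='micro'))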

How do I optimize this code so that it executes faster (max ~5 mins)?

I am trying to make a program in turtle that draws a Lyapunov fractal. However, as timing with timeit shows, it would take around 3 hours to complete, or 1.5 hours if I compromise on the resolution (N).
import turtle as t; from math import log; from timeit import default_timer as dt

t.setup(2000,1000,0); swid=t.window_width(); shei=t.window_height(); t.up(); t.ht(); t.tracer(False); t.colormode(255); t.bgcolor('pink')

def lyapunov(seq,xmin,xmax,ymin,ymax,N,tico):
    truseq=str(bin(seq))[2:]
    for x in range(-swid//2+2,swid//2-2):
        tx=(x*(xmax-xmin))/swid+(xmax+xmin)/2
        if x==-swid//2+2:
            startt=dt()
        for y in range(-shei//2+11,shei//2-1):
            t.goto(x,y); ty=(y*(ymax-ymin))/shei+(ymax+ymin)/2; lex=0; xn=prevxn=0.5
            for n in range(1,N+1):
                if truseq[n%len(truseq)]=='0': rn=tx
                else: rn=ty
                xn=rn*prevxn*(1-prevxn)
                prevxn=xn
                if xn!=1: lex+=(1/N)*log(abs(rn*(1-2*xn)))
            if lex>0: t.pencolor(0,0,min(int(lex*tico),255))
            else: t.pencolor(max(255+int(lex*tico),0),max(255+int(lex*tico),0),0)
            t.dot(size=1); t.update()
        if x==-swid//2+2:
            endt=dt()
            print(f'Estimated total time: {(endt-startt)*(swid-5)} secs')

#Example: lyapunov(2,2.0,4.0,2.0,4.0,10000,100)
I attempted to use yield but I couldn't figure out where it should go.
On my slower machine I was only able to test with a tiny N (e.g. 10), but I was able to speed up the code about 350 times. (The speedup will clearly be lower as N increases.) There are two problems with your use of update(). First, you call it too often -- you should outdent it from the y loop to the x loop so it's only called once per vertical pass. Second, the dot() method forces an automatic update(), so you get no advantage from using tracer(). Replace dot() with some other way of drawing a pixel and you'll get back the advantage of using tracer() and update() (as long as you move update() out of the innermost loop, as noted).
My rework of your code where I tried out these, and other, changes:
from turtle import Screen, Turtle
from math import log
from timeit import default_timer

def lyapunov(seq, xmin, xmax, ymin, ymax, N, tico):
    xdif = xmax - xmin
    ydif = ymax - ymin
    truseq = str(bin(seq))[2:]
    for x in range(2 - swid_2, swid_2 - 2):
        if x == 2 - swid_2:
            startt = default_timer()
        tx = x * xdif / swid + xdif/2
        for y in range(11 - shei_2, shei_2 - 1):
            ty = y * ydif / shei + ydif/2
            lex = 0
            xn = prevxn = 0.5
            for n in range(1, N+1):
                rn = tx if truseq[n % len(truseq)] == '0' else ty
                xn = rn * prevxn * (1 - prevxn)
                prevxn = xn
                if xn != 1:
                    lex += 1/N * log(abs(rn * (1 - xn*2)))
            if lex > 0:
                turtle.pencolor(0, 0, min(int(lex * tico), 255))
            else:
                lex_tico = max(int(lex * tico) + 255, 0)
                turtle.pencolor(lex_tico, lex_tico, 0)
            turtle.goto(x, y)
            turtle.pendown()
            turtle.penup()
        screen.update()
        if x == 2 - swid_2:
            endt = default_timer()
            print(f'Estimated total time: {(endt - startt) * (swid - 5)} secs')

screen = Screen()
screen.setup(2000, 1000, startx=0)
screen.bgcolor('pink')
screen.colormode(255)
screen.tracer(False)

swid = screen.window_width()
shei = screen.window_height()
swid_2 = swid//2
shei_2 = shei//2

turtle = Turtle()
turtle.hideturtle()
turtle.penup()
turtle.setheading(90)

lyapunov(2, 2.0, 4.0, 2.0, 4.0, 10, 100)

screen.exitonclick()

Smart algorithm to create unique exponential sequence steps between two values

I'm looking for an algorithm that returns 5 steps between 2 given dynamic values, including both end values, with exponential growth. The returned values should be nicely rounded and unique.
Example:
range 100 - 10000 should return something like this:
100, 500, 2500, 5000, 10000
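For reference (my own illustration, not part of the original post), a pure geometric spacing between the two endpoints, before any rounding, would look like this quick Python sketch; the same one-liner carries over to Ruby:
min_val, max_val, steps = 100, 10000, 5
ratio = (max_val / min_val) ** (1.0 / (steps - 1))   # constant growth factor
print([round(min_val * ratio ** i) for i in range(steps)])
# -> [100, 316, 1000, 3162, 10000]  (still needs 'nice' rounding)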
This is what I came up with so far (credit goes mostly to an SO thread I once found but can't recover):
min = 100
max = 10000
a = Array.new
loops = 5
factor = 2.5

for i in 0..loops-1
  x = (max - min) * ( (i.to_f / (loops.to_f - 1.0)) ** factor ) + min
  case x
  when min
    a[i] = x.to_i
  when max
    a[i] = x.to_i
  when (min + 1).to_f..500
    a[i] = (x.to_f / 250).round(0) * 250
  when 500..2000
    a[i] = (x.to_f / 500).round(0) * 500
  else
    a[i] = (x.to_f / 2500).round(0) * 2500
  end
end
The result is adjustable with the factor; I found 2.5 to work best. This already works quite well in most cases. Before rounding I get these values:
[100.0, 409.37, 1850.08, 4922.67, 10000.0]
But it does not check for duplicates that can occur in the rounding process, which happens mostly when the range is smaller:
100 - 1000
Raw: [100.0, 128.12, 259.09, 538.42, 1000.0]
Rounded: [100, 250, 250, 500, 1000]
5000 - 10000
Raw: [5000.0, 5156.25,5883.88, 7435.69, 10000.0]
Rounded: [5000, 5000, 5000, 7500, 10000]
Now I'm a little torn between discarding the whole code and coming up with a smarter calculation method that already includes rounding, or just checking for duplicates in a second pass - but I didn't get a satisfying result from either of those two options.
Does someone have a clue on how to integrate a duplicate check into the rounding, or how to make the rounding more dynamic?
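One possible direction (my own sketch, not a definitive answer): keep the rounding rules, then walk the rounded list once and bump any value that collides with its predecessor up to the next multiple of that bucket's step. Shown in Python; step_for mirrors the rounding buckets from the Ruby code above.
def step_for(x):
    # same buckets as the Ruby case statement (an assumption)
    if x <= 500:
        return 250
    if x <= 2000:
        return 500
    return 2500

def dedupe(values):
    out = [values[0]]
    for v in values[1:]:
        while v <= out[-1]:          # collision with the previous value
            v += step_for(v)         # bump to the next 'nice' multiple
        out.append(v)
    return out

print(dedupe([100, 250, 250, 500, 1000]))   # -> [100, 250, 500, 750, 1000]
This keeps the endpoints as long as the range is wide enough for five distinct 'nice' values.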

Extracting sampled Time Points

I have a MATLAB curve which I would like to plot, and from which I would like to extract concentration values at 17 different time samples.
The time points, in minutes, are:
t = 0, 0.25, 0.50, 1, 1.5, 2, 3, 4, 9, 14, 19, 24, 29, 34, 39, 44, 49
Following is the function I have written to plot the graph:
function c_t = output_function_constrainedK2(t, a1, a2, a3, b1, b2, b3, td, tmax, k1, k2, k3)
    K_1 = (k1*k2)/(k2+k3);
    K_2 = (k1*k3)/(k2+k3);
    DV_free = k1/(k2+k3);

    c_t = zeros(size(t));

    ind = (t > td) & (t < tmax);
    c_t(ind) = conv(((t(ind) - td) ./ (tmax - td) * (a1 + a2 + a3)), (K_1*exp(-(k2+k3)*t(ind)+K_2)), 'same');

    ind = (t >= tmax);
    c_t(ind) = conv((a1 * exp(-b1 * (t(ind) - tmax)) + a2 * exp(-b2 * (t(ind) - tmax))) + a3 * exp(-b3 * (t(ind) - tmax)), (K_1*exp(-(k2+k3)*t(ind)+K_2)), 'same');

    plot(t, c_t);
    axis([0 50 0 1400]);
    xlabel('Time [mins]');
    ylabel('Concentration [MBq]');
    title('Model: Constrained K2');
end
If possible, kindly suggest how I could alter the above function so that I can obtain concentration values at the 17 time points stated above.
Following are the input values that I have used to produce the curve:
output_function_constrainedK2(0:0.1:50,2501,18500,65000,0.5,0.7,0.3,3,8,0.014,0.051,0.07)
This will give you concentration values at the time points you wanted. You will have to put this inside the output_function_constrainedK2 function so that you can access the variables t and c_t.
T=[0 0.25 0.50 1 1.5 2 3 4 9 14 19 24 29 34 39 44 49];
concentration=interp1(t,c_t,T)
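(As an aside, my addition rather than part of the original answer: the equivalent in Python/NumPy, should anyone need the same sampling outside MATLAB, is np.interp, which also performs linear interpolation of c_t at the sample times.)
# Assumption: t and c_t are arrays equivalent to those produced by the MATLAB function above.
import numpy as np

T = np.array([0, 0.25, 0.50, 1, 1.5, 2, 3, 4, 9, 14, 19, 24, 29, 34, 39, 44, 49])
concentration = np.interp(T, t, c_t)   # linear interpolation, like MATLAB's interp1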

Lua Separation Steering algorithm groups overlapping rooms into one corner

I'm trying to implement a dungeon generation algorithm (presented here and demoed here) that involves generating a random number of cells that overlap each other. The cells are then pushed apart/separated and then connected. The original poster/author described using a Separation Steering Algorithm in order to distribute the cells uniformly over an area. I haven't had much experience with flocking algorithms and/or separation steering behaviour, so I turned to Google for an explanation (and found this). My implementation (based on the article last mentioned) is as follows:
function pdg:_computeSeparation(_agent)
  local neighbours = 0
  local rtWidth = #self._rooms
  local v =
  {
    x = self._rooms[_agent].startX,
    y = self._rooms[_agent].startY,
    --velocity = 1,
  }
  for i = 1, rtWidth do
    if _agent ~= i then
      local distance = math.dist(self._rooms[_agent].startX,
                                 self._rooms[_agent].startY,
                                 self._rooms[i].startX,
                                 self._rooms[i].startY)
      if distance < 12 then
        --print("Separating agent: ".._agent.." from agent: "..i.."")
        v.x = (v.x + self._rooms[_agent].startX - self._rooms[i].startX) * distance
        v.y = (v.y + self._rooms[_agent].startY - self._rooms[i].startY) * distance
        neighbours = neighbours + 1
      end
    end
  end
  if neighbours == 0 then
    return v
  else
    v.x = v.x / neighbours
    v.y = v.y / neighbours
    v.x = v.x * -1
    v.y = v.y * -1
    pdg:_normalize(v, 1)
    return v
  end
end
self._rooms is a table that contains the original X and Y position of each room in the grid, along with its width and height (endX, endY).
The problem is that, instead of tidily arranging the cells on the grid, it takes the overlapping cells and moves them into an area that goes from 1,1 to distance+2, distance+2 (as seen in my video [youtube]).
I'm trying to understand why this is happening.
In case it's needed, here is where I parse the grid table, separate the cells, and fill them in after the separation:
function pdg:separate( )
  if #self._rooms > 0 then
    --print("NR ROOMS: "..#self._rooms.."")
    -- reset the map to empty
    for x = 1, self._pdgMapWidth do
      for y = 1, self._pdgMapHeight do
        self._pdgMap[x][y] = 4
      end
    end
    -- now, we separate the rooms
    local numRooms = #self._rooms
    for i = 1, numRooms do
      local v = pdg:_computeSeparation(i)
      -- we adjust the x and y positions of the items in the room table
      self._rooms[i].startX = v.x
      self._rooms[i].startY = v.y
      --self._rooms[i].endX = v.x + self._rooms[i].endX
      --self._rooms[i].endY = v.y + self._rooms[i].endY
    end
    -- we render them again
    for i = 1, numRooms do
      local px = math.abs( math.floor(self._rooms[i].startX) )
      local py = math.abs( math.floor(self._rooms[i].startY) )
      for k = self.rectMinWidth, self._rooms[i].endX do
        for v = self.rectMinHeight, self._rooms[i].endY do
          print("PX IS AT: "..px.." and k is: "..k.." and their sum is: "..px+k.."")
          print("PY IS AT: "..py.." and v is: "..v.." and their sum is: "..py+v.."")
          if k == self.rectMinWidth or
             v == self.rectMinHeight or
             k == self._rooms[i].endX or
             v == self._rooms[i].endY then
            self._pdgMap[px+k][py+v] = 1
          else
            self._pdgMap[px+k][py+v] = 2
          end
        end
      end
    end
  end
end -- (this closing end was missing from the snippet)
I have implemented this generation algorithm as well, and I came across more or less the same issue: all of my rectangles ended up in the top-left corner.
My problem was that I was normalizing velocity vectors with zero length. If you normalize those, you divide by zero, resulting in NaN.
You can fix this by simply checking whether your velocity's length is zero before using it in any further calculations.
I hope this helps!
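A minimal sketch of that guard (my addition, written in Python since the idea carries over directly to Lua):
import math

def normalize(vx, vy, scale=1.0):
    # Guard against zero-length vectors: normalizing them divides by zero
    # and produces NaN coordinates.
    length = math.hypot(vx, vy)
    if length == 0:
        return 0.0, 0.0
    return vx / length * scale, vy / length * scale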
Uhm I know it's an old question, but I noticed something and maybe it can be useful to somebody, so...
I think there's a problem here:
v.x = (v.x + self._rooms[_agent].startX - self._rooms[i].startX) * distance
v.y = (v.y + self._rooms[_agent].startY - self._rooms[i].startY) * distance
Why do you multiply these expressions by the distance?
(self._rooms[_agent].startX - self._rooms[i].startX) already contains the distance (along that axis)!
Plus, by multiplying everything by distance you also scale the previous results stored in v!
If you at least put the v.x outside the brackets, the result would just be scaled up and the normalize function would fix it, although that's still some useless calculation...
By the way, I'm pretty sure the code should be:
v.x = v.x + (self._rooms[_agent].startX - self._rooms[i].startX)
v.y = v.y + (self._rooms[_agent].startY - self._rooms[i].startY)
I'll give an example. Imagine you have your main agent at (0,0) and three neighbours at (0,-2), (-2,0) and (0,2). A separation steering behaviour would move the main agent along the positive X axis, i.e. in the normalized direction (1,0).
Let's focus only on the Y component of the result vector.
The math should be something like this:
--Iteration 1
v.y = 0 + ( 0 + 2 )
--Iteration 2
v.y = 2 + ( 0 - 0 )
--Iteration 3
v.y = 2 + ( 0 - 2 )
--Result
v.y = 0
Which is consistent with our theory.
This is what your code does
(note that the distance is always 2):
--Iteration 1
v.y = ( 0 + 0 + 2 ) * 2
--Iteration 2
v.y = ( 4 + 0 - 0 ) * 2
--Iteration 3
v.y = ( 8 + 0 - 2 ) * 2
--Result
v.y = 12
And if I understand the separation steering behaviour correctly, that can't be right.
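Putting the two answers together, here is a compact sketch of the intended computation (my own Python rendering, not the original Lua; the room fields and the 12-unit radius are taken from the question's code):
import math

def compute_separation(agent, rooms, radius=12):
    # Average push-away vector from nearby rooms, then normalize it.
    vx = vy = 0.0
    neighbours = 0
    for i, room in enumerate(rooms):
        if i == agent:
            continue
        dx = rooms[agent]['startX'] - room['startX']
        dy = rooms[agent]['startY'] - room['startY']
        if math.hypot(dx, dy) < radius:
            vx += dx            # accumulate; do NOT multiply by the distance
            vy += dy
            neighbours += 1
    if neighbours == 0:
        return 0.0, 0.0
    vx /= neighbours
    vy /= neighbours
    length = math.hypot(vx, vy)
    if length == 0:             # the zero-length guard from the first answer
        return 0.0, 0.0
    return vx / length, vy / length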
