What linear mapping function is required to map a grayscale image from [30, 200] to [0, 255]?
I have already done some work and this is what I have come up with, but I would like to know if it's the correct way to do it:
min : 30, want to map to 0
mid : 85, want to map to 128
max : 200, want to map to 255
if (i <= mid), M(i) = 127*(i - min)/(mid - min)
if (i > mid), M(i) = 128 + (255 - 128)*(i - mid - 1)/(max - mid - 1);
This appears to be correct to me because:
if i = 30, it should map to 0. Plugging the information in:
M(30) = 127*(30-30)/(85 - 30) = 0
If i = 85, it should map to 127:
M(85) = 127*(85 - 30) / (85 - 30) = 127
If i = 200, it should map to 255:
M(200) = 128 + (255 - 128)*(200 - 85 - 1)/(200 - 85 - 1) = 255
Thank you.
If you apply a linear mapping, the middle of the first interval is automatically mapped to the middle of the second interval (you can see it as Thales' intercept theorem). So you need to apply only one equation: y = a*x + b.
To determine a and b you have two conditions: a*30 + b = 0 and
a*200 + b = 255.
Solving these gives a = 1.5 and b = -45.
And be careful: the middle of [30, 200] is not 85 but 115 (= (30 + 200)/2).
Finally you can check: applying y = 1.5*x - 45, you successfully obtain 30 -> 0, 200 -> 255, and 115 -> 127.5.
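As a quick illustration, here is a minimal NumPy sketch of that mapping applied to a grayscale array (the stretch function and its defaults are my own, not from the question):

import numpy as np

def stretch(img, lo=30, hi=200):
    # Linear mapping y = (x - lo) * 255 / (hi - lo); with lo = 30 and
    # hi = 200 this is exactly the y = 1.5*x - 45 derived above.
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

print(stretch(np.array([30, 115, 200])))  # [  0 127 255]  (the uint8 cast truncates 127.5)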
I have a ring-shaped health bar in GMod, and I'm trying to make it go down smoothly as I lose health. I have no idea how to do that; I've tried a math approach and lerping, but neither worked (probably my poor coding was at fault), so your suggestions with those methods are still welcome.
This is the function that draws my health bar:
local function healthBar()
    local hp = ply:Health()
    local maxHp = ply:GetMaxHealth()
    surface.SetDrawColor(225, 225, 225, 255)
    for i = 0, 180, 45 do
        function HpAng(i, maxAng)
            local curSeg = (i / maxAng) + 1
            local segAng = (maxHp / 5)
            local segMax = segAng * curSeg
            if segMax <= hp then
                return i + maxAng
            end
            return (i + maxAng) * (hp / segMax)
        end
        draw.JRing(ScrW() / 2 + 750, ScrH() / 2 + 260, 75, 8, i + 2, HpAng(i, 45))
    end
end
This is what the health bar looks like:
https://i.stack.imgur.com/TsKzm.jpg
math.Approach moves one value towards another value by a given increment, so the idea is to increment the currently displayed health towards the player's actual health each frame. Assuming your healthBar() function runs inside a rendering hook such as HUDPaint, this should work:
local lerpHp = 0
local lerpSpeed = 0.1

local function healthBar()
    local hp = ply:Health()
    local maxHp = ply:GetMaxHealth()
    if (lerpHp != hp) then
        lerpHp = math.Approach(lerpHp, hp, math.abs(hp - lerpHp) * lerpSpeed)
    end
    surface.SetDrawColor(225, 225, 225, 255)
    for i = 0, 180, 45 do
        function HpAng(i, maxAng)
            local curSeg = (i / maxAng) + 1
            local segAng = (maxHp / 5)
            local segMax = segAng * curSeg
            if segMax <= lerpHp then
                return i + maxAng
            end
            return (i + maxAng) * (lerpHp / segMax)
        end
        draw.JRing(ScrW() / 2 + 750, ScrH() / 2 + 260, 75, 8, i + 2, HpAng(i, 45))
    end
end
The lerpHp variable is outside the function, since we need to keep track of what the old health was between frames.
math.abs(hp - lerpHp) scales the speed of the animation in proportion to the difference in health, meaning the bar moves faster when you have lost a larger amount of health. This is optional, but it is the behaviour you'll see in most Garry's Mod HUDs.
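To see the update rule in isolation, here is a minimal Python sketch of the same idea (the approach function and the numbers are illustrative only; in GMod you would call math.Approach itself):

def approach(current, target, step):
    # Move current towards target by at most step, without overshooting
    # (the same contract as GMod's math.Approach).
    if current < target:
        return min(current + step, target)
    if current > target:
        return max(current - step, target)
    return current

# A step proportional to the remaining difference makes big health drops animate faster.
lerp_hp, hp, lerp_speed = 100.0, 40.0, 0.1
for _ in range(3):
    lerp_hp = approach(lerp_hp, hp, abs(hp - lerp_hp) * lerp_speed)
    print(round(lerp_hp, 1))  # 94.0, 88.6, 83.7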
We use Google PageSpeed Insights as a marketing tool to compare the download speed of the websites we build against our competitors' work. But so many mobile sites are rated in the 30s that we wondered whether that is the average mobile rating. Does anyone know? Thanks.
Short Answer
The average mobile rating is 31.
Long Answer
An article I found after writing the section below that answers the question:
This article from tunetheweb has done the hard work for us and gathered the data from httparchive. (Give the article a read; it has a wealth of interesting information!)
The table below, taken from that article, covers your question: the answer is 31 for the Performance metric at the 50th percentile.
Percentile  Performance  Accessibility  Best Practices  SEO  PWA
        10            8             56              64   69   14
        25           16             69              64   80   25
        50           31             80              71   86   29
        75           55             88              79   92   36
        90           80             95              86   99   54
        95           93             97              93  100   54
        99           99            100              93  100   64
I have left the section below in, as the information may be useful to somebody, but the above answers the question much better. At least my guess of 35 wasn't a million miles away from the actual answer. hehe.
My original Answer
You would imagine that a score of 50 would be the average, right? Nope!
Lighthouse uses a log-normal curve to map each metric's value to a score.
The two key control points on that curve are the 25th percentile of real-world data for the median (so a score of 50 effectively means you are in the top 25%) and the 8th percentile for a score of 90.
The numbers used to determine these points are derived from http archive data.
You can explore the curve used for Time To Interactive scoring here as an example.
Now I am sure someone who is a lot better at maths than me can use that data to calculate the average score for a site, but I would estimate it to be around 35 for a mobile site, which is pretty close to what you have observed.
One thing I can do is provide how the scoring works based on those control points so you can see the various cutoff points etc. for each metric.
The below is taken from the maths module at https://github.com/paulirish/lh-scorecalc/tree/190bed715a3589601f314b3c8a50fb0fb147c121
I have also included the median and falloff values currently used in this calculation in the scoring variable.
To play with it, use the VALUE_AT_QUANTILE function to get the value you need to achieve a certain percentile. For example, to see the value for the 90th percentile for Time to Interactive you would use VALUE_AT_QUANTILE(7300, 2900, 0.9): take the median (7300) and falloff (2900) from TTI in the scoring variable, then enter the desired percentile as a decimal (90 -> 0.9).
Similarly, the QUANTILE_AT_VALUE function does the reverse (it shows the percentile that a particular value falls at). E.g. to see what percentile a First CPU Idle time of 3200 gets, you would use QUANTILE_AT_VALUE(6500, 2900, 3200).
Anyway, I have gone off on a bit of a tangent, but hopefully the above and below give someone cleverer than me the info needed to work it out (I have also included the weighting for each metric in the weights variable).
const scoring = {
  FCP: {median: 4000, falloff: 2000, name: 'First Contentful Paint'},
  FMP: {median: 4000, falloff: 2000, name: 'First Meaningful Paint'},
  SI: {median: 5800, falloff: 2900, name: 'Speed Index'},
  TTI: {median: 7300, falloff: 2900, name: 'Time to Interactive'},
  FCI: {median: 6500, falloff: 2900, name: 'First CPU Idle'},
  TBT: {median: 600, falloff: 200, name: 'Total Blocking Time'}, // mostly uncalibrated
  LCP: {median: 4000, falloff: 2000, name: 'Largest Contentful Paint'},
  CLS: {median: 0.25, falloff: 0.054, name: 'Cumulative Layout Shift', units: 'unitless'},
};

const weights = {
  FCP: 0.15,
  SI: 0.15,
  LCP: 0.25,
  TTI: 0.15,
  TBT: 0.25,
  CLS: 0.05
};

function internalErf_(x) {
  // erf(-x) = -erf(x);
  var sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  var a1 = 0.254829592;
  var a2 = -0.284496736;
  var a3 = 1.421413741;
  var a4 = -1.453152027;
  var a5 = 1.061405429;
  var p = 0.3275911;
  var t = 1 / (1 + p * x);
  var y = t * (a1 + t * (a2 + t * (a3 + t * (a4 + t * a5))));
  return sign * (1 - y * Math.exp(-x * x));
}

function internalErfInv_(x) {
  // erfinv(-x) = -erfinv(x);
  var sign = x < 0 ? -1 : 1;
  var a = 0.147;
  var log1x = Math.log(1 - x * x);
  var p1 = 2 / (Math.PI * a) + log1x / 2;
  var sqrtP1Log = Math.sqrt(p1 * p1 - (log1x / a));
  return sign * Math.sqrt(sqrtP1Log - p1);
}

function VALUE_AT_QUANTILE(median, falloff, quantile) {
  var location = Math.log(median);
  var logRatio = Math.log(falloff / median);
  var shape = Math.sqrt(1 - 3 * logRatio - Math.sqrt((logRatio - 3) * (logRatio - 3) - 8)) / 2;
  return Math.exp(location + shape * Math.SQRT2 * internalErfInv_(1 - 2 * quantile));
}

function QUANTILE_AT_VALUE(median, falloff, value) {
  var location = Math.log(median);
  var logRatio = Math.log(falloff / median);
  var shape = Math.sqrt(1 - 3 * logRatio - Math.sqrt((logRatio - 3) * (logRatio - 3) - 8)) / 2;
  var standardizedX = (Math.log(value) - location) / (Math.SQRT2 * shape);
  return (1 - internalErf_(standardizedX)) / 2;
}

console.log("Time To Interactive (TTI) 90th Percentile Time:", VALUE_AT_QUANTILE(7300, 2900, 0.9).toFixed(0));
console.log("First CPU Idle time of 3200 score / percentile:", (QUANTILE_AT_VALUE(6500, 2900, 3200).toFixed(3)) * 100);
I am trying to replicate an ESS (effective sample size) calculation using the method of Vehtari et al. in: Rank-normalization, folding, and localization: An improved Rhat for assessing convergence of MCMC
I am working from the code here:
https://github.com/avehtari/rhat_ess/blob/master/code/monitornew.R
# Geyer's initial positive sequence
rho_hat_t <- rep.int(0, n_samples)
t <- 0
rho_hat_even <- 1
rho_hat_t[t + 1] <- rho_hat_even
rho_hat_odd <- 1 - (mean_var - mean(acov[t + 2, ])) / var_plus  # 251
rho_hat_t[t + 2] <- rho_hat_odd
while (t < nrow(acov) - 5 && !is.nan(rho_hat_even + rho_hat_odd) &&
       (rho_hat_even + rho_hat_odd > 0)) {
  t <- t + 2
  rho_hat_even = 1 - (mean_var - mean(acov[t + 1, ])) / var_plus  # 256
  rho_hat_odd = 1 - (mean_var - mean(acov[t + 2, ])) / var_plus   # 257
  if ((rho_hat_even + rho_hat_odd) >= 0) {
    rho_hat_t[t + 1] <- rho_hat_even
    rho_hat_t[t + 2] <- rho_hat_odd
  }
}
I can follow the code from the paper except when we get to equation 10 (calculating the cross-chain autocorrelation). The code (lines 251, 256 and 257) appears in the form:
1 - (mean_var - mean(acov[t + 1, ])) / var_plus
which is close to equation 10 in the paper,
rho_hat_t = 1 - (W - (1/M) * sum_{m=1..M} s_m^2 * rho_hat_{t,m}) / var_plus,
except that the s_m^2 terms are missing.
I can't see anywhere in the code that this is somehow accounted for elsewhere in the way the calculation is being done. I have tried putting the s terms back into those lines of code and it makes a big difference to the final ESS value.
Is anyone able to help me understand the discrepancy between paper and code?
Thanks.
In the formula in the paper, s^2 is the estimate of variance and rho the estimate of autocorrelation. Thus s^2 * rho is an estimate of the autocovariance, which is exactly what acov stores in the code.
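Here is a minimal numpy sketch of that identity (my own illustration, not the monitornew.R code): estimate the variance and the lag-t autocorrelation separately and compare their product with the directly computed autocovariance.

import numpy as np

rng = np.random.default_rng(1)

# Toy AR(1) "chain"
x = np.zeros(50_000)
for i in range(1, x.size):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()

t = 3
xc = x - x.mean()
acov_t = np.mean(xc[:-t] * xc[t:])        # lag-t autocovariance, computed directly
s2 = np.var(x)                            # variance estimate s^2
rho_t = np.corrcoef(x[:-t], x[t:])[0, 1]  # lag-t autocorrelation rho_t
print(acov_t, s2 * rho_t)                 # nearly identical (up to edge effects)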
I am creating a Rails application which is like a game, so it has points and levels. For example: to reach level one the user has to get at least 100 points, and to reach level two the user has to collect 200 points. The point difference between consecutive levels changes after every 10 levels: the difference between levels one and two is 100 points, the difference between levels 11 and 12 is 150, and so on. There is no upper bound for levels.
Now my question: let's say a user's total points are 3150 and just got updated to 3155. What's the optimal solution to find the current level and update it if needed?
I can get a solution using a while loop with another loop inside it, which will give a result in O(n^2). I need something better.
I think this code works, but I'm not sure if this is the best way to go about it:
def get_level(points)
  diff = 100
  sum = 0
  level = -1
  current_level = 0
  while level.negative?
    10.times do
      current_level += 1
      sum += diff
      if points > sum
        next
      elsif points <= sum
        level = current_level
        break
      end
    end
    diff += 50
  end
  puts level
end
I wrote a get_points function (it should not be difficult to follow). Then, based on it, a get_level function, in which it was necessary to solve a quadratic equation to find the high value and then calculate low.
If you have any questions, let me know.
Check the output here.
#!/usr/bin/env python3
import math


def get_points(level):
    high = (level + 1) // 10
    low = (level + 1) % 10
    high_point = 250 * high * high + 750 * high  # (3 + high) * high // 2 * 500
    low_point = (100 + 50 * high) * low
    return low_point + high_point


def get_level(points):
    # quadratic equation
    a = 250
    b = 750
    c = -points
    d = b * b - 4 * a * c
    x = (-b + math.sqrt(d)) / (2 * a)
    high = int(x)
    remainder = points - (250 * high * high + 750 * high)
    low = remainder // (100 + 50 * high)
    level = high * 10 + low
    return level


def main():
    for l in range(0, 40):
        print(f'{l:3d} {get_points(l - 1):5d}..{get_points(l) - 1}')
    for level, (l, r) in (
        (1, (100, 199)),
        (2, (200, 299)),
        (9, (900, 999)),
        (10, (1000, 1149)),
        (11, (1150, 1299)),
        (19, (2350, 2499)),
        (20, (2500, 2699)),
    ):
        for p in range(l, r + 1):  # p in [l, r]
            assert get_level(p) == level, f'{p} {l}'


if __name__ == '__main__':
    main()
Why did you set the values a = 250 and b = 750? Can you explain that to me, please?
Let's write out every 10th level and the difference in points:
lvl - pnt (+delta)
10 - 1000 (+1000 = +100 * 10)
20 - 2500 (+1500 = +150 * 10)
30 - 4500 (+2000 = +200 * 10)
40 - 7000 (+2500 = +250 * 10)
Divide by 500 (10 levels * the 50-point change in difference) and we get an arithmetic progression starting at 2:
10 - 2 (+2)
20 - 5 (+3)
30 - 9 (+4)
40 - 14 (+5)
Using the arithmetic progression sum formula, the points for level = k * 10 equal:
sum(x for x in 2..k+1) * 500 =
(2 + k + 1) * k / 2 * 500 =
(3 + k) * k * 250 =
250 * k * k + 750 * k
Now we have points and want to find the maximum high such that points >= 250 * high^2 + 750 * high, i.e. 250 * high^2 + 750 * high - points <= 0. Since a = 250 is positive, the branches of the parabola are directed up, so we take the positive root of the quadratic equation 250 * high^2 + 750 * high - points = 0 and discard the fractional part (high = int(x) in the Python script).
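Applied to the question's example of 3150 points: d = 750^2 + 4 * 250 * 3150 = 3712500, x = (-750 + sqrt(3712500)) / 500 ≈ 2.35, so high = 2; the remainder is 3150 - (250 * 4 + 750 * 2) = 650, low = 650 // 200 = 3, and get_level(3150) returns level 23 (which spans 3100..3299 points).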
I'm looking for an algorithm that returns 5 steps between 2 given dynamic values, including both starting values, with exponential growth. The returned values should be nicely rounded and unique.
Example:
range 100 - 10000 should return something like this:
100, 500, 2500, 5000, 10000
This is what I came up with so far (credit goes mostly to an SO thread I once found but can't recover):
min = 100
max = 10000
a = Array.new
loops = 5
factor = 2.5

for i in 0..loops - 1
  x = (max - min) * ((i.to_f / (loops.to_f - 1.0)) ** factor) + min
  case x
  when min
    a[i] = x.to_i
  when max
    a[i] = x.to_i
  when (min + 1).to_f..500
    a[i] = (x.to_f / 250).round(0) * 250
  when 500..2000
    a[i] = (x.to_f / 500).round(0) * 500
  else
    a[i] = (x.to_f / 2500).round(0) * 2500
  end
end
The result is adjustable with the factor; I found 2.5 to work best. This already works quite well in most cases. Before rounding I get these values:
[100.0, 409.37, 1850.08, 4922.67, 10000.0]
But it does not check for duplicates that can occur in the rounding process, which happens mostly when the range is smaller:
100 - 1000
Raw: [100.0, 128.12, 259.09, 538.42, 1000.0]
Rounded: [100, 250, 250, 500, 1000]
5000 - 10000
Raw: [5000.0, 5156.25, 5883.88, 7435.69, 10000.0]
Rounded: [5000, 5000, 5000, 7500, 10000]
Now I'm a little torn between discarding the whole code and trying to come up with a smarter calculation method that already includes rounding, or just checking for duplicates in a second pass - but I didn't get a satisfying result from either option.
Does anyone have an idea how to integrate a duplicate check into the rounding, or how to make the rounding more dynamic?
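For what it's worth, one possible shape for the second option (a duplicate check after rounding) is to walk the rounded list and bump any value that collides with its predecessor up by one rounding step. A minimal Python sketch of that idea (my own, reusing the rounding steps from the Ruby code above, not a tested drop-in):

def step_for(v):
    # Same granularity as the Ruby case statement above.
    if v <= 500:
        return 250
    if v <= 2000:
        return 500
    return 2500

def dedupe(rounded):
    out = []
    for v in rounded:
        while out and v <= out[-1]:
            v += step_for(v)  # bump duplicates up by one rounding step
        out.append(v)
    return out

print(dedupe([100, 250, 250, 500, 1000]))  # [100, 250, 500, 750, 1000]

Note that for tight ranges (like the 5000 - 10000 example) this can push values past max, so a finer rounding step would be needed there.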