I have a MATLAB curve from which I would like to extract concentration values at 17 different time samples.
The time points, in minutes, are:
t = 0, 0.25, 0.50, 1, 1.5, 2, 3, 4, 9, 14, 19, 24, 29, 34, 39, 44, 49
Following is the function I have written to plot the curve:
function c_t = output_function_constrainedK2(t, a1, a2, a3, b1, b2, b3, td, tmax, k1, k2, k3)
    % Macro parameters derived from the microparameters
    K_1 = (k1*k2)/(k2+k3);
    K_2 = (k1*k3)/(k2+k3);
    DV_free = k1/(k2+k3);

    c_t = zeros(size(t));

    % Rising segment: td < t < tmax
    ind = (t > td) & (t < tmax);
    c_t(ind) = conv(((t(ind) - td) ./ (tmax - td) * (a1 + a2 + a3)), (K_1*exp(-(k2+k3)*t(ind)+K_2)), 'same');

    % Decaying segment: t >= tmax
    ind = (t >= tmax);
    c_t(ind) = conv((a1 * exp(-b1 * (t(ind) - tmax)) + a2 * exp(-b2 * (t(ind) - tmax)) + a3 * exp(-b3 * (t(ind) - tmax))), (K_1*exp(-(k2+k3)*t(ind)+K_2)), 'same');

    plot(t, c_t);
    axis([0 50 0 1400]);
    xlabel('Time [mins]');
    ylabel('Concentration [MBq]');
    title('Model: Constrained K2');
end
If possible, kindly suggest how I could alter the above function so that it returns the concentration values at the 17 time points stated above.
Following is the call with the input values that I used to produce the curve:
output_function_constrainedK2(0:0.1:50,2501,18500,65000,0.5,0.7,0.3,3,8,0.014,0.051,0.07)
This will give you the concentration values at the time points you wanted. You will have to put it inside the output_function_constrainedK2 function so that it can access the variables t and c_t:
T = [0 0.25 0.50 1 1.5 2 3 4 9 14 19 24 29 34 39 44 49];
concentration = interp1(t, c_t, T)
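For reference, here is the same resampling idea as a quick Python/NumPy sketch (my own illustration; the dummy exponential is a stand-in for the model output, not the actual curve):

import numpy as np

# dense evaluation grid, matching the 0:0.1:50 grid in the MATLAB call
t = np.arange(0, 50.1, 0.1)
# stand-in for c_t; in practice this is the curve produced by the model
c_t = 1400 * np.exp(-0.05 * t)

# the 17 requested sample times, in minutes
T = [0, 0.25, 0.50, 1, 1.5, 2, 3, 4, 9, 14, 19, 24, 29, 34, 39, 44, 49]

# linear interpolation, the equivalent of MATLAB's interp1(t, c_t, T)
concentration = np.interp(T, t, c_t)
print(concentration)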
I'm trying to set a lower bound of zero on my InfluxDB query result so that negative values are replaced with zero in the result. e.g. for the query:
SELECT x from measurement
If my raw response is
time x
---- -
1632972969471900180 0
1632972969471988621 -130
1632972969472238055 803
then I want to alter the query so that the result is:
time x'
---- -
1632972969471900180 0
1632972969471988621 0
1632972969472238055 803
My solution was to use the ABS absolute value function: add the absolute value to the original value and divide by 2. This maps negative values to zero and leaves positive (and zero) values unchanged. e.g.
SELECT (x + ABS(x)) / 2 from measurement
time                 x     x'
----                 -     --
1632972969471900180  0     (0 + 0) / 2       = 0
1632972969471988621  -130  (-130 + 130) / 2  = 0
1632972969472238055  803   (803 + 803) / 2   = 803
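As a quick sanity check of the identity max(x, 0) = (x + |x|) / 2 outside the database (a Python sketch, not InfluxQL):

# max(x, 0) == (x + abs(x)) / 2 for any real x
def clamp_to_zero(x):
    return (x + abs(x)) / 2

for x in (0, -130, 803):
    print(x, '->', clamp_to_zero(x))  # 0 -> 0.0, -130 -> 0.0, 803 -> 803.0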
We use Google PageSpeed Insights as a marketing tool to compare the download speed of the websites we build with those our competitors build. But so many mobile sites are rated in the 30s, and I wondered whether that is the average mobile rating. Does anyone know? Thx
Short Answer
The average mobile rating is 31.
Long Answer
An article I found after writing the below answers the question
This article from tunetheweb has actually done the hard work for us here and gathered the data from httparchive. (Give the article a read, it has a wealth of interesting information!)
The table below, taken from that article, covers your question (the answer is 31 for the Performance metric at the 50th percentile).
Percentile  Performance  Accessibility  Best Practices  SEO  PWA
10          8            56             64              69   14
25          16           69             64              80   25
50          31           80             71              86   29
75          55           88             79              92   36
90          80           95             86              99   54
95          93           97             93              100  54
99          99           100            93              100  64
I have left the below in as the information may be useful to somebody, but the above answers the question much better. At least my guess of 35 wasn't a million miles away from the actual answer. hehe.
My Original Answer
You would imagine that a score of 50 would be the average, right? Nope!
Lighthouse uses a log-normal curve to dictate scores.
The two key control points on that curve are the 25th percentile for the median (a score of 50 means you are in the top 25% effectively) and the 8th percentile for a score of 90.
The numbers used to determine these points are derived from http archive data.
You can explore the curve used for Time To Interactive scoring here as an example.
Now I am sure someone who is a lot better at maths than I am can use that data to calculate the average score for a site, but I would estimate it to be around 35 for a mobile site, which is pretty close to what you have observed.
One thing I can do is show how the scoring works based on those control points, so you can see the various cutoff points for each metric.
The below is taken from the maths module at https://github.com/paulirish/lh-scorecalc/tree/190bed715a3589601f314b3c8a50fb0fb147c121
I have also included the median and falloff values currently used in this calculation in the scoring variable.
To play with it, use the VALUE_AT_QUANTILE function to get the value you need to achieve a certain percentile. For example, to see the value for the 90th percentile for Time To Interactive, use VALUE_AT_QUANTILE(7300, 2900, 0.9): take the median (7300) and falloff (2900) from TTI in the scoring variable, and enter the desired percentile as a decimal (90 -> 0.9).
Similarly, the QUANTILE_AT_VALUE function does the reverse (shows the percentile that a particular value falls at). E.g. to see what percentile a First CPU Idle time of 3200 gets you, use QUANTILE_AT_VALUE(6500, 2900, 3200).
Anyway, I have gone a bit off on a tangent, but hopefully the above and below will give someone cleverer than me the info needed to work it out (I have also included the weightings for each item in the weights variable).
const scoring = {
  FCP: {median: 4000, falloff: 2000, name: 'First Contentful Paint'},
  FMP: {median: 4000, falloff: 2000, name: 'First Meaningful Paint'},
  SI: {median: 5800, falloff: 2900, name: 'Speed Index'},
  TTI: {median: 7300, falloff: 2900, name: 'Time to Interactive'},
  FCI: {median: 6500, falloff: 2900, name: 'First CPU Idle'},
  TBT: {median: 600, falloff: 200, name: 'Total Blocking Time'}, // mostly uncalibrated
  LCP: {median: 4000, falloff: 2000, name: 'Largest Contentful Paint'},
  CLS: {median: 0.25, falloff: 0.054, name: 'Cumulative Layout Shift', units: 'unitless'},
};

const weights = {
  FCP: 0.15,
  SI: 0.15,
  LCP: 0.25,
  TTI: 0.15,
  TBT: 0.25,
  CLS: 0.05
};

function internalErf_(x) {
  // erf(-x) = -erf(x);
  var sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  var a1 = 0.254829592;
  var a2 = -0.284496736;
  var a3 = 1.421413741;
  var a4 = -1.453152027;
  var a5 = 1.061405429;
  var p = 0.3275911;
  var t = 1 / (1 + p * x);
  var y = t * (a1 + t * (a2 + t * (a3 + t * (a4 + t * a5))));
  return sign * (1 - y * Math.exp(-x * x));
}

function internalErfInv_(x) {
  // erfinv(-x) = -erfinv(x);
  var sign = x < 0 ? -1 : 1;
  var a = 0.147;
  var log1x = Math.log(1 - x * x);
  var p1 = 2 / (Math.PI * a) + log1x / 2;
  var sqrtP1Log = Math.sqrt(p1 * p1 - (log1x / a));
  return sign * Math.sqrt(sqrtP1Log - p1);
}

function VALUE_AT_QUANTILE(median, falloff, quantile) {
  var location = Math.log(median);
  var logRatio = Math.log(falloff / median);
  var shape = Math.sqrt(1 - 3 * logRatio - Math.sqrt((logRatio - 3) * (logRatio - 3) - 8)) / 2;
  return Math.exp(location + shape * Math.SQRT2 * internalErfInv_(1 - 2 * quantile));
}

function QUANTILE_AT_VALUE(median, falloff, value) {
  var location = Math.log(median);
  var logRatio = Math.log(falloff / median);
  var shape = Math.sqrt(1 - 3 * logRatio - Math.sqrt((logRatio - 3) * (logRatio - 3) - 8)) / 2;
  var standardizedX = (Math.log(value) - location) / (Math.SQRT2 * shape);
  return (1 - internalErf_(standardizedX)) / 2;
}

console.log("Time To Interactive (TTI) 90th Percentile Time:", VALUE_AT_QUANTILE(7300, 2900, 0.9).toFixed(0));
console.log("First CPU Idle time of 3200 score / percentile:", (QUANTILE_AT_VALUE(6500, 2900, 3200).toFixed(3)) * 100);
I am creating a Rails application which is like a game, so it has points and levels. For example: to reach level one the user has to collect at least 100 points, and to reach level two the user has to collect 200 points. The difference in points between consecutive levels increases every 10 levels: the difference between levels one and two is 100 points, the difference between levels 11 and 12 is 150 points, and so on. There is no upper bound for levels.
Now my question is: let's say a user's total points was 3150 and just got updated to 3155. What's the optimal way to find the current level and update it if needed?
I can get a solution using a while loop with another loop inside it, which runs in O(n^2). I need something better.
I think this code works but I'm not sure if this is the best way to go about it
def get_level(points)
  diff = 100
  sum = 0
  level = -1
  current_level = 0
  while level.negative?
    10.times do
      current_level += 1
      sum += diff
      if points > sum
        next
      elsif points <= sum
        level = current_level
        break
      end
    end
    diff += 50
  end
  puts level
end
I wrote a get_points function (it is not difficult), and based on it a get_level function, in which it is necessary to solve a quadratic equation to find high and then compute low.
If you have any questions, let me know.
Check output here.
#!/usr/bin/env python3
import math

def get_points(level):
    high = (level + 1) // 10
    low = (level + 1) % 10
    high_point = 250 * high * high + 750 * high  # (3 + high) * high // 2 * 500
    low_point = (100 + 50 * high) * low
    return low_point + high_point

def get_level(points):
    # quadratic equation
    a = 250
    b = 750
    c = -points
    d = b * b - 4 * a * c
    x = (-b + math.sqrt(d)) / (2 * a)
    high = int(x)
    remainder = points - (250 * high * high + 750 * high)
    low = remainder // (100 + 50 * high)
    level = high * 10 + low
    return level

def main():
    for l in range(0, 40):
        print(f'{l:3d} {get_points(l - 1):5d}..{get_points(l) - 1}')
    for level, (l, r) in (
        (1, (100, 199)),
        (2, (200, 299)),
        (9, (900, 999)),
        (10, (1000, 1149)),
        (11, (1150, 1299)),
        (19, (2350, 2499)),
        (20, (2500, 2699)),
    ):
        for p in range(l, r + 1):  # p in [l, r]
            assert get_level(p) == level, f'{p} {l}'

if __name__ == '__main__':
    main()
Why did you set the values a = 250 and b = 750? Can you explain that to me please?
Let's write out every 10th level and the difference in points:
lvl - pnt  (+delta)
10  - 1000 (+1000 = +100 * 10)
20  - 2500 (+1500 = +150 * 10)
30  - 4500 (+2000 = +200 * 10)
40  - 7000 (+2500 = +250 * 10)
Divide by 500 (10 levels * the 50-point difference step) and we get an arithmetic progression starting at 2:
10 - 2  (+2)
20 - 5  (+3)
30 - 9  (+4)
40 - 14 (+5)
Using the arithmetic progression sum formula, the points for level = k * 10 equal:
sum(x for x in 2..k+1) * 500 =
(2 + k + 1) * k / 2 * 500 =
(3 + k) * k * 250 =
250 * k * k + 750 * k
Now we have points and want to find the maximum high such that points >= 250 * high^2 + 750 * high, i.e. 250 * high^2 + 750 * high - points <= 0. The coefficient a = 250 is positive, so the branches of the parabola point up. We solve the quadratic equation 250 * high^2 + 750 * high - points = 0 for its positive root and discard the fractional part (high = int(x) in the Python script).
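To double-check the derivation, here is a small brute-force sketch (my addition; it sums the per-level increments from the question and compares them against the closed form used by get_points above):

def min_points_brute(level):
    # increments: 100 points per level, growing by 50 every 10 levels
    return sum(100 + 50 * ((k - 1) // 10) for k in range(1, level + 1))

# get_points(n - 1) is the minimum number of points required for level n
assert all(min_points_brute(n) == get_points(n - 1) for n in range(1, 200))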
I have the following problem:
Solve the following recurrence relation, simplifying your final answer using 'O' notation.
f(0) = 3
f(1) = 12
f(n) = 6f(n-1) - 9f(n-2)
We know this is a homogeneous 2nd order relation, so we write the characteristic equation a^2 - 6a + 9 = 0, whose solutions are a1 = a2 = 3.
The problem is that when I substitute these values I get:
f(n) = c1*3^n + c2*3^n
and using the two initial conditions I have:
f(0) = c1 + c2 = 3
f(1) = 3(c1 + c2) = 12
which means there are no values of c1 and c2 for which both relations hold.
Am I doing something wrong? Is the way it should be solved different when it comes to identical roots for the characteristic equation?
You can't solve it this way, because your matrix A is not diagonalizable.
However, here is what you get if you use the Jordan normal form instead:
f(n) = 3^{n-1}(3n + 9)
The Jordan matrix and the basis (with notation from Wikipedia + Octave) are:
J := [3,1;0,3]
P := [3,4;1,1]
such that PJP^{-1} = A, where
A := [6,-9;1,0]
is your recurrence matrix. Furthermore, the Jordan matrix is almost as good as a diagonal matrix for computing powers:
J^n = 3^(n-1) * [3,n;0,3].
The recurrence is then:
[f(n+1); f(n)] = A^n [12; 3] = P J^n P^{-1} [12; 3] = (<whatever>; 3^(n-1)*(3n+9)).
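As a side note that answers the "identical roots" part of the question directly (a standard textbook fact, independent of the Jordan-form route): when the characteristic equation has a double root a, the general solution is f(n) = (c1 + c2*n)*a^n rather than c1*a^n + c2*a^n. Fitting the initial conditions:

f(n) = (c1 + c2*n) * 3^n
f(0) = c1 = 3            =>  c1 = 3
f(1) = 3*(c1 + c2) = 12  =>  c2 = 1
f(n) = (3 + n) * 3^n = 3^{n-1}(3n + 9)

which agrees with the closed form above.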
Here is a quick numerical check (in Scala, but you can use whatever you like, e.g. Octave):
scala> def f(n: Int): Int = { if (n == 0) 3 else if (n == 1) 12 else (6 * f(n-1) - 9 * f(n-2)) }
f: (n: Int)Int
scala> for (i <- 0 until 20) println(f(i))
3
12
45
162
567
1944
6561
21870
72171
236196
767637
2480058
7971615
25509168
81310473
258280326
817887699
scala> def explicit(n: Int): Int = (Math.pow(3, n - 1) * (3 * n + 9)).toInt
explicit: (n: Int)Int
scala> for (i <- 0 until 20) println(explicit(i))
3
12
45
162
567
1944
6561
21870
72171
236196
767637
2480058
7971615
25509168
81310473
258280326
817887699
I am trying to implement logistic regression with gradient descent.
I compute my cost function j_theta at each iteration, and fortunately j_theta is decreasing when plotted against the iteration number.
The data set I use is given below:
x=
1 20 30
1 40 60
1 70 30
1 50 50
1 50 40
1 60 40
1 30 40
1 40 50
1 10 20
1 30 40
1 70 70
y= 0
1
1
1
0
1
0
0
0
0
1
The code that I managed to write for logistic regression using Gradient descent is:
%1. Load the data from your desktop into Octave memory
x = load('stud_marks.dat');
%y = load('ex4y.dat');
y = x(:,3);
x = x(:,1:2);

%2. Add a column x0 with value 1 in every row to the matrix.
% First take the size
[m, n] = size(x);
x = [ones(m,1), x];
X = x;

% Feature-scale x1 and x2; skip the first column x0, which must stay 1.
mn = mean(x);
sd = std(x);
x(:,2) = (x(:,2) - mn(2)) ./ sd(2);
x(:,3) = (x(:,3) - mn(3)) ./ sd(3);

% We will not use the vectorized technique because it is hard to debug;
% we use explicit for loops instead.
max_iter = 50;
theta = zeros(size(x(1,:)))';
j_theta = zeros(max_iter, 1);

for num_iter = 1:max_iter
    % Calculate the cost function
    j_cost_each = 0;
    alpha = 1;
    theta  % debug echo of the current parameters
    for i = 1:m
        z = 0;
        for j = 1:n+1
            z = z + (theta(j) * x(i,j));
        end
        h = 1.0 ./ (1.0 + exp(-z));
        j_cost_each = j_cost_each + ((-y(i) * log(h)) - ((1 - y(i)) * log(1 - h)));
    end
    j_theta(num_iter) = (1/m) * j_cost_each;

    % Gradient step for each parameter
    for j = 1:n+1
        grad(j) = 0;
        for i = 1:m
            z = (x(i,:) * theta);
            h = 1.0 ./ (1.0 + exp(-z));
            grad(j) += (h - y(i)) * x(i,j);
        end
        grad(j) = grad(j) / m;
        theta(j) = theta(j) - alpha * grad(j);
    end
end

figure
plot(1:max_iter, j_theta, 'b', 'LineWidth', 2)
hold off

figure
%3. Plot the input data set to see the distribution of the two classes.
pos = find(y == 1); % positions in y of all examples with class 1
neg = find(y == 0); % positions in y of all examples with class 0
% Plot column x1 vs x2 for y = 1 and y = 0
plot(x(pos,2), x(pos,3), '+');
hold on
plot(x(neg,2), x(neg,3), 'o');
xlabel('x1 marks in subject 1')
ylabel('x2 marks in subject 2')
legend('Passed', 'Failed')

plot_x = [min(x(:,2)) - 2, max(x(:,2)) + 2]; % the min and max decide the extent of the decision line
% Calculate the decision boundary line
plot_y = (-1 ./ theta(3)) .* (theta(2) .* plot_x + theta(1));
plot(plot_x, plot_y)
hold off
%%%%%%% The only difference is that in the last plot I used X, whereas now I use x, whose features are feature-scaled %%%%%%%%%%%
If you view the graph of x1 vs x2, it looks like the scatter plot below. After I run my code I get a decision boundary; the shape of the decision line seems to be okay, but it is a bit displaced:
[figure: x1 vs x2 scatter with the displaced decision boundary]
Please suggest where I am going wrong ....
Thanks :)
The new graph:
[figure: the same plot with feature-scaled axes]
If you look at the new graph, the coordinates of the x axis have changed. That's because I use x (feature-scaled) instead of X.
The problem lies in your cost function calculation and/or gradient calculation; your plotting function is fine. I ran your dataset through my implementation of logistic regression, but using the vectorized technique, because in my opinion it is easier to debug.
The final values I got for theta were:
theta =
[-76.4242,
   0.8214,
   0.7948]
I also used alpha = 0.3
I plotted the decision boundary and it looks fine. I would recommend using the vectorized form, as it is easier to implement and debug.
I also think your implementation of gradient descent is not quite correct: 50 iterations is just not enough, and the cost at the last iteration is not good enough. Maybe you should try running it for more iterations with a stopping condition.
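For illustration, here is a minimal vectorized sketch in Python/NumPy (my own reconstruction under the same setup with alpha = 0.3 and feature scaling, not the exact code I ran; with scaled features the resulting theta differs from the values above, but the separation it produces is the same):

import numpy as np

# data from the question: two feature columns and the labels y
X = np.array([[20, 30], [40, 60], [70, 30], [50, 50], [50, 40], [60, 40],
              [30, 40], [40, 50], [10, 20], [30, 40], [70, 70]], dtype=float)
y = np.array([0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1], dtype=float)

# feature scaling, then prepend the intercept column x0 = 1
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.hstack([np.ones((len(y), 1)), X])

alpha, max_iter = 0.3, 5000
m = len(y)
theta = np.zeros(X.shape[1])
for _ in range(max_iter):
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # sigmoid(X * theta), all rows at once
    theta -= (alpha / m) * (X.T @ (h - y))  # vectorized gradient step
print(theta)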
Also check this lecture for optimization techniques.
https://class.coursera.org/ml-006/lecture/37