There's this question I'm solving where
f(n) = 2^n
g(n) = 2^(n/2)
f(n) = ?(g(n))
I've found many answers saying Ω and ω.
But shouldn't it be f(n) = Θ(g(n)), since a constant shouldn't affect the growth of the function?
f(n) = Θ(g(n)) if and only if f(n) = Ω(g(n)) and f(n) = O(g(n)), but f(n) = O(g(n)) simply doesn't hold here, as explained below.
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c·g(n) for all n >= n0}
2^n <= c·2^(n/2) would require 2^(n/2) <= c, but 2^(n/2) grows without bound, so no constants c and n0 can make the definition hold.
import numpy as np
import matplotlib.pyplot as plt

n = np.linspace(1, 10)
plt.plot(n, 2 ** n, label="2 ** n")
plt.plot(n, 2 ** (n/2), label="2 ** (n/2)")
plt.legend(loc='upper left')
plt.show()
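Besides the plot, a quick numeric check makes the same point (a small illustrative sketch, not from CLRS): the ratio f(n)/g(n) = 2^(n/2) is itself unbounded, so no constant c can ever dominate it.

for n in (2, 10, 20, 40, 60):
    ratio = 2 ** n / 2 ** (n / 2)   # equals 2**(n/2)
    print(n, ratio)
# the printed ratios keep growing without bound, so 2^n is not O(2^(n/2))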
Reference: CLRS
I have a convex optimization problem I am trying to solve with cvxpy. Given a 1 x n row vector y and an m x n matrix C, I want to find a scalar b and a 1 x m row vector a such that the sum of squares of y - (aC + b(aC # aC)) is as small as possible (the # denotes element-wise multiplication). In addition, all entries in a must be nonnegative and sum to 1, and -100 <= b <= 100. Below is my attempt to solve this using cvxpy.
import numpy as np
import cvxpy as cvx
def find_a(y, C, b_min=-100, b_max=100):
    b = cvx.Variable()
    a = cvx.Variable((1, C.shape[0]))
    aC = a * C  # this should be matrix multiplication
    x = aC + cvx.multiply(b, cvx.square(aC))
    objective = cvx.Minimize(cvx.sum_squares(y - x))
    constraints = [0. <= a,
                   a <= 1.,
                   b_min <= b,
                   b <= b_max,
                   cvx.sum(a) == 1.]
    prob = cvx.Problem(objective, constraints)
    result = prob.solve()
    print(a.value)
    print(result)
y = np.asarray([[0.10394265, 0.25867508, 0.31258457, 0.36452763, 0.36608997]])
C = np.asarray([
[0., 0.00169811, 0.01679245, 0.04075472, 0.03773585],
[0., 0.00892802, 0.03154158, 0.06091544, 0.07315024],
[0., 0.00962264, 0.03245283, 0.06245283, 0.07283019],
[0.04396226, 0.05245283, 0.12245283, 0.18358491, 0.23886792]])
find_a(y, C)
I keep getting DCPError: Problem does not follow DCP rules when I try to solve for a. I am thinking that either my function is not really convex, or I do not understand how to construct a proper cvxpy Problem. Any help would be greatly appreciated.
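For what it's worth, one way to narrow this kind of error down (a small illustrative sketch, not part of the original post; it uses placeholder data just to build the expressions) is to ask cvxpy which sub-expression violates the DCP rules via is_dcp(). Here the culprit is the product of the variable b with the convex term cvx.square(aC):

import cvxpy as cvx
import numpy as np

C = np.ones((4, 5))                 # placeholder data, only used to build expressions
b = cvx.Variable()
a = cvx.Variable((1, C.shape[0]))

aC = a @ C                           # matrix multiplication
term = cvx.multiply(b, cvx.square(aC))

print(cvx.square(aC).is_dcp())       # True: square of an affine expression is convex
print(term.is_dcp())                 # False: a variable times a convex expression has unknown curvature

Because b multiplies a convex function of a, the objective has unknown curvature, which is why prob.solve() raises the DCPError.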
I am trying to write the recurrence relation for the running time of the following function:
function G(n):
    if n > 0 then:
        x = 0
        for i = 1 to n:
            x = x + 1
        G(n-1)
    end if
What I came up with was:
T(n) = 1 if n <= 0
T(n) = T(n-1) + 1 if n>0
However I was told that this was incorrect and I don't know why or what the correct solution would be. Any help is greatly appreciated!
T(n) = 1 if n <= 0
T(n) = T(n-1) + O(n) if n > 0
Instead of O(1), the added term should be O(n), because you are looping from 1 to n.
If you solve the recurrence, the overall complexity will be O(n²).
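To see the quadratic growth concretely, here is a small sketch (illustrative only, not part of the original answer) that counts how many times the statement x = x + 1 runs for a given n:

def count_ops(n):
    # mirrors G(n): the for-loop runs n times, then G(n-1) is called
    if n <= 0:
        return 0
    return n + count_ops(n - 1)

for n in (10, 100, 500):
    print(n, count_ops(n))   # equals n*(n+1)/2, i.e. Theta(n^2)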
Sorry, I have tried a lot to solve this recurrence equation
T(n) = 3T(n/3) + Θ(log₃ n)
with the substitution method, but I cannot get the required result:
1) T(n) = O(n log n)
2) Induction
Base: for n = 1 -> 1·log 1 + 1 = 1 = T(1)
Inductive step: T(k) = k log k + k for each k < n
Use k = n/3:
T(n) = 3T(n/3) + Θ(log₃ n)
     = 3[(n/3) log(n/3) + n/3] + log₃ n
     = n log(n/3) + n + log₃ n
     = n(log n - log 3) + n + log₃ n
     = n log n - n log 3 + n + log₃ n
Firstly, we can (eventually) ignore the base 3 in the Theta-notation, as it amounts to a multiplicative factor and is therefore irrelevant. Then we can try the following method:
1. Hypothesis by inspection:
If we re-substitute T into itself multiple times, we get:
T(n) = 3T(n/3) + log n
     = 3[3T(n/9) + log(n/3)] + log n
     = ...
     = 3^m T(n/3^m) + sum_{k=0}^{m-1} 3^k log(n/3^k)
What is the upper limit m? We need to assume that T(n) has a stopping condition, i.e. some value of n where it stops recursing. Assuming that it is n = 1 (it really doesn't matter, as long as it's a constant much smaller than n), we get m = log₃ n. Continuing (and briefly restoring the base 3):
T(n) = n T(1) + sum_{k=0}^{m-1} 3^k log₃(n/3^k)
     = n T(1) + sum_{k=0}^{m-1} 3^k (m - k)
     = n T(1) + (3n - 2 log₃ n - 3) / 4
Surprisingly, the answer is not Θ(n log n) but Θ(n).
2. Induction base case
We don't use induction to prove the final result, but the series result we deduced by inspecting the behaviour of the expansion.
For the base case n/3 = 1, i.e. n = 3, we have:
T(3) = 3T(1) + log₃ 3 = 3T(1) + 1
while the formula gives 3T(1) + (3·3 - 2·1 - 3)/4 = 3T(1) + 1.
Which is consistent.
3. Induction recurrence
Assume the formula holds for n/3; substituting it into the recurrence:
T(n) = 3T(n/3) + log₃ n
     = 3[(n/3) T(1) + (3(n/3) - 2 log₃(n/3) - 3)/4] + log₃ n
     = n T(1) + (3n - 6(log₃ n - 1) - 9)/4 + log₃ n
     = n T(1) + (3n - 6 log₃ n - 3)/4 + log₃ n
     = n T(1) + (3n - 2 log₃ n - 3)/4
Again, consistent. Thus by induction the summation result is correct, and T(n) is indeed Θ(n).
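Before the numerical tests, here is a quick programmatic sanity check of the closed form (illustrative only: it assumes T(1) = 0, a base-3 log term, and n restricted to powers of 3 so the division is exact):

import math

def T(n):
    # the recurrence with T(1) = 0 and a base-3 log term
    return 0 if n <= 1 else 3 * T(n // 3) + math.log(n, 3)

def closed_form(n):
    # the summation result n*T(1) + (3n - 2*log3(n) - 3)/4, with T(1) = 0
    return (3 * n - 2 * math.log(n, 3) - 3) / 4

for n in (3, 9, 27, 81, 243):
    print(n, T(n), closed_form(n))   # the two columns agree (up to float noise)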
4. Numerical tests:
Just in case you still cannot believe that it is Ө(n), here is a numerical test to prove the result.
JavaScript code:
function T(n) {
    return n <= 1 ? 0 : 3 * T(Math.floor(n / 3)) + Math.log(n);
}
Results:
n T(n)
--------------------------
10 5.598421959
100 66.33828212
1000 702.3597066
10000 6450.185742
100000 63745.45154
1000000 580674.1886
10000000 8924162.276
100000000 81068207.64
Graph: plotting T(n) against n, the linear relationship is clear.
For the A* search algorithm, suppose we are given a heuristic h, and suppose h is admissible.
That is:
h(n) ≤ h*(n) for every node n, where h* is the real cost from n to the goal.
Does this ensure the heuristic is monotone?
That is:
f(n) ≤ g(n') + h(n') for every successor n' of n, where f(n) = h(n) + g(n) and g(n) is the accumulated cost.
No.
Assume you have three successor states s1, s2, s3 and a goal state g so that s1 -> s2 -> s3 -> g.
s1 is the starting node.
Consider also the following values for h(s) and h*(s) (i.e. true cost):
h(s1) = 6 , h*(s1) = 6
h(s2) = 4 , h*(s2) = 5
h(s3) = 1 , h*(s3) = 3
h(g) = 0 , h*(g) = 0
Following the only path to the goal we can have that:
g(s1) = 0, g(s2) = 1, g(s3) = 3, g(g) = 6, coinciding with the true cost above.
Although the heuristic function is admissible (h(s) <= h*(s) for every s), f(n) is not monotonic: f(s1) = h(s1) + g(s1) = 6 while f(s2) = h(s2) + g(s2) = 5, so f decreases from s1 to s2. The same holds between f(s2) and f(s3), since f(s3) = h(s3) + g(s3) = 4. Equivalently, h(s1) > c(s1, s2) + h(s2), which violates the consistency condition.
Of course, a heuristic whose estimates drop off this sharply along the path is not very informative for A*.
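Here is a small illustrative sketch, using the numbers above, that checks both properties programmatically:

# path s1 -> s2 -> s3 -> g, with accumulated costs g_cost and heuristic h
h      = {"s1": 6, "s2": 4, "s3": 1, "g": 0}
h_star = {"s1": 6, "s2": 5, "s3": 3, "g": 0}   # true cost to the goal
g_cost = {"s1": 0, "s2": 1, "s3": 3, "g": 6}
path   = ["s1", "s2", "s3", "g"]

admissible = all(h[s] <= h_star[s] for s in path)
f = {s: g_cost[s] + h[s] for s in path}
monotone = all(f[a] <= f[b] for a, b in zip(path, path[1:]))

print(admissible)   # True  -> h never overestimates
print(f)            # {'s1': 6, 's2': 5, 's3': 4, 'g': 6}
print(monotone)     # False -> f decreases along the path, so the heuristic is not monotone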
I am working on a client's site, writing an amortization schedule calculator in Ruby on Rails. For longer loan terms, the calculation doesn't seem to break out of the loop when the balance reaches 0.
Here is my code:
def calculate_amortization_results
  p = params[:price].to_i
  i = params[:rate].to_d
  l = params[:term].to_i
  j = i / (12 * 100)
  n = l * 12
  m = p * (j / (1 - (1 + j) ** (-1 * n)))
  @loanAmount = p
  @rateAmount = i
  @monthlyAmount = m
  @amort = []
  @interestAmount = 0
  while p > 0
    line = Hash.new
    h = p * j
    c = m - h
    p = p - c
    line["interest"] = h
    line["principal"] = c
    if p <= 0
      line["balance"] = 0
    else
      line["balance"] = p
    end
    line["payment"] = h + c
    @amort.push(line)
    @interestAmount += h
  end
end
And here is the view:
- @amort.each_with_index do |a, i|
  %li
    .m
      = i + 1
    .i
      = number_to_currency(a["interest"], :unit => "$")
    .p
      = number_to_currency(a["principal"], :unit => "$")
    .pp
      = number_to_currency(a["payment"], :unit => "$")
    .b
      = number_to_currency(a["balance"], :unit => "$")
What I am seeing is, in place of $0.00 in the final payment balance, it shows "-$-inf", iterates one more loop, then displays $0.00 but shows "-$-inf" for interest. It should loop until p gets to 0, then stop and set the balance to 0, but it doesn't. Any idea what I've done wrong?
The calculator is here. It seems to work fine for shorter terms, like 5 years, but longer terms cause the above error.
Edit:
Changing the while loop to n.times do and changing the balance view to
= number_to_currency(a["balance"], :unit => "$", :negative_format => "$0.00")
is a workaround, but I'd like to know why the while loop doesn't work correctly.
In Ruby, the default for numerical values is Fixnum, e.g.:
> 15 / 4
=> 3
You will see weird rounding errors if you use Fixnum values and divide them.
To make sure that you use Floats, at least one of the numbers in the calculation needs to be a Float:
> 15.0 / 4
=> 3.75
> 15 / 4.0
=> 3.75
You do two comparisons against 0, which should be OK if you make sure that p is a Float.
As the other answer suggests, you should use a "decimal" type in your database to represent currency.
Please try whether this works:
def calculate_amortization_results
  p = params[:price].to_f  # instead of to_i
  i = params[:rate].to_f   # <-- what is to_d ? use to_f
  l = params[:term].to_i
  j = i / (12 * 100.0)     # instead of 100
  n = l * 12
  m = p * (j / (1 - (1 + j) ** (-1 * n)))  # division by zero if i == 0 ==> j == 0
  @loanAmount = p
  @rateAmount = i
  @monthlyAmount = m
  @amort = []
  @interestAmount = 0.0    # instead of 0
  while p > 0
    line = Hash.new
    h = p * j
    c = m - h
    p = p - c
    line["interest"] = h
    line["principal"] = c
    if p <= 0
      line["balance"] = 0
    else
      line["balance"] = p
    end
    line["payment"] = h + c
    @amort.push(line)
    @interestAmount += h
  end
end
If you see "inf" in your output, you are doing a division by zero somewhere. Better check the logic of your calculation, and guard against division by zero.
According to Wikipedia, the formula is:
http://en.wikipedia.org/wiki/Amortization_calculator
To reduce rounding errors, it's probably better to restructure the formula like this:
m = (p * j) / (1 - (1 + j) ** (-1 * n))   # these are two divisions! x**-1 == 1/x
which is equal to:
m = (p * j) + (p * j) / ((1 + j) ** n - 1.0)
which is equal to (use this one):
q = p * j   # this is much larger than 1, so fewer rounding errors when dividing it by something
m = q + q / ((1 + j) ** n - 1.0)          # only one division
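As a quick numerical illustration of the restructuring (a Python sketch with made-up example numbers: a 100,000 loan at 5% over 30 years), both forms give the same monthly payment:

p = 100_000.0                  # principal (example value)
annual_rate = 5.0              # percent (example value)
years = 30

j = annual_rate / (12 * 100)   # monthly interest rate
n = years * 12                 # number of payments

m1 = (p * j) / (1 - (1 + j) ** (-n))   # original form: two divisions
q = p * j
m2 = q + q / ((1 + j) ** n - 1.0)      # restructured form: only one division

print(m1)   # ~536.82
print(m2)   # same value: the two forms agree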
I think it has something to do with floating-point precision. It has already been discussed here: Ruby number precision with simple arithmetic, and it would be better to use a decimal format for financial purposes.
One answer could be to keep computing the numbers in the loop, but with a precomputed number of iterations and from scratch.
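A minimal sketch of that idea in Python (illustrative only, not the poster's Rails code): a fixed, precomputed number of iterations instead of a while loop, decimal arithmetic for the currency values, and a final payment that clears whatever balance remains:

from decimal import Decimal, ROUND_HALF_UP

def amortization_schedule(price, annual_rate_percent, years):
    cents = Decimal("0.01")
    p = Decimal(price)
    j = Decimal(annual_rate_percent) / (12 * 100)    # monthly rate
    n = years * 12                                   # precomputed iteration count
    m = p * (j / (1 - (1 + j) ** (-n)))              # monthly payment

    schedule = []
    for k in range(n):                               # fixed number of iterations
        interest = (p * j).quantize(cents, ROUND_HALF_UP)
        if k == n - 1:
            principal = p                            # final payment clears the remaining balance
        else:
            principal = (m - interest).quantize(cents, ROUND_HALF_UP)
        p -= principal
        schedule.append({"interest": interest,
                         "principal": principal,
                         "payment": interest + principal,
                         "balance": p})
    return schedule

rows = amortization_schedule(100_000, "5.0", 30)
print(rows[0])    # first month
print(rows[-1])   # last month: the balance is exactly 0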