Recurrence Relation tree method - recurrence

I am currently having issues with figuring out some recurrence material, and since I have midterms about it coming up soon I could really use some help and maybe an explanation of how it works.
So I basically have pseudocode for solving the Tower of Hanoi:
TOWER_OF_HANOI(n, FirstRod, SecondRod, ThirdRod)
    if n == 1
        move disk from FirstRod to ThirdRod
    else
        TOWER_OF_HANOI(n-1, FirstRod, ThirdRod, SecondRod)
        move disk from FirstRod to ThirdRod
        TOWER_OF_HANOI(n-1, SecondRod, FirstRod, ThirdRod)
And provided I understand how to write the relation (which, honestly, I'm not sure I do...) it should be T(n) = 2T(n-1) + Θ(n), right? I sort of understand how to make a tree with fractional subproblems, but even then I don't fully understand the process that would give you the end solution of Θ(n) or Θ(n log n) or whatnot.
Thanks for any help, it would be greatly appreciated.

Assuming the time complexity is T(n), it is supposed to be: T(n) = T(n-1) + T(n-1) + 1 = 2T(n-1) + 1. Why "+1" and not "+n"? Because "move disk from FirstRod to ThirdRod" costs you only one move.
For T(n) = 2T(n-1) + 1, its recursion tree will look exactly like this:
https://www.quora.com/What-is-the-complexity-of-T-n-2T-n-1-+-C (You might find it helpful, the image is neat.) C is a constant; it means the cost per operation. In the case of Tower of Hanoi, C = 1.
Calculate the sum of the costs at each level and you will easily find that, in this case, the total cost is 2^n - 1, which is exponential (expensive). Therefore, the solution of this recurrence is Θ(2^n).
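If it helps to see the numbers, here is a minimal Python sketch (my illustration, not part of the original answer) that evaluates the recurrence directly and checks it against the closed form 2^n - 1:

# Evaluate T(n) = 2*T(n-1) + 1 with T(1) = 1 and compare with 2^n - 1.
def hanoi_cost(n):
    if n == 1:
        return 1                        # base case: a single disk takes one move
    return 2 * hanoi_cost(n - 1) + 1    # two recursive transfers plus one move

for n in range(1, 11):
    assert hanoi_cost(n) == 2**n - 1
    print(n, hanoi_cost(n))             # prints 1, 3, 7, 15, ...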

Related

Is there a faster way to find primes in Lua?

I am working on Project Euler, and my code is just taking way too long to compute. I am supposed to find the sum of all primes less than 2,000,000, but my program would take years to complete. I would try some different ways to find primes, but the problem is that I only know one way.
Anyways, here is my code:
sum = 2
flag = 0
prime = 3
while prime < 2000000 do
    for i = 2, prime - 1 do
        if prime % i == 0 then
            flag = 1
        end
    end
    if flag == 0 then
        print(prime)
        sum = sum + prime
    end
    prime = prime + 1
    flag = 0
    if prime == 2000000 then
        print(sum)
    end
end
Does anyone know of any more ways to find primes, or even a way to optimize this? I always try to figure coding out myself, but this one is truly stumping me.
Anyways, thanks!
This code is based on the Sieve of Eratosthenes.
Whenever a prime is found, its multiples are marked as non-prime; the remaining integers are primes.
nonprimes = {}
max = 2000000
sum = 2
prime = 3
while prime < max do
    if not nonprimes[prime] then
        -- found a prime
        sum = sum + prime
        -- marks multiples of prime
        i = prime * prime
        while i < max do
            nonprimes[i] = true
            i = i + 2 * prime
        end
    end
    -- primes cannot be even
    prime = prime + 2
end
print(sum)
As an optimization, even numbers are never considered. This halves the array size and the number of iterations, and it is also why the multiples of a found prime that get marked are of the form (2k+1)*prime.
Your program also had some bugs, and computing on the order of n^2 divisions is very expensive.

How to solve this recurrence using the master method?

T(n) = 4T(n/2) + n^2 and T(1) = 1
I don't know, guys, I can solve other ones but I seem to get stuck and can't start with this one.
Let's work through this one and see what we find. In this case, we have a = 4, b = 2, and d = 2. Since log_b a = 2 = d, we should get that T(n) = Θ(n^2 log n).
Let's quickly check that this is the case by thinking about how much work is done per level of the recursion tree. At the top level, we do n^2 work and then make four calls on problems of size n/2. Each of those subproblems does (n/2)^2 = n^2 / 4 work, and since there are four copies of that problem, the work done at the next level is n^2. Each of those subproblems in turn fires off four recursive calls on problems of size n/4, each doing (n/4)^2 = n^2 / 16 work, and since there are sixteen of those subproblems, the work done at that level is also n^2. Overall, we see that each layer in the tree does n^2 work and that there are Θ(log n) layers, so the total work done is Θ(n^2 log n), matching our bound from the Master Theorem.
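If you want to see the bound numerically, here is a small Python sketch (an illustration, not part of the original answer) that evaluates T(n) = 4T(n/2) + n^2 on powers of two and divides by n^2 log2 n; the ratio settles toward a constant, as Θ(n^2 log n) predicts:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 1, T(n) = 4*T(n/2) + n^2, evaluated on powers of two
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n * n

for k in range(1, 16):
    n = 2 ** k
    print(n, T(n), round(T(n) / (n * n * math.log2(n)), 4))   # ratio tends to 1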

Project Euler #3 Ruby Solution - What is wrong with my code?

This is my code:
def is_prime(i)
  j = 2
  while j < i do
    if i % j == 0
      return false
    end
    j += 1
  end
  true
end

i = (600851475143 / 2)
while i >= 0 do
  if (600851475143 % i == 0) && (is_prime(i) == true)
    largest_prime = i
    break
  end
  i -= 1
end
puts largest_prime
Why is it not returning anything? Is it too large of a calculation to go through all the numbers? Is there a simple way of doing it without utilizing the Ruby prime library (which defeats the purpose)?
All the solutions I found online were too advanced for me, does anyone have a solution that a beginner would be able to understand?
"premature optimization is (the root of all) evil". :)
Here you go right away for the factor that is both (1) the biggest and (2) prime. How about finding all the factors, prime or not, and then taking the last (biggest) of them that is prime? When we solve that, we can start optimizing it.
A factor a of a number n is such that there exists some b (we assume a <= b to avoid duplication) with a * b = n. But that means that for a <= b we will also have a*a <= a*b == n.
So, for each b = n/2, n/2-1, ... the potential corresponding factor is known automatically as a = n / b; there's no need to test a for divisibility at all ... and perhaps you can figure out which of the a's don't have to be tested for primality as well.
Lastly, if p is the smallest prime factor of n, then the prime factors of n are p and all the prime factors of n / p. Right?
Now you can complete the task.
update: you can find more discussion and a pseudocode of sorts here. Also, search for "600851475143" here on Stack Overflow.
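If a concrete sketch helps, here is one way to implement the "prime factors of n are p and the prime factors of n / p" observation above in Python (my illustration, not the poster's Ruby code; the function name is made up):

def largest_prime_factor(n):
    factor = 2
    while factor * factor <= n:
        if n % factor == 0:
            n //= factor        # strip one copy of this (necessarily prime) factor
        else:
            factor += 1
    return n                    # whatever remains is the largest prime factor

print(largest_prime_factor(600851475143))   # 6857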
I'll address not so much the answer, but how YOU can pursue the answer.
The most elegant troubleshooting approach is to use a debugger to get insight as to what is actually happening: How do I debug Ruby scripts?
That said, I rarely use a debugger -- I just stick in puts here and there to see what's going on.
Start with adding puts "testing #{i}" as the first line inside the loop. While the screen I/O will be a million times slower than a silent calculation, it will at least give you confidence that it's doing what you think it's doing, and perhaps some insight into how long the whole problem will take. Or it may reveal an error, such as the counter not changing, incrementing in the wrong direction, overshooting the break conditional, etc. Basic sanity check stuff.
If that doesn't set off a lightbulb, go deeper and puts inside the if statement. No revelations yet? Next puts inside is_prime(), then inside is_prime()'s loop. You get the idea.
Also, there's no reason in the world to start with 600851475143 during development! 17, 51, 100 and 1024 will work just as well. (And don't forget edge cases like 0, 1, 2, -1 and such, just for fun.) These will all complete before your finger is off the enter key -- or demonstrate that your algorithm truly never returns and send you back to the drawing board.
Use these two approaches and I'm sure you'll find your answers in a minute or two. Good luck!
Do you know you can solve this with one line of code in Ruby?
require 'prime'
Prime.prime_division(600851475143).flatten.max
=> 6857

Big-O of an operation over a singly linked list

Suppose you've got a singly linked list of size N, and you want to perform an operation on every element, beginning at the end.
I've come up with the following pseudocode:
while N > 0
    Current = LinkedList
    for 0 to N
        Current = Current.tail
    end
    Operation(Current.head)
    N := N - 1
end
Now I've got to determine which Big-O this algorithm is.
Supposing that Operation() is O(1), I think it's something like this:
N + (N-1) + (N-2) + ... + (N-(N-1)) + 1
But I'm not sure what Big-O that actually is. I think it is definitely smaller than O(N^2), but I don't think you can say it's O(N) either ...
Your equation is basically that of the triangular numbers, and sums to N(N+1)/2. I'll leave you to determine the O() from that!
A quicker way to do this is to construct a new list that is the reverse of the original list, and then perform the operations on that.
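For concreteness, here is a hedged Python sketch of that reverse-then-operate idea (the Node class and names are illustrative, mirroring the head/tail wording of the pseudocode):

class Node:
    def __init__(self, value, tail=None):
        self.value = value
        self.tail = tail

def reversed_list(head):
    # Build a new list holding the same values in reverse order: O(n).
    rev = None
    while head is not None:
        rev = Node(head.value, rev)     # prepend each value onto the new list
        head = head.tail
    return rev

def operate_back_to_front(head, operation):
    node = reversed_list(head)          # O(n) to build the reversed copy
    while node is not None:             # O(n) to visit every element once
        operation(node.value)
        node = node.tail

# Usage: build 1 -> 2 -> 3 and process it back to front.
head = Node(1, Node(2, Node(3)))
operate_back_to_front(head, print)      # prints 3, 2, 1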
Your algorithm is O(n^2) as you suggest in your post. You can do it in O(n), though.
It's important to remember that Big-O notation is an upper bound on the algorithm's time complexity.
1+2+3+...+n = n*(n+1)/2 = 0.5*n^2+O(n)
This is O(n^2), and O(n^2) is tight, i.e. there is no lower runtime order that'd contain your runtime.
A faster algorithm that works from front to back could achieve O(n) instead of O(n^2).
Your runtime analysis is correct: the runtime is 1 + 2 + ... + N, which is the sum of an arithmetic progression and therefore equals (N² + N) / 2.

Can a SHA-1 hash be all-zeroes?

Is there any input for which SHA-1 computes a hex value of forty zeros, i.e. "0000000000000000000000000000000000000000"?
Yes, it's just incredibly unlikely. I.e. one in 2^160, or 0.00000000000000000000000000000000000000000000006842277657836021%.
Also, because SHA-1 is cryptographically strong, it would be computationally infeasible (at least with current computer technology -- all bets are off for emergent technologies such as quantum computing) to find an input that hashes to all zeros unless one turned up in practice. If you really must use the "0" hash as a sentinel, be sure to include an appropriate assertion (that you did not just hash input data to your "zero" hash sentinel) that survives into production. It is a failure condition your code will permanently need to check for, and your code will be broken the moment such an input ever appears.
Depending on your situation (if your logic can cope with handling the empty string as a special case in order to forbid it from input), you could use the SHA-1 hash of the empty string ('da39a3ee5e6b4b0d3255bfef95601890afd80709'). Another possibility is using the hash of any string that is not in your input domain, such as sha1('a') if your inputs are numeric-only by invariant. If the input is preprocessed to add any regular decoration, then a hash of something without the decoration would work as well (e.g. sha1('abc') if inputs like 'foo' are decorated with quotes into '"foo"').
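If you want to check those sentinel digests yourself, a quick Python snippet (just an illustration; the discussion itself is language-agnostic) is:

import hashlib

print(hashlib.sha1(b'').hexdigest())    # da39a3ee5e6b4b0d3255bfef95601890afd80709
print(hashlib.sha1(b'a').hexdigest())   # another out-of-domain sentinel if inputs are numeric-only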
I don't think so.
There is no easy way to show why it's not possible. If there was, then this would itself be the basis of an algorithm to find collisions.
Longer analysis:
The preprocessing makes sure that there is always at least one 1 bit in the input.
The loop over w[i] will leave the original stream alone, so there is at least one 1 bit in the input (words 0 to 15). Even with clever design of the bit patterns, at least some of the values from 0 to 15 must be non-zero since the loop doesn't affect them.
Note: leftrotate is circular, so no 1 bits will get lost.
In the main loop, it's easy to see that the constant k is never zero, so temp can't be zero merely because all operands on the right-hand side are zero (k never is).
This leaves us with the question of whether you can create a bit pattern for which (a leftrotate 5) + f + e + k + w[i] returns 0 by overflowing the sum. For this, we need to find values for w[i] such that w[i] = 0 - ((a leftrotate 5) + f + e + k).
This is possible for the first 16 values of w[i] since you have full control over them. But the words 16 to 79 are again created by xoring the first 16 values.
So the next step could be to unroll the loops and create a system of linear equations. I'll leave that as an exercise to the reader ;-) The system is interesting since we have a loop that creates additional equations until we end up with a stable result.
Basically, the algorithm was chosen in such a way that you can create individual 0 words by selecting input patterns but these effects are countered by xoring the input patterns to create the 64 other inputs.
Just an example: To make temp 0, we have
a = h0 = 0x67452301
f = (b and c) or ((not b) and d)
= (h1 and h2) or ((not h1) and h3)
= (0xEFCDAB89 & 0x98BADCFE) | (~0xEFCDAB89 & 0x10325476)
= 0x98badcfe
e = 0xC3D2E1F0
k = 0x5A827999
which gives us (a leftrotate 5) + f + e + k = 0x9fb498b3, and therefore w[0] = 0 - 0x9fb498b3 = 0x604b674d. This value is then used in the words 16, 19, 22, 24-25, 27-28, 30-79.
Word 1, similarly, is used in words 1, 17, 20, 23, 25-26, 28-29, 31-79.
As you can see, there is a lot of overlap. If you calculate the input value that would give you a 0 result, that value influences at least 32 other input values.
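For readers who want to reproduce the round-0 numbers in this answer, here is a short Python sketch (an illustration using the standard SHA-1 initial constants; only the rotate helper is mine):

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

h0, h1, h2, h3, h4 = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0
k = 0x5A827999
a, b, c, d, e = h0, h1, h2, h3, h4

f = (b & c) | (~b & 0xFFFFFFFF & d)             # choice function, round 0
s = (rotl32(a, 5) + f + e + k) & 0xFFFFFFFF     # temp minus the w[0] term
w0 = (-s) & 0xFFFFFFFF                          # the w[0] that would make temp 0

print(hex(f), hex(s), hex(w0))                  # 0x98badcfe 0x9fb498b3 0x604b674d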
The post by Aaron is incorrect. It is getting hung up on the internals of the SHA1 computation while ignoring what happens at the end of the round function.
Specifically, see the pseudo-code from Wikipedia. At the end of the round, the following computation is done:
h0 = h0 + a
h1 = h1 + b
h2 = h2 + c
h3 = h3 + d
h4 = h4 + e
So an all 0 output can happen if h0 == -a, h1 == -b, h2 == -c, h3 == -d, and h4 == -e going into this last section, where the computations are mod 2^32.
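A tiny Python illustration of that condition (hypothetical register value, just to show the mod-2^32 cancellation):

h0 = 0x67452301             # SHA-1's initial h0, as an example
a = (-h0) % 2**32           # the working-register value that would cancel it
print((h0 + a) % 2**32)     # 0 -- an all-zero digest needs this for all five registers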
To answer your question: nobody knows whether there exists an input that produces all zero outputs, but cryptographers expect that there are based upon the simple argument provided by daf.
Without any knowledge of SHA-1 internals, I don't see why any particular value should be impossible (unless explicitly stated in the description of the algorithm). An all-zero value is no more or less probable than any other specific value.
Contrary to all of the current answers here, nobody knows that. There's a big difference between a probability estimation and a proof.
But you can safely assume it won't happen. In fact, you can safely assume that just about ANY value won't be the result (assuming it wasn't obtained through some SHA-1-like procedures). You can assume this as long as SHA-1 is secure (it actually isn't anymore, at least theoretically).
People don't seem to realize just how improbable it is (if all humanity focused all of its current resources on finding a zero hash by brute force, it would take about xxx... ages of the current universe to crack it).
If you know the function is safe, it's not wrong to assume it won't happen. That may change in the future, so assume some malicious inputs could give that value (e.g. don't erase the user's HDD if you find a zero hash).
If anyone still thinks it's not "clean" or something, I can tell you that nothing is guaranteed in the real world, because of quantum mechanics. You assume you can't walk through a solid wall just because of an insanely low probability.
Contrary to all answers here, the answer is simply No.
The hash value always contains bits set to 1.
