Problem understanding the programming syntax of Dart recursion [duplicate]

I have trouble understanding the code of this recursive function. I am new to Dart programming. I understand what a recursive function accomplishes, but I have a problem understanding the programming syntax.
int sum(List<int> numberList, int index) {
  if (index < 0) {
    return 0;
  } else {
    return numberList[index] + sum(numberList, index - 1);
  }
}
void main() {
  // Driver Code
  var result = sum([1, 2, 3, 4, 5], 4);
  print(result);
}
Question: where is the value for each step stored? Does the result of the first pass through the return statement equal 9, given the inputs passed in from main()? Where is the value 9 stored? How does the function know to add 9 + 3 on the second pass?
Does the recursive function have "internal memory" of the values generated by each pass?
My understanding of the language is that var result passes the arguments to the sum function.
The sum function executes the if-else statement until the index value is 0, which means it executes 4 times. On the first pass the return statement produces a value of 9 (5 + 4, since the value at index is 5 and the value at index - 1 is 4).
Here begins my confusion. The sum function would now make a second if-else pass and execute the return statement again.
Now the initial value of numberList[index] would need to be 9 and the value of sum(numberList, index - 1) would need to be 3, to get 9 + 3 = 12. Two more passes give 12 + 2 = 14 and 14 + 1 = 15, the expected result.
My question here is how (if at all) the index value in numberList[index] changes. The index value is defined as 4. Is this internal logic of the recursive function, or am I completely misinterpreting the syntax? I would expect a "temporary" variable for the result that increases with each pass.
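The answer to "where is each value stored" is: on the call stack. Every call to sum gets its own stack frame with its own copy of index (4, 3, 2, 1, 0, then -1), and each pending addition waits in its frame until the inner call returns, so the total builds up as 1, 3, 6, 10, 15 on the way back out rather than 9, 12, 14, 15. A traced variant of sum (my own sketch, with a hypothetical depth parameter added for indentation) makes this visible:
int sumTraced(List<int> numberList, int index, [int depth = 0]) {
  // Base case: an index below zero contributes nothing.
  if (index < 0) {
    print('${'  ' * depth}sum(index: $index) returns 0');
    return 0;
  }
  // Each call has its own copy of `index` in its own stack frame;
  // the addition below cannot happen until the inner call returns.
  final rest = sumTraced(numberList, index - 1, depth + 1);
  final total = numberList[index] + rest;
  print('${'  ' * depth}sum(index: $index) returns ${numberList[index]} + $rest = $total');
  return total;
}

void main() {
  print(sumTraced([1, 2, 3, 4, 5], 4)); // 15
}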

Related

Solving the nth Fibonacci number using dynamic programming, but getting crazy slow running time

I am attempting to solve the nth Fibonacci number using dynamic programming, and here is my code:
class Solution:
    def fib(self, n: int) -> int:
        result = [0] * (n + 1)
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            result[n] = self.fib(n - 1) + self.fib(n - 2)
            return result[n]
I ran it on LeetCode and it took more than 2000 ms to finish.
The solution on LeetCode takes only 28 ms to finish:
class Solution:
    cache = {0: 0, 1: 1}

    def fib(self, N: int) -> int:
        if N in self.cache:
            return self.cache[N]
        self.cache[N] = self.fib(N - 1) + self.fib(N - 2)
        return self.cache[N]
I compared my solution with the LeetCode solution and found that they are actually very similar. LeetCode uses a dictionary for storing results; I use a list. But the time complexity of getting and inserting an item is O(1) for both dictionaries and lists, so I don't understand why there is such a huge difference in runtime.
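The difference isn't dict vs. list: the first version allocates a fresh local result list on every call and never reads a memoized value back out of it, so it still performs the full exponential tree of recursive calls. The cache in the second version is a class attribute that survives across calls, so each Fibonacci number is computed only once. A list-based version that actually memoizes (my own sketch, not from the thread) is just as fast as the dictionary one:
class Solution:
    def fib(self, n: int) -> int:
        # memo[i] caches fib(i) once computed; -1 marks "not computed yet"
        memo = [-1] * (n + 1)

        def go(i: int) -> int:
            if i < 2:
                return i
            if memo[i] == -1:
                memo[i] = go(i - 1) + go(i - 2)
            return memo[i]

        return go(n)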

Dart StringBuffers in a list seem to interfere with each other

I'm doing a nested iteration over two lists, in which I am populating some StringBuffers, like this:
var int_list = [1, 2, 3];
var letters_list = ['a', 'b', 'c'];
var row_strings = List.filled(3, StringBuffer());
var single_buffer = StringBuffer();

int_list.asMap().forEach((int_index, column) {
  letters_list.asMap().forEach((letter_index, letter) {
    // debug the writing operation
    print('writing $letter_index - $letter');
    row_strings[letter_index].write(letter);
    // try a buffer not in a list as a test
    if (letter_index == 0) {
      single_buffer.write(letter);
    }
  });
});
print(single_buffer);
print(row_strings);
print(single_buffer);
print(row_strings);
What I expect to happen is that in the list of StringBuffers, buffer 0 gets all the 'a's, buffer 1 gets all the 'b's, and buffer 2 all the 'c's.
The debug output confirms that the writing operation is doing the right thing:
writing 0 - a
writing 1 - b
writing 2 - c
writing 0 - a
writing 1 - b
writing 2 - c
writing 0 - a
writing 1 - b
writing 2 - c
and the single string buffer gets the right output:
aaa
But the output of the list is this:
[abcabcabc, abcabcabc, abcabcabc]
What is going on here? There seems to be some strange behaviour when the StringBuffers are in a list.
Your problem is this line:
var row_strings = List.filled(3, StringBuffer());
This constructor is documented as:
List.filled(int length, E fill, {bool growable: false})
Creates a list of the given length with fill at each position.
https://api.dart.dev/stable/2.10.5/dart-core/List/List.filled.html
So what you are doing is creating a single StringBuffer instance and using it at every position in your row_strings list.
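You can see this directly with identical(), which tests whether two references point at the same object (my own check, not part of the original answer):
void main() {
  var row_strings = List.filled(3, StringBuffer());
  // All three slots hold the very same StringBuffer instance.
  print(identical(row_strings[0], row_strings[1])); // true
  print(identical(row_strings[1], row_strings[2])); // true
}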
What you actually want is to create a new StringBuffer for each position in the list. You can use List.generate for that:
List.generate(int length, E generator(int index), {bool growable: true})
Generates a list of values.
Creates a list with length positions and fills it with values created by calling generator for each index in the range 0 .. length - 1 in increasing order.
https://api.dart.dev/stable/2.10.5/dart-core/List/List.generate.html
Using that, we end up with:
final row_strings = List.generate(3, (_) => StringBuffer());
We don't need the index argument in our generator, so I have just called it _. The function (_) => StringBuffer() is executed once for each position in the list, and its return value is stored there. Since our function returns a new StringBuffer instance each time it is executed, we end up with a list of 3 separate StringBuffer instances.
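Rerunning the question's loops with this fix (a simplified sketch of mine) now gives each buffer only its own letters:
void main() {
  final letters = ['a', 'b', 'c'];
  final row_strings = List.generate(3, (_) => StringBuffer());
  // Three passes, as in the original nested iteration.
  for (var pass = 0; pass < 3; pass++) {
    for (var i = 0; i < letters.length; i++) {
      row_strings[i].write(letters[i]);
    }
  }
  print(row_strings); // [aaa, bbb, ccc]
}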

How to get length of a table in Lua? [duplicate]

This question already has answers here:
Why does Lua's length (#) operator return unexpected values? (2 answers)
Closed 6 years ago.
I am new to Lua, and my Lua version is 5.1.
I've run into this problem. Can anybody help me understand '#'?
local tblTest =
{
    [1] = 2,
    [2] = 5,
    [5] = 10,
}
print(#tblTest)
This outputs 2, and ...
local tblTest =
{
    [1] = 2,
    [2] = 5,
    [4] = 10,
}
print(#tblTest)
the output is 4. Why?
Thanks, all of you.
The output is 4 because the last key with a value is 4, but that doesn't mean 3 isn't also defined: in Lua, tblTest[3] is defined as nil. So when you use the # operator, it counts every key in a sequence with a value up to the last non-nil value. Except (and I could be wrong about this) when the last key in the table is a power of 2: due to a language optimization, it counts up to the key that is a power of 2. In general you should stay away from tables with nil values in them, as there are some other weird behaviors that happen because of this.
This chunk will do what you want, though:
local T = {
    [1] = 2,
    [2] = 5,
    [10] = 10
}
local lengthNum = 0
for k, v in pairs(T) do -- for every key in the table with a corresponding non-nil value
    lengthNum = lengthNum + 1
end
print(lengthNum)
What this does is check every key in the table (such as [1] or [2]) and test whether it has a value. Every key with a non-nil value runs the for loop one more time. There might be a shorter way to do this, but this is how I would do it.
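For completeness, a small demo of mine (Lua 5.1) of why # is only trustworthy on a sequence without holes; for a table with holes, any "border" (a non-nil value followed by nil) is a legal answer:
local seq = {2, 5, 10}          -- keys 1..3, no holes
print(#seq)                     --> 3 (well-defined)

local holey = {[1] = 2, [2] = 5, [5] = 10}
print(#holey)                   --> 2 or 5: both are borders, either may be returned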

Hash value of String that would be stable across iOS releases?

In the documentation for String.hash on iOS it says:
You should not rely on this property having the same hash value across
releases of OS X.
(strange that they speak of OS X in the iOS documentation)
Well, I need a hashing function that will not change with iOS releases. It can be simple; I do not need anything like SHA. Is there some library for that?
There is another question about this here, but the accepted (and only) answer there simply states that we should respect the note in the documentation.
Here is a non-crypto hash, for Swift 3:
func strHash(_ str: String) -> UInt64 {
    var result = UInt64(5381)
    let buf = [UInt8](str.utf8)
    for b in buf {
        result = 127 * (result & 0x00ffffffffffffff) + UInt64(b)
    }
    return result
}
It was derived somewhat from a C++11 constexpr
constexpr uint64_t str2int(char const *input) {
    return *input                               // test for null terminator
        ? (static_cast<uint64_t>(*input) +      // add char to end
           127 * (str2int(input + 1)            // prime 127 shifts left almost 7 bits
                  & 0x00ffffffffffffff))        // mask right 56 bits
        : 5381;                                 // start with prime number 5381
}
Unfortunately, the two don't yield the same hash. To do that you'd need to reverse the iterator order in strHash:
for b in buf.reversed() {...}
But that will run 13x slower, somewhat comparable to the djb2hash String extension that I got from https://useyourloaf.com/blog/swift-hashable/
Here are some benchmarks, for a million iterations:
hashValue execution time: 0.147760987281799
strHash execution time: 1.45974600315094
strHashReversed time: 18.7755110263824
djb2hash execution time: 16.0091370344162
sdbmhash crashed
For C++, str2int is roughly as fast as Swift 3's hashValue:
str2int execution time: 0.136421
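Since strHash depends only on the string's UTF-8 bytes and fixed arithmetic, it is stable across processes and OS releases, unlike hashValue. A quick sanity check (my own, not from the answer):
let a = strHash("stable across releases")
let b = strHash("stable across releases")
assert(a == b)   // same bytes in, same 64-bit value out, on any run
print(a)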

Behavior of load() when chunk function returns nil

From the Lua 5.1 documentation for load():
Loads a chunk using function func to get its pieces. Each call to func must return a string that concatenates with previous results. A return of an empty string, nil, or no value signals the end of the chunk.
From my testing, this is not actually true. Or, rather, the documentation is at a minimum misleading.
Consider this example script:
function make_loader(return_at)
    local x = 0
    return function()
        x = x + 1
        if x == return_at then return 'return true' end
        return nil
    end
end

x = 0
repeat
    x = x + 1
until not load(make_loader(x))()
print(x)
The output is the number of successive calls to the function returned by make_loader() that returned nil before load() gives up and returns a function returning nothing.
One would expect the output here to be "1" if the documentation is to be taken at face value. However, the output is "3". This implies that the argument to load() is called until it returns nil three times before load() gives up.
On the other hand, if the chunk function returns a string immediately and then nil on subsequent calls, it only takes one nil to stop loading:
function make_loader()
    local x = 0
    return {
        fn = function()
            x = x + 1
            if x == 1 then return 'return true' end
            return nil
        end,
        get_x = function() return x end
    }
end

loader = make_loader()
load(loader.fn)
print(loader.get_x())
This prints "2" as I would expect.
So my question is: is the documentation wrong? Is this behavior desirable for some reason? Is this simply a bug in load()? (It seems to appear intentional, but I cannot find any documentation explaining why.)
This is a bug in 5.1. It has been corrected in 5.2, but we failed to incorporate the correction in 5.1.
I get slightly different results from yours, but they are still not quite what the documentation implies:
function make_loader(return_at)
    local x = 0
    return function()
        x = x + 1
        print("make_loader", return_at, x)
        if x == return_at then return 'return true' end
        return nil
    end
end

for i = 1, 4 do
    load(make_loader(i))
end
This returns the following results:
make_loader 1 1
make_loader 1 2
make_loader 2 1
make_loader 2 2
make_loader 2 3
make_loader 3 1
make_loader 3 2
make_loader 4 1
make_loader 4 2
For 1 it's called two times because the first call returned 'return true' and the second nil. For 2 it's called three times because the first call returned nil, then 'return true', and then nil again. For all other values it's called two times: it seems the very first nil is ignored and the function is called at least once more.
If that's indeed the case, the documentation needs to reflect that. I looked at the source code, but didn't see anything that could explain why the first returned nil is ignored (I also tested with empty string and no value with the same result).
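A quick way to check a given Lua build for this bug (my sketch): count how many times load() calls a reader that only ever returns nil. Per the documentation, one call should suffice; on an affected 5.1 build the counter comes out higher.
local calls = 0
load(function()
    calls = calls + 1
    return nil          -- should signal end-of-chunk immediately
end)
print(calls)            -- 1 per the docs; >1 on a buggy 5.1 build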
