How can "yield*" improve generator performance? - dart

I saw on the Dart documentation (Link) that yield* improves the performance of a recursive generator.
Iterable<int> naturalsDownFrom(int n) sync* {
  if (n > 0) {
    yield n;
    yield* naturalsDownFrom(n - 1);
  }
}
I don't get how that can be, or how yield* works. What is the difference between yield and yield*?

I wouldn't argue that you should use yield* for performance.
You should use yield* whenever you want to emit all the events of another stream inside an async* function.
The yield* differs from yield in that yield emits a single value, while yield* emits all the events of another stream.
Doing
yield* someStream;
is almost the same as doing:
await for (var value in someStream) {
  yield value;
}
That is, the yield* emits the same data events as the stream it works on.
The difference is that yield* also emits error events, and always emits the entire stream, where the await for stops at the first error event.
You should not make your function recursive just to be able to use yield*, unless it needs to be; that is not going to help performance.


The question is to check whether the given linked list is a palindrome. Please tell me what I am doing wrong.

I understand other approaches such as using a stack or reversing the second half of the linked list. But what is wrong with my approach?
/**
 * Definition for singly-linked list.
 * public class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode() {}
 *     ListNode(int val) { this.val = val; }
 *     ListNode(int val, ListNode next) { this.val = val; this.next = next; }
 * }
 */
class Solution {
    public boolean isPalindrome(ListNode head) {
        if (head.next == null) { return true; }
        while (head != null) {
            ListNode ptr = head, preptr = head;
            while (ptr.next != null) { ptr = ptr.next; }
            if (ptr == head) { break; }
            while (preptr.next.next != null) { preptr = preptr.next; }
            if (head.val == ptr.val) {
                preptr.next = null;
                head = head.next;
            } else {
                return false;
            }
        }
        return true;
    }
}
The following can be said about your solution:
It fails with an exception if head is null. To avoid that, you could just remove the first if statement; that case does not need separate handling. When the list is a single node, the first iteration will execute the break, so you'll still get true as the return value, but you will no longer access .next when head is null.
It mutates the given list. This is not very nice. The caller will not expect this will happen, and may need the original list for other purposes even after this call to isPalindrome.
It is slow. Its time complexity is quadratic. If this is part of a coding challenge, then the test data may be large, and the execution of your function may then exceed the allotted time.
Using a stack is indeed a solution, but it feels like cheating: then you might as well convert the whole list to an array and test whether the array is a palindrome using its direct addressing capabilities.
You can do this with just the list as follows:
1. Count the number of nodes in the list.
2. Use that count to identify the first node of the second half of the list. If the number of nodes is odd, let this be the node after the center node.
3. Apply a list reversal algorithm on that second half. Now you have two shorter lists.
4. Compare whether the values in those two lists are equal (ignoring the center node if there was one). Remember the outcome (false or true).
5. Repeat step 3 so the reversal is rolled back, and the list is back in its original state.
6. Return the result that was remembered in step 4.
This takes linear time, and so for larger lists, this should outperform your solution.
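The steps above can be sketched in Java as follows. This is an illustrative version, not the asker's code; the ListNode class mirrors the definition from the question, and the helper names are my own:

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class PalindromeList {
    // Reverses a list in place and returns the new head.
    static ListNode reverse(ListNode head) {
        ListNode prev = null;
        while (head != null) {
            ListNode next = head.next;
            head.next = prev;
            prev = head;
            head = next;
        }
        return prev;
    }

    static boolean isPalindrome(ListNode head) {
        // Step 1: count the nodes.
        int count = 0;
        for (ListNode p = head; p != null; p = p.next) count++;
        // Step 2: find the first node of the second half,
        // skipping the center node when the count is odd.
        ListNode second = head;
        for (int i = 0; i < (count + 1) / 2; i++) second = second.next;
        // Step 3: reverse the second half.
        second = reverse(second);
        // Step 4: compare the two halves; remember the outcome.
        boolean result = true;
        for (ListNode p = head, q = second; q != null; p = p.next, q = q.next) {
            if (p.val != q.val) { result = false; break; }
        }
        // Step 5: reverse again; the node preceding the second half still
        // points at its original first node, so the list is restored.
        reverse(second);
        // Step 6: return the remembered outcome.
        return result;
    }
}
```

Note that the restore in step 5 works without any extra bookkeeping: the link from the first half into the second half was never touched, and re-reversing puts its target back at the front.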

Combiner never gets called in reduction operation (but is mandatory)

I am trying to figure out what the accumulator and combiner do in the reduce stream operation.
List<User> users = Arrays.asList(new User("John", 30), new User("Julie", 35));
int result = users.stream()
        .reduce(0,
                (partialAgeResult, user) -> {
                    // accumulator is called twice
                    System.out.println(MessageFormat.format("partialAgeResult {0}, user {1}", partialAgeResult, user));
                    return partialAgeResult + user.getAge();
                },
                (integer, integer2) -> {
                    // combiner is never called
                    System.out.println(MessageFormat.format("integer {0}, integer2 {1}", integer, integer2));
                    return integer * integer2;
                });
System.out.println(MessageFormat.format("Result is {0}", result));
I notice that the combiner is never executed, and the result is 65.
If I use users.parallelStream() then the combiner is executed once and the result is 1050.
Why do stream and parallelStream yield different results? I don't see any side effects of executing this in parallel.
What is the purpose of the combiner in the simple stream version?
The problem is here: you are multiplying, not adding, in your combiner.
(integer, integer2) -> {
    // combiner is never called
    System.out.println(MessageFormat.format("integer {0}, integer2 {1}", integer, integer2));
    return integer * integer2; // <----- should be addition
});
The combiner is used to appropriately combine various parts of a parallel operation as these operations can perform independently on individual "pieces" of the original stream.
A simple example would be summing a list of elements. You could have a variety of partial sums in a parallel operation, so you need to sum the partial sums in the combiner to get the total sum (a good exercise for you to try and see for yourself).
For a sequential stream whose accumulator has mismatched argument types (a BiFunction<U,? super T,U>), you still have to supply a combiner, but it is never invoked: there are no partial results computed in parallel that would need combining.
You can avoid supplying a combiner altogether by converting the elements to the result type before reducing:
users.stream().map(e -> e.getAge()).reduce(0, (a, b) -> a + b);
So the combiner serves no purpose for a sequential stream with a BiFunction<U,? super T,U> accumulator, but you have to provide one because there is no overload like
reduce(U identity, BiFunction<U,? super T,U> accumulator)
For a parallel stream, however, the combiner is called.
And you are getting 1050 because your combiner multiplies the two partial sums: 30 * 35 = 1050.
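Here is a minimal, self-contained sketch of the corrected version (the User class below is a stand-in for the one in the question): with addition in the combiner, the sequential and parallel reductions agree.

```java
import java.util.List;
import java.util.stream.Stream;

class User {
    private final String name;
    private final int age;
    User(String name, int age) { this.name = name; this.age = age; }
    int getAge() { return age; }
}

class ReduceDemo {
    static int totalAge(List<User> users, boolean parallel) {
        Stream<User> stream = parallel ? users.parallelStream() : users.stream();
        return stream.reduce(0,
                (partial, user) -> partial + user.getAge(), // accumulator
                (a, b) -> a + b);                           // combiner: add, don't multiply
    }
}
```

With the sample data from the question, both `totalAge(users, false)` and `totalAge(users, true)` produce 65.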

Dafny iterator: precondition and modifes clause violated

Dafny shows multiple errors when calling MoveNext() on an iterator that does nothing:
iterator Iter()
{}

method main()
  decreases *
{
  var iter := new Iter();
  while (true)
    decreases *
  {
    var more := iter.MoveNext();
    if (!more) { break; }
  }
}
The errors are on the call to iter.MoveNext():
call may violate context's modifies clause
A precondition for this call might not hold.
There is no modifies clause for main or Iter, and there is no precondition for Iter. Why is this program incorrect?
You need the following invariant on the loop
invariant iter.Valid() && fresh(iter._new)
Then your program verifies. As usual, there's nothing wrong (dynamically) with your program, but you can have false positives at verification time due to missing annotations.
As far as I know, this invariant is always required when using iterators.
(A little) More information about iterators can be found in the Dafny Reference, in Chapter 16. (At least, enough information for me to remember the answer to this question.)
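For reference, here is the program from the question with that invariant added to the loop, which is the version that verifies:

```dafny
iterator Iter()
{}

method main()
  decreases *
{
  var iter := new Iter();
  while (true)
    invariant iter.Valid() && fresh(iter._new)
    decreases *
  {
    var more := iter.MoveNext();
    if (!more) { break; }
  }
}
```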

Pause lua coroutine from outside function for use in a scheduler

This has been talked about before, and it seems all that can be done from the outside is to kill the coroutine. That is of course not practical for a scheduler. Is there a way to pause a coroutine from the outside at all, or is there a workaround?
From the C API, you can set a hook that will yield after n lines / instructions. (This isn't possible through debug.sethook, because it adds an in-between layer which prevents it from working.)
You can wrap that up as a single function which you can expose to Lua, so apart from adding that one function, you can do it from Lua. Example:
#include <lua.h>
#include <lauxlib.h>

/* hook callback: forces the running coroutine to yield */
static void yieldhook( lua_State * L, lua_Debug * ar ) {
    (void)ar; /* unused */
    lua_yield( L, 0 );
}

static int setyieldhook( lua_State * L ) {
    lua_State * coro;
    int steps;
    luaL_checktype( L, 1, LUA_TTHREAD );
    coro = lua_tothread( L, 1 );
    steps = (int)luaL_optinteger( L, 2, 0 );
    if (steps <= 0) {
        lua_sethook( coro, NULL, 0, 0 );
    } else {
        lua_sethook( coro, yieldhook, LUA_MASKCOUNT, steps );
    }
    return 0;
}
and then just push that as a function to Lua and give it a name, e.g. debug.setyieldhook.
This one would be used as debug.setyieldhook( coro, timeout ) and whenever the coroutine runs, it will yield after timeout Lua instructions. To clear, debug.setyieldhook( coro, 0 ). (Note: You cannot change/remove hooks set via setyieldhook through debug.sethook and vice versa – this will throw an error or silently create a mess. But you could extend setyieldhook to detect & clear "normal" Lua hooks, and/or wrap debug.sethook to check for & clear the yield hook.)
Other things to watch out for:
- If the coroutine yields, this will not reset the hook timer.
- The coroutine will yield without returning anything, so you probably want to wrap coroutine.yield and/or coroutine.resume so you can tell apart "normal" yields from timeout yields.
- C functions do not count up the number of instructions processed, and so will not trigger the hook (e.g. long-running non-greedy string matches via string.*), so this doesn't provide hard timing guarantees.

Can the lock function be used to implement thread-safe enumeration?

I'm working on a thread-safe collection that uses a Dictionary as a backing store.
In C# you can do the following:
private IEnumerable<KeyValuePair<K, V>> Enumerate() {
    if (_synchronize) {
        lock (_locker) {
            foreach (var entry in _dict)
                yield return entry;
        }
    } else {
        foreach (var entry in _dict)
            yield return entry;
    }
}
The only way I've found to do this in F# is using Monitor, e.g.:
let enumerate() =
    if synchronize then
        seq {
            System.Threading.Monitor.Enter(locker)
            try for entry in dict -> entry
            finally System.Threading.Monitor.Exit(locker)
        }
    else seq { for entry in dict -> entry }
Can this be done using the lock function? Or, is there a better way to do this in general? I don't think returning a copy of the collection for iteration will work because I need absolute synchronization.
I don't think that you'll be able to do the same thing with the lock function, since you would be trying to yield from within it. Having said that, this looks like a dangerous approach in either language, since it means that the lock can be held for an arbitrary amount of time (e.g. if one thread calls Enumerate() but doesn't enumerate all the way through the resulting IEnumerable<_>, then the lock will continue to be held).
It may make more sense to invert the logic, providing an iter method along the lines of:
let iter f =
    if synchronize then
        lock locker (fun () -> Seq.iter f dict)
    else
        Seq.iter f dict
This brings the iteration back under your control, ensuring that the sequence is fully iterated (assuming that f doesn't block, which seems like a necessary assumption in any case) and that the lock is released immediately thereafter.
EDIT
Here's an example of code that could hold the lock forever.
let cached = enumerate() |> Seq.cache
let firstFive = Seq.take 5 cached |> Seq.toList
We've taken the lock in order to start enumerating through the first 5 items. However, we haven't continued through the rest of the sequence, so the lock won't be released (maybe we would enumerate the rest of the way later based on user feedback or something, in which case the lock would finally be released).
In most cases, correctly written code will ensure that it disposes of the original enumerator, but there's no way to guarantee that in general. Therefore, your sequence expressions should be designed to be robust to only being enumerated part way. If you intend to require your callers to enumerate the collection all at once, then forcing them to pass you the function to apply to each element is better than returning a sequence which they can enumerate as they please.
I agree with kvb that the code is suspicious and that you probably don't want to hold the lock. However, there is a way to write the locking in a more comfortable way using the use keyword. It's worth mentioning it, because it may be useful in other situations.
You can write a function that starts holding a lock and returns IDisposable, which releases the lock when it is disposed:
let makeLock locker =
    System.Threading.Monitor.Enter(locker)
    { new System.IDisposable with
        member x.Dispose() =
            System.Threading.Monitor.Exit(locker) }
Then you can write for example:
let enumerate() = seq {
    if synchronize then
        use l0 = makeLock locker
        for entry in dict do
            yield entry
    else
        for entry in dict do
            yield entry }
This essentially implements a C#-like lock using the use keyword, which has similar properties (it lets you run code when leaving a scope). So this is much closer to the original C# version of the code.