I was working with Flutter, making HTTP requests with async and Future. When I executed Future.whenComplete() and printed a message, the message was printed an infinite number of times. Can I stop its execution?
You can register as many then and whenComplete callbacks on a Future as you want. They will all be called when the Future completes.
If you add such a callback when the Future is already completed, the callback will still be called, just not immediately: it is scheduled in the next microtask (as far as I remember).
I have a few basic questions about Dart Futures that I can't seem to answer myself. Consider the following code:
Future(
  () => print('from Future') // 1
).then(
  (_) => print('after Future') // 2
);
What is put on the Event Loop, code block 1 or 2?
If 1 is put on the Event Loop, is 2 executed immediately after it, synchronously, or is it put on the Event Loop as well, for later execution?
If 2 is executed immediately, would it ever make sense for 2 to be:
Future.delayed(someDuration, () => print('after Future'));
What would the use case be? Like to split a longer 'task' so that other code is run in between? Is it something that is actually done in practice, like in Flutter, to prevent 'jank'?
Edit: I found a very insightful article: https://webdev-angular3-dartlang-org.firebaseapp.com/articles/performance/event-loop#how-to-schedule-a-task, which kind of answers pretty much every single question I asked here.
The constructor you are calling is Future(), which is documented as:
Creates a future containing the result of calling computation asynchronously with Timer.run.
If the result of executing computation throws, the returned future is completed with the error.
If the returned value is itself a Future, completion of the created future will wait until the returned future completes, and will then complete with the same result.
If a non-future value is returned, the returned future is completed with that value.
https://api.dart.dev/stable/2.8.2/dart-async/Future/Future.html
Where Timer.run is documented as:
Runs the given callback asynchronously as soon as possible.
This function is equivalent to new Timer(Duration.zero, callback).
https://api.dart.dev/stable/2.8.2/dart-async/Timer/run.html
So we are creating a timer with a zero duration, which means the callback (block 1) is put on the event queue as soon as possible.
So with this knowledge we can answer your questions:
What is put on the Event Loop, code block 1 or 2?
Block 1 is put on the event loop. Since block 2 depends on the result of block 1, it is not put on any queue at this point. Instead, block 2 is registered as a callback and will be notified when block 1 has produced its result.
If 1 is put on the Event Loop, is 2 executed immediately after it, synchronously, or is it put on the Event Loop as well, for later execution?
As far as I understand the documentation, block 2 is executed synchronously, immediately after block 1 completes (unless the future has already completed, in which case the callback is scheduled as a microtask):
Register callbacks to be called when this future completes.
When this future completes with a value, the onValue callback will be called with that value. If this future is already completed, the callback will not be called immediately, but will be scheduled in a later microtask.
https://api.dart.dev/stable/2.8.2/dart-async/Future/then.html
If 2 is executed immediately, would it ever make sense for 2 to be:
The specific example does not make much sense. But yes, you can use Future.delayed if you want to schedule smaller tasks on the event loop. It should be noted that Dart is single threaded, so you cannot use Future.delayed to schedule tasks on another thread.
But in the context of Flutter, you probably want multiple smaller tasks so the UI can be drawn between each task. If you are going to do some heavy calculation, though, you should probably use an Isolate to run it in another thread.
In Dart, you can tell the VM to wait for a Future by calling await.
The thing is that you can only call await in an async function, which returns a ... Future.
So if I have a function which doesn't take a long time to run, and it has to be run in a function whose type is not async, how do I break out of the async chain?
There is no such thing as breaking out of the async cycle. It's possible for sync functions to call async code, but the result of the async code won't be available yet when the sync function returns.
The difference between a synchronous function and an asynchronous function is that the former is done when it returns, and the latter is still working in the background when it returns, which is why it returns a Future which will complete when it's really done.
That is the distinction - an asynchronous function is one that returns a Future. The async marker is not what makes the function asynchronous, it's just one way of implementing an asynchronous function. You can also have functions without the async marker which return a Future.
You can call an asynchronous function from a synchronous function. However, there is no way for the synchronous function to delay its return, so it must return before the future completes. It can set up a listener on the future, future.then((value) { doSomethingWith(value); }), but that listener will certainly only be called after the synchronous function has returned. That then-call also returns a future, so the synchronous function has to ignore some Future. That's fine. You are allowed to ignore a future when you don't need the result.
Whatever you do, you can't get the result of a Future in a synchronous function before it returns.
I'm pretty new to FRP and I'm facing a problem:
I subscribe to an observable that triggers subscribeNext every second.
In the subscribeNext block, I zip observables that execute asynchronous operations, and in the zip's completed block I perform an action with the result.
let signal: RACSignal
let asynchOperations: [RACSignal]
var val: AnyObject?
// subscribeNext is triggered every second
signal.subscribeNext {
    let asynchOperations = // several RACSignal
    // Perform asynchronous operations
    RACSignal.zip(asynchOperations).subscribeNext({
        val = $0
    }, completed: {
        // perform actions with `val`
    })
}
I would like to stop the triggering of subscribeNext for signal (that is normally triggered every second) until completed (from the zip) has been reached.
Any suggestion?
It sounds like you want an RACCommand.
A command is an object that can perform asynchronous operations, but only has one instance of its operation running at a time. As soon as you tell a command to start execute:ing, it becomes "disabled," and it automatically becomes enabled again when the operation completes.
(You can also make a command that's enabled based on other criteria than just "am I executing right now," but it doesn't sound like you need that here.)
Once you have that, you could derive a signal that "gates" the interval signal (for example, if:then:else: on the command's enabled signal toggling between RACSignal.empty and your actual signal -- I do this enough that I have a helper for it), or you can just check the canExecute property before invoking execute: in your subscription block.
Note: you're doing a slightly weird thing with your inner subscription there -- capturing the value and then dealing with the value on the completed block.
If you're doing that because it's more explicit, and you know that the signal will only send one value but you feel the need to encode that directly, then that's fine. I don't think it's standard, though -- if you have a signal that will only send one value, that's something that unfortunately can't be represented at the type level, but is nonetheless an assumption that you can make in your code (or at least, I find myself comfortable with that assumption. To each their own).
But if you're doing it for timing reasons, or because you actually only want the last value sent from the signal, you can use takeLast:1 instead to get a signal that will always send exactly one value right at the moment that the inner signal completes, and then only subscribe in the next block.
Slight word of warning: RACCommands are meant to be used from the main thread to back UI updates; if you want to use a command on a background thread you'll need to be explicit about the scheduler to deliver your signals on (check the docs for more details).
Another completely different approach to getting similar behavior is temporal recursion: perform your operation, then when it's complete, schedule the operation to occur again one second later, instead of having an ongoing timer.
This is slightly different in that you'll always wait one second between operations, whereas with the current approach you could be waiting anywhere between zero and one second. But if that's not a problem, then this is a much simpler solution than using an RACCommand.
ReactiveCocoa's delay: method makes this sort of ad-hoc scheduling very convenient -- no manual NSTimer wrangling here.
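As a rough sketch of that temporal-recursion shape (using plain GCD's asyncAfter in newer Swift syntax rather than ReactiveCocoa's delay:, with performOperations as a hypothetical stand-in for the zipped work):

import Foundation

// Hypothetical stand-in for the zipped asynchronous operations in the question.
func performOperations(completion: @escaping () -> Void) {
    DispatchQueue.global().async {
        // ... do the real work ...
        DispatchQueue.main.async { completion() }
    }
}

// Temporal recursion: each run schedules the next one a second after it finishes,
// so runs can never overlap the way a fixed one-second timer would allow.
func runPeriodically() {
    performOperations {
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            runPeriodically()
        }
    }
}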
As it stands, NSNotifications allow for a target-action mechanism in response to one post / event.
I would like to have a notification which triggers an action (runs a function) only after two events have been triggered.
The scenario is that I have two asynchronous processes which need to complete before I can call the function. Perhaps I'm missing something, but I haven't found a way to do this. Or maybe I'm not thinking of an obvious reason why this would be a really bad idea?
Also, some of my terminology may be off, so please feel free to edit and fix it.
There are many possibilities on how you can implement this. They all center around keeping track of which processes are finished. The best way depends on how your background processes are implemented.
If you are using NSOperationQueue you could add a third operation that has the other two operations as a dependency. That way you won't have to take care of notifications at all.
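If you go the NSOperationQueue route, a minimal sketch in newer Swift syntax (placeholder work in the blocks) could look like this:

import Foundation

let queue = OperationQueue()

let processA = BlockOperation { /* first background process */ }
let processB = BlockOperation { /* second background process */ }
let finish   = BlockOperation { /* runs only once both dependencies have finished */ }

finish.addDependency(processA)
finish.addDependency(processB)

queue.addOperations([processA, processB, finish], waitUntilFinished: false)

Note that a BlockOperation counts as finished as soon as its block returns, so this fits work that runs synchronously inside the block; for processes that are themselves asynchronous, the dispatch group approach below (or an NSOperation subclass that manages its own finished state) is a better fit.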
Otherwise you can count how many operations have finished and execute your code when the counter reaches the right value. GCD has dispatch groups as a nice abstraction for this.
First you create a dispatch group:
let group = dispatch_group_create()
Then you enter the group for each background process:
dispatch_group_enter(group)
Finally you can register a block that gets called when the group becomes empty, that is, when each dispatch_group_enter is balanced by a dispatch_group_leave:
dispatch_group_notify(group, dispatch_get_main_queue()) {
    // All processes are done.
}
After each of your processes finishes, you leave the group again:
dispatch_group_leave(group)
It's important to call dispatch_group_enter before calling dispatch_group_notify or your block will be scheduled immediately as the group is already empty.
After your notify block has executed, you can reuse the group or discard it.
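Put together with the newer Swift GCD syntax (assuming Swift 3 or later; the two process functions are hypothetical placeholders), the whole pattern reads:

import Foundation

let group = DispatchGroup()

// Hypothetical stand-ins for the two asynchronous background processes.
func startFirstProcess(completion: @escaping () -> Void) { DispatchQueue.global().async { completion() } }
func startSecondProcess(completion: @escaping () -> Void) { DispatchQueue.global().async { completion() } }

group.enter()                          // enter before kicking off each process
startFirstProcess { group.leave() }    // leave when it finishes

group.enter()
startSecondProcess { group.leave() }

group.notify(queue: .main) {
    // Both processes are done; call the function that needed them.
}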
I realize this question sounds contradictory. I have several Async requests going out in an application. The situation is that the first async request is an authentication request, and the rest will use an access token returned by the successful authentication request.
The two obvious solutions would be:
run them all synchronous, and risk UI block. (bad choice)
run them async, and put request 2-N in the completion handler for the first one. (not practical)
The trouble is that the subsequent requests may be handled anywhere in the project, at anytime. The failure case would be if the 2nd request was called immediately after the 1st authentication request was issued, and before the access token was returned.
My question thus is, is there any way to queue up Async requests, or somehow say not to issue them until the first request returns successfully?
EDIT:
Why (2) is not practical: The first is an authentication request, happening when the app loads. The 2nd+ may occur right away, in which case it is practical, but it also may occur in a completely separate class or any other part of a large application. I can't essentially put the entire application in the completion handler. Other accesses to the API requests may occur in other classes, and at anytime. Even 1-2 days away after many other things have occurred.
SOLUTION:
// pseudo code using a semaphore lock on the authentication call to block all other calls until it has completed

// at start of auth
_semaphore = dispatch_semaphore_create(0);

// at start of API calls
if (_accessToken == nil && ![_apiCall isEqualToString:@"auth"]) {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
}

// at end of auth: store the token first, then release the waiters
_accessToken = ...;
dispatch_semaphore_signal([[SFApi Instance] semaphore]);
This sounds like a case where you'd want to use NSOperation's dependencies.
From apple docs:
Operation Dependencies
Dependencies are a convenient way to execute operations in a specific order. You can add and remove dependencies for an operation using the addDependency: and removeDependency: methods. By default, an operation object that has dependencies is not considered ready until all of its dependent operation objects have finished executing. Once the last dependent operation finishes, however, the operation object becomes ready and able to execute.
Note that in order for this to work, you must subclass NSOperation "properly" with respect to KVO compliance:
The NSOperation class is key-value coding (KVC) and key-value observing (KVO) compliant for several of its properties. As needed, you can observe these properties to control other parts of your application.
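For illustration only, a bare-bones sketch of such a subclass in newer Swift syntax (performRequest is a hypothetical placeholder for the real network call) might look like this:

import Foundation

// Stays "executing" until its async work calls finish(); the manual KVO
// notifications for isExecuting/isFinished are what let dependent operations
// start at the right time.
class AsyncRequestOperation: Operation {
    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        if isCancelled {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        performRequest { [weak self] in
            self?.finish()
        }
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }

    private func performRequest(completion: @escaping () -> Void) {
        // Placeholder for the real asynchronous request.
        DispatchQueue.global().async { completion() }
    }
}

// Usage: the dependent operation will not start until authOperation's
// isFinished flips to true, which is exactly what the KVO notifications enable.
let queue = OperationQueue()
let authOperation = AsyncRequestOperation()
let nextOperation = AsyncRequestOperation()
nextOperation.addDependency(authOperation)
queue.addOperations([authOperation, nextOperation], waitUntilFinished: false)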
You can't really have it both ways-- there's no built-in serialization for the NSURLConnection stuff. However, you are probably already funneling all of your API requests through some common class anyway (presumably you're not making raw network calls willy-nilly all over the app).
You'll need to build the infrastructure inside that class that prevents the execution of the later requests until the first request has completed. This suggests some sort of serial dispatch queue that all requests (including the initial auth step) are funneled through. You could do this via dependent NSOperations, as is suggested elsewhere, but it doesn't need to be that explicit. Wrapping the requests in a common set of entry points will allow you to do this any way you want behind the scenes.
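One hedged sketch of that common entry point (hypothetical names, newer Swift syntax; certainly not the only way to structure it): requests that arrive before the token exists are parked and replayed once authentication finishes.

import Foundation

final class APIClient {
    static let shared = APIClient()

    private let queue = DispatchQueue(label: "api.client.state") // serializes access to the state below
    private var accessToken: String?
    private var pending: [(String) -> Void] = []                 // requests waiting for the token

    // Call once at app launch.
    func start() {
        authenticate { [weak self] token in
            guard let self = self else { return }
            self.queue.async {
                self.accessToken = token
                // Replay everything that queued up while we were authenticating.
                self.pending.forEach { $0(token) }
                self.pending.removeAll()
            }
        }
    }

    // Every other API call in the app goes through here.
    func enqueue(_ request: @escaping (String) -> Void) {
        queue.async {
            if let token = self.accessToken {
                request(token)                // token already available, run now
            } else {
                self.pending.append(request)  // park until authentication completes
            }
        }
    }

    private func authenticate(completion: @escaping (String) -> Void) {
        // Placeholder for the real authentication request.
        DispatchQueue.global().async { completion("fake-token") }
    }
}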
In cases like this I always find it easiest to write the code synchronously and get it running on the UI thread first, correctly, just for debugging. Then, move the operations to separate threads and make sure you handle concurrency.
In this case the perfect mechanism for concurrency is a semaphore; the authentication operation clears the semaphore when it is done, and all the other operations are blocking on it. Once authentication is done, floodgates are open.
The relevant functions are dispatch_semaphore_create() and dispatch_semaphore_wait() from the Grand Central Dispatch documentation: https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/doc/uid/TP40008079-CH2-SW2
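In newer Swift syntax the gate looks roughly like this (names are placeholders; each waiter re-signals so that more than one blocked request can get through):

import Foundation

let authDone = DispatchSemaphore(value: 0)

// Authentication, running in the background:
DispatchQueue.global().async {
    // ... perform the auth request and store the access token ...
    authDone.signal()                 // open the floodgates
}

// Every dependent request does this before touching the token:
DispatchQueue.global().async {
    authDone.wait()                   // blocks until authentication signals
    authDone.signal()                 // pass the baton so other waiters wake up too
    // safe to use the access token now
}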
Another excellent solution is to create a queue with a barrier:
A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.
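A minimal sketch of those barrier mechanics in newer Swift syntax (placeholder work in the blocks):

import Foundation

let apiQueue = DispatchQueue(label: "api.requests", attributes: .concurrent)

apiQueue.async { /* request A */ }
apiQueue.async { /* request B */ }

apiQueue.async(flags: .barrier) {
    // Runs alone, only after A and B (and anything else submitted earlier) have finished.
}

apiQueue.async { /* request C -- waits until the barrier block has completed */ }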
Looks like you got it running with a semaphore, nicely done!
Use blocks... 2 ways that I do it:
First, a block inside of a block...
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        // this won't run until the first one has finished
        [myCommKit adjustSomething:thingToAdjust withCallback:^(ReturnCode returnCode, NSDictionary *successCode) {
            if (successCode) {
                // this won't run until both the first and then the second one finished
            }
        }];
    }
}];
// don't be confused.. anything down here will run instantly!!!!
The second way is a method inside of a block:
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        [self doNextThingAlsoUsingBlocks];
    }
}];
Either way, any time I do async communication with my server I use blocks. You have to think differently when writing code that communicates with a server. You have to force things to go in the order you want and wait for the return success/fail before doing the next thing. And getting used to blocks is the right way to think about it. It could be 15 seconds between when you start the block and when it gets to the callback and executes the code inside. It could never come back if they're not online or there's a server outage.
Bonus way.. I've also sometimes done things using stages:
switch (serverCommunicationStage) {
    case FIRST_STAGE: {
        serverCommunicationStage = FIRST_STAGE_WAITING;
        // either have a block in here or call a method that has a block
        [myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
            // in the callback of this async call
            serverCommunicationStage = SECOND_STAGE;
        }];
        break;
    }
    case FIRST_STAGE_WAITING: {
        // this just waits for the first step to complete
        break;
    }
    case SECOND_STAGE: {
        // either have a block in here or call a method that has a block
        break;
    }
}
Then in your draw loop or somewhere keep calling this method. Or set up a timer to call it every 2 seconds or whatever makes sense for your application. Just make sure to manage the stages properly. You don't want to accidentally keep calling the request over and over. So make sure to set the stage to waiting before you enter the block for the server call.
I know this might seem like an older school method. But it works fine.