How do I simulate a "tick" when testing components with embedded Q.js promises?

My app components often call upon dependent components that sport asynchronous methods returning Q.js promises. I'd like to write synchronous tests of such outer components whenever possible ... mostly because synchronous tests are more readable, but also because it can be almost impossible to know when a dependent component is "ready" (as discussed below).
I've designed the dependent components so I can configure them to behave synchronously when under test. But their APIs still return Q.js promises. Even though such a promise will be fully resolved "immediately" (e.g., return Q(some_data);), Q guarantees that the promise won't actually resolve until the next tick. This (properly) ensures asynchronous behavior even when the time-to-resolution is effectively zero.
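For example (a minimal sketch; the array logging is just to make the ordering visible):

    var Q = require('q');

    var log = [];
    Q(42).then(function (value) { log.push(value); });
    log.push('sync');
    // log is ['sync'] at this point; the then-handler only runs on a
    // later tick, so a synchronous assertion on its effect would fail.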
I get it.
But that means I can't write synchronous tests for the app components, and I can't control when the ready-to-go promises resolve. I can't test the code at all when the dependent component doesn't expose the promise to the caller ... which it should not do when the method is meant to be fire-and-forget, as is often the case.
It would be great if my test could tell Q that a "tick" had occurred, thus causing it to attempt to resolve queued promises. This idea is inspired by Angular's $q, which has this feature baked in for just this purpose (you call $scope.$apply).
I don't see any way to trigger a "tick" in Q today. For sure I do NOT want to monkey-patch setTimeout!
Is there a way I don't know about? Would this be a good feature?

It's an intriguing idea. Q does not provide any way to force a flush of the event queue today. There are probably good security and safety reasons not to provide this capability in most cases, and I would certainly encourage writing asynchronous tests for asynchronous systems. I am interested in hearing feedback on the idea from our panel of active contributors, so I have filed an issue against ASAP, Q's next-generation event-queue implementation (linked).

Related

Do Dart streams come with extra overhead?

I have a general efficiency question about Dart streams.
I have a project that makes some use of them, but it has been proposed that we convert nearly everything (functions and data) to Dart streams, in order to achieve a fully reactive architecture.
I don't know how streams really work under the hood, so I don't really know if this kind of design comes with any kind of memory or computational overhead.
Thanks for your attention to this question.
There is an overhead. It's not necessarily big, but it's there.
Streams have a well-defined asynchronous behavior, and it's documented how they react to listeners being added, paused or cancelled, even if that happens while an event is being delivered (because, most often, that is when it happens).
Streams are asynchronous, which means there is a delay between adding an event to the stream (through a StreamController), and that event being received by the listener. That delay makes it necessary to store (buffer) the event, schedule a microtask, and then unbuffer the event and deliver it in that later microtask. Scheduling a microtask costs. There might be zones involved, which can cost extra.
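A tiny sketch of that delay (standalone Dart; the prints just make the ordering visible):

    import 'dart:async';

    void main() {
      final controller = StreamController<int>();
      controller.stream.listen((event) => print('received $event'));
      controller.add(1);
      print('added 1');
      // Prints "added 1" before "received 1": the event is buffered
      // and delivered in a later microtask, not during the add() call.
    }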
On top of that, the stream needs to be able to react to pause and cancel in a timely manner, which means that each event delivery is also flanked by extra checks of whether the listener has paused or cancelled. It's not a lot of overhead, but it's there.
For single-subscription streams, that's about it.
For broadcast streams, which can have multiple listeners, there can be a little extra overhead to handle new listeners being added while delivering the event. Again, not a lot, but it's there. The state-space for a stream is actually quite complicated.
(You can create "a synchronous StreamController" which delivers events "immediately", but most of the time, you shouldn't. Those are not for avoiding asynchrony; they are for avoiding extra asynchronous delays when propagating already-asynchronous events, and should be used very carefully to avoid breaking code that assumes it won't get events in the middle of something else. A properly implemented reactive framework will use such controllers in its implementation, but that will not get rid of the inherent delay of delivering the original asynchronous event.)
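For contrast, a sketch of the sync variant (again, usually not what you want):

    import 'dart:async';

    void main() {
      final controller = StreamController<int>(sync: true);
      controller.stream.listen((event) => print('received $event'));
      controller.add(1); // delivered during this call: "received 1" prints first
      print('added 1');
    }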
Now, performance is not absolute. Using streams everywhere might make your life easier, and if the performance is good enough for your application (it's not dominating the actual computations), then the increased development speed and maintainability might pay for itself. You should measure (and have repeatable benchmarks to measure) before making a decision about an implementation strategy based on performance alone.

How to understand Dart async operations?

As we know, Dart is a single-threaded language. So according to the documentation, we can use Future/Stream to implement an async operation, which sends the time-consuming operation to the event queue.
What confuses me is where the event queue runs. Does it run on the Dart thread? If yes, it would block the app.
Another question: is the event queue a FIFO queue? Say I have two operations, a networking request that needs 1 minute and a click event. Both operations will be sent to the event queue.
So will the click event be blocked by the networking request, because the queue is FIFO?
So where does the event queue run?
Thank you very much!
One thing to note is that asynchrony and multithreading are two different things. Dart uses Futures and async/await to achieve asynchrony, but Dart is still inherently a single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
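A sketch of exactly that, with delays standing in for the network request and the click (durations are arbitrary):

    Future<void> main() async {
      // Simulate a slow network request and a fast click event.
      Future<void>.delayed(Duration(seconds: 2))
          .then((_) => print('network request done'));
      Future<void>.delayed(Duration(milliseconds: 100))
          .then((_) => print('click handled'));
      // 'click handled' prints first: starting the slow future earlier
      // does not block the fast one.
    }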
As a side note, it's worth noting that Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.
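A minimal sketch of that mechanism (the worker function and the message are illustrative):

    import 'dart:isolate';

    // Runs in a separate isolate: no memory is shared with the caller.
    void worker(SendPort sendPort) {
      sendPort.send('hello from the isolate');
    }

    Future<void> main() async {
      final receivePort = ReceivePort();
      await Isolate.spawn(worker, receivePort.sendPort);
      print(await receivePort.first); // hello from the isolate
    }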

What is Fuseable interface in Reactor project for?

There are many usages of the Fuseable interface in the Reactor source code, but I can't find any reference explaining what it is. Could someone explain its purpose?
The Fuseable interface, and the interfaces nested within it, define the contracts used for stream fusion. Stream fusion is a reactive-streams optimisation.
Without any such optimisation (in "normal" execution if you will), each reactive operator:
Subscribes to a previous operator in the chain
Is notified when the operator it subscribed to emits a value (or completes)
Performs its operation
Notifies its subscribers
...and then the cycle repeats for all operators. This is fantastic for making sure everything stays non-blocking, but all of those asynchronous calls come with some amount of overhead.
"Stream fusion" (or "operator fusion") significantly reduces this overhead by performing two or more of the operations in one chunk (fusing them together as one unit), passing values between them using a Queue or similar rather than via subscriptions, eliminating this overhead. It's not always possible of course - it can't be done this way if running in parallel, when certain side effects come into play, etc. - but a neat optimisation when it is possible.

NSOperation, start vs main

According to Apple document on NSOperation, we have to override main method for non-concurrent operations and start method for concurrent operations. But why?
First, keep in mind that "concurrent" and "non-concurrent" have somewhat specialized meanings in NSOperation that tend to confuse people (and are used synonymously with "asynchronous/synchronous"). "Concurrent" means "the operation will manage its own concurrency and state." "Non-concurrent" means "the operation expects something else, usually a queue, to manage its concurrency, and wants default state handling."
start does all the default state handling. Part of that is that it sets isExecuting, then calls main, and when main returns, it clears isExecuting and sets isFinished. Since you're handling your own state, you don't want that (you don't want exiting main to finish the operation). So you need to implement your own start and not call super. Now, you could still have a main method if you wanted, but since you're already overriding start (and that's the thing that calls main), most people just put all the code in start.
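For concreteness, a minimal sketch of that pattern (the finish helper is our own name, not an NSOperation method; the KVO notifications are needed because the queue observes these keys):

    #import <Foundation/Foundation.h>

    @interface MyAsyncOperation : NSOperation
    @end

    @implementation MyAsyncOperation {
        BOOL _executing;
        BOOL _finished;
    }

    - (BOOL)isConcurrent { return YES; }
    - (BOOL)isExecuting  { return _executing; }
    - (BOOL)isFinished   { return _finished; }

    - (void)start {
        // Deliberately NOT calling [super start]: we manage state ourselves.
        [self willChangeValueForKey:@"isExecuting"];
        _executing = YES;
        [self didChangeValueForKey:@"isExecuting"];
        // Kick off the async work here; call [self finish] when it completes.
    }

    - (void)finish {
        [self willChangeValueForKey:@"isExecuting"];
        [self willChangeValueForKey:@"isFinished"];
        _executing = NO;
        _finished = YES;
        [self didChangeValueForKey:@"isExecuting"];
        [self didChangeValueForKey:@"isFinished"];
    }
    @end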
As a general rule, don't use concurrent operations. They are seldom what you mean. They definitely don't mean "things that run in the background." Both kinds of operations can run in the background (and neither has to run in the background). The question is whether you want default system behavior (non-concurrent), or whether you want to handle everything yourself (concurrent).
If your idea of handling it yourself is "spin up an NSThread," you're almost certainly doing it wrong (unless you're doing this to interface with a C/C++ library that requires it). If it's creating a queue, you're probably doing it wrong (NSOperation has all kinds of features to avoid this). If it's almost anything that looks like "manually handling doing things in the background," you're probably doing it wrong. The default (non-concurrent) behavior is almost certainly better than what you're going to do.
Where concurrent operations can be helpful is in cases where the API you're using already handles concurrency for you. A non-concurrent operation ends when main returns. So what if your operation wraps an async thing like NSURLConnection? One way to handle that is to use a dispatch group and then call dispatch_group_wait at the end of your main so it doesn't return until everything's done (sketched below). That's ok. I do it all the time. But it blocks a thread that wouldn't otherwise be blocked, which wastes some resources and in some elaborate corner cases could lead to deadlock (really elaborate; Apple claims it's possible and they've seen it, but I've never been able to get it to happen even on purpose).
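That approach might look like this in main (the completion-based request method is hypothetical):

    // Non-concurrent operation: block main until the async call completes.
    - (void)main {
        dispatch_group_t group = dispatch_group_create();
        dispatch_group_enter(group);
        [self startRequestWithCompletion:^{   // hypothetical async API
            dispatch_group_leave(group);
        }];
        // main must not return before the work is done, or the operation
        // would be marked finished prematurely.
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    }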
So another way you could do it is to define yourself as a concurrent operation, and set isFinished by hand in your NSURLConnection delegate methods. Similar situations happen if you're wrapping other async interfaces like Dispatch I/O, and concurrent operations can be more efficient for that.
(In theory, concurrent operations can also be useful when you want to run an operation without using a queue. I can kind of imagine some very convoluted cases where this makes sense, but it's a stretch, and if you're in that boat, I assume you know what you're doing.)
But if you have any question at all, just use the default non-concurrent behavior. You can almost always get the behavior you want that way with little hassle (especially if you use a dispatch group), and then you don't have to wrap your brain around the somewhat confusing meaning of "concurrent" in the docs.
I would assume that concurrent vs. non-concurrent is not just a flag somewhere but a very substantial difference. By having two different methods, it is made absolutely sure that you don't use a concurrent operation where you should use a non-concurrent one or vice versa.
If you get it wrong, your code will simply not work, because of this design. That's what you want, because then you fix it immediately. If there were only one method, using concurrent where you need non-concurrent would lead to very subtle errors that might be very hard to find, while non-concurrent where you need concurrent would lead to performance problems that you also might miss.

How to cleanly encapsulate and execute in sequence a series of background tasks in iOS?

My app includes a back-end server, with many transactions which must be carried out in the background. Many of these transactions require several steps that must run strictly in sequence.
For example, do a query, use the result to do another query, create a new back-end object, then return a reference to the new object to a view controller object in the foreground so that the UI can be updated.
A more specific scenario would be to carry out a sequence of AJAX calls in order, similar to this question, but in iOS.
This sequence of tasks is really one unified piece of work. I did not find existing facilities in iOS that allowed me to cleanly code this sequence as a "unit of work". Likewise I did not see a way to provide a consistent context for the "unit of work" that would be available across the sequence of async tasks.
I recently had to do some JavaScript work and had to learn to use the Promise concept that is common in JS. I realized that I could adapt this idea to iOS and Objective-C. The results are here on GitHub. There is documentation, code and unit tests.
A Promise should be thought of as a promise to return a result object (id) or an error object (NSError) to a block at a future time. A Promise object is created to represent the asynchronous result. The asynchronous code delivers the result to the Promise and then the Promise schedules and runs a block to handle the result or error.
If you are familiar with Promises on JS, you will recognize the iOS version immediately. If not, check out the Readme and the Reference.
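Conceptually, usage looks something like this (hypothetical names only - this is not the library's actual API; see the Readme for that):

    // Hypothetical names; this illustrates the concept, not the real API.
    Promise *promise = [backend fetchUserWithId:@42];  // returns immediately
    [promise then:^(id result, NSError *error) {
        // Runs later, once the async work delivers a result or an error.
        if (error) { NSLog(@"failed: %@", error); return; }
        [viewController showUser:result];
    }];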
I've used most of the usual suspects, and I have to say that for me, Grand Central Dispatch is the way to go.
Apple obviously care enough about it to re-write a lot of their library code to use completion blocks.
IIRC, Apple have also said that GCD is the preferred implementation for multitasking.
I also remember that some of the earlier options have since been re-implemented using GCD under the hood, so if you're not already attached to something else, go GCD!
BTW, I used to find writing the block signatures a real pain, but if you just hit return when the autocomplete placeholder is selected, Xcode fills in the whole signature for you. What could be sweeter than that?
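Applied to the question's scenario (query, dependent query, create an object, update the UI), a GCD version might look like this - all the backend and view-controller methods here are hypothetical:

    // Hypothetical backend/viewController methods; this shows the shape only.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
        id first     = [backend runQuery:firstQuery];
        id second    = [backend runQuery:[backend queryFromResult:first]];
        id newObject = [backend createObjectFrom:second];
        dispatch_async(dispatch_get_main_queue(), ^{
            [viewController showNewObject:newObject];  // UI on the main thread
        });
    });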
