Apologies if this question is deemed inappropriate for SO, but I was wondering if any of you know if there's a significant performance difference between streaming futures via asStream and consuming futures normally via then. Would you expect a general performance difference between the following two operations?
Operation 1
expensiveOperation().asStream().listen((res) {
  doSomething(res);
});
Operation 2
expensiveOperation().then((res) {
  doSomething(res);
});
asStream just allocates a wrapper object and forwards the future's outcome to the stream. All in all you probably won't notice the difference.
If you happen to have a case where you actually measure a slow-down, file a bug on http://dartbug.com. There are still ways to make the wrapper cheaper.
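For reference, the wrapper is conceptually just a single-subscription stream fed by the future's outcome, along the lines of this sketch (not the actual SDK source; futureAsStream is a made-up name):

import 'dart:async';

// A rough sketch of what Future.asStream() does: allocate a controller,
// forward the future's value or error to it, then close the stream.
Stream<T> futureAsStream<T>(Future<T> future) {
  final controller = StreamController<T>();
  future.then((T value) {
    controller.add(value);
    controller.close();
  }, onError: (Object error, StackTrace stack) {
    controller.addError(error, stack);
    controller.close();
  });
  return controller.stream;
}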
I am trying to wrap my head around Flux Sinks and cannot see the higher-level picture. When using Sinks.Many<T>#tryEmitNext, the method tells me whether there was contention, and it is up to me to decide what to do on failure (fail fast, retry via a handler, and so on).
But is there a simple construct that lets me safely emit elements from multiple threads? For example, instead of telling me that there was contention and that I should try again, it could add elements to a queue (MPMC, MPSC, etc.) and only notify me when the queue is full.
I can add a queue myself to alleviate the problem, but this seems like a common use case, so I guess I am missing something here.
I hit the same issue when migrating from Processors, which support safe emission from multiple threads. I use the custom EmitFailureHandler below to busy-loop on contention, as suggested by the EmitFailureHandler docs.
public static EmitFailureHandler retryOnNonSerializedElse(EmitFailureHandler fallback) {
    return (signalType, emitResult) -> {
        if (emitResult == EmitResult.FAIL_NON_SERIALIZED) {
            LockSupport.parkNanos(10);
            return true; // returning true tells emitNext to retry the emission
        } else {
            return fallback.onEmitFailure(signalType, emitResult);
        }
    };
}
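For example, it can be plugged into emitNext like this (the sink and the emitted payload below are illustrative, not taken from the original code):

import reactor.core.publisher.Sinks;
import reactor.core.publisher.Sinks.EmitFailureHandler;

Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

// Safe to call from several threads: FAIL_NON_SERIALIZED results are retried
// by the handler above; any other failure falls through to FAIL_FAST.
sink.emitNext("event", retryOnNonSerializedElse(EmitFailureHandler.FAIL_FAST));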
There are various confusing aspects of the 3.4.0 implementation:
There is an implication that unless the unsafe variant is used, the sink supports serialized emission, but actually all the serialized version does is fail fast in case of concurrent emission.
The sink provided by Flux.create does support thread-safe emission.
I hope the library will offer a solidly engineered alternative to this at some point.
To process HTTP requests, we have to make blocking calls (e.g. JDBC calls) as part of a Mono/Flux-based process. Our current plan looks something like this:
// I renamed getSomething to processJaxrsHttpRequest
CompletionStage<String> processJaxrsHttpRequest(String input) {
    return Mono.just(input)
        .map(in -> process(in))
        .flatMap(str -> Mono.fromCallable(() -> jdbcCall(str)).subscribeOn(fixedScheduler))
        .flatMap(str -> asyncHttpCall(str))
        .flatMap(str -> Mono.fromCallable(() -> jdbcCall(str)).subscribeOn(fixedScheduler))
        .toFuture();
}
where fixedScheduler is used concurrently across HTTP requests.
We were hoping to get some feedback on this strategy for handling blocking calls within a decent number of fluxes. Of course, we understand that if all our requests were flowing through these blocking calls then we might as well not use Reactor (outside of the admittedly nice processing API).
Update: Thanks bsideup for this answer. However, I should have been a little more specific with my questions.
My overall question is how to effectively have a blocking call used across multiple fluxes, where these fluxes can be created and subscribed to in large numbers. We tried the suggested approach, but it results in an explosion of threads and quickly OOMs. So we are thinking of using a shared scheduler. So, here are my questions.
Is using a shared scheduler (fixedScheduler) what you would suggest in the situation I describe? If not, will you point me in any directions?
If using a shared scheduler is good, would this be a good implementation of it: Schedulers.newParallel("blocking-scheduler", maxNumThreads)?
Update 2: Just dug a bit into Schedulers#newParallel and realized that won't work since it 'rejects' blocking calls.
Really appreciate any tips!
While subscribeOn is indeed one way of handling blocking calls, and your usage is okay, you can also use publishOn.
It moves processing to the provided Scheduler, unless another publishOn is specified downstream:
CompletionStage<String> getSomething(String input) {
    return Mono.just(input)
        .map(in -> process(in)) // process must be non-blocking, or go after publishOn
        .publishOn(Schedulers.boundedElastic())
        .map(this::jdbcCall)
        .flatMap(str -> asyncHttpCall(str))
        .publishOn(Schedulers.boundedElastic())
        .map(this::jdbcCall)
        .toFuture();
}
As you can see, you can continue using async calls too. Just make sure you're not blocking non-blocking schedulers (in this example, publishOn is used again after flatMap because asyncHttpCall may complete on a non-blocking scheduler).
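If the goal is to share one capped pool for the blocking JDBC work across all requests (rather than creating a scheduler per flux), one option is to create a single bounded-elastic scheduler up front and reuse it. This is only a sketch; the class name, field name, and sizes are illustrative, not something prescribed by Reactor:

import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public final class BlockingSchedulers {

    // Created once and shared by every request, so the number of threads
    // dedicated to blocking calls stays bounded; excess tasks are queued
    // up to the given capacity instead of spawning new threads.
    public static final Scheduler JDBC =
            Schedulers.newBoundedElastic(32, 10_000, "jdbc-blocking");

    private BlockingSchedulers() {
    }
}

The pipeline above would then use publishOn(BlockingSchedulers.JDBC) (or subscribeOn, as in the question) instead of Schedulers.boundedElastic().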
Let's say you have a complex Lua application, and there is some base function that different parts of your code call repeatedly. It's a stateless function with little to no side effects, and fairly simple.
How does the virtual machine handle this? Does it queue up all the calls and let them run one by one, waiting for the function to return before calling it again? Or does it do some trickery to avoid this sort of situation? What if the function had some big side effects, like print()?
Lua is single-threaded, so every function call must return before the next one is called. If a function blocks, then so does the VM. The only ways around that are coroutines, Lua Lanes, or C threads.
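To illustrate the coroutine option (a minimal, self-contained sketch, not tied to any particular application): control is handed back and forth cooperatively, but the VM still runs exactly one call at a time.

-- A long-running task split into steps with coroutine.yield().
local worker = coroutine.create(function()
  for i = 1, 3 do
    print("worker step", i)
    coroutine.yield() -- hand control back to whoever resumed us
  end
end)

-- The caller interleaves its own work with the worker's steps; nothing
-- here runs in parallel, it is purely cooperative scheduling.
while coroutine.status(worker) ~= "dead" do
  coroutine.resume(worker)
  print("caller doing its own work")
end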
Examples:
Asynchronous method with its own dispatching:
// Library
func asyncAPI(callback: Result -> Void) {
    dispatch_async(self.queue) {
        ...
        callback(result)
    }
}
// Caller
asyncAPI() { result in
    ...
}
Synchronous method with exposed dispatch queue:
// Library
func syncAPI() -> Result {
    assert(isRunningOnCorrectQueue())
    ...
    return result
}
// Caller
dispatch_async(api.queue) {
    let result = api.syncAPI()
    ...
}
These two examples behave the same, but I am looking to learn whether one of them ends up complicating a larger codebase more than the other, especially when there is a lot of asynchrony.
I would argue against both of the patterns you propose.
For the first pattern (where the API manages its own backgrounding) I see little or no benefit to doing it this way, as opposed to leaving it to the caller. If you want to use a private, serial queue to protect data (or any other sort of critical section) internal to your API, that's fine, but that queue should be private, and it should specifically not target any public, non-global-concurrent queue (note: it should especially not target the main queue). Ideally, the primary implementation of your API would also take a second parameter, so callers can specify on which queue to invoke the callback. (People can work around the lack of such a parameter by passing a callback block that re-dispatches to their desired queue, but I think that's clunkier than having an extra, optional parameter.) This puts the API consumer in complete control of the concurrency, while preserving your freedom to use queues internally to protect state.
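A sketch of that shape, in the same pre-Swift-3 GCD style as the examples above (the callbackQueue parameter and the doWork() helper are illustrative, not part of any real API):

// Library
func asyncAPI(callbackQueue: dispatch_queue_t, callback: Result -> Void) {
    dispatch_async(self.queue) {          // private serial queue guarding internal state
        let result = doWork()             // hypothetical internal work
        dispatch_async(callbackQueue) {   // deliver the result where the caller asked
            callback(result)
        }
    }
}

// Caller
asyncAPI(dispatch_get_main_queue()) { result in
    ...
}

A default value for callbackQueue (e.g. a global concurrent queue) would make the extra parameter optional for callers who don't care where the callback runs.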
As to the second approach, it's my opinion that we all should avoid creating new synchronous, blocking API. When you provide a synchronous, blocking API and don't provide a callback-based version, that means that you have denied consumers of your API any opportunity to avoid blocking. When you only provide synchronous, blocking API, then if someone wants to call your API in the background, at least one thread (in addition to any additional threads that your API consumes behind the scenes) will be consumed from the finite number of threads available to each process. (In the worst case this can lead to starvation conditions that are effectively deadlocks.)
Another red flag with this second example is that it vends a queue; any time an API vends a queue, something is amiss. As mentioned, if you want to use a private serial queue to protect state or other critical sections internal to your API, go for it, but don't expose that queue to the outside world. If nothing else, it unnecessarily exposes details of your implementation. In looking at the system framework headers, I couldn't find a single case where a dispatch_queue_t was vended where it wasn't immediately obvious that the intent was for the API consumer to push in the queue, and not read it out.
It's also worth mentioning that these patterns are problematic regardless of whether your workload is CPU-bound or IO-bound. If it's CPU-bound, then not managing your own dispatch gives consumers of the API explicit control over how this CPU work is executed. If your workload is IO-bound, then you should use the OS- and libdispatch-provided asynchronous IO mechanisms (dispatch_io, dispatch_sources, kevent, etc) to avoid consuming a thread (or more than one) for the duration of your work.
Another answer here implied that forcing consumers to manage their own concurrency leads to "boilerplate" code. If you feel that the burden of API consumers potentially having to wrap calls to your API with dispatch_async is too great, then feel free to provide a convenience overload that dispatches to the default global concurrent queue, but please always leave the version that allows API consumers the ability to explicitly manage their own concurrency.
If, on the other hand, all this is internal to the implementation, and not part of the public API, then do whatever is most expedient, knowing that you can refactor the implementation behind the public API any time in the future.
As you said, the two generally accomplish the same thing, but the first is preferable in most scenarios. There are several benefits to using the first method.
The API is simpler. You simply call the method and provide code for the callback block.
Less boilerplate code: no typing dispatch_async every time you want to call it, as it is already included in the method itself.
Less room for bugs/errors. By wrapping the asynchronous logic inside the method itself, you ensure that it is called on the right queue internally without the caller having to worry about any of that.
Touching on the last point, you also have finer control over the queue itself. Let's say you are trying to perform certain tasks on a particular queue. It is much simpler to wrap the code in a GCD call on that queue a single time than to remember to reuse that same queue every time you want to call the method.
I'm going over some older Dart code, addressing breaking changes with the latest Dart SDK. This one I can't figure out:
Future<DateTime> get lastsave =>
    client.lastsave.transform((int unixTs) =>
        new DateTime.fromMillisecondsSinceEpoch(unixTs * 1000, isUtc: true));
which now fails with:
The method 'transform' is not defined for the class 'Future<List<int>>'
From what I understand, the purpose of Future.transform() was to apply a synchronous transformation (see e.g. this discussion thread). I.e. convert the async call to a sync call and return the value.
Has Future.transform been replaced with something else?
Must have been quite a while since that code was updated ;)
Just replace transform with then and it should work.
From https://groups.google.com/a/dartlang.org/forum/#!topic/misc/Boch2XH9Tmk
We have also improved the Future class, and made it simpler to use. One simple “then” method lets you apply asynchronous or synchronous functions to the result of a future, merging the three methods “chain”, “transform” and “then”. Streams and Futures should make asynchronous Dart programs easier to write and read, and should reduce some types of programming errors.
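Applied to the snippet from the question, the mechanical substitution looks like this (whether the (int unixTs) parameter still matches what client.lastsave returns nowadays is a separate question, given the Future<List<int>> in the error message):

Future<DateTime> get lastsave =>
    client.lastsave.then((int unixTs) =>
        new DateTime.fromMillisecondsSinceEpoch(unixTs * 1000, isUtc: true));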