What is the equivalent of RxJava's PublishSubject in Project Reactor? I need a Flux on which I can call a next()-style method.
They are called Sinks. See Reactor reference: https://projectreactor.io/docs/core/release/reference/#processors
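For example, here is a rough sketch (using the Sinks API from Reactor 3.4+; the variable names are purely illustrative) of something that behaves much like a PublishSubject, where tryEmitNext plays the role of onNext():

import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

// a multicast sink: roughly the PublishSubject role in Reactor
Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

// read side: expose the sink as a Flux and subscribe to it
Flux<String> flux = sink.asFlux();
flux.subscribe(System.out::println);

// write side: push values imperatively, the Reactor counterpart of onNext()
sink.tryEmitNext("hello");
sink.tryEmitNext("world");
sink.tryEmitComplete();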
It's been hammered into my head that I shouldn't use ThreadLocal with Reactor. But I want to know if I can use ThreadLocal within a single execution of a reactor function.
Specifically, when inside a Spring Webflux Controller method, can the thread ever change if I don't invoke a reactor function?
Please let me know if the following is correct:
@GetMapping
public Mono<String> someControllerMethod() {
    // Thread 1 executing
    ThreadLocal<String> USER_ID = new ThreadLocal<>();
    USER_ID.set("1");

    Thread.sleep(...);
    someMethod();

    // Thread 1 executing
    assertEquals(USER_ID.get(), "1"); // this will ALWAYS be true

    return Mono.just("hello ")
        // this is the only time a new thread executes and USER_ID is not set
        .flatMap(s -> s + USER_ID.get());
}

void someMethod() {
    // Thread 1 executing
    assertEquals(USER_ID.get(), "1"); // this will ALWAYS be true
}
Is my understanding above correct?
Edit: revised this section for clarity.
In a Reactor chain of many operators, each operator (e.g. map) could run on a different thread, and even different "instances" of the same operator (e.g. the map of url N) could run on different threads. But once we're inside one instance of an operator, will it always be the same thread (i.e. is it safe to declare a ThreadLocal inside one instance of a Reactor operator)?
// main thread
Flux.fromIterable(urls)
    .map(url -> {
        // each of these instances runs on a different thread,
        // but is declaring a ThreadLocal here safe to do?
        ThreadLocal<String> URL = new ThreadLocal<>();
        URL.set(url);

        // Will URL always be set deep in the call stack?
        someOtherMethod();

        // Will URL always be set at the end?
        return URL.get();
    })
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe();

void someOtherMethod() {
    URL.get(); // will this ALWAYS be set?
}
Basically, I'd like to know whether it's safe to use ThreadLocal-based objects such as io.grpc.Context within a single execution of a Reactor operator instance.
It's been hammered into my head that I shouldn't use ThreadLocal with Reactor.
You mustn't use ThreadLocal in a reactive chain with Reactor (and a reactive chain is the only sensible way to use that library). In a reactive chain, the thread might change whenever you invoke an asynchronous operator, so a single chain could have operations executing on many different threads throughout. Your ThreadLocal might work sometimes, but it's unreliable: introduce an async operator that switches the thread (say, a web request executed on the Netty worker pool) and you've introduced a subtle, weird bug that's hard to track down (you're arbitrarily and unintentionally leaking information from one reactive chain to another). In short, it's incredibly bad practice to tie your reactive chains to a single thread; while it might seem to work initially, you're going to eventually run into a lot of problems if you do.
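As a rough illustration (not from the original question; delayElement here simply stands in for any asynchronous, thread-switching operator), a ThreadLocal set upstream is typically gone once the chain hops threads:

ThreadLocal<String> USER_ID = new ThreadLocal<>();

Mono.fromCallable(() -> {
        USER_ID.set("1");                    // set on whichever thread runs this callable
        return "hello ";
    })
    .delayElement(Duration.ofMillis(10))     // continues on a parallel scheduler thread
    .map(s -> s + USER_ID.get())             // different thread, so get() will usually return null here
    .subscribe(System.out::println);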
That being said, you don't really have a reactive chain in the above method - it's incredibly weird. If you're returning a Mono<String> to try to make the method reactive, then you need to be executing everything as part of a reactive chain. What you're actually doing is:
Using synchronous & blocking logic, a complete no-no as it ties up an event loop thread which isn't allowed;
Calling another method that's not part of a reactive chain;
Using a JUnit assertion (assertEquals) in a controller class;
Wrapping up a value to return in Mono.just();
Making one flatMap call at the end (which won't work as written, since it's not even mapping to a publisher to flatten; you'd have to use map instead.)
...so while using your ThreadLocal is technically "safe" in this context, from a wider perspective the implementation makes no sense at all. You realistically have two options: either make the entire method non-blocking and properly reactive, not just wrapping blocking logic in a reactive publisher, or make the whole controller return a standard object and forget the reactive element entirely.
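To make the first option concrete, here's a minimal sketch of what a properly reactive version might look like (how the user id is obtained is just a stand-in; the point is that the value travels through the chain rather than through a ThreadLocal, so it doesn't matter which thread runs each operator):

@GetMapping
public Mono<String> someControllerMethod() {
    return Mono.just("1")                     // stand-in for however the user id is really obtained
        .map(userId -> "hello " + userId);    // no ThreadLocal, no blocking, thread-agnostic
}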
Follow-up:
once we're inside one instance of an operator, will it always be the same thread (i.e. is it safe to declare a ThreadLocal inside one instance of a Reactor operator)?
No, there's at least two cases I can think of where that wouldn't be safe:
Operators can be nested. Once you're "inside" a certain operator, there's no reason why other operators can't be used that would also switch thread.
Code in other threads can be explicitly started even if there's no operator.
I don't think you can wind up with the thread changing under you in cases other than those two, but I could well be missing something, and it's still a rather delicate scenario (someone could break it quite easily). If you must use a ThreadLocal for some reason, then I'd still seriously consider whether you should be using Reactor in this context.
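As a sketch of the first case (fetchUrl is an assumed asynchronous call, e.g. via WebClient, not something from the question), nesting operators inside a callback can move the rest of that callback's chain onto another thread:

Flux.fromIterable(urls)
    .flatMap(url ->
        fetchUrl(url)                          // assumed async call returning Mono<String>
            .publishOn(Schedulers.parallel())  // nested operator that explicitly hops threads
            .map(body ->
                // usually a different thread than the one that entered flatMap,
                // so a ThreadLocal set "inside" the outer operator wouldn't be visible here
                url + " handled on " + Thread.currentThread().getName()))
    .subscribe(System.out::println);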
I am trying to wrap my head around this. I must be understanding it incorrectly.
Example:
Future A() { .. }

Future B() async {
  await A();
  print("123");
}
Why does B need to return a Future?
Doesn't await make B() synchronous? I.e., it waits for A to completely finish and then executes the print statement.
Then what is the need for B to return a Future?
async and await don't make async execution sync. There is no way to do that.
All async and await do is make async code look more like sync code. It is just syntactic sugar. Everything that can be done with async and await can also be done without it.
Instead of deeply nested .then(...then(...then(...).catchError())).catchError(...) calls, you can use distinct statements, for loops, try, catch, and finally, which makes code easier to write, read, and reason about.
In Dart, the async modifier enables an extended syntax in asynchronous functions. Asynchronous functions usually cannot return their result immediately, because it isn't available until they complete; instead they return a wrapper for the future result, which in Dart is called a Future.
The extended syntax lets you write asynchronous operations in the same manner as you would write synchronous ones.
To allow both synchronous and asynchronous operations in a single function, the extended syntax provides the await operator.
With await, asynchronous operations look as if they were synchronous, because await suspends the function until the awaited operation has completed.
Is it possible to call Erlang functions (callback funs) from NIFs?
I read the docs (http://www.erlang.org/doc/man/erl_nif.html), but didn't find how to do that.
No, calling an Erlang function from a NIF isn't possible. You can either implement your functionality in an Erlang function that calls into a private NIF that returns a value indicating whether or not invoking a callback is necessary, or perhaps you could instead send a message to another process using enif_send and have it call the callback function for you.
Is it possible to use GCD without blocks? Is there a way to use GCD via the _f variants, as mikeash says in his post? I searched around and found no definitive answer either way. Is it possible or not?
If it's doable, please give an example.
/Selvin
Of course it is possible! By _f variants Mike just means the set of GCD functions with an _f suffix. They are alternatives to the usual GCD functions, but accept a user-defined function as a parameter instead of a block. There are plenty of them:
dispatch_async_f
dispatch_sync_f
dispatch_after_f
dispatch_apply_f
dispatch_group_async_f
dispatch_group_notify_f
dispatch_set_finalizer_f
dispatch_barrier_async_f
dispatch_barrier_sync_f
dispatch_source_set_registration_handler_f
dispatch_source_set_cancel_handler_f
dispatch_source_set_event_handler_f
They accept a dispatch_function_t parameter (instead of the usual dispatch_block_t), which is defined as follows:
typedef void (*dispatch_function_t)(void *);
As you can see, it takes an arbitrary user-supplied parameter (which could even be a function pointer) via the void * argument. So you can use dispatch_function_t even with functions that take no arguments at all; just write a wrapper function like so:
void func(void) {
    // do any calculations you want here
}

void wrapper_function(void *context) {
    func();
}

dispatch_async_f(queue, NULL, &wrapper_function);
Or pass a function pointer as the context parameter. Conversely, you can use the _f variants with user-defined functions that accept any number of arguments via varargs (variadic functions); just write a wrapper function for them as above. As you can see, the _f functions are a rather powerful mechanism: you are not limited to parameterless blocks when using GCD, you can use ordinary functions as well.
Yes you can, as stated in the article:
You can use GCD without blocks, via the _f variants provided for every GCD function that takes a block.
If you look at the GCD documentation you can check the variants. If you need a quick example, there are many on SO.
Using Silverlight 3 with RIA: What's the difference between the LoadOperation.Completed event and using a callback through the DomainContext.Load method? Both fire asynchronously and both provide access to the LoadOperation. When/why would I use one over the other?
Thanks :-)
There's no difference; the 2 options are offered for flexibility. Many times, the callback will suffice, but if you return the LoadOperation from a method, the caller could then choose to subscribe.
Note that even if the Load completes before you subscribe to the Completed event, your handler will still get called. We guarantee every subscriber to the event will be called.
Agreed that there is no difference in functionality. It's about coding style. If the work I have to do following completion of the query is simple, like binding data to a grid, I like to use the following syntax to inline the completion code rather than defining a separate method.
context.Load<EntityType>(query).Completed += (lo, args) =>
{
    myGrid.ItemsSource = ((LoadOperation)lo).Entities;
};
This has the cleanliness of synchronous code, but the code inside the braces will in fact be executed asynchronously.
Good luck!