Not able to receive onNext and onComplete calls on subscribed Mono - project-reactor

I was trying out the reactor library and I'm not able to figure out why the Mono below never comes back with an onNext or onComplete call. I think I'm missing something very trivial. Here's a sample:
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
        .subscribe(new Subscriber<MyUser>() {
            @Override
            public void onSubscribe(Subscription s) {
                System.out.println("Subscribed!" + Thread.currentThread().getName());
            }

            @Override
            public void onNext(MyUser myUser) {
                System.out.println("OnNext on thread " + Thread.currentThread().getName());
            }

            @Override
            public void onError(Throwable t) {
                System.out.println("onError!" + Thread.currentThread().getName());
            }

            @Override
            public void onComplete() {
                System.out.println("onCompleted!" + Thread.currentThread().getName());
            }
        });
}
private static class MyServiceService {
    private Repository myRepo = new Repository();

    public Mono<MyUser> save(String userId) {
        return myRepo.save(userId);
    }
}

private static class Repository {
    public Mono<MyUser> save(String userId) {
        return Mono.create(myUserMonoSink -> {
            // exe: an ExecutorService defined elsewhere in the test
            Future<MyUser> submit = exe.submit(() -> this.blockingMethod(userId));
            ListenableFuture<MyUser> myUserListenableFuture = JdkFutureAdapters.listenInPoolThread(submit);
            Futures.addCallback(myUserListenableFuture, new FutureCallback<MyUser>() {
                @Override
                public void onSuccess(MyUser result) {
                    myUserMonoSink.success(result);
                }

                @Override
                public void onFailure(Throwable t) {
                    myUserMonoSink.error(t);
                }
            });
        });
    }

    private MyUser blockingMethod(String userId) throws InterruptedException {
        Thread.sleep(5000);
        return new MyUser("blocking", userId);
    }
}
The code above only prints Subscribed!main. What I'm not able to figure out is why the future callback is not pushing values through myUserMonoSink.success.

The important thing to keep in mind is that a Flux or Mono is asynchronous, most of the time.
Once you subscribe, the asynchronous processing of saving the user starts in the executor, but execution also continues in your main code after .subscribe(...).
So the main thread exits, terminating your test before anything has been pushed to the Mono.
[sidebar]: when is it ever synchronous?
When the source of data is a Flux/Mono synchronous factory method, BUT with the added prerequisite that the rest of the chain of operators doesn't switch execution context. That could happen either explicitly (you use a publishOn or subscribeOn operator) or implicitly (some operators, like time-related ones, e.g. delayElements, run on a separate Scheduler).
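To make the sidebar concrete, here is a minimal example of the fully synchronous case (my own illustration, not part of the original answer):

// Synchronous source, no context switch: everything runs on the calling thread.
Mono.just("hello")
        .map(String::toUpperCase)
        .subscribe(s -> System.out.println(s + " on " + Thread.currentThread().getName()));
// prints "HELLO on main"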
Simply put, your source runs on the ExecutorService thread of exe, so the Mono is indeed asynchronous, while your snippet runs on main.
How to fix the issue
To observe the correct behavior of Mono in an experiment (as opposed to fully async code in production), several possibilities are available:
keep the subscribe with the System.out.println calls, but add a new CountDownLatch(1) that you countDown() inside onComplete and onError, then await() on the latch after the subscribe (see the sketch after this list).
use .log().block() instead of .subscribe(...). You lose the customization of what to do on each event, but log() will print those out for you (provided you have a logging framework configured). block() will revert to blocking mode and do pretty much what I suggested with the CountDownLatch above. It returns the value once available or throws an Exception in case of error.
instead of log() you can customize logging or other side effects using the .doOnXXX(...) methods (there's one for pretty much every type of event, plus combinations of events, e.g. doOnSubscribe, doOnNext...)
If you're doing a unit test, use StepVerifier from the reactor-test artifact. It will subscribe to the flux/mono and wait for events when you call .verify(). See the reference guide chapter on testing (and the rest of the reference guide in general).
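For illustration, here is a minimal sketch of the CountDownLatch option applied to the snippet above (my own wording, using the lambda-based subscribe so that demand is requested automatically):

CountDownLatch latch = new CountDownLatch(1);
service.save("id")
        .map(u -> new MyUser(u.getName().toUpperCase(), u.getId().toUpperCase()))
        .subscribe(
                user -> System.out.println("OnNext on thread " + Thread.currentThread().getName()),
                error -> {
                    System.out.println("onError! " + error);
                    latch.countDown();
                },
                () -> {
                    System.out.println("onCompleted!" + Thread.currentThread().getName());
                    latch.countDown();
                });
latch.await(); // blocks main until the Mono terminates (throws InterruptedException)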

The issue is that the onSubscribe method in your anonymous class does nothing: it never requests any data from the Subscription, so the publisher never emits.
If you look at the implementation of LambdaSubscriber (which backs the lambda-based subscribe variants), you'll see that it requests events on subscription.
It's also easier to extend BaseSubscriber, as it comes with predefined request-management logic.
So your subscriber implementation would be:
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
        .subscribe(new BaseSubscriber<MyUser>() {
            @Override
            protected void hookOnSubscribe(Subscription subscription) {
                System.out.println("Subscribed!" + Thread.currentThread().getName());
                request(1); // or requestUnbounded();
            }

            @Override
            protected void hookOnNext(MyUser myUser) {
                System.out.println("OnNext on thread " + Thread.currentThread().getName());
                // request(1); // needed here if requestUnbounded() wasn't called above
            }

            @Override
            protected void hookOnComplete() {
                System.out.println("onCompleted!" + Thread.currentThread().getName());
            }

            @Override
            protected void hookOnError(Throwable throwable) {
                System.out.println("onError!" + Thread.currentThread().getName());
            }
        });
Maybe it's not the best implementation; I'm new to reactor too.
Simon's answer has a pretty good explanation of how to test asynchronous code.

Related

Why is hookOnComplete() sometimes not called in my case?

I have a really complicated flux here, something like this:
private Object run(Flux<Pair<String, Flux<Record>>> flux) throws InterruptedException {
    BlockingQueue<Object> result = new SynchronousQueue<>();
    flux.subscribeOn(Schedulers.boundedElastic())
        .parallel(2, 1)
        .runOn(Schedulers.boundedElastic())
        .flatMap(pair -> pair.getRight().parallel()
            .runOn(Schedulers.boundedElastic())
            .flatMap(record -> stringToError(record))
            .doOnComplete(() -> System.out.println("Complete " + pair.getLeft()))) // #log
        .sequential()
        .buffer(1000)
        .subscribe(new BaseSubscriber<List<Error>>() {
            @Override
            protected void hookOnComplete() {
                result.offer(Boolean.TRUE);
            }

            @Override
            protected void hookOnError(Throwable throwable) {
                result.offer(throwable);
            }

            @Override
            protected void hookOnCancel() {
                throw new IllegalStateException();
            }
        });
    return result.take();
}
It runs pretty well, except that sometimes it blocks forever:
no hookOnComplete(), no hookOnError(), no hookOnCancel().
And when this happens, the line marked #log has already printed all the data I feed in, which is strange.
I don't know how to deal with this.
Can anyone tell me what I can do and what could cause this?
By the way, I use reactor-core 3.3.2 here.

How to properly call methods returning futures in Reactor

To prevent the XY problem, I'll start from the beginning:
I have a non-blocking SOAP client which I wrapped to make the return type Mono<T> (by default it accepts a callback; I can elaborate on this if needed).
Now I want to do (given ID):
1. Get the code by ID
2. Do something with the code
3. After that, get Foo and Bar and create FooBar
What I wrote was:
public class MyService {
    private final MySoapClient soapClient;

    public Mono<FooBarDto> doSomething(String id) {
        return Mono.just(id)
                .flatMap(soapClient::getCode)        // returns Mono<String>
                .flatMap(code ->
                        soapClient.doSomething(code) // returns Mono<Void>
                                .then(getFooBar(id, code))); // See this
    }

    private Mono<FooBarDto> getFooBar(String id, String code) {
        return Mono.zip(
                soapClient.getFoo(code), // returns Mono<Foo>
                soapClient.getBar(code)  // returns Mono<Bar>
        ).map(tuple2 -> toFooBarDto(id, tuple2));
    }

    private FooBarDto toFooBarDto(String id, Tuple2<Foo, Bar> tuple2) {
        return FooBarDto.builder()/* set properties */.build();
    }
}
Now the problem is that because the SOAP client's methods are not lazy (the moment you call them they start the request), the semantics of then won't work here: I want to fetch Foo and Bar only after doSomething is done, but they all start together.
I tried to fix it by changing then to flatMap, but that made it even worse: getFooBar never got called. (1. Can someone please explain why?)
So what I ended up doing was wrapping the SOAP calls again to make them lazy:
public class MySoapClient {
    private final AutoGeneratedSoapClient client;

    Mono<Foo> getFoo(GetFooRequest request) {
        return Mono.just(request).flatMap(this::doGetFoo);
    }

    private Mono<Foo> doGetFoo(GetFooRequest request) {
        val handler = new AsyncHandler<Foo>();
        client.getFoo(request, handler);
        return Mono.fromFuture(handler.future);
    }

    private static class AsyncHandler<T> implements javax.xml.ws.AsyncHandler<T> {
        private final CompletableFuture<T> future = new CompletableFuture<>();

        @Override
        public void handleResponse(Response<T> res) {
            try {
                future.complete(res.get());
            } catch (Exception e) {
                future.completeExceptionally(e);
            }
        }
    }
}
Is there any better way to do it? Specifically:
2. Using CompletableFuture and the callback.
3. Making the methods lazy in the SOAP client.
I tried to fix it by changing then to flatMap, but that made it even
worse. getFooBar never got called. (1. Can someone please
explain why?)
I think a Mono<Void> always completes empty (or with an error), so the function passed to a subsequent flatMap is never invoked: there is simply no value to map.
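To illustrate the point with a self-contained sketch (my own example, not part of the original answer):

Mono<Void> done = Mono.empty();
done.flatMap(v -> Mono.just("never reached"))    // the mapper never runs: no value is emitted
        .switchIfEmpty(Mono.just("completed empty"))
        .subscribe(System.out::println);         // prints "completed empty"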
Using CompletableFuture and the callback.
Making the methods lazy in the SOAP client.
To make the call lazy you can do one of the following:
1. You can use Mono.fromFuture, which accepts a supplier:
private Mono<Foo> doGetFoo(GetFooRequest request) {
    return Mono.fromFuture(() -> {
        val handler = new AsyncHandler<Foo>();
        client.getFoo(request, handler);
        return handler.future;
    });
}
2. You can use Mono.defer:
private Mono<Foo> doGetFoo(GetFooRequest request) {
    return Mono.defer(() -> {
        val handler = new AsyncHandler<Foo>();
        client.getFoo(request, handler);
        return Mono.fromFuture(handler.future);
    });
}
3. You can get rid of CompletableFuture and use Mono.create instead, something like this:
private Mono<Foo> doGetFoo(GetFooRequest request) {
    return Mono.create(sink -> {
        AsyncHandler<Foo> handler = response -> {
            try {
                sink.success(response.get());
            } catch (Exception e) {
                sink.error(e);
            }
        };
        client.getFoo(request, handler);
    });
}
If you do any of these, it will be safe to use the then method and it will work as expected.
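A closing note of my own: with any of the lazy variants, .then(getFooBar(id, code)) works because, even though getFooBar is still invoked eagerly to assemble the Mono, the underlying SOAP requests now fire only on subscription, which then happens only after doSomething completes. If you want to defer even the assembly, you can wrap it in Mono.defer:

return Mono.just(id)
        .flatMap(soapClient::getCode)
        .flatMap(code -> soapClient.doSomething(code)
                .then(Mono.defer(() -> getFooBar(id, code))));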

How can I do batch deletes millions on entities using DatastoreIO and Dataflow

I'm trying to use Dataflow to delete many millions of Datastore entities and the pace is extremely slow (5 entities/s). I am hoping you can explain to me the pattern I should follow to allow that to scale up to a reasonable pace. Just adding more workers did not help.
The Datastore Admin console has the ability to delete all entities of a specific kind but it fails a lot and takes me a week or more to delete 40 million entities. Dataflow ought to be able to help me delete millions of entities that match only certain query parameters as well.
I'm guessing that some kind of batching strategy should be employed (where I create a mutation containing 1000 deletes, for example), but it's not obvious to me how I would go about that. DatastoreIO gives me just one entity at a time to work with. Pointers would be greatly appreciated.
Below is my current slow solution.
Pipeline p = Pipeline.create(options);
DatastoreIO.Source source = DatastoreIO.source()
        .withDataset(options.getDataset())
        .withQuery(getInstrumentQuery(options))
        .withNamespace(options.getNamespace());
p.apply("ReadLeafDataFromDatastore", Read.from(source))
        .apply("DeleteRecords", ParDo.of(new DeleteInstrument(options.getDataset())));
p.run();
static class DeleteInstrument extends DoFn<Entity, Integer> {
    String dataset;
    int count; // running tally of deleted entities

    DeleteInstrument(String dataset) {
        this.dataset = dataset;
    }

    @Override
    public void processElement(ProcessContext c) {
        DatastoreV1.Mutation.Builder mutation = DatastoreV1.Mutation.newBuilder();
        mutation.addDelete(c.element().getKey());
        final DatastoreV1.CommitRequest.Builder request = DatastoreV1.CommitRequest.newBuilder();
        request.setMutation(mutation);
        request.setMode(DatastoreV1.CommitRequest.Mode.NON_TRANSACTIONAL);
        try {
            DatastoreOptions.Builder dbo = new DatastoreOptions.Builder();
            dbo.dataset(dataset);
            dbo.credential(getCredential());
            Datastore db = DatastoreFactory.get().create(dbo.build());
            db.commit(request.build());
            c.output(1);
            count++;
            if (count % 100 == 0) {
                LOG.info(count + "");
            }
        } catch (Exception e) {
            c.output(0);
            e.printStackTrace();
        }
    }
}
There is no direct way of deleting entities using the current version of DatastoreIO. This version of DatastoreIO is going to be deprecated in favor of a new version (v1beta3) in the next Dataflow release. We think there is a good use case for providing a delete utility (either through an example or a PTransform), but it is still a work in progress.
For now you can batch your deletes, instead of deleting one at a time:
public static class DeleteEntityFn extends DoFn<Entity, Void> {
    // Datastore max batch limit
    private static final int DATASTORE_BATCH_UPDATE_LIMIT = 500;

    private Datastore db;
    private List<Key> keyList = new ArrayList<>();

    @Override
    public void startBundle(Context c) throws Exception {
        // Initialize Datastore Client
        // db = ...
    }

    @Override
    public void processElement(ProcessContext c) throws Exception {
        keyList.add(c.element().getKey());
        if (keyList.size() >= DATASTORE_BATCH_UPDATE_LIMIT) {
            flush();
        }
    }

    @Override
    public void finishBundle(Context c) throws Exception {
        if (keyList.size() > 0) {
            flush();
        }
    }

    private void flush() throws Exception {
        // Make one delete request instead of one for each element.
        CommitRequest request =
                CommitRequest.newBuilder()
                        .setMode(CommitRequest.Mode.NON_TRANSACTIONAL)
                        .setMutation(Mutation.newBuilder().addAllDelete(keyList).build())
                        .build();
        db.commit(request);
        keyList.clear();
    }
}
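For completeness, here is how the batching DoFn might replace the per-element delete in the question's pipeline (a sketch on my part, not from the original answer):

p.apply("ReadLeafDataFromDatastore", Read.from(source))
        .apply("BatchDeleteRecords", ParDo.of(new DeleteEntityFn()));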

Can a thread run another fiber when the running fiber is blocked?

As far as I understand, a thread can run another fiber while the currently running fiber is blocked. But that is not what I observe. I created 100 fibers, each of which searches Solr. What I see is that the fibers execute strictly in order: a fiber only runs once the previous one has finished, just like threads. This is my code.
import co.paralleluniverse.fibers.Fiber;
import co.paralleluniverse.fibers.FiberForkJoinScheduler;
import co.paralleluniverse.fibers.FiberScheduler;
import co.paralleluniverse.fibers.SuspendExecution;

public class FilterThreadTest {
    static FiberForkJoinScheduler fiberForkJoinScheduler = new FiberForkJoinScheduler("fork-join-schedule", 1);
    static SolrService solrService = new SolrService();

    public static void main(String[] args) {
        solrService.init();
        for (int i = 0; i < 100; i++) {
            new CountFiber(fiberForkJoinScheduler, i, solrService).start();
        }
        try {
            Thread.sleep(10000000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
class CountFiber extends Fiber<Void> {
    private static final long serialVersionUID = 1L;

    private int count;
    private SolrService solrService;

    public CountFiber(FiberScheduler scheduler, int count, SolrService solrService) {
        super(scheduler);
        this.count = count;
        this.solrService = solrService;
    }

    @Override
    public Void run() throws SuspendExecution, InterruptedException {
        System.out.println(count + " fiber is starting!");
        solrService.search();
        System.out.println(count + " fiber is ended!");
        return null;
    }
}
Did I misunderstand fibers?
Fibers yield execution to other non-blocked fibers only when they perform fiber-blocking calls, not thread-blocking ones. Quasar doesn't automatically transform thread-blocking calls into fiber-blocking ones, so you need to write (usually small) integrations for pre-existing tools that don't know about Quasar.
The concurrent programming libraries provided by Quasar (Go-like channels, Erlang-like actors, dataflow programming, reactive streams and the java.util.concurrent port) support both fiber-blocking (when called from fibers) and thread-blocking (when called from threads); the same is true for the Comsat integrations, which cover many tools but, as of today, not Solr. Did you build a Solr integration yourself, or is solrService.search() only thread-blocking?
For more information about integrating tools with Quasar (it's usually quite easy) see for example this blog post.
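For reference, the usual integration pattern is based on Quasar's FiberAsync. Below is a rough sketch of what a fiber-blocking Solr search could look like; AsyncSolrClient and SearchCallback are hypothetical stand-ins for whatever callback-based API your Solr client exposes, so treat this as an outline rather than working code:

import co.paralleluniverse.fibers.FiberAsync;

class FiberSolrSearch extends FiberAsync<SolrResult, Exception> {
    private final AsyncSolrClient client; // hypothetical async Solr client
    private final String query;

    FiberSolrSearch(AsyncSolrClient client, String query) {
        this.client = client;
        this.query = query;
    }

    @Override
    protected void requestAsync() {
        // Hand off to the async API; the fiber suspends until a callback fires.
        client.search(query, new SearchCallback() {
            @Override
            public void onSuccess(SolrResult result) {
                asyncCompleted(result); // resumes the suspended fiber with a value
            }

            @Override
            public void onFailure(Exception e) {
                asyncFailed(e); // resumes the fiber with an exception
            }
        });
    }
}

// Inside a fiber: suspends only this fiber, freeing the thread to run others.
SolrResult result = new FiberSolrSearch(client, "q").run();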

JavaFX - waiting for task to finish

I have a JavaFX application which instantiates several Task objects.
Currently, my implementation (see below) calls runFactory(), which performs its computation inside a Task. In parallel to this, nextFunction() is invoked. Is there a way to have nextFunction() "wait" until the prior Task is complete?
I understand thread.join() waits until the running thread is complete, but with GUIs, there are additional layers of complexity due to the event dispatch thread.
As a matter of fact, adding thread.join() to the end of the code segment below only freezes the UI.
If anyone has suggestions on how to make nextFunction wait until its predecessor, runFactory, is complete, I'd be very appreciative.
Thanks,
// High-level class to run the Knuth-Morris-Pratt algorithm.
public class AlignmentFactory {
    public void perform() {
        KnuthMorrisPrattFactory factory = new KnuthMorrisPrattFactory();
        factory.runFactory(); // nextFunction invoked w/out runFactory finishing.

        // Code to run once runFactory() is complete.
        nextFunction() // also invokes a Task.
        ...
    }
}

// Implementation of Knuth-Morris-Pratt given a list of words and a sub-string.
public class KnuthMorrisPratt {
    public void runFactory() throws InterruptedException {
        Thread thread = null;
        Task<Void> task = new Task<Void>() {
            @Override public Void call() throws InterruptedException {
                for (InputSequence seq : getSequences) {
                    KnuthMorrisPratt kmp = new KnuthMorrisPratt(seq, substring);
                    kmp.align();
                }
                return null;
            }
        };
        thread = new Thread(task);
        thread.setDaemon(true);
        thread.start();
    }
}
When using Tasks you need to use setOnSucceeded and possibly setOnFailed to create a logic flow in your program. I propose that you also make runFactory() return the Task rather than running it:
// Implementation of Knuth-Morris-Pratt given a list of words and a sub-string.
public class KnuthMorrisPratt {
    public Task<Void> runFactory() throws InterruptedException {
        return new Task<Void>() {
            @Override public Void call() throws InterruptedException {
                for (InputSequence seq : getSequences) {
                    KnuthMorrisPratt kmp = new KnuthMorrisPratt(seq, substring);
                    kmp.align();
                }
                return null;
            }
        };
    }
}

// High-level class to run the Knuth-Morris-Pratt algorithm.
public class AlignmentFactory {
    public void perform() {
        KnuthMorrisPrattFactory factory = new KnuthMorrisPrattFactory();

        Task<Void> runFactoryTask = factory.runFactory();
        runFactoryTask.setOnSucceeded(new EventHandler<WorkerStateEvent>() {
            @Override
            public void handle(WorkerStateEvent t) {
                // Code to run once runFactory() completes **successfully**
                nextFunction(); // also invokes a Task.
            }
        });
        runFactoryTask.setOnFailed(new EventHandler<WorkerStateEvent>() {
            @Override
            public void handle(WorkerStateEvent t) {
                // Code to run once runFactory() **fails**
            }
        });
        new Thread(runFactoryTask).start();
    }
}
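One note worth adding (my observation, not part of the original answer): the onSucceeded and onFailed handlers run on the JavaFX Application Thread, so it is safe to touch the UI from them, unlike from inside the Task's call() method.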
