I am trying to learn Reactor, but I am having a lot of trouble with it. I wanted to build a very simple proof of concept where I simulate calling a slow downstream service one or more times. If you use Reactor and stream the response, the caller doesn't have to wait for all the results.
So I created a very simple controller, but it is not behaving like I expect. When the delay is "inside" my flatMap (inside the method I call), the response is not returned until everything is complete. But when I add a delay after the flatMap, the data is streamed.
Why does this code result in a stream of JSON
#GetMapping(value = "/test", produces = { MediaType.APPLICATION_STREAM_JSON_VALUE })
Flux<HashMap<String, Object>> customerCards(#PathVariable String customerId) {
Integer count = service.getCount(customerId);
return Flux.range(1, count).
flatMap(k -> service.doRestCall(k)).delayElements(Duration.ofMillis(5000));
}
But this does not
#GetMapping(value = "/test2", produces = { MediaType.APPLICATION_STREAM_JSON_VALUE })
Flux<HashMap<String, Object>> customerCards(#PathVariable String customerId) {
Integer count = service.getCount(customerId);
return Flux.range(1, count).
flatMap(k -> service.doRestCallWithDelay(k));
}
I think I am missing something very basic about the Reactor API. On that note, can anyone point me to a good book or tutorial on Reactor? I can't seem to find anything good to learn from.
Thanks
The code inside the flatMap runs on the main thread (that is, the thread the controller runs on). As a result the whole process is blocked and the method doesn't return immediately. Keep in mind that Reactor doesn't impose a particular threading model.
By contrast, according to the documentation, delayElements delays the signals and continues on the parallel default Scheduler. That means the main thread is not blocked and the method returns immediately.
Here are two corresponding examples:
Blocking code:
Flux.range(1, 500)
    .map(i -> {
        // blocking code
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " - Item : " + i);
        return i;
    })
    .subscribe();

System.out.println("main completed");
Result:
main - Item : 1
main - Item : 2
main - Item : 3
...
main - Item : 500
main completed
Non-blocking code:
Flux.range(1, 500)
    .delayElements(Duration.ofSeconds(1))
    .subscribe(i -> {
        System.out.println(Thread.currentThread().getName() + " - Item : " + i);
    });

System.out.println("main Completed");

// sleep the main thread so the Flux's println output still gets printed
try {
    Thread.sleep(30000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
Result:
main Completed
parallel-1 - Item : 1
parallel-2 - Item : 2
parallel-3 - Item : 3
parallel-4 - Item : 4
...
Here is the Project Reactor reference guide.
"delayElements" method only delay flux element by a given duration, see javadoc for more details
I think you should post details about methods "service.doRestCallWithDelay(k)" and "service.doRestCall(k)" if you need more help.
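In the meantime, here is a hedged sketch of how a blocking doRestCall could be kept off the calling thread so that elements stream as they complete. The Mono.fromCallable wrapper and the scheduler choice are my assumptions, not part of your code:

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Wrap the blocking call in Mono.fromCallable and run it on a scheduler meant
// for blocking work, so the controller thread is never held up.
Flux.range(1, count)
    .flatMap(k -> Mono.fromCallable(() -> service.doRestCall(k))
                      .subscribeOn(Schedulers.boundedElastic())); // or Schedulers.elastic() on older Reactor versions

Each inner Mono then completes independently, flatMap emits the results as they arrive, and APPLICATION_STREAM_JSON_VALUE can stream them to the caller.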
I intend to execute some time-consuming code using parallelStream. This seems to work well, but I have the problem that the subsequent code is not executed:
@PreDestroy
public void tearDown() {
    final int mapSize = eventStreamProcessorMap.size();
    LOG.info("There are {} subscriptions to be stopped!", mapSize);
    final long start = System.currentTimeMillis();
    LocalTime time = LocalTime.now();
    final AtomicInteger count = new AtomicInteger();
    eventStreamProcessorMap.entrySet().parallelStream().forEach(entry -> {
        final Subscription sub = entry.getKey();
        final StreamProcessor processor = entry.getValue();
        LOG.info("Attempting to stop subscription {} of {} with id {} at {}", count.incrementAndGet(), mapSize, sub.id(), LocalTime.now().format(formatter));
        LOG.info("Stopping processor...");
        processor.stop();
        LOG.info("Processor stopped.");
        LOG.info("Removing subscription...");
        eventStreamProcessorMap.remove(sub);
        LOG.info("Subscription {} removed.", sub.id());
        LOG.info("Finished stopping processor {} with subscription {} in ParallelStream at {}: ", processor, sub, LocalTime.now().format(formatter));
        LOG.info(String.format("Duration: %02d:%02d:%02d:%03d (hh:mm:ss:SSS)",
                TimeUnit.MILLISECONDS.toHours(System.currentTimeMillis() - start),
                TimeUnit.MILLISECONDS.toMinutes(System.currentTimeMillis() - start) % 60,
                TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis() - start) % 60,
                TimeUnit.MILLISECONDS.toMillis(System.currentTimeMillis() - start) % 1000));
        LOG.info("--------------------------------------------------------------------------");
    });
    LOG.info("Helloooooooooooooo?????");
    LOG.info(String.format("Overall shutdown duration: %02d:%02d:%02d:%03d (hh:mm:ss:SSS)",
            TimeUnit.MILLISECONDS.toHours(System.currentTimeMillis() - start),
            TimeUnit.MILLISECONDS.toMinutes(System.currentTimeMillis() - start) % 60,
            TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis() - start) % 60,
            TimeUnit.MILLISECONDS.toMillis(System.currentTimeMillis() - start) % 1000));
    LOG.info("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
}
The code after the parallelStream processing is not executed:
LOG.info("Helloooooooooooooo?????");
never appears in the log. Why not?
This is caused by eventStreamProcessorMap.remove(sub); (which you have since removed from the code in your edit). You are streaming over the map's entrySet (eventStreamProcessorMap) and removing elements from it at the same time; this is not allowed, which is why you get that ConcurrentModificationException.
If you really want to remove while iterating, use an Iterator or map.entrySet().removeIf(x -> {...}).
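A minimal sketch of the removeIf variant, reusing the names from your question: stop the processors in the parallel stream first, then remove the entries in a separate pass so the map is never modified while it is being iterated.

// Stop every processor; no structural changes to the map in this pass.
eventStreamProcessorMap.entrySet().parallelStream()
        .forEach(entry -> entry.getValue().stop());

// Then remove the entries safely in a single pass.
eventStreamProcessorMap.entrySet().removeIf(entry -> true);
// (equivalent to eventStreamProcessorMap.clear() when every entry goes)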
I wrote this code to spin off a large number of WebClients (limited by reactor.ipc.netty.workerCount), start each Mono immediately, and wait for all the Monos to complete:
List<Mono<List<MetricDataModel>>> monos = new ArrayList<>(metricConfigs.size());
for (MetricConfig metricConfig : metricConfigs) {
    try {
        monos.add(extractMetrics.queryMetricData(metricConfig)
                .doOnSuccess(result -> {
                    metricDataList.addAll(result);
                })
                .cache());
    } catch (Exception e) {
    }
}

Mono.when(monos)
        .doFinally(onFinally -> {
            Map<String, Date> latestMap;
            try {
                latestMap = extractInsights.queryInsights();
                Transform transform = new Transform(copierConfig.getEventType());
                ArrayList<Event> eventList = transform.toEvents(latestMap, metricDataList);
            } catch (Exception e) {
                log.error("copy: mono: when: {}", e.getMessage(), e);
            }
        })
        .block();
It 'works'; that is, the results are as expected.
Two questions:
Is this correct? Does cache() result in when() waiting for all the Monos to complete?
Is it efficient? Is there a way to make this faster?
Thanks.
You should try as much as possible to:
use Reactor operators and compose a single reactive chain
avoid using doOn* operators for something other than side-effects (like logging)
avoid shared state
Your code could look a bit more like this:
List<MetricConfig> metricConfigs = //...
Mono<List<MetricDataModel>> data = Flux.fromIterable(metricConfigs)
        .flatMap(config -> extractMetrics.queryMetricData(config))
        .flatMapIterable(list -> list) // flatten each List<MetricDataModel> into individual elements
        .collectList();
Also, the cache() operator does not wait for the stream to complete (that is actually then()'s job).
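For completeness, here is a sketch of how the follow-up work from the question (queryInsights and the Transform step) could be folded into the same chain instead of a doFinally. The names come from the question; the exact return types are assumed, and queryInsights() is assumed not to throw a checked exception:

// Collect all metric data, then derive the events in one reactive chain,
// blocking only once at the very edge (as the original code did).
List<Event> eventList = Flux.fromIterable(metricConfigs)
        .flatMap(config -> extractMetrics.queryMetricData(config))
        .flatMapIterable(list -> list)
        .collectList()
        .map(metricDataList -> {
            Map<String, Date> latestMap = extractInsights.queryInsights();
            Transform transform = new Transform(copierConfig.getEventType());
            return transform.toEvents(latestMap, metricDataList);
        })
        .block();

This removes the shared metricDataList and keeps all the work inside the reactive chain.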
All,
Here is a unit test for checking the size of a collection
main() {
  test("Resource Manager Image Load", () {
    ResourceManager rm = new ResourceManager();
    int WRONG_SIZE = 1000000;
    rm.loadImageManifest("data/rm/test_images.yaml").then((_) {
      print("Length=" + rm.images.length.toString()); // PRINTS '6' - WHICH IS CORRECT
      expect(rm.images, hasLength(WRONG_SIZE));
    });
  });
}
I am running this from a browser (client-side Dart libraries are in use) and it ALWAYS passes, no matter what the value of WRONG_SIZE is.
Help appreciated.
In such simple cases you can just return the future. The unit test framework recognizes it and waits for the future to complete. This also works for setUp/tearDown.
main() {
  test("Resource Manager Image Load", () {
    ResourceManager rm = new ResourceManager();
    int WRONG_SIZE = 1000000;
    return rm.loadImageManifest("data/rm/test_images.yaml").then((_) {
    //^^^^
      print("Length=" + rm.images.length.toString()); // PRINTS '6' - WHICH IS CORRECT
      expect(rm.images, hasLength(WRONG_SIZE));
    });
  });
}
The problem is that your code returns a Future, and your test completes before the code in the Future has finished, so there's nothing to cause it to fail.
Check out the Asynchronous Tests section on the Dart site. There are methods like expectAsync that allow futures to be passed to the test framework so that it can wait for them to complete and handle the result correctly.
Here's an example (note the expect call is now inside the function passed to expectAsync):
test('callback is executed once', () {
  // wrap the callback of an asynchronous call with [expectAsync] if
  // the callback takes 0 arguments...
  var timer = Timer.run(expectAsync(() {
    int x = 2 + 3;
    expect(x, equals(5));
  }));
});
I use an ExecutorService to launch multiple threads that send requests to an API and get data back. Sometimes I see that some threads haven't finished their job yet, but the service has already killed them. How can I force the service to wait until the threads have finished their job?
Here is my code:
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Future<List<Book>>> futures = Lists.newArrayList();
final ObjectMapper mapper1 = new ObjectMapper();
for (final Author a : authors) {
    futures.add(pool.submit(new Callable<List<Book>>() {
        @Override
        public List<Book> call() throws Exception {
            String urlStr = "http://localhost/api/book?limit=5000&authorId=" + a.getId();
            List<JsonBook> Jsbooks = mapper1.readValue(
                    new URL(urlStr), BOOK_LIST_TYPE_REFERENCE);
            List<Book> books = Lists.newArrayList();
            for (JsonBook jsonBook : Jsbooks) {
                books.add(jsonBook.toAvro());
            }
            return books;
        }
    }));
}
pool.shutdown();
pool.awaitTermination(3, TimeUnit.MINUTES);
List<Book> bookList = Lists.newArrayList();
for (Future<List<Book>> future : futures) {
    if (!future.isDone()) {
        LogUtil.info("future " + future.toString()); // <-- future not finished yet
        throw new RuntimeException("Future to retrieve books: " + future + " did not complete");
    }
    bookList.addAll(future.get());
}
And I saw some exceptions at the (!future.isDone()) block. How can I make sure every future is done when the executor service shuts down?
I like to use a CountDownLatch.
Set the latch to the size of the collection you're iterating over and pass that latch into your callables; then, in your run/call method, have a try/finally block that decrements the latch.
After everything has been enqueued to your executor service, just call the latch's await method, which will block until it's all done. At that point all your callables will be finished, and you can properly shut down your executor service.
This link has an example of how to set it up.
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html
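A hedged sketch of that approach against the code from the question; fetchBooksFor is a hypothetical helper standing in for the Jackson/URL call and the JsonBook-to-Book mapping:

import java.util.concurrent.CountDownLatch;

CountDownLatch latch = new CountDownLatch(authors.size());
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Future<List<Book>>> futures = Lists.newArrayList();

for (final Author a : authors) {
    futures.add(pool.submit(() -> {
        try {
            return fetchBooksFor(a);   // hypothetical helper for the REST call + mapping
        } finally {
            latch.countDown();         // runs even if the call throws
        }
    }));
}

// await() throws InterruptedException; handle it as you already do for awaitTermination
latch.await();      // blocks (no timeout) until every callable has hit its finally block
pool.shutdown();    // now it is safe to shut the pool down and read the futures

Unlike awaitTermination(3, TimeUnit.MINUTES), await() here has no timeout, so the loop over the futures only runs once every task has actually finished; use the timed await(long, TimeUnit) overload if you want an upper bound.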
I am on Grails 2.3.1, trying to use the async features.
This is bulk data processing. I am trying to synchronise two databases, which involves comparing both and returning a list of 'deltas'. I am trying to speed up the process.
The documentation says I can just add a set of closures to a PromiseList and then call onComplete() to check that all the closures have completed. These are my attempts, building directly on "You can also construct a PromiseList manually" in the documentation:
def tasksMemberDeltas = new PromiseList()
pages.each { Integer page ->
    tasksMemberDeltas << { findCreateMemberDeltas(page, (page + pageSize) - 1) }
    if (page % 30 == 0) {
        tasksMemberDeltas.onComplete {
            tasksMemberDeltas = new PromiseList()
        }
    }
}
Returns:
Error groovy.lang.MissingMethodException:
No signature of method: java.util.ArrayList.onComplete()
In the end I called .get(), which calls waitAll. Stepping into .get() and finding that it did a waitAll was my revelation.
So if I have a single task I call:
waitAll finalDeltas
If I have a list I call:
taskFinalDeltas.get()
onComplete() logically relates to a single promise, not the list. So this works OK:
Promise memberDeleteDeltas = task {
    findDeleteAndTagDeltas()
}
memberDeleteDeltas.onError { Throwable err ->
    println "An error occurred ${err.message}"
}
memberDeleteDeltas.onComplete { result ->
    println "Completed create deltas"
}
waitAll(memberDeleteDeltas)