When multiple onErrorContinue operators are added to the pipeline to handle specific types of exceptions thrown from flatMap, the exception handling does not work as expected.
In the code below, I expect elements 1 to 6 to be dropped and elements 7 to 10 to be consumed by the subscriber.
public class FlatMapOnErrorContinueExample {
    public static void main(String[] args) {
        Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
                .flatMap(number -> {
                    if (number <= 3) {
                        return Mono.error(new NumberLesserThanThree("Number is less than 3"));
                    } else if (number > 3 && number <= 6) {
                        return Mono.error(new NumberLesserThanSixButGreaterThan3("Number is greater than 3 but less than or equal to 6"));
                    } else {
                        return Mono.just(number);
                    }
                })
                .onErrorContinue(NumberLesserThanThree.class,
                        (throwable, object) -> System.err.println("Exception: Dropping the element because it is less than 3"))
                .onErrorContinue(NumberLesserThanSixButGreaterThan3.class,
                        (throwable, object) -> System.err.println("Exception: Dropping the element because it is greater than 3 but less than or equal to 6"))
                .onErrorContinue((throwable, object) ->
                        System.err.println("Exception: " + throwable.getMessage()))
                .subscribe(number -> System.out.println("number is " + number),
                        error -> System.err.println("Exception in Subscription " + error.getMessage()));
    }

    public static class NumberLesserThanThree extends RuntimeException {
        public NumberLesserThanThree(final String msg) {
            super(msg);
        }
    }

    public static class NumberLesserThanSixButGreaterThan3 extends RuntimeException {
        public NumberLesserThanSixButGreaterThan3(final String msg) {
            super(msg);
        }
    }
}
Here is the output I am getting:
Exception: Dropping the element because it is less than 3
Exception: Dropping the element because it is less than 3
Exception: Dropping the element because it is less than 3
Exception in Subscription Number is greater than 3 but less than or equal to 6
Question: Why is the 2nd onErrorContinue not called, and why is the exception sent to the subscriber instead?
Additional Note:
If I remove the 1st and 2nd onErrorContinue, then all exceptions are handled by the 3rd onErrorContinue. I could use this approach to receive all exceptions, check the type of each exception, and proceed with handling. However, I would like cleaner exception handling rather than adding an if..else block.
How is this question different from Why does Thread.sleep() trigger the subscription to Flux.interval()?
1) This question is about exception handling and the order in which exceptions are handled; the other question is about processing elements in parallel and making the main thread wait until all the elements have been processed.
2) This question has no concern about threading; even if I add Thread.sleep(10000) after .subscribe, there is no change in behaviour.
This again comes down to the unusual behaviour of onErrorContinue. It breaks the rule in that it doesn't "catch" errors and then change the behaviour downstream as a result; it actually allows supporting operators to "look ahead" at it and behave accordingly, thus changing the result upstream.
This is weird, and leads to some behaviour that's not immediately obvious, such as is the case here. As far as I'm aware, all supporting operators only look ahead to the next onErrorContinue operator, rather than recursively searching ahead for all such operators. Instead, they will evaluate the predicate of the next onErrorContinue (in this case whether it's of a certain type), and then behave accordingly - either invoking the handler if the predicate returns true, or throwing the error downstream if not. (There's no case where it will then move on to the next onErrorContinue operator, then the next, until a predicate is matched.)
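For illustration only - if you do stick with onErrorContinue, the approach the question's additional note already hints at is a single operator whose handler branches on the exception type, so there is only one onErrorContinue for flatMap to look ahead to. A minimal sketch, reusing the question's exception classes:

Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
        .flatMap(number -> {
            if (number <= 3) {
                return Mono.error(new NumberLesserThanThree("Number is less than 3"));
            } else if (number <= 6) {
                return Mono.error(new NumberLesserThanSixButGreaterThan3("Number is greater than 3 but less than or equal to 6"));
            }
            return Mono.just(number);
        })
        // a single onErrorContinue whose handler inspects the exception type,
        // instead of several chained, type-filtered onErrorContinue operators
        .onErrorContinue((throwable, value) -> {
            if (throwable instanceof NumberLesserThanThree) {
                System.err.println("Dropping " + value + ": less than 3");
            } else if (throwable instanceof NumberLesserThanSixButGreaterThan3) {
                System.err.println("Dropping " + value + ": between 4 and 6");
            } else {
                System.err.println("Dropping " + value + ": " + throwable.getMessage());
            }
        })
        .subscribe(number -> System.out.println("number is " + number));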
Clearly this is a contrived example - but because of these idiosyncrasies, I'd almost always recommend avoiding onErrorContinue. There are two "normal" ways of handling this where flatMap() is involved:
If flatMap() has an "inner reactive chain" in it - that is, it calls another method or series of methods that return a publisher - then just use onErrorResume() at the end of that inner chain to handle those errors. You can chain onErrorResume() calls, since it works with downstream, not upstream, operators. This is by far the most common case (a short sketch of this follows after the example below).
If flatMap() is an imperative collection of if / else blocks returning different publishers, as it is here, and you want to / have to keep the imperative style, throw exceptions instead of using Mono.error() and catch them as appropriate, returning Mono.empty() in case of an error:
.flatMap(number -> {
    try {
        if (number <= 3) {
            throw new NumberLesserThanThree("Number is less than 3");
        } else if (number <= 6) {
            throw new NumberLesserThanSixButGreaterThan3("Number is greater than 3 but less than or equal to 6");
        } else {
            return Mono.just(number);
        }
    } catch (NumberLesserThanThree ex) {
        // handle it
        return Mono.empty();
    } catch (NumberLesserThanSixButGreaterThan3 ex) {
        // as above
        return Mono.empty();
    }
})
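For completeness, here is a minimal sketch of the first approach as well - onErrorResume() at the end of an inner reactive chain inside flatMap(). Here validateNumber is a hypothetical method that returns a Mono<Integer> and may fail with the question's exception types:

.flatMap(number -> validateNumber(number)                      // hypothetical inner reactive chain
        .onErrorResume(NumberLesserThanThree.class,
                ex -> Mono.empty())                            // drop elements 1 to 3
        .onErrorResume(NumberLesserThanSixButGreaterThan3.class,
                ex -> Mono.empty()))                           // drop elements 4 to 6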
In general, using one of these two approaches will make it much easier to reason about what's going on.
(For the sake of completeness after reading the comments - this has nothing to do with the reactive chain being unable to complete before the main thread exits.)
Related
I have two service calls. The second one accepts a value that the first returns. I need to return the result of the first call only if the second succeeds. The following is my prototype implementation; however, the resulting Mono is always empty. Please explain why it doesn't work and how to implement it the proper way.
@Test
public void testPublish() {
    callToService1().publish(
            mono -> mono.flatMap(resultOfCall1 -> callToService2(resultOfCall1))
                    .then(mono)
    )
    .map(Integer::valueOf)
    .as(StepVerifier::create)
    .expectNext(1)
    .verifyComplete();
}

Mono<String> callToService1() {
    return Mono.just("1");
}

Mono<Integer> callToService2(String value) {
    // the parameter is used in the call to service2
    return Mono.empty();
}
Not sure why you used publish(Function). Sounds like your requirement would be fulfilled by a simple direct flatMap:
callToService1()
        .flatMap(v1 -> callToService2(v1)
                .thenReturn(v1)
        );
If callToService2 throws or produces an onError, that error will be propagated to the main sequence, terminating it.
(edited below for requirement of emitting value from service1)
Otherwise, inside the flatMap, once callToService2 completes, we ignore its result and emit the still-in-scope v1 value thanks to thenReturn (which also propagates onError if callToService2 emits one).
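Plugged back into the test from the question (a sketch reusing the callToService1/callToService2 stubs above), the whole thing would read something like:

@Test
public void testPublish() {
    callToService1()
            .flatMap(v1 -> callToService2(v1)
                    .thenReturn(v1))       // re-emit v1 once callToService2 completes, even though it is empty
            .map(Integer::valueOf)
            .as(StepVerifier::create)
            .expectNext(1)
            .verifyComplete();
}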
I am working on a project where we are migrating a massive number of views (more than 12,000) from Oracle to Hadoop/Impala. I have written a small Java utility to extract view DDL from Oracle and would like to use ANTLR4 to traverse the AST and generate an Impala-compatible view DDL statement.
Most of the work is relatively simple and only involves rewriting some Oracle-specific syntax quirks to Impala style. However, I am facing an issue where I am not sure I have the best answer yet: we have a number of special cases where values from a date field are extracted in multiple nested function calls. For example, the following extracts the day from a Date field:
TO_NUMBER(TO_CHAR(d.R_DATE , 'DD' ))
I have an ANTLR4 grammar declared for Oracle SQL, so I get the visitor callbacks when it reaches TO_NUMBER and TO_CHAR as well, but I would like to have special handling for this particular case.
Isn't there any other way than implementing the handler method for the outer function and then resorting to manual traversal of the nested structure to see whether it matches this special case?
Currently I have something like this in the generated visitor class:
@Override
public String visitNumber_function(PlSqlParser.Number_functionContext ctx) {
    // FIXME: seems to be dodgy code, can it be improved?
    String functionName = ctx.name.getText();
    if (functionName.equalsIgnoreCase("TO_NUMBER")) {
        final int childCount = ctx.getChildCount();
        if (childCount == 4) {
            final int functionNameIndex = 0;
            final int openRoundBracketIndex = 1;
            final int encapsulatedValueIndex = 2;
            final int closeRoundBracketIndex = 3;

            ParseTree encapsulated = ctx.getChild(encapsulatedValueIndex);
            if (encapsulated instanceof TerminalNode) {
                throw new IllegalStateException("TerminalNode is found at: " + encapsulatedValueIndex);
            }

            String customDateConversionOrNullOnOtherType =
                    customDateConversionFromToNumberAndNestedToChar(encapsulated);
            if (customDateConversionOrNullOnOtherType != null) {
                // the child node contained our expected child element, so return the converted value
                return customDateConversionOrNullOnOtherType;
            }
            // otherwise the child was something unexpected, signalled by null,
            // so simply fall back to the default handler
        }
    }
    // some other numeric function, default handling
    return super.visitNumber_function(ctx);
}

private String customDateConversionFromToNumberAndNestedToChar(ParseTree parseTree) {
    // ...
}
For anyone hitting the same issue, the way to go seems to be:
Changing the grammar definition and introducing custom sub-types for the encapsulated expression of the nested function. Then it is possible to hook into the processing at precisely the desired location of the parse tree.
Using a second custom ParseTreeVisitor that captures the values of the function call and delegates processing of the rest of the sub-tree back to the main, "outer" ParseTreeVisitor.
Once the second custom ParseTreeVisitor has finished visiting all the sub-parse-trees, I had the context information I required and all the sub-trees had been visited properly.
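To make the first point concrete, here is a rough sketch of what a dedicated visit method could look like once the grammar has its own rule for the nested call. Every name here (To_char_day_extraction, the child indexes, the DAY() target function) is hypothetical and depends on how the grammar is actually changed:

@Override
public String visitTo_char_day_extraction(PlSqlParser.To_char_day_extractionContext ctx) {
    // the special case gets its own visit method, so there is no manual traversal
    // inside visitNumber_function; the child layout below is an assumption
    String column = ctx.getChild(2).getText();      // e.g. d.R_DATE
    return "DAY(" + column + ")";                   // assuming DAY() is the Impala-side equivalent
}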
I have some initial state in my application and a few policies that decorate this state with reactively fetched data (each policy's Mono returns a new instance of the state with additional data). Eventually I get the fully decorated state.
It basically looks like this:
public interface Policy {
    Mono<State> apply(State currentState);
}
Usage for a fixed number of policies would look like this:
Flux.just(baseState)
        .flatMap(firstPolicy::apply)
        .flatMap(secondPolicy::apply)
        ...
        .subscribe();
It basically means that the entry state for a Mono is the result of accumulating the initial state and each of that Mono's predecessors.
In my case the number of policies is not fixed, and it comes from another layer of the application as a collection of objects that implement the Policy interface.
Is there any way to achieve a similar result as in the given code (with the two flatMaps), but for an unknown number of policies? I have tried Flux's reduce method, but it works only if the policy returns a value, not a Mono.
This seems difficult because you're streaming your baseState, then trying to do an arbitrary number of flatMap() calls on that. There's nothing inherently wrong with using a loop to achieve this, but I like to avoid that unless absolutely necessary, as it breaks the natural reactive flow of the code.
If you instead iterate and reduce the policies into a single policy, then the flatMap() call becomes trivial:
Flux.fromIterable(policies)
        .reduce((p1, p2) -> s -> p1.apply(s).flatMap(p2::apply))
        .flatMap(p -> p.apply(baseState))
        .subscribe();
If you're able to edit your Policy interface, I'd strongly suggest adding a static combine() method to reference in your reduce() call to make that more readable:
interface Policy {
    Mono<State> apply(State currentState);

    public static Policy combine(Policy p1, Policy p2) {
        return s -> p1.apply(s).flatMap(p2::apply);
    }
}
The Flux then becomes much more descriptive and less verbose:
Flux.fromIterable(policies)
        .reduce(Policy::combine)
        .flatMap(p -> p.apply(baseState))
        .subscribe();
As a complete demonstration, swapping out your State for a String to keep it shorter:
interface Policy {
    Mono<String> apply(String currentState);

    public static Policy combine(Policy p1, Policy p2) {
        return s -> p1.apply(s).flatMap(p2::apply);
    }
}

public static void main(String[] args) {
    List<Policy> policies = new ArrayList<>();
    policies.add(x -> Mono.just("blah " + x));
    policies.add(x -> Mono.just("foo " + x));

    String baseState = "bar";

    Flux.fromIterable(policies)
            .reduce(Policy::combine)
            .flatMap(p -> p.apply(baseState))
            .subscribe(System.out::println); // prints "foo blah bar"
}
If I understand the problem correctly, then the simplest solution is to use a regular for loop:
Flux<State> flux = Flux.just(baseState);
for (Policy policy : policies)
{
    flux = flux.flatMap(policy::apply);
}
flux.subscribe();
Also, note that if you have just a single baseState you can use Mono instead of Flux.
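For illustration, a minimal sketch of that Mono variant, reusing the same policies and baseState:

Mono<State> mono = Mono.just(baseState);
for (Policy policy : policies)
{
    // each policy decorates the state produced by the previous one
    mono = mono.flatMap(policy::apply);
}
mono.subscribe();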
UPDATE:
If you are concerned about breaking the flow, you can extract the for loop into a method and apply it via the transform operator:
Flux.just(baseState)
    .transform(this::applyPolicies)
    .subscribe();

private Publisher<State> applyPolicies(Flux<State> originalFlux)
{
    Flux<State> newFlux = originalFlux;
    for (Policy policy : policies)
    {
        newFlux = newFlux.flatMap(policy::apply);
    }
    return newFlux;
}
In my production code, I am getting errors in my logs when a Mono times out.
I have managed to recreate these errors with the following code:
@Test
public void testScheduler() {
    Mono<String> callableMethod1 = callableMethod();
    callableMethod1.block();

    Mono<String> callableMethod2 = callableMethod();
    callableMethod2.block();
}

private Mono<String> callableMethod() {
    return Mono.fromCallable(() -> {
        Thread.sleep(60);
        return "Success";
    })
    .subscribeOn(Schedulers.elastic())
    .timeout(Duration.ofMillis(50))
    .onErrorResume(throwable -> Mono.just("Timeout"));
}
In the Mono.fromCallable I am making a blocking call using a third-party library. When this call times out, I get errors similar to
reactor.core.publisher.Operators - Operator called default onErrorDropped
reactor.core.publisher.Operators - Scheduler worker in group main failed with an uncaught exception
These errors also seem to be intermittent; sometimes when I run the code provided, I get no errors at all. However, when I repeat the call in a loop of, say, 10, I consistently get them.
Question: Why does this error happen?
Answer:
When the duration given to the timeout() operator has passed, it throws a TimeoutException. That results in the following outcomes:
An onError signal is sent to the main reactive chain. As a result, the main execution is resumed and the process moves on (i.e., onErrorResume() is executed).
Shortly after outcome #1, the async task defined within fromCallable() is interrupted, which triggers a 2nd exception (InterruptedException). The main reactive chain can no longer handle this InterruptedException because the TimeoutException happened first and already caused the main reactive chain to resume (note: this behavior of not generating a 2nd onError signal conforms with the Reactive Streams Specification, Publisher rule 7).
Since the 2nd exception (InterruptedException) can't be handled gracefully by the main chain, Reactor logs it at error level to let us know an unexpected exception occurred.
Question: How do I get rid of them?
Short Answer: Use Hooks.onErrorDropped() to change the log level:
Logger logger = Logger.getLogger(this.getClass().getName());

@Test
public void test() {
    Hooks.onErrorDropped(error -> {
        logger.log(Level.WARNING, "Exception happened:", error);
    });

    Mono.fromCallable(() -> {
        Thread.sleep(60);
        return "Success";
    })
    .subscribeOn(Schedulers.elastic())
    .timeout(Duration.ofMillis(50))
    .onErrorResume(throwable -> Mono.just("Timeout"))
    .doOnSuccess(result -> logger.info("Result: " + result))
    .block();
}
Long Answer: If your use-case allows, you could handle the exception happening within fromCallable() so that the only exception affecting the main chain is the TimeoutException. In that case, the onErrorDropped() wouldn't happen in the first place.
@Test
public void test() {
    Mono.fromCallable(() -> {
        try {
            Thread.sleep(60);
        } catch (InterruptedException ex) {
            // release resources, roll back actions, etc.
            logger.log(Level.WARNING, "Something went wrong...", ex);
        }
        return "Success";
    })
    .subscribeOn(Schedulers.elastic())
    .timeout(Duration.ofMillis(50))
    .onErrorResume(throwable -> Mono.just("Timeout"))
    .doOnSuccess(result -> logger.info("Result: " + result))
    .block();
}
Extra References:
https://tacogrammer.com/onerrordropped-explained/
https://medium.com/@kalpads/configuring-timeouts-in-spring-reactive-webclient-4bc5faf56411
I'm trying to really get Futures in Dart and I've noticed that just about every example I come across uses handleException to deal with exceptions that complete the Future. Yet the API documentation states "In most cases it should not be necessary to call handleException, because the exception associated with this Future will propagate naturally if the future's value is being consumed. Only call handleException if you need to do some special local exception handling related to this particular Future's value."
So when would I need "special local exception handling"? Could someone explain that in a bit more detail? Is there some code that I honestly can't run easily by letting the exception propagate?
Mads Ager gave me this answer:
Basically, this is the equivalent of having a try-catch in straight-line code:
int doSomethingElse() {
  try {
    return thisMightFail();
  } catch (e) {
    return -1;
  }
}

void doSomething() {
  int value = doSomethingElse();
  // operate on value
}
With Futures it is something like this (not tested):
Future<int> doSomethingElse() {
  return thisMightFail().transformException((e) => -1);
}

void doSomething() {
  doSomethingElse().then((value) {
    // operate on value
  });
}
So this is for local exception handling instead of global exception handling. If you never use handleException or transformException that would correspond to always dealing with exceptions at the top level in non-async code.