Custom handling of Future failures in an Akka Streams Flow

I'm trying to connect multiple Flows in Akka Streams and handle their errors in different ways depending on the Flow. That can be accomplished with something like:
Flow[String, Either[ProcessingError, String], NotUsed]
Then I divert the response to an error handler based on the Either value.
My problem is that some Flows return Future[String] instead of String, and I don't know how to evaluate the future so that I can catch its error after each Flow and handle it in a custom way.

To turn the Future into an Either without failing the stream, you can use
.mapAsync(1) { e =>
  val f: Future[T] = ...
  // transformWith expects a Future-returning function, so use transform,
  // which takes Try[T] => Try[S] (needs scala.util.Success):
  f.transform(t => Success(t.toEither)) // Future[Either[Throwable, T]]
}
mapAsync and mapAsyncUnordered are the idiomatic ways to evaluate futures in Akka Streams. Be aware that a failing future in those will fail the whole stream; to handle errors in-stream, you need to react to the future "immediately" and convert its result into a Try or Either.
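The same idea, converting the failure into a value before it can propagate, can be sketched outside Akka in plain Java with CompletableFuture. The Result record below is a hypothetical stand-in for Either; mapAsync itself is Akka-specific:

```java
import java.util.concurrent.CompletableFuture;

public class FutureToEither {
    // Minimal stand-in for Either[Throwable, String]
    record Result(String value, Throwable error) {
        static Result ok(String v) { return new Result(v, null); }
        static Result err(Throwable t) { return new Result(null, t); }
        boolean isOk() { return error == null; }
    }

    // Convert a possibly-failing future into a future that always succeeds
    // with a Result, mirroring f.transform(t => Success(t.toEither)) in Scala.
    static CompletableFuture<Result> toResult(CompletableFuture<String> f) {
        return f.handle((v, t) -> t == null ? Result.ok(v) : Result.err(t));
    }

    public static void main(String[] args) {
        CompletableFuture<String> failing =
            CompletableFuture.failedFuture(new RuntimeException("boom"));
        Result r = toResult(failing).join(); // join() no longer throws
        System.out.println(r.isOk() ? r.value() : "error: " + r.error().getMessage());
    }
}
```

The downstream stage then branches on the Result instead of ever seeing an exception, which is exactly what the divert-to-error-handler setup in the question needs.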


Should I use Exceptions while parsing complex user input

When looking for information on when and why to use exceptions, many people (including on this platform) make the point that exceptions should not be used when validating user input, because invalid input is not an exceptional thing to happen.
I now have a case where I have to parse a complex string of user input and map it to an object tree, basically like a parser does.
Example in pseudo code:
input:
----
hello[5]
+
foo["ok"]
----
results in something like this:
class Hello {
int id = 5
}
class Add {}
class foo {
string name = 'ok'
}
Now, in order to "validate" that input I have to parse it, and having separate code to parse the input for validation and code to create the objects feels redundant.
Currently I'm using exceptions while parsing single tokens to collect all errors.
// one token is basically a single
try {
foreach (token in tokens) {
factory = getFactory(token) // throws ParseException
addObject(factory.create(token)) // throws ParseException
}
} catch (ParseException e) {
// e.g. "Foo Token expects value to be string"
addError(e)
}
Is this a bad use of exceptions?
An alternative would be to inject a validation class into every factory, or to mess around with return types (which feels a bit dirty).
If exceptions work for your use case, go for it.
The usual problem with exceptions is that they don't let you fix things up and continue, which makes parser error recovery hard to implement. You can't really fix up bad input, and you probably shouldn't even in the cases where you could, but error recovery lets you report more than one error from the same input, which is often convenient.
All of that depends on your needs and parsing strategy, so there's not a lot of information to go on here.
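As an illustration of that trade-off, moving the catch inside the loop is a simple form of error recovery: each bad token is recorded and parsing continues, so all errors are reported in one pass. A sketch in Java, where ParseException and the token grammar are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class TokenParser {
    static class ParseException extends Exception {
        ParseException(String msg) { super(msg); }
    }

    // Hypothetical per-token parse: only "+" or bracketed tokens like foo["ok"] succeed.
    static String parseToken(String token) throws ParseException {
        if (token.equals("+") || token.matches("\\w+\\[.*\\]")) return token;
        throw new ParseException("cannot parse token: " + token);
    }

    // Catching inside the loop collects every error instead of
    // stopping at the first one, as the outer try/catch above does.
    static List<String> collectErrors(List<String> tokens) {
        List<String> errors = new ArrayList<>();
        for (String t : tokens) {
            try {
                parseToken(t);
            } catch (ParseException e) {
                errors.add(e.getMessage());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        System.out.println(collectErrors(List.of("hello[5]", "???", "+", "!!")));
    }
}
```

Note that the question's version, with the try/catch around the whole loop, stops at the first bad token, so it collects at most one error per run.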

Why Project Reactor's Mono doesn't have a share() operator?

I'd like to "share" a Mono as I do with Flux.
Flux share() example with Kotlin:
fun `test flux share`() {
    val countDownLatch = CountDownLatch(2)
    val originalFlux = Flux.interval(Duration.ofMillis(200))
        .map { "$it = ${Instant.now()}" }
        .take(7)
        .share()
        .doOnTerminate {
            countDownLatch.countDown()
        }
    println("Starting #1...")
    originalFlux.subscribe {
        println("#1: $it")
    }
    println("Waiting ##2...")
    CountDownLatch(1).await(1000, TimeUnit.MILLISECONDS)
    println("Starting ##2...")
    originalFlux.subscribe {
        println("##2: $it")
    }
    countDownLatch.await(10, TimeUnit.SECONDS)
    println("End!")
}
I couldn't find a share() operator for Mono. Why doesn't it exist?
The specific behaviour of share() doesn't make much sense with a Mono, but we have cache() which may be what you're after.
share() is equivalent to calling publish().refCount() on your Flux. Specifically, publish() gives you a ConnectableFlux, or a "hot" flux. (refCount() just automatically connects / stops the flux based on the first / last subscriber.)
The "raison d'être" for ConnectableFlux is allowing multiple subscribers to subscribe whenever they wish, missing the data that was emitted before they subscribed. In the case of Mono this doesn't make a great deal of sense, as by definition there is only one value emitted - so if you've missed it, then you've missed it.
However, we do have cache() on Mono, which also turns it into a "hot" source (where the original supplier isn't called for each subscription, just once on first subscribe.) The obvious difference from above is that the value is replayed for every subscriber, but that's almost certainly what you want.
(Side note if you test the above: you'll need to use Mono.fromSupplier() rather than Mono.just(), as the latter grabs the value once at instantiation, so cache() has no meaningful effect.)
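The just() vs fromSupplier() distinction can be seen without Reactor at all: just() evaluates eagerly at assembly time, fromSupplier() defers until subscription, and cache() then memoizes the first result. A plain-Java sketch of that memoization (the names here are illustrative, not Reactor API):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class CacheSketch {
    // Wrap a supplier so the underlying computation runs at most once,
    // like Mono.fromSupplier(...).cache(): lazy, then replayed to everyone.
    static <T> Supplier<T> cached(Supplier<T> source) {
        return new Supplier<T>() {
            private T value;
            private boolean computed = false;
            public synchronized T get() {
                if (!computed) { value = source.get(); computed = true; }
                return value;
            }
        };
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Supplier<String> mono = cached(() -> "result #" + calls.incrementAndGet());
        System.out.println(mono.get()); // first "subscription" computes
        System.out.println(mono.get()); // second one replays the cached value
        System.out.println("calls: " + calls.get());
    }
}
```

Wrapping an eagerly computed value (the just() analogue) in cached() would change nothing, since the work already happened before the wrapper could defer it.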
From Project Reactor 3.4.x onwards we have Mono#share()
Prepare a Mono which shares this Mono result similar to Flux.shareNext(). This will effectively turn this Mono into a hot task when the first Subscriber subscribes using subscribe() API. Further Subscriber will share the same Subscription and therefore the same result. It's worth noting this is an un-cancellable Subscription.

Avoid exception causes to stop Mono.zip immediately

Is it possible to avoid that, when one Mono in Mono.zip throws an exception, all the other Monos are stopped immediately? I want them to complete normally, and perhaps to handle the erroneous one with something like ".doOnError" or ".continueOnError". Is that a way to go?
Regards
Bernado
Yes, it's possible. You can use Mono.zipDelayError. As you can understand from the method's name, it delays errors from the Monos. If several Monos error, their exceptions are combined.
If you have to get the combined result anyway, zipDelayError is not the solution. Use the zip operator and handle the error case with a fallback operator such as onErrorResume or retry, either on the zipped Mono or on any upstream one.
I stated that my question was answered, but it is not yet. The following example illustrates my case: one Mono will fail, but I want its error as part of the result too. I expected the following code to run to completion, but it fails:
Mono<String> error = Mono.error(new RuntimeException());
error = error.onErrorResume(throwable -> Mono.just("hell0"));
Mono<String> test = Mono.just("test");
Mono<String> test1 = Mono.just("test1");
Mono<String> test2 = Mono.just("test2");
List<Mono<String>> monolist = new ArrayList<>();
monolist.add(test);
monolist.add(test1);
monolist.add(test2);
monolist.add(error);
Mono<Long> zipDelayError = Mono.zipDelayError(monolist, arrayObj -> Arrays.stream(arrayObj).count());
System.out.println(zipDelayError.block());
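For comparison, the same shape can be reproduced outside Reactor with CompletableFuture: recover the failing future first (the analogue of onErrorResume), then combine with allOf, which waits for every input before completing. This is only an analogy, not Reactor code:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ZipSketch {
    // Wait for every future, then count the results, roughly what the
    // zipDelayError example above does with its combinator function.
    static long countAll(List<CompletableFuture<String>> futures) {
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        return futures.stream().map(CompletableFuture::join).count();
    }

    public static void main(String[] args) {
        // Recover the failing "mono" first, the analogue of onErrorResume.
        CompletableFuture<String> failing =
            CompletableFuture.<String>failedFuture(new RuntimeException("boom"))
                .exceptionally(t -> "hell0");
        List<CompletableFuture<String>> monos = List.of(
            CompletableFuture.completedFuture("test"),
            CompletableFuture.completedFuture("test1"),
            CompletableFuture.completedFuture("test2"),
            failing);
        System.out.println(countAll(monos)); // all four complete, none fails
    }
}
```

The key point either way is that the recovery must be attached to the failing source before it is combined; once the combined stage sees the raw error, it is too late to get the other results back.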

Using Guice's @SessionScoped with Netty

How do I implement @SessionScoped in a Netty-based TCP server? Creating custom scopes is documented in the Guice manual, but it seems that the solution only works for thread-based servers and not for asynchronous-IO ones.
Is it enough to create the Channel Pipeline between scope.enter() and scope.exit()?
Disclaimer: this answer is for Netty 3. I haven't had the opportunity to try Netty 4 yet, so I don't know whether what follows can be applied to the newer version.
Netty is asynchronous on the network side, but unless you explicitly submit tasks to Executors, or change threads by some other means, the handling of ChannelEvents by the ChannelHandlers on a pipeline is synchronous and sequential. For instance, if you use Netty 3 and have an ExecutionHandler on the pipeline, the scope handler should be upstream of the ExecutionHandler; for Netty 4, see Trustin Lee's comment.
Thus, you can put a handler near the beginning of your pipeline that manages the session scope, for example:
public class ScopeHandler implements ChannelUpstreamHandler {
    @Override
    public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) {
        if (e instanceof WriteCompletionEvent || e instanceof ExceptionEvent) {
            // pass these through without touching the scope (see below)
            ctx.sendUpstream(e);
            return;
        }
        Session session = ...; // get session, presumably using e.getChannel()
        scope.enter();
        try {
            scope.seed(Key.get(Session.class), session);
            ctx.sendUpstream(e);
        } finally {
            scope.exit();
        }
    }

    private SessionScope scope;
}
A couple of quick remarks:
You will want to filter out some event types, especially WriteCompletionEvent and ExceptionEvent, which the framework will put at the downstream end of the pipeline during event processing and which will cause reentrancy issues if not excluded. In our application we use this kind of handler but actually only consider UpstreamMessageEvents.
The try/finally construct is not actually necessary as Netty will catch any Throwables and fire an ExceptionEvent, but it feels more idiomatic this way.
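The scope object used above follows Guice's documented custom-scope recipe (enter, seed, exit). A minimal ThreadLocal-based sketch of that machinery, valid precisely because of the single-threaded pipeline handling described above; this is a simplification, not Guice's actual SimpleScope, and Session is a placeholder type:

```java
import java.util.HashMap;
import java.util.Map;

public class SessionScopeSketch {
    // Per-thread storage: safe because Netty 3 delivers a pipeline's
    // events synchronously on one thread between enter() and exit().
    private static final ThreadLocal<Map<Class<?>, Object>> values = new ThreadLocal<>();

    static void enter() { values.set(new HashMap<>()); }
    static void exit() { values.remove(); }
    static <T> void seed(Class<T> key, T value) { values.get().put(key, value); }
    @SuppressWarnings("unchecked")
    static <T> T get(Class<T> key) { return (T) values.get().get(key); }

    record Session(String id) {} // placeholder for the real session type

    public static void main(String[] args) {
        enter();
        try {
            seed(Session.class, new Session("abc"));
            // downstream handlers / injected objects would look it up here
            System.out.println(get(Session.class).id());
        } finally {
            exit();
        }
    }
}
```

In real Guice the lookup side is a Provider bound in the custom scope; the sketch only shows why the enter/seed/exit bracketing around sendUpstream makes the session visible for the duration of one event.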
HTH

How to get try / catch to work in erlang

I'm pretty new to Erlang and I'm trying to get a basic try/catch statement to work. I'm using Webmachine to process some requests, and all I really want to do is parse some JSON data and return it. In the event that the JSON data is invalid, I just want to return an error message. Here is the code I have so far.
(the JSON data is invalid)
to_text(ReqData, Context) ->
    Body = "{\"firstName\": \"John\"\"lastName\": \"Smith\"}",
    try decode(Body) of
        _ -> {"Success! Json decoded!", ReqData, Context}
    catch
        _ -> {"Error! Json is invalid", ReqData, Context}
    end.

decode(Body) ->
    {struct, MJ} = mochijson:decode(Body).
The code compiles, but when I run it and send a request for the text, I get the following error back:
error,{error,{case_clause,{{const,"lastName"},
": \"Smith\"}",
{decoder,utf8,null,1,31,comma}}},
[{mochijson,decode_object,3},
{mochijson,json_decode,2},
{webmachine_demo_resource,test,1},
{webmachine_demo_resource,to_text,2},
{webmachine_demo_resource,to_html,2},
{webmachine_resource,resource_call,3},
{webmachine_resource,do,3},
{webmachine_decision_core,resource_call,1}]}}
What exactly am I doing wrong? The documentation says the catch clause handles all errors, or do I have to do something to catch the specific error thrown by mochijson:decode?
Any leads or advice would be helpful. Thanks.
The catch clause _ -> ... only catches exceptions of the throw class. To catch other kinds of exceptions, you need to write a pattern of the form Class:Term -> ... (i.e., the default Class is throw). In your case:
catch
_:_ -> {"Error! Json is invalid", ReqData, Context}
end
When you do this, you should always ask yourself why you're catching every possible exception. If it's because you're calling third-party code whose behaviour you can't predict, that's usually OK. If you're calling your own code, remember that you're throwing away all information about the failure, possibly making debugging a lot more difficult. If you can narrow it down to catching only the particular expected cases and let any other exception fall through (so you see where the real failure occurred), then do so.
