What is the significance of the two overflow strategies below? The exceptions are different, but in both cases, when subscribers cannot keep up, they are notified with an error call.
In which case should we choose one over the other?
FluxSink.OverflowStrategy.ERROR
FluxSink.OverflowStrategy.IGNORE
I found this in the documentation, but it does not provide much more detail:
IGNORE to Completely ignore downstream backpressure requests. This may yield IllegalStateException when queues get full downstream.
ERROR to signal an IllegalStateException when the downstream can’t keep up.
"Downstream can't keep up" sounds a lot like "queues get full downstream", so to me both look the same.
IGNORE will ignore the requests from the downstream and simply push the items; if anything fails, it fails later, downstream, when an operator's bounded queue fills up.
ERROR will check the outstanding request at the sink and fail the sequence immediately if there is not enough demand.
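To make the difference observable, here is a minimal sketch, assuming Reactor 3.x on the classpath; the prefetch of 16 and the artificially slow consumer are only there to provoke the overflow:

import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;
import reactor.core.scheduler.Schedulers;

public class OverflowStrategyDemo {

    public static void main(String[] args) throws InterruptedException {
        // ERROR: the sink checks the outstanding request counter and errors the
        // sequence (IllegalStateException) as soon as it would exceed the demand.
        Flux<Integer> errorFlux = Flux.create(sink -> {
            for (int i = 0; i < 1_000; i++) {
                sink.next(i); // fails fast once the requested demand is exhausted
            }
            sink.complete();
        }, FluxSink.OverflowStrategy.ERROR);

        // IGNORE: the sink pushes regardless of demand; the failure only surfaces
        // later, when the bounded queue of a downstream operator fills up.
        Flux<Integer> ignoreFlux = Flux.create(sink -> {
            for (int i = 0; i < 1_000; i++) {
                sink.next(i); // no demand check here
            }
            sink.complete();
        }, FluxSink.OverflowStrategy.IGNORE);

        errorFlux
                .publishOn(Schedulers.boundedElastic(), 16) // small prefetch
                .doOnNext(OverflowStrategyDemo::slowConsume)
                .subscribe(i -> { }, err -> System.out.println("ERROR strategy: " + err));

        ignoreFlux
                .publishOn(Schedulers.boundedElastic(), 16)
                .doOnNext(OverflowStrategyDemo::slowConsume)
                .subscribe(i -> { }, err -> System.out.println("IGNORE strategy: " + err));

        Thread.sleep(2_000); // let the asynchronous consumers run
    }

    private static void slowConsume(int i) {
        try {
            Thread.sleep(5);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}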
I'm writing a data pipeline using Reactor and Reactor Kafka, and I use Spring's Message<> to save
the ReceiverOffset of the ReceiverRecord in the headers, so that I can call ReceiverOffset.acknowledge() when processing finishes. I also have the out-of-order commits feature enabled.
When processing an event fails, I want to be able to log the error, write the event to another topic that represents all the failure events, and commit to the source topic. I'm currently solving that by returning Either<Message<Error>, Message<MyPojo>> from each processing stage; that way the stream is not stopped by exceptions, and I'm able to keep the original event headers and eventually commit the failed messages at the bottom of the pipeline.
The problem is that each step of the pipeline gets Either<> as input and needs to filter out the previous errors and apply its logic only to the Either.right, which can be cumbersome, especially when working with buffers, where the operator gets List<Either<>> as input. I would like to keep my business pipeline clean and receive only Message<MyPojo> as input, while not losing the errors that need to be handled.
I read that sending those error messages to another channel or stream is a solution for this.
Spring Integration uses that pattern for error handling, and I also read an article (link to article) that solves this problem in Akka Streams using divertTo().
I couldn't find documentation or code examples of how to implement that in Reactor.
Is there any way to use the Spring Integration error channel with Reactor, or are there any other ideas for implementing this?
Not familiar with Reactor per se, but you can keep the stream linear. The trick, since Vavr's Either is right-biased, is to use flatMap, which takes a function from Message<MyPojo> to Either<Message<Error>, Message<MyPojo>>. If the Either coming in is a Right (i.e. a Message<MyPojo>), the function gets invoked; otherwise the Left just gets passed through.
// Apologies if the Java is atrocious... haven't written Java since pre-Java 8
incomingEither.flatMap(
    myPojoMessage -> ... // compute a new Either
);
Presumably at some point you want to do something (publish to a dead-letter topic, tickle metrics, whatever) with the Message<Error> case, so for that, orElseRun will come in handy.
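To make that concrete, here is a minimal, self-contained sketch of the idea (assuming Vavr 0.10+ and Java 16+ for records; the record types and stage names are hypothetical stand-ins for Message<Error> and Message<MyPojo>, and peekLeft/peek stand in for the terminal handling):

import io.vavr.control.Either;

public class EitherPipeline {

    // Hypothetical stand-ins for Message<Error> and Message<MyPojo>.
    record ErrorMsg(String reason) { }
    record PojoMsg(String payload) { }

    // Each stage only ever sees the Right value; a Left short-circuits past it untouched.
    static Either<ErrorMsg, PojoMsg> validate(PojoMsg in) {
        return in.payload().isBlank()
                ? Either.left(new ErrorMsg("empty payload"))
                : Either.right(in);
    }

    static Either<ErrorMsg, PojoMsg> enrich(PojoMsg in) {
        return Either.right(new PojoMsg(in.payload() + ":enriched"));
    }

    public static void main(String[] args) {
        Either<ErrorMsg, PojoMsg> result =
                Either.<ErrorMsg, PojoMsg>right(new PojoMsg("hello"))
                        .flatMap(EitherPipeline::validate) // runs only on a Right
                        .flatMap(EitherPipeline::enrich);  // skipped if already a Left

        // Handle the error case once, at the end (dead-letter topic, metrics, ...).
        result.peekLeft(err -> System.out.println("diverted: " + err.reason()))
              .peek(ok -> System.out.println("processed: " + ok.payload()));
    }
}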
In many scenarios the response with the result of the operation execution is delivered asynchronously to the operation initiator (SM-DP or Operator). For example, step (13) in section 3.3.1 of SGP.02 v4.2:
(13) The SM-SR SHALL return the response to the “ES3.EnableProfile” function to SM-DP, indicating that the Profile has been enabled
It is not clear how the SM-SR should act if the call that contains the result of the operation fails. Should the SM-SR keep retrying that call, or is it OK to try just once and give up after that? Does this depend on the type of error that happened during the call?
I'm concerned about the cases where the result is sent and may have been processed by the initiator, but the information about that was not properly delivered back to the SM-SR. For the SM-SR to be required to retry, the initiator must be ready to receive the same operation result status again and process it accordingly, that is, ignore it and just acknowledge.
But I can't see anything in SGP.02 v4.2 that specifies what the behaviour of the SM-SR and SM-DP should be in this case. Any pointers to documentation specifying this are much appreciated.
In general it is not clear how the rollback to a valid known state should happen in this situation. Who is responsible for that (the SM-SR or the SM-DP, in this example of profile enabling)?
I'm not aware of any part of the specification defining this, neither in SGP.02, SGP.01, nor the test specification SGP.11. There are operational requirements in the SAS certification for a continuous service, but this is not technically defined.
I have experience implementing the specification. The approach was a message queue with Kafka and a retry policy. The specification says SHALL, which means try very hard; any implementation dropping the message after a single try is not very quality-oriented. The common sense in distributed (microservice-based) systems is that there are failures which have to be handled, so this assumption was taken without being expressed in the SGP specification.
The handling of the profile status message in your example should be idempotent: receiving the same message twice should not be harmful. The MessageID and RelatesTo are also useful here. I assume the requests/responses are recorded in your system for auditing anyway.
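As an illustration only (not something the specification prescribes, and with hypothetical class and method names), an idempotent receiver keyed on the MessageID/RelatesTo could look roughly like this:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Remembers which result notifications were already processed, keyed on their
// MessageID/RelatesTo, so a retried delivery is acknowledged but not re-applied.
public class IdempotentResultHandler {

    private final Set<String> processedMessageIds = ConcurrentHashMap.newKeySet();

    /** Returns true if the result was applied, false if it was a duplicate and only acknowledged. */
    public boolean handleEnableProfileResult(String messageId, Runnable applyStateChange) {
        if (!processedMessageIds.add(messageId)) {
            return false;            // already seen: just acknowledge
        }
        applyStateChange.run();      // first delivery: apply the state change
        return true;
    }
}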
In case you are sitting at the other end and are facing a badly implemented SM-SR and no status message arrives, the SM-DP can call ES3.GetEIS later to get the current status.
I have already contacted the authors directly. At the end of the document the email is mentioned:
It is our intention to provide a quality product for your use. If you
find any errors or omissions, please contact us with your comments.
You may notify us at prd#gsma.com
I want to create an SQS queue using code whenever it is required to send messages, and delete it after all messages are consumed.
I just wanted to know if some delay is required between creating an SQS queue using Java code and then sending messages to it.
You'll have to try it and make observations. SQS is a distributed system, so there is a possibility that a queue might not immediately be usable, though I did not find a direct documentation reference for this.
Note the following:
If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html
This means your names will always need to be different, but it also implies something about the internals of SQS -- deleting a queue is not an instantaneous process. The same might be true of creation, though that is not necessarily the case.
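A defensive sketch of the create-then-send part with the AWS SDK for Java v2 (the queue name, retry count, and back-off are arbitrary illustration values, not anything documented by SQS):

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.CreateQueueRequest;
import software.amazon.awssdk.services.sqs.model.QueueDoesNotExistException;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class CreateAndSend {
    public static void main(String[] args) throws InterruptedException {
        try (SqsClient sqs = SqsClient.create()) {
            String queueUrl = sqs.createQueue(
                    CreateQueueRequest.builder().queueName("my-transient-queue").build()
            ).queueUrl();

            for (int attempt = 1; ; attempt++) {
                try {
                    sqs.sendMessage(SendMessageRequest.builder()
                            .queueUrl(queueUrl)
                            .messageBody("hello")
                            .build());
                    break;                               // send succeeded
                } catch (QueueDoesNotExistException e) { // queue not visible yet
                    if (attempt == 5) throw e;
                    Thread.sleep(1_000L * attempt);      // simple linear back-off
                }
            }
        }
    }
}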
Also, there is no way to know with absolute certainty that a queue is truly empty. A long poll that returns no messages is a strong indication that there are no messages remaining, as long as there are also no messages in flight (consumed but not deleted -- these will return to visibility if the consumer resets their visibility or improperly handles an exception and does not explicitly reset their visibility before the visibility timeout expires).
However, GetQueueAttributes does not provide a fail-safe way of assuring a queue is truly empty, because many of the counter attributes are the approximate number of messages (visible, in-flight, etc.). Again, this is related to the distributed architecture of SQS. Certain rare, internal failures could potentially cause messages to be stranded internally, only to appear later. The significance of this depends on the importance of the messages and the life cycle of the queue, and the risks of any such issue seem -- to me -- increased when a queue does not have an indefinite lifetime (i.e. when the plan for a queue is to delete it when it is "empty"). This is not to imply that SQS is unreliable, only to make the point that any and all systems eventually behave unexpectedly, however rare or unlikely that may be.
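And a sketch of the "probably empty" check described above, again with the AWS SDK for Java v2; an empty long poll plus approximate counters at zero is a strong hint, not a guarantee:

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

import java.util.Map;

public class ProbablyEmptyCheck {

    static boolean looksEmpty(SqsClient sqs, String queueUrl) {
        // Long poll: waits up to 20 seconds for any message to arrive.
        boolean longPollEmpty = sqs.receiveMessage(ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .waitTimeSeconds(20)
                .build()).messages().isEmpty();

        // Approximate counters: visible and in-flight messages.
        Map<QueueAttributeName, String> attrs = sqs.getQueueAttributes(
                GetQueueAttributesRequest.builder()
                        .queueUrl(queueUrl)
                        .attributeNames(
                                QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES,
                                QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE)
                        .build()).attributes();

        return longPollEmpty
                && "0".equals(attrs.get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES))
                && "0".equals(attrs.get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE));
    }
}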
How can the flow be enhanced to handle duplicate requests instead of moving them to the BO (backout) queue? We have multiple cases arriving on a daily basis where the customer sends multiple requests within seconds, and in case of a duplicate the message moves to the BO queue.
The fact that the message is being backed out indicates to me that your flow is already somehow detecting the duplicate and must be throwing an exception somewhere.
If you can identify this particular exception, then instead of throwing it back all the way to the input node and initiating a rollback, you can handle the exception in a branch of the flow wired up to the catch terminal -- for example, by logging the message id to a log of duplicates or similar.
The tricky part will be ensuring that you can adequately identify the exception so that you are not doing the wrong failure processing for other types of failures.
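Purely to illustrate the "log of duplicates" idea (generic Java, not tied to the IIB message flow API; the class name and retention window are hypothetical):

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Records the ids of requests already seen, so the catch-terminal branch can tell a
// duplicate apart from a genuinely new failure and simply log and discard it.
public class DuplicateLog {

    private final Map<String, Instant> seen = new ConcurrentHashMap<>();
    private final Duration retention = Duration.ofMinutes(10); // arbitrary window

    /** Returns true if this message id was already seen within the retention window. */
    public boolean isDuplicate(String messageId) {
        Instant now = Instant.now();
        seen.values().removeIf(t -> t.plus(retention).isBefore(now)); // drop old entries
        return seen.putIfAbsent(messageId, now) != null;
    }
}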
I'm using ASINetworkQueue to execute multiple ASIHTTPRequests, and if any request fails I'd like the queue to cancel any pending requests and end. From reading the docs this should be the default behaviour, but I'm finding that even after a request fails, I still get requestStarted for most of the remaining requests and requestFailed for all of them -- is this how it is supposed to be? I'm guessing it's maybe because my requests are quite small and the remaining requests start before the queue has a chance to cancel them once a failure is detected. I tried explicitly setting setShouldCancelAllRequestsOnFailure:YES, but this made no difference.
Without knowing the exact nature of your requests ... short answer: Yes, it's working how it's supposed to. Your requests are starting before a failure occurs. Longer answer: Try setting the queue's maxConcurrentOperationCount property. This may help you control the request pipeline a bit better if you need to test for failure.