FlatMapDelayError Inconsistent Error Propagation - project-reactor

Specifically, this concerns how the upstream subscription is cancelled when an error occurs in the mapping operation.
From my testing, it appears that thrown errors cancel the upstream subscription, whereas propagated errors, such as those produced by Flux.error, do not. When the upstream subscription is never cancelled, the error is never propagated downstream, no error handlers are triggered, and no termination signal is emitted, which results in a Flux that hangs forever.
I would expect a thrown error and a propagated error to be handled identically.
Is there a fundamental issue or concern I am not aware of when handling propagated errors from flatMapDelayError?
Below is a simple example illustrating the problem on release train 2022.0.2. Thanks in advance for any help.
Flux<Object> thrownFlux = Flux.just(0, 1, 2, 3).log()
    .flatMapDelayError(integer -> {
        throw new RuntimeException(); // Cancels the upstream subscription
    }, 1, 1);

// Completes as expected
StepVerifier.create(thrownFlux)
    .expectError()
    .verify();

Flux<Object> propagatedFlux = Flux.just(0, 1, 2, 3).log()
    .flatMapDelayError(integer -> {
        return Flux.error(new RuntimeException()); // Does not cancel the upstream subscription
    }, 1, 1);

// Never completes
StepVerifier.create(propagatedFlux)
    .expectError()
    .verify();
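While the hang itself is the open question here, the second verification can at least be bounded with the Duration overload of verify, so the test fails fast instead of blocking forever (a sketch; the 5-second bound is an arbitrary choice):

import java.time.Duration;

// Fails with an assertion error after 5 seconds instead of hanging indefinitely.
StepVerifier.create(propagatedFlux)
    .expectError()
    .verify(Duration.ofSeconds(5));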

Related

Fine-grained control over what kind of failures trip a dapr circuit breaker

Question
Is it possible to further specify what kind of failures trip a dapr circuit breaker targeting a component (AWS SQS)?
Use case
I am sending emails via AWS SES. If you reach your sending limits, the AWS SDK throws an error (code 454). In this case, I want a circuit breaker to stop the processing of the queue and retry sending the emails later.
However, when another error occurs, e.g. an invalid email address, I don't want it to trip the circuit breaker, since it is not transient. I would still like to send the message to the DLQ so I can manually examine these messages later (that's why I am still throwing here rather than failing silently).
Simplified setup
I have a circuit breaker defined that trips when my snssqs-pubsub component has more than 3 consecutive failures:
circuitBreakers:
  pubsubCB:
    # Available variables: requests, totalSuccesses, totalFailures, consecutiveSuccesses, consecutiveFailures
    trip: consecutiveFailures > 3
    # ...
targets:
  components:
    snssqs-pubsub-emails:
      inbound:
        circuitBreaker: pubsubCB
In my application, I want to retry sending emails that failed because the AWS SES sending limit was hit:
try {
  await this.sendMail(options);
} catch (error) {
  if (error.responseCode === '454') {
    // This error should trip the circuit breaker
    throw Object.assign(new Error('Rate limited. Should be retried.'), { status: 429 });
  } else {
    // This error should not trip the circuit breaker.
    // Because status is 404, dapr puts the message directly into the DLQ and skips retries
    throw new NotFoundError({ status: 404 });
  }
}
You may not have a problem to worry about if your business case does not violate the AWS Terms of Service: you can file a support ticket and have your SES service limits raised.
It does not appear that dapr retry policies support the customization you need, but .NET does.
If you don't want to process the message, then don't delete it. You can then set the visibility timeout of the message in SQS so it stays hidden and is not processed again too quickly. Any exception thrown, regardless of kind, will end up in the DLQ.
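A sketch of that visibility-timeout approach with the AWS SDK for JavaScript v3 (the region, queue URL, receipt handle, and 15-minute delay are all assumptions for illustration):

import { SQSClient, ChangeMessageVisibilityCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({ region: 'eu-west-1' });

// Instead of deleting the message, keep it hidden for 15 minutes
// so it is not picked up again too quickly.
async function postponeMessage(queueUrl, receiptHandle) {
  await sqs.send(new ChangeMessageVisibilityCommand({
    QueueUrl: queueUrl,
    ReceiptHandle: receiptHandle,
    VisibilityTimeout: 15 * 60, // seconds
  }));
}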

Deleted Scheduled Messages still sending

I am building a Slack application that schedules a message when someone posts a specific type of workflow in a channel.
If someone from a specific group of users replies before the scheduled message has been sent, the application deletes the scheduled message.
Unfortunately, these messages are still sending, even though the list of scheduled messages is empty and the response when deleting the message is successful. I am also deleting the message within the 60-second limit that is noted in the API documentation.
Scheduling the message gives me a success response, and if I use the scheduled-messages list API I get:
[
  {
    id: 'MESSAGE_ID',
    channel_id: 'CHANNEL_ID',
    post_at: 1620428096, // 2 minutes in the future for testing
    date_created: 1620428026,
    text: 'thread_ts: 1620428024.001300'
  }
]
Canceling the message:
async function cancelScheduledMessage(scheduled_message_id) {
  const response = await slackApi.post("/chat.deleteScheduledMessage", {
    channel: SLACK_CHANNEL,
    scheduled_message_id
  })
  return response.data
}
response.data returns { "ok": true }
If I use the scheduled-messages list API to retrieve what is scheduled, I get an empty array: []
However, the message will still send to the thread.
Is there something I am missing? I have the proper scopes set up and the API calls appear to be working.
If it helps, I am using AWS Lambda, and DynamoDB to store/retrieve the thread_ts and message IDs.
Thanks all.
For messages due in 5 minutes or less, chat.deleteScheduledMessage has a bug (as of November 2021) [1]. Although this API call may return OK, the actual message will still be delivered because of the bug.
Note that for messages due within 60 seconds, the API does return a proper error code, as described in the documentation [2]. For the range between 60 seconds and roughly 5 minutes, the API call returns OK but fails behind the scenes.
Until this bug is fixed, the only workaround is to delete only messages scheduled 5 minutes or more in the future (the exact threshold may vary, according to Slack). This is not ideal and may not be feasible for some applications.
[1] Private communication with Slack support.
[2] https://api.slack.com/methods/chat.deleteScheduledMessage
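Given that threshold, a minimal guard along these lines avoids attempting deletions that would silently fail (cancelIfSafe is a hypothetical helper around the cancelScheduledMessage function above; the 5-minute window is the assumed threshold from [1]):

const SAFE_DELETE_WINDOW_SECONDS = 5 * 60; // assumed threshold, per [1]

async function cancelIfSafe(message) {
  const nowSeconds = Math.floor(Date.now() / 1000);
  if (message.post_at - nowSeconds > SAFE_DELETE_WINDOW_SECONDS) {
    return cancelScheduledMessage(message.id);
  }
  // Too close to post_at: deletion may report ok but still deliver, so skip it.
  return null;
}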

Cowboy/Ranch kills handler process when client closes connection

I have a Phoenix application with complex business logic behind an HTTP endpoint. This logic involves interaction with a database and a few external services, and once request processing has started it must not be interrupted until all operations are done.
But it seems that Cowboy or Ranch kills the request handler process (the Phoenix controller) if the client suddenly closes the connection, which leads to a partially executed business process. To debug this, I have the following code in the controller action:
Process.flag(:trap_exit, true)

receive do
  msg -> Logger.info("Message: #{inspect msg}")
after
  10_000 -> Logger.info("Timeout")
end
To simulate the client closing the connection, I set a timeout: curl --request POST 'http://localhost:4003' --max-time 3.
After 3 seconds, I see in the IEx console that the process is about to exit: Message: {:EXIT, #PID<0.4196.0>, :shutdown}.
So I need the controller to complete its job and reply to the client if it is still there, or do nothing if the connection is lost. Which would be the best way to achieve this:
trap exits in the controller action and ignore exit messages;
spawn an unlinked Task in the controller action and wait for its results;
somehow configure Cowboy/Ranch so that it will not kill the handler process, if that is possible (I tried exit_on_close with no luck)?
Handler processes are killed when the request ends; that is their purpose. If you want to process some data in the background, start an additional process. The simplest way to do so is the second method you proposed, with the slight modification of using Task.Supervisor.
So in your application supervisor you start Task.Supervisor with name of your choice:
children = [
  {Task.Supervisor, name: MyApp.TaskSupervisor}
]

Supervisor.start_link(children, strategy: :one_for_one)
And then in your request handler:
parent = self()
ref = make_ref()

Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
  send(parent, {ref, do_long_running_stuff()})
end)

receive do
  {^ref, result} -> notify_user(result)
end
That way you do not need to worry about handling the situation where the user is no longer there to receive the message.
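A variant of the same idea, if the handler should also wait only a bounded time for the result: Task.Supervisor.async_nolink starts the work unlinked from the request process, and Task.yield collects the result without crashing the caller (a sketch; the 10-second bound is an assumption):

task =
  Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn ->
    do_long_running_stuff()
  end)

case Task.yield(task, 10_000) do
  {:ok, result} -> notify_user(result)
  {:exit, reason} -> Logger.error("Task failed: #{inspect(reason)}")
  nil -> :ok # still running under the supervisor; skip the reply
end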

RabbitTemplate and ReplyTimeOut

I have a project where I set a reply timeout of 5 seconds (getRabbitTemplate().setReplyTimeout(5000)) and I use the sendAndReceive method to send messages: getXbidRabbitTemplate().sendAndReceive(exchange, routingkey, msg).
Today there was a connection error (ShutdownSignalException), but two sends never timed out.
The first send occurred at 09-04-2019 07:25:33.980 and the second at 09-04-2019 07:25:36.902.
I received no answer (or any error), and shortly afterwards the connection error appeared (at 09-04-2019 07:25:52.939).
On other occasions we have detected a timeout error; the only configuration change since then is that we removed the retryTemplate from the RabbitTemplate configuration.
This is how we detect the timeout:
getRabbitTemplate().setReplyTimeout(5000);
mResponse = getRabbitTemplate().sendAndReceive(exchange, routingkey, msg);

if (mResponse == null) {
    // TIMEOUT
}
I expected that if no answer arrived within those 5 seconds, I would enter the TIMEOUT branch. Is it possible that if the connection is dropped and the message does not reach the server, that timeout will not occur?
The timeout has nothing to do with any rabbit communication; the calling thread simply calls get(timeout, TimeUnit.MILLISECONDS) on a future. When the reply is received (on another thread), it completes the future and the get() returns that result. If no reply is received, the get() times out.
I don't see any way that the thread can never time out.
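To illustrate that mechanism, here is a conceptual sketch (not the actual RabbitTemplate source) of the reply path the answer describes:

import java.util.concurrent.*;

CompletableFuture<Object> pendingReply = new CompletableFuture<>();
// A listener thread completes the future when the reply arrives:
// pendingReply.complete(replyMessage);
Object reply;
try {
    reply = pendingReply.get(5000, TimeUnit.MILLISECONDS); // the replyTimeout
} catch (TimeoutException e) {
    reply = null; // this is why a null return from sendAndReceive() means a timeout
} catch (InterruptedException | ExecutionException e) {
    throw new IllegalStateException(e);
}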

What happens to a request in the workbox backgroundSync queue after the lastChance boolean is true?

I am seeing an issue where my requests are popped off a workbox.backgroundSync.Queue after 3 unsuccessful requests. I am also unable to find solid documentation about the expected behavior after 3 unsuccessful sync requests, once the lastChance flag has been set to true.
What is supposed to happen next? Is the request supposed to remain in the queue and what can be done to eventually trigger a replay?
The request will remain in the queue until maxRetentionTime is reached (see maxRetentionTime in the workbox-background-sync documentation).
If the lastChance flag is set to true, automatic retries stop, but you can trigger replayRequests() yourself by sending a message to the service worker, like:
// In the service worker; myQueue is the workbox.backgroundSync.Queue
// whose requests should be replayed.
const myQueue = new workbox.backgroundSync.Queue('myQueueName');

self.addEventListener('message', (event) => {
  if (event.data.type === 'replayQueue') {
    myQueue.replayRequests();
  }
});
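On the page side, the replay can then be requested with a plain postMessage to the active service worker (a sketch; the 'replayQueue' type string just has to match the listener above):

// Ask the active service worker to replay the background-sync queue.
navigator.serviceWorker.ready.then((registration) => {
  registration.active.postMessage({ type: 'replayQueue' });
});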
