SQS Event Bridge Once Per Minute - amazon-sqs

We are using EventBridge to add a scheduled task to our SQS queue once per minute. So far it properly puts messages into the queue, but although we schedule it for once per minute, the queue only receives messages once every five, sometimes six, minutes.
The metrics state that invocation is happening every minute; however, the queue isn't receiving the messages in the time frame specified.
Considerations:
- SQS FIFO Queue - Deduplication
- Constant JSON String
The "duh" of not seeing messages at prescribed interval is because of this in the AWS documentation:
The token used for deduplication of sent messages. If a message with a
particular message deduplication ID is sent successfully, any messages
sent with the same message deduplication ID are accepted successfully
but aren’t delivered during the 5-minute deduplication interval
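A minimal sketch of what that means in practice, using the AWS SDK for Java v1 (the queue URL is hypothetical): two sends with the same deduplication ID inside the 5-minute window both succeed, but only the first is ever delivered.

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class DedupDemo {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        // Hypothetical FIFO queue URL
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/scheduled-jobs.fifo";

        SendMessageRequest request = new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody("{\"job\":\"tick\"}")
                .withMessageGroupId("default")
                .withMessageDeduplicationId("constant-id"); // same ID every send

        sqs.sendMessage(request); // delivered
        sqs.sendMessage(request); // accepted successfully, but silently dropped for 5 minutes
    }
}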
I'm open to suggestions and will be looking for a workaround.
Update
I tried using the Input Transformer to fix this by adding the time as a uniquely changing item in the queue message; however, I am still not getting below 5 minutes.
Variable Input
{"addedOn":"$.time"}
Message
{"AddedOn":<addedOn>}

The message polling built into the SQS console just wasn't showing my updated count once it grew past 10 messages. Once I deleted the old messages, the timing was correct and the queue was updating once per minute.
The answer: if you are going to use a constant string, it will only work for scheduled jobs whose interval is greater than 5 minutes.
Adding this info here, despite the redundancy with the question, for linked Google searches:
The token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval
Even though each message is a unique event once per minute, the Constant (JSON text) body is not unique, so deduplication still saw it as a duplicate and removed it.
To solve this, I switched to the Input Transformer.
An example event, and the other fields you can add as variables:
{
  "version": "0",
  "id": "7bf73129-1428-4cd3-a780-95db273d1602",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2015-11-11T21:29:54Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
  ],
  "detail": {
    "instance-id": "i-0123456789",
    "state": "RUNNING"
  }
}
I needed a unique variable, so time was an obvious choice. With the example event above, the transformed message body becomes {"AddedOn":"2015-11-11T21:29:54Z"}, which changes every minute and therefore hashes to a new deduplication ID on every send.
Input for Input Transformer
Input Path:
{"addedOn":"$.time"}
Template:
{"AddedOn":<addedOn>}
Documentation
Also, I found that moving away from FIFO queues entirely is a potential solution, if that's an easy option, which may help future SQS developers as well.

Related

Deleted Scheduled Messages still sending

I am building a Slack application that schedules a message when someone posts a specific type of workflow in a channel.
It will schedule a message, and if someone from a specific group of users replies before it has sent, it will delete the scheduled message.
Unfortunately, these messages are still sending, even though the list of scheduled messages is empty and the response when deleting the message is successful. I am also deleting the message within the 60-second limit that is noted in the API docs.
Scheduling the message gives me a success response, and if I list the scheduled messages I get:
[
  {
    id: 'MESSAGE_ID',
    channel_id: 'CHANNEL_ID',
    post_at: 1620428096, // 2 minutes in the future for testing
    date_created: 1620428026,
    text: 'thread_ts: 1620428024.001300'
  }
]
Canceling the message:
async function cancelScheduledMessage(scheduled_message_id) {
  const response = await slackApi.post("/chat.deleteScheduledMessage", {
    channel: SLACK_CHANNEL,
    scheduled_message_id
  })
  return response.data
}
response.data returns { "ok": true }
If I use the list scheduled messages API to retrieve what is scheduled, I get an empty array [].
However, the message will still send to the thread.
Is there something I am missing? I have the proper scopes set up and the API calls appear to be working.
If it helps, I am using AWS Lambda, and DynamoDB to store/retrieve the thread_ts and message IDs.
Thanks all.
For messages due in 5 minutes or less, chat.deleteScheduledMessage has a bug (as of November 2021) [1]. Although the API call may return OK, the actual message will still be delivered because of the bug.
Note that for messages due within 60 seconds, this API does return a proper error code, as described in the documentation [2]. For the range between 60 seconds and roughly 5 minutes, the API call returns OK but fails behind the scenes.
Until this bug is fixed, the only workaround is to delete only messages scheduled 5 minutes or more in the future (the exact threshold may vary, according to Slack). Not ideal, and it may not be feasible for some applications.
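A guard like the following may help in the meantime (a minimal Java sketch; deleteScheduledMessage is a hypothetical wrapper around the chat.deleteScheduledMessage call):

import java.time.Instant;

public class ScheduledMessageGuard {
    // Deletes within ~5 minutes of post_at may return ok=true yet still send,
    // so refuse to attempt them (the exact threshold may vary, per Slack).
    private static final long SAFE_WINDOW_SECONDS = 5 * 60;

    /** Returns true only if the deletion was actually attempted. */
    public static boolean cancelIfSafe(String scheduledMessageId, long postAtEpochSeconds) {
        long secondsUntilPost = postAtEpochSeconds - Instant.now().getEpochSecond();
        if (secondsUntilPost < SAFE_WINDOW_SECONDS) {
            return false; // too close: deletion is unreliable
        }
        deleteScheduledMessage(scheduledMessageId); // hypothetical wrapper
        return true;
    }

    private static void deleteScheduledMessage(String scheduledMessageId) {
        // call chat.deleteScheduledMessage here
    }
}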
[1] Private communication with Slack support.
[2] https://api.slack.com/methods/chat.deleteScheduledMessage

Solace Queue's expiry time

I have the following code to set the expiry time on a message.
ObjectMessage message = ...;
long expiration = 30 * 60 * 1000; // 30 minutes
...
message.setJMSExpiration(expiration);
After 30 minutes, I am expecting the message to expire, so the subscriber will not be able to pick up this message.
Question:
Will the message still sit in the queue and be available for browsing?
Regards
Shankar
The message will be spooled to the queue and will be available to be browsed until the expiry time is reached. At that point, the message will be removed from the queue and either discarded or sent to the DMQ, according to how the queue is configured.
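As an aside: per the JMS spec, a message's expiration is driven by the producer's time-to-live, and the provider overwrites JMSExpiration when send() is called, so setting it on the message beforehand has no effect. A minimal sketch of setting a 30-minute expiry:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;

public class ExpirySketch {
    public static void sendWithTtl(Session session, MessageProducer producer,
                                   java.io.Serializable payload) throws JMSException {
        ObjectMessage message = session.createObjectMessage(payload);
        // The provider computes JMSExpiration = send time + TTL.
        producer.setTimeToLive(30 * 60 * 1000L); // 30 minutes
        producer.send(message);
    }
}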

SQS + Shoryuken: Large Receive Count in FIFO despite auto_delete=true

I have an AWS SQS FIFO queue configured to deduplicate messages based on content. My Rails app uses a Shoryuken worker to get messages from SQS. Here is the worker code:
class MyJob
  include Shoryuken::Worker

  shoryuken_options queue: "myjobs-#{ENV['RAILS_ENV']}.fifo",
                    auto_delete: true,
                    body_parser: JSON

  def perform(message_meta, message_body)
    # do stuff
  end
end
As you can see, it's configured to automatically delete messages from the queue once they are received. But today something strange happened. I noticed the worker performing a large number of identical tasks. When I opened the SQS queue in the AWS Console, I saw a message in it which looked like it had been received multiple times by the worker. Here are its attributes; notice the Receive Count:
Message ID: 9207017f-ad15-4de8-97c4-cf391c8f3840
Size: 1.3 KB
MD5 of Body: 55918bf431e31e4badae0720453aea35
Sent: 2018-12-11 10:40:53.978 GMT-08:00
First Received: 2018-12-11 10:40:54.045 GMT-08:00
Receive Count: 2654
Message Attribute Count: 0
Message Group ID: default
Message Deduplication ID: c5fb9acda5e3c9c82dc0ae3f0b1cff5bd7067d0cf942075c4c38dddd1fbc1ed1
Sequence Number: 37288893882837472512
Any idea how that could happen?
Platform details: Ubuntu, Ruby 2.5.3, Rails 5.2.2, Shoryuken 4.0.2
Turns out, the problem was with the queue's VisibilityTimeout setting. By default it is set to 30 seconds, but messages would often reach the receiver side outside of the allowed 30 seconds, which meant Shoryuken would fail to delete the received message from the queue with the following error:
ERROR: Could not delete 0, code: 'ReceiptHandleIsInvalid', message:
'The receipt handle has expired', sender_fault: true
The solution is to increase the VisibilityTimeout. I set it to the maximum allowed 12 hours, and that resolved the issue.
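For reference, a minimal sketch of raising the timeout with the AWS SDK for Java v1 (the queue URL is hypothetical):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;
import java.util.Collections;

public class VisibilityTimeoutFix {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        // Hypothetical queue URL
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/myjobs-production.fifo";

        // 43200 seconds = 12 hours, the maximum SQS allows.
        sqs.setQueueAttributes(new SetQueueAttributesRequest()
                .withQueueUrl(queueUrl)
                .withAttributes(Collections.singletonMap("VisibilityTimeout", "43200")));
    }
}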
More about VisibilityTimeout:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
The thread that put me on the right track:
https://github.com/aws/aws-sdk-java/issues/705

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered by timestamp.
There is a scenario as like below.
There are 3 clients. A, B and C.
1)
All clients register the 'figure-1' listener to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp").queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        ....
        // do work for the messages: print, save to storage, etc.
        ....
        // save startTime to storage for next open.
        startTime = max(timeOfSnapshot, startTime)
        saveToStorage(startTime)
    })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They send their messages at almost exactly the same moment, and all clients have fast Wi-Fi.
So, finally, the server data is saved as in 'figure-2':
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As the listener code shows, I keep the messages in storage, along with the next listening timestamp, to avoid downloading duplicated messages.
In this case, does Firebase always guarantee to trigger the callback in order, like below?
Message 1
Message 2
Message 3
If it is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client receives messages like this:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client no longer has any chance to get messages 1 and 2.
I think that if some nodes already exist, Firebase will trigger the events in order for those; that is the role of the queryOrdered functionality.
However, what happens when there are no nodes before the listener is registered, and new nodes are added after that?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet 1 for message 1
Packet 2 for message 2
Packet 3 for message 3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times, but I could not find a clear answer.
I have spent almost a week on this. Please give me a piece of advice.
The order in which the data for a query is returned is consistent, and determined by the server. So all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away though, before the data even reaches the database server.
In figure 2 it is actually quite simple: since each node has a unique timestamp, they will be returned in the order of that timestamp. But even if they had the same timestamp, they'd be returned in the same order (timestamp first, then key) for each client.
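To illustrate how the SDK conveys that order: the child-added callback also hands you the key of the preceding child. A minimal sketch with the Firebase Java/Android SDK (the "messages" path and startTime are assumptions based on the question):

import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class OrderedListenerSketch {
    public static void listen(double startTime) {
        DatabaseReference ref = FirebaseDatabase.getInstance().getReference("messages");
        ref.orderByChild("timestamp").startAt(startTime)
           .addChildEventListener(new ChildEventListener() {
               @Override
               public void onChildAdded(DataSnapshot snapshot, String previousChildKey) {
                   // previousChildKey names the child this one sorts after,
                   // i.e. the server-determined position in query order.
               }
               @Override public void onChildChanged(DataSnapshot snapshot, String previousChildKey) {}
               @Override public void onChildRemoved(DataSnapshot snapshot) {}
               @Override public void onChildMoved(DataSnapshot snapshot, String previousChildKey) {}
               @Override public void onCancelled(DatabaseError error) {}
           });
    }
}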

How to explicitly acknowledge/fail Amazon SQS FIFO queue from the listener without throwing an exception?

My application only listens to a certain queue; the producer is a third-party application. I receive the messages, but sometimes, based on some logic, I need to send a FAIL to the producer so that the message is resent to my listener until I decide to consume and acknowledge it. My current implementation of this process just throws a custom exception, but this is not a clean solution. Can anyone help me send a FAIL to the producer without throwing an exception?
My JMS Listener Factory settings:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactoryForQexpress(SQSErrorHandler errorHandler) {
    SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
            .withRegion(RegionUtils.getRegion(StaticSystemConstants.getQexpressSqsRegion()))
            .withAWSCredentialsProvider(new ClasspathPropertiesFileCredentialsProvider(StaticSystemConstants.getQexpressSqsCredentials()))
            .build();

    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setConcurrency("3-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setErrorHandler(errorHandler);
    return factory;
}
My Listener Settings:
@JmsListener(destination = StaticSystemConstants.QUEXPRESS_ORDER_STATUS_QUEUE, containerFactory = "jmsListenerContainerFactoryForQexpress")
public void receiveQExpressOrderStatusQueue(String text) throws JSONException {
    LOG.debug("Consumed QExpress status {}", text);
    // here I need to decide whether to acknowledge or fail
    ...
    if (success) {
        updateStatus();
    } else {
        // TODO: I need to replace this with an explicit FAIL message
        throw new CustomException("Not right time to update status");
    }
}
Please, share your experience on this. Thank you!
SQS -- internally speaking -- is fully asynchronous and completely decouples the producer from the consumer.
Once the producer successfully hands off a message to SQS and receives the message-id in response, the producer only knows that SQS has received and committed the message to its internal storage and that the message will be delivered to a consumer at least once.¹ There is no further feedback to the producer.
A consumer can "snooze" a message for later retry by simply not deleting it (see setSessionAcknowledgeMode docs) or by actively resetting the visibility timeout on the message instead of deleting it, which triggers SQS to leave the message in the in flight status until the timer expires, at which point it will again deliver the message for the consumer to retry.
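For illustration, a minimal sketch of the second option using the AWS SDK for Java v1, given the message's receipt handle (the queue URL is hypothetical):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.ChangeMessageVisibilityRequest;

public class SnoozeSketch {
    public static void snooze(String receiptHandle) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        // Hypothetical queue URL
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo";

        // Instead of deleting the message, keep it "in flight" for 60 more
        // seconds; SQS redelivers it to a consumer after the timer expires.
        sqs.changeMessageVisibility(new ChangeMessageVisibilityRequest()
                .withQueueUrl(queueUrl)
                .withReceiptHandle(receiptHandle)
                .withVisibilityTimeout(60));
    }
}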
Note, too, that a single SQS queue can have multiple producers and/or multiple consumers, as long as all the producers ask for and consumers provide identical services, but there is no intrinsic concept of which consumer or which producer. There is no consumer-to-producer backwards communication channel, and no mechanism for a producer to inquire about the status of an earlier message -- the design assumption is that once SQS has received a message, it will be delivered,² so no such mechanism should be needed.
¹at least once. Unless the queue is a FIFO queue, SQS will typically deliver the message exactly once, but there is no absolute guarantee that the message will not be delivered more than once. Because SQS is a massive, distributed system that stores redundant copies of messages, it is possible in some edge-case conditions for messages to be delivered more than once. FIFO queues avoid this possibility by leveraging stronger internal consistency guarantees, at the cost of throughput being limited to 300 TPS.
²it will be delivered assuming of course that you actually have a consumer running. SQS does not block the producer, and will allow you to enqueue an unbounded number of messages waiting for a consumer to arrive. It accepts messages from producers regardless of whether there are currently any consumers listening. The messages are held until consumed or until the MessageRetentionPeriod (default 4 days, max 14 days) timer expires for each message, whichever comes first.
