I have an AWS SQS FIFO queue configured to deduplicate messages based on content. My rails app uses Shoryuken worker to get messages from SQS. Here is the worker code:
class MyJob
  include Shoryuken::Worker

  shoryuken_options queue: "myjobs-#{ENV['RAILS_ENV']}.fifo",
                    auto_delete: true,
                    body_parser: JSON

  def perform(message_meta, message_body)
    # do stuff
  end
end
As you can see, it's configured to automatically delete messages from the queue once they are received. But today something strange happened: I noticed the worker performing a large number of identical tasks. When I opened the SQS queue in the AWS Console, I saw a message that appeared to have been received by the worker many times. Here are its attributes; note the Receive Count:
Message ID: 9207017f-ad15-4de8-97c4-cf391c8f3840
Size: 1.3 KB
MD5 of Body: 55918bf431e31e4badae0720453aea35
Sent: 2018-12-11 10:40:53.978 GMT-08:00
First Received: 2018-12-11 10:40:54.045 GMT-08:00
Receive Count: 2654
Message Attribute Count: 0
Message Group ID: default
Message Deduplication ID: c5fb9acda5e3c9c82dc0ae3f0b1cff5bd7067d0cf942075c4c38dddd1fbc1ed1
Sequence Number: 37288893882837472512
Any idea how that could happen?
Platform details: Ubuntu, ruby 2.5.3, Rails: 5.2.2, Shoryuken: 4.0.2
Turns out the problem was with the queue's VisibilityTimeout setting. By default it is 30 seconds, but messages would often reach the receiver side outside the allowed 30-second window, which meant Shoryuken failed to delete the received message from the queue with the following error:
ERROR: Could not delete 0, code: 'ReceiptHandleIsInvalid', message:
'The receipt handle has expired', sender_fault: true
The solution is to increase the VisibilityTimeout. I set it to the maximum allowed value of 12 hours, and that resolved the issue.
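For reference, the queue attribute can also be changed programmatically. A minimal sketch, assuming the aws-sdk-sqs gem; the queue URL, region, and account ID are placeholders, and the API call is guarded behind an env var because it needs real AWS credentials:

```ruby
# 12 hours expressed in seconds, the maximum SQS allows
VISIBILITY_TIMEOUT_SECONDS = 12 * 60 * 60

if ENV['RUN_AWS_EXAMPLE']
  require 'aws-sdk-sqs'

  sqs = Aws::SQS::Client.new(region: 'us-east-1')
  # VisibilityTimeout must be passed as a string of seconds
  sqs.set_queue_attributes(
    queue_url: 'https://sqs.us-east-1.amazonaws.com/123456789012/myjobs-production.fifo',
    attributes: { 'VisibilityTimeout' => VISIBILITY_TIMEOUT_SECONDS.to_s }
  )
end
```

A smaller value sized to your worst-case processing time may be a safer choice than the maximum, since a 12-hour timeout also delays redelivery of genuinely failed messages.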
More about VisibilityTimeout:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
The thread that put me on the right track:
https://github.com/aws/aws-sdk-java/issues/705
Related
I am building a slack application that will schedule a message when someone posts a specific type of workflow in a channel.
It will schedule a message, and if someone from a specific group of users replies before it has sent, it will delete the scheduled message.
Unfortunately, these messages are still sending, even though the list of scheduled messages is empty and the response when deleting the message is a successful one. I am also deleting the message within the 60-second limit that is noted in the API docs.
Scheduling the message gives me a success response, and if I use the list scheduled messages I get:
[
{
id: 'MESSAGE_ID',
channel_id: 'CHANNEL_ID',
post_at: 1620428096, // 2 minutes in the future for testing
date_created: 1620428026,
text: 'thread_ts: 1620428024.001300'
}
]
Canceling the message:
async function cancelScheduledMessage(scheduled_message_id) {
const response = await slackApi.post("/chat.deleteScheduledMessage", {
channel: SLACK_CHANNEL,
scheduled_message_id
})
return response.data
}
response.data returns { "ok": true }
If I use the list scheduled message API to retrieve what is scheduled I get an empty array []
However, the message will still send to the thread.
Is there something I am missing? I have the proper scopes set up and the API calls appear to be working.
If it helps, I am using AWS Lambda, and DynamoDB to store/retrieve the thread_ts and message IDs.
Thanks all.
For messages due in 5 minutes or less, chat.deleteScheduledMessage has a bug (as of November 2021) [1]. Although the API call may return OK, the message will still be delivered because of the bug.
Note that for messages due within 60 seconds, the API does return a proper error code, as described in the documentation [2]. In the range between 60 seconds and roughly 5 minutes, the call returns OK but fails behind the scenes.
Until this bug is fixed, the only workaround is to delete only messages scheduled 5 minutes or more out (the exact threshold may vary, according to Slack). This is not ideal and may not be feasible for some applications.
[1] Private communication with Slack support.
[2] https://api.slack.com/methods/chat.deleteScheduledMessage
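A workaround sketch (written in Ruby to match the rest of this page; the 5-minute threshold is the one reported by Slack support and may vary): only attempt deletion when post_at is far enough in the future for chat.deleteScheduledMessage to actually take effect.

```ruby
# Hypothetical guard around chat.deleteScheduledMessage: deletion is only
# reliable when the scheduled send time is comfortably outside the buggy
# window. The threshold is an assumption based on Slack support's answer.
SAFE_DELETE_WINDOW_SECONDS = 5 * 60

def safely_deletable?(post_at, now = Time.now.to_i)
  post_at - now >= SAFE_DELETE_WINDOW_SECONDS
end
```

If this returns false, assume the message will still be sent even if the API answers `ok: true`.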
We are using EventBridge to add a scheduled task to our SQS queue once per minute. So far it properly puts messages into the queue, but although we schedule it for once per minute, the queue only receives messages once every five (sometimes six) minutes.
The metrics seem to state invocation is happening; however, the queue isn't receiving them in the time frame specified.
Considerations
SQS FIFO Queue - Deduplication
Constant JSON String
The "duh" of not seeing messages at the prescribed interval is this, from the AWS documentation:
The token used for deduplication of sent messages. If a message with a
particular message deduplication ID is sent successfully, any messages
sent with the same message deduplication ID are accepted successfully
but aren’t delivered during the 5-minute deduplication interval
Open to suggestions and will be looking for a workaround.
Update
I tried using an Input Transformer to fix this by adding the time as a uniquely changing item in the queue message; however, I'm still not getting below 5 minutes.
Variable Input
{"addedOn":"$.time"}
Message
{"AddedOn":<addedOn>}
The polling built into the SQS console just wasn't refreshing my view once the count went above 10 messages. Once I deleted the old messages, the timing was correct and the queue was updating once per minute.
The answer: if you are going to use a constant string, it will only work for scheduled jobs that run more than 5 minutes apart.
Adding info here, despite the redundancy with the question, for linked Google searches:
The token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval
Despite each event being unique (fired once per minute), the Constant (JSON text) body was not unique, so deduplication still removed it as a duplicate.
To solve this, I switched to an Input Transformer.
Example event, showing which fields you can use as variables:
{
"version": "0",
"id": "7bf73129-1428-4cd3-a780-95db273d1602",
"detail-type": "EC2 Instance State-change Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2015-11-11T21:29:54Z",
"region": "us-east-1",
"resources": [
"arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
],
"detail": {
"instance-id": "i-0123456789",
"state": "RUNNING"
}
}
I needed a unique variable, so time was an obvious choice.
Input for Input Transformer
Input Path:
{"addedOn":"$.time"}
Template:
{"AddedOn":<addedOn>}
Documentation
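The behaviour above can be sketched in a few lines: with content-based deduplication enabled, SQS derives the deduplication ID from a SHA-256 hash of the message body, so a constant body produces the same ID every minute while a body carrying the event time does not. (Ruby here only for illustration; the hashing happens inside SQS.)

```ruby
require 'digest'

# With content-based deduplication, SQS uses a SHA-256 hash of the message
# body as the deduplication ID; identical bodies within the 5-minute
# deduplication interval collapse into a single delivery.
def content_dedup_id(body)
  Digest::SHA256.hexdigest(body)
end

constant_id = content_dedup_id('{"AddedOn":"constant"}')
minute_1_id = content_dedup_id('{"AddedOn":"2015-11-11T21:29:54Z"}')
minute_2_id = content_dedup_id('{"AddedOn":"2015-11-11T21:30:54Z"}')
# constant_id is identical for every send, so all but one message per
# 5-minute window is dropped; minute_1_id != minute_2_id, which is why
# the Input Transformer fix works.
```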
Also, moving to a standard (non-FIFO) queue is a potential solution if that's an easy option, for future SQS developers as well.
Setup: A flat feed (representing an artist) is followed by other flat feeds (representing users). Activities are added to the artist feed, with the expectation that those activities show on the feeds of all users following the artist and are then forwarded to a queue in AWS SQS.
Problem: Not all messages are appearing in SQS. We have extensive logging in place, and some messages never show up in either the queue or the Lambda that consumes it. This functionality was working last week (2020-04-22), and there have been no code changes in the meantime.
Debugging Notes:
Activities show in the artist and user feeds in the GetStream.io dashboard but not SQS
"Test SQS" button in GetStream.io dashboard generates a success message and the test message is visible in the queue and consuming lambda
Some messages are forwarded to SQS while others are not
Decoded sample message that did get through to SQS:
{'new': [{'actor': '30026dd3-c557-46a0-b1c3-20b6e2dc5e2d', 'foreign_id': 'social_spike_twitter:GlobalParticipant_30026dd3-c557-46a0-b1c3-20b6e2dc5e2d', 'id': '37114000-7f75-11ea-8080-8000273a69c4', 'object': '{"new_followers":100}', 'origin': 'participant_spike:GlobalParticipant_30026dd3-c557-46a0-b1c3-20b6e2dc5e2d', 'target': '', 'time': '2020-04-16T00:00:00.000000', 'verb': 'social_spike_twitter'}], 'deleted': [], 'deleted_foreign_ids': [], 'feed': 'user_mobile_push:ArtistProfile_19', 'app_id': 39400, 'published_at': '2020-04-22T18:11:58.160Z'}
As we know, we cannot read message contents on a Solace appliance; however, we can see the message ID.
I want to get the corresponding message details for a given message ID.
How can I do that?
As we know, we cannot read message contents on a Solace appliance; however, we can see the message ID.
This is not accurate.
In order to protect confidential data, management users cannot view the content of messages. However, application users (with the necessary permissions) can create a queue browser to view the contents of a message without deleting it.
I want to get the corresponding message details for a given message ID. How can I do that?
Use a queue browser to view the full contents of the message.
Alternatively, as a management user, you can view basic information.
solace> show queue myqueue message-vpn default messages detail
Name: myqueue
Message Id: 160443684
Date spooled: Jul 11 2016 12:34:02 UTC
Publisher Id: 19456
Sequence Number: n/a
Dead Message Queue Eligible: no
Content: 0.0000 MB
Attachment: 0.0001 MB
Replicated: no
Replicated Mate Message Id: n/a
Sent: no
Redeliveries: 0
I have an SMS app that hands each outgoing SMS to a Sidekiq worker, which then calls Twilio to actually send the message. The problem I'm running into is that messages over 160 characters sometimes get sent in the wrong order. I assume this is because Sidekiq runs the jobs concurrently. How do I solve this?
Previously, I would cycle through each 160 characters of a message and send each 160-character string to its own worker. This caused issues because the workers would be set up and run concurrently, so the pieces went out in the wrong order. To solve this, I moved the 160-character logic into the worker, which I believe fixed the ordering within a single message.
However, if multiple messages come through within 1-2 seconds, they are processed concurrently, so they can end up out of order again. How do I make sure Sidekiq processes the messages in the order I call the perform_async method? Here's my code:
# messages_controller.rb
SendSMSWorker.new.perform(customer.id, message_text, 'sent', false, true)

# send_sms_worker.rb
def perform(customer_id, message_text, direction, viewed, via_api)
  customer = Customer.find(customer_id)
  company = customer.company
  texts = message_text.scan(/.{1,160}/) # split the message up into 160-char pieces
  texts.each do |text|
    message = customer.messages.new(
      user_id: company.admin.id, # set the user_id to the admin's ID when using the api
      company_id: company.id,
      text: text,
      direction: 'sent',
      viewed: false,
      via_api: true
    )
    # send_sms returns nil if successful
    if error = message.send_sms
      customer.mark_as_invalid! if error.code == 21211
    else
      # only save the message if the SMS was successfully sent
      puts "API_SEND_MESSAGE company_id: #{company.id}, customer_id: #{customer.id}, message_id: #{message.id}, texts_count: #{texts.count}"
      message.save
      Helper.publish(company.admin, message.create_json_with_extra_attributes(true))
    end
  end
end
To be clear, message.send_sms is the method on the Message model that actually sends the SMS via Twilio. Thanks!
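On the ordering question itself, one common approach (a sketch, not something from this thread; the queue name is an assumption) is to route these jobs to a dedicated queue processed by a Sidekiq process with a concurrency of 1, so jobs are dispatched one at a time in enqueue order:

```yaml
# hypothetical sidekiq.yml for a dedicated single-threaded SMS process
:concurrency: 1
:queues:
  - sms
```

Note that this only serializes dispatch on your side; it does not guarantee the order in which the carrier delivers the messages.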
If you're sending multiple messages, each message takes its own route to the destination carrier. Even if they're sent in the correct sequence, there's no guarantee they'll arrive at the handset in the same order. A way to overcome this is to use concatenated messages of up to 1600 characters (in the US). If you send a long message via the Messages resource, it will be received as a single long message. Just make sure you're using the Messages resource:
@client.account.messages.create()
instead of
@client.account.sms.messages.create()
You can read more here:
https://www.twilio.com/help/faq/sms/does-twilio-support-concatenated-sms-messages-or-messages-over-160-characters
http://twilio-ruby.readthedocs.org/en/latest/usage/messages.html
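As a back-of-the-envelope sketch of the arithmetic (the 153-character figure is the standard GSM-7 payload of each part of a concatenated SMS, a general SMS fact rather than something from the links above):

```ruby
SINGLE_SMS_LIMIT     = 160 # GSM-7 characters in a single-part SMS
CONCAT_SEGMENT_LIMIT = 153 # each concatenated part loses 7 chars to the UDH header

# How many SMS segments a given GSM-7 text will occupy on the wire.
def segment_count(text)
  return 1 if text.length <= SINGLE_SMS_LIMIT

  (text.length.to_f / CONCAT_SEGMENT_LIMIT).ceil
end

segment_count('a' * 160)  # => 1
segment_count('a' * 1600) # => 11 parts, reassembled by the handset into one message
```

This is why handing Twilio the whole body in one Messages create call is safer than splitting it yourself: the parts carry concatenation headers and the handset reassembles them, instead of independent messages racing each other through the carrier.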