SQS - why limit the maximum message size?

Is there any reason why I should set a lower-than-maximum limit for the Maximum message size in AWS SQS? I'm not able to find a good one...

SQS provides many features, like bulk message sends, delayed messages, polling, etc. Since it offers all of these features, it definitely needs to limit the message size. Here is how we handled it:
we check the message size, and if it is above 256 KB we upload the message to S3 with a unique ID as the file name and send { largeFile : true , id : (S3 file name) } on the queue. The consumer then checks whether largeFile is true and, if so, fetches the payload from S3 and processes it. Simple :) A minimal sketch of this pattern is shown below.
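For illustration only, here is a rough sketch of that claim-check approach, assuming the AWS SDK for Java v2; the bucket name, queue URL, and size threshold are placeholders, and error handling is omitted.

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class LargeMessageSender {
    private static final int MAX_SQS_BYTES = 256 * 1024;       // SQS hard limit on the body
    private static final String BUCKET = "my-large-messages";  // assumed bucket name
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder URL

    private final S3Client s3 = S3Client.create();
    private final SqsClient sqs = SqsClient.create();

    public void send(String body) {
        if (body.getBytes(StandardCharsets.UTF_8).length <= MAX_SQS_BYTES) {
            // Small enough: send the payload directly on the queue.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .messageBody(body)
                    .build());
            return;
        }
        // Too big for SQS: store the payload in S3 and send a pointer instead.
        String key = UUID.randomUUID().toString();
        s3.putObject(PutObjectRequest.builder().bucket(BUCKET).key(key).build(),
                RequestBody.fromString(body));
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl(QUEUE_URL)
                .messageBody("{\"largeFile\":true,\"id\":\"" + key + "\"}")
                .build());
    }
}

The consumer mirrors this: when it sees largeFile set to true, it fetches the object with that key from S3 before processing.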
Or, if you only want a queue, go with another message broker like RabbitMQ, which doesn't have such a size limit.

Related

SQS EventBridge Once Per Minute

We are looking at EventBridge to add a scheduled task to our SQS queue once per minute.
So far it properly puts messages into the queue, but when we schedule it for once per minute we notice the queue only gets messages once every five minutes, sometimes six.
The metrics seem to state that invocation is happening; however, the queue isn't receiving the messages in the time frame specified.
Considerations
SQS FIFO Queue - Deduplication
Constant JSON String
The "duh" of not seeing messages at prescribed interval is because of this in the AWS documentation:
The token used for deduplication of sent messages. If a message with a
particular message deduplication ID is sent successfully, any messages
sent with the same message deduplication ID are accepted successfully
but aren’t delivered during the 5-minute deduplication interval
Open to suggestions and will be looking for a workaround.
Update
I tried using the Input Transformer to fix it by adding the time as a uniquely changing item in the queue message; however, I'm still not getting below 5 minutes.
Variable Input
{"addedOn":"$.time"}
Message
{"AddedOn":<addedOn>}
The queue polling built into SQS just wasn't picking up my updated count once it was greater than 10 messages. Once I deleted the old messages, the timing was correct and it was updating once per minute.
The answer is: if you are going to use a constant string, it will only work for scheduled jobs whose interval is greater than 5 minutes.
Adding info here despite redundancy from question for linked Google searches:
The token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval
Despite the messages being unique events once per minute, the Constant (JSON text) input not being unique meant they were still seen as duplicates and removed.
To solve it, I switched to the Input Transformer.
Here is an example event, and the other fields you can use as variables:
{
  "version": "0",
  "id": "7bf73129-1428-4cd3-a780-95db273d1602",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2015-11-11T21:29:54Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
  ],
  "detail": {
    "instance-id": "i-0123456789",
    "state": "RUNNING"
  }
}
I needed a unique variable so time was an obvious choice.
Input for Input Transformer
Input Path:
{"addedOn":"$.time"}
Template:
{"AddedOn":<addedOn>}
Documentation
Also, I found that moving away from FIFO queues is a potential solution, if that's an easy option, for future SQS developers as well.
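For completeness, here is a rough sketch of attaching that Input Transformer to the rule target programmatically, assuming the AWS SDK for Java v2; the rule name, target id, and queue ARN are placeholders, and sqsParameters is only needed while the target is still a FIFO queue.

import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.InputTransformer;
import software.amazon.awssdk.services.eventbridge.model.PutTargetsRequest;
import software.amazon.awssdk.services.eventbridge.model.SqsParameters;
import software.amazon.awssdk.services.eventbridge.model.Target;

import java.util.Map;

public class OncePerMinuteTarget {
    public static void main(String[] args) {
        EventBridgeClient events = EventBridgeClient.create();

        // Map $.time from the event into <addedOn>, so every message body is unique.
        InputTransformer transformer = InputTransformer.builder()
                .inputPathsMap(Map.of("addedOn", "$.time"))
                .inputTemplate("{\"AddedOn\":<addedOn>}")
                .build();

        events.putTargets(PutTargetsRequest.builder()
                .rule("every-minute")                      // assumed rule name
                .targets(Target.builder()
                        .id("sqs-target")
                        .arn("arn:aws:sqs:us-east-1:123456789012:my-queue.fifo") // placeholder ARN
                        .inputTransformer(transformer)
                        // Only required while the target is a FIFO queue.
                        .sqsParameters(SqsParameters.builder()
                                .messageGroupId("scheduler")
                                .build())
                        .build())
                .build());
    }
}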

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered by timestamp.
Here is the scenario.
There are 3 clients. A, B and C.
1)
All clients register the 'figure-1' listener to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp").queryStarting(atValue: startTime).observe(.childAdded, with: { snapshot in
    // do work for the messages: print, save to storage, etc.
    // ...
    // save startTime to storage for the next open.
    startTime = max(timeOfSnapshot, startTime)
    saveToStorage(startTime)
})
2)
Client A write message 1 to server with ServerValue.timestamp().
Client B write message 2 to server with ServerValue.timestamp().
Client C write message 3 to server with ServerValue.timestamp().
They sent the messages at almost exactly the same moment.
All clients have fast Wi-Fi.
So, finally, the server data is saved like 'figure-2':
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As shown in my listener's code, I keep messages in storage together with the next listening timestamp, to prevent downloading duplicate messages.
In this case, does Firebase always guarantee to trigger the callbacks in order, like below?
Message 1
Message 2
Message 3
If it is not guaranteed, my strategy is absolutely wrong.
For example, some client receives messages like below:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client no longer has a chance to get messages 1 and 2.
I think that if some nodes already exist, Firebase would trigger the events in order for those, because that is the role of the 'queryOrdered' functionality.
However, what if there are no nodes before registering the listener and new nodes are added afterwards? What will happen?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet1 for message1
Packet2 for message2
Packet3 for message3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times; however, I could not find a clear answer.
I have spent almost a week on this. Please give me a piece of advice.
The order in which the data for a query is returned is consistent, and determined by the server. So all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away though, before the data even reaches the database server.
In figure 2 it is actually quite simple: since each node has a unique timestamp, they will be returned in the order of that timestamp. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.
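As a rough illustration only (Firebase Android SDK, Java, mirroring the Swift listener above): child-added events for an ordered query arrive in that (timestamp, key) order, and persisting the checkpoint only after a message has been processed is the asker's own strategy; startTime, the "messages" path and saveToStorage are placeholders.

import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class MessageListener {
    private final DatabaseReference ref =
            FirebaseDatabase.getInstance().getReference("messages"); // placeholder path
    private double startTime; // loaded from local storage on app start (placeholder)

    public void listen() {
        // Child-added events for this query fire in (timestamp, key) order.
        ref.orderByChild("timestamp").startAt(startTime)
           .addChildEventListener(new ChildEventListener() {
               @Override
               public void onChildAdded(DataSnapshot snapshot, String previousChildKey) {
                   // Process and persist the message first...
                   Double ts = snapshot.child("timestamp").getValue(Double.class);
                   if (ts != null) {
                       // ...then advance the checkpoint, so a crash cannot skip messages.
                       startTime = Math.max(ts, startTime);
                       saveToStorage(startTime);
                   }
               }
               @Override public void onChildChanged(DataSnapshot snapshot, String previousChildKey) {}
               @Override public void onChildRemoved(DataSnapshot snapshot) {}
               @Override public void onChildMoved(DataSnapshot snapshot, String previousChildKey) {}
               @Override public void onCancelled(DatabaseError error) {}
           });
    }

    private void saveToStorage(double checkpoint) { /* placeholder persistence */ }
}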

How to change Mule's IMAP mail fetch order?

I have an IMAP connection to fetch emails using Mule. I'm running into an issue.
Here are my 2 simple requirements:
I want to fetch emails in reverse order. (latest first)
Ignore SEEN messages but don't delete them.
I was looking at the code that Mule (3.3.1) uses:
org.mule.transport.email.RetrieveMessageReceiver.poll().
The code seems to be fetching messages from message 1.
348: Message[] messages = folder.getMessages(1, batchSize);
The messages fetched here are processed in a loop in:
org.mule.transport.email.RetrieveMessageReceiver.messagesAdded(MessageCountEvent)
142: if (!messages[i].getFlags().contains(Flags.Flag.DELETED)
143: && !messages[i].getFlags().contains(Flags.Flag.SEEN))
What this whole logic is doing is trying to read OLD unread messages. The code comes back to line 348 and executes
folder.getMessages(1, batchSize);
again, gets the same messages, and keeps on waiting. How can I change the fetch order?
FYI: Using MS Exchange for IMAP
I'm not sure why you say that Mule tries to read "OLD unread messages"? It actually just tries to read unread messages, i.e. those that are neither DELETED nor SEEN.
Anyway, theoretically the Mulesque way of sorting the messages would be to use a resequencer. Unfortunately the mail message receivers do not set any of the required control properties to let Mule process the received messages as a single batch, so that won't work.
So the only solution I can think of is to extend org.mule.transport.email.RetrieveMessageReceiver and register your custom version on the IMAP connector with a <service-overrides /> child element.
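Not Mule-specific, but here is a hedged sketch of the JavaMail calls such a custom receiver could use to fetch the newest messages first while skipping SEEN and DELETED ones; the host, credentials and batch size are placeholders.

import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

public class NewestFirstFetch {
    public static void main(String[] args) throws Exception {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imap");
        store.connect("imap.example.com", "user", "password"); // placeholders

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);

        int total = inbox.getMessageCount();
        int batchSize = 50;                                     // assumed batch size
        int start = Math.max(1, total - batchSize + 1);

        // Grab the last <batchSize> messages instead of starting at message 1...
        Message[] batch = inbox.getMessages(start, total);

        // ...and reverse them so the latest message comes first.
        List<Message> newestFirst = new ArrayList<>();
        Collections.addAll(newestFirst, batch);
        Collections.reverse(newestFirst);

        for (Message m : newestFirst) {
            // Same filter as the Mule receiver: skip SEEN and DELETED messages.
            if (!m.isSet(Flags.Flag.SEEN) && !m.isSet(Flags.Flag.DELETED)) {
                System.out.println(m.getSubject());             // process the unread message
            }
        }
        inbox.close(false);
        store.close();
    }
}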

Sending different bodies via the Amazon SES API

I am using the Amazon SES API for sending email to clients. It works very well, but I have to send a different body for each client. When I start sending mails to about 200,000 clients, what should the code below look like? Do I loop 200,000 times, or can I prepare an object and send it once (like an n:n system; right now it's 1:n)?
var clientList = new List<String>(); // 200,000 mail addresses
foreach (var to in clientList)
{
    SendEmailRequest email = new SendEmailRequest();
    email.Message = new Message();
    email.Message.Body = new Body();
    email.Message.Body.Html = new Content(bodyhtml);
    email.Message.Subject = new Content(subject);
    email.WithDestination(new Destination() { ToAddresses = new List<String>() { to } })
         .WithSource("mysite@mysite.com")
         .WithReturnPath("mysite@mysite.com");
    SendEmailResponse resp = client.SendEmail(email); // that's 1:n
}
SendEmailResponse resp = client.SendEmail(emailList); // that's n:n but it's a wrong usage
How can I do an n:n send with Amazon SES?
The application is ASP.NET MVC 3, so can I use an asynchronous controller? Is that a good idea?
Assuming you have production access for Amazon SES already (see What should I do after I'm finished testing and evaluating Amazon SES?) and a sufficiently increased Sending Quota to send 200,000 mails/day in the first place (see How Amazon SES Sets Sending Limits), the respective limits are documented for the SendEmail action:
The total size of the message cannot exceed 10 MB.
Amazon SES has a limit on the total number of recipients per message:
The combined number of To:, CC: and BCC: email addresses cannot exceed
50. If you need to send an email message to a larger audience, you can divide your recipient list into groups of 50 or fewer, and then call
Amazon SES repeatedly to send the message to each group. [emphasis mine]
Please note: It is strictly recommended to use Bcc: only for this kind of mass mailing operation, else your users will see their mail addresses exposed to each other and I can guarantee they won't be amused at all!
So you could prepare mails with 50 Bcc: recipients at a time, dropping the outbound mail count for your use case to about 4,000, which is a considerable improvement already. However, please note a respective AWS Team response to Increase sending limit, and question on FAQ:
if you're sending to multiple ISPs [...], I would recommend
sending to one address at a time since certain ISPs are sensitive
about multiple addresses on the BCC: line in large quantities. [emphasis mine]
Whether or not this warning applies depends on your use case as usual (e.g. you might be able to shard the mails by ISP etc.).
Doing it asynchronously is fine and likely useful, but you need to ensure you stay within your Maximum Send Rate (mails/second) limit as well. These limits are visible in the SES tab of the AWS Management Console, but they are available via the API as well, of course (see Monitoring Your Sending Limits for details).
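To make the 50-recipient batching concrete, here is a rough sketch in Java with the AWS SDK v2 SES client (the question's code is .NET, so treat this purely as an illustration of the grouping; the sender address, subject and body are placeholders).

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;

import java.util.List;

public class BatchedSender {
    private static final int BATCH_SIZE = 50;   // SES limit on recipients per message

    public static void sendInBatches(SesClient ses, List<String> recipients,
                                     String subject, String bodyHtml) {
        for (int i = 0; i < recipients.size(); i += BATCH_SIZE) {
            List<String> batch = recipients.subList(i, Math.min(i + BATCH_SIZE, recipients.size()));

            SendEmailRequest request = SendEmailRequest.builder()
                    .source("mysite@mysite.com")                 // placeholder sender
                    .destination(Destination.builder()
                            .bccAddresses(batch)                 // Bcc so recipients can't see each other
                            .build())
                    .message(Message.builder()
                            .subject(Content.builder().data(subject).build())
                            .body(Body.builder()
                                    .html(Content.builder().data(bodyHtml).build())
                                    .build())
                            .build())
                    .build();

            ses.sendEmail(request);                              // mind the Maximum Send Rate here
        }
    }
}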

How can I apply a timeout function to the LMAX Disruptor queue?

To developers/users of LMAX Disruptor http://code.google.com/p/disruptor/ :
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox message as NACKed (much needed, but where can it fit into the Disruptor design?)
Is there anyone who shares the same opinion?
Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time when a message could not be delivered?
The way it works with the Disruptor is that you have a ring buffer for inbound and a ring buffer for outbound messages... So a message comes in and you place it into the inbound ring buffer using an appropriate event. Then you process the message (i.e. decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer... Another handler takes the message and stores it in a database or sends it to another server using SMTP... If an error / timeout etc. occurs, you create an event in the inbound ring buffer signaling the error (NACK) and process that message. Does that make sense? A sketch of that NACK-on-timeout idea follows.
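As a rough sketch of that idea (nothing here is part of the Disruptor API beyond the RingBuffer claim/publish calls): an ACK handler clears pending entries, and a scheduled task publishes a hypothetical NACK event back into the inbound ring buffer for anything that has waited too long. The event class and field names are made up for illustration.

import com.lmax.disruptor.RingBuffer;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimeoutNacker {
    // Hypothetical inbound event carrying either an ACK or a NACK for a message id.
    public static class InboundEvent {
        public String messageId;
        public boolean nack;
    }

    private final Map<String, Long> pending = new ConcurrentHashMap<>(); // id -> send time (ms)
    private final RingBuffer<InboundEvent> inbound;
    private final long timeoutMs;

    public TimeoutNacker(RingBuffer<InboundEvent> inbound, long timeoutMs) {
        this.inbound = inbound;
        this.timeoutMs = timeoutMs;
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(this::expire, timeoutMs, timeoutMs, TimeUnit.MILLISECONDS);
    }

    public void sent(String messageId) {            // called when a message leaves the outbox
        pending.put(messageId, System.currentTimeMillis());
    }

    public void acked(String messageId) {           // called by the ACK handler
        pending.remove(messageId);
    }

    private void expire() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> e : pending.entrySet()) {
            if (now - e.getValue() > timeoutMs && pending.remove(e.getKey()) != null) {
                long seq = inbound.next();           // claim a slot in the inbound ring buffer
                try {
                    InboundEvent ev = inbound.get(seq);
                    ev.messageId = e.getKey();
                    ev.nack = true;                  // signal the timeout as a NACK event
                } finally {
                    inbound.publish(seq);
                }
            }
        }
    }
}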
