Sending different bodies via the Amazon SES API - asp.net-mvc

I am using the Amazon SES API for sending email to clients. It works very well, but I have to send a different body to each client. When I start sending mail to about 200,000 clients, what should the code below look like? Should it loop 200,000 times, or can I prepare one object and send it in a single call (an n:n system; right now it's 1:n)?
var clientList = new List<String>(); // 200,000 mail addresses
foreach (var to in clientList)
{
    SendEmailRequest email = new SendEmailRequest();
    email.Message = new Message();
    email.Message.Body = new Body();
    email.Message.Body.Html = new Content(bodyhtml);
    email.Message.Subject = new Content(subject);
    email.WithDestination(new Destination() { ToAddresses = new List<String>() { to } })
         .WithSource("mysite@mysite.com")
         .WithReturnPath("mysite@mysite.com");
    SendEmailResponse resp = client.SendEmail(email); // that's 1:n
}
SendEmailResponse resp = client.SendEmail(emailList); // that's n:n, but it's wrong usage
How can I implement n:n sending with Amazon SES?
The application is ASP.NET MVC 3, so could I use an asynchronous controller? Is that a good idea?

Assuming you already have production access for Amazon SES (see What should I do after I'm finished testing and evaluating Amazon SES?) and a sufficiently increased sending quota to send 200,000 mails/day in the first place (see How Amazon SES Sets Sending Limits), the respective limits are documented for the SendEmail action:
The total size of the message cannot exceed 10 MB.
Amazon SES has a limit on the total number of recipients per message: The combined number of To:, CC: and BCC: email addresses cannot exceed 50. If you need to send an email message to a larger audience, you can divide your recipient list into groups of 50 or fewer, and then call Amazon SES repeatedly to send the message to each group. [emphasis mine]
Please note: it is strongly recommended to use Bcc: only for this kind of mass-mailing operation, else your users will see their mail addresses exposed to each other, and I can guarantee they won't be amused at all!
So you could prepare mails with 50 Bcc: recipients at a time, dropping the outbound mail count for your use case to about 4,000, which is a considerable improvement already. However, please note a respective AWS Team response to Increase sending limit, and question on FAQ:
if you're sending to multiple ISPs [...], I would recommend sending to one address at a time since certain ISPs are sensitive about multiple addresses on the BCC: line in large quantities. [emphasis mine]
Whether or not this warning applies depends on your use case as usual (e.g. you might be able to shard the mails by ISP etc.).
Doing it asynchronously is fine and likely useful, but you need to ensure you stay within your maximum send rate (mails/second) as well. These limits are visible in the SES tab of the AWS Management Console, but of course also available via the API (see Monitoring Your Sending Limits for details).
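Putting the pieces together, a rough sketch based on the question's own code (the batch size of 50 follows the recipient limit above; the LINQ batching and the fixed Thread.Sleep throttle are illustrative assumptions, and you'd read your real maximum send rate via the API):
// Batch the 200,000 recipients into Bcc: groups of 50, reducing
// the number of SendEmail calls to roughly 4,000.
const int batchSize = 50; // recipient limit per message
for (int i = 0; i < clientList.Count; i += batchSize)
{
    var batch = clientList.Skip(i).Take(batchSize).ToList(); // needs System.Linq

    SendEmailRequest email = new SendEmailRequest();
    email.Message = new Message();
    email.Message.Body = new Body();
    email.Message.Body.Html = new Content(bodyhtml);
    email.Message.Subject = new Content(subject);
    // Bcc: keeps the recipients' addresses hidden from each other.
    email.WithDestination(new Destination() { BccAddresses = batch })
         .WithSource("mysite@mysite.com")
         .WithReturnPath("mysite@mysite.com");

    SendEmailResponse resp = client.SendEmail(email);

    // Crude throttle to stay below the account's maximum send rate;
    // in production, query the quota and pace the calls accordingly.
    System.Threading.Thread.Sleep(1000);
}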

Related

How to get Solace queue statistics from the SolClient API? c#

I am looking to retrieve some Solace queue stats, e.g. the current number of spooled messages out of the configured maximum, so that we can set a threshold and stop publishing more messages to the queue.
I would also like to subscribe to VPN events to track message discard rates.
By the time we receive errors such as MaxMsgUsageExceeded/SpoolOverQuota, it will be too late.
I can't seem to find any of these in the SolaceSystems.Solclient.Messaging API:
https://docs.solace.com/API-Developer-Online-Ref-Documentation/net/html/7f10bcf6-19f4-beff-0768-ced843e35168.htm
Would be great if someone could help (using C# for this).
To poll for Solace queue stats from your C# application, you could use legacy SEMP over the message bus to request the details you want. SEMP (Solace Element Management Protocol) is a request/reply protocol that uses an XML schema to identify all managed objects available in a message broker. Applications can use SEMP to manage and monitor a message broker.
To allow for legacy SEMP to be used over the message bus, as opposed to the management interface, it first needs to be enabled on the Solace PubSub+ message broker at the VPN level.
To publish a SEMP request with the Solace .Net Messaging API, perform the following steps:
1. Create a Session.
2. Create the message topic "#SEMP/<router name>/SHOW":
ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/<router name>/SHOW");
3. Create a request message and set its Destination to the topic in Step 2:
IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;
4. Set the SEMP request string as the binary attachment:
string SOLTR_VERSION = "8_4_0"; // change to the message broker's version
string SEMP_SHOW_QUEUE = "<rpc semp-version=\"soltr/" + SOLTR_VERSION + "\">" +
    "<show><queue><name>queueName</name><detail></detail></queue></show></rpc>";
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(SEMP_SHOW_QUEUE);
5. Call the SendRequest(…) method on Session; the SEMP response is returned in replyMsg:
IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, timeout);
6. Obtain the binary attachment data from the reply message:
replyMsg.BinaryAttachment
The binary attachment contains the SEMP reply for the command topic in the publish request.
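Assembled into one piece, a minimal sketch (assuming an already-connected ISession named session; the router name, queue name, timeout, and SolTR version are placeholders to replace):
using System.Text;
using SolaceSystems.Solclient.Messaging;

// Assumes "session" is an already-connected ISession.
const string SOLTR_VERSION = "8_4_0"; // match the broker's version
ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/<router name>/SHOW");

IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(
    "<rpc semp-version=\"soltr/" + SOLTR_VERSION + "\">" +
    "<show><queue><name>queueName</name><detail></detail></queue></show></rpc>");

IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, 10000 /* ms */);
if (rc == ReturnCode.SOLCLIENT_OK)
{
    // The reply's binary attachment holds the XML SEMP response.
    string sempReply = Encoding.UTF8.GetString(replyMsg.BinaryAttachment);
}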
The Solace PubSub+ message broker does raise an event when an egress message is discarded. However, it is only sent out approximately once every 60 seconds for the specified client, so it is not possible to get exact discard rates.
It is possible for your .NET application to subscribe to VPN-level events over the message-bus. To do this, you must first enable the Solace PubSub+ message broker to publish the events. You can then subscribe to the special topic and receive the events as messages.
The topic to subscribe to is:
#LOG/<level>/VPN/<routerName>/<eventName>/<vpnName>
The different levels can use the * wildcard. For example, if you wish to subscribe to all VPN events of all levels for the VPN apple on router QA-NY1, the topic string would be:
#LOG/*/VPN/QA-NY1/*/apple
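From the .NET API, adding such a subscription could look like this (a small sketch, again assuming a connected ISession whose message event handler was registered when the session was created):
// VPN events for "apple" on router QA-NY1, at any level, arrive as
// ordinary messages on the session's message event handler.
ITopic vpnEvents = ContextFactory.Instance.CreateTopic("#LOG/*/VPN/QA-NY1/*/apple");
session.Subscribe(vpnEvents, true /* wait for confirmation */);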
SEMP (starting in v2) is a RESTful API for configuring, monitoring, and administering a Solace PubSub+ broker.
1. The Swagger page link is SEMP v2 API.
2. The Swagger metadata definitions URL is located at http://{solace-server-url}/SEMP/v2/config/spec
3. From Visual Studio, add a REST API Client.
4. In the configuration dialog, pass the Swagger metadata URL from step 2; for the code below I chose SolaceSemp as the value for the client namespace input.
5. Once you click OK, VS will create the client along with the models under the SolaceSemp namespace.
6. Start using the client as follows:
using SolaceSemp;
using Microsoft.Rest;

// Basic-auth credentials for the SEMP v2 management interface
var credentials = new BasicAuthenticationCredentials();
credentials.UserName = "place user name";
credentials.Password = "place password";

using (var client = new SolaceSempClient(credentials))
{
    // e.g. fetch broker metadata via the generated "about" operation
    var model = client.GetAboutApi();
}

SQS - why limit the maximum message size?

Is there any reason why I should set a lower-than-maximum limit for the maximum message size in AWS SQS? I'm not able to find a good one...
SQS provides many pros like bulk message sends, delayed messages, polling, etc. Since it offers all of these, it definitely needs to limit its message size. Here's how we handled it:
We check the message size, and if it is above 256 KB we upload the message body to S3 with a unique ID as the file name, and put a pointer on the queue such as { largeFile: true, id: (S3 file name) }. The consumer then checks whether largeFile is true; if so, it fetches the body from S3 and processes the data. Simple :)
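A minimal sketch of this claim-check pattern with the AWS SDK for .NET (the bucket/queue names and the exact pointer shape are illustrative assumptions):
using System;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SQS;

public class LargeMessageSender
{
    private const int MaxSqsBytes = 256 * 1024; // SQS hard limit per message
    private readonly IAmazonS3 _s3 = new AmazonS3Client();
    private readonly IAmazonSQS _sqs = new AmazonSQSClient();

    public async Task SendAsync(string queueUrl, string bucket, string body)
    {
        if (Encoding.UTF8.GetByteCount(body) <= MaxSqsBytes)
        {
            await _sqs.SendMessageAsync(queueUrl, body);
            return;
        }

        // Payload too large: store it in S3 and enqueue a pointer instead.
        var key = Guid.NewGuid().ToString();
        await _s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = bucket,
            Key = key,
            ContentBody = body
        });
        var pointer = JsonSerializer.Serialize(new { largeFile = true, id = key });
        await _sqs.SendMessageAsync(queueUrl, pointer);
    }
}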
Or, if you only want a queue, go with another message broker like RabbitMQ, which doesn't impose such a size limit.

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered based on their timestamp.
Consider the scenario below.
There are 3 clients. A, B and C.
1)
All clients register the figure-1 listener to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp").queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        // do work for the messages: print, save to storage, etc.

        // save startTime to storage for the next open.
        startTime = max(timeOfSnapshot, startTime)
        saveToStorage(startTime)
    })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They sent the messages at practically the same moment, and all clients are on fast Wi-Fi.
So, finally, the server data is saved as in figure-2:
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As shown in my listener code, I keep messages in storage along with the next listening timestamp, to avoid downloading duplicate messages.
In this case.
Does Firebase always guarantee that the callback is triggered in order, like below?
Message 1
Message 2
Message 3
If that is not guaranteed, my strategy is absolutely wrong.
For example, some client might receive the messages like this:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client then has no chance to ever get messages 1 and 2.
I think that if some nodes already exist, Firebase will trigger in order for those, because that is the role of the queryOrdered functionality.
However, what happens when there were no nodes before the listener was registered, and new nodes are added afterwards?
I suppose Firebase might send 3 packets to the clients (no matter how quickly a message arrives, Firebase has to send it out as soon as it arrives):
Packet 1 for message 1
Packet 2 for message 2
Packet 3 for message 3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times, but I could not find a clear answer. I have spent almost a week on this; please give me a piece of advice.
The order in which the data for a query is returned is consistent and determined by the server, so all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away though, before the data even reaches the database server.
In figure-2 it is actually quite simple: each node has a unique timestamp, and they will be returned in the order of that timestamp. But even if they had the same timestamp, they'd be returned in the same order (timestamp first, then key) for each client.
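In other words, the effective ordering behaves like a comparison on (timestamp, key). A small C# sketch of that rule, purely illustrative and not Firebase's actual implementation:
// Order by the queried child value first, then by key; this mirrors
// the rule described above, it is not Firebase's code.
static int CompareNodes((long timestamp, string key) a, (long timestamp, string key) b)
{
    int byTimestamp = a.timestamp.CompareTo(b.timestamp);
    return byTimestamp != 0 ? byTimestamp : string.CompareOrdinal(a.key, b.key);
}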

Twilio IP Messaging - get the last message index on REST API

Using the twilio-ruby package to connect to the REST API for Twilio's IP Messaging service and attempting to compute an unread message count.
The REST API paginates the messages, so something like
channel.messages.list.last.index
will return 49 once there are more than 50 messages in the channel.
Is there a way to get just the last message on the channel (as seems to be possible in the android/ios SDK) to avoid paginating through all message history?
In regards to computing an unread message count, take a look at the Message Consumption Horizon and subtract lastConsumedMessageIndex from (the total number of messages in the list - 1).
For the messages list (in Python):
https://www.twilio.com/docs/api/ip-messaging/rest/messages#list-all-messages
# Download the Python helper library from twilio.com/docs/python/install
from twilio.rest.ip_messaging import TwilioIpMessagingClient
# Your Account Sid and Auth Token from twilio.com/user/account
account = "ACCOUNT_SID"
token = "AUTH_TOKEN"
client = TwilioIpMessagingClient(account, token)
service = client.services.get(sid="SERVICE_SID")
channel = service.channels.get(sid="CHANNEL_ID")
messages = channel.messages.list()
See also Sending a Consumption Report (the example is in JavaScript):
// determine the newest message index
var newestMessageIndex = activeChannel.messages.length ?
    activeChannel.messages[activeChannel.messages.length - 1].index : 0;
// check if we need to set the consumption horizon
if (activeChannel.lastConsumedMessageIndex !== newestMessageIndex) {
    activeChannel.updateLastConsumedMessageIndex(newestMessageIndex);
}

How many direct messages does Twitter store?

I've read the Twitter REST API docs; I know they say you can fetch 200 at a time up to a max of 800. However... I can't. I'm pulling 200, using the last tweet as max_id, and then sending another request, but I only receive the last tweet from the first request, not the remainder of my supposed 800 limit.
So I did a little research and found that when I sent more direct messages from other accounts, my existing direct messages were disappearing (i.e., if I had 200 received messages from an account called "sup" and I sent 5 messages from an account called "foo", "sup" would only show 195 direct messages and "foo" would show 5). Those 5 messages would disappear from "sup" both in the Twitter DM window and in the API calls.
I'm using Twython for this, but I don't believe switching back to requests would change anything, as I can visibly see the messages disappearing from the chat log. Does that mean Twitter only stores 200 total DMs, or am I doing something completely wrong?
This is the code I was using to pull direct messages. Keep in mind that I still don't know how to explain the DMs disappearing in the Twitter DM console.
test_m = twitter.get_direct_messages(count=200)
for i, x in enumerate(test_m):
    print('dm number = ' + str(i) + ' | dm id = ' + str(x['id']) + ' | text = ' + x['text'])
m_id = test_m[-1]['id']
test_m_2 = twitter.get_direct_messages(count=200, max_id=m_id)
This code returns test_m as an array of 200 items and test_m_2 as an array of 1 item, containing the last element of test_m.
Edit: Well, no response yet, but I should add that this method successfully returns more than 200 items for the other API calls I've made (user timeline, mentions timeline, retweets). From my testing I have to assume that only 200 incoming messages are stored by Twitter across all DM interactions. If I'm wrong, let me know!
Brian,
Twitter stores more than the last 200 messages. If you delete one of the direct messages using destroy_direct_message, you can then access one additional old direct message. Deleting 100 old direct messages will give you access to an additional 100 messages, etc.
I can't make max_id or page work either; not sure if the bug is in Twython or Twitter ;-(
JJ
Currently, the API says you can get up to the latest 3,200 tweets of an account, but only the latest 200 received direct messages (direct_messages endpoint) from a conversation, or the latest 800 sent direct messages (direct_messages/sent endpoint).
To answer your question, I do not think there is a limit on the number of direct messages stored by Twitter. Recently, I was able to retrieve a complete conversation with more than 17,000 direct messages (and all the uploaded media) using this tool that I created for this purpose.
