Handle AWS SQS failure scenario?

I understand AWS SQS is highly reliable, but there is still a chance that the network between our server and the AWS datacenter gets disconnected from time to time.
Is there any way or tool to prevent this kind of error, e.g. by caching the request locally and resending it once the network is available again?

The AWS SDK has built-in retries to handle transient errors - you can configure the retry policy based on your needs. This should handle the most common types of network errors when dealing with AWS.
If the AWS service you depend on is down for longer than the retry policy will handle, then you need to decide how to handle that - you could fall back to other AWS region(s) (it's very unlikely the service is down in multiple regions), emit failures to your service callers, cache locally, drop the requests, or something else entirely.
Handling failure cases is highly variable and depends on the use case and needs of the system - there's definitely no "one right way". As a note, I've used each of the failure modes suggested above on different systems I've built that depend on AWS.
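For illustration, here is a minimal sketch of tuning the retry policy with the AWS SDK for Java v1; the queue URL, region, and retry count are placeholders of mine, not values from the question:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.retry.PredefinedRetryPolicies;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class SqsWithRetries {
    public static void main(String[] args) {
        // Retry transient failures (network blips, 5xx responses) up to
        // 10 times using the SDK's default exponential backoff.
        ClientConfiguration config = new ClientConfiguration()
                .withRetryPolicy(PredefinedRetryPolicies
                        .getDefaultRetryPolicyWithCustomMaxRetries(10));

        AmazonSQS sqs = AmazonSQSClientBuilder.standard()
                .withClientConfiguration(config)
                .withRegion("us-east-1")
                .build();

        // Once retries are exhausted, sendMessage throws, and the caller
        // must apply one of the fallbacks discussed above (another region,
        // a local cache, dropping the request, ...).
        sqs.sendMessage(
                "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
                "hello");
    }
}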

Related

Spring cloud data flow distributed processing

How does Spring Cloud Data Flow take care of distributed processing? If the server is deployed on PCF and say there are 2 instances, how will the input data be distributed between these 2 instances?
Also, how are failures handled when deployed on PCF? PCF will spawn a new instance for a failed one. But will it also take care of redeploying the stream, or is manual intervention required there?
You should make the distinction between what the Spring Cloud Dataflow documentation calls "the server" and the apps that make up a managed stream.
"The server" is only here to receive deployment requests and honor them, spawning apps that make up your stream(s). If you deploy multiple instances of "the server", then there is nothing special about it. PCF will front it with a LB and either instance will handle your REST requests. When deploying on PCF, state is maintained in a bound service, so there is nothing special here.
If you're rather referring to "the apps", i.e. deploying a stream with some or all of its parts using more than one instance, e.g.
stream create foo --definition "time | log"
stream deploy foo --properties "app.log.count=3"
then by default it's up to the binder implementation to choose how to distribute data. This often means round-robin balancing.
If you want to control how data pertaining to the same conceptual domain object ends up on the same app instance, you should tell Dataflow how to do so. Something like
stream deploy bar --properties "app.x.producer.partitionKeyExpression=<someDomainConcept>"
As for handling failures, I'm not sure what you're asking. The deployed apps are the stream. Once a request to have that many instances of the stream components has been sent to and received by PCF, it will take care of honouring that request. It's out of the hands of Dataflow at that point, and this is exactly why the boundary for the Spring Cloud Deployer contract has been set there (same for other runtimes).

Sharing data between Elastic Beanstalk web and worker tiers

I have a platform (based on Rails 4/Postgres) running on an auto scaling Elastic Beanstalk web environment. I'm planning on offloading long running tasks (sync with 3rd parties, delivering email etc) to a Worker tier, which appears simple enough to get up and running.
However, I also want to run periodic batch processes. I've looked into using cron.yml and the scheduling seems pretty simple, however the batch process I'm trying to build needs to access the data from the web application to be able to work.
Does anybody have an opinion on the best way of doing this? Either a shared RDS database between the web and worker tiers, or perhaps a web service that the worker tier can access?
Thanks,
Dan
Note: I've added an extra question, which more broadly describes my requirements, as it struck me that this might not be the best approach.
What's the best way to implement this shared batch process with Elastic Beanstalk?
Unless you need a full relational database management system (RDBMS), consider using S3 for shared persistent data storage across your instances.
Also consider Amazon Simple Queue Service (SQS):
SQS is a fast, reliable, scalable, fully managed message queuing service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.
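If you go the SQS route, here is a rough sketch of the idea with the AWS SDK for Java (the queue URL and payload are made up, and Java is used purely for illustration). Note that on a Beanstalk worker environment the sqsd daemon normally does the polling for you and POSTs each message to your app over HTTP:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class WebToWorkerQueue {
    // Hypothetical queue shared by the web and worker tiers.
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/batch-jobs";

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // Web tier: enqueue a reference to the work, not the data itself;
        // large payloads belong in S3 or the shared RDS database.
        sqs.sendMessage(QUEUE_URL, "{\"job\":\"sync-contacts\",\"userId\":42}");

        // Worker tier: long-poll for messages, delete each one once processed.
        ReceiveMessageRequest request = new ReceiveMessageRequest(QUEUE_URL)
                .withWaitTimeSeconds(20)
                .withMaxNumberOfMessages(10);
        for (Message m : sqs.receiveMessage(request).getMessages()) {
            System.out.println("processing: " + m.getBody());
            sqs.deleteMessage(QUEUE_URL, m.getReceiptHandle());
        }
    }
}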

Background Tasks in Spring (AMQP)

I need to handle a time-consuming and error-prone task (e.g., invoking a SOAP endpoint that will trigger the delivery of an SMS) whenever a given endpoint of my REST API is invoked, but I'd prefer not to make my users wait for that before sending a response back. Spring AMQP is already part of my stack, so I thought about leveraging it to establish a "work queue" and have a number of worker processes consuming from the queue and taking care of the "work units". I have, however, the following requirements:
A work unit is guaranteed to be delivered, and delivered to exactly one worker.
Should a work unit fail to be completed for any reason, it must be placed back in the queue so that another worker can pick it up later.
Work units survive server reboots and crashes. This is mandatory because I won't be using a DB of any kind to store them.
I know RabbitMQ and Spring AMQP can be configured in such a way that ensures these three requirements, but I've only ever used it to achieve RPC so I don't know much about anything other than that. Is there any example I might follow? What are some of the pitfalls to watch out for?
When creating queues, RabbitMQ gives you two options: transient or durable. A durable queue (with persistent messages) survives broker restarts, and an unacknowledged message is redelivered, so it stays available until you acknowledge it. Messages also won't expire if you do not give the queue a TTL. For starters, you can enable the RabbitMQ management plugin and play around a little.
But if you really want to guarantee the safety of your messages against hard resets or hardware problems, I guess you need to use a RabbitMQ cluster.
See RabbitMQ Clustering; you can find the high-availability topic on the right side of that page.
This guy explains how to set up a cluster.
By the way, I like beanstalkd too. You can make it write messages to disk, and they will be safe except for disk failures.
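To make the three requirements concrete, here is a minimal Spring AMQP sketch under my own assumptions (queue name, listener logic): a durable queue, persistent messages, and a listener whose exceptions cause the message to be requeued for another worker.

import org.springframework.amqp.core.MessageDeliveryMode;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WorkQueueConfig {

    // Durable queue: survives broker restarts (requirement 3, together
    // with the persistent messages below).
    @Bean
    public Queue workQueue() {
        return QueueBuilder.durable("sms.work").build();
    }

    // Producer side: mark each message persistent so it is written to disk.
    public void enqueue(RabbitTemplate template, String payload) {
        template.convertAndSend("sms.work", payload, message -> {
            message.getMessageProperties()
                    .setDeliveryMode(MessageDeliveryMode.PERSISTENT);
            return message;
        });
    }

    // Consumer side: with the default AUTO acknowledge mode the container
    // acks only after this method returns. Throwing an exception rejects
    // the message and requeues it for another worker (requirements 1 and 2;
    // AMQP delivers each message to exactly one consumer).
    @RabbitListener(queues = "sms.work")
    public void handle(String payload) {
        callSoapEndpoint(payload); // may throw -> the message is requeued
    }

    private void callSoapEndpoint(String payload) {
        // hypothetical SOAP call that triggers the SMS delivery
    }
}

One pitfall to watch out for with this setup: a message that always fails will be redelivered forever, so in practice you'd add a dead-letter queue or a retry limit.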

What is a good practice to achieve the "Exactly-once delivery" behavior with Amazon SQS?

According to the documentation:
Q: How many times will I receive each message?
Amazon SQS is engineered to provide “at least once” delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.
Is there any good practice to achieve exactly-once delivery?
I was thinking about using DynamoDB “Conditional Writes” as a distributed locking mechanism but... any better ideas?
Some reference to this topic:
At-least-once delivery (Service Behavior)
Exactly-once delivery (Service Behavior)
FIFO queues are now available and provide ordering and exactly-once processing out of the box.
https://aws.amazon.com/sqs/faqs/#fifo-queues
Check your region for availability.
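As a hedged sketch with the AWS SDK for Java (queue URL and IDs are hypothetical): the deduplication ID suppresses duplicate sends within the 5-minute dedup interval, and the group ID controls ordering.

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class FifoSend {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // FIFO queue names must end in ".fifo".
        SendMessageRequest request = new SendMessageRequest()
                .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo")
                .withMessageBody("{\"orderId\":\"o-123\"}")
                // Re-sends with the same deduplication ID within the
                // 5-minute interval are accepted but not enqueued again.
                .withMessageDeduplicationId("o-123")
                // Messages in the same group are delivered in order.
                .withMessageGroupId("orders");
        sqs.sendMessage(request);
    }
}

Note this covers exactly-once enqueueing; on the consumer side you still need idempotent processing, as the answers below point out.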
The best solution really depends on exactly how critical it is that you not perform the action suggested in the message more than once. For some actions, such as deleting a file or resizing an image, it doesn't really matter if it happens twice, so it is fine to do nothing. When it is more critical to not do the work a second time, I use an identifier for each message (generated by the sender), and the receiver tracks dups by marking the ids as seen in memcached. Fine for many things, but probably not if life or money depends on it, especially if there are multiple consumers.
Conditional writes sound like a clever solution, but it has me wondering if perhaps AWS isn't such a great fit for your problem if you need a bulletproof exactly-once solution.
Another alternative for distributed locking is a Redis cluster, which can also be provisioned with AWS ElastiCache. Redis supports transactions, which guarantee that concurrent calls will get executed in sequence.
One of the advantages of using a cache is that you can set expiration timeouts, so if your message processing fails, the lock will be released when it times out.
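A minimal sketch of that locking idea with the Jedis client (key name, value, and timeout are my assumptions): a single SET with NX and EX acquires a per-message lock that releases itself if the worker dies.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisMessageLock {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String messageId = "msg-42"; // hypothetical SQS message id

            // NX: only set if the key is absent; EX 300: auto-release after
            // 5 minutes in case the worker crashes before finishing.
            String result = jedis.set("lock:" + messageId, "worker-1",
                    SetParams.setParams().nx().ex(300));

            if ("OK".equals(result)) {
                System.out.println("lock acquired, processing " + messageId);
            } else {
                System.out.println("another consumer has it, skipping " + messageId);
            }
        }
    }
}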
In this blog post the usage of a low-latency control database like Amazon DynamoDB is also recommended:
https://aws.amazon.com/blogs/compute/new-for-aws-lambda-sqs-fifo-as-an-event-source/
Amazon SQS FIFO queues ensure that the order of processing follows the message order within a message group. However, it does not guarantee only once delivery when used as a Lambda trigger. If only once delivery is important in your serverless application, it’s recommended to make your function idempotent. You could achieve this by tracking a unique attribute of the message using a scalable, low-latency control database like Amazon DynamoDB.
In short: we can put or update an item in a DynamoDB table with the condition expression attribute_not_exists (for put) or if_not_exists (for update); please check the example here
https://stackoverflow.com/a/55110463/9783262
If we get an exception during the put/update operation, we return from the Lambda without further processing; otherwise, we process the message (https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-idempotent/).
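For example (table and attribute names are made up), with the DynamoDB Document API for Java, the conditional put throws ConditionalCheckFailedException when the message ID was already recorded, and the caller can skip reprocessing:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.PutItemSpec;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;

public class IdempotencyGuard {

    private final Table table = new DynamoDB(
            AmazonDynamoDBClientBuilder.defaultClient()).getTable("ProcessedMessages");

    // Returns true if this message id has not been seen before.
    public boolean claim(String messageId) {
        try {
            table.putItem(new PutItemSpec()
                    .withItem(new Item().withPrimaryKey("messageId", messageId))
                    // Fails if an item with this key already exists.
                    .withConditionExpression("attribute_not_exists(messageId)"));
            return true;  // first delivery: go ahead and process
        } catch (ConditionalCheckFailedException e) {
            return false; // duplicate delivery: return without processing
        }
    }
}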
The following resources were helpful for me too:
https://ably.com/blog/sqs-fifo-queues-message-ordering-and-exactly-once-processing-guaranteed
https://aws.amazon.com/blogs/aws/introducing-amazon-sns-fifo-first-in-first-out-pub-sub-messaging/
https://youtu.be/8zysQqxgj0I

Amazon SDB - PUTS per second limit explained?

I believe the max rate of PUT requests to Amazon's SimpleDB is 300 per second?
What happens when I throw 500 or 1,000 requests at it? Are they queued on the Amazon side, do I get 504s, or should I build my own queuing server on EC2?
The max request volume is not a fixed number, but a combination of factors. There is a per-domain throttling policy but there seems to be some room for bursting requests before throttling kicks in. Also, every SimpleDB node handles many domains and every domain is handled by multiple nodes. The load on the node handling your request also contributes to your max request volume. So you can get higher throughput (in general) during off-peak hours.
If you send more requests than SimpleDB is willing or able to service, you will get back a 503 HTTP code. 503 Service Unavailable responses are business as usual and should be retried. There is no request queuing going on within SimpleDB.
If you want to get the absolute max available throughput, you have to be able to (or have a SimpleDB client that can) micromanage your request transmission rate. When the 503 response rate reaches about 10%, you have to back off your request volume and subsequently build it back up. Also, spreading the requests across multiple domains is the primary means of scaling.
I wouldn't recommend building your own queuing server on EC2. I would try to get SimpleDB to handle the request volume directly. An extra layer could smooth things out, but it won't let you handle higher load.
I would use the work done at Netflix as an inspiration for high throughput writes:
http://practicalcloudcomputing.com/post/313922691/5-steps-simpledb-performance
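The micromanaged transmission rate described above boils down to backing off on 503s. A rough, self-contained sketch (the SimpleDB PUT itself is stubbed out, and the retry counts and delays are illustrative):

public class BackoffPut {

    // Hypothetical exception standing in for a 503 Service Unavailable reply.
    static class ServiceUnavailableException extends RuntimeException {}

    // Retries a SimpleDB PUT with exponential backoff on 503 responses.
    static void putWithBackoff(Runnable putRequest) throws InterruptedException {
        long delayMillis = 100;
        for (int attempt = 0; attempt < 8; attempt++) {
            try {
                putRequest.run(); // stand-in for the actual SimpleDB PUT
                return;           // success
            } catch (ServiceUnavailableException e) {
                Thread.sleep(delayMillis); // back off, then try again
                delayMillis *= 2;
            }
        }
        throw new IllegalStateException("SimpleDB still throttling after retries");
    }
}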
