Split Heroku Web Workers by URL - ruby-on-rails

Firstly: I realise in an ideal world I could achieve this using SOA. Humour me :)
Background
Imagine I have a Rails app running on Heroku with very minimal traffic in terms of user requests; they can be happily served by 1 web dyno.
I also have a machine somewhere in the world which is regularly and repeatedly submitting large files to my application via http://example.com/api/bigupload as fast as it is able.
The large files eat up my web dynos and so the user experience is bad. I increase the web dynos, but the large file uploads continue to tie them all up in long requests.
Question
Is there some way I can keep one worker in 'reserve' which will not respond to the big upload requests and concentrate on serving user traffic for other URLs?
Note: I have a similar situation to this one where automated large image uploads are eating my requests and delaying users accessing the API, albeit on a larger scale.

I think you're effectively asking: "Is there a way to partition my web dynos so that only some respond to a certain subset of requests?"
The answer (today) is no, unfortunately. Heroku routes randomly across all your web dynos.
What web server are you running on your web dyno? Are you using a concurrent web server? If you're not, that may have a large impact (in that it won't tie the dyno up nearly as much).
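If you're not, a concurrent server such as Puma is one option worth trying. A minimal sketch of a Heroku-style config follows; the worker and thread counts are purely illustrative, not a recommendation for this particular app:

```ruby
# config/puma.rb - a minimal concurrent setup (illustrative numbers)
workers Integer(ENV.fetch("WEB_CONCURRENCY", "2"))     # forked processes per dyno
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", "5"))
threads max_threads, max_threads                       # min, max threads per process

preload_app!
port ENV.fetch("PORT", "3000")
environment ENV.fetch("RACK_ENV", "production")
```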
Have you explored a different architecture where, instead of your other app submitting the big uploads directly, it submits pointers to the big payloads? That way your web dyno can simply dump them on a queue, and your workers can fetch the payloads and process them - and then you can scale by increasing the number of workers.
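As a rough sketch of that pointer-plus-queue idea, assuming Sidekiq; the controller, parameter and worker names are all made up for illustration:

```ruby
# app/controllers/api/big_uploads_controller.rb (illustrative)
class Api::BigUploadsController < ApplicationController
  def create
    # The client sends a pointer (e.g. an S3 URL) rather than the file itself,
    # so this request returns almost immediately and never ties up a web dyno.
    BigUploadWorker.perform_async(params.require(:payload_url))
    head :accepted
  end
end

# app/workers/big_upload_worker.rb (illustrative)
require "open-uri"

class BigUploadWorker
  include Sidekiq::Worker

  def perform(payload_url)
    file = URI.open(payload_url)  # fetch the payload on the worker dyno
    # ... process the file here, at whatever pace the worker can manage
  end
end
```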

Related

How to decrease the memory consumption of Sidekiq processes?

I have a server that launches 10 web applications that are almost identical (only assets and content are different). These applications use Sidekiq to send emails after a successful submission. The problem is memory usage: each process consumes 80-100MB of RAM.
I've already set concurrency: 1 for every project. Since the number of jobs is small, I want to somehow combine these processes into one. How can I do this? Is it a reliable solution? Or maybe I should be looking for memory leaks?
I'm not very experienced in this field, so any advice is welcome.
You could centralize all the email sending into a single Sidekiq process that sends out the emails for all of the applications.
Here's a good answer about the details of sharing sidekiq between various apps:
How to share worker among two different applications on heroku?
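The rough shape of that setup, assuming Sidekiq with one shared Redis instance; the EMAIL_REDIS_URL variable, queue name and worker class are illustrative:

```ruby
# config/initializers/sidekiq.rb in each of the 10 apps: act as a client only,
# pushing jobs to the one shared Redis instance.
Sidekiq.configure_client do |config|
  config.redis = { url: ENV["EMAIL_REDIS_URL"] }
end

# Wherever an app needs to send mail, enqueue by class name so the worker
# class only has to be defined in the central mailer app:
def enqueue_email(user_id)
  Sidekiq::Client.push("class" => "EmailWorker", "queue" => "mailers", "args" => [user_id])
end

# The central app defines EmailWorker and runs the single Sidekiq process:
#   bundle exec sidekiq -q mailers -c 1
```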

What are workers, dynos and zero-downtime deploys in Heroku

These three terms are given a lot of importance in Heroku tutorials about the different app servers, but I can't understand their purpose or definitions.
Can anybody share some information about them?
Thanks
The Heroku reference guide has a lot of information on all of this, and lots more, but in answer to your question;
A dyno is effectively a small virtual server instance set up to run one app (it's behind an invisible load balancer, so you can have any number of them running side-by-side). You don't need to worry about the server admin side of things, as it just takes your source code from a Git push and runs it.
A worker is a type of dyno, usually designed to process tasks in the background (in contrast to a web dyno, which just serves web pages). For example, Rails has Active Job, which plugs into something like Resque or Sidekiq and completes tasks that would slow down the web interface if it had to handle them itself, like sending e-mails or geocoding addresses.
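For instance, a minimal Active Job sketch; the mailer and job names are illustrative, and this assumes a Rails version with Active Job plus a queue backend such as Sidekiq:

```ruby
# app/jobs/welcome_email_job.rb
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user)
    # Runs on the worker dyno, so the web request that enqueued it
    # returns without waiting for the email to be sent.
    UserMailer.welcome(user).deliver_now   # UserMailer is illustrative
  end
end

# Enqueued from a controller running on the web dyno:
#   WelcomeEmailJob.perform_later(current_user)

# Procfile
#   web:    bundle exec puma -C config/puma.rb
#   worker: bundle exec sidekiq
```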
Zero-downtime deploys is really marketing speak for "if you push your code, it will wait until the new version is up and running before swapping the web interface over to it". It means you don't need to do anything, and your web app won't go offline while the switch happens.

Amazon Web Service Micro Instance - Server Crash

I am currently using an AWS micro instance as a web server for a website that allows users to upload photos. Two questions:
1) When looking at my CloudWatch metrics, I have recently noticed CPU spikes, the website receives very little traffic at the moment, but becomes utterly unusable during these spikes. These spikes can last several hours and resetting the server does not eliminate the spikes.
2) Although seemingly unrelated, whenever I post a link to my website on Twitter, the server crashes (i.e., "Error Establishing a Database Connection"). Once Apache and MySQL are restarted, the website returns to normal functionality.
My only guess is that the issue is somehow the result of deficiencies in the micro instance. Unfortunately, when I upgraded to a small instance, the site was actually slower, because micro instances can burst to two EC2 compute units while a small instance has only one.
Any suggestions?
If you want to stay in the free tier of AWS (micro instance), you should off load as much as possible away from your EC2 instance.
I would suggest you upload the images directly to S3 instead of going through your web server (see an example of this here: http://aws.amazon.com/articles/1434).
S3 can also be used to serve most of your static content (images, JS, CSS...) instead of your weak web server. You can also use these S3 files as the origin of an Amazon CloudFront (CDN) distribution to improve your application's performance.
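Shown in Ruby (the rest of this thread is Rails-focused), here is a rough sketch of direct-to-S3 uploads using the aws-sdk-s3 gem's presigned URLs; the bucket, key and region are illustrative, and the article linked above describes an equivalent browser-POST approach:

```ruby
require "aws-sdk-s3"
require "securerandom"

# Generate a short-lived URL the browser can PUT the photo to directly,
# so the bytes never pass through the micro instance.
signer = Aws::S3::Presigner.new(client: Aws::S3::Client.new(region: "us-east-1"))
upload_url = signer.presigned_url(
  :put_object,
  bucket: "my-photo-bucket",                 # illustrative bucket name
  key: "uploads/#{SecureRandom.uuid}.jpg",
  expires_in: 300                            # seconds
)
# Hand upload_url to the client; it uploads with a plain HTTP PUT.
```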
Another service that can help you off-load work is SQS (Simple Queue Service). Instead of handling everything inside online requests from users, you can send some events (an "upload done" notification, for example) as messages to SQS and have a reader process those messages at its own pace. This is a good way to handle momentary load caused by several users working with your service simultaneously.
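A small sketch of that pattern with the aws-sdk-sqs gem; the queue URL and message fields are illustrative:

```ruby
require "aws-sdk-sqs"
require "json"

sqs = Aws::SQS::Client.new(region: "us-east-1")

# After an upload completes, record the event instead of processing it inline;
# a separate reader drains the queue at its own pace.
sqs.send_message(
  queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/uploads-done",
  message_body: { photo_key: "uploads/abc.jpg", uploaded_at: Time.now.to_i }.to_json
)
```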
Another service is DynamoDB (a managed NoSQL DB service). You can move much of your current MySQL data and query load onto DynamoDB. Amazon DynamoDB also has a free tier that you can enjoy.
With the combination of the above, you can have your micro instance handling the few remaining dynamic pages until you need to scale your service with your growing success.
Wait… I'm sorry. Did you say you were running both Apache and MySQL Server on a micro instance?
First of all, that's never a good idea. Secondly, as documented, micros have low I/O and can only burst to 2 ECUs.
If you want to continue using a resource-constrained micro instance, you need to (a) put MySQL somewhere else, and (b) use something like Nginx instead of Apache as it requires far fewer resources to run. Otherwise, you should seriously consider sizing up to something larger.
I had the same issue: as far as I understand, the problem is that AWS throttles you when you exceed a predefined usage level. They allow a small burst, but after that things become horribly slow.
You can test this by logging in and doing something. If you use the CPU for a couple of seconds, the whole box becomes extremely slow, and after that you have to wait, without doing anything at all, for things to get back to "normal".
That was the main reason I went for a VPS instead of AWS.

Heroku | Different performance parameters for different parts of your application

I have a Rails 3 application hosted on Heroku with a pretty common configuration: a client-facing part of the application, say www.myapplication.com, and an admin part, admin.myapplication.com.
I want the client-facing part of my application to be fast, and I don't really care how fast the admin module is. What I do care about is that usage of the admin site does not slow down the client-facing part of the application.
Ideally the client side of the app will have 3 dedicated dynos, and the admin side will have 1 dedicated dyno.
Does anyone have any idea on the best way to accomplish this?
Thanks!
If you split the applications you're going to have to share the database connections between the two apps. To be honest, I'd just keep it as one single app and give it 4 dynos :)
Also, dynos don't increase performance; they increase throughput, so you're capable of dealing with more requests a second.
For example, roughly: if a typical page response takes 100ms, 1 dyno can process 10 requests a second. If you only have a single dyno and your app suddenly receives 10 requests per second, the excess requests will be queued until the dyno is freed up to process them. Requests that take longer than 30s will be timed out.
If you add a second dyno, requests are shared between the 2 dynos, so you'd now be able to process 20 requests a second (in an ideal world), and so on as you add more dynos.
And remember, a dyno is single-threaded, so if it's doing anything at all (rendering a page, building a PDF, even receiving an uploaded image) then it's busy and unable to process further requests until it's finished, and if you don't have any more dynos, those requests will be queued.
My advice is to split your application into its logical parts. Having a separate application for the admin interface is a good thing.
It does not have to be on the same domain as the main application. It could sit behind a global client IP restriction or just a simple global Basic Auth.
Why complicate things and stuff two things into one application? This also lets you experiment more with the admin part and redeploy it without affecting your users.
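If you go the Basic Auth route, here is a minimal sketch for the separate admin app using Rails' built-in helper (available from Rails 3.1); the environment variable names are illustrative:

```ruby
# app/controllers/application_controller.rb in the admin app
class ApplicationController < ActionController::Base
  # Every admin page sits behind HTTP Basic Auth; credentials are read
  # from the environment rather than being hard-coded.
  http_basic_authenticate_with name: ENV["ADMIN_USER"], password: ENV["ADMIN_PASSWORD"]
end
```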

Ruby on Rails: How would I handle 10 concurrent users? Do I need more CPU?

Sorry if this might seem obvious. I've observed that a web request on my Rails app uses 30-33% of the CPU every time. For example, if I load a web page, then 30% of the CPU is used. Does that mean that my box can only handle 3 concurrent web requests, and will stall (i.e. hit 100% CPU) if there are more than 3?
If so, does that also mean that if I want to handle more than 3 concurrent web requests, then I'll have to get more servers to handle the load using a load balancer? (e.g. to handle 6 concurrent web requests, I'll need 2 servers; for 9 concurrent requests, I'll need 3 servers; for 12, I'll need 4 servers -- and so on?)
I think you should start with load tests. I wouldn't trust manual testing that much.
Load tests tell you how long the response takes for each client, and how many clients simply time out. You will also be able to measure improvements objectively for any changes that you make.
Look at ab, or httperf; there are many other tools available.
Stephan
Your Apache or Nginx in front of Passenger will queue requests until a Passenger worker becomes available. You can limit the number of concurrent workers so your server never stalls (but new visitors will have to wait longer until it's their turn).
It's difficult to tell based on this information. It depends very much on the web server stack you're using and which environment you're running. Different servers (Mongrel, Webrick, Apache using various mechanisms, Unicorn) all have different memory characteristics. Different environments (development vs. test vs. production) all exhibit radically different memory usage characteristics.
