I have a Rails 3 application hosted on Heroku. It has a pretty common configuration: a client-facing part at www.myapplication.com and an admin part at admin.myapplication.com.
I want the client-facing part of my application to be fast, and I don't really care how fast the admin module is. What I do care about is that usage of the admin site must not slow down the client-facing part.
Ideally the client side of the app will have 3 dedicated dynos, and the admin side will have 1 dedicated dyno.
Does anyone have any idea on the best way to accomplish this?
Thanks!
If you split the applications you're going to have to share the database connection between the two apps. To be honest, I'd just keep it as one single app and give it 4 dynos :)
Also, dynos don't increase performance; they increase throughput, so you're capable of dealing with more requests a second.
For example,
Roughly: if a typical page response takes 100ms, 1 dyno could process 10 requests a second. If you only have a single dyno and your app suddenly receives more than 10 requests per second, the excess requests will be queued until the dyno is freed up to process them. Also, requests taking longer than 30s will be timed out.
If you add a second dyno, requests are shared between the 2 dynos, so you'd now be able to process 20 requests a second (in an ideal world), and so on as you add more dynos.
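For instance, that ideal-case arithmetic boils down to something like this (the figures are just the example numbers from above):

```ruby
# Back-of-the-envelope throughput for single-threaded dynos.
response_time_s  = 0.1                             # 100 ms per request
dynos            = 2
requests_per_sec = dynos * (1 / response_time_s)   # => 20.0 requests/second, ideally
```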
And remember a dyno is single-threaded, so if it's doing anything at all (rendering a page, building a PDF, even receiving an uploaded image), it's busy and unable to process further requests until it's finished, and if you don't have any more dynos, requests will be queued.
My advice is to split your application into its logical parts. Having a separate application for the admin interface is a good thing.
It does not have to be on the same domain as the main application. It could have a global client IP restriction or just a simple global Basic Auth.
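For instance, here's a minimal sketch of global Basic Auth for such a separate admin app, added in its config.ru (the app class name and environment variable names are placeholders):

```ruby
# config.ru of the standalone admin app: everything behind it requires credentials.
require ::File.expand_path('../config/environment', __FILE__)

use Rack::Auth::Basic, 'Admin' do |username, password|
  username == ENV['ADMIN_USER'] && password == ENV['ADMIN_PASSWORD']
end

run AdminApp::Application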
Why complicate things and stuff two things into one application? Keeping them separate also lets you experiment more with the admin part and redeploy it without affecting your users.
These three terms (dyno, worker, and zero-downtime deploys) are given a lot of importance in Heroku's app server tutorials, but I can't understand the purpose and definition of any of them.
Can anybody share some info about them?
Thanks
The Heroku reference guide has a lot of information on all of this and lots more, but in answer to your question:
A dyno is effectively a small virtual server instance set up to run one app (it's behind an invisible load balancer, so you can have any number of them running side-by-side). You don't need to worry about the server admin side of things, as it just takes your source code from a Git push and runs it.
A worker is a type of dyno, usually designed to process tasks in the background (in contrast to a web dyno, which just serves web pages). For example, Rails has ActiveJob, which plugs into something like Resque or Sidekiq and completes tasks that would slow down the web interface if it had to handle them itself, like sending e-mails or geocoding addresses.
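As an illustration, a minimal sketch of such a background job with ActiveJob (the job, model, and mailer names are made up; the queue adapter, e.g. Sidekiq or Resque, is configured separately):

```ruby
# Hypothetical job that moves slow e-mail sending off the web dyno.
class WelcomeEmailJob < ActiveJob::Base
  queue_as :default

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome(user).deliver_now   # the slow work runs on a worker dyno
  end
end

# Enqueued from a controller: WelcomeEmailJob.perform_later(current_user.id)
```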
Zero-downtime deploys is really marketing speak for "if you push your code, it will wait until the new version is up and running before swapping the web interface to use it". It means you don't need to do anything, and your web app won't go offline while it is being switched over.
Firstly: I realise in an ideal world I could achieve this using SOA. Humour me :)
Background
Imagine I have a rails app running on heroku with very minimal traffic in terms of user requests, they can be happily served by 1 web dyno.
I also have a machine somewhere in the world which is regularly and repeatedly submitting large files to my application via http://example.com/api/bigupload as fast as it is able.
The large files eat up my web dynos and so the user experience is bad. I increase the web dynos, but the large file uploads continue to tie them all up in long requests.
Question
Is there some way I can keep one worker in 'reserve' which will not respond to the big upload requests and concentrate on serving user traffic for other URLs?
Note: I have a similar situation to this one where automated large image uploads are eating my requests and delaying users accessing the API, albeit on a larger scale.
I think you're effectively asking: "Is there a way to partition my web dynos so that only some respond to a certain subset of requests?"
The answer (today) is no unfortunately. Heroku routes randomly across all your web dynos.
What web server are you running on your web dyno? Are you using a concurrent web server? If you're not, that may have a large impact (in that it won't tie the dyno up nearly as much).
Have you explored a different architecture where, instead of your other app submitting the big uploads directly, it submits pointers to the big payloads? That way your web dyno can simply dump them on a queue, and your workers can fetch the payloads and process them; you can then scale by increasing the number of workers.
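A rough sketch of that pattern, assuming Sidekiq as the queue backend (the worker class, model, and parameter names here are invented):

```ruby
require 'open-uri'

# The web dyno only enqueues a URL pointing at the payload;
# the actual download and processing happen on a worker dyno.
class BigUploadWorker
  include Sidekiq::Worker

  def perform(payload_url)
    file = URI.parse(payload_url).open   # fetch the big payload outside the web request
    BigUpload.process!(file)             # invented domain logic
  end
end

# In the controller handling /api/bigupload:
#   BigUploadWorker.perform_async(params[:payload_url])
#   head :accepted
```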
I am currently using an AWS micro instance as a web server for a website that allows users to upload photos. Two questions:
1) When looking at my CloudWatch metrics, I have recently noticed CPU spikes. The website receives very little traffic at the moment, but it becomes utterly unusable during these spikes, which can last several hours; restarting the server does not eliminate them.
2) Although seemingly unrelated, whenever I post a link to my website on Twitter, the server crashes (i.e., "Error Establishing a Database Connection"). Once I restart Apache and MySQL, the website returns to normal functionality.
My only guess would be that the issue is somehow the result of deficiencies of the micro instance. Unfortunately, when I upgraded to a small instance, the site was actually slower, due to the fact that micro instances can burst to two EC2 compute units.
Any suggestions?
If you want to stay in the free tier of AWS (micro instance), you should offload as much as possible away from your EC2 instance.
I would suggest uploading the images directly to S3 instead of going through your web server (see an example of this here: http://aws.amazon.com/articles/1434).
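As a rough sketch of the server-side piece of such a direct upload (assuming the aws-sdk-s3 gem; the region, bucket name, and key are placeholders):

```ruby
require 'aws-sdk-s3'
require 'securerandom'

# Generate a short-lived URL the browser can PUT the photo to directly,
# so the upload never touches the EC2 instance.
s3  = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-photos-bucket').object("uploads/#{SecureRandom.uuid}.jpg")

upload_url = obj.presigned_url(:put, expires_in: 15 * 60)
# Hand upload_url to the client and let it upload straight to S3.
```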
S3 can also be used to serve most of your static files (images, JS, CSS...) instead of your weak web server. You can also use these S3 files as the origin of an Amazon CloudFront (CDN) distribution to improve your application's performance.
Another service that can help you offload work is SQS (Simple Queue Service). Instead of doing everything within the online request from the user, you can send some events ("upload done", for example) as messages to SQS and have a reader process these messages at its own pace. This is a good way to handle momentary load caused by several users working with your service simultaneously.
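A minimal sketch of that pattern with the aws-sdk-sqs gem (the queue URL and message fields are placeholders):

```ruby
require 'aws-sdk-sqs'
require 'json'

sqs       = Aws::SQS::Client.new(region: 'us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/photo-jobs'

# Web tier: enqueue a small message instead of doing the work inside the request.
sqs.send_message(queue_url: queue_url,
                 message_body: { event: 'upload_done', photo_key: 'uploads/abc.jpg' }.to_json)

# Reader (cron job or long-running process): drain the queue at its own pace.
sqs.receive_message(queue_url: queue_url, max_number_of_messages: 10).messages.each do |msg|
  job = JSON.parse(msg.body)
  # ... process job['photo_key'] ...
  sqs.delete_message(queue_url: queue_url, receipt_handle: msg.receipt_handle)
end
```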
Another service is DynamoDB (a managed NoSQL DB service). You can move most of your current MySQL data and queries to DynamoDB. Amazon DynamoDB also has a free tier that you can enjoy.
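For example, a small sketch with the aws-sdk-dynamodb gem (the table and attribute names are made up, and the table is assumed to already exist):

```ruby
require 'aws-sdk-dynamodb'

db = Aws::DynamoDB::Client.new(region: 'us-east-1')

# Write a record that previously would have been a MySQL row.
db.put_item(table_name: 'photos',
            item: { 'photo_id' => 'abc123', 'user_id' => '42',
                    'url' => 'https://my-photos-bucket.s3.amazonaws.com/uploads/abc123.jpg' })

# Read it back by primary key.
photo = db.get_item(table_name: 'photos', key: { 'photo_id' => 'abc123' }).item
```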
With the combination of the above, you can have your micro instance handling the few remaining dynamic pages until you need to scale your service with your growing success.
Wait… I'm sorry. Did you say you were running both Apache and MySQL Server on a micro instance?
First of all, that's never a good idea. Secondly, as documented, micros have low I/O and can only burst to 2 ECUs.
If you want to continue using a resource-constrained micro instance, you need to (a) put MySQL somewhere else, and (b) use something like Nginx instead of Apache as it requires far fewer resources to run. Otherwise, you should seriously consider sizing up to something larger.
I had the same issue. As far as I understand, the problem is that AWS throttles you once you exceed a predefined usage level. This means they allow a small burst, but after that things become horribly slow.
You can test that by logging in and doing something. If you use the CPU for a couple of seconds then the whole box will become extremely slow. After that you'll have to wait without doing anything at all to get things back to "normal".
That was the main reason I went for VPS instead of AWS.
Up until now, I've always built my VPS dedicated to running just a single application in multiple instances, mostly with Unicorn. That way I could set up the whole environment to fit perfectly for that one specific application and be happy with it.
But now I need to build a VPS, that will host multiple small Ruby applications. Some of them will be Rails and some Sinatra. They will have basically zero traffic (under 100 visits per day), which means I don't even need multiple instances of a single app.
I don't really have experience with servers other than Unicorn + nginx, but what I think I need would look something like this:
1. Request to app1: it gets loaded into memory and serves the request.
2. Request to app2: it gets loaded into memory and serves the request.
3. Request to app3: there is not enough free memory.
4. app1 gets killed before app3 is booted to serve the request.
I know this isn't exactly a perfect scenario, but imagine having 10 or 20 small apps on a single server, where each app gets 5 hits a day. They don't exactly need to be up and running at all times.
As far as I know, Heroku does this with their free tier, where Dynos get killed after some idle time, and then they get loaded back up when a request comes in. That's basically what I need to do on my own server.
I would recommend using Apache + Passenger. Passenger by default loads the application only when you need it, e.g. the first request will take a bit longer (actually as long as it takes to load your framework).
If the application is idle for some predefined time, it will be removed from memory.
Setup is very easy, and adding a new application is just adding one line to your Apache configuration.
Sorry if this seems obvious. I've observed that a web request on my Rails app uses 30-33% of CPU every time. For example, if I load a web page, then 30% of the CPU is used. Does that mean that my box can only handle 3 concurrent web requests, and will stall (i.e. hit 100% CPU) if there are more than 3?
If so, does that also mean that if I want to handle more than 3 concurrent web requests, then I'll have to get more servers to handle the load using a load balancer? (e.g. to handle 6 concurrent web requests, I'll need 2 servers; for 9 concurrent requests, I'll need 3 servers; for 12, I'll need 4 servers -- and so on?)
I think you should start with load tests. I wouldn't trust manual testing that much.
Load tests tell you how long the response takes for each client, and how many clients simply time out.
Also you will be able to measure the improvements objectively for any changes that you make.
Look at ab, or httperf; there are many other tools available.
Stephan
Your Apache or nginx in front of Passenger will queue requests until a Passenger worker becomes available. You can limit the number of concurrent workers so your server never stalls (but new visitors will have to wait longer until it's their turn).
It's difficult to tell based on this information. It depends very much on the web server stack you're using and which environment you're running. Different servers (Mongrel, Webrick, Apache using various mechanisms, Unicorn) all have different memory characteristics. Different environments (development vs. test vs. production) all exhibit radically different memory usage characteristics.