Autoscale Rails app on Amazon EC2 - ruby-on-rails

Currently we are using a classic configuration: Nginx in front of the instances with 3 mongrels on each.
We want to autoscale our app.
So we need to either use Elastic Load Balancer + AutoScaling, or somehow update our Nginx config ourselves whenever AutoScaling launches new instances, so that Nginx can route traffic to them.
The problem with ELB is that it can't pass requests to several ports on an EC2 instance, only to one. So we can't run a bunch of mongrels on our instances to get more performance out of a single instance. The only way I see around this is to run HAProxy on each instance to proxy requests to a bunch of mongrels.
What should we do? Manually update the Nginx config, or use ELB and HAProxy on each working instance? Is there a better way to autoscale a Rails app on Amazon?

We use ELB + AutoScaler + LaunchConfig on Amazon Web Services to scale Rails applications.
Our configuration is Nginx + Passenger + Rails, and that needs only one port on the instances running Rails. So you can have any size instance you want, running as many Rails processes as it can handle; when the instances get busy, the AutoScaler kicks in and the ELB distributes load across all of them. Again, it only needs one port per instance.
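For illustration, the per-instance Nginx vhost can be as small as this (the paths are placeholders; Passenger spawns and manages the Rails processes behind the single listening port that the ELB targets):

    # Illustrative nginx + Passenger vhost -- the ELB only needs to reach port 80
    server {
        listen 80;
        server_name _;
        root /var/www/myapp/current/public;
        passenger_enabled on;
        passenger_min_instances 4;
    }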

Related

AWS ELB/ECS HTTP response headers changed

Some context here:
An old Symfony app runs on multiple EC2 instances and handles millions of requests each day without issues.
For dev purposes, the app was put into a container that developers use locally without having to install all the requirements. The dockerized app uses the same nginx/supervisor/php-fpm configs as the production EC2 instances.
To make some dev processes easier, it was decided to create multiple dev environments using AWS Fargate instead of EC2 instances.
The image is pushed to ECR and deployed to clusters using the FARGATE launch type.
The approach is perhaps overkill, since we have 1 cluster running only 1 service with 1 task. That service sits behind an ELB -> target group.
The application works fine, but after some time (hours or days), some requests come back with different headers. The response is JSON, but the content type is returned as HTML, and other headers are dropped from the response, like access-control-allow-headers, access-control-allow-credentials and access-control-allow-methods, triggering a CORS error in the client's browser.
The weird part is that if one page makes 10 requests to this service, 9 will work correctly but 1 will return 200 with different headers. That endpoint will then consistently behave the same way for every user until the task is restarted.
The response headers are set by the Symfony app. I also tried forcing those headers by adding them to the nginx config by default for every response, and the result is the same.
The docker image exposes port 80 to the service.
The load balancer has the rule to forward HTTPS (443) traffic to port 80, so traffic can reach the container.
The load balancer has HTTP/2 enabled.
The only notable difference besides the EC2/Fargate implementations is the load balancer: the production load balancer is an old Classic Load Balancer with only HTTP/1 enabled, and the new ones are Application Load Balancers using HTTP/2.
This is driving me crazy. Has anyone experienced something like this?
[Screenshots: incorrect headers vs. correct headers]
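One way to narrow down whether the ALB's HTTP/2 listener is involved (this is just a debugging idea, not something from the original post) is to request the affected endpoint through the ALB with each protocol version and compare the raw response headers; the URL below is a placeholder:

    # Dump only the response headers, once per protocol version
    curl -sS -o /dev/null -D - --http1.1 https://dev.example.com/api/endpoint
    curl -sS -o /dev/null -D - --http2   https://dev.example.com/api/endpoint

Note that HTTP/2 presents response header names in lowercase, which some clients and middleware handle differently from the mixed-case HTTP/1.1 form.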

Google Cloud Run - does each container instance get 443, if that is what I need?

I am trying to understand Google Cloud Run for deploying Docker containers on demand. I may have a load balancer at 443 and all that, but assuming there is no load balancer, will I be able to get 443 for all of, say, tens or hundreds of instances? Thanks!
It's serverless! It's mysterious and powerful!! In fact, you only have to worry about your code (here, your container with Cloud Run). You have to host a web server that answers plain HTTP requests (by default on port 8080, but you can change it), not HTTPS. That's all!!
Then deploy it. The deployment creates a service and a revision. Each new deployment creates a new revision (a unique combination of container + parameters; this way, if the new container and/or the new parameters of a revision break your service, you can easily roll back to a previous stable revision).
When you serve traffic, Cloud Run sits behind GFE (Google Front End), a Google-wide proxy in charge of SSL management (which is why you don't have to worry about HTTPS in your container) and of routing traffic to your Cloud Run revisions. The Cloud Run engine is in charge of instance creation (because Cloud Run scales to zero) and of load balancing the traffic between all the created instances. You have nothing to do; it's native.
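As a concrete illustration of "a web server on port 8080" (the file names, project ID and region below are placeholders, not part of the answer), any container along these lines will do:

    # app.py - minimal HTTP app; TLS is terminated by Google's front end, not the container
    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello from Cloud Run"

    if __name__ == "__main__":
        # Cloud Run tells the container which port to listen on via the PORT env var (default 8080)
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

    # Dockerfile
    FROM python:3.12-slim
    WORKDIR /app
    RUN pip install flask
    COPY app.py .
    CMD ["python", "app.py"]

    # Build, push and deploy (PROJECT_ID and region are placeholders)
    gcloud builds submit --tag gcr.io/PROJECT_ID/hello-run
    gcloud run deploy hello-run --image gcr.io/PROJECT_ID/hello-run \
        --platform managed --region us-central1 --allow-unauthenticated

Each deploy creates a new revision, and the service URL you get back is already served over HTTPS on 443 for every instance behind it.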
So, take it easy, that's the future for the developers!

How to deploy frontend and backend in the same Heroku application but different Docker images

I'm creating a web application that uses Angular for the front end and Flask for the back end. I have already dockerized my applications, but I'm not sure how to get them into Heroku as the same application.
I've been reading that some people have used a reverse proxy (meaning the two applications live in different Heroku apps and are connected through a proxy like Traefik or HAProxy). But I don't want to do this; I want them to be in the same application (example: grupo-camporota.herokuapp.com).
I was thinking I should push both images, one as a web dyno (front end) and the other as a worker (back end), but I've read that the worker dyno isn't meant for this, but rather for external APIs. I would like to upload both images to Heroku and have them communicate with each other.
I would like to know how to get this done (I'm pretty sure it's possible), since I'm kind of lost.
Your backend can't be a worker dyno: only web dynos can receive traffic from the internet. And one app will only get a single port to listen on, so you can't run two services on a single app.
You could serve your front-end up from your back-end as static files, but I don't think that will work with Docker. Also, Flask doesn't like to serve static files itself, so that may not be a good fit either.
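If you do want to experiment with the static-file route despite those caveats, a rough sketch (the dist/ folder and routes are assumptions, not from the question) could look like this:

    # app.py - Flask serving the built Angular bundle alongside the API
    from flask import Flask, send_from_directory

    app = Flask(__name__, static_folder="dist", static_url_path="")

    @app.route("/api/ping")
    def ping():
        return {"status": "ok"}

    @app.route("/")
    def index():
        # Serve the Angular entry point; Angular handles client-side routing
        return send_from_directory(app.static_folder, "index.html")

In a Docker-based deploy the Angular build output would have to be copied into the Flask image at build time.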
It also looks like you can't communicate between Docker containers using a private network on Heroku. You may just have to deploy two apps (or host your front-end on a more appropriate static host).

How to automatically start a Ruby on Rails app on server startup?

I am having trouble with an auto-scaling group in AWS. I am running a Ruby on Rails app on EC2 instances behind an ELB. I added an auto-scaling group so it scales up automatically when heavy traffic comes. However, the app server behind nginx does not start automatically on the new instances, so they become "OutOfService" in the ELB. Any solution?
You can either use Monit to monitor your services, in this case Nginx and/or the app server (Puma, Unicorn, Passenger, etc.), or use the operating system's own init facilities for this, such as Upstart.
https://www.digitalocean.com/community/tutorials/how-to-configure-a-linux-service-to-start-automatically-after-a-crash-or-reboot-part-1-practical-examples#auto-starting-services-with-upstart
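For example, a rough Upstart job for a Puma-based app could look like this (the paths, user and app server are assumptions, not from the question):

    # /etc/init/rails-app.conf -- start the Rails app server on boot and respawn it if it dies
    description "Rails application server"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    setuid deploy
    chdir /var/www/myapp/current
    env RAILS_ENV=production
    exec bundle exec puma -C config/puma.rb

On newer distributions the equivalent would be a systemd unit; either way, the point is that the app server must come up on boot by itself, before the ELB health check marks the instance OutOfService.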

What is the best practice for Nginx/ELB/Unicorn architecture on AWS?

We have a Ruby on Rails application in AWS Beijing. AWS Beijing does not have Route 53 (we can't use an Alias record to point the apex domain at the ELB), so we must use a front-end server running Nginx in front of the ELB.
Our current architecture looks like this:
Front-end (Nginx) -- ELB -- App-(1~n) (Nginx -- Unicorn)
We have noticed this passage in the Unicorn documentation:
"Unicorn must never be exposed to slow clients, as it will never ever use new-fangled things like non-blocking socket I/O, threads, epoll or kqueue. Unicorn must be used with a fully-buffering reverse proxy such as nginx for slow clients."
So my questions are:
1. Do we need Nginx in front of Unicorn on each app server?
2. If we remove Nginx from the app servers, can the Nginx on the front-end server provide the buffering effect that the Unicorn documentation describes?
I would recommend replacing the ELB with HAProxy in this scenario, where you don't have the Route 53 alias feature to point to your apex domain. Putting an Nginx instance in front of the ELB doesn't seem like a good idea, because you are adding a new layer just because you can't reference the ELB in DNS. You also lose the ELB's high availability by putting a single Nginx instance in front of it.
My suggestion is that you keep one Nginx instance on each of your app servers in front of Unicorn and use HAProxy as the load balancer: HAProxy > [Nginx > Unicorn]. A simple HAProxy setup doesn't give you the same availability as the ELB, but you can set up a highly available configuration if needed.
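A minimal sketch of that HAProxy layer, assuming two app servers whose Nginx listens on port 80 and exposes a health-check path (the addresses and path are illustrative):

    # /etc/haproxy/haproxy.cfg (sketch)
    frontend www
        bind *:80
        default_backend rails_app

    backend rails_app
        balance roundrobin
        option httpchk GET /health
        server app1 10.0.1.10:80 check
        server app2 10.0.1.11:80 check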
1) Nginx must always sit in front of Unicorn, because Unicorn can't deal with slow clients efficiently; it just gets locked up by them.
2) Never talk to Unicorn over the network; this means each app server needs its own Nginx. Nginx as a load balancer is also far better than the ELB black box.
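In practice that means Nginx and Unicorn live on the same box and talk over a local Unix socket; a sketch of such a per-server Nginx vhost (the socket path and document root are assumptions) looks like this:

    upstream unicorn {
        server unix:/var/www/myapp/shared/tmp/sockets/unicorn.sock fail_timeout=0;
    }

    server {
        listen 80;
        root /var/www/myapp/current/public;
        try_files $uri @unicorn;

        location @unicorn {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://unicorn;
        }
    }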
