Using Kubernetes with a Ruby on Rails application

I have a Ruby on Rails application running on AWS. As usual, each application server has nginx and multiple unicorn workers serving the application instance.
I am going to move this workload to Kubernetes. I have a couple of questions about this; please help if you have Kubernetised your Rails application.
What will be the role of nginx? Do I need to install nginx in every pod, or should I have a single nginx pod that reverse-proxies to all the Rails/unicorn pods?
Which is better for Rails on Kubernetes: Passenger or unicorn?

How will you use nginx?
A Kubernetes service can be backed by several pods. Whenever anyone makes a request to the service, the request is sent to one of the upstream pods in round-robin fashion.
If you were planning to use nginx as a load balancer or reverse proxy in front of your Rails app, you don't really need that anymore. Each pod will, of course, still need something like Passenger/unicorn to serve the Rails app.
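As a rough illustration, a minimal Service manifest for this could look like the sketch below; the name, ports and the app: rails label are placeholders I made up, not anything from the question:

    # Hypothetical sketch: a Service that round-robins across the Rails pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: rails
    spec:
      selector:
        app: rails        # matches every pod carrying this label
      ports:
      - port: 80          # port the Service listens on
        targetPort: 3000  # port unicorn/Passenger listens on inside the pod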
Here's a guide I found that walks through a Rails deployment from start to end: http://www.thagomizer.com/blog/2015/07/01/kubernetes-and-deploying-to-google-container-engine.html
If you're planning to use nginx as a static file server, my recommendation would be a separate pod for the static files that contains just nginx.
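Such a static-file pod could be declared with something like the following sketch; the image name my-registry/rails-assets is hypothetical and stands for an nginx image with your precompiled assets baked in:

    # Hypothetical sketch: a dedicated Deployment that only serves static files.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: static-assets
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: static-assets
      template:
        metadata:
          labels:
            app: static-assets
        spec:
          containers:
          - name: nginx
            image: my-registry/rails-assets:latest  # nginx + precompiled assets
            ports:
            - containerPort: 80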
What is better to use with k8s?
K8s doesn't really care; this is outside its concern. Use whatever you like, or whatever you think works better in a container environment. The better question to ask is which of Passenger/unicorn is a better fit for containerised Rails apps.

Related

Rails, Ember, Redis, nginx and docker

Colleagues, I have a front-end application based on Ember and Rails (running on nginx) which also uses Redis as a cache.
I want to dockerize this application, but I am not sure about best practices. Would it be best to create one container with a Dockerfile that pulls in all these pieces, or should each component be in its own container?
For bonus points: I have to retrieve the code from private Bitbucket repos, and how are we meant to store our secrets and other config files when using containers?
So, I'll try my best from a phone:
Secrets should be kept in environment variables, so you may need to update your application code to work with those.
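For example, a hypothetical initializer (the REDIS_URL variable name is a common convention, not something from the question) might look like:

    # config/initializers/redis.rb (hypothetical): read the connection URL
    # from an environment variable instead of a checked-in config file.
    require "redis"

    REDIS = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0"))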
As for dockerizing, I typically put the backend (Rails in this case) in one (or more) container(s), and nginx in a single container bundled with the single-page app (Ember in this case).
So you should have two Dockerfiles total.
Here are some resources that hopefully provide enough to get started:
Dotnet + react: https://github.com/sillsdev/appbuilder-portal/
Modern bleeding edge ember: https://gitlab.com/NullVoxPopuli/emberclear/
Old ember: https://gitlab.com/precognition-llc/aeonvera-ui
Rails: https://gitlab.com/precognition-llc/aeonvera
For nginx, that first link shows a .NET Core and React app with nginx and the deployment strategy I've described. You'll start with a Node container, or the ember-cli image from danlynn (who still hasn't responded to me about getting those into official Ember Docker images), and use multi-stage builds to copy your dist folder into a directory in the nginx container in the last stage.
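A rough sketch of such a multi-stage Dockerfile, assuming a standard npm build script that runs ember build (image tags and paths are placeholders):

    # Stage 1 (hypothetical): build the Ember app in a Node container.
    FROM node:18 AS build
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    RUN npm run build   # assumed to run "ember build --environment=production"

    # Stage 2: copy only the compiled dist/ into a plain nginx image.
    FROM nginx:stable
    COPY --from=build /app/dist /usr/share/nginx/html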
Hope this helps. I can clarify more if needed.

How to set up nginx as a reverse proxy for a REST microservice in Kubernetes?

I have a REST microservice and would like to set up nginx as a reverse proxy for it. I am a little confused about which approach to follow:
Run nginx in each pod where the application code is running.
Run nginx in separate pods and redirect HTTP requests to the application code running in other pods.
Can someone explain which one is better?
In my opinion, running nginx in a separate pod is the better option, because that way you can scale the application up and down separately from the proxy. Usually we run one proxy container and a few API containers.
Option 1 will work, but it looks like an inefficient way to do what you describe. Nginx is a highly capable server with a small footprint, and a single instance running in a separate pod can easily serve multiple applications.
So I think option 2 is the better option.
Running nginx separately has the following advantages:
Efficiency (saving resources and money), because a single nginx can serve multiple applications
The possibility to use other nginx capabilities in future (e.g. load balancing)
Maintainability: only a single pod to maintain, monitor and troubleshoot (e.g. upgrade rollouts, monitoring), and more
I have had a similar requirement. I used a single nginx in a separate pod to serve multiple (250) application deployments running in different pods, using the proxy_pass directive to get the job done.
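As a sketch of that setup (every name here is invented for illustration), the proxy pod can load its nginx config from a ConfigMap and proxy_pass to each application's Service:

    # Hypothetical ConfigMap holding the nginx config for the proxy pod;
    # app-a and app-b stand in for the per-application Services.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-proxy-conf
    data:
      default.conf: |
        server {
          listen 80;
          location /app-a/ {
            proxy_pass http://app-a.default.svc.cluster.local:8080/;
          }
          location /app-b/ {
            proxy_pass http://app-b.default.svc.cluster.local:8080/;
          }
        }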

When to use a multi-container docker in Elastic Beanstalk for running a Rails App?

I would like to deploy a Rails API app to AWS Elastic Beanstalk and noticed that there are two options for Docker:
Single container
Multi-container
I think a single container is enough for this app; however, I was wondering when the multi-container case applies. If I wanted to deploy two Rails apps (one an API app and the other an admin app) to a single EC2 instance, would that be the case?
Well, not really. Multi-container, as the name says, has more than one container within the overall definition (done with a Dockerrun.aws.json file). You can still deploy just one container with whatever application you want; say Django, a Python-based framework, where there's an API and an admin panel and it all sits within one application.
But you may want to deploy your application behind a reverse proxy, say Nginx, so there's a need for a second container. That's the case where you would use multi-container. The main advantage of multi-container is that the containers can talk to each other over a local network with DNS host mapping, so your Nginx container can proxy_pass to any application by its name, like just "backend", where the Rails or Django application is living.
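A minimal sketch of such a Dockerrun.aws.json (version 2 syntax; the image names and memory sizes are placeholders):

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "backend",
          "image": "my-registry/rails-api:latest",
          "essential": true,
          "memory": 512
        },
        {
          "name": "nginx",
          "image": "nginx:stable",
          "essential": true,
          "memory": 128,
          "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
          "links": ["backend"]
        }
      ]
    }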

Container delivery on Amazon ECS

I'm using Amazon ECS to auto-deploy my containers on UAT/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API container with nginx in the same container?
And do the same thing with the other front-end clients?
Or do I have to write a big task definition to bring together all my containers (db, nginx, php, api, clients)? :( That would mean redeploying all my infrastructure on each push to UAT/prod.
I'm very confused.
I would avoid including too much in a single container. Try and distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are the harder this is to get right.
Depending on your requirements I would look into using ELB instead of nginx, you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
Do not use ECS; it's too crude. I was using it as a platform for our staging/production environments and had odd problems during deployments: sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable and predictable option: the Docker Cloud service. It's a newer tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a node. Select the Amazon EC2 instance type and the storage, security group and other parameters. The new instance will contain the Docker software and a management container that handles messages from Docker Cloud (deploy, destroy and others).
Create a Stackfile, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options like deployment strategy, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1; there is a sketch after these steps.
Define ELB configurations in AWS for your new instances.
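A rough Stackfile sketch, assuming a single Rails service (the image name and counts are placeholders; target_num_containers, deployment_strategy and autorestart are options from the stack-yaml reference linked above):

    web:
      image: my-registry/rails-app:latest
      ports:
        - "80:3000"
      target_num_containers: 2
      deployment_strategy: high_availability
      autorestart: ALWAYS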
P.S. I'm not a member of the Docker team, and I like other AWS services :).
Here are my two cents on the topic. The question is not really related to ECS; it applies to anybody deploying their apps on Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link between the nginx and the API container. This will allow the nginx process to talk to the API container without the API container exposing its ports to the host.
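A hypothetical docker-compose sketch of that pairing (service and image names are placeholders):

    version: "2"
    services:
      api:
        image: my-registry/rest-api:latest
        # no ports: entry, so the API is not published on the host
      nginx:
        image: nginx:stable
        links:
          - api        # nginx can reach the API container as host "api"
        ports:
          - "80:80"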
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container keeps running, and they dynamically restart it if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes separately. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you will lose the advantage of auto-healing for the other process.
Also, moving from nginx to ELB is not a straightforward swap; you may have redirects and other things configured in nginx which are not available on ELB (as of this writing).
If you also need the ELB, there is no harm in forwarding the requests from the ELB to the nginx port.

Deploy two applications on the same domain on Heroku

I have a back-end API deployed on Heroku at
mydomain.com
The front end is an AngularJS application; I want to host it at the same URL so that I avoid the CORS restriction.
Is that possible?
The easiest ways to solve this:
By using multiple buildpacks on Heroku together with buildpack-nginx, you can have an nginx instance in your dynos that serves your static files and also passes requests to your backend server (unicorn) processes.
The frontend code has to live in the same repo as the backend code, or (as an alternative) be pulled out of a different repo during the build process.
Similar to the first solution, but without nginx: possible if you get Ruby/unicorn to serve your static JS files too.
Use Heroku's Docker support to build your own app image and deploy it.
All of the above combined :)
This would most likely include adding the Node.js buildpack to set up a proper build pipeline.
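A hedged sketch of what that buildpack setup could look like (buildpack names as published in Heroku's registry; the Procfile line follows the convention of the heroku-buildpack-nginx README, so check it against the current docs):

    heroku buildpacks:add heroku/nodejs           # builds the AngularJS assets
    heroku buildpacks:add heroku-community/nginx  # provides bin/start-nginx
    heroku buildpacks:add heroku/ruby             # last buildpack defines the app type

    # Procfile
    web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb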
