Rails deployment to production the right way

I have a Rails application that I develop on my local workstation and want to deploy to my Amazon AWS VPC following best practices. Currently, I give my web server and database server public IPs and SSH into these boxes to configure them. I am pretty sure this is nasty and want to explore better ways of doing this.
How should one correctly deploy code and database migrations to servers that sit within a private subnet on an AWS VPC? I have read that automation is key and that people should disable SSH and port 22 altogether, but I have no idea where to start with configuring servers without logging in via SSH.

There is no right answer.
Rails via Elastic Beanstalk is great, and it can be automated with CI.
Ansible, Puppet, or any configuration management tool would be an improvement.
The only thing that is safe to say: manual deployment is never best practice. It's prone to error and creates "user-specific knowledge". Best practice is to do anything that removes the manual process, even if that's doing it via SSH from CI.
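As a minimal sketch of what "SSH from CI" can mean in practice, a deploy job in a GitLab CI pipeline could look something like this; GitLab, the DEPLOY_SSH_KEY variable, the host, and the use of Capistrano are all assumptions here, and any CI system with protected secrets works the same way.

deploy_production:
  stage: deploy
  only:
    - main
  script:
    # DEPLOY_SSH_KEY is a hypothetical protected CI variable holding a private key
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_SSH_KEY" | ssh-add -
    # Capistrano handles checkout, bundle install, migrations and the app restart
    - bundle exec cap production deploy

The specific tool matters less than the fact that the deploy steps live in version control rather than in one person's head.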

Related

Replicating Heroku's Review Apps on AWS

I'm currently working for a client that is using Heroku and migrating to AWS. However, we're having trouble understanding how the Review Apps feature can be replicated on AWS.
Specifically, we want a Jenkins job that will allow us to specify a branch name and a set of environment variables. That job will then spin up our entire stack, so that the developer can test their changes in isolation, before moving to staging.
Our stack is 5 different Ruby on Rails applications, all of which must know each other's URLs, which does complicate things.
I'm told that tools like AWS Fargate or EKS might be suitable, but I'm not sure.
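For concreteness, the wiring we're after looks roughly like the hypothetical docker-compose-style sketch below; the service names, registry, and hostnames are made up, and the same environment-variable idea would apply to ECS task definitions or Kubernetes manifests.

# BRANCH and the URLs would come from the Jenkins job's parameters
services:
  storefront:
    image: registry.example.com/storefront:${BRANCH}
    environment:
      CHECKOUT_URL: https://checkout-${BRANCH}.review.example.com
  checkout:
    image: registry.example.com/checkout:${BRANCH}
    environment:
      STOREFRONT_URL: https://storefront-${BRANCH}.review.example.com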

Automated deployment of a dockerized application on a single machine

I have a web application consisting of a few services - web, DB and a job queue/worker. I host everything on a single Google VM and my deployment process is very simple and naive:
- I manually install all services, like the database, on the VM
- a bash script scheduled by crontab polls a remote git repository for changes every N minutes
- if there are changes, it simply restarts all services using supervisord (job queue, web, etc.)
Now, I am starting a new web project where I enjoy using docker-compose for local development. However, I seem to be stuck in analysis paralysis deciding between the available options for production deployment: I looked at Kubernetes, Swarm, docker-compose, container registries, etc.
I am looking for a recipe that will keep me productive with a single-machine deployment. Ideally, I should be able to scale it to multiple machines when the time comes, but simplicity and staying frugal (one machine) are more important for now. I want to consider 2 options: when the VM already exists and when a new bare VM can be allocated specifically for this application.
I wonder if docker-compose is a reasonable choice for a simple web application. Do people use it in production, and if so, what does the entire process look like, from bare VM to rolling out an updated application? Do people use Kubernetes or Swarm for a simple single-machine deployment, or is that overkill?
I wonder if docker-compose is a reasonable choice for a simple web application.
It can be, sure, if the development time is best spent focused on the web application and less on the non-web stuff such as the job queue and database. The other asterisk is whether the development environment works OK with hot reloads or port forwarding and that kind of jazz. I say it's a reasonable choice because 99% of the work of creating an application suitable for use in a clustered environment is the work of containerizing the application. So if the app already works under docker-compose, then there's a high likelihood you can take the Docker image that docker-compose builds and roll it out to a cluster.
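To make that concrete, a compose file for a stack like the one described might look like the sketch below; the image name, ports, and the Puma/Sidekiq commands are assumptions for illustration. The key point is that web and worker share one application image, and that same image is what you would eventually push to a registry and run on a cluster.

services:
  web:
    build: .
    image: myapp:latest              # hypothetical image name
    command: bundle exec puma
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  worker:
    image: myapp:latest              # same image, different command
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example     # placeholder only
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata: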
Do people use it in production
I hope not; I am sure there are people who use docker-compose to run in production, just like there are people that use Windows batch files to deploy, but don't be that person.
Do people use Kubernetes or Swarm for a simple single-machine deployment, or is that overkill?
Similarly, don't be the person who deploys the entire application on a single virtual machine, or be mentally prepared for one failure to wipe out everything you value. That's part of what clustering technologies are designed to protect against: one mistake taking down the entirety of the application's web, queuing, and persistence tiers in one fell swoop.
Now whether deploying Kubernetes for your situation is "overkill" or not depends on whether you get benefit from the other things Kubernetes brings aside from mere scaling. We get benefit from developer empowerment, log aggregation, CPU and resource limits, the ability to take down one Node without introducing any drama, secrets management, configuration management, and running a large number of hosted applications on a small number of Nodes (unlike creating a single virtual machine per deployed application because the deployments have no discipline over the placement of config files or ports or whatever). I could keep going, because Kubernetes is truly magical; but, as many people will point out, it is not zero human cost to run a cluster successfully.
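As a taste of the resource-limit and secrets points, a minimal Deployment manifest looks roughly like the sketch below; the image and secret names are placeholders, not anything from the question.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/myapp:1.0.0   # placeholder image
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
          envFrom:
            - secretRef:
                name: myapp-secrets   # created out of band, e.g. with kubectl create secret

The scheduler places Pods based on the requests and the kubelet enforces the limits, which is exactly the kind of guard rail a single hand-managed VM never gives you.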
Many companies I have worked with are shifting their entire production environment towards Kubernetes. That makes sense, because all cloud providers are currently pushing Kubernetes and we can be quite positive about Kubernetes being the future of cloud-based deployment. If your application is meant to run in any private or public cloud, I would personally choose Kubernetes as the operating platform for it. If you plan to add additional services, you will easily be able to connect them and scale your infrastructure with a growing number of requests to your application. However, if you already know that you do not expect to scale your application, a Kubernetes cluster may be overpowered for running it, although Google Cloud etc. make it fairly easy to set up such a cluster with a few clicks.
Regarding an automated development workflow for Kubernetes, you can take a look at my answer to this question: How to best utilize Kubernetes/minikube DNS for local development

How to simply use docker for deployment?

Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want a Kubernetes cluster or Docker Swarm to deploy hundreds of microservices. I just want to replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future, if the need for more containers grows to the point where manual handling no longer makes sense.
Essentially, the direct app dependencies (language, runtime, libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as should the entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
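To be explicit about the layout I have in mind, the sketch below is roughly it; the image and database names are made up, and host networking is just one way (on a Linux host) to let the container reach the host's Postgres while nginx proxies to the app's port.

services:
  app:
    image: registry.example.com/myapp:latest   # hypothetical image
    restart: unless-stopped
    network_mode: host    # app listens on localhost:3000; nginx and Postgres stay on the host
    environment:
      DATABASE_URL: postgres://myapp@localhost:5432/myapp_production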
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of $$ for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after the image has been pushed to a registry by the CI) could already do the job, though probably pretty unreliably compared to the current way.
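In GitLab CI terms, I'm picturing something like the sketch below; the registry, server hostname, and container name are made up, and the SSH key would live in a protected CI variable.

stages:
  - build
  - deploy

variables:
  IMAGE: registry.example.com/myapp:$CI_COMMIT_SHORT_SHA   # hypothetical registry/image

build_image:
  stage: build
  script:
    # assumes a runner that can talk to a Docker daemon
    - docker build -t $IMAGE .
    - docker push $IMAGE

deploy:
  stage: deploy
  only:
    - main
  script:
    # pull the new image on the server and swap the running container
    - ssh deploy@myserver.example.com "docker pull $IMAGE && (docker rm -f myapp || true) && docker run -d --name myapp -p 3000:3000 $IMAGE"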
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, Travis CI, etc.)
2) Can I use the official Docker images available on Docker Hub (https://hub.docker.com/), or do I need to build my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc.)?
That really only focuses on the continuous integration part of your question. Once you figure this out, you can start to think about how you want to deploy those images (possibly even using one of the tools above).
Note: Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying container mindset, does best is shift many of those issues to the left. This means you see many of the problems in your processes early, instead of having them appear when you're pushing to prod and suddenly have a fire drill. Again, Docker should not be seen as a cure-all; if you go into Docker thinking it will solve everything, you're setting yourself up for failure.

How could Travis CI prepare the test environment for Ruby on Rails and its backend services?

My infrastructure is based on AWS: 3 EC2 instances for the Rails app servers, 1 RDS instance (MongoDB), and 1 EC2 instance as a Redis server.
Will Travis CI launch similar services (e.g. MongoDB, Redis) to run the RSpec tests?
If not, what's the logic behind Travis CI?
Would it be more practical to run the tests on my real infrastructure rather than in Travis CI?
Yes! Travis CI fully supports Ruby on Rails and can launch the same services you need for the tests, so I expect you'd be all set. When you go to create your .travis.yml file, you'll be able to set the configuration for your build environment, including setting up services such as MongoDB and/or Redis. Here's some sample code showing how that looks:
services:
  - mongodb
  - redis
From a practical standpoint, using a separate environment makes it easier to ensure test integrity, though you do have to do the additional software setup. The main benefit, though, is that you get a clean slate at each build for all your tests, and it's kept well away from your production environment in case there's a problem.
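Pulling it together, a minimal .travis.yml for a Rails-style app using those services could look roughly like this; the Ruby version and the RSpec command are assumptions rather than anything from your setup.

language: ruby
rvm:
  - 2.6
services:
  - mongodb
  - redis
cache: bundler
script:
  # run the RSpec suite against the services Travis started for this build
  - bundle exec rspec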

Can I use Docker for production deployment of a Rails application?

I want to use Docker to deploy my Rails application. Has anyone tried this? And what problems might I face?
Deploying Rails apps to production with Docker is not only possible, but something you'd want to do, to make sure your app runs on any server you deploy it to.
This comes with some challenges. First, it's advisable to run your database server and your Rails app in different containers to keep things isolated. You can also set up your production server's Docker environment with Docker Machine. Machine allows you to provision AWS, DigitalOcean, Azure and Compute Engine instances (among many others) and manage your containers from your computer. I assume you're just getting started with Docker, so I suggest you take a look at this cool guide about setting up a Rails + Postgres app with Docker.

Resources