Docker and Rails environments - ruby-on-rails

Just starting with this whole Docker thing and I can't wrap my head around one thing:
How does one deal with different dependencies? Let's say in production I don't want to have git, grunt, etc installed, but in development I do.
There's a difference between a container that can run tests and a container that runs in production.
Am I thinking about this wrong?

There are different philosophies on this, but personally, I use Docker to match my production environment as closely as possible, so testing with that container anywhere lets me be pretty sure things will just work once I deploy to prod. This is one of the major benefits of Docker: you can mimic the OS, environment, dependencies, versions, etc. locally before deploying anywhere.
There's nothing wrong with having a separate development container with added dependencies that you can pass around your team, but to me the main benefit of Docker for development is the ability to test on that simulated prod environment and run the exact same container locally that you will be using in prod once you're ready. No more "but it worked on my machine!" bugs.

docker-rails is a project I just created to make rails with docker (and CI) very easy. I think it can help you and reduce the amount of configuration necessary to get up and running with rails on docker. It deals with multiple environments i.e. development | test | production in one docker-rails.yml file, which is really just a meta configuration/inheritance wrapper over the standard docker-compose.yml.
It will allow you to run a test command set in test, versus a development server, or a production setup with different containers. The example in the readme shows elasticsearch used in dev and test, but not in staging or production.
I hope that helps.

Related

How to use docker in the development phase of a devops life cycle?

I have a couple of questions related to the usage of Docker in a development phase.
I am going to propose three different scenarios of how I think Docker could be used in a development environment. Let's imagine that we are creating a REST API in Java and Spring Boot. For this I will need a MySQL database.
The first scenario is to have a docker-compose for development with the MySQL container, and a production docker-compose with MySQL and the Java application (jar) in another container. To develop, I launch docker-compose-dev.yml to start only the database. The application is launched and debugged using the IDE, for example IntelliJ IDEA. The IDE recognizes any changes made to the code and relaunches the application to apply them.
The second scenario is to have, for both the development and production environments, a docker-compose with the database and application containers. That way, every time I make a change in the code, I have to rebuild the image so that the changes are loaded into the image, and the containers are launched again. This scenario may be the most typical and most used for development with Docker, but it seems very slow due to the need to rebuild the image every time there is a change.
The third scenario is a mixture of the previous two: two docker-compose files. The development docker-compose contains both containers, but with mechanisms that allow a live reload of the application, mapping volumes and using, for example, Spring Dev Tools. In this way, the containers are launched and, in case of any change in the files, the application container will detect that there is a change and will be relaunched. For production, a docker-compose would be created simply with both containers, but without the live-reload functionality. This would be the ideal scenario, in my opinion, but I think it is very dependent on the technologies used, since not all allow live reload.
The questions are as follows.
Which of these scenarios is the most typical when using Docker for development?
Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.
The doubts and scenarios I raise came up after I identified the problem with scenario 2: with each change in the code, having to rebuild the image and start the containers again is a significant waste of time. In short, the question would be: how can this be avoided?
Thanks in advance for your time.
NOTE: It may be a question subject to opinion, but it would be nice to know how developers usually deal with these problems.
Disclaimer: this is my own opinion on the subject as asked by Mr. Mars. Even though I did my best to back my answer with actual sources, it's mostly based on my own experience and a bit of common sense.
Which of these scenarios is the most typical when using Docker for development?
I have seen all 3 scenarios in several projects, each of them with their advantages and drawbacks. However, I think scenario 3 with a Docker Compose allowing for dynamic code reload is the most advantageous in terms of flexibility and consistency:
Dev and Prod Docker Compose files are close matches, meaning the Dev environment is as close as possible to the Prod environment
You do not have to rebuild the image constantly when developing, but it's easy to do when you need to
Lots of technologies support such a scenario, such as Spring Dev Tools as you mentioned, but also Python Flask, etc.
You can easily leverage Docker Compose extends, a.k.a. the configuration sharing mechanism (also possible with scenario 2)
Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.
Scenario 1 is quite common, but the IDE environment would probably differ from the one in the Docker container (and it would be difficult to keep the versions of every lib and dependency in sync between the IDE environment and the Docker environment). It would also probably require going through an intermediate step between Dev and Production, to actually test the Docker image built after Dev is working, before going to Production.
In my own experience, doing this is great when you do not want to deal too much with Docker while actually doing dev, and/or when the language or technology you use is not suited to dynamic reload as described in scenario 3. But in the end it only adds drift between your environments and more complexity between the Dev and Prod deployment methods.
having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: How to avoid this?
Besides the scenarios you describe, you have ways to decently (even drastically) reduce image build time by leveraging the Docker build cache when designing your Dockerfile. For example, a Python application would typically copy code as the last (or almost last) step of the build to avoid invalidating the cache, and for a Java app it would be possible to split the code so as to avoid compiling the entire application every time a bit of code changes - that would depend on your actual setup.
I personally use a workflow roughly matching scenario 3 such as:
a docker-compose.yml file corresponding to my Production environment
a docker-compose.dev.yml which will override some aspects of my main Docker Compose file, such as mounting code from my machine, adding dev-specific flags to commands, etc. (see the sketch after this list) - it would be run like:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
but it would also be possible to have a docker-compose.override.yml, which Docker Compose picks up by default for overrides
in some situations I would have to use other overrides for specific cases, such as docker-compose.ci.yml on my CI, but usually the main Docker Compose file is enough to describe my Prod environment (and if that's not the case, docker-compose.prod.yml does the trick)
I've seen them all used in different scenarios. There are some gotchas to avoid:
Applications inside of a container shouldn't depend on something running outside of a container on the host. So all of your dependencies should be containerized first.
File permissions with host volumes can be complicated depending on your version of Docker. Some of the newer Docker Desktop installs automatically handle uid mappings, but if you develop directly on Linux you'll need to ensure the containers run as the same uid as your host user (see the sketch after this list).
Avoid making changes inside the container if they aren't mapped into a host volume, since those changes will be lost when the container is recreated.
Looking at each of the options, here's my assessment of each:
Containerizing just the DB: This works well when developers already have a development environment for the language of choice, and there's no risk of external dependencies creeping in, e.g. a developer upgrading their JDK install to a newer version than the image is built with. It follows the idea of containerizing the dependencies first, while also giving developers the familiar IDE integration with their application.
Rebuilding the Image for Every Change: This tends to be the least ideal for developer workflow, but the quickest to implement when you're not familiar with the tooling. I'll give a 4th option that I consider an improvement to this.
Everything in a container, volume mounts, and live reloading: This is the most complicated to implement, and requires the language itself to support things like live reloading. However, when they do, it is nearly seamless for the developers and gets them up to speed on a new project quickly without needing to install any other tooling to get started.
Rebuild the app in the container with volume mounts: This is a halfway point between 2 and 3. When you don't have live reloading, you likely need to recompile or restart the interpreter to see any change. Rather than rebuilding the image, I put the recompile step in the entrypoint of a development image. I'll mount the code into the container, and run a full JDK instead of just a JRE (or whatever compiler is needed). I use named volumes for any dependency caches so they don't need to be downloaded on every restart. Then the way to see the changes is to restart that one container. The steps are identical to those for a compiled binary outside of a container - stop the old service, recompile, and restart the service - but now it happens inside of a container that should have the same tools used when building the production image.
For option 4, I tend to use a multi-stage build that has stages for build, develop, and release. The build stage pulls in the code and compiles it, the develop stage is the same base image as build but with an entrypoint that does the compile/run, and the release stage copies the result of the build stage into a minimal runtime. Developers then have a compose file for development that creates the development image and runs that with volume mounts and any debugging ports opened.
First of all, docker-compose is just for the development and testing phases, not for production. Example:
With a minimal and basic docker-compose, all your containers will run on the same machine. For development purposes that is OK, but in production, putting all the apps on just one machine is a risk
Official link https://docs.docker.com/compose/production/
We will assume:
01 Java API
01 MySQL database
01 web application that needs the API
All of these applications are already in production
Quick Answer
If you need to fix or add a new feature to the Java API, I advise you to use an IDE like Eclipse or IntelliJ IDEA. Why?
Because Java needs compilation.
Compiling inside a Docker container will take more time due to Maven dependencies
An IDE has code auto-completion
etc.
In this development phase, Docker helps you with one of its most powerful features: "bring the production containers to your localhost". Yeah, in this case, docker-compose.yml is the best option because with one file you can start everything you need: the MySQL database and the web app, but not your Java API. Open your Java API with your favorite IDE.
Anyway, if you want to use Docker to "develop", you just need the Dockerfile and to perform a docker build ... when you need to run your source code on your localhost.
Basic DevOps life cycle with Docker
Developers push source code changes using git
Your continuous integration (CI) platform detects this change and performs the following (sketched in shell below):
docker build ... (in this step, unit tests are triggered)
docker push to your private hub. The container is uploaded in this step and will be used for deployments on other servers.
docker run or container deploy to the next environment: testing
Human testers, Selenium, or other automation start their work
If no errors are detected, your CI performs a final deploy of the uploaded container to your production environment. No docker build is required, just deploy or docker run.
Some Tips
Docker features are awesome, but sometimes they add too much complexity. So stop using volumes, hard disk dependencies, logs, or complex configurations. If you use volumes, what will happen when your containers are on different hosts?
Java and Node.js are stable languages, and your REST API or web apps do not need crazy configurations: just Maven compilation and java -jar ..., or npm install and npm run start.
For logs you could use https://www.graylog.org/, Google Stackdriver, or another log management service.
And like Heroku, stop depending on the local hard disk as much as possible. On the Heroku platform disks are disposable, meaning they disappear when the app is restarted. So instead of local file storage, you could use another file storage service with a lot of functionality.
With these approaches, your containers can be deployed anywhere in a simple way
I'm using something similar to your 3rd scenario for my web dev, but it is Node-based. So I have 3 docker-compose files (actually 4; one is a base holding all the common stuff for the others) for the dev, staging and production environments.
The staging docker-compose config is similar to the production config, excluding SSL, ports and other things that would prevent using it locally.
I have a separate container for each service (like DB, queue), and for dev, I also have additional dev DB and queue containers, mostly for running auto-tests. In the dev environment, all sources are mounted into containers, which makes it possible to use the IDE/editor of choice outside the container and see the changes inside.
I use Supervisor to manage my workers inside a container and have some commands to restart my workers manually when I need to. Maybe you can have something similar to recompile/restart your Java app. Or, if you have an idea of how to organize detection of app source code changes and auto-reloading of your app, that could be the best variant. By the way, you gave me an idea to research something similar suitable for my case.
For the staging and production environments, my source code is included inside the corresponding container using the production Dockerfile. And I have some commands to restart all the stuff for whichever environment I need; this typically includes rebuilding containers, but thanks to the Docker cache it doesn't take much time (about 20 seconds). And taking into account that switching between environments is not too frequent an operation, I feel quite comfortable with this.
Production docker-compose config is used only during deployment because it enables SSL, proper ports and has some additional production stuff.
Update for details on backend app restarting using Supervisor:
This is how I use it in my projects:
A part of my Dockerfile installing Supervisor:
FROM node:10.15.2-stretch-slim
RUN apt-get update && apt-get install -y \
# Supervisor
supervisor \
...
...
# Configs for services/workers managed by supervisor
COPY some/path/worker-configs/*.conf /etc/supervisor/conf.d/
This is an example of one of Supervisor configs for a worker:
[program:myWorkerName]
command=/usr/local/bin/node /app/workers/my-worker.js
user=root
numprocs=1
stopsignal=INT
autostart=true
autorestart=true
startretries=10
In your case, the command in this example should run your Java app.
And this is an example of command aliases for conveniently managing Supervisor from outside the containers. I'm using a Makefile as a universal runner for all commands, but this could be something else.
# Used to run all workers
su-start:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl start all

# Used to stop all workers
su-stop:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl stop all

# Used to restart all workers
su-restart:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl restart all

# Used to check status of all workers
su-status:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl status
As I described above, these Supervisor commands need to be run manually, but I think it is possible to implement another Node-based worker, or some watcher outside of the workers' container, that detects file system changes in the sources directory and runs these commands automatically. I think it is possible to implement something like this using Java as well.
On the other hand, it needs to be done carefully to avoid constantly restarting workers on every little change.

Docker's standardization of environments

I am struggling with a question that nobody seems to answer in detail on the Internet.
"Standardizing service infrastructure across the entire pipeline allows every team member to work in a production parity environment"
This is a key benefit of Docker : it allows everybody to develop, test or whatever in a production-like environment. Because the container that is passed through the pipeline is always the same.
I get that. I understand that this is necessary and that Docker allows this easily.
But what I don't understand is: why was it so hard before Docker? If I have a production machine and a testing machine, I won't have any problem building a script that installs the right dependencies, no matter what the machine is. So my environment in terms of libraries or frameworks will be the same.
The only thing I understand about this whole environment-related benefit is that Docker allows a developer to choose his OS without fear of platform-related bugs. I've already run into features that worked on Windows and not on Mac. The worst kind of bugs, in my opinion. So yeah, if I had had Docker at the time, I wouldn't have had this problem. But I don't understand why Docker was such a miracle for other environment-related stuff.
I think I am not understanding this because I've only worked on small scale projects. Maybe I also don't realize the full meaning of the word "environment".
What am I missing here? Why were containers a breakthrough for standardizing environments, when scripts can achieve that?
The following list is not exhaustive; it represents only three important advantages of Docker. Please note that Docker is not a magical solution and may not be suited to specific contexts.
Firstly, with containers you don't have conflicts between dependencies.
If you have two programs using the same library at different versions, you'll have to manually install both versions and specify custom environment variables (for example, LD_LIBRARY_PATH) before executing your programs. Please note that some tools exist to address this issue, but only in specific cases (virtualenvs in Python, for example).
Secondly, with containers you don't have persistence.
For example, if you write a little bash script to install your development environment based on Nginx and PHP, and I install Apache by mistake, the Apache package will still be present even if I run your script again. The thing is, Apache will sometimes start before Nginx and grab port 80, breaking your development environment.
To sum up, without docker you're not sure about the state of untracked elements and they may break your environment.
Thirdly, docker allows you to reduce the gap between development and production.
The close environment is "everything needed for your code to run". For example libraries, config files, your interpreter (python, php, ...). Docker packages the application with its close environment so you don't have mismatches between what your app needs and the environment you provided.
This is especially important when you update dependencies during development and may forget to update them in production.
A false argument is security and isolation. The security process starts with defining your threat model and then choosing countermeasures. Adding Docker because it increases security in a risky environment won't be enough (there is no kernelspace isolation), and adding Docker for security when you don't need more is called paranoia. Docker adds userspace isolation and default seccomp profiles, but this is not a reason to use it, except if it matches your threat model.

Docker: How to replace Capistrano tasks in Docker

I am trying to dockerize my production rails application.
Currently the app is configured using Ansible and deployed using Capistrano.
I researched different Docker deployment strategies and thought of getting rid of Capistrano, using Docker with docker-compose instead.
I am writing a Dockerfile to configure and deploy the app, but it is somewhat complex to replicate a Capistrano-style deployment, as deploy.rb uses a few rake tasks for predeployment steps like creating directories, setting the app name, and fetching a few variables.
How can I duplicate the cap tasks in a Dockerfile? Or is there a way to use the current cap rake tasks in the Dockerfile or in a running Docker container?
Now is a good time to step back and consider if the benefits of Docker outweigh the added complexity, for your situation. Assuming it is, here are a few suggestions on how to make these components work together better.
While Ansible is a configuration management system, it's also designed for orchestration (that is, running commands across a series of remote machines). This has some cross-over with Capistrano, and as such, you may find it useful to port your Capistrano tasks to Ansible and eliminate a tool (and thus complexity) from your stack. This would likely come about from creating a deploy.yaml playbook that you run to deploy your application.
Docker also overlaps responsibilities with Ansible, but in a different area, configuration. Any part of your system configuration that's necessary for the app can be configured inside the container using the Dockerfile, rather than on a system-wide level using Ansible.
If you have rake tasks that set up the application environment, you can put them in a RUN command in the Dockerfile. Keep in mind, however, that these will only be executed when you build the image, not when you run it.
Generally speaking, I view it this way: Docker sets up a container that has everything required to run one piece of your app (including a specific checkout of your code). Ansible configures the environment in which you run the containers and manages all the work to update them and put them in the right places.

Docker compose in production?

I plan to use Docker to build my dev and production environments. I am building a Django-based app.
On dev I use docker-compose to manage all local containers. It's a nice and convenient solution. I run Django, 3 celery queues, rabbitmq and 2 postgresql DBs.
But my production environment is quite different. I need to run gunicorn and nginx. Moreover, the DBs will be run using AWS RDS. Of course the Django app will require more stuff, like a different settings file or more env vars.
I'm wondering how to divide it. Should I use docker-compose there as well? That will require separate files for dev and prod, and maybe more in the future for staging etc... If yes, how do I deploy it? Using Jenkins: pull, then restart everything using compose?
Or maybe I should use Ansible to run docker commands directly? But then I have no confidence that my dev is the same as live, and it's harder to predict its behaviour.
I like the idea of running compose files on all environments, but I'm not sure if maintaining multiple files for different environments is a good idea. Dev requires fewer env vars and less configuration. I can use an env file to set all of them on production. But should I keep my live settings in the same repo? Previously I was setting all env vars while provisioning, and this was a separate process. Now it looks like provisioning and deploy are the same? Maybe this is the way with Docker?
Using http://docs.docker.com/compose/extends/#multiple-compose-files you can keep all the common stuff in a docker-compose.yml and use docker-compose.prod.yml to add extra services, change links, environment, and ports.

What exactly is meant by build, ship and run any app, anywhere with Docker

Docker says it makes it possible to "build, ship and run any app, anywhere".
Dockerizing an app looks like a promising solution to ship and run any app, anywhere, with less pain.
But how is it going to help us in building an application?
There are a few interesting build-time use cases for Docker.
You could use Docker to bring up databases with a known state inside containers for your integration tests to hit, for example using the Docker Maven Plugin.
Having a predefined container for your application which will not change during the development cycle is useful, especially as you can use the same container when you go to prod deployments. This is different from, say, Vagrant, which you would not use for your production deployment.
Since there are so many Docker containers already available, you would not need to spend time figuring out how to deploy and manage all the various tools and services your deployment may need.
