Rails 5+, Webpacker and Docker development workflow

One of the advantages of using Docker is a single environment for the entire team. Some time ago I was using Vagrant to unify the development environment in a team, and it worked pretty well. Our workflow was roughly:
Run vagrant up; the command takes some time to download the base image and run the provisioning scripts. It also maps a directory from the local filesystem into the guest filesystem.
Change a file on the host system and the change is mirrored to the guest filesystem, so no restart is needed.
Some folks use Docker for a similar development workflow, but I usually use docker-compose just to run satellite services, and I have always run the Rails monolith natively on the host operating system.
So my development workflow is pretty standard:
All the satellite services run inside Docker containers; I just have a bunch of exposed ports. I don't need to brew-install lots of software to support them, which is nice (a sketch of such a compose file follows below).
The Rails monolith runs on the host OS, so every time I change, for example, a JavaScript file, Webpacker kicks in, rebuilds, and applies the change without a page refresh. This is important to emphasize: a page refresh takes time, and I don't want to refresh the page every time I touch a JavaScript or CSS file.
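For reference, a docker-compose.yml for that satellite-services-only setup might look roughly like this (the service names and versions here are illustrative, not from my actual project):

# docker-compose.yml -- satellite services only; Rails itself runs on the host
version: '3'
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: password   # development only
    ports:
      - "5432:5432"   # exposed so the host-run Rails app can connect
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:5
    ports:
      - "6379:6379"
volumes:
  pgdata: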
With Vagrant the above scheme also works well. But with Docker things are different.
The development workflow some folks use with Docker is as follows:
Run a bunch of services with the docker-compose command, except the Rails monolith (same step as in my development workflow above).
Every time you make a change in your app (for example, to a JavaScript file) you need to rebuild the container, because you're making the change on your local filesystem, not inside the Docker container. So you 1) stop, 2) build, 3) run the Docker container again.
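That cycle looks roughly like this (assuming the Rails service is called web in your compose file):

docker-compose stop web    # 1) stop the running container
docker-compose build web   # 2) rebuild the image with your changed files
docker-compose up -d web   # 3) start a fresh container from the new image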
In other words, the Docker-only approach has the following cons:
No Webpacker JS/CSS refresh
Container rebuild, which takes time
Application restart, which sometimes takes a while; even a zero-code Rails app starts in ~3 seconds
So my question is: what's the best way to go with a Docker-only approach? How can you take advantage of Docker while using Webpacker with Rails and avoid the page refresh and application restart?

I've been reading a good book on this recently (Docker for Rails developers). The gist seems to be that you run Rails in a Docker container and use a volume to 'link' your local files into the container, so that any file changes will take immediate effect. With that, you should not need to restart/rebuild the container. On top of that, you should run webpack-dev-server as a separate container (which also needs the local files mounted as a volume) which will do JavaScript hot reloading - so no need to reload the page to see JS updates.
Your docker-compose.yml file would look something like this (uses Redis and Postgres as well):
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACKER_DEV_SERVER_HOST=webpack_dev_server
  webpack_dev_server:
    build: .
    command: ./bin/webpack-dev-server
    ports:
      - 3035:3035
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
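The build: . lines above assume a Dockerfile in the project root. A minimal sketch (not the exact Dockerfile from the book; the Ruby version and package names are assumptions to adapt to your app) could be:

# Dockerfile -- minimal sketch for the web and webpack_dev_server services above
FROM ruby:2.6

# Node.js and Yarn are needed for Webpacker (package names vary by base image)
RUN apt-get update -qq && \
    apt-get install -y nodejs npm postgresql-client && \
    npm install -g yarn

WORKDIR /usr/src/app

# Install gems first so this layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

CMD ["bin/rails", "server", "-b", "0.0.0.0"]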

Related

Proper way to build a CI/CD pipeline with Docker images and docker-compose

I have a general question about Docker Hub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances, and my end goal is to deploy the docker-compose.yml that lives in my GitHub repo:
version: "3"
services:
db:
image: postgres
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- ./tmp/db:/var/lib/postgresql/data
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to Docker Hub, but what is the point of that?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with the different services you still need to run the containers using docker-compose, and you wouldn't have that file unless you pull it from the GitHub repo again or create it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
For this setup it's important to delete the volumes: that overwrite the image's content with a copy of the application code. You also shouldn't need the command: override. For the deployment you'd need to replace build: with image:.
version: "3.8"
services:
db: *from-the-question
web:
image: registry.example.com/me/web:${WEB_TAG:-latest}
ports:
- "3000:3000"
depends_on:
- db
environment: *web-environment-from-the-question
# no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
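A minimal sketch of such an override file, kept only on the development machine, might be:

# docker-compose.override.yml -- development only, never copied to the deployment host
version: "3.8"
services:
  web:
    build: .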
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each a date-stamped tag: registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
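On the CI side, the daily build-and-push step could look roughly like this (the registry name and tagging scheme just follow the example above; the exact invocation depends on your Jenkins setup):

TAG=$(date +%Y%m%d)
docker build -t registry.example.com/me/web:$TAG .
docker push registry.example.com/me/web:$TAG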
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
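As an illustration, the Compose file above boils down to roughly these plain Docker commands (a sketch under the same image and environment names as before, not a complete deployment script):

# Create a shared network, then run the two containers on it
docker network create app_net
docker run -d --name db --net app_net \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password \
  postgres
docker run -d --name web --net app_net \
  -p 3000:3000 \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_HOST=db \
  registry.example.com/me/web:latest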
Finally remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby on to the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.

Docker: Multiple Compositions

I've seen many examples of Docker Compose and they make perfect sense to me, but they all bundle their frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to allow my backend API to be consumed by other client applications down the road, and so I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. I'm not going to get into implementation details, but because containers in the same composition live on the same network, I can refer to the DB in my source code using its alias from the file. That is one environment with 2 containers.
For my frontend, and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate?
I feel like I'm probably missing the mark on how the Docker architecture works at large, but to my knowledge there is a default network, and Docker will run the composition on the default network unless otherwise specified or if it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There's a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
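A sketch of that Apache setup, assuming the backend Compose service is named backend and listens on port 3000 (module paths depend on your base image):

# httpd.conf fragment inside the frontend container
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    # Serve the compiled React build
    DocumentRoot "/usr/local/apache2/htdocs"

    # Forward browser requests to /api/... to the backend over the Docker network
    ProxyPass        /api/ http://backend:3000/
    ProxyPassReverse /api/ http://backend:3000/
</VirtualHost>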
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like:
# Backend project's docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed) (but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:

# Frontend project's docker-compose.yml
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
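To run them, start the backend project first so that its default network exists before the frontend project attaches to it (this assumes the project directories are named backend and frontend, which is what produces the backend_default network name):

cd backend && docker-compose up -d      # creates the backend_default network
cd ../frontend && docker-compose up -d  # joins the existing backend_default network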

Docker images: how to manage them between development and production

Assuming I have created a Django project using docker-compose as per the example given in https://docs.docker.com/compose/django/.
Below is a simple docker-compose.yml file for understanding's sake.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    image: python:3
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Here I am using two images, python:3 (the service called "web") and postgres (the service called "db"), which are automatically pulled from hub.docker.com. We also want the web container to depend on the db container. That's just to recap what's in the docker-compose.yml above.
Once I have set everything up I run docker-compose up, and we can see the two containers running and the Django project running on my local machine.
Once I have worked on my Django application, I want to deploy it to the production server.
So how do I copy the images to the production server so that I am working with the same Docker images there as well?
Because if I create a docker-compose.yml file on the production server, there is a chance that the db image and web image may differ.
Like:
When I pull the postgres image on my development computer, say I get Postgres version 9.5.
But if I pull the postgres image again on the production server, then I may get Postgres version 10.1.
So I will not be working in the same environment; maybe on the same OS, but not with the same versions of packages.
So how do I handle this when I am moving things from development to production?
Partially Solved:
As per the answer from @Yogesh_D:
If I am using prebuilt images from Docker Hub, I can easily get the same environment on the production server by pinning the version tag, e.g. postgres:9.5.1 or python:3.
Partially Unsolved:
But what if I created an image of my own using my own Dockerfile and tagged it while building? Now I want to use the same image in production; how do I do that, given that it's not on Docker Hub and I may not be interested in putting it on Docker Hub?
So is manually copying my image to the production server a good idea, or should I just copy the Dockerfile and build the image again on the production server?
This should be fairly simple, as the docker-compose image directive even allows you to specify the tag for the image.
So in your case it would be something like this:
version: '3'
services:
  db:
    image: postgres:9.5.14
  web:
    image: python:3.6.6
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
The above file ensures that you get the 9.5.14 version of postgres and 3.6.6 version of python.
And no matter where you deploy this is exactly what you get.
Look at https://hub.docker.com/_/python for all the tags/versions available for the python image, and https://hub.docker.com/_/postgres for all the versions/tags available for the postgres image.
Edit:
To solve the problem of custom images you have a few options:
a. Depending on where you deploy (your own datacenter vs. a public cloud provider), you can run your own Docker image registry; there are plenty of options for self-hosting one.
b. If you are running on one of the popular cloud providers like AWS, GCP, or Azure, most of them provide their own registries.
c. Or you can use Docker Hub to set up private repositories.
Each of them supports tags, so keep using tags to ensure your custom images are deployed exactly like the public ones.
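A rough sketch of that flow with your own registry (the registry host registry.example.com and the tag are placeholders):

# On the development/CI machine
docker build -t registry.example.com/myproject/web:1.0 .
docker push registry.example.com/myproject/web:1.0

# On the production server, docker-compose.yml references the same tag:
#   web:
#     image: registry.example.com/myproject/web:1.0
docker-compose pull && docker-compose up -d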

Using docker-compose (formerly fig) to link a cron image

I'm running a simple Rails app in Docker using docker-compose (formerly fig), like this:
docker-compose.yml
db:
  image: postgres
  volumes:
    - pgdata:/var/lib/postgresql/data
web:
  build: .
  command: bundle exec rails s -b 0.0.0.0
  volumes:
    - .:/usr/src/app
  ports:
    - "3011:3000"
  links:
    - db
Dockerfile
FROM rails:onbuild
I need to run some periodic maintenance scripts, such as database backups, pinging sitemaps to search engines, etc.
I'd prefer not to use cron on my host machine, since I want to keep the application portable, and my idea is to use docker-compose to link in an image such as https://registry.hub.docker.com/u/hamiltont/docker-cron/.
The official rails image does not have SSH enabled, so I cannot just have the cron container SSH into the web container and run the scripts.
Does docker-compose have a way for a container to gain a shell into a linked container to execute some commands?
What exactly would you like to do with your containers? If you just need to access some files from a container's filesystem, you should mount the volume into the ancillary container (consider the --volumes-from option).
Any SSH interaction between containers is considered bad practice (at least since Docker 1.3, when docker exec was implemented). Running more than one process inside a container (i.e. anything besides postgres or rails in your case) adds significant overhead: in order to run an sshd alongside rails you would have to deploy something like supervisord.
But if you really need some kind of nonstandard interaction between the containers, and you're sure you really need it, I would suggest using one of the full-featured Docker client libraries (like docker-py). It will allow you to launch docker exec programmatically.
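For example, a couple of crontab entries that drive docker exec from outside the web container might look like this (the container name and the rake tasks are placeholders; the real name depends on your Compose project):

# Host crontab: nightly maintenance jobs run inside the already-running web container
0 3 * * * docker exec myapp_web_1 bundle exec rake db:backup
0 4 * * * docker exec myapp_web_1 bundle exec rake sitemap:ping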

Use separate docker container for code only

Can I use one container for software (e.g. Apache, PHP) and another container just for the application code (the /var/www/ folder)?
If so, how? Any caveats here?
I need this to speed up deployment: building the full image takes more time, as does uploading and downloading the full image on all instances.
Yes you can!
Example(s):
docker-compose.yml:
web:
  build: nginx
  volumes_from:
    - app
  ...
app:
  build: app
  ...
You would want your "nginx" Dockerfile to look like:
FROM nginx
VOLUME /var/www/html
...
Where /var/www/html is part of your "app" container.
You would hack on "app" either locally and/or via Docker (docker build app, docker run ... app, etc).
When you're reasonably satisfied you can then test the whole integration by doing something like docker-compose up.
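For completeness, a sketch of what the code-only "app" image could look like (the base image and paths are assumptions):

# app/Dockerfile -- a code-only "data" container
FROM busybox
# Copy the application code into the path the web container expects,
# then expose it as a volume for volumes_from
COPY . /var/www/html
VOLUME /var/www/html
# The container doesn't need to keep running; it only has to exist
CMD ["true"]

Rebuilding and shipping only this small image is what saves time when only the code changes.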
