Docker Compose for Rails

I'm trying to replicate this docker command in a docker-compose.yml file:
docker run --name rails -d -p 80:3000 -v "$PWD"/app:/www -w /www -ti rails
My docker-compose.yml file:
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - ./app:/www
When I run docker-compose up -d, the container is created but it does not start.
When I add tty: true to my docker-compose.yml file, the container starts fine but my volume is not mounted.
How can I replicate my docker command exactly in a docker-compose.yml?

There are a couple of ways to solve your problem.
Solution 1: If you want to use the rails image in your docker-compose.yml, you need to set the command and working directory for it, like so:
rails:
  image: rails
  container_name: rails
  command: bash -c "bundle install && rails server -b 0.0.0.0"
  ports:
    - 80:3000
  volumes:
    - ./app:/www
  working_dir: /www
This will create a new container from the rails image every time you run docker-compose up.
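As a side note, the `bash -c "bundle install && rails server -b 0.0.0.0"` line relies on the shell's `&&` chaining: the server only starts if the install succeeds. A runnable sketch of just that pattern, with `true`/`false` standing in for `bundle install` and `echo` for `rails server`:

```shell
#!/bin/sh
# `cmd1 && cmd2` runs cmd2 only when cmd1 exits with status 0.
sh -c "true && echo server-started"    # install succeeds -> prints "server-started"
sh -c "false && echo server-started"   # install fails -> prints nothing
```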
Solution 2: Move your docker-compose.yml to the same directory as your Gemfile, and create a Dockerfile in that directory so you can build the image in advance (to avoid running bundle install every time):
# Dockerfile
FROM rails:onbuild
I use rails:onbuild here for simplicity (for the differences between rails:onbuild and rails:<version>, please see the documentation).
After that, modify the docker-compose.yml to
rails:
  build: .
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - .:/www
  working_dir: /www
Run docker-compose up and this should work!
If you modify your Gemfile, you may also need to rebuild your container by docker-compose build before running docker-compose up.

Thanks for your answer. It helped me find the solution.
It was actually a volume problem: I wanted to mount the volume at /www, but that was not possible.
So I used the default directory of the rails image, /usr/src/app:
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  working_dir: /usr/src/app
  volumes:
    - ./app:/usr/src/app
  tty: true
Now my docker-compose up -d command works.

Related

Docker-compose how to update celery without rebuild?

I am working on a django + celery + docker-compose project.
Problem:
I changed the django code, but the update only takes effect after docker-compose up --build.
How can I enable code updates without a rebuild?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
  django:
    build: ./project # path to Dockerfile
    command: sh -c "
      gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media
    expose:
      - 8000
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/usr/src/app
      - media-volume:/project/media
    depends_on:
      - django
      - redis
  .........
volumes:
  pg_data:
  static:
  media-volume:
Updating code without a rebuild is achievable, and it is best practice when working with containers; otherwise it takes too much time and effort to create a new image every time you change the code.
The most popular way of doing this is to mount your code directory into the container using one of the two methods below.
In your docker-compose.yml:
services:
  web:
    volumes:
      - ./codedir:/app/codedir # where 'codedir' is your code directory
In the CLI, starting a new container:
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory that your code lives in on your computer inside the /app/codedir dir of the Celery container. Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
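Applied to the compose file above, the idea looks like the sketch below. Note that the celery service currently mounts the code at /usr/src/app while django uses /project; the /project target for celery here is an assumption, so match whatever WORKDIR the celery image actually uses:

```yaml
# Sketch (not the full file): bind-mount the same host code directory
# into both services so host edits are visible without a rebuild.
services:
  django:
    volumes:
      - ./project:/project
  celery:
    volumes:
      - ./project:/project   # assumed target; check your Dockerfile's WORKDIR
```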

Docker-compose entrypoint could not locate Gemfile but works fine with docker

When I use docker-compose up, the container exits with code 10 and says:
Could not locate Gemfile or .bundle/ directory
but if I do docker run web entrypoint.sh, the Rails app seems to start without an issue.
What could be the cause of this inconsistent behavior?
Entrypoint.sh
#!/bin/bash
set -e
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi
bundle exec rails s -b 0.0.0.0 -p 8080
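The if block exists because Rails refuses to boot when a stale tmp/pids/server.pid survives an unclean shutdown. A self-contained sketch of that guard, using a temp directory instead of a real Rails app:

```shell
#!/bin/sh
# Simulate a stale PID file left over from an unclean shutdown.
workdir=$(mktemp -d)
mkdir -p "$workdir/tmp/pids"
touch "$workdir/tmp/pids/server.pid"
cd "$workdir"

# The same guard as in entrypoint.sh: delete the stale file if present.
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi

test ! -f tmp/pids/server.pid && echo "pid file cleaned"
```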
Relevant part from the docker-compose file.
docker-compose.yml
...
web:
  build:
    context: "./api/"
    args:
      RUBY_VERSION: '2.7.2'
      BUNDLER_VERSION: '2.2.29'
  entrypoint: entrypoint.sh
  volumes:
    - .:/app
  tty: true
  stdin_open: true
  ports:
    - "8080:8080"
  environment:
    - RAILS_ENV=development
  depends_on:
    - mongodb
...
When you docker run web ..., you're running exactly what's in the image, no more and no less. On the other hand, the volumes: directive in the docker-compose.yml file replaces the container's /app directory with arbitrary content from the host. If your Dockerfile runs bundle install expecting to put content in /app/vendor in the image, the volumes: directive hides that.
You can frequently resolve problems like this by deleting volumes: from the Compose setup. Since you're running the code that's built into your image, this also means you're running the exact same image and environment you'll eventually run in production, which is a big benefit of using Docker here.
(You should also be able to delete the tty: and stdin_open: options, which aren't usually necessary, and the entrypoint: and those specific build: { args: }, which replicate settings that should be in the Dockerfile.)
(The Compose file suggests you're building a Docker image out of the api subdirectory, but then bind-mounting the current directory . -- api's parent directory -- over the image contents. That's probably the immediate cause of the inconsistency you see.)
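Putting those suggestions together, a trimmed sketch of the service (assuming the Dockerfile itself sets the Ruby/Bundler versions and the entrypoint, as suggested above) might look like:

```yaml
# Sketch: no volumes:, so the gems vendored into the image at build
# time stay visible; entrypoint and build args live in the Dockerfile.
web:
  build:
    context: "./api/"
  ports:
    - "8080:8080"
  environment:
    - RAILS_ENV=development
  depends_on:
    - mongodb
```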

docker-compose as a production environment without internet

I'm a beginner with docker. I created a docker-compose file that provides our production environment, and I want to use it on our clients' servers as the production environment; I also want to use it locally and without internet access.
Right now I have binaries of docker and docker-compose, plus saved images that I want to load onto a server without internet. This is my init bash script on Linux:
#!/bin/sh -e

# docker
tar xzvf docker-18.09.0.tgz
sudo cp docker/* /usr/bin/
sudo dockerd &

# docker-compose
cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# load images
docker load --input images.tar
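After running a script like this, it's worth sanity-checking that the binaries actually landed on the PATH. A runnable sketch using command -v, here with sh and a deliberately missing name as stand-ins for docker and docker-compose so the sketch works anywhere:

```shell
#!/bin/sh
# Report whether a program is resolvable on the PATH.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 MISSING"
  fi
}

check sh               # exists everywhere -> "sh found"
check no-such-binary   # stand-in for a failed copy -> "no-such-binary MISSING"
```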
my structure:
code/*
nginx/
  site.conf
  logs/
phpfpm/
  custom.ini
postgres/
  data/
.env
docker-compose.yml
docker-compose file:
version: '3'
services:
  web:
    image: nginx:1.15.6
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/site.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    restart: always
    depends_on:
      - php
  php:
    build: ./phpfpm
    restart: always
    volumes:
      - ./phpfpm/custom.ini:/opt/bitnami/php/etc/conf.d/custom.ini
      - ./code:/code
  db:
    image: postgres:10.1
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - 5400:5432
I have some questions:
Why doesn't docker show up in the list of Linux services? When I install docker via apt-get, it does appear in the services list. How can I set docker up as a service and enable it so it loads on startup?
How can I set docker-compose up as a Linux service so it runs at system startup?
Install docker with the package manager: sudo dpkg -i /path/to/package.deb, using a .deb you can download from https://download.docker.com/linux/ubuntu/dists/.
Then do the post-install step sudo systemctl enable docker. This will start docker when the system boots; combined with restart: always, your previous compose services will be restarted automatically.
I think dockerd creates the daemon, but you have to enable it:
$ sudo systemctl enable docker
Add restart: always to your db container.
How the docker restart policies work
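That policy is set per service; a minimal sketch for the db service from the compose file above:

```yaml
# Sketch: accepted restart policies are "no", always,
# on-failure, and unless-stopped.
db:
  image: postgres:10.1
  restart: always
```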

Mounted volume is empty inside container

I've got a docker-compose.yml like this:
db:
  image: mongo:latest
  ports:
    - "27017:27017"
server:
  image: artificial/docker-sails:stable-pm2
  command: sails lift
  volumes:
    - server/:/server
  ports:
    - "1337:1337"
  links:
    - db
server/ is relative to the folder of the docker-compose.yml file. However when I docker exec -it CONTAINERID /bin/bash and check /server it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in the Docker settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click Apply), then re-enable it (and click Apply again). In the process, you will be prompted for your PC's new password. After this, run your docker command (run or compose) again.
Try using:
volumes:
  - ./server:/server
instead of server/ -- there are some cases where Docker doesn't like the trailing slash.
As per the docker volumes documentation
(https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume):
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
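In compose terms, that distinction from the documentation looks like the sketch below: a host path (starting with ./ or /) becomes a bind mount, while a bare name becomes a named volume ("serverdata" is a hypothetical name):

```yaml
# Sketch: the first entry bind-mounts a host folder;
# the second uses a named volume managed by Docker.
server:
  image: artificial/docker-sails:stable-pm2
  volumes:
    - ./server:/server   # bind mount: host path relative to this file
    - serverdata:/data   # named volume
```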
I had a similar issue when I wanted to mount a directory from the command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container started successfully, but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So I created a symlink under c:\Users\<username> that points to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
If you are using Docker for Mac, you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount.
I don't know if other people made the same mistake, but the host directory path has to start from /home.
My mistake was that in my docker-compose file I was WRONGLY specifying the following:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /Desktop/subfolder/subfolder2:/app/subfolder
The host path should have been the full path from /home, something like:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
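A quick runnable check for this class of mistake: without a leading / (or ./), the host side of a volume entry is not treated as a path at all, so it pays to verify the path is absolute first. A sketch:

```shell
#!/bin/sh
# Classify a host path the way you'd want to before writing it into a
# compose volumes: entry.
is_absolute() {
  case "$1" in
    /*) echo "absolute" ;;
    *)  echo "relative" ;;
  esac
}

is_absolute "home/myuser/Desktop/subfolder2"    # missing leading slash -> "relative"
is_absolute "/home/myuser/Desktop/subfolder2"   # -> "absolute"
```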
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
  ...
  volumeConsumingService:
    container_name: production-001-volumeConsumingService
    hostname: production-001-volumeConsumingService
    image: group/production-001-volumeConsumingService
    build:
      context: .
      dockerfile: volumeConsumingService.Dockerfile
    depends_on:
      - anotherServiceDefinedEarlier
    restart: always
    volumes:
      - ../data/certbot/conf:/etc/letsencrypt # mounting
      - ../data/certbot/www:/var/www/certbot # not mounting
      - ../data/www/public:/var/www/public # not mounting
      - ../data/www/root:/var/www/root # not mounting
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - default
      - external
  ...
networks:
  external:
    name: routing
A workaround that seems to work is to force a restart of the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
In the case where the volumes are not mounted after a host reboot, adding a cron task to restart the service once should do.
In my case, the volume was empty because I did not use the right path format: the path must be written without quotes.
If you have a relative or absolute path with spaces in it, you do not need double quotes around the path; any path with spaces will be understood, since docker-compose uses ':' as the delimiter and does not split on spaces.
Ways that do not work (the double quotes are the problem!):
volumes:
  - "MY_PATH.../my server":/server
  - "MY_PATH.../my server:/server" (I might have missed testing this, not sure!)
  - "./my server":/server
  - ."/my server":/server
  - "./my server:/server"
  - ."/my server:/server"
Two ways that do work (no double quotes!):
volumes:
  - MY_PATH.../my server:/server
  - ./my server:/server
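The reason spaces are fine is that compose splits the short volume syntax on ':', not on whitespace. That split can be reproduced in plain shell:

```shell
#!/bin/sh
# Split a short-syntax volume entry on the first ":" like compose does.
entry="./my server:/server"
host=${entry%%:*}    # text before the first ":" -> "./my server"
target=${entry#*:}   # text after the first ":"  -> "/server"
echo "host=[$host] target=[$target]"
```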

Docker compose, manage environments

I have the following docker-compose.yml file to work locally:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm run dev
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
seed:
  build: ./seed
  links:
    - mongodb
When I deploy to my server, I need to change two things in the docker-compose.yml file:
web:
  command: npm start
  environment:
    NODE_ENV: 'production'
I guess editing the file after each deploy ain't the most comfortable way to do that. Any suggestion on how to cleanly manage environments in the docker-compose.yml file?
The usual way is to use a Compose overrides file. By default docker-compose reads two files at startup, docker-compose.yml and docker-compose.override.yml. You can put anything you want to override in the latter. So:
# docker-compose.yml
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm run dev
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
seed:
  build: ./seed
  links:
    - mongodb
Also:
# docker-compose.override.yml
web:
  command: npm start
  environment:
    NODE_ENV: 'production'
Then you can run docker-compose up and you will get the production settings. If you just want dev, run docker-compose -f docker-compose.yml up.
An even better way is to name your compose files in a relevant way, so docker-compose.yml becomes development.yml and docker-compose.override.yml becomes production.yml (or similar). Then you can run docker-compose -f development.yml -f production.yml up for production, and just docker-compose -f development.yml up for development. You may also want to look into the extends functionality of docker-compose.
Let me share my approach, an example from a django project that I run on docker.
First, in docker-compose.yml you define two containers: web, meant for production, and devweb, meant for development.
If you use a Dockerfile, you can create separate Dockerfiles (Dockerfile for production and Dockerfile-dev for development).
You can then run either one with a docker-compose command, for example:
docker-compose -p $(PROJECT) up -d web for production
docker-compose -p $(PROJECT) up --no-deps -d devweb for development
Anyway, I use a Makefile to manage all the docker-compose commands, and it makes things very easy: I just need to run make <command-name> to execute a command.
May this answer help you.
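The Makefile idea can also be sketched as a small wrapper script. Everything here is hypothetical (the project and service names come from the answer above), and echo stands in for actually executing the command so the sketch runs anywhere:

```shell
#!/bin/sh
# Pick the right docker-compose invocation per target, Makefile-style.
PROJECT=myproject   # hypothetical project name

case "${1:-dev}" in
  prod) echo "docker-compose -p $PROJECT up -d web" ;;
  dev)  echo "docker-compose -p $PROJECT up --no-deps -d devweb" ;;
  *)    echo "usage: $0 [prod|dev]" ;;
esac
```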
