I'm learning Docker and I'm trying to configure a Ruby on Rails project to run on it (in a development environment), but I'm having some trouble.
I managed to configure docker-compose to start a container with the terminal open, so I can run bundle install, start a server, or use Rails generators. However, every time I run the start command, it creates a new container, where I have to run bundle install again (which takes a while).
So I'd like to know if there is a way to reuse containers that were already created.
Here is my Dockerfile.dev:
FROM ruby:2.7.4-bullseye
WORKDIR '/apps/gaia_api'
EXPOSE 3000
RUN gem install rails bundler
CMD ["/bin/bash"]
And here is my docker-compose file:
version: "3.8"
services:
gaia_api:
build:
dockerfile: Dockerfile.dev
context: "."
volumes:
- .:/apps/gaia_api
environment:
- USER_DB_RAILS
- PASSWORD_DB_RAILS
ports:
- "3000:3000"
The command I'm using to run it is: docker-compose run --service-ports gaia_api.
I tried to use the plain docker commands build, create, and start; however, the volume mapping doesn't work. In the container's terminal, the files from the volume are not there.
The commands I tried:
docker build -t gaia -f Dockerfile.dev .
docker create -v ${pwd}:/apps/gaia_api -it -p 3000:3000 gaia
docker start -i f36d4d9044b08e42b2b9ec1b02b03b86b3ae7da243f5268db2180f3194823e48
There is probably something I still don't understand. So I ask: What's the best way to configure Docker for Ruby on Rails development? And will it be possible to add new services later? (Once I get the first part working, I plan to add Postgres and a Vue project.)
EDIT: Forgot to say that I'm on macOS Big Sur.
EDIT 2: I found what was wrong with the volumes: I was typing -v ${pwd}:/apps instead of -v $(pwd):/apps.
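For reference, here is the corrected create command with the shell substitution fixed (using the gaia image tag from the build command above):
docker create -v $(pwd):/apps/gaia_api -it -p 3000:3000 gaia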
Related
I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful and takes a while. (I could possibly turn it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in the Dockerfile, it will rebuild all the images before bringing up the stack. It can be wrapped in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied into it, and no new change will be reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach, your code (and any new changes) will appear on both the host and the running container.
Also, you will need to restart the server on every change; for this you can run your app using nodemon (it watches for changes in code and restarts the server). A sketch of the Dockerfile side of that is shown after the compose snippet below.
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
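And a minimal sketch of the matching Dockerfile change, assuming nodemon is acceptable for development (the entry point file name below is just a placeholder for whatever npm start runs):
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && npm install -g nodemon
COPY . .
EXPOSE 4000
# --legacy-watch polls for changes, which is more reliable with mounted volumes
CMD ["nodemon", "--legacy-watch", "server.js"]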
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically, you issue docker compose up once (perhaps from a shell script), and once your containers are running you can create a Jenkinsfile or configure a CI/CD pipeline to pull the updated image and apply it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.
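For example (the service and image names here are only illustrative, and this assumes the stack is running on a Swarm):
docker service update --image myregistry/web:1.2.0 mystack_web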
I've been working on a sample Ruby on Rails application and deploying its Docker image to a Linux server (Ubuntu 14.04).
Here is my Dockerfile:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
# CMD bundle exec rails s -p 3000 -b 0.0.0.0
# EXPOSE 3000
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    image: atulkhanduri/rails_docker_demos
    volumes:
      - .:/rails_docker_demo
    ports:
      - "3000:3000"
    depends_on:
      - db
deploy.sh:
#!/bin/bash
docker build -t atulkhanduri/rails_docker_demo .
docker push atulkhanduri/rails_docker_demo
ssh username@ip-address << EOF
docker pull atulkhanduri/rails_docker_demo:latest
docker stop web || true
docker rm web || true
docker rmi atulkhanduri/rails_docker_demo:current || true
docker tag atulkhanduri/rails_docker_demo:latest atulkhanduri/rails_docker_demo:current
docker run -d --restart always --name web -p 3000:3000 atulkhanduri/rails_docker_demo:current
EOF
Now my problem is that I'm not able to use docker-compose commands like docker-compose up to run the application server.
When I uncomment the last two lines from the Dockerfile, i.e.,
CMD bundle exec rails s -p 3000 -b 0.0.0.0
EXPOSE 3000
then I'm able to run the server on port 3000, but I get the error could not translate host name "db" to address: Name or service not known (my database.yml has "db" as the host). This is because the postgres image is not being used, since I'm not running through the docker-compose file.
EDIT:
Output of docker network ls:
NETWORK ID NAME DRIVER SCOPE
b466c9f566a4 bridge bridge local
7cce2e53ee5b host host local
bfa28a6fe173 none null local
P.S.: I've searched a lot on the internet but am still not able to use the docker-compose file.
Assumptions
If I am reading what you've done here correctly, my answer assumes the following two things.
You are using docker-compose to run the database container.
You are using plain docker commands (not docker-compose) to start the application server ("web").
First, I would suggest not doing that, it is a lot simpler to use docker-compose for both. However, I'll answer based on the above, assuming that there is some valid reason you cannot use docker-compose to run the "web" container.
About container and network names
When you run the docker-compose command to start the db container, two things happen (among others).
The container is given a new name, composed of the directory you run the compose setup from, the static name in compose (db), and a number. So, if you have all of this in a directory named myapp, you would get a new container named myapp_db_1. You can see what it is named using docker ps.
A bridge network is created if it didn't already exist, named something like myapp_default - again, named after the directory that the compose setup is inside of.
Connecting to the right network
The problem is that your non-compose container is attached to the default network (probably docker_default), but your db container is attached to myapp_default. The two networks do not know about each other. You need to connect them. It probably makes more sense to tell the web app container to attach to the compose network.
First, get the correct network name. You can see all networks using docker network ls. It might look like this:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
c1f5764a112b bridge bridge local
175efb89adef docker_default bridge local
5185ff0e1054 myapp_default bridge local
Once you have the correct name, update your run command to know about the network using the --network option.
docker run -d --restart always --name web \
-p 3000:3000 --network myapp_default \
atulkhanduri/rails_docker_demo:current
Once it is attached to the proper network, the name "db" should resolve correctly.
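As an aside, an already-running container can also be attached to the compose network after the fact (container and network names as above):
docker network connect myapp_default web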
If you used docker-compose to start both of them, this would not be necessary (this is one of the things docker-compose just takes care of for you silently).
Getting this to run on your server
In the comments, you mention that you are having some issues with compose on the server. Specifically you said:
Do I need to copy my complete project on the server? Can't I run the application from docker image only? Actually, I've copied docker-compose in server and it throws errors for Gemfile, then I copied Gemfile, then it says it should be a rails app. So I guess I need to copy my complete folder in server. Can you please confirm?
Let's look at some parts of your Dockerfile. I'll add some comments inline.
## Make a new directory, and then make it the current directory
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
## Copy Gemfile and Gemfile.lock into this directory from outside
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
## Run the bundle installer, which will install to this directory
RUN bundle install
## Finally, copy everything from the outside local dir to here
ADD . /rails_docker_demo
So, clearly, /rails_docker_demo is your application directory within the container. You've installed a bunch of stuff here, and this will become a part of your image. When you push your image to the registry, then pull it down on the server (as you do in the deploy script), this will all come with it.
Now let's look at (some of) docker-compose.yml.
services:
  web:
    volumes:
      - .:/rails_docker_demo
Here you have defined a volume mount, mounting the current directory (wherever docker-compose.yml lives) as /rails_docker_demo. When you do that, whatever happens to exist on the server is now available in /rails_docker_demo, but this mount undoes all the work from Dockerfile that I just mentioned above. Instead of having the resources you installed when you built the image, you have only whatever is on the server in the . directory. The mount is on top of the image's existing /rails_docker_demo directory, hiding its contents and replacing them with whatever is on the server at the moment.
Unless there is a reason you put this mount here, you probably just need to remove that volume mount from docker-compose.yml. You will still need docker-compose.yml on the server, but you should not need the rest of it (aside from the image, of course).
This mount you have done is a useful thing - for development purposes. It would let you use the container to run the application and quickly have code changes show up (without rebuilding the image). But in the case of your deployment, it is just causing trouble.
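For the deployment case, a trimmed-down compose file without that mount might look something like this (a sketch based on the compose file above, not a drop-in replacement):
version: '2'
services:
  db:
    image: postgres
  web:
    image: atulkhanduri/rails_docker_demos
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    ports:
      - "3000:3000"
    depends_on:
      - db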
Try moving EXPOSE above CMD, e.g.:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
EXPOSE 3000
CMD bundle exec rails s -p 3000 -b 0.0.0.0
Before I post any configuration, let me explain what I would like to achieve, and mention that I'm new to Docker.
To make talking about paths easier, let's assume the project is called "Docker me up!" and is located in X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content, where each project represents a dedicated build. During development (docker-compose up -d), a container should get updated instantly; this works fine.
The tricky part is that I want to move npm/grunt [http://gruntjs.com] from my host directly into the container/image, so I'm able to debug and develop wherever I am just by installing Docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host/built in Docker, and should not have any knowledge of anything but itself.
My solution:
I have tried many different versions, with volumes_from etc., but I decided to show you this one, because it's minimal but still complete.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the app service contains npm and should be able to execute an npm command. If I run docker-compose up -d, everything works: I can edit the page content, work with it, etc. But the app container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error, which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae
After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a service. For app to run as a service, you would need to keep a process running in the foreground of the container by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.
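In this case, since the point of the app service is to run the grunt watcher, a foreground command along these lines might work (assuming your project's Gruntfile defines a watch task and grunt can resolve its plugins inside the container):
# At the end of the Dockerfile for the app service
CMD ["grunt", "watch"]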
I'm using docker-machine and docker-compose to develop a Django app with a React frontend. The volumes don't get mounted in a Debian environment but work properly on OS X and Windows. I've been struggling with this issue for days, so I created a light version of my project that still replicates the issue; you can find it at https://github.com/firetix/docker_bug.
My docker-compose.yml:
django:
  build: django
  volumes:
    - ./django/:/home/docker/django/
My Dockerfile is as follows:
FROM python:2.7
RUN mkdir -p /home/docker/django/
ADD . /home/docker/django/
WORKDIR /home/docker/django/
CMD ["./command.sh"]
When I run docker-compose build, everything works properly. But when I run docker-compose up, I get:
[8] System error: exec: "./command.sh": stat ./command.sh: no such file or directory
I found this question on Stack Overflow:
How to mount local volumes in docker machine, and followed the proposed workarounds with no success.
Am I doing something wrong? Why does this work on OS X and Windows but not in a Debian environment? Is there any workaround that works on Debian? Both the OS X and Debian machines have the /Users/ folder set up as a shared folder when I check the VirtualBox GUI.
This shouldn't work for you on OS X, let alone Debian. Here's why:
When you ADD ./command.sh into /home/docker/django/, the image builds fine, with the file in the correct directory. But when you up the container, you are mounting your local directory "on top of" the one you created in the image, so whatever the image put there is no longer visible...
I recommend adding command.sh to a different location, e.g., /opt/django/ or something, and changing your CMD to point at /opt/django/command.sh.
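A sketch of that first option might look like this (the /opt/django/ location is just an example):
FROM python:2.7
RUN mkdir -p /home/docker/django/ /opt/django/
# Keep the script outside the directory that gets mounted over at runtime
ADD command.sh /opt/django/command.sh
ADD . /home/docker/django/
WORKDIR /home/docker/django/
CMD ["/opt/django/command.sh"]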
Or, more simply, something like this; here's the full code:
# Dockerfile
FROM python:2.7
RUN mkdir -p /home/docker/django/
WORKDIR /home/docker/django/
# docker-compose.yml
django:
  build: django
  command: ./command.sh
  volumes:
    - ./django/:/home/docker/django/
I believe this should work. There were some problems with some docker-compose versions when using relative paths.
django:
  build: django
  volumes:
    - ${PWD}/django:/home/docker/django
What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database & and then start the api container without using compose, but instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve from running the database in the background then manually setting the environment in the api container's shell simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
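If the process also needs stdin attached (the -i half of -it in the manual run), compose has a matching key you can add alongside tty:
api:
  tty: true
  stdin_open: true # equivalent of docker run -i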