docker-compose volume is empty even after initialization - docker

I'm trying to create a docker-compose setup for different websites.
Everything is working fine except for my volumes.
Here is an example of the docker-compose.yml:
version: '2'
services:
  website:
    build:
      context: ./dockerfiles/
      args:
        MYSQL_ROOT_PASSWORD: mysqlp#ssword
    volumes:
      - ./logs:/var/log
      - ./html:/var/www
      - ./nginx:/etc/nginx
      - ./mysql-data:/var/lib/mysql
    ports:
      - "8082:80"
      - "3307:3306"
And here is my Dockerfile:
FROM php:5.6-fpm
ARG MYSQL_ROOT_PASSWORD
RUN export DEBIAN_FRONTEND=noninteractive; \
echo mysql-server mysql-server/root_password password $MYSQL_ROOT_PASSWORD | debconf-set-selections; \
echo mysql-server mysql-server/root_password_again password $MYSQL_ROOT_PASSWORD | debconf-set-selections;
RUN apt-get update && apt-get install -y -q mysql-server php5-mysql nginx wget
EXPOSE 80 3306
VOLUME ["/var/www", "/etc/nginx", "/var/lib/mysql", "/var/log"]
Everything is working well, except that all the folders for my host volumes are empty. I want to see the nginx config and the mysql data in those folders.
What am I doing wrong?
EDIT 1:
Actually, the problem is that I want docker-compose to create the volume in my docker directory if it does not exist, and to use this volume if it exists, as explained in https://stackoverflow.com/a/39181484. But it doesn't seem to work.

The problem is that you're expecting files from the Container to be mounted on your host.
This is not the way it works: it's the other way around:
Docker mounts your host folder in the container folder you specify.
If you go inside the container, you will see that where the init files were supposed to be there is nothing (or whatever was in your host folder(s)), and if you write a file into that folder it will show up on your host.
Your best bet to get the init files and modify them for your container is to:
Create a container without mounting the folders (original container data will be there)
Run the container: docker run <image> (the container will have the files in the right place from the installation of nginx etc...)
Copy the files out of the container with docker cp <container>:<container_folder>/. <host_folder> (docker cp does not expand wildcards, so use /. to copy the folder's contents), as shown in the sketch after these steps
Now you have the 'original' files from the container init on your host.
Modify the files as needed for the container.
Run the container mounting your host folders with the new files in them.
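A minimal sketch of that workflow for the compose file above; seed and website are placeholder container/image names, and the copied paths are just the two mounts most likely to need seeding (docker cp also works on a container that was only created, so it never has to be started here):
docker build -t website ./dockerfiles/   # build the image, no volumes involved
docker create --name seed website        # throwaway container, never started
mkdir -p ./nginx ./html                  # make sure the host folders exist
docker cp seed:/etc/nginx/. ./nginx      # copy the original nginx config to the host
docker cp seed:/var/www/. ./html         # copy the original web root to the host
docker rm seed                           # clean up the throwaway container
docker-compose up -d                     # now start the stack with the bind mounts in place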
Notes:
You might want to go inside the container with a shell (docker run -it <image> /bin/sh) and zip up all the folders to make sure you get everything if there are nested folders, then docker cp ... the zip file out.
Also, be careful about filesystem case sensitivity: on Linux, filenames are case-sensitive; on Mac OS X, they're not. So if you have Init.conf and init.conf in the same folder, they will collide when you copy them to a Mac OS X host.

Related

How to find volume files from host while inside docker container?

In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, ./services:/var/www/html, finds php as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were in the ./services directory on my host machine, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command; docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v $(pwd)/services:/var/www/html -v $(pwd)/logs/php:/usr/local/etc/php-fpm.d/zz-log.conf php bash
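If the stack is already running from docker-compose up, another option (assuming the service is named php as in the compose file above) is to exec into the existing container rather than starting a new one, so the mounts defined in docker-compose.yml are already in place:
docker-compose exec php bash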

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read: very) new to Docker, so I'm experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but the contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful, as I modified my Dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it's slightly more difficult than just specifying a path.
You can accomplish this by specifying a named volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The image specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it about two weeks ago to help someone else on Stack Overflow.
The result shows the files from the container on my machine (the host).
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host (./site), which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command-line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec -it ... command from the step above and see if the staged test.txt file is now inside the container. This gives you definitive evidence that when you use bind mounts, the data on your host overwrites the data in the container.
That being said, doing something like this to share a log directory will work: the volume path specified in the container is still overwritten, but the difference is that the container writes to that path; it doesn't rely on it for config or app files.
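If the goal is to end up with the Laravel files on the host before the bind mount hides them, one way is to seed ./site from the built image first. A rough sketch, where laravel_build and laravel_seed are placeholder image/container names and the dockerfile name comes from the compose file above:
docker build -t laravel_build -f laravel.dockerfile .   # build the image (compose would normally do this)
docker create --name laravel_seed laravel_build         # throwaway container, never started
mkdir -p ./site
docker cp laravel_seed:/var/www/site/. ./site           # pull the generated project onto the host
docker rm laravel_seed                                   # clean up
docker-compose up -d                                     # now the bind mount has real content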
Hope this helps.

Access port of one container from another container

I have a postgres database in one container and a java application in another container. The postgres database is accessible on port 1310 on localhost, but the java container is not able to access it.
I tried this command:
docker run modelpolisher_java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
But it gives error java.net.UnknownHostException: biggdb.
Here is my docker-compose.yml file:
version: '3'
services:
  biggdb:
    container_name: modelpolisher_biggdb
    build: ./docker/bigg_docker
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bigg
    ports:
      - "1310:5432"
  java:
    container_name: modelpolisher_java
    build: ./docker/java_docker
    stdin_open: true
    tty: true
Dockerfile for biggdb:
FROM postgres:11.4
RUN apt update &&\
apt install wget -y &&\
# Create directory '/bigg_database_dump/' and download bigg_database dump as 'database.dump'
wget -P /bigg_database_dump/ https://modelpolisher.s3.ap-south-1.amazonaws.com/bigg_database.dump &&\
rm -rf /var/lib/apt/lists/*
COPY ./scripts/restore_biggdb.sh /docker-entrypoint-initdb.d/restore_biggdb.sh
EXPOSE 1310:5432
Can somebody please tell me what changes I need to make in the docker-compose.yml, or in the command, so that the java container can access the ports of the biggdb (postgres) container?
The two containers have to be on the same Docker-internal network to be able to talk to each other. Docker Compose automatically creates a network for you and attaches containers to that network. If you docker run a container alongside that, you need to find that network's name.
Run
docker network ls
This will list the Docker-internal networks you have. One of them will be named something like bigg_default, where the first part is (probably) your current directory name. Then when you actually run the container, you can attach to that network with
docker run --net bigg_default ...
Consider setting a command: in your docker-compose.yml file to pass these arguments when you docker-compose up. If the --host option is your code and doesn't come from a framework, passing settings like this via environment variables can be a little easier to manage than command-line arguments.
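As a sketch, keeping the jar invocation from the question (treat the exact service wiring as an assumption), the java service in docker-compose.yml could look like this:
java:
  container_name: modelpolisher_java
  build: ./docker/java_docker
  command: java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
  depends_on:
    - biggdb
With that in place, docker-compose up starts both containers on the same compose network, and the hostname biggdb resolves automatically (note the container-to-container port is 5432, not the published 1310).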
Since you use docker-compose to bring up the two containers, they already share a common network. To take advantage of that, you should use docker-compose run and not docker run. Also, pass the service name (java), not the container name (modelpolisher_java), to the docker-compose run command.
So just use the following command to run your jar:
docker-compose run java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg

Running a docker container in a Jenkins container: how can I set the volume from the host?

I'm running a Jenkins container using "docker outside of docker". My docker-compose file is:
---
version: '2'
services:
  jenkins-master:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/urandom:/dev/random
      - /home/jj/jenkins/jenkins_home/:/var/jenkins_home
    ports:
      - "8080:8080"
So all containers launched from the "jenkins container" are running on the host machine.
But when I try to run docker-compose in the "jenkins container" in a job that needs a volume, it takes the path from the host instead of from jenkins. I mean, when I run docker-compose with
volumes:
  - .:/app
it is mounted from /var/jenkins_home/workspace/JOB_NAME on the host, but I want it to be mounted from /home/jj/jenkins/jenkins_home/workspace/JOB_NAME.
Any idea how to do this in a "clean" way?
P.S.: I did a workaround using environment variables.
Docker on the host will map the path as is from the request, and docker-compose will make the request with the path it sees inside the container. This leaves you with a few options:
Don't use host volumes in your builds. If you need volumes, you can use named volumes and use docker's stdin/stdout to read data in and out of those volumes. That would look like:
tar -cC data . | docker run -i --rm -v app-data:/target busybox /bin/sh -c "tar -xC /target". You'd reverse the docker/tar commands to pull data back out.
Make the path on the host match that of the container. On your host, if you have access to make a symlink in /var, you can ln -s /home/jj/jenkins/jenkins_home /var/jenkins_home and then update your compose file to use the same path (you may need to specify /var/jenkins_home/. to follow the symlink).
Make the path of the container match that of the host. This may be the easiest option, but I'm not positive it would work (depends on where compose thinks it's running). Your Dockerfile for the jenkins master can include the following:
RUN mkdir -p /home/jj/jenkins \
&& ln -s /var/jenkins_home /home/jj/jenkins/jenkins_home
ENV JENKINS_HOME /home/jj/jenkins/jenkins_home
If the easy option doesn't work, you can rebuild the image from jenkins and change the JENKINS_HOME variable to match your environment.
Make your compose paths absolute. You can add some code to set a variable:
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#'). Then you can set your volume with that variable:
volumes:
  - ${CUR_DIR:-.}:/app
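A quick usage sketch of that last option, run from inside the Jenkins job before bringing the stack up (it simply rewrites the workspace path from the container's view to the host's view, as described above):
export CUR_DIR=$(pwd | sed 's#/var/jenkins_home#/home/jj/jenkins/jenkins_home#')
docker-compose up -d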

Confusion while deploying a docker-compose image

I've been working on a sample Ruby on Rails application and deploying its Docker image to a Linux server (Ubuntu 14.04).
Here is my Dockerfile:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
# CMD bundle exec rails s -p 3000 -b 0.0.0.0
# EXPOSE 3000
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    image: atulkhanduri/rails_docker_demos
    volumes:
      - .:/rails_docker_demo
    ports:
      - "3000:3000"
    depends_on:
      - db
deploy.sh:
#!/bin/bash
docker build -t atulkhanduri/rails_docker_demo .
docker push atulkhanduri/rails_docker_demo
ssh username@ip-address << EOF
docker pull atulkhanduri/rails_docker_demo:latest
docker stop web || true
docker rm web || true
docker rmi atulkhanduri/rails_docker_demo:current || true
docker tag atulkhanduri/rails_docker_demo:latest atulkhanduri/rails_docker_demo:current
docker run -d --restart always --name web -p 3000:3000 atulkhanduri/rails_docker_demo:current
EOF
Now my problem is that I'm not able to use docker-compose commands like docker-compose up to run the application server.
When I uncomment the last two lines from the Dockerfile, i.e.,
CMD bundle exec rails s -p 3000 -b 0.0.0.0
EXPOSE 3000
then I'm able to run the server on port 3000, but I get the error could not translate host name "db" to address: Name or service not known. (My database.yml has "db" as the host.) This is because the postgres image is not used, since I'm not starting the containers through the docker-compose file.
EDIT:
Output of docker network ls:
NETWORK ID          NAME                DRIVER              SCOPE
b466c9f566a4        bridge              bridge              local
7cce2e53ee5b        host                host                local
bfa28a6fe173        none                null                local
P.S.: I've searched a lot on the internet but am still not able to use the docker-compose file.
Assumptions
If I am reading what you've done here correctly, my answer assumes the following two things.
You are using docker-compose to run the database container.
You are using plain docker commands (not docker-compose) to start the application server ("web").
First, I would suggest not doing that, it is a lot simpler to use docker-compose for both. However, I'll answer based on the above, assuming that there is some valid reason you cannot use docker-compose to run the "web" container.
About container and network names
When you run the docker-compose command to start the db container, two things happen (among others).
The container is given a new name, composed of the directory you run the compose setup from, the static name in compose (db), and a number. So let's say you have this all in a directory named myapp; you would then have a new container named myapp_db_1. You can see what it is named using docker ps.
A network bridge is created if it didn't already exist, named something like myapp_default - again, named after the directory that the compose setup is inside of.
Connecting to the right network
The problem is that your non-compose container is attached to the default bridge network (named bridge, or possibly docker_default depending on your setup), but your db container is attached to myapp_default. The two networks do not know about each other. You need to connect them. It probably makes more sense to tell the web app container to attach to the compose network.
First, get the correct network name. You can see all networks using docker network ls. It might look like this:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c1f5764a112b        bridge              bridge              local
175efb89adef        docker_default      bridge              local
5185ff0e1054        myapp_default       bridge              local
Once you have the correct name, update your run command to know about the network using the --network option.
docker run -d --restart always --name web \
-p 3000:3000 --network myapp_default \
atulkhanduri/rails_docker_demo:current
Once it is attached to the proper network, the name "db" should resolve correctly.
If you used docker-compose to start both of them, this would not be necessary (this is one of the things docker-compose just takes care of for you silently).
Getting this to run on your server
In the comments, you mention that you are having some issues with compose on the server. Specifically you said:
Do I need to copy my complete project on the server? Can't I run the application from docker image only? Actually, I've copied docker-compose in server and it throws errors for Gemfile, then I copied Gemfile, then it says it should be a rails app. So I guess I need to copy my complete folder in server. Can you please confirm?
Let's look at some parts of your Dockerfile. I'll add some comments inline.
## Make a new directory, and then make it the current directory
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
## Copy Gemfile and Gemfile.lock into this directory from outside
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
## Run the bundle installer, which will install to this directory
RUN bundle install
## Finally, copy everything from the outside local dir to here
ADD . /rails_docker_demo
So, clearly, /rails_docker_demo is your application directory within the container. You've installed a bunch of stuff here, and this will become a part of your image. When you push your image to the registry, then pull it down on the server (as you do in the deploy script), this will all come with it.
Now let's look at (some of) docker-compose.yml.
services:
  web:
    volumes:
      - .:/rails_docker_demo
Here you have defined a volume mount, mounting the current directory (wherever docker-compose.yml lives) as /rails_docker_demo. When you do that, whatever happens to exist on the server is now available in /rails_docker_demo, but this mount undoes all the work from the Dockerfile that I just mentioned above. Instead of having the resources you installed when you built the image, you have only whatever is on the server in the . directory. The mount sits on top of the image's existing /rails_docker_demo directory, hiding its contents and replacing them with whatever is on the server at the moment.
Unless there is a reason you put this mount here, you probably just need to remove that volume mount from docker-compose.yml. You will still need docker-compose.yml on the server, but you should not need the rest of it (aside from the image, of course).
The mount you have defined is a useful thing for development purposes: it lets you use the container to run the application and quickly have code changes show up (without rebuilding the image). But in the case of your deployment, it is just causing trouble.
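For completeness, here is a rough sketch of what the server-side half of deploy.sh could look like once docker-compose manages both containers. It assumes docker-compose is installed on the server, that docker-compose.yml (with the volume mount removed) sits in the directory these commands run from, and that the image: name in the compose file is made to match the tag pushed by deploy.sh (they currently differ by a trailing s: rails_docker_demos vs rails_docker_demo):
docker pull atulkhanduri/rails_docker_demo:latest   # same image pull as before
docker-compose up -d --no-build                     # start db and web together; compose wires up the network so "db" resolves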
Try moving the EXPOSE above CMD, e.g.:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
EXPOSE 3000
CMD bundle exec rails s -p 3000 -b 0.0.0.0
