Host volumes not getting mounted on 'docker-compose up' - docker

I'm using docker-machine and docker-compose to develop a Django app with a React frontend. The volumes don't get mounted in a Debian environment, but everything works properly on OSX and Windows. I've been struggling with this issue for days. I created a light version of my project that still reproduces the issue; you can find it at https://github.com/firetix/docker_bug.
My docker-compose.yml:
django:
  build: django
  volumes:
    - ./django/:/home/docker/django/
My Dockerfile is as follows:
FROM python:2.7
RUN mkdir -p /home/docker/django/
ADD . /home/docker/django/
WORKDIR /home/docker/django/
CMD ["./command.sh"]
When I run docker-compose build, everything works properly. But when I run docker-compose up, I get:
[8] System error: exec: "./command.sh": stat ./command.sh: no such file or directory
I found this question on Stack Overflow, How to mount local volumes in docker machine, and followed the proposed workarounds with no success.
Am I doing something wrong? Why does this work on OSX and Windows but not in a Debian environment? Is there a workaround that works on Debian? Both the OSX and Debian machines show the /Users/ folder as a shared folder when I check the VirtualBox GUI.

This shouldn't work for you on OSX, let alone Debian. Here's why:
When you ADD ./command.sh into /home/docker/django/ the image builds fine, with the file in the correct directory. But when you up the container, you mount your local directory "on top of" the one you created in the image, so the file baked into the image is no longer visible there.
I recommend adding command.sh to a different location, e.g. /opt/django/, and changing your command to /opt/django/command.sh.
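A minimal sketch of that first suggestion, assuming command.sh sits at the top of the django build context (paths are illustrative); because the bind mount only covers /home/docker/django/, the copy under /opt/django/ stays visible at run time:
# Dockerfile - hedged sketch of the "different location" approach
FROM python:2.7
RUN mkdir -p /home/docker/django/ /opt/django/
ADD command.sh /opt/django/command.sh
RUN chmod +x /opt/django/command.sh
WORKDIR /home/docker/django/
CMD ["/opt/django/command.sh"]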
Or, more simply, here's the full version:
# Dockerfile
FROM python:2.7
RUN mkdir -p /home/docker/django/
WORKDIR /home/docker/django/
# docker-compose.yml
django:
  build: django
  command: ./command.sh
  volumes:
    - ./django/:/home/docker/django/
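A usage note on this variant: command.sh now comes from the bind-mounted host directory at run time, so it has to exist in ./django on the host and be executable there, e.g.:
chmod +x django/command.sh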

I believe this should work; there have been problems with some docker-compose versions and relative paths:
django:
  build: django
  volumes:
    - ${PWD}/django:/home/docker/django
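If ${PWD} is not being expanded by your docker-compose version, another hedged option is to put the host path in a .env file next to docker-compose.yml and reference it there (the variable name is illustrative; this assumes a docker-compose release new enough to read .env files):
# .env
HOST_DJANGO_DIR=/home/you/project/django

# docker-compose.yml
django:
  build: django
  volumes:
    - ${HOST_DJANGO_DIR}:/home/docker/django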

Related

Mounting a local folder into a Windows container using docker-compose

There are lots of posts on this topic, but they either discuss docker run or Linux, or cover earlier versions of Docker / docker-compose. I am simply trying to share a config file that resides locally with my container. The following is my docker-compose file:
version: "3.8"
services:
TestService:
image: testservicelogmon
build:
context: .
dockerfile: Dockerfile
volumes:
- C:\ProgramData\Solution Name\Project Name:C:\ProgramData\Solution Name\Project Name:RW
Note: "Solution Name" and "Project Name" have space between them, and my config file resides in Project name folder.
The image gets created successfully, but I always end up with "Mounts": [] and "Volumes": null.
I went through the documentation and some posts on SO but couldn't find anything that would solve this problem for me. Below are a couple of links I referred to. Any help on this would be appreciated. Thanks!
Update: It's not just this folder; it seems I am not able to mount any folder, whereas if I do a docker run like the one below it works perfectly fine.
docker run -it -v 'C:\ProgramData\Solution Name\Project Name:C:\ProgramData\Solution Name\Project Name:RW' testservicelogmon:latest powershell
docker named volume with targeting windows local folder
volume binding using docker compose on windows
The code turned out to work fine. I was making a simple mistake: running docker-compose build followed by docker run. Plain docker run does not read docker-compose.yml, so the volumes defined there were never applied and the mounts always showed up empty. All I had to do was run docker-compose build followed by docker-compose up, and the mounts were populated as expected.
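Two hedged side notes for similar situations, neither a confirmed requirement for this case: recent docker-compose releases can rebuild and start in one step, and for Windows paths containing spaces the long mount syntax (compose file format 3.2+) avoids the ambiguity of the colon-separated short form. The service's volumes section could then be written as shown below:
# rebuild the image and start the containers (with the compose-defined mounts) in one step
docker-compose up --build

# long mount syntax, using the paths from the question
volumes:
  - type: bind
    source: 'C:\ProgramData\Solution Name\Project Name'
    target: 'C:\ProgramData\Solution Name\Project Name'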

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read very) new to Docker, so I'm experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful because I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although, it is slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory on the host is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on Stack Overflow.
The screenshot (not reproduced here) showed the files from the container appearing on my machine (the host).
You can read more about Docker Volume configs here if you would like.
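One caveat, based on how the local driver's bind option generally behaves (worth verifying for your setup): the device path has to exist on the host before docker-compose up, otherwise volume creation fails. Creating it up front avoids that:
mkdir -p /Volumes/docker/docker_test_app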
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site) it overwrites the path within the container with the contents of the path on your host (./site) - which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command line access inside of the container. From there you can do ls -a /var/www/site.
Furthermore, you can pre-stage ./site with a random test file (test.txt or whatever), run docker-compose up -d, then run the same docker exec -it ... command from the step above and check whether the staged test.txt file is now inside the container - that gives you definitive evidence that when you mount a volume, the data on your host overwrites the data in the container.
With that being said, doing something like this and sharing a log directory will work. The volume path specified on the container is still overwritten; the difference is that the container writes to that path - it doesn't rely on it for config files or app files.
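A hedged sketch of that log-directory pattern, with illustrative paths (the host directory starts out empty and the container writes into it, so nothing from the image is lost):
# docker-compose.yml (illustrative)
laravel:
  build:
    context: .
    dockerfile: laravel.dockerfile
  volumes:
    - ./logs:/var/www/site/storage/logs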
Hope this helps.

Why is docker-compose ignoring changes to the Dockerfile?

I have made changes in my Dockerfile, and yet when I run either
docker-compose up
or
docker-compose rm && docker-compose build && docker-compose up
an old image is used - the build steps shown in the output are clearly outdated.
I specifically tell it to build the container in the docker-compose.yml:
my-app:
  build: ./
  hostname: my-app
  ...
Yet when I build the image directly via docker:
docker build .
the right image is built. What am I missing? I have tried this to no avail.
Check what dockerfile is configured in your docker-compose.yml.
My app has two dockerfiles, and docker-compose used a different one than docker itself, as it should:
my-app:
  build: ./
  dockerfile: Dockerfile.dev
Adapting that dockerfile as well fixed the problem.
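Note that in compose file format version 2 and later the dockerfile key is nested under build; a hedged sketch of the equivalent:
my-app:
  build:
    context: ./
    dockerfile: Dockerfile.dev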
And, oh, if you are using multiple dockerfiles, it's nice to add that in the project's documentation.
I had the same question but my answer was different.
I moved from a large nginx image to a slim, Alpine nginx image.
I thought docker compose was ignoring my Dockerfile as the error looked like it had not copied a script. The error was:
/bin/sh: /usr/bin/start.sh: not found
Well, the file was there. The file had the correct permissions. All I needed to do was to resolve the wrong shebang in my script and all worked with docker-compose:
#!/bin/sh WORKED
#!/bin/bash FAILED
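The underlying reason, in this case: Alpine-based images ship BusyBox sh at /bin/sh but no bash, so a #!/bin/bash shebang makes the script itself get reported as "not found". If bash is genuinely needed, it can be installed in the Dockerfile - a minimal sketch for Alpine-based images:
RUN apk add --no-cache bash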

Dockerfile build volume changes not reflected on mounted local folder? (OS X / boot2Docker)

Using docker-compose, I have a Dockerfile which builds an environment for a Sails JS app. In my Dockerfile I generate a new Sails scaffold project using sails new . in a folder that is mapped to my local filesystem through docker-compose.
When I run docker-compose build, everything seems to build successfully, and I follow up with docker-compose up -d on my local machine. Then I navigate to the local folder that is mapped to the webroot on the HOST machine (the boot2docker VM) - the folder where I expect to find the new Sails project generated during docker-compose build. I'm expecting to see all the files that were generated during the build sitting there in my local folder, but the folder is BLANK. What's up?
My basic docker-compose file:
node:
  restart: "always"
  build: ./cnt/node
  ports:
    - "8080:8080"
  volumes:
    - ./src:/var/www/html
  # DEBUG: conveniently used to keep a container running
  command: /bin/bash -c 'tail -f /dev/null'
NOTE: ./src is my local folder where my source code will reside. This is mapped to /var/www/html (the webroot) inside the container.
Here are the last couple lines from the Dockerfile found in ./cnt/node used to generate the Sails scaffold:
WORKDIR /var/www/html
RUN sails new .
RUN touch test.txt
Everything runs successfully when I execute (no errors):
docker-compose build
docker-compose up -d
When done, I cd into src to examine the source directory where I expected to see my Sails app scaffold, but it's EMPTY. What the heck?! There were no errors! What am I missing?
Is it something to do with the docker-compose file and the fact that the image is built via build: ./cnt/node while the volumes are mounted LATER, via the compose file? Do I need to mount the VOLUMES first, before I generate the Sails scaffold?
Thanks!
Volumes are only mounted at run time; build is specifically designed to be reproducible, so nothing external is allowed during the build.
You'll have to generate the scaffolding on the host by using docker-compose run node ...
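A hedged sketch of what that could look like with the service above, assuming the sails CLI is installed in the image; because docker-compose run applies the service's volumes, the generated files land in ./src on the host:
docker-compose run --rm node sails new .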

Docker VOLUME for different users

I'm using docker and docker-compose for building my app. There are now two developers on the project, which is hosted on GitHub.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run container like so:
docker-compose up -d
Everything works fine, but due to our different operating systems we have our code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer, so I cannot just add a volume instruction to the Dockerfile.
Right now we solve this problem in the wrong way:
- I am using lsyncd with my personal config to transfer files into the container.
- The second developer uses a volume instruction in the Dockerfile but does not commit it.
Maybe you know how we can write a unified Dockerfile / docker-compose setup that mounts our code into the app container from these different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
The paths in the Dockerfile (in COPY, for example) are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
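A hedged sketch of what the shared volume could then look like, with paths relative to docker-compose.yml (the host side ../.. points at the project root; the container path is illustrative and should match wherever the app expects its code):
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
  volumes:
    - ../..:/var/www/project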
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus, as an option while running the container (I don't know docker-compose well):
-v /path/to/your/code:/usr/src/app
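For reference, the docker-compose equivalent of that -v flag would be a volumes entry on the service - a hedged sketch with an illustrative relative path:
app:
  build: .
  volumes:
    - ./:/usr/src/app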
