Files in the folder don't appear on the host after mounting to container - docker

I have a dedicated data volume container which maps the source code into the php container, and a volume which maps another folder into the data volume container's file structure.
common-services.yml
code:
  image: debian:jessie
  volumes:
    - ../:/var/www
docker-compose-dev.yml
php:
  extends:
    file: common-services.yml
    service: php
  volumes_from:
    - code
  links:
    - mysql
  volumes:
    - "~/Projects/test-sampledata:/var/www/app/code/TEST/SampleData/"
On the host machine I see all the files and folders of /var/www, but not the sub-folder /var/www/app/code/TEST/SampleData/. When I enter the php container I see the file structure as expected.
The question is why /var/www/app/code/TEST/SampleData/ and its sub-folders don't map to the host.
OS: Ubuntu 16.04.1 LTS
Thanks in advance!

If you're running on macOS or Windows, you need the host volume to be within the /Users folder. Each of these implementations runs a VM under the covers to provide the Docker host on Linux, and these VMs mount /Users as a share from your parent OS by default. This can be modified in the settings of Docker if needed (or at least it can be on recent macOS releases).
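For example (a sketch only, with a hypothetical username and project folder; adjust the path to your own setup), a bind mount that stays under /Users is shared into the VM by default and needs no extra configuration:

volumes:
  # hypothetical path under /Users, which Docker for Mac/Windows shares with the VM by default
  - /Users/yourname/Projects/test-sampledata:/var/www/app/code/TEST/SampleData/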

Related

Docker bind propagation mount error "is not a shared mount"

I am trying to mount a FUSE virtual filesystem from inside a Docker container and expose the mount point to the host.
Docker is installed via snap on Ubuntu 20.04
The software is a fresh install of Seafile (a Dropbox alternative), but this problem I believe is more related to Docker, snap, and mounting file systems on Ubuntu. For what it's worth, I was following the official instructions here.
Inside the container (when it runs successfully), a script mounts a FUSE virtual filesystem at /seafile-fuse that makes all the files stored within Seafile visible.
docker-compose.yml excerpt:
version: '3.3'
services:
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    volumes:
      - /home/jonathan/seafile/seafile-data:/shared
      - type: bind
        source: /home/jonathan/seafile/seafile-fuse
        target: /seafile-fuse
        bind:
          propagation: rshared
    privileged: true
    cap_add:
      - SYS_ADMIN
This leads to:
ERROR: for seafile Cannot start service seafile: path /home/jonathan/seafile/seafile-fuse is mounted on /home but it is not a shared mount
I found this somewhat related answer which hints that the issue may have to do with the docker daemon running in a different namespace, but I am unable to get that solution to work.
What do I need to do to connect the host directory /home/jonathan/seafile/seafile-fuse so that it sees the container directory /seafile-fuse?
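(For what it's worth, the "shared mount" requirement from the error can usually be inspected and changed on the host with findmnt and mount; the commands below are only a sketch of the direction that hint points in, assuming /home is the parent mount named in the error, not a verified fix for a snap-installed daemon.)
# show the propagation mode of the parent mount that the error names
findmnt -o TARGET,PROPAGATION /home
# mark it (and its submounts) shared so bind mounts can propagate
sudo mount --make-rshared /home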
Bonus question...
Given that this is to be an internet-facing home server, is it necessary that this becomes a privileged container? Are there better options?
Thanks!!

Docker-compose volume key: what protocol is used behind

I am not sure if this is a right question to ask. In the tutorial of Docker compose, https://docs.docker.com/compose/gettingstarted/#step-5-edit-the-compose-file-to-add-a-bind-mount, there is a volume key in the docker-compose.yml:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"
And according to the tutorial, the volume key mounts the local file to the remote, and therefore we can change the code on the fly without restarting Docker. My question is: what internet protocol is used behind the scenes to transfer the updated code file?
Furthermore, I guess there are other frameworks with this feature. What are the common protocols behind them, and why?
The tutorial doesn't say "the volume key mounts the local file to the remote". It says:
...in your project directory to add a bind mount for the web service:
[...]
The new volumes key mounts the project directory (current directory) on the host to /code inside the container, allowing you to modify the code on the fly, without having to rebuild the image.
If you click on the bind mount link, it will take you to documentation that should answer all of your questions.
Briefly, a bind mount is a way of making one directory on your system appear in another location. For example, if I were to run:
mkdir /tmp/newetc
mount -o bind /etc /tmp/newetc
Then running ls /tmp/newetc would show the same contents as /etc.
Docker uses this feature to expose host directories inside your containers.
A bind mount only works on the same host; it cannot be used to expose files on your local system to a remote system. It is a kernel feature and there are no internet protocols involved.
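As a concrete illustration (a throwaway example of my own, not something from the tutorial), the same idea expressed with Docker looks like this:
# expose the host's /etc read-only inside a container and list it
docker run --rm -v /etc:/tmp/newetc:ro alpine ls /tmp/newetc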

Docker compose: is it possible to mount a folder inside an already mounted volume?

When using compose for development I have my app mounted inside /var/www/html with this:
volumes:
  - ./app:/var/www/html
My local copy needs all the images that are on the production website. There are quite a lot of them, so I don't want to store them on my tiny SSD but on a big external disk.
So my images are located in /media/storage/bigdisk/images.
Is it possible to mount this location inside the already mounted /var/www/html?
This way doesn't seem to work:
volumes:
  - ./app:/var/www/html
  - /media/storage/bigdisk/images:/var/www/html/images
This should work normally; the only downside of this solution is that Docker will create an additional directory at ./app/images so that it can mount the images volume.
For this directory tree:
- app
--- index.php
- docker-compose.yml
- media
--- picture.png
And docker-compose.yml:
version: '2'
services:
  app:
    image: ubuntu
    volumes:
      - ./app:/var/www/html
      - ./media:/var/www/html/images
You get:
$ docker-compose run --rm app find /var/www/html
/var/www/html
/var/www/html/index.php
/var/www/html/images
/var/www/html/images/picture.png
This works even when the ./app/images directory is present locally with some content. If it does not exist, Docker creates an empty directory there with root:root permissions (if the container runs as root).
Tested on Docker version 1.12.6 and docker-compose version 1.8.0
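If the root-owned auto-created directory is a concern, one workaround (my own suggestion, not part of the answer above) is to pre-create the mount point on the host as your own user before starting the stack:
# create the nested mount point yourself so Docker doesn't create it as root:root
mkdir -p ./app/images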

How to sync code between container and host using docker-compose?

Until now, I have used a local LAMP stack to develop my web projects and deploy them manually to the server. For the next project I want to use Docker and docker-compose to create a MariaDB, an NGINX and a project container for easy development and deployment.
When developing I want my code directory on the host machine to be synchronised with the docker container. I know that could be achieved by running
docker run -dt --name containerName -v /path/on/host:/path/in/container
in the CLI as stated here, but I want to do that within a docker-compose v2 file.
I have gotten as far as having a docker-compose.yml file looking like this:
version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - myCodeVolume:/var/www
volumes:
  myCodeVolume:
How can I synchronise my /var/www directory in the container with my host machine (Ubuntu desktop, macOS or Windows machine)?
Thank you for your help.
It is pretty much the same way, you do the host:container mapping directly under the services.myProj.volumes key in your compose file:
version: '2'
services:
  ...
  myProj:
    ...
    volumes:
      - /path/to/file/on/host:/var/www
Note that the top-level volumes key is removed.
This file could be translated into:
docker create --link db -v /path/to/file/on/host:/var/www myProj
When docker-compose finds the top-level volumes section, it tries to docker volume create the keys under it before creating any other container. Those volumes can then be used to hold the data you want to persist across containers.
So, if I take your file for an example, it would translate into something like this:
docker volume create myCodeVolume
docker create --link db -v myCodeVolume:/var/www myProj

Empty directory when mounting volume using Docker for Windows

I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1), and the same on the Hyper-V machine. I have tried switching to using a local account, I have shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host, however I can't find anywhere that explicitly says that volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring the services up with the volume specified in the compose file, that directory is emptied; however, if I omit this volume mount I can see the files I copied in from the Dockerfile.
Running with verbose output when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in Docker for Windows -> Settings -> Shared Drives, apply, then check it again and apply.
You should use /c/Users, with a capital "C".
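With that in place, the bind mount in the compose file above would look something like this (only the casing of the path changes; the rest of the service definition stays the same):
volumes:
  # same host folder as before, but with /c/Users capitalised as the answer above suggests
  - /c/Users/deep/projects/chat/app:/usr/src/app:rw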