How can Docker on Windows 10 access Network Drives?

I wrote some scripts that I use in a Docker container. For the scripts to be useful I need to access some network mounts.
On my Mac it's easy. In my docker-compose.yml I have:
volumes:
  - type: bind
    source: /Volumes/Teams/myteam/folder/subfolder
    target: /subfolder
On my colleague's Windows laptop /Volumes/Teams is mounted as T:, so my naive approach was to use
volumes:
  - type: bind
    source: /t/myteam/folder/subfolder
    target: /subfolder
From the Git shell this path can be used, but when starting docker-compose up from that shell, he gets error messages:
ERROR: for 255d3d7d2944_my-tools_helpscripts_1 Cannot create container for service helpscripts: b'Mount denied:\nThe source path "T:/myteam/folder/subfolder"\ndoesn\'t exist and is not known to Docker'
Encountered errors while bringing up the project.
In docker's settings for shared drives, the T: drive is not listed.
How can we solve this issue?

I think I found a solution that works for me:
Start my container with the capabilities SYS_ADMIN and DAC_READ_SEARCH.
Mount inside the container with
mount -t cifs -o user=USER,domain=DOMAIN //SERVER/Teams /mnt/T
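As a rough docker-compose sketch of that approach (the image name and the final command are placeholders, and mount.cifs will normally also need a password= or credentials= option so it does not prompt interactively):

services:
  helpscripts:
    image: my-tools/helpscripts          # placeholder image name
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    # create the mount point, mount the share, then start the actual scripts
    # SERVER, DOMAIN and USER are placeholders taken from the mount command above,
    # and /usr/local/bin/helpscripts.sh stands in for whatever the container runs
    command: >
      sh -c "mkdir -p /mnt/T
      && mount -t cifs -o user=USER,domain=DOMAIN //SERVER/Teams /mnt/T
      && exec /usr/local/bin/helpscripts.sh"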

Related

Docker bind propagation mount error "is not a shared mount"

I am trying to mount a FUSE virtual filesystem from inside a Docker container and expose the mount point to the host.
Docker is installed via snap on Ubuntu 20.04.
The software is a fresh install of Seafile (a Dropbox alternative), but I believe this problem is more related to Docker, snap, and mounting file systems on Ubuntu. For what it's worth, I was following the official instructions here.
Inside the container (when it runs successfully), a script mounts a FUSE virtual filesystem at /seafile-fuse that makes all files stored within Seafile visible.
docker-compose.yml excerpt:
version: '3.3'
services:
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    volumes:
      - /home/jonathan/seafile/seafile-data:/shared
      - type: bind
        source: /home/jonathan/seafile/seafile-fuse
        target: /seafile-fuse
        bind:
          propagation: rshared
    privileged: true
    cap_add:
      - SYS_ADMIN
This leads to:
ERROR: for seafile Cannot start service seafile: path /home/jonathan/seafile/seafile-fuse is mounted on /home but it is not a shared mount
I found this somewhat related answer, which hints that the issue may have to do with the docker daemon running in a different namespace. But I am unable to get that solution to work.
What do I need to do to connect the host directory /home/jonathan/seafile/seafile-fuse so that it sees the container directory /seafile-fuse?
Bonus question...
Given that this is to be an internet-facing home server, is it necessary for this to become a privileged container? Are there better options?
Thanks!!

Docker volume mount issue to a mounted folder

We have mounted a folder on a Linux machine into our Docker container application using docker-compose:
volumes:
  - /mnt/share:/mnt/share
/mnt/share is a mounted folder on the machine (not a real folder on the machine; it's our file server). If for some reason that mount is lost and then remounted,
the application running in the Docker container no longer has access to the mounted folder until the container is restarted.
You might want to use a Volume Driver instead of bind-mounting a local filesystem.
See Share data among machines
Without knowing more about your environment it is impossible to give a more detailed answer. It would be helpful to know whether your container runs in an AWS data center, and whether you use NFSv3, NFSv4 or CIFS for mounting.
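For example, the local volume driver can mount an NFS export as a named volume, so Docker takes care of (re)mounting it when a container that uses it starts. This is only a sketch; the server address, export path, image and service names are placeholders, and CIFS works similarly with the corresponding mount options:

services:
  app:
    image: my-app                        # placeholder
    volumes:
      - share:/mnt/share

volumes:
  share:
    driver: local
    driver_opts:
      type: nfs
      o: addr=fileserver.example.com,rw,nfsvers=4
      device: ":/export/share"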
The following solution helped me to continue.
I wrote a script to check whether the folder exists.
The script is then called as a command in the docker-compose file.
version:"3"
services:
flowable-task-handler:
build: flowable-task-handler
ports:
- "8085:8085"
command: bash -c "/wait_for_file_mount.sh /mnt/share/fileshares/ && java -jar /app.jar"
wait_for_file_mount.sh
#!/bin/sh
# Used to check whether the mount folder is ready for flowable to use
mountedfolder="$1"
until [ -d "$mountedfolder" ]; do
  sleep 2
  echo "error: Mounted folder not found : $mountedfolder"
done
It's a Spring Boot application. I have removed the entrypoint in the Dockerfile, and the application is started using the command in docker-compose (java -jar /app.jar).
Defining the mount propagation as ":shared" should fix this:
-v /autofs:/autofs:shared \
Not sure about docker-compose - I don't really use it - but you can define a volume with mount propagation and put it into your compose file, as sketched below.
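In docker-compose, the long volume syntax can express the same propagation; a sketch reusing the /autofs path from the docker run flag above (compose file format 3.2 or newer; service and image names are placeholders):

services:
  myservice:
    image: my-image
    volumes:
      - type: bind
        source: /autofs
        target: /autofs
        bind:
          propagation: shared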

Docker volumes newbie questions

For a service I've defined a volume as (an extract of my yml file)
services:
  wordpress:
    volumes:
      - wp_data:/var/www/html
    networks:
      - wpsite
networks:
  wpsite:
volumes:
  wp_data:
    driver: local
I'm aware that on a Windows 10 filesystem the WP volumes won't be readily visible to me, as they'll exist within the Linux VM. Alternatively, I'd have to provide a path argument to be able to view my WP installation, e.g.
volumes:
  - ./mysql:/var/lib/mysql
But my question is: what is the point of the driver: local option? Is it the default? I've tried with and without this option and can't see the difference.
Secondly, what does this do? In my yml file I've commented it out with no ill effect that I can see:
networks:
  wpsite:
First question:
The --driver or -d option defaults to local, so driver: local is redundant. On Windows, the local driver does not support any options. If you were running Docker on a Linux machine, you would have some options (see the sketch below); official documentation: https://docs.docker.com/engine/reference/commandline/volume_create/
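As an illustration of such options on a Linux host (not applicable to the Windows setup in the question), the local driver accepts mount-style arguments through driver_opts; the host path here is hypothetical:

volumes:
  wp_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/wp_data    # hypothetical host directory that must already exist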
Second question:
In each section networks:/volumes:/services: you basically declare the resources you need for your deployment.
In other words, creating an analogy with a virtual machine, you can think about it like this: you need to create a virtual disk named wp_data and a virtual network named wpsite.
Then, you want your wordpress service to mount the wp_data disk under /var/www/html and to connect to the wpsite subnet.
You can use the following docker commands to display the resources that are created behind the scenes by your compose file:
docker ps - show containers
docker volume ls - show docker volumes
docker network ls - show docker networks
Hint: once you have created a network or a volume, it will not be destroyed automatically unless you manually delete it. You can clean up the resources manually and experiment by removing/adding resources in your compose file.
Updated to answer question in comment:
If you run Docker on a Windows host, you have probably enabled Hyper-V. This allows Windows to create a Linux VM, on top of which your Docker engine is running.
With the Docker engine installed, Docker can then create "virtual resources" such as virtual networks, virtual disks (volumes), containers (people often compare containers to VMs), services, etc.
Let's look at the following section from your compose file:
volumes:
  wp_data:
    driver: local
This will create a virtual disk managed by Docker, named wp_data. The volume is not created directly on your Windows host file system; instead it is created inside the Linux VM that runs on top of Hyper-V on your Windows host. If you want to know precisely where, you can either execute docker inspect <containerID> and look at the mounts of that container, or run docker volume ls followed by docker volume inspect <volumeID> and look for the key "Mountpoint" to get the actual location.

define inline file in docker-compose

I'm currently using a bind mount to mount a file from the host to a container:
volumes:
  - type: bind
    source: ./localstack_setup.sh
    target: /docker-entrypoint-initaws.d/init.sh
Is there a way to define the ./localstack_setup.sh inline in the docker-compose.yml? I want to use a remote Docker host, and docker-compose up fails because the remote host doesn't have the file.
I don't know of an option to define a script inline in docker-compose itself. I recommend parameterizing your shell script with environment variables instead, so it stays generic with respect to the native Docker image; see the sketch below.
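A rough sketch of that idea, assuming the localstack image implied by the paths in the question and using hypothetical variable names; whatever setup logic runs inside the image would read these variables instead of a bind-mounted script:

services:
  localstack:
    image: localstack/localstack       # assumed from the paths in the question
    environment:
      - SETUP_BUCKET=my-bucket         # hypothetical parameters for the setup logic
      - SETUP_REGION=us-east-1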

Empty directory when mounting volume using windows for docker

I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1), and the same version on the Hyper-V machine. I have tried switching to a local account, shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host, however I can't find anywhere that explicitly says that volume mounting will not work in Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring up the services with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount I can see the files that I copied into the container from the Dockerfile.
Running verbose when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in "Docker for Windows" -> Settings -> Shared Drives, apply, and then check it again and apply.
You should use /c/Users, with a capital "C".
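So the bind mount in that compose file would become:

volumes:
  - /c/Users/deep/projects/chat/app:/usr/src/app:rw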
