I am trying to mount a FUSE virtual filesystem from inside a Docker container and expose the mount point to the host.
Docker is installed via snap on Ubuntu 20.04
The software is a fresh install of Seafile (a Dropbox alternative), but I believe this problem is more related to Docker, snap, and mounting file systems on Ubuntu. For what it's worth, I was following the official instructions here.
Inside the container (when it runs successfully), a script mounts a FUSE virtual filesystem to /seafile-fuse that makes all the files stored within Seafile visible.
docker-compose.yml excerpt:
version: '3.3'
services:
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    volumes:
      - /home/jonathan/seafile/seafile-data:/shared
      - type: bind
        source: /home/jonathan/seafile/seafile-fuse
        target: /seafile-fuse
        bind:
          propagation: rshared
    privileged: true
    cap_add:
      - SYS_ADMIN
This leads to:
ERROR: for seafile Cannot start service seafile: path /home/jonathan/seafile/seafile-fuse is mounted on /home but it is not a shared mount
I found this somewhat related answer which hints that the issue may have to do with the docker daemon running in a different namespace. But I am unable to get that solution to work.
What do I need to do to connect the host directory /home/jonathan/seafile/seafile-fuse so that it sees the container directory /seafile-fuse?
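For reference, the generic remedy for a "not a shared mount" error is to change the mount propagation of the parent mount on the host before starting the container. A minimal sketch, assuming /home really is its own mount point (as the error message suggests), and not accounting for any snap confinement issues:

sudo mount --make-shared /home   # mark /home and its submounts as shared
docker-compose up -d             # then retry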
Bonus question...
Given that this is to be an internet-facing home server, is it necessary that this becomes a privileged container? Are there better options?
Thanks!!
For a service I've defined a volume as (an extract of my yml file)
services:
  wordpress:
    volumes:
      - wp_data:/var/www/html
    networks:
      - wpsite
networks:
  wpsite:
volumes:
  wp_data:
    driver: local
I'm aware that on a Windows 10 filesystem the WP volumes won't be readily visible to me, as they'll exist within the Linux VM. Alternatively, I'd have to provide a path argument to be able to view my WP installation, e.g.
volumes:
  - ./mysql:/var/lib/mysql
But my question is: what is the point of the 'driver: local' option, and is this the default? I've tried with and without this option and can't see the difference.
Secondly, what does this do? In my yml file I've commented it out with no ill effect that I can see!
networks:
  wpsite:
First question:
The --driver or -d option defaults to local, so driver: local is redundant. On Windows, the local driver does not support any options. If you were running docker on a Linux machine, you would have some options; see the official documentation: https://docs.docker.com/engine/reference/commandline/volume_create/
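For illustration, on a Linux host the local driver accepts mount-style options through driver_opts. A minimal sketch (the NFS server address and export path here are made up for the example):

volumes:
  wp_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,rw
      device: ":/exports/wp_data"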
Second question:
In each section networks:/volumes:/services: you basically declare the resources you need for your deployment.
In other words, creating an analogy with a virtual machine, you can think about it like this: you need to create a virtual disk named wp_data and a virtual network named wpsite.
Then, you want your wordpress service to mount the wp_data disk under /var/www/html and to connect to the wpsite subnet.
You can use the following docker commands to display the resources that are created behind the scenes by your compose file:
docker ps - show containers
docker volume ls - show docker volumes
docker network ls - show docker networks
Hint: once you have created a network or a volume, it will not be destroyed automatically unless you manually delete it. You can clean up the resources manually and experiment by removing/adding resources in your compose file, as sketched below.
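A minimal clean-up sketch (note that compose prefixes resource names with the project name, so the real names may look like <project>_wp_data):

docker-compose down -v                # stop containers and remove the networks/volumes compose created
docker volume rm <project>_wp_data    # or remove an individual volume
docker network rm <project>_wpsite    # or remove an individual network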
Updated to answer question in comment:
If you run your docker on a Windows host, you probably enabled Hyper-V. This allows Windows to create a Linux VM, on top of which your docker engine is running.
With the docker engine installed, docker can then create "virtual resources" such as virtual networks, virtual disks (volumes), containers (people often compare containers to VMs), services, etc.
Let's look at the following section from your compose file:
volumes:
  wp_data:
    driver: local
This will create a virtual disk managed by docker, named wp_data. The volume is not created directly on your Windows host file system; instead, it is created inside the Linux VM that is running on top of Hyper-V on your Windows host. If you want to know precisely where, you can either execute docker inspect <containerID> and look at the mounts on that container, or run docker volume ls followed by docker volume inspect <volumeID> and look for the key "Mountpoint" to get the actual location.
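For example (a sketch; compose may prefix the volume name with the project name, and the exact path will differ on your machine):

docker volume inspect wp_data
# the output includes something like:
#   "Driver": "local",
#   "Mountpoint": "/var/lib/docker/volumes/wp_data/_data",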
Is there a better way to avoid folder permission issues when a relative folder is set in a docker compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image as an example:
This image will always throw the ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes]; error.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for a solution that is a bit easier to manage, similar to the Docker experience on Ubuntu and OSX, where I do not have to first create a directory as root for the folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues when accessing docker info or the docker socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue with the data folder on Manjaro.
I did check other posts on SO; some fixes are temporary, like disabling SELinux, while others require running docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs as (root), and the UIDs/GIDs are mapped literally into the container.
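A quick way to observe this (a sketch; alpine is just a convenient throwaway image, and ./data is assumed not to exist yet):

docker run --rm -v "$PWD/data:/data" alpine ls -ldn /data
# the daemon creates ./data on the host as root, so the numeric owner shown is 0 0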
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group owns the data directory, and make the directory group-writable. The group-ownership approach is sketched below.
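A minimal sketch of the group-ownership approach, assuming the compose file above with its bind mount at ./data:

mkdir -p ./data
sudo chgrp root ./data   # the Elasticsearch process appears to run with group root
sudo chmod 775 ./data    # make the directory group-writable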
I wrote some scripts that I use in a docker container. For the scripts to be useful I need to access some network mounts.
On my Mac it's easy. In my docker-compose.yml I have:
volumes:
  - type: bind
    source: /Volumes/Teams/myteam/folder/subfolder
    target: /subfolder
On my colleague's Windows laptop, /Volumes/Teams is mounted as T:, so my naive approach was to use
volumes:
  - type: bind
    source: /t/myteam/folder/subfolder
    target: /subfolder
From the git shell this path can be used. But when starting docker-compose up from that shell, he gets these error messages:
ERROR: for 255d3d7d2944_my-tools_helpscripts_1 Cannot create container for service helpscripts: b'Mount denied:\nThe source path "T:/myteam/folder/subfolder"\ndoesn\'t exist and is not known to Docker'
Encountered errors while bringing up the project.
In docker's settings for shared drives, the T: drive is not listed.
How can we solve this issue?
I think I found a solution that should work for me:
Start my container with the capabilities SYS_ADMIN and DAC_READ_SEARCH
Mount inside the container with
mount -t cifs -o user=USER,domain=DOMAIN //SERVER/Teams /mnt/T
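In compose terms, granting those capabilities could look like this (a sketch; the service name helpscripts is taken from the error message above, and the cifs mount itself still has to be run inside the container, e.g. from an entrypoint script):

services:
  helpscripts:
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH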
I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on docker for windows (Win 10 Pro). I am using the latest docker (1.13.1) and the same on the hyperv machine. I have tried switching to a local account, shared the drive in the docker settings menu, and I've pretty much tried everything I could find on google.
Running the test volume run command in the settings menu works for me. At this point I presume hyperv does not support mounting volumes from the host; however, I can't find anywhere that explicitly says that volume mounting will not work in hyperv.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring up the services with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount, I can see the files that I copied into the container from the Dockerfile.
Running verbose when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a hyperv VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in Docker for Windows -> Settings -> Shared Drives, apply, then check it again and apply.
You should use /c/Users, with a capital "C".
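Applied to the volumes entry in the compose file above, that change would look like this (sketch):

volumes:
  - /c/Users/deep/projects/chat/app:/usr/src/app:rw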
I have a dedicated data volume container which maps source code into a PHP container, and a data volume which maps another folder into the data volume container's file structure.
common-services.yml
code:
  image: debian:jessie
  volumes:
    - ../:/var/www
docker-compose-dev.yml
php:
  extends:
    file: common-services.yml
    service: php
  volumes_from:
    - code
  links:
    - mysql
  volumes:
    - "~/Projects/test-sampledata:/var/www/app/code/TEST/SampleData/"
On the host machine I see all the files and folders of /var/www, but not the sub-folder /var/www/app/code/TEST/SampleData/. When I enter the php container I see the file structure as expected.
The question is why /var/www/app/code/TEST/SampleData/ and its sub-folders don't map to the host.
OS: Ubuntu 16.04.1 LTS
Thanks in advance!
If you're running on MacOS or Windows, you need the host volume to be within the /Users folder. Each of these implementations runs a VM under the covers to provide the docker host on Linux, and these VMs mount /Users as a share to your parent OS by default. This can be modified in the settings of Docker if needed (or at least it can be on recent MacOS releases).
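For example, on MacOS the mapping from the question would need to resolve to a path under /Users (the user name here is hypothetical):

volumes:
  - "/Users/jane/Projects/test-sampledata:/var/www/app/code/TEST/SampleData/"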