I have a directory /home/foo/mydir owned by foo:foo (uid=1040) that I bind mount into the alpine Docker image like this:
docker run -it --rm -v /home/foo/mydir:/tmp/mydir --user 1040 alpine
but when I check the directory in the container, it is owned by root:root. Am I crazy? I thought Docker passed through file ownership when mounting into a container. Is there any way to retain the ownership (i.e. have mydir owned by foo:foo in the container) without chown'ing it in the container?
I have two Ubuntu Jammy machines and this issue happened on one machine but not the other. I finally found the cause and the solution.
Apparently the issue is caused by Docker Desktop. On the first machine I had only installed the Docker Engine. The second machine had Docker Desktop installed, which runs a virtual machine, and your containers run inside that virtual machine. In that case you can't just mount a host directory into the containers the same way, because it first needs to be mounted into the virtual machine.
So the solution was simply to remove Docker completely, and then only install the Docker engine (https://docs.docker.com/engine/install/).
Based on my support enquiry here:
https://forums.docker.com/t/bind-mount-permissions-unexpected-mounting-as-root-root/129328?u=swpppp
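As a rough check of which backend a machine is actually using (the "desktop-linux" context name is what Docker Desktop typically creates; output varies per machine):
docker context ls
docker info --format '{{.Name}} {{.OperatingSystem}}'
With the plain Docker Engine, the bind mount keeps the host's numeric ownership, which can be verified with something like:
docker run --rm -v /home/foo/mydir:/tmp/mydir --user 1040 alpine stat -c '%u:%g' /tmp/mydir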
I have a Docker Ubuntu Bionic container on an Ubuntu server host. From the container I can see that the host drive is mounted as /etc/hosts, which is not a directory. I tried unmounting and remounting it at a different location, but it throws a permission denied error, even when I try as root.
So how do you access the contents of your host system?
Firstly, /etc/hosts is a networking file present on all Linux systems; it is not related to drives or Docker.
Secondly, if you want to access part of the host filesystem inside a Docker container you need to use volumes. Using the -v flag in a docker run command you can specify a directory on the host to mount into the container, in the format:
-v /path/on/host:/path/inside/container
for example:
docker run -v /path/on/host:/path/inside/container <image_name>
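Concretely (the host path here is just an illustration), this mounts the host's /var/log read-only and lists it from inside the container:
docker run --rm -v /var/log:/host-logs:ro alpine ls /host-logs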
Separately, docker cp can copy individual files between the host and a container; for example, with container id 32162f4ebeb0 (run from the host's shell):
# copy a file from the container to the host
docker cp 32162f4ebeb0:/dir_inside_container/image1.jpg /dir_inside_host/image1.jpg
# copy a file from the host into the container
docker cp /dir_inside_host/image1.jpg 32162f4ebeb0:/dir_inside_container/image1.jpg
Docker directly manages the /etc/hosts files in containers. You can't bind-mount a file there.
Hand-maintaining mappings of host names to IP addresses in multiple places can be tricky to keep up to date. Consider running a DNS server such as BIND or dnsmasq, or using a hosted service like Amazon's Route 53, or a service-discovery system like Consul (which incidentally provides a DNS interface).
If you really need to add entries to a container's /etc/hosts file, the docker run --add-host option or Docker Compose extra_hosts: setting will do it.
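For example, with a placeholder host name and IP:
docker run --rm --add-host db.internal:10.0.0.5 alpine cat /etc/hosts
or, under a service in a Compose file:
extra_hosts:
  - "db.internal:10.0.0.5"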
As a general rule, a container can't access the host's filesystem, except to the extent that the docker run -v option maps specific directories into it. Also as a general rule, you can't directly change the mount points of an existing container; you have to stop it, delete it, and recreate it with different -v options.
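A sketch of that recreate cycle, with placeholder names:
docker stop mycontainer
docker rm mycontainer
docker run -d --name mycontainer -v /new/path/on/host:/path/inside/container <image_name>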
Run this command to link a local folder into a Docker container:
docker run -it -v "$(pwd)":/src centos
Here $(pwd) expands to the present working directory (any host directory can be used), and /src is the path inside the container where that directory is mounted.
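As a quick sanity check (hello.txt is just a throwaway file name), anything created under /src in the container appears in the host's working directory:
docker run --rm -v "$(pwd)":/src centos touch /src/hello.txt
ls hello.txt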
I'm testing Docker running on my Windows 7 PC. I can mount directories under C:\Users to containers without issue with e.g.
docker run --rm -it -v //c/Users/someuser/:/data/ alpine ash
but when I try to attach a networked location like //server1/data with e.g.
docker run --rm -it -v //server1/data/:/data/ alpine ash
the /data directory in the container appears empty. How do I pass a directory not under C:\Users\ to my Docker containers?
Because my PC was running Windows 7, I'd installed Docker Toolbox, which uses VirtualBox instead of Hyper-V. My understanding is that this means Docker is running inside a VM on my system, so that VM needs to have access to any data I intend to pass to Docker.
To attach network directories (or anything local outside C:\Users) I needed to add them as shared folders in VirtualBox.
VM (default in my case) => Settings => Shared Folders => +
After navigating the file explorer and adding //server1/data to the list of folders shared with VM 'default', I was able to pass it to the container as a volume using the second command outlined in my original question.
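If a new share does not show up automatically inside the Toolbox VM, it can also be mounted by hand from inside the VM (the share name "data" here is whatever name was given in the VirtualBox dialog):
docker-machine ssh default
sudo mkdir -p /server1-data
sudo mount -t vboxsf data /server1-data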
I have a VirtualBox Guest OS running Ubuntu Server 17.04 which has docker-ce installed.
I have some shared folders that are mounted inside the Guest OS but when I pass them to the docker container with a --volume command I can't see any content inside them. Am I doing something wrong here?
sudo docker create --name=plex -v /home/kunal/media/:/media plexinc/pms-docker
I've seen an issue before (it may have since been fixed) where docker detects all the mounted filesystems on the host when the daemon starts. If you changed mounted filesystems after starting the daemon, it wouldn't see those filesystems for volume mounts. The workaround is to just bounce the docker daemon (e.g. sudo systemctl restart docker) after making any filesystem changes. You may also want to try newer versions of docker to see if the issue has since been fixed.
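A minimal sketch of that workaround, reusing the container from the question:
sudo systemctl restart docker
sudo docker rm -f plex
sudo docker create --name=plex -v /home/kunal/media/:/media plexinc/pms-docker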
I am using docker toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command to run a container with a volume mount: docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash. I wonder, how is Docker able to mount a volume from the docker client (in this case the Mac) into a docker container running on the docker host (in this case the VM running on the Mac)?
The toolbox VM includes a shared directory from the client. /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note though that if you add, for example, /tmp as a volume, it will be the Toolbox VM's /tmp, not the client's.
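To illustrate (the paths are only examples): a directory under /Users on the Mac shows the Mac's files because /Users is shared into the VM, while /tmp resolves to the VM's own /tmp:
docker run --rm -v /Users/me/project:/project alpine ls /project
docker run --rm -v /tmp:/vm-tmp alpine ls /vm-tmp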
The main problem is that VirtualBox shares only your home folder with the docker machine, so at the moment you can only share content inside that directory. It's inconvenient, but the only way I found to resolve this is with the bootlocal.sh file: you can write this file inside your docker-machine VM to mount additional directories after boot (see the sketch after the link below).
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
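A rough sketch of such a /var/lib/boot2docker/bootlocal.sh (the share name "extra" and the mount point are examples; the share must already be defined in VirtualBox, and uid 1000 / gid 50 are boot2docker's docker user and staff group):
#!/bin/sh
mkdir -p /mnt/extra
mount -t vboxsf -o uid=1000,gid=50 extra /mnt/extra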
Yesterday during DockerCon they announced a public beta of "Docker for Mac". I think you can replace docker-machine with this tool; it provides the best experience with Docker on macOS, and it resolves this problem:
https://www.docker.com/products/docker
It is possible to install Docker inside a Docker container.
How do you control the Docker host service from one of its containers (i.e. manage other containers)?
If I execute docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -ti debian and then run docker inside it, this error appears:
docker: error while loading shared libraries: libapparmor.so.1: cannot open shared object file: No such file
The error you're seeing seems very clear: the docker binary requires a shared library that is not present inside the container.
Is your container running the same distribution and version as your host? If it is, you simply need to determine which packages provide the necessary dependencies and install them inside the container.
If not, you will probably have better luck simply installing docker inside the container, rather than trying to bind-mount it from the host. There is probably a source of recent Docker versions available for Debian.
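For example, assuming a Debian-based container image, one option is to install the client from the distribution's own repositories instead of bind-mounting the host binary (Docker's upstream apt repository provides newer builds if needed):
apt-get update && apt-get install -y docker.io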
If your host is a Linux-based machine, you don't need to install Docker inside the container; you can just mount the docker binary into the container, and whatever you do with it inside the container is just like doing it on the host. I have tested this on an Ubuntu machine (image: https://github.com/mohamnag/ubuntu-git.git) by mounting /usr/bin/docker from the host into /bin/docker inside the container. Then inside that container you can literally do (build, stop, list, ...) whatever you could have done with docker on the host.
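A commonly used variant of this (a sketch, not the answer's exact setup) is to mount only the daemon socket and use an image that already ships a matching docker CLI, which sidesteps shared-library mismatches like the libapparmor error above:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
# inside that shell, docker commands talk to the host daemon:
docker ps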