Does anyone know how to mount an NFS share inside a Docker container with a CentOS base image? I've tried this command:
mount server:/dir /mount/point
and got the following error:
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
When I try it with the -o nolock option, the error is:
mount.nfs: Operation not permitted
Starting with Docker 17.06, you can mount NFS shares into the container directly when you run it, without needing extra capabilities:
export NFS_VOL_NAME=mynfs
export NFS_LOCAL_MNT=/mnt/mynfs
export NFS_SERVER=my.nfs.server.com
export NFS_SHARE=/my/server/path
export NFS_OPTS=vers=4,soft
docker run --mount \
"src=$NFS_VOL_NAME,dst=$NFS_LOCAL_MNT,volume-opt=device=:$NFS_SHARE,\"volume-opt=o=addr=$NFS_SERVER,$NFS_OPTS\",type=volume,volume-driver=local,volume-opt=type=nfs" \
busybox ls $NFS_LOCAL_MNT
Alternatively, you can create the volume before the container:
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=$NFS_SERVER,$NFS_OPTS \
--opt device=:$NFS_SHARE \
$NFS_VOL_NAME
docker run --rm -v $NFS_VOL_NAME:$NFS_LOCAL_MNT busybox ls $NFS_LOCAL_MNT
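If you use Compose, the same options translate to a top-level volumes section. A minimal sketch, reusing the example values from above (server, share, and options are placeholders for your setup):

# docker-compose.yml (fragment)
volumes:
  mynfs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=my.nfs.server.com,vers=4,soft"
      device: ":/my/server/path"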
Got the hint from https://github.com/moby/moby/issues/28809.
Official docs from Docker: https://docs.docker.com/storage/volumes/#create-a-service-which-creates-an-nfs-volume
To use mount, you'll need the CAP_SYS_ADMIN capability, which Docker drops when creating the container.
There are several solutions for this:
Start the container with the --cap-add sys_admin flag. This causes Docker to retain the CAP_SYS_ADMIN capability, which should allow you to mount an NFS share from within the container; see the sketch after this list. This might be a security issue; do not do this in untrusted containers. [A previous version of this answer suggested using --privileged=true to retain all capabilities; thanks to @earcam for the suggestion to use --cap-add instead.]
Mount the NFS share on the host and pass it into the container as a host volume:
you@host > mount server:/dir /path/to/mount/point
you@host > docker run -v /path/to/mount/point:/path/to/mount/point centos
Use a Docker volume plugin (like the Netshare plugin) to directly mount the NFS share as a container volume:
you@host > docker run \
--volume-driver=nfs \
-v server/dir:/path/to/mount/point \
centos
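For example, a minimal sketch of the first option (server name, export path, and mount directory are placeholders; note that a later answer in this thread reports needing full --privileged in some environments):

you@host > docker run -it --cap-add sys_admin centos bash
# inside the container: the base image lacks the NFS client tools
[root@container /]# yum install -y nfs-utils
[root@container /]# mkdir -p /mount/point
[root@container /]# mount -t nfs -o nolock server:/dir /mount/point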
Regarding the second option listed in the accepted answer: I'm not sure whether you have actually tried using the docker run -v command to pass an NFS share on the host to a Docker container as a volume.
I recently tried to do so. Below is the info for the NFS share on the host:
nfs-server:/path_to_mount on /path_dest type nfs
and then:
docker run -it -v /path_dest:/path_in_docker docker_name bash
But the Docker daemon always reports the error below:
docker: Error response from daemon: stat /path_dest: permission denied.
After much searching, I found that the error actually comes from the Docker daemon, which runs as root. When Docker runs a container with a volume to mount, it asks the daemon to mount it. The problem is that the NFS server handles root differently: by default it maps root to nobody (root squashing), which produces the permission error.
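If you control the NFS server, one workaround is to disable root squashing for that export. A sketch of /etc/exports on the server (the path and client network are examples; be aware that no_root_squash has security implications):

# /etc/exports on the NFS server
/path_to_mount 10.0.0.0/24(rw,sync,no_root_squash)

Reload the export table afterwards:

$ sudo exportfs -ra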
I mounted NFS in a Docker container as follows, thanks to @helmbert.
Run a docker container with the --privileged=true flag.
$ docker run -it --privileged=true centos:7 bash
Install the NFS tools package and mount the NFS share (CentOS):
[root@f7915ae635aa /]# yum install -y nfs-utils
[root@f7915ae635aa /]# mount -t nfs example.tw:/target/ /srv -o nolock
Show which hosts have mounted from the NFS server:
[root@f7915ae635aa /]# showmount example.tw
Hosts on example.tw:
10.10.10.1
10.10.10.2
Adding the --cap-add sys_admin flag to the client container wasn't enough for me. I was getting this error:
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 1.2.3.4:/exports
After hours of research, I found that full privileges (--privileged) appear to be needed to mount correctly inside a Docker container.
Also, don't forget to install the necessary NFS client packages inside your Docker container. On Debian-based containers:
apt-get install -y nfs-common
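Putting it together, a minimal sketch for a Debian-based container (server name and export path are placeholders):

$ docker run -it --privileged debian bash
root@container:/# apt-get update && apt-get install -y nfs-common
root@container:/# mkdir -p /mnt/share
root@container:/# mount -t nfs -o nolock server:/exports /mnt/share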
Related
I am trying to create a wrapper container to build and run a set of containers using a docker-compose file I cannot modify. The compose file mounts several volumes, but when starting docker-compose from inside the wrapper container, the volumes are still mounted from the host, since the Docker socket is volume-mounted to be the host's docker.sock.
I would like to not have to use full docker-in-docker due to all the problems associated with it outlined in jpetazzo's article.
I would also like to avoid volume-from since I cannot edit the docker-compose file mentioned previously.
Is there a way to get this snippet to correctly use the parent docker's file instead of going to the host filesystem and mounting it from there?
FROM docker:latest
RUN mkdir -p /tmp/parent/ && echo "This is from the parent docker" > /tmp/parent/parent.txt
CMD docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 bash -c "cat /root/parent.txt"
when run with a command akin to this:
docker build -t parent . && docker run --rm -v /var/run/docker.sock:/var/run/docker.sock parent
Make your paths the same on the host and inside the Docker image, e.g.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/user:/home/user -w /home/user/project parent_image ...
By mounting the volume as /home/user in the same location inside the image, a command like docker-compose up with relative bind mounts will use the container path names when talking to the docker socket, which will match the paths on the host.
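A minimal sketch of the idea (paths and image names are placeholders). Because the inner docker run is interpreted by the host's daemon, the bind-mount source path must exist on the host at the very path the wrapper sees:

# on the host: keep the shared files at a path that exists on the host
$ mkdir -p /home/user/project && echo "This is from the parent docker" > /home/user/project/parent.txt
$ docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /home/user/project:/home/user/project \
    -w /home/user/project \
    parent_image \
    docker run --rm -v /home/user/project/parent.txt:/root/parent.txt ubuntu:18.04 cat /root/parent.txt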
My host is CentOS with Docker installed via yum. Using the docker command inside a container gives an error.
My docker run command:
docker run -it -d -u root --name jenkins3 -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker docker.io/jenkins/jenkins
But I get an error when I exec docker info in the Jenkins container:
/usr/bin/docker: 2: .: Can't open /etc/sysconfig/docker
Exposing the host's docker socket to your jenkins container will work with
-v /var/run/docker.sock:/var/run/docker.sock
but you will need the docker executable installed in your Jenkins image via its Dockerfile.
It is likely the example you are looking at is already using the official docker image. A quick Google search brings up https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ whose example uses the docker image (which already has the executable installed):
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
Also note from that same post your exact issue with mounting the binary:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
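A minimal sketch of a Jenkins image with the Docker CLI installed (the docker.io package name is an assumption for Debian-based Jenkins images; adjust for your distribution):

FROM jenkins/jenkins:lts
USER root
# install the Docker CLI from the distribution repositories
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins

Build it and run it with the host's socket mounted, e.g. docker run -v /var/run/docker.sock:/var/run/docker.sock -u root myjenkins.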
I'm on Docker 17.06.0-ce and I'm attempting to mount a CIFS share in a container, with only partial luck. If I use --privileged it works, but that's not desirable for me. I've tried using --cap-add as suggested in this answer (even trying --cap-add ALL), with no success.
The same mount command works fine on the host system as well.
Here's a simple Dockerfile I've been playing with:
FROM alpine:latest
RUN apk add --no-cache cifs-utils
I ran it with many different permutations, all with the same result shown below.
Works:
docker run --rm -it --privileged cifs-test /bin/sh
Doesn't Work:
docker run --rm -it --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH cifs-test /bin/sh
Doesn't Work:
docker run --rm -it --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH --cap-add NET_ADMIN cifs-test /bin/sh
Doesn't Work:
docker run --rm -it --cap-add ALL cifs-test /bin/sh
And the command:
mkdir /test && mount.cifs //myserver/testpath /test -o user=auser,password=somepass,domain=mydomain
And the result from each run command above except the first:
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Has something changed in Docker that requires --privileged all the time for these types of mounts now? Or is there something else I'm missing?
I started using docker-volume-netshare, so far with good success. There are some minor problems, like volumes created with docker volume create not being persistent, but nevertheless this volume driver looks quite usable. One advantage is that no special caps or privileged mode are necessary. Here are some hints on how to use it.
Install (Ubuntu/Debian)
$ curl -L -o /tmp/docker-volume-netshare_0.34_amd64.deb https://github.com/ContainX/docker-volume-netshare/releases/download/v0.34/docker-volume-netshare_0.34_amd64.deb
$ sudo dpkg -i /tmp/docker-volume-netshare_0.34_amd64.deb
$ rm /tmp/docker-volume-netshare_0.34_amd64.deb
Configure
$ sudo vi /etc/default/docker-volume-netshare
and enter this as a single setting:
DKV_NETSHARE_OPTS="cifs --netrc=/root/"
then
$ sudo vi /root/.netrc
enter the following settings per host:
machine <host>
username <user>
password <password>
domain <domain>
Note that <host> must be a host name or an IP address followed by a colon (e.g. 10.20.30.4:)
Enable the volume driver as a systemd service
Note: if your OS does not support systemd, another method to install it as a service is necessary.
$ sudo systemctl enable docker-volume-netshare
Use a volume in docker run and docker service create
$ sudo docker run -it --rm --mount type=volume,volume-driver=cifs,source=<myvol>,destination=<absolute-path-in-container>,volume-opt=share=<ip>:/<share> ubuntu:zesty bash
$ sudo docker service create --name <name> --mount type=volume,volume-driver=cifs,source=<myvol>,destination=<absolute-path-in-container>,volume-opt=share=<host>/<share> <image>
Note that it is not necessary to use the identical volume in multiple containers: each volume simply maps to a CIFS share, which is in turn shared among all containers mounting it. As mentioned above, don't use docker volume create with this volume driver, as volumes are lost as soon as docker-volume-netshare is stopped or restarted (and hence on reboot).
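For example, a concrete docker run invocation with the placeholders filled in (the volume name, server IP, and share name are hypothetical):

$ sudo docker run -it --rm \
    --mount type=volume,volume-driver=cifs,source=media,destination=/data,volume-opt=share=10.20.30.4/media \
    ubuntu:zesty bash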
Get help
$ docker-volume-netshare --help
$ docker-volume-netshare cifs --help
Logs
Hint: for debugging, use DKV_NETSHARE_OPTS="cifs --netrc=/root/ --verbose" in /etc/default/docker-volume-netshare, or stop the service and run docker-volume-netshare cifs --netrc=/root/ --verbose in a shell.
$ dmesg | tail
$ tail -50 /var/log/docker-volume-netshare.log
Resources: the project's GitHub page, https://github.com/ContainX/docker-volume-netshare
I have a question regarding the whole data volume process in Docker. Basically here are two Dockerfiles and their respective run commands:
Dockerfile 1 -
# Transmission over Debian
#
# Version 2.92
FROM debian:testing
RUN apt-get update \
&& apt-get -y install nano \
&& apt-get -y install transmission-daemon transmission-common transmission-cli \
&& mkdir -p /transmission/config /transmission/watch /transmission/download
ENTRYPOINT ["transmission-daemon", "--foreground"]
CMD ["--config-dir", "/transmission/config", "--watch-dir", "/transmission/watch", "--download-dir", "/transmission/download", "--allowed", "*", "--no-blocklist", "--no-auth", "--no-dht", "--no-lpd", "--encryption-preferred"]
Command 1 -
docker run --name transmission -d -p 9091:9091 -v C:\path\to\config:/transmission/config -v C:\path\to\watch:/transmission/watch -v C:\path\to\download:/transmission/download transmission
Dockerfile 2 -
# Nginx over Debian
#
# Version 1.10.3
FROM debian:testing
RUN apt-get update \
&& apt-get -y install nano \
&& apt-get -y install nginx
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Command 2 -
docker run --name nginx -d -p 80:80 -v C:\path\to\config:/etc/nginx -v C:\path\to\html:/var/www/html nginx
So, the weird thing is that the first Dockerfile and command work as intended, with the Docker daemon mounting a directory from the container to the host; I am able to edit the configuration files as I please, and they are persisted to the container on a restart.
However, the second Dockerfile and command don't seem to work. I know the Docker volume documentation says that volume mounts are only intended to go one way, from host to container, but then how come the Transmission container works as intended while the Nginx container doesn't?
P.S. I'm running Microsoft Windows 10 Pro Build 14393 as my host, with Docker Version 17.03.0-ce-win1 (10300), Channel: beta.
Edit: Just to clarify, I'm trying to get the files from inside the Nginx container onto the host. The first container (Transmission) works in that regard by using a data volume; however, the second container (Nginx) doesn't copy the files in the mounted directory from inside the container to the host. Everything else works, though; it does start successfully.
The host volume will not copy data the way a named volume will. However, you can create a named volume that performs a bind mount, which will then have the data-initialization properties of any other named volume. The only prerequisite of a bind mount over a host volume is that the directory must exist in advance; Docker will not create it for you like it does with a host volume. Here are three different examples of how to create a bind-mount volume:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
bind-test:
driver: local
driver_opts:
type: none
o: bind
device: /home/user/test
...
So in your example with a docker run command, you can use the mount syntax:
docker run --name nginx -d -p 80:80 \
--mount type=volume,dst=/etc/nginx,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/c/path/to/config \
--mount type=volume,dst=/var/www/html,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/c/path/to/html \
nginx
The only part that may need adjusting is the windows path names inside the Linux VM that docker runs in HyperV.
Host volumes don't copy data from the container > host. Host volumes mount over the top of what's in the container/image, so they effectively replace what's in the container with what's on the host.
A standard or "named" volume will copy the existing data from the container image into a new volume. These volumes are created by launching a container from an image with a VOLUME instruction in its Dockerfile, or with the docker command:
docker run -v myvolume:/var/whatever myimage
By default this is data stored in a "local" volume, "local" being on the Docker host. In your case that is on the VM running Docker rather than your Windows host so might not be easily accessible to you.
Could you be mistaking Transmission auto-generating files in a blank directory for a copy?
If you really need to keep the host > container mappings, you might have to copy the data manually:
docker create --name nginxcopy nginx
docker cp nginxcopy:/etc/nginx C:\path\to\config
docker cp nginxcopy:/var/www/html C:\path\to\html
docker rm nginxcopy
And then you can map the populated host directories into the container, and they will contain the default data the image came with.
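For example, rerunning the command from the question once the host directories have been populated:

docker run --name nginx -d -p 80:80 -v C:\path\to\config:/etc/nginx -v C:\path\to\html:/var/www/html nginx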
I am a beginner with Docker on a Windows machine, and I have a problem mounting files using volumes. The documentation says the following about mounting files on OS X and Windows:
Official docker docs
Note: If you are using Docker Machine on Mac or Windows, your Docker daemon only has limited access to your OS X/Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory - and so you can mount files or directories using docker run -v /Users/<path>:/<container path> ... (OS X) or docker run -v /c/Users/<path>:/<container path ... (Windows). All other paths come from your virtual machine’s filesystem.
I have a small nginx Dockerfile:
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
Creating a simple container
docker run -d --name simple -p 8082:80 ng1
8875448c01a4787f1ffe4c4c5c492efb039e452eff957391ac52a08915e18d66
Creating a container with a volume
My Windows host directory is C:\Users\src. Creating the Docker container with the -v option:
docker run -d --name simple2 -v /c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
invalid value "C:\\Users\\src;C:\\Program Files\\Git\\usr\\share\\nginx\\html"
for flag -v: bad mount mode specified
: \Program Files\Git\usr\share\nginx\html
See 'C:\Program Files\Docker Toolbox\docker.exe run --help'.
Inspecting the ng1 image
docker inspect ng1
What is wrong with the way I am creating a Docker container with a volume?
Thanks.
Try to run it with an additional / on the volume path, like:
docker run -d --name simple2 -v /c/Users/src://usr/share/nginx/html -p 8082:80 ng1
Or even with the additional / on the host path as well:
docker run -d --name simple2 -v //c/Users/src://usr/share/nginx/html -p 8082:80 ng1
This is due to the following issue:
This is something that the MSYS environment does to map POSIX paths to Windows paths before passing them to executables.
As the OP said:
Official docker docs :
Note: If you are using Docker Machine on Mac or Windows, your Docker daemon only has limited access to your OS X/Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory - and so you can mount files or directories using
docker run -v /Users/<path>:/<container path> ... (OS X)
or
docker run -v /c/Users/<path>:/<container path> ... (Windows)
But if you want access to other directories, you need to add a new shared folder to the virtual box settings (Settings > Shared Folders > Add share).
Add a new share there (only possible when the VM is stopped beforehand, via docker-machine stop):
path C:\Projects
name c/Projects
autoMount yes
Or edit the vbox configuration file directly:
C:\Users\<username>\.docker\machine\machines\default\default\default.vbox
and add the following line inside <SharedFolders>:
<SharedFolder name="c/Projects" hostPath="\\?\c:\Projects" writable="true" autoMount="true"/>
Restart the machine:
docker-machine stop
docker-machine start
Now it's also possible to mount directories under C:\Projects:
docker run -v //c/Projects/myApp://myApp <myImage>
For anyone using Docker ~> 1.12 who faces this issue: I spent 30 minutes trying to figure it out until I realized you have to specifically share a drive first via the Docker settings; see:
https://docs.docker.com/docker-for-windows/#/shared-drives
If you're simply looking to access a local drive, the MINGW32 Docker Toolbox terminal puts the root of each drive at /<drive-letter>, so drive C:\ will be at /c/.
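For example (hypothetical path), C:\Users\me\data becomes /c/Users/me/data in the Toolbox terminal:

$ docker run --rm -v /c/Users/me/data:/data alpine ls /data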