I have created a new Docker image. It creates a new folder, /hello.
When I run this image as a container, I can access the container with docker exec -it .. bash, and when I run ls I see the /hello folder.
This /hello folder is also saved in a Docker volume container.
So I have linked the container to an existing Docker volume, which makes it persistent.
Now my question: is it possible to do the following in a Dockerfile?
A new image wants to use the same volume as the previous container and copy the /hello folder into its own container.
Is this possible in a Dockerfile?
No, this is not possible from within a Dockerfile.
You can use a running container's volumes by passing the --volumes-from argument when you start another container with docker run.
Example:
Dockerfile
FROM ubuntu:14.04
VOLUME /hello
Then:
$ docker build -t test-image-with-volume .
$ docker run -ti --name test-image-with-volume test-image-with-volume bash
/# cd /hello
/# ls -la
total 8
drwxr-xr-x 2 root root 4096 Jan 18 14:59 ./
drwxr-xr-x 22 root root 4096 Jan 18 14:59 ../
Then in another terminal (while the container above is still running):
Dockerfile
FROM ubuntu:14.04
Then:
$ docker build -t test-image-without-volume .
$ docker run -ti test-image-without-volume bash
/# cd /hello
bash: cd: /hello: No such file or directory
/# exit
$ docker run -ti --volumes-from test-image-with-volume test-image-without-volume bash
/# cd /hello
/# ls -la
total 8
drwxr-xr-x 2 root root 4096 Jan 18 14:59 ./
drwxr-xr-x 22 root root 4096 Jan 18 14:59 ../
/# touch test
Then in your original terminal:
/# ls -la /hello
total 8
drwxr-xr-x 2 root root 4096 Jan 18 15:04 .
drwxr-xr-x 22 root root 4096 Jan 18 15:03 ..
-rw-r--r-- 1 root root 0 Jan 18 15:04 test
And in your new terminal:
/# ls -la /hello
total 8
drwxr-xr-x 2 root root 4096 Jan 18 15:04 .
drwxr-xr-x 22 root root 4096 Jan 18 15:03 ..
-rw-r--r-- 1 root root 0 Jan 18 15:04 test
You can link volumes from one container to another as long as the container that owns them still exists; it does not even have to be running (see the sketch below). What you cannot do is reach those volumes at build time from a Dockerfile.
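A minimal sketch of that point, reusing the two images built above (the container name data-holder is an assumption): the volume owner is created but never started, and its volume is still reachable.
$ docker create --name data-holder test-image-with-volume
$ docker run --rm -ti --volumes-from data-holder test-image-without-volume bash
/# ls -la /hello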
Related
Working on a Mac.
I am trying to run docker in a docker container by mounting the docker client and socket like so:
services:
  jenkins:
    image: ubuntu:latest
    container_name: ubuntu
    privileged: true
    tty: true
    volumes:
      - ./ubuntu/home:/home
      - /usr/local/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
When I now exec into the container and try to run a docker command, I get this:
➜ test docker exec -it ubuntu /bin/bash
root@c586d6f5fca4:/# which docker
root@c586d6f5fca4:/# docker -h
bash: docker: command not found
root@c586d6f5fca4:/#
Why am I not able to run docker in the container even though the host's (my Mac's) docker client and socket are mounted?
root@c586d6f5fca4:/# ls -la /usr/bin | grep docker
drwxr-xr-x 2 root root 40 Oct 12 08:49 docker
root@c586d6f5fca4:/var/run# ls -la
total 20
drwxr-xr-x 1 root root 4096 Oct 22 05:54 .
drwxr-xr-x 1 root root 4096 Oct 22 05:54 ..
srwxr-xr-x 1 root root 0 Sep 18 12:38 docker.sock
drwxrwxrwt 2 root root 4096 Oct 3 21:41 lock
drwxr-xr-x 2 root root 4096 Oct 3 21:41 mount
drwxr-xr-x 2 root root 4096 Oct 3 21:44 systemd
The path to the docker client on the Mac is correct:
➜ ~ which docker
/usr/local/bin/docker
Thanks!
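Note that the ls output above shows /usr/bin/docker as a directory, not a binary. A hedged sketch of one common workaround, assuming the cause is that the host's /usr/local/bin/docker is a macOS executable that the Linux VM (and therefore the container) has no usable file for: install a Linux docker CLI inside the image and mount only the socket. The docker.io package choice and the build setup are assumptions.
Dockerfile
FROM ubuntu:latest
# install a Linux docker CLI instead of bind-mounting the macOS binary
RUN apt-get update && apt-get install -y docker.io
docker-compose.yml
services:
  jenkins:
    build: .
    container_name: ubuntu
    privileged: true
    tty: true
    volumes:
      - ./ubuntu/home:/home
      - /var/run/docker.sock:/var/run/docker.sock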
I am using docker and mounting something into the /home/(user) directory on the docker image using
-v ~/.aws:/home/$(shell id -nu)/.aws. The issue is that, since /home/(user) did not exist before the container was created, /home/(user) ends up being owned by root. This causes issues for pip, sam-cli, etc., because they all attempt to store some information in the user's home directory.
How can I avoid this situation without completely abandoning the use of a specific user? I've tried using chown to change the ownership, but I always get 'Operation not permitted.'
It's pretty easy to reproduce:
developer@desktopimage:~/AWS$ docker run --rm -it -v ~/.aws:/home/developer/.aws -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -u 1001:1001 alpine:latest ls -al /home/developer
total 12
drwxr-xr-x 3 root root 4096 Feb 25 14:09 .
drwxr-xr-x 1 root root 4096 Feb 25 14:09 ..
drwxr-xr-x 5 develope develope 4096 Feb 3 20:49 .aws
developer@desktopimage:~/AWS$ docker run --rm -it -v ~/.aws:/home/developer/.aws -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -u 1001:1001 alpine:latest ls -al /home
total 12
drwxr-xr-x 1 root root 4096 Feb 25 14:09 .
drwxr-xr-x 1 root root 4096 Feb 25 14:09 ..
drwxr-xr-x 3 root root 4096 Feb 25 14:09 developer
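One hedged way out, assuming you control the image (the username developer and uid 1001 are taken from the reproduction above): create the user and its home directory at build time, so the directory already exists with the right owner before anything is mounted beneath it.
Dockerfile
FROM alpine:latest
# create uid 1001 with a home directory it owns; a later mount of ~/.aws
# then lands inside a directory that already belongs to developer
RUN adduser -D -u 1001 developer
USER developer
WORKDIR /home/developer
Then (the image tag dev-home is hypothetical):
$ docker build -t dev-home .
$ docker run --rm -it -v ~/.aws:/home/developer/.aws dev-home ls -al /home/developer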
I am trying to learn Docker volumes, and I am using centos:latest as my base image. When I try to run a Docker command, I am unable to access the attached volume inside the container:
Command:
sudo docker run -it --name test -v /home/user/Myhostdir:/mydata centos:latest /bin/bash
Error:
[user@0bd1bb78b1a5 mydata]$ ls
ls: cannot open directory .: Permission denied
When I ls to check the folder's ownership, it shows uid 1001. What's happening, and how can I solve this?
drwxrwxr-x. 2 1001 1001 38 Jun 2 23:12 mydata
My local machine:
[user@xxx07012 Myhostdir]$ pwd
/home/user/Myhostdir
[user@swathi07012 Myhostdir]$ ls -al
total 12
drwxrwxr-x. 2 user user 38 Jun 2 23:12 .
drwx------. 18 user user 4096 Jun 2 23:11 ..
-rw-rw-r--. 1 user user 15 Jun 2 23:12 text.2.txt
-rw-rw-r--. 1 user user 25 Jun 2 23:12 text.txt
This is partially a Docker issue, but mostly an SELinux issue. I am assuming you are running an old 1.x version of Docker.
You have a couple of options. First, you could take a look at this blog post to understand the issue a bit more and possibly use the fix mentioned there.
Or you could just upgrade to a newer version of Docker. I tested mounting a simple volume on Docker version 18.03.1-ce:
docker run -it --name test -v /home/chris/test:/mydata centos:latest /bin/bash
[root@bfec7af20b99 /]# cd mydata/
[root@bfec7af20b99 mydata]# ls
test.txt.txt
[root@bfec7af20b99 mydata]# ls -l
total 0
-rwxr-xr-x 1 root root 0 Jun 3 00:40 test.txt.txt
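If you have to stay on the SELinux-enforcing setup, the standard fix is the :z/:Z mount suffix, which tells Docker to relabel the host directory so the container is allowed to access it. Sketched against the original command (:Z applies a private, container-specific label):
sudo docker run -it --name test -v /home/user/Myhostdir:/mydata:Z centos:latest /bin/bash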
I'm trying to understand when containers copy preexisting files into a volume mounted over the same directory. For example:
FROM ubuntu
RUN mkdir /testdir
RUN echo "Hello world" > /testdir/file.txt
running:
# docker volume create vol
# docker run -dit -v vol:/testdir myimage
# docker exec -it 900444b7ab86 ls -la /testdir
drwxr-xr-x 2 root root 4096 May 11 18:43 .
drwxr-xr-x 1 root root 4096 May 11 18:43 ..
-rw-r--r-- 1 root root 6 May 11 17:53 file.txt
The image also has preexisting files in, for example, /etc/cron.daily:
# docker exec -it 900444b7ab86 ls -la /etc/cron.daily
total 20
drwxr-xr-x 2 root root 4096 Apr 26 21:17 .
drwxr-xr-x 1 root root 4096 May 11 18:43 ..
-rwxr-xr-x 1 root root 1478 Apr 20 10:08 apt-compat
-rwxr-xr-x 1 root root 1176 Nov 2 2017 dpkg
-rwxr-xr-x 1 root root 249 Jan 25 15:09 passwd
But when I run it, for example, with
docker run -it 900444b7ab81 -v vol:/etc/cron.daily
the directory is now empty.
Why don't the files get copied this time?
# docker run -dit -v vol:/testdir
That is not a valid docker command: there is no image reference included, so there is nothing for Docker to run.
docker run -it 900444b7ab81 -v vol:/etc/cron.daily
This will attempt to run the image 900444b7ab81 with the command -v vol:/etc/cron.daily. Earlier you had a container with a very similar id, so it's not clear whether you are trying to run with a container id instead of an image id. And the command -v likely doesn't exist inside the container.
The order of these arguments is important: the first thing after run that isn't an option, or an argument to the previous option, is treated as the image reference. Anything after that reference is the command to run in the container. So if you want to mount the volume, you need to move that option before the image id, as in the corrected command below.
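For example, keeping the image id from the question:
docker run -dit -v vol:/etc/cron.daily 900444b7ab81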
I'm trying to understand when containers copy preexisting files into a mounted volume on the same directory.
With named volumes, Docker initializes an empty named volume with the contents of the image at that path when a container is first created with it. Once the volume has files in it, it is mapped into the container as-is on every later use, so changes made to the image at the same path will not be seen.
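A short sequence demonstrating that behavior (the volume name cronvol is arbitrary):
$ docker volume create cronvol
$ docker run --rm -v cronvol:/etc/cron.daily ubuntu ls /etc/cron.daily   # first use: image files are copied into the empty volume
$ docker run --rm -v cronvol:/etc/cron.daily ubuntu ls /etc/cron.daily   # later uses: the volume is mapped as-is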
When mounting a volume with the following command:
docker run -t -i --volumes-from FOO BAR
the volumes from FOO are mounted with root as the owner. As far as I know, I can't read or write to them without running as root. Must I run as root, or is there some other way?
I have tried creating the folder with another owner before mounting, but the mounting seems to overwrite that.
Edit: a chown would work if it could be done automatically after the mounting somehow.
I'm not sure why you aren't able to change your folder permissions in your source image. This works without issue in my lab:
$ cat df.vf-uid
FROM busybox
RUN mkdir -p /data && echo "hello world" > /data/hello && chown -R 1000 /data
$ docker build -f df.vf-uid -t test-vf-uid .
...
Successfully built 41390b132940
$ docker create --name test-vf-uid -v /data test-vf-uid
e12df8f84a3b1f113ad5440b62552b40c4fd86f99eec44698af9163a7b960727
$ docker run --volumes-from test-vf-uid -u 1000 -it --rm busybox /bin/sh
/ $ ls -al /data
total 12
drwxr-xr-x 2 1000 root 4096 Aug 22 11:44 .
drwxr-xr-x 19 root root 4096 Aug 22 11:45 ..
-rw-r--r-- 1 1000 root 12 Aug 22 11:43 hello
/ $ echo "success" >/data/world
/ $ ls -al /data
total 16
drwxr-xr-x 2 1000 root 4096 Aug 22 11:46 .
drwxr-xr-x 19 root root 4096 Aug 22 11:45 ..
-rw-r--r-- 1 1000 root 12 Aug 22 11:43 hello
-rw-r--r-- 1 1000 root 8 Aug 22 11:46 world
/ $ cat /data/hello /data/world
hello world
success
/ $ exit
So, what I ended up doing was mounting the volume into another container and changing the owner (using the uid I wanted in the final setup) from that container. Apparently uids are uids, regardless of the container. This means I can run without being root in the final container. Perhaps there are easier ways, but this works at least. Something like this (untested code clip from my final solution):
docker run -v /opt/app --name Foo ubuntu /bin/bash
docker run --rm --volumes-from Foo -v $(pwd):/FOO ubuntu bash -c "chown -R 9999 /opt/app"
docker run -t -i --volumes-from Foo BAR
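Since the edit above asks for a chown that happens automatically after mounting, another hedged option is an entrypoint that fixes ownership at container start and then drops privileges before running the real command. The script name, mount path, and uid 9999 are assumptions carried over from the clip above:
Dockerfile
FROM ubuntu
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
# the container starts as root: repair ownership of the mounted volume first,
# then run the given command as uid 9999 (setpriv ships with util-linux on ubuntu)
chown -R 9999 /opt/app
exec setpriv --reuid=9999 --regid=9999 --clear-groups "$@"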