I followed this setup completely. Running WSL1 with Docker Desktop on Windows 10. I am not interested in WSL2 at this point. I don't have Insider Windows.
Now I am trying to start a container with a volume so that the container's files are copied into the volume. According to the official docs:
Populate a volume using a container
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume.
So it should be possible, but I must be missing something really basic here, because it just doesn't work.
I've tried -v vol-name:/path/on/container -> this creates a named volume... somewhere. No clue where, nor how to view it. Running docker volume inspect vol-name shows a path that doesn't exist, neither in WSL nor on the Docker host (Windows). I even tried mounting the MobyVM and it isn't there either.
I've tried -v /c/full/path:/path/on/container -> this creates a bind-type mount. It's empty (by design). If I put files under /c/full/path, I see them in the container under /path/on/container, but that's not what I need. I need to populate the volume with the container's contents. From what I understand from the docs, I need a volume-type mount, not a bind-type mount, and in this case the -v option forces a bind-type mount.
I've tried --mount type=volume,source=/c/full/path,destination=/path/on/container -> This results in this error: docker: Error response from daemon: create /c/full/path: "/c/full/path" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path. The path separator is not allowed...
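From the error message, my reading (not tested) is that with --mount, type=volume only accepts a volume name as source, and a host path would need type=bind instead, i.e. something like:
--mount type=volume,source=vol-name,destination=/path/on/container
--mount type=bind,source=/c/full/path,destination=/path/on/container
But the bind form presumably has the same problem as -v: it won't be populated from the container.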
I've read something about special characters in passwords being an issue, and reset my password.
I've read about /c/full/path needing full access permissions, and gave "everyone" full access.
Please help
Let me summarize what I think your setup is, then I will try to give a solution.
You are running Docker Desktop for the Docker engine
You are connecting to Docker Desktop via the Docker CLI installed in WSL
You are trying to share a Windows folder with a running container
You have enabled sharing of your C drive in the settings of Docker Desktop
I think that you are pointing at the wrong path: the path you give needs to be recognized by Docker Desktop, which, remember, is running in Windows, so the path needs to be in the format c:/full/path.
So try the following to test whether you have everything set up correctly:
➜ cd /mnt/c
➜ mkdir -p full/path
➜ cd full/path
➜ pwd
/mnt/c/full/path
➜ docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
➜ date > foobar.txt
➜ cat foobar.txt
Thu Feb 6 17:49:31 STD 2020
➜ docker run --rm -v c:/full/path:/full/path alpine cat /full/path/foobar.txt
Thu Feb 6 17:49:31 STD 2020
➜
To finish off, you can use wslpath along with pwd to get the current dir in a form that Docker Desktop can use:
docker run --rm -v $(wslpath -w $(pwd)):/full/path alpine ls /full/path/
Hope this helps
I think that I have the same setup as you: Windows 10, WSL1, Docker Desktop 2.2.0.0 (42247)
➜ uname -a
Linux LAPTOP1001 4.4.0-17134-Microsoft #1130-Microsoft Thu Nov 07 15:21:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
➜ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
I have managed to perform the same operations as you; I did this in the following manner using alpine:
➜ docker volume create my-vol
my-vol
➜ docker volume ls
DRIVER VOLUME NAME
local my-vol
➜ docker volume inspect my-vol
[
{
"CreatedAt": "2020-02-07T11:06:15Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
"Name": "my-vol",
"Options": {},
"Scope": "local"
}
]
➜ docker run -it --name volTest --mount source=my-vol,target=/my-vol alpine
/ # cd my-vol
/my-vol # ls
/my-vol # exit
➜ docker run -it --name volTest2 --mount source=my-vol,target=/usr/bin alpine
/ # cd /usr/bin/
/usr/bin # ls -al
total 228
drwxr-xr-x 2 root root 4096 Feb 7 11:16 .
drwxr-xr-x 7 root root 4096 Jan 16 21:52 ..
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [ -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [[ -> /bin/busybox
...
...
/usr/bin # exit
➜ docker container start volTest
volTest
➜ docker container attach volTest
/ # cd /my-vol/
/my-vol # ls -al
total 228
drwxr-xr-x 2 root root 4096 Feb 7 11:16 .
drwxr-xr-x 7 root root 4096 Jan 16 21:52 ..
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [ -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [[ -> /bin/busybox
...
...
/my-vol # exit
➜
First, I create a volume my-vol and inspect it.
Second, my-vol is attached to the root of the container volTest; when you look inside, the volume is empty.
Then I attach my-vol to another container called volTest2, but at the folder /usr/bin, which as you can see contains lots of files.
Now, restarting volTest, you can see the volume has been populated with the files from /usr/bin of the container volTest2.
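If that works, a rough sketch of the same idea applied to your own image might be (my-image and /app/ are just placeholders for whatever your image and directory actually are):
➜ docker volume create app-vol
➜ docker run --rm --mount source=app-vol,target=/app/ my-image true
➜ docker run --rm --mount source=app-vol,target=/data alpine ls /data
The first run copies the image's /app/ contents into the empty volume app-vol; the second just lets you peek at what ended up in the volume.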
Can you replicate my results on your system?
Cheers
It seems that symlinks to sockets are not correctly handled by docker run.
The problem:
I'm trying to use rootless docker. The socket to connect to docker is
/run/user/1000/docker.sock.
But I need to use a docker-compose file where a traefik container is run with a volume mapping to give access to /var/run/docker.sock from inside the container. I can't modify this docker-compose file.
To make it work, I tried creating a symlink /var/run/docker.sock -> /run/user/1000/docker.sock. It works fine on my host machine: I can talk to the rootless docker daemon using this socket. But when running the container, /var/run/docker.sock inside it is not a socket but a directory.
How to reproduce:
To reproduce the problem with docker run:
sudo ln -s /run/user/1000/docker.sock /var/run/docker.sock
sudo docker run -v /run/user/1000/docker.sock:/var/run/docker-user.sock -v /var/run/docker.sock:/var/run/docker.sock -it traefik /bin/sh
Inside the traefik container:
/ # ls -al /var/run/
total 0
drwxr-xr-x 1 root root 54 Dec 13 17:23 .
drwxr-xr-x 1 root root 134 Dec 13 17:23 ..
srw-rw---T 1 root 974 0 Dec 13 15:06 docker-user.sock
drwxr-xr-x 2 root root 40 Dec 13 15:09 docker.sock
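(For what it's worth, bind-mounting the real socket path straight onto the target does come through as a socket, so something like the line below works for a plain docker run, but it doesn't help with the compose file I can't modify:)
sudo docker run -v /run/user/1000/docker.sock:/var/run/docker.sock -it traefik /bin/sh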
Thanks for the help :)
I want to use a custom docker config.json file like this to reassign the detach keystrokes:
{
"detachKeys": "ctrl-q,ctrl-q"
}
In a "normal" docker world, i.e. one where docker is installed via apt or similar and not snap, I could put this file in $HOME/.docker/config.json and the setting is picked up when I next run the docker command. However, this file is not recognized when running /snap/bin/docker. docker just silently ignores it.
If I try to force it to use this directory, I am denied:
$ docker --config .docker/ run -it ubuntu /bin/bash
WARNING: Error loading config file: .docker/config.json: open .docker/config.json: permission denied
If I try placing the file alongside daemon.json in /var/snap/docker/current/config/, docker also silently fails to notice any config.json:
$ ls -l /var/snap/docker/current/config/
total 8
-rw-r--r-- 1 root root 36 Feb 28 11:28 config.json
-rw-r--r-- 1 root root 200 Feb 28 09:44 daemon.json
$ docker run -it ubuntu /bin/bash
Now, I can force the directory location, but surely there is a better way?
$ docker --config /var/snap/docker/current/config/ run -it ubuntu /bin/bash
Ok, after writing this question, I ran across the answer. Snap wants this file to go here:
$ ls -l ~/snap/docker/current/.docker/
total 4
-rw-r--r-- 1 gclaybur gclaybur 36 Feb 28 12:04 config.json
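So a quick sketch of the fix, assuming the file already exists as ./.docker/config.json as in my attempts above:
$ mkdir -p ~/snap/docker/current/.docker
$ cp .docker/config.json ~/snap/docker/current/.docker/config.json
$ docker run -it ubuntu /bin/bash
After that, ctrl-q,ctrl-q detaches from the container instead of the default ctrl-p,ctrl-q.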
I pulled the Ubuntu image using docker pull.
I connect to the container using docker exec, create a file, and then exit.
When I run docker exec again, the file is lost.
How do I keep the file in that container? I have tried a Dockerfile and tagging docker images, and that works.
But is there any other way to keep files in a docker container for a longer time?
One option is to commit your changes. After you've added the file, and while the container is still running, you should run:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Another option is to use a volume, but that depends on your logic and needs.
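A rough sketch of the commit route with placeholder names (mycontainer for your running container, myimage:snapshot for the new tag):
docker commit mycontainer myimage:snapshot
docker run -it myimage:snapshot /bin/bash
The file you created is now baked into the new image, so it survives the original container.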
The best way to persist content in containers is with Docker Volumes:
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ sudo docker run --rm -it -v $PWD:/data ubuntu
root@00af7ccf1d3b:/# echo "Persits data with Docker Volumes" > /data/docker-volumes.txt
root@00af7ccf1d3b:/# cat /data/docker-volumes.txt
Persits data with Docker Volumes
root@00af7ccf1d3b:/# exit
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ ls -al
total 12
drwxr-xr-x 2 exadra37 exadra37 4096 Nov 25 15:34 .
drwxr-xr-x 8 exadra37 exadra37 4096 Nov 25 15:33 ..
-rw-r--r-- 1 root root 33 Nov 25 15:34 docker-volumes.txt
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ cat docker-volumes.txt
Persits data with Docker Volumes
The docker command explained:
sudo docker run --rm -it -v $PWD:/data ubuntu
I used the flag -v to map the current dir $PWD to the /data dir inside the container
inside the container:
I wrote some content to it
I read that same content
I exited the container
On the host:
I used ls -al to confirm that the file was persisted to my computer.
I confirmed I could access that same file from my computer's filesystem.
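Strictly speaking, the example above is a bind mount (a host directory mapped with -v). A sketch of the same flow with an actual named volume (my-data is just an example name) would be:
sudo docker volume create my-data
sudo docker run --rm -v my-data:/data ubuntu sh -c 'echo "kept in a named volume" > /data/note.txt'
sudo docker run --rm -v my-data:/data ubuntu cat /data/note.txt
Here the data lives in a volume managed by Docker rather than in a directory you chose on the host.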
I am trying to learn Docker volumes, and I am using centos:latest as my base image. When I try to run a Docker command, I am unable to access the attached volume inside the container:
Command:
sudo docker run -it --name test -v /home/user/Myhostdir:/mydata centos:latest /bin/bash
Error:
[user#0bd1bb78b1a5 mydata]$ ls
ls: cannot open directory .: Permission denied
When I ls to check the folder's ownership, it shows 1001. What's happening, and how can I solve this?
drwxrwxr-x. 2 1001 1001 38 Jun 2 23:12 mydata
My local machine:
[user#xxx07012 Myhostdir]$ pwd
/home/user/Myhostdir
[user#swathi07012 Myhostdir]$ ls -al
total 12
drwxrwxr-x. 2 user user 38 Jun 2 23:12 .
drwx------. 18 user user 4096 Jun 2 23:11 ..
-rw-rw-r--. 1 user user 15 Jun 2 23:12 text.2.txt
-rw-rw-r--. 1 user user 25 Jun 2 23:12 text.txt
This is partially a Docker issue, but mostly an SELinux issue. I am assuming you are running an old 1.x version of Docker.
You have a couple of options. First, you could take a look at this blog post to understand the issue a bit more and possibly use the fix mentioned there.
Or you could just upgrade to a newer version of Docker. I tested mounting a simple volume on Docker version 18.03.1-ce:
docker run -it --name test -v /home/chris/test:/mydata centos:latest /bin/bash
[root#bfec7af20b99 /]# cd mydata/
[root#bfec7af20b99 mydata]# ls
test.txt.txt
[root#bfec7af20b99 mydata]# ls -l
total 0
-rwxr-xr-x 1 root root 0 Jun 3 00:40 test.txt.txt
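If upgrading is not possible, the usual SELinux workaround (assuming SELinux really is the cause here) is to add the :Z (or :z) label option to the volume flag so Docker relabels the host directory:
sudo docker run -it --name test -v /home/user/Myhostdir:/mydata:Z centos:latest /bin/bash
:Z labels the content as private to this container, while :z allows it to be shared between containers.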
I'm trying to understand when containers copy preexisting files into a volume mounted on the same directory. For example:
FROM ubuntu
RUN mkdir /testdir
RUN echo "Hello world" > /testdir/file.txt
running:
#docker volume create vol
#docker run -dit -v vol:/testdir myimage
#docker exec -it 900444b7ab86 ls -la /testdir
drwxr-xr-x 2 root root 4096 May 11 18:43 .
drwxr-xr-x 1 root root 4096 May 11 18:43 ..
-rw-r--r-- 1 root root 6 May 11 17:53 file.txt
The image also has files in other directories, for example:
# docker exec -it 900444b7ab86 ls -la /etc/cron.daily
total 20
drwxr-xr-x 2 root root 4096 Apr 26 21:17 .
drwxr-xr-x 1 root root 4096 May 11 18:43 ..
-rwxr-xr-x 1 root root 1478 Apr 20 10:08 apt-compat
-rwxr-xr-x 1 root root 1176 Nov 2 2017 dpkg
-rwxr-xr-x 1 root root 249 Jan 25 15:09 passwd
But when, for example, I run it with:
docker run -it 900444b7ab81 -v vol:/etc/cron.daily
The directory is now empty.
Why don't the files get copied this time?
#docker run -dit -v vol:/testdir
That is not a valid docker command: there is no image reference included, so there's nothing for docker to run.
docker run -it 900444b7ab81 -v vol:/etc/cron.daily
This will attempt to run the image 900444b7ab81 with the command -v vol:/etc/cron.daily. Earlier you had a container with a very similar id, so it's not clear whether you are trying to run with a container id instead of an image id. In any case, the command -v almost certainly doesn't exist inside the container.
The order of these arguments is important: the first thing after run that isn't an option or an argument to the previous option is treated as the image reference. Anything passed after that reference is the command to run in the container. So if you want to mount the volume, you need to move that option before the image reference.
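For example, reusing the names from your question (assuming the image was tagged myimage), the corrected form would be:
docker run -dit -v vol:/etc/cron.daily myimage
Note that vol already has content from /testdir, so it will hide the image's /etc/cron.daily files, as explained below.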
I'm trying to understand when containers copy preexisting files into a mounted volume on the same directory.
With named volumes, when the container is created, docker initializes an empty named volume with the contents of the image at that location. Once the volume has files in it, it is mapped as-is into the container on any subsequent use, so changes to the image at the same location will not be seen.
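As a sketch of that behaviour using the image from the question (freshvol is just a placeholder name):
docker volume create freshvol
docker run --rm -v freshvol:/etc/cron.daily myimage ls /etc/cron.daily
docker run --rm -v vol:/etc/cron.daily myimage ls /etc/cron.daily
The first run should list apt-compat, dpkg and passwd, copied from the image because freshvol was empty; the second should list only file.txt, because vol already had content from /testdir, so the image's files are not copied in.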