I pulled the Ubuntu image using docker pull.
I connect to the container using docker exec, create a file, and then exit.
When I run docker exec again, the file is gone.
How can I keep the file in that container? I have tried using a Dockerfile and tagging Docker images, and that works.
But is there any other way to keep files in a Docker container for longer?
One option is to commit your changes. After you've added the file, and while the container is still running, you should run:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
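For example (a sketch; my-container and my-image:snapshot are placeholder names, and this assumes the container is still running and a Docker daemon is available):

```
# snapshot the container's current filesystem into a new image
docker commit my-container my-image:snapshot
# later, start a fresh container from that snapshot; the file is still there
docker run -it my-image:snapshot /bin/bash
```

Note that docker commit captures only the container's filesystem; data in mounted volumes is not included in the new image.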
Another option: you may want to use a volume, but that depends on your logic and needs.
The best way to persist content in containers is with Docker volumes:
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ sudo docker run --rm -it -v $PWD:/data ubuntu
root@00af7ccf1d3b:/# echo "Persits data with Docker Volumes" > /data/docker-volumes.txt
root@00af7ccf1d3b:/# cat /data/docker-volumes.txt
Persits data with Docker Volumes
root@00af7ccf1d3b:/# exit
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ ls -al
total 12
drwxr-xr-x 2 exadra37 exadra37 4096 Nov 25 15:34 .
drwxr-xr-x 8 exadra37 exadra37 4096 Nov 25 15:33 ..
-rw-r--r-- 1 root root 33 Nov 25 15:34 docker-volumes.txt
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ cat docker-volumes.txt
Persits data with Docker Volumes
The docker command explained:
sudo docker run --rm -it -v $PWD:/data ubuntu
I used the flag -v to map the current dir $PWD to the /data dir inside the container.
Inside the container:
I wrote some content to it
I read that same content back
I exited the container
On the host:
I used ls -al to confirm that the file was persisted to my computer.
I confirmed I could access that same file in my computer's filesystem.
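A variation on the same idea (a sketch; mydata is a placeholder volume name, and a running Docker daemon is assumed): instead of bind-mounting $PWD, you can let Docker manage the storage with a named volume, which persists independently of any container:

```
# create a Docker-managed named volume
docker volume create mydata
# write into it from one container...
docker run --rm -v mydata:/data ubuntu sh -c 'echo "hello" > /data/file.txt'
# ...and read it back later from another
docker run --rm -v mydata:/data ubuntu cat /data/file.txt
```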
So far I have not had this problem. I am running a container image on a remote workstation. Unusually, this workstation is not connected to the internet, and I had to start the Docker daemon manually.
After this, to run the container I tried:
docker run -it -t --rm --gpus all --env CUDA_VISIBLE_DEVICES -v "/mnt/disco/projects/ThisProject/:/home/ThisProject/" -w "/home/ThisProject/" container_image:latest /bin/bash
When I do this I get into the container in the folder /home/ThisProject as the root user, but I cannot ls there. If I cd .. and run ls -l, I can see that the ThisProject folder looks like this:
drwxrws--- 7 nobody nogroup 4096 Jul 21 07:30 ThisProject
As you can see, the owner is "nobody".
What can I do to correct this?
Docker version 20.10.2
I'm just starting out on Docker and following training guides, but something hasn't been mentioned so far (that I have discovered): when I run a container that writes some data out to a Docker volume, and then run that container again attached to the same volume, why doesn't the newly named file get added to it?
Here is my rather basic Dockerfile
FROM ubuntu
RUN mkdir applocal
RUN touch applocal/applocalfile."$(date --iso-8601=seconds)"
RUN ls -la applocal
I run this sequence of commands...
docker build Dockerfile -t mine/applocal-persist
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM ubuntu
---> c29284518f49
Step 2/4 : RUN mkdir applocal
---> Running in 9f796f4d988a
Removing intermediate container 9f796f4d988a
---> 99005a7ffed1
Step 3/4 : RUN touch applocal/applocalfile."$(date --iso-8601=seconds)"
---> Running in ffbf2f4c636a
Removing intermediate container ffbf2f4c636a
---> 199bc706dcc6
Step 4/4 : RUN ls -la applocal
---> Running in 7da02faa9fba
total 8
drwxr-xr-x 1 root root 4096 Jul 16 13:52 .
drwxr-xr-x 1 root root 4096 Jul 16 13:52 ..
-rw-r--r-- 1 root root 0 Jul 16 13:52 applocalfile.2021-07-16T13:52:00+00:00
Removing intermediate container 7da02faa9fba
---> 7387c521d82b
Successfully built 7387c521d82b
Successfully tagged mine/applocal-persist:latest
Then run the command...
docker run -v applocalsaved:/applocal mine/applocal-persist
Looking at the Volume data it has worked
ls -la /var/lib/docker/volumes/applocalsaved/_data/
total 8
drwxr-xr-x 2 root root 4096 Jul 16 14:55 .
drwxr-xr-x 3 root root 4096 Jul 16 14:55 ..
-rw-r--r-- 1 root root 0 Jul 16 14:52 applocalfile.2021-07-16T13:52:00+00:00
If I wait a few minutes later and re-run docker run -v applocalsaved:/applocal mine/applocal-persist
...and check the volume data again, no new file exists
ls -la /var/lib/docker/volumes/applocalsaved/_data/
total 8
drwxr-xr-x 2 root root 4096 Jul 16 14:55 .
drwxr-xr-x 3 root root 4096 Jul 16 14:55 ..
-rw-r--r-- 1 root root 0 Jul 16 14:52 applocalfile.2021-07-16T13:52:00+00:00
Run history...
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d16e9aa495e mine/applocal-persist "bash" 57 seconds ago Exited (0) 55 seconds ago distracted_cohen
69ff06d9c886 mine/applocal-persist "bash" 2 minutes ago Exited (0) 2 minutes ago affectionate_lehmann
I've listed the Volume Inspect here...
docker volume inspect applocalsaved
[
{
"CreatedAt": "2021-07-16T14:55:24+01:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/applocalsaved/_data",
"Name": "applocalsaved",
"Options": null,
"Scope": "local"
}
]
I'm obviously missing a trick here, or misunderstanding what is going on or the design around this.
Thanks in advance.
For info: I'm on Windows running VirtualBox, with Ubuntu 21.04 as a VM.
Those commands run only once, when the image is built.
If you want something to run on container startup, you can use CMD or ENTRYPOINT:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
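For example, a sketch of the Dockerfile above with the touch moved into CMD, so it runs every time a container starts rather than once at build time:

```
FROM ubuntu
RUN mkdir /applocal
# CMD runs at container start, not at image build time
CMD ["sh", "-c", "touch /applocal/applocalfile.$(date --iso-8601=seconds) && ls -la /applocal"]
```

With this, each docker run -v applocalsaved:/applocal mine/applocal-persist would create a new timestamped file in the mounted volume.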
The commands in the Dockerfile only run once, when the image is originally built. You can verify this for example by just running the image without a volume mount:
docker build -t mine/applocal-persist .
docker run --rm mine/applocal-persist \
ls -l ./applocal
sleep 60
docker run --rm mine/applocal-persist \
ls -l ./applocal
If you start the container with a named volume mounted, only if the volume is a Docker named volume and only if the volume is empty, the contents of the image will be copied into the volume. (This doesn't happen on Docker bind mounts, Kubernetes volumes, or if the image has changed; I would not rely on this for any sort of data sharing since it works in so few contexts.)
Conversely, if you start the container with any sort of volume mounted, whatever content is in the volume completely replaces what's in the image. You can see this with some more experimentation:
# Build the image
docker build -t mine/applocal-persist .
# Start the container with a new named volume mounted; see what's there.
docker volume rm applocalsaved
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
ls -l /applocal
# Edit a file in the volume and see that it gets persisted across restarts
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
touch /applocal/foo
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
ls -l /applocal
# But it is not included in the image without the bind mount
docker run --rm mine/applocal-persist \
ls -l /applocal
sleep 60
# Rebuild the image
docker build -t mine/applocal-persist .
# In the base image, you will see the updated timestamp
docker run --rm mine/applocal-persist \
ls -l /applocal
# But if you mount the volume, the old volume contents replace the
# image contents and you will only see the old timestamp
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
ls -l /applocal
I want to use a custom docker config.json file like this to reassign the detach keystrokes:
{
"detachKeys": "ctrl-q,ctrl-q"
}
In a "normal" docker world, i.e. one where docker is installed via apt or similar and not snap, I could put this file in $HOME/.docker/config.json and the setting is picked up when I next run the docker command. However, this file is not recognized when running /snap/bin/docker. docker just silently ignores it.
If I try to force it to use this directory, I am denied:
$ docker --config .docker/ run -it ubuntu /bin/bash
WARNING: Error loading config file: .docker/config.json: open .docker/config.json: permission denied
If I try to place the file alongside daemon.json in /var/snap/docker/current/config/, docker also silently fails to notice the config.json:
$ ls -l /var/snap/docker/current/config/
total 8
-rw-r--r-- 1 root root 36 Feb 28 11:28 config.json
-rw-r--r-- 1 root root 200 Feb 28 09:44 daemon.json
$ docker run -it ubuntu /bin/bash
Now, I can force the directory location, but surely there is a better way?
$ docker --config /var/snap/docker/current/config/ run -it ubuntu /bin/bash
Ok, after writing this question, I ran across the answer. Snap wants this file to go here:
$ ls -l ~/snap/docker/current/.docker/
total 4
-rw-r--r-- 1 gclaybur gclaybur 36 Feb 28 12:04 config.json
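To set this up from a script (a sketch; the ~/snap/docker/current/.docker location is the one found above, and the current symlink tracks the active snap revision):

```shell
#!/bin/sh
set -e
# snap confines docker's config lookup to its own writable area,
# so config.json goes under ~/snap/docker/current/.docker/
CFG_DIR="$HOME/snap/docker/current/.docker"
mkdir -p "$CFG_DIR"
cat > "$CFG_DIR/config.json" <<'EOF'
{
  "detachKeys": "ctrl-q,ctrl-q"
}
EOF
# confirm the file landed where snap's docker will read it
cat "$CFG_DIR/config.json"
```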
I followed this setup completely. Running WSL1 with Docker Desktop on Windows 10. I am not interested in WSL2 at this point. I don't have Insider Windows.
Now, I am trying to start a container with a volume, so that the container's files are copied into the volume. According to official docs:
Populate a volume using a container
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume.
So it should be possible, but I must be missing something really basic here, cause it just doesn't work.
I've tried -v vol-name:/path/on/container -> this creates a named volume... somewhere. No clue where, nor how to view it. Doing docker volume inspect vol-name shows a path that doesn't exist, neither in WSL nor on the Docker host (Windows). I even tried mounting into the MobyVM and it isn't there either.
I've tried -v /c/full/path:/path/on/container -> this creates a bind-type mount. It's empty (by design). If I put files under /c/full/path, I will see them in the container under /path/on/container, but that's not what I need. I need to populate the volume with contents from the container. From what I understand of the documents, I need a volume-type mount, not a bind-type mount. In this case the -v option forces a bind-type mount.
I've tried --mount type=volume,source=/c/full/path,destination=/path/on/container -> This results in this error: docker: Error response from daemon: create /c/full/path: "/c/full/path" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path. The path separator is not allowed...
I've read something about special characters in passwords being an issue, and reset my password.
I've read about /c/full/path needing full access permissions, and gave "Everyone" full access.
Please help
Let me summarize what I think your setup is, then I will try to give a solution.
You are running Docker Desktop for the Docker engine
You are connecting to Docker Desktop via the docker CLI installed in WSL
You are trying to share a Windows folder with a running container
You have enabled sharing of your C drive in the settings of Docker Desktop
I think you are pointing to the wrong path. The path you give needs to be recognized by Docker Desktop, which, remember, is running on Windows; therefore the path needs to be in the format c:/full/path.
So try the following to test if you have everything setup correctly
➜ cd /mnt/c
➜ mkdir -p full/path
➜ cd full/path
➜ pwd
/mnt/c/full/path
➜ docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
➜ date > foobar.txt
➜ cat foobar.txt
Thu Feb 6 17:49:31 STD 2020
➜ docker run --rm -v c:/full/path:/full/path alpine cat /full/path/foobar.txt
Thu Feb 6 17:49:31 STD 2020
➜
To finish off, you can use wslpath along with pwd to get the current directory in a form that Docker Desktop can use.
docker run --rm -v $(wslpath -w $(pwd)):/full/path alpine ls /full/path/
Hope this helps
I think that I have the same setup as you: windows 10, WSL1, docker desktop 2.2.0.0 (42247)
➜ uname -a
Linux LAPTOP1001 4.4.0-17134-Microsoft #1130-Microsoft Thu Nov 07 15:21:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
➜ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
I have managed to perform the same operations as you; I did this in the following manner using alpine:
➜ docker volume create my-vol
my-vol
➜ docker volume ls
DRIVER VOLUME NAME
local my-vol
➜ docker volume inspect my-vol
[
{
"CreatedAt": "2020-02-07T11:06:15Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
"Name": "my-vol",
"Options": {},
"Scope": "local"
}
]
➜ docker run -it --name volTest --mount source=my-vol,target=/my-vol alpine
/ # cd my-vol
/my-vol # ls
/my-vol # exit
➜ docker run -it --name volTest2 --mount source=my-vol,target=/usr/bin alpine
/ # cd /usr/bin/
/usr/bin # ls -al
total 228
drwxr-xr-x 2 root root 4096 Feb 7 11:16 .
drwxr-xr-x 7 root root 4096 Jan 16 21:52 ..
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [ -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [[ -> /bin/busybox
...
...
/usr/bin # exit
➜ docker container start volTest
volTest
➜ docker container attach volTest
/ # cd /my-vol/
/my-vol # ls
total 228
drwxr-xr-x 2 root root 4096 Feb 7 11:16 .
drwxr-xr-x 7 root root 4096 Jan 16 21:52 ..
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [ -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 [[ -> /bin/busybox
...
...
/my-vol # exit
➜
First, I create a volume my-vol and inspect it.
Second, my-vol is attached to the container volTest at /my-vol; when you look inside the volume, it is empty.
Then I attach my-vol to another container called volTest2, but at the folder /usr/bin, which as you can see contains lots of files.
Now, restarting volTest, you can see the volume has been populated with the files from /usr/bin from the container volTest2.
Can you replicate my results on your system?
Cheers
I have multiple docker containers, and all of them must have a common content in a subdirectory. This common content is pretty much standard, so I would enjoy being able to package it somehow.
One option would be to create a volume, put the files in the volume, and then bind the containers to the volume. However, from what I understand of Docker volumes, the result would be that the volume is shared among containers: any changes a container makes to the volume's content will appear in the other containers. I don't want this.
Note that keeping it read-only is not an option. I want the data to be read-write; I simply don't want it to be shared among containers, and at the same time I don't want to keep the data in the image.
Is this possible in Docker?
As long as you don't re-use the same volume for other containers, you can use a Docker image as a template, and use it to "propagate" the volume data:
1. Create a directory for all your template files;
mkdir data
# just creating some dummy files here to illustrate the concept
touch data/foo data/bar data/baz data/bla data/bla2
2. Create a Dockerfile for building the template image
This image contains the default data to be used for the containers.
We're using a tiny image (hello-world) as a base, because an image
requires a command before a container can be created from it.
FROM hello-world
COPY . /data/
3. Build the template image
docker build -t template-data .
4. Create a new volume, and propagate the data
Then, you can create a volume, create a container from the image, and
attach the volume to it. The first time the volume is used, while it is still
empty, the files are copied from the container to the volume.
After the volume is created, and propagated, we don't really need the
container anymore (the data is copied to the volume), so we're passing the
--rm flag as well, so that the container (not the volume, because it's a
"named" volume) is removed directly after it exits
# create an empty volume
docker volume create --name data-volume1
# start the container (which copies the data), and remove the container
docker run -it --rm -v data-volume1:/data template-data
5. Use the volume for your application
Then start your application container, and attach the volume (which now
contains the template data) to it.
For this example, I just start an alpine container and show the contents
of the volume, but normally this would be your application;
docker run --rm -v data-volume1:/somewhere alpine ls -l /somewhere
And you can see the data is there;
docker run --rm -v data-volume1:/somewhere alpine ls -l /somewhere
total 0
-rw-r--r-- 1 root root 0 Jun 2 20:14 bar
-rw-r--r-- 1 root root 0 Jun 2 20:14 baz
-rw-r--r-- 1 root root 0 Jun 2 20:14 bla
-rw-r--r-- 1 root root 0 Jun 2 20:14 bla2
-rw-r--r-- 1 root root 0 Jun 2 20:14 foo
You can do this multiple times, but you need to create a new volume
for each project/application; otherwise they share the same volume,
and so are working on the same data:
docker volume create --name data-volume2
docker volume create --name data-volume3
docker volume create --name data-volume4
docker run -it --rm -v data-volume2:/data template-data
docker run -it --rm -v data-volume3:/data template-data
docker run -it --rm -v data-volume4:/data template-data
docker run --rm -v data-volume2:/somewhere alpine ls -l /somewhere
docker run --rm -v data-volume3:/somewhere alpine ls -l /somewhere
docker run --rm -v data-volume4:/somewhere alpine ls -l /somewhere
Hope this helps!