I have a Docker container running and would like to know if I can save its state without committing it.
For example:
1. Start container
2. Create a new file inside it
3. Exit container
4. Start container
Can the file still exist in this container without running docker commit before exiting?
Docker's stopped (exited) containers keep files and changes in the container's writable AUFS layer. Please note that this layer is removed when the container itself is removed.
POC:
sudo docker run -it --name test debian:jessie /bin/bash
root@3d01feb251bd:/# touch farhad
root@3d01feb251bd:/# exit
sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d01feb251bd debian:jessie "/bin/bash" 16 seconds ago Exited (0) 7 seconds ago test
sudo docker start test
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d01feb251bd debian:jessie "/bin/bash" 31 seconds ago Up 8 seconds test
sudo docker exec -it test /bin/bash
root@3d01feb251bd:/# ls
bin boot dev etc farhad home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
As you can see the file I touched before exiting the container is still there.
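As a follow-up to the note about the writable layer being tied to the container's lifetime, here is a sketch (not part of the original POC) of what removing the container would do; the expected outcome is described in comments rather than captured output:
sudo docker rm test
# The writable layer is discarded along with the container, so the file is gone.
sudo docker run -it --name test debian:jessie /bin/bash
# Running ls inside this fresh container would no longer show the farhad file.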
I ran this command:
docker run -d -ti foo
it works, but I realized that I probably forgot to remove the -ti part.
I assume that Docker ignores those flags if -d is used. Does anyone know?
It seems like -ti and -d would contradict each other?
It still sets up the stdin file handle and allocates a pseudo-TTY for the container. If the app inside the container attempts to read from stdin, it will hang waiting for input rather than exit immediately or fail. Later on, you can attach to that process. For example:
$ docker run -dit --name test-dit busybox sh
f0e057ce47e03eb227aacb42e3a358b14fa5d8b26ad490fcec7cbfe0cd3cce73
$ docker run -d --name test-d busybox sh
4f2583d3380953f328b702c88884fbe55f16c44bce13dbccc00c4bb81f3270f2
$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f2583d33809 busybox "sh" 5 seconds ago Exited (0) 4 seconds ago test-d
f0e057ce47e0 busybox "sh" 14 seconds ago Up 13 seconds test-dit
$ docker container attach test-dit
/ #
/ # ls
bin dev etc home proc root sys tmp usr var
/ # exit
$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f2583d33809 busybox "sh" 22 seconds ago Exited (0) 21 seconds ago test-d
f0e057ce47e0 busybox "sh" 31 seconds ago Exited (0) 2 seconds ago test-dit
In the docker container ls -a output, you can see that the shell started without -it exited immediately, while the one started with -it stayed up and was available to attach to and run commands.
It does not ignore -ti.
The -ti part enables direct user interaction, and the -d part detaches from the container the moment it starts. So, in order to actually interact with it, you'll have to run:
$ docker attach foo
So, yes, it might not be very useful, but it neither causes an impossible situation, nor one you cannot get out of.
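For example, a small sketch (the container name foo is just a placeholder; Ctrl-P followed by Ctrl-Q is Docker's default detach key sequence):
docker run -dit --name foo busybox sh
docker attach foo
# Interact with the shell as usual; press Ctrl-P then Ctrl-Q to detach
# without stopping the container, or type exit to stop it.
docker ps
# After detaching with Ctrl-P Ctrl-Q, foo still shows up as running.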
Here is an example on my CLI:
$ docker pull hello-world
$ docker run hello-world
The output is empty when I run ls/ps:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The container only shows up when I use -a, which suggests it is not actually running.
$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
96c3e42ae83a hello-world "/hello" 11 seconds ago Exited (0) 8 seconds ago jovial_rosalind
dcaed0ba308f registry "/entrypoint.sh /etc…" 42 minutes ago Created 0.0.0.0:5000->5000/tcp registry
Have I missed something?
Looks like your container exited right away. Is it meant to be interactive (like running bash, or needing any user interaction)? If it is, you should run it like this to attach a terminal to it:
docker run -ti hello-world
If not, what does your hello program do? If it is not something that will keep running, then the container will stop whenever it exits.
Also keep in mind that, unless you pass docker run the -d/--detach flag, it will only return after the container has stopped - so if it returns right away, that means your container has already stopped.
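For instance (nginx here is only a stand-in for any long-running image, not something from your setup):
docker run hello-world
# Prints its message, the process exits, and the command returns.
docker run -d nginx
# Returns immediately with a container ID while the server keeps running.
docker ps
# Only the nginx container is still listed as running.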
You may want to use one of these to get a bash shell in the container to debug your problem:
docker run -ti hello-world bash
docker run --entrypoint bash -ti hello-world
To understand the difference between them, you can read the documentation on ENTRYPOINT and CMD.
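As a rough illustration of the difference, here is a minimal sketch (the entry-demo image name is made up, and this is not how the hello-world image is built):
FROM busybox
ENTRYPOINT ["echo", "prefix:"]
CMD ["default argument"]
Then build and run:
docker build -t entry-demo .
docker run entry-demo
# prints: prefix: default argument
docker run entry-demo other argument
# arguments after the image name replace CMD, so this prints: prefix: other argument
docker run --entrypoint echo entry-demo hi
# --entrypoint replaces ENTRYPOINT entirely, so this prints: hi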
I pulled an Ubuntu image using docker pull.
I connect to the container using docker exec, create a file, and then exit.
When I run docker exec again, the file is gone.
How do I keep the file in that container? I have tried using a Dockerfile and tagging Docker images, and that works.
But is there any other way to keep files in a Docker container for longer?
One option is to commit your changes. After you've added the file, and while the container is still running, you should run:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
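For example, assuming your container is called mycontainer (a placeholder name), something like this should work:
docker commit mycontainer myubuntu:with-file
# The committed image now contains the file in its filesystem.
docker run -it myubuntu:with-file /bin/bash
# New containers started from myubuntu:with-file will include the file.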
Another option is to use a volume, but whether that fits depends on your logic and needs.
The best way to persist content in containers is with Docker volumes:
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ sudo docker run --rm -it -v $PWD:/data ubuntu
root@00af7ccf1d3b:/# echo "Persist data with Docker Volumes" > /data/docker-volumes.txt
root@00af7ccf1d3b:/# cat /data/docker-volumes.txt
Persist data with Docker Volumes
root@00af7ccf1d3b:/# exit
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ ls -al
total 12
drwxr-xr-x 2 exadra37 exadra37 4096 Nov 25 15:34 .
drwxr-xr-x 8 exadra37 exadra37 4096 Nov 25 15:33 ..
-rw-r--r-- 1 root root 33 Nov 25 15:34 docker-volumes.txt
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ cat docker-volumes.txt
Persist data with Docker Volumes
The docker command explained:
sudo docker run --rm -it -v $PWD:/data ubuntu
I used the -v flag to map the current dir $PWD to the /data dir inside the container.
inside the container:
I wrote some content to it
I read that same content
I exited the container
On the host:
I used ls -al to confirm that the file was persisted to my computer.
I confirmed I could access that same file from my computer's filesystem.
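A similar sketch with a named volume instead of a bind mount, in case you prefer Docker-managed storage (the volume name mydata is made up and was not part of the session above):
sudo docker volume create mydata
sudo docker run --rm -it -v mydata:/data ubuntu
# Anything written under /data is stored in the named volume and survives
# the container being removed; later containers can mount the same volume.
sudo docker volume inspect mydata
# Shows where Docker keeps the volume's data on the host.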
I have installed Docker and am running an Ubuntu image with the command:
sudo docker run ubuntu
I would like to create a text file in it and find it the next time the same image runs. How do I achieve that?
UPD.
I'm having problems attaching to the container.
I have a running container:
docker ps -a
aef01293fdc9 ubuntu "/bin/bash" 6 hours ago Up 6 hours priceless_ramanujan
Since its status is Up, I suppose I don't need to execute:
docker start priceless_ramanujan
So I run the attach command:
docker attach priceless_ramanujan
And I get no output; the command does not return.
Why can't I get to the container's bash?
Simple example:
$ docker run -it ubuntu
root@4d5643e8c1a8:/# echo "test" > test.txt
root@4d5643e8c1a8:/# cat test.txt
test
root@4d5643e8c1a8:/# exit
exit
$ docker run -it ubuntu
root@cdb44750bffc:/# cat test.txt
cat: test.txt: No such file or directory
root@cdb44750bffc:/#
docker run image_name
This command creates and starts a new container based on the provided image_name. If a name is not set for the container, a random one is generated and assigned by docker. In the above example 2 containers were created based on ubuntu.
With docker ps -a we can see that modest_jennings and optimistic_leakey are the random names that were created:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cdb44750bffc ubuntu "/bin/bash" About a minute ago Exited (1) 4 seconds ago optimistic_leakey
4d5643e8c1a8 ubuntu "/bin/bash" 2 minutes ago Exited (0) 2 minutes ago modest_jennings
cat test.txt failed the 2nd time because the file didn't exist. The container started from a "clean" ubuntu image.
Actually, we created test.txt inside modest_jennings only.
docker start container_name
This command starts a stopped container. So, in our case, the file is still there:
$ docker start modest_jennings
modest_jennings
$ docker attach modest_jennings
root@4d5643e8c1a8:/# cat test.txt
test
root@4d5643e8c1a8:/#
docker commit container_name image_name
This command creates a new image, so that you can use it later to run containers based on that image. Continuing our example...
$ docker commit modest_jennings my_ubuntu
sha256:a4357f37153ac0b94e37315595f1a3b540538283adc3721df4d4e3b39bf8334f
$ docker run -it my_ubuntu
root@2e38616d532a:/# cat test.txt
test
root@2e38616d532a:/#
If you want a custom image, you can create a Dockerfile:
FROM ubuntu:16.04
ADD ./test.txt /tmp/
After that, you can build it with docker build -t ubuntu:custom .
and finally run your custom image with docker run --name myubuntu ubuntu:custom sleep 3000
You can check your file with docker exec -it myubuntu /bin/bash and then more /tmp/test.txt
I'm trying to create a new Docker image that no longer uses volumes from a running container that does use volumes. The volumes were created using a docker-compose file, not a Dockerfile. The problem is, when I launch a new container via a new docker-compose.yml file, it still has the volumes mapped. I still need to keep these volumes and the original containers/images that use them. Also, if possible, I would like to continue to use the same Docker image, just adding a new version, or :latest. Here are the steps I used:
New version of an existing image:
docker commit <container id> existingImage:new-version
Create a new image from current running container:
docker commit <container id> newimage
Create new docker-compose.yml with no volumes defined and run docker-compose with a different project name
docker-compose -p <new project name>
Running without docker-compose, just use docker run:
docker run -d -p 8093:80 <img>:<version>
Any time I run any combination of these the volumes are still mapped from the original image. So my question is, how to I create a container from an image that once had mapped volumes but I no longer want to use the volumes?
Edit:
Additional things I've tried:
Stop container, remove container, restart docker, run docker compose again. No luck.
Edit 2:
Decided to start over on the image. Using a base image, I launched a container with an updated docker-compose file that uses the now unrelated image. Running docker-compose -f up -d STILL has these same volumes mapped, even though the image does not have (and never has had) any volumes mapped, and the current docker-compose.yml file does not map any volumes. It looks like docker-compose caches which volumes are mapped for projects.
After searching for caching options in docker-compose, I came across this article: How to get docker-compose to always re-create containers from fresh images?
which seems to solve the problem of caching images, but not of containers caching volumes.
According to another SO post what I am trying to do is not possible. For future reference, one cannot attach volumes to an image, and then later decide to remove them. A new image must be created without the volumes instead. Reference:
How to remove configure volumes in docker images
To remove volumes along with the containers used by docker-compose, use docker-compose down -v.
To start containers with docker-compose, leave your existing volumes intact, but not use those volumes, you should change your project name. You can use docker-compose -p new_project_name up -d for that.
Edit: here's an example showing how docker-compose does not reuse named volumes between different projects, but it does reuse and persist the volume unless you do a down -v:
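The docker-compose.vol-named.yml file itself is not shown here; a minimal version consistent with the output below might look like this (the test service and data volume names match the proj1_test_1 container and proj1_data volume in the output):
version: '2'
services:
  test:
    image: busybox
    command: tail -f /dev/null
    volumes:
      - data:/data
volumes:
  data: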
$ docker-compose -p proj1 -f docker-compose.vol-named.yml up -d
Creating network "proj1_default" with the default driver
Creating volume "proj1_data" with default driver
Creating proj1_test_1 ...
Creating proj1_test_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71f2eb516f71 busybox "tail -f /dev/null" 5 seconds ago Up 2 seconds proj1_test_1
$ docker exec -it 71f /bin/sh
/ # ls /data
/ # echo "Hello proj1" >/data/data.txt
/ # exit
The volume is now populated. Let's stop everything and start a new container to show that it persists:
$ docker-compose -p proj1 -f docker-compose.vol-named.yml down
Stopping proj1_test_1 ... done
Removing proj1_test_1 ... done
Removing network proj1_default
$ docker-compose -p proj1 -f docker-compose.vol-named.yml up -d
Creating network "proj1_default" with the default driver
Creating proj1_test_1 ...
Creating proj1_test_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
311900fd3d27 busybox "tail -f /dev/null" 5 seconds ago Up 3 seconds proj1_test_1
$ docker exec -it 311 /bin/sh
/ # cat /data/data.txt
Hello proj1
/ # exit
There's the expected persistent volume. Let's run a different project at the same time to show that its volume is independent:
$ docker-compose -p proj2 -f docker-compose.vol-named.yml up -d
Creating network "proj2_default" with the default driver
Creating volume "proj2_data" with default driver
Creating proj2_test_1 ...
Creating proj2_test_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d39e6fc51436 busybox "tail -f /dev/null" 4 seconds ago Up 2 seconds proj2_test_1
311900fd3d27 busybox "tail -f /dev/null" 33 seconds ago Up 32 seconds proj1_test_1
$ docker exec -it d39 /bin/sh
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 4096 Nov 6 19:56 .
drwxr-xr-x 1 root root 4096 Nov 6 19:56 ..
/ # exit
The volume is completely empty in the new project. Let's clean up.
$ docker-compose -p proj2 -f docker-compose.vol-named.yml down -v
Stopping proj2_test_1 ...
Stopping proj2_test_1 ... done
Removing proj2_test_1 ... done
Removing network proj2_default
Removing volume proj2_data
$ docker volume ls
DRIVER VOLUME NAME
local proj1_data
Note the volume is there in proj1 from before.
$ docker-compose -p proj1 -f docker-compose.vol-named.yml down -v
Stopping proj1_test_1 ... done
Removing proj1_test_1 ... done
Removing network proj1_default
Removing volume proj1_data
$ docker volume ls
DRIVER VOLUME NAME
But doing a down -v deletes the volume.