I did the following and lost all the changed data in my Docker container.
docker build -t <name:tag> .
docker run -p 8080:80 --name <container_name> <name:tag>
docker exec (import and process some files, launch a server to host them)
Then I wanted to run it on a different port. docker stop & docker run does not work. Instead I did
docker stop
docker rm <container_name>
docker run (same parameters as before)
After the restart I saw that the changes made in the container in steps 1-3 had disappeared, and I had to re-run the import.
How do I do this correctly next time?
What you have to do is build a new image from the container you just stopped, after making your changes. Your old run command is still using the old image, which doesn't have the new changes (you made the changes in the container you just stopped, not in the image).
docker commit --help
Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Create a new image from a container's changes
docker commit -a me new_nginx myrepo/nginx:latest
Then you can start a container with the new image you just built.
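For example (the new port and container name here are placeholders, adjust to your setup):
docker run -d -p 8081:80 --name <new_container_name> myrepo/nginx:latest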
But if you don't want to create an image with the changes you made (e.g. you don't want to put a config containing a password into the image), you can use a volume mount:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
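A quick way to see this behavior (assuming, as in the example, that the training/webapp image ships its own content at /webapp):
docker run --rm -v /src/webapp:/webapp training/webapp ls /webapp   # lists the host directory's content
docker run --rm training/webapp ls /webapp                          # without the mount, the image's original content is visible again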
Manage data in containers
Every time you do a docker run, it will spin up a fresh container based on your image. And once a container is started, there are very few things that Docker allows you to change with docker update. So instead, you should preserve any data that needs to persist between instances of a container in an external volume. E.g.
docker run -p 8080:80 -v app-data:/data --name <container_name> <name:tag>
The volume name (app-data) and mount point in the container (/data) can be changed for your own requirements. Then when you destroy and restart a new container, you can mount the same volume in the new container.
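A full cycle might then look like this (the ports are examples):
docker run -p 8080:80 -v app-data:/data --name <container_name> <name:tag>
# ... import and process files, keeping anything persistent under /data ...
docker stop <container_name>
docker rm <container_name>
# new container on a different port, same volume: everything under /data is still there
docker run -p 9090:80 -v app-data:/data --name <container_name> <name:tag>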
Related
I'm trying to start up Tomcat on Docker Desktop, and I followed the official Tomcat tutorial on Docker Hub. But somehow I found that Docker creates a new container every time I run the command docker run -it --rm tomcat, and deletes the container automatically when Tomcat shuts down.
I already know the reason: --rm automatically removes the container when it exits.
Now I have finally built webapps on Tomcat, and I don't want them to vanish.
How can I save my container before it's deleted?
Thanks! ;D
Based on what I've found on the internet, removing the --rm flag is not currently possible. docker update gives you the ability to update some parameters after you start your container, but you cannot update the cleanup flag (--rm) according to the documentation.
References:
I started a docker container with --rm Is there an easy way to keep it, without redoing everything?
Cancel --rm option on running docker container
But a workaround can be applied. You can commit your current container to an image, which acts as a checkpoint; then you can start a new container based on that image, without the --rm flag. You can use docker commit to do so:
docker commit [your container name/id] [repo/name:tag]
(Use docker ps to list your containers. Do this in a new bash/cmd/PowerShell session, or you will lose your work when you exit your docker container.)
Then start a new container without the --rm flag:
docker run -it [repo/name:tag]
Disclaimer:
In a production environment, you should never change a container by running bash or sh in it. Use a Dockerfile and docker build instead. A Dockerfile will give you a reproducible configuration even if you delete your container. By design, a container should not hold any important data (i.e. it is not persistent). Use the image and volumes to save your custom changes and configurations.
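As a minimal sketch for the Tomcat case above (the .war file name is hypothetical), the Dockerfile could look like:
FROM tomcat:9
# bake the webapps into the image instead of creating them inside a running container
COPY mywebapp.war /usr/local/tomcat/webapps/
Then docker build -t my-tomcat . and docker run -it --rm -p 8080:8080 my-tomcat give you a container you can delete freely, because the webapps live in the image.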
I have 2 docker containers which I created from 2 Dockerfiles.
docker run container1 # It updates a txt file (update.txt) every minute and stores it in the same container
docker run --link container1 container2 # A web server which is intended to read the updated file in container1
Now I want to access the file update.txt in container2, but I can't do that. I don't want to just copy the file, since it would become static; I want to read the dynamically updated file so I always get the latest updates. Can anyone suggest a way out?
Use a named volume to store update.txt on the host.
Mount this volume in both containers.
All changes that container 1 writes will then be accessible in container 2.
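Applied to the setup in the question, a sketch (the volume name and mount path are assumptions, and both apps would need to read/write the file under /shared):
docker volume create shared-data
docker run -d --name c1 -v shared-data:/shared container1      # writes /shared/update.txt every minute
docker run -d --name c2 -v shared-data:/shared:ro container2   # the web server reads /shared/update.txt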
First, create a Docker volume using the command below:
$ docker volume create --name sharedVolume
sharedVolume
Then start the first container, mounting the volume created above, and write data to the location where the volume is mounted:
$ docker run -it -v sharedVolume:/dataToWrite ubuntu
root@1021d9260d7b:/# echo "DATA Written" >> /dataToWrite/Example.txt
root@1021d9260d7b:/# cat /dataToWrite/Example.txt
DATA Written
Now start the second container, mounting the same volume you created above, and check whether the same file is present in the second container:
$ docker run -it -v sharedVolume:/dataToWrite alpine
/ # cat /dataToWrite/Example.txt
DATA Written
As you can see above, the first container is ubuntu and the second is alpine. Content written in the first container is present in the second container.
I have a dockerized application that uses the filesystem to store lots of state. The application code is contained in the Docker image.
I am considering an update strategy which involves sharing the volume between two containers, while making sure that at most one container at a time can write to that filesystem.
The workflow would be:
start container A with /data mounted rw
start container B with /data mounted ro, and a newer version of the application
stop serving requests to container A
for container A, make the /data mount read-only
for container B, make the /data mount read-write
start serving requests to container B
You can re-mount your volume from inside the container, in rw mode, like this:
mount -o remount,rw /mnt/data
The catch is that the mount syscall is not allowed inside Docker containers by default, so you would have to run the container in privileged mode:
docker run --privileged ...
or enable the SYS_ADMIN capability
SYS_ADMIN Perform a range of system administration operations.
docker run --cap-add=SYS_ADMIN --security-opt apparmor:unconfined
(note that I have had to also add --security-opt apparmor:unconfined, to make this work on Ubuntu).
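Putting it together, a sketch of the flow described in the question (the volume name, paths, and image tag are examples):
# start container B with the volume mounted read-only, but with the capability needed to remount later
docker run -d --cap-add=SYS_ADMIN --security-opt apparmor:unconfined -v app-data:/data:ro --name app-b <name:new-tag>
# later, when B should take over writes:
docker exec app-b mount -o remount,rw /data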
Also, remounting the rw volume back to ro might be tricky, as some process(es) might have already opened files inside it for writing, in which case the remount will fail with a "busy" error message.
But my guess is that you can just restart the container instead (as it would be the one running an old version of the app).
Not exactly what the OP requested, but I've had a similar question where I needed to get data OUT of the running container, but had mounted the folder read-only.
Other ways to extract the data would have taken too long.
My approach? Stash the container as an image and start a new container from that image with the mount as RW :D
Initial container start:
docker run -p 80:8080 --mount type=bind,source="C:\data-folder-local\",target=/data-folder-container-ro,readonly -d imageName:imageTag
Make an image from the container. You can stop this container before/after if you want.
docker commit -a "mud" -m "Damn, mount should be rw, stashing a snapshot to reuse." CONTAINER_ID_HERE snapshotImageName:snapshotImageTag
where CONTAINER_ID_HERE comes from the output of docker ps (https://docs.docker.com/engine/reference/commandline/ps/).
Start a new container from the image you just made, but this time mount with write rights:
docker run -p 80:8080 --mount type=bind,source="C:\data-folder-local\",target=/data-folder-container-rw -d snapshotImageName:snapshotImageTag
Now write out files to the mount folder (on the local system) from within your container :D
Hope that helps somebody.
I am running a Docker container, and I would like to save my work - the docs just aren't 100% clear on how to do this, so I'm asking here. I opened the container using:
docker run -it [public dockerhub name]
Now I would like to save all my work locally so that I can come back to it. I don't particularly want to check it into dockerhub, unless that's advisable.
Here's what I have done. I opened a new Docker CLI tab, and ran docker ps there to find the ID of the running container. Then in the same tab I tried doing this:
docker commit <docker-id> me/myinstance
This gave me a commit hash.
Can I now safely exit the running docker instance? What command would I use to open it again - do I need to store the commit hash, or can I just do docker run -it me/myinstance?
As the docs mention:
You pull an image from Docker hub
You run that image in a container using docker run <image>
When you make changes to a container, you're not changing the underlying image, so those changes are not persisted if the container is stopped. To persist the changes you've made to the container, you create a new image with docker commit <container_id>
In the example that is on Docker docs:
# What containers are running on my system?
$ docker ps
ID IMAGE COMMAND CREATED
c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago
197387f1b436 ubuntu:12.04 /bin/bash 7 days ago
# Create a new image called svendowideit/testimage, tag it as "version3"
$ docker commit c3f279d17e0a svendowideit/testimage:version3
f5283438590d
# What images do I have on my system?
$ docker images
REPOSITORY TAG ID
svendowideit/testimage version3 f5283438590d
This way, you have persisted the changes to container c3f279d17e0a, on a new image, called svendowideit/testimage:version3.
Now you have an image with your modification, so you can run it as many times as you want on a container:
$ docker run svendowideit/testimage:version3
Again, containers are stateless. Any change you make inside a container is lost when that container stops. One way to persist data even after a container exits is by using volumes. This way your container has access to a directory in the host filesystem that it can read and write.
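For example, reusing the image name from above (the volume name and mount point are placeholders):
docker run -v mydata:/data svendowideit/testimage:version3
Anything written under /data now survives the removal of the container, because it lives in the mydata volume.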
Changes made inside a container are not lost when the container exits, and containers (container applications) are not stateless unless you have specifically separated the data storage from the application (by mounting folders from the host filesystem, or by sending data to a database outside of the container).
To see your changes persisted in a container, start the old container (docker start ~) instead of creating a new container (docker run ~).
This is easier to do if you name your containers, e.g.:
docker run -it --name containerName imageName
do stuff to your container
docker kill containerName
docker start containerName
You will see that your changes are persisted in that container.
You can also commit your container as an image, which can be pushed to a registry or exported to a file.
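For example (the registry, image, and file names are placeholders):
docker commit containerName myrepo/myimage:v1   # snapshot the container as an image
docker push myrepo/myimage:v1                   # push it to a registry
docker save -o myimage.tar myrepo/myimage:v1    # or export the image to a tar file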
I'm trying to go deeper in my understanding of Docker volumes, and I'm having a hard time figuring out the differences / use cases of:
The docker volume create command
The docker run -v /host_path:/container_path flag
The VOLUME entry in the Dockerfile
I particularly don't understand what happens if you combine the VOLUME entry with the -v flag.
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means that each time a container is started from the image, the volume is created (empty), even if you don't pass any -v option.
You can declare it at runtime with docker run -v [host-dir:]container-dir.
Combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into the volume persisted by the container in /var/lib/docker/volumes/...
docker volume create creates a volume without having to define a Dockerfile, build an image and run a container. It is used to quickly allow other containers to mount said volume.
If you had persisted some content in a volume, but have since deleted the container (which by default does not delete its associated volume, unless you use docker rm -v), you can re-attach said volume to a new container (declaring the same volume).
See "Docker - How to access a volume not attached to a container?".
With docker volume create, it is easy to re-attach a named volume to a container.
docker volume create --name aname
docker run -v aname:/apath --name acontainer <image>
...
# modify data in /apath
...
docker rm acontainer
# let's mount aname volume again
docker run -v aname:/apath --name acontainer <image>
ls /apath
# you find your data back!
The VOLUME instruction becomes interesting when you combine it with the --volumes-from runtime parameter.
Given the following Dockerfile:
FROM busybox
VOLUME /myvolume
Build an image with:
docker build -t my-busybox .
And spin up a container with:
docker run --rm -it --name my-busybox-1 my-busybox
The first thing to notice is that you will have a folder named myvolume in this container. But it is not particularly interesting, since when we exit the container the volume will be removed as well (because of --rm).
Create an empty file in this folder by running the following in the container:
cd myvolume
touch hello.txt
Now spin up a new container, but share the same volume with my-busybox-1:
docker run --rm -it --volumes-from my-busybox-1 --name my-busybox-2 my-busybox
You will see that my-busybox-2 contains the file hello.txt in myvolume folder.
Once you exit both containers, the volume will be removed as well.
@radium226
Specifying VOLUME in a Dockerfile makes sure the folder is treated as a volume (i.e., it lives outside the container) at runtime, as opposed to being a regular directory inside the container. Note the performance and accessibility implications.
If you forgot to specify -v in the docker run command line, the above is still true; the volume name just becomes anonymous. But there are still ways to access or recover data from such anonymous volumes.
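For example, one way to find and reattach such an anonymous volume (the container name here is a placeholder):
docker inspect -f '{{ json .Mounts }}' mycontainer   # shows the generated name of the anonymous volume
docker volume ls                                     # lists all volumes, anonymous ones included
docker run --rm -it -v <anonymous-volume-name>:/myvolume busybox ls /myvolume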
Using MySQL from Docker Hub:
Running the below command as an example:
$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
Therefore, the mounted directory persists even when the container is killed, and it can also provide higher performance for some operations, such as database actions.
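You can verify the persistence by removing the container and starting a fresh one against the same host directory:
docker rm -f some-mysql
# the databases written to /my/own/datadir are still there
docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag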