Docker container created and deleted automatically

I'm trying to start up Tomcat on Docker Desktop, and I followed the official Tomcat tutorial on Docker Hub. But somehow I found that Docker creates a new container every time I run the command docker run -it --rm tomcat, and deletes the container automatically when Tomcat shuts down.
I already know the reason: the --rm flag automatically removes the container when it exits.
Now I have finally built web apps on Tomcat, and I don't want them to vanish.
How can I save my container before it's deleted?
Thanks! ;D

Based on what I've found on the internet, removing the --rm flag from an existing container is not currently possible. docker update gives you the ability to change some parameters after you start your container, but according to the documentation you cannot update the cleanup flag (--rm).
References:
I started a docker container with --rm Is there an easy way to keep it, without redoing everything?
Cancel --rm option on running docker container
But a workaround can be applied: commit your current container to an image, which acts as a checkpoint, and then start a new container from that image without the --rm flag. You can use docker commit to do so:
docker commit [your container name/id] [repo/name:tag]
(Use docker ps to list your containers. Do this in a new bash/cmd/PowerShell session, or you will lose your work when you exit your Docker container.)
Then start a new container without the --rm flag:
docker run -it [repo/name:tag]
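For the Tomcat case above, the whole flow might look like this (the container ID and the image name myname/tomcat-snapshot:v1 are placeholders for illustration):
# in a second terminal, while the original container is still running
docker ps
docker commit <container_id> myname/tomcat-snapshot:v1
# once the commit is done, start a fresh container from the snapshot without --rm
docker run -it -p 8080:8080 myname/tomcat-snapshot:v1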
Disclaimer:
In a production environment, you should never change a container by running bash or sh inside it. Use a Dockerfile and docker build instead. A Dockerfile gives you a reproducible configuration even if you delete your container. By design, a container should not hold any important data (i.e., it is not persistent). Use the image and volumes to save your custom changes and configuration.
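As a rough sketch of that approach, assuming your applications are packaged as WAR files and using the official tomcat image's default webapps directory (file and image names are hypothetical):
# Dockerfile
FROM tomcat:9
COPY mywebapp.war /usr/local/tomcat/webapps/

# build a reproducible image and run it; --rm is now harmless, since the apps live in the image
docker build -t myname/tomcat-webapps:v1 .
docker run -it --rm -p 8080:8080 myname/tomcat-webapps:v1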

Related

Docker - Change existing containers settings without RUN command

Is it possible to change the settings of a Docker container, like the entrypoint, ports, or memory limits, without having to delete the container and re-run it with the docker run command? Example: docker stop <container_id>, change the settings, and then docker start <container_id>?
When you use docker run -d image_name, some images try to initialize from scratch, and as a result I can't use the same volume.
Is it possible to change the settings by stopping the container instead of re-running it?
You need to stop, delete, and recreate the container.
# this is absolutely totally 100% normal and routine
docker stop my_container
docker rm my_container
# docker build -t image_name .
docker run -d -p 12345:8000 --name my_container image_name
This isn't specific to Docker. If you run any command in any Unix-like environment and you want to change its command-line parameters or environment variables, you need to stop the process and create a new one. A Docker container is a wrapper around a process with some additional isolation features, and for a great many routine things you're required to delete the container. In cluster container environments like Kubernetes, this is routine enough that changing any property of a Deployment object will cause all of the associated containers (Kubernetes Pods) to get recreated automatically.
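For instance, in Kubernetes (the deployment and container names here are hypothetical), changing the image of a Deployment replaces all of its Pods automatically:
kubectl set image deployment/my-app web=myname/my-app:v2
kubectl rollout status deployment/my-app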
There are a handful of Docker commands that exist but are almost never used in normal operation. docker start is among these; just skip over it in the documentation.
When you use docker run -d image_name, some images try to initialize from scratch, and as a result I can't use the same volume.
In fact, the normal behavior of docker run is that you're always beginning the program from a known "clean" initial state; this is easier to set up as an application developer than trying to recover from whatever state the previous run of the application might have been left in.
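The usual way to keep data across that clean start is to store it outside the container, for example in a named volume; a minimal sketch, with appdata and the mount path as placeholders:
docker run -d --name my_container -v appdata:/var/lib/myapp image_name
docker stop my_container
docker rm my_container
# the named volume survives, so a new container picks up the same data
docker run -d --name my_container -v appdata:/var/lib/myapp image_name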
If you need to debug the image startup, an easy thing to do is to tell the container to run an interactive shell instead of its default command:
docker run --rm -it image_name /bin/sh
(Some images may have bash available which will be more comfortable to work in; some images may require an awkward docker run --entrypoint option.) From this shell you can try to manually run the container startup commands and see what happens. You don't need to worry about damaging the container code in any particular way, since anything you change in this shell will get lost as soon as the container exits.
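For reference, that --entrypoint form might look like this (still using the placeholder image_name); anything after the image name becomes arguments to the new entrypoint:
docker run --rm -it --entrypoint /bin/sh image_name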

How to use a container that exits immediately after startup in Docker?

As everyone knows, we can use docker start [dockerID] to start a stopped container.
But what if the container exits immediately after startup? What should I do?
For example, I have a MySQL container that runs without any problems. But then the system went down, and the next time I start this container it tells me a file is corrupted, so the container exits immediately.
Now I want to delete that file, but the container cannot be started, so I can't get inside it to delete the file. What should I do?
And if I want to open a bash shell in a container in this state, what should I do?
Delete the container and launch a new one.
docker rm dockerID
docker run --name dockerID ... mysql:5.7
Containers are generally treated as disposable; there are times you're required to delete and recreate a container (to change some networking or environment options; to upgrade to a newer version of the underlying image). The flip side of this is that containers' state is generally stored outside the container filesystem itself (you probably have a docker run -v or Docker Compose volumes: option) so it will survive deleting and recreating the container. I almost never use docker start.
Creating a new container gets you around the limitations of docker start:
If the container exits immediately but you don't know why, docker run or docker-compose up it without the -d option, so it prints its logs to the console
If you want to run a different command (like an interactive shell) as the main container command, you can do it the same as any other container,
docker run --rm -it -v ...:/var/lib/mysql/data mysql:5.6 sh
docker-compose run db sh
If the actual problem can be fixed with an environment variable or other setting, you can add that to the startup-time configuration, since you're already recreating the container (a combined sketch follows this list)
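Putting those together for the MySQL case, one possible sequence is below. The volume name mysql_data and the broken file are hypothetical, the data directory is assumed to be /var/lib/mysql (the default for the official mysql image), and which file is safe to delete depends on the actual error message:
# run a throwaway debug shell with the same data volume mounted
docker run --rm -it -v mysql_data:/var/lib/mysql mysql:5.7 bash
# inside that shell, inspect the data directory and remove the offending file
ls /var/lib/mysql
rm /var/lib/mysql/<broken-file>
exit
# then recreate the real container against the same volume
docker run -d --name mysql -v mysql_data:/var/lib/mysql mysql:5.7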

Docker: How does a container persist data without volumes in the container?

I'm running the official solr 6.6 container in a docker-compose environment without any relevant volumes.
If I modify a running solr container, the data survives a restart.
I don't see any volumes mounted, and it works for a plain solr container:
docker run --name solr_test -d -p 8983:8983 -t library/solr:6.6
docker exec -it solr_test /bin/bash -c 'echo woot > /opt/solr/server/solr/testfile'
docker stop solr_test
docker start solr_test
docker exec -it solr_test cat /opt/solr/server/solr/testfile
The above example prints 'woot'. I thought that a container doesn't persist any data? Also, the documentation mentions that the solr cores are persisted in the container.
All I found regarding container persistence is that I need to add volumes on my own, as mentioned here.
So I'm confused: do containers store the data changed within the container or not? And how does the solr container achieve this behaviour? The only option I see is that I misunderstood persistence in the case of Docker, or that the build of the container can set some kind of option to achieve this which I don't know about and didn't see in the solr Dockerfile.
This is expected behaviour.
The data you create inside a container persists as long as you don't delete the container.
But think of containers with a throw-away mentality. Normally you want to be able to remove the container with docker rm and spawn a new instance, including your modified config files. That's why you would need, for example, a named volume here, which survives the container life cycle on your host.
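A sketch of that, reusing the path from the question (solr_data is just a placeholder volume name):
docker run --name solr_test -d -p 8983:8983 -v solr_data:/opt/solr/server/solr library/solr:6.6
# the named volume outlives the container
docker rm -f solr_test
docker run --name solr_test -d -p 8983:8983 -v solr_data:/opt/solr/server/solr library/solr:6.6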
The Dockerfile, since you mention it in your question, actually only defines the image. When you call docker run you create a container from it, exactly as defined in the image: a fresh instance without any modifications.
When you call docker commit on your container, you snapshot it (including the changes you made to its files) and create a new image out of it. That is one way data persistence across containers can be achieved.
The documentation you're referring to explains this in detail.

Docker: How to save running instance?

I am running a Docker container, and I would like to save my work; the docs just aren't 100% clear on how to do this, so I'm asking here. I opened the container using:
docker run -it [public dockerhub name]
Now I would like to save all my work locally so that I can come back to it. I don't particularly want to check it into dockerhub, unless that's advisable.
Here's what I have done: I opened a new Docker CLI tab and ran docker ps there to find the ID of the running container. Then in the same tab I tried this:
docker commit <docker-id> me/myinstance
This gave me a commit hash.
Can I now safely exit the running docker instance? What command would I use to open it again - do I need to store the commit hash, or can I just do docker run -it me/myinstance?
As the docs mention:
You pull an image from Docker Hub
You run that image in a container using docker run <image>
When you make changes to a container, you're not changing the underlying image, so those changes are not persisted if the container is stopped. To persist the changes you've made to the container, you create a new image with docker commit <container_id>
In the example from the Docker docs:
# What containers are running on my system?
$ docker ps
ID IMAGE COMMAND CREATED
c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago
197387f1b436 ubuntu:12.04 /bin/bash 7 days ago
# Create a new image called svendowideit/testimage, tag it as "version3"
$ docker commit c3f279d17e0a svendowideit/testimage:version3
f5283438590d
# What images do I have on my system?
$ docker images
REPOSITORY TAG ID
svendowideit/testimage version3 f5283438590d
This way, you have persisted the changes made to container c3f279d17e0a in a new image called svendowideit/testimage:version3.
Now you have an image with your modifications, so you can run it in a container as many times as you want:
$ docker run svendowideit/testimage:version3
Again, containers are stateless. Any change you make inside a container is lost when that container stops. One way to persist data even after a container exits is by using volumes. This way your container has access to a directory in the host filesystem that you can read and write.
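A minimal sketch of the volume approach, reusing the image from the example above (the host directory and container path are placeholders):
# anything written to /work inside the container ends up in ./work on the host
docker run -it -v "$PWD/work":/work svendowideit/testimage:version3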
Changes made inside a container are not lost when the container exits, and containers (container applications) are not stateless unless you have specifically separated data storage from the application (by mounting folders from the host filesystem or sending data to a database outside the container).
To see your changes persisted in a container, start the old container (docker start ~) instead of creating a new container (docker run ~).
This is easier to do if you name your containers.
i.e.
docker run -it --name containerName imageName
# do stuff to your container
docker kill containerName
docker start containerName
You will see that your changes are persisted in that container.
You can also commit your container as an image, which can be pushed to a registry or exported to a file.
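For example (repository, tag, and file names are placeholders):
docker commit containerName myrepo/myimage:snapshot
# push the snapshot to a registry
docker push myrepo/myimage:snapshot
# or export it to a tar file you can move around
docker save -o myimage.tar myrepo/myimage:snapshot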

How to edit file inside docker which is exited?

I edited a file in a running Docker container and restarted it; unfortunately, my last edit was not correct. So every time I start the container with:
docker start <containerId>
It always exits immediately.
Now I cannot even revert my edit, since
docker exec -it <containerId> bash
can only be run against a running container.
The question is: how can I change the file and restart the container now? Or do I have to abandon it and start a new container from an existing image?
You didn't supply any details regarding your container's purpose, or what you modified. Conceptually, you could create the file that needs to be modified in a place on your filesystem and mount that file into the container as a volume when you start it, like:
docker run -it -v /Users/<path_to_file>:<container_path_to_file> <container>
However, this is bad form, as your container loses portability at that point without committing a new image.
Ideally, changes that need to be made inside of a Docker container are made in the Dockerfile, and the container image re-built. This way, your initial, working container state is represented in your Dockerfile code, making your configuration repeatable, portable, and immutable.
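A rough sketch of that, with entirely hypothetical names since the container's purpose wasn't given:
# Dockerfile
FROM <your_base_image>
COPY fixed-config.conf /etc/myapp/myapp.conf

docker build -t myapp:fixed .
docker run -d --name myapp myapp:fixed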
The file system of exited containers can still be changed. The preferable way is probably:
docker cp <fixedFile> <containerId>:<brokenFile>
But you can also circumvent docker completely; see here.
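A concrete, hypothetical example of the docker cp route (the file path is made up; use the path of the file you actually broke):
docker cp ./config.fixed <containerId>:/etc/myapp/config
docker start <containerId>
# confirm the container stays up
docker logs -f <containerId>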
