What happens when the docker host is shut down and restarted?
will the images that were running be restarted?
will the changes that were made to those images persist, or will a new instance of the image be spawned and changes be lost?
does docker have any configuration option, such as a list of images to be automatically executed at startup and the options to run them? If so, where? If not, I suppose only the docker command line can be used to alter docker state. Where is that state stored (I suppose somewhere in /var)? This could be useful for backing up the docker state.
(I'd have liked to find this in the FAQ)
will the images that were running be restarted?
Docker will restart containers when the daemon restarts if you pass -r=true to the daemon's startup options. On Ubuntu, you can make this permanent by setting DOCKER_OPTS="-r=true" in /etc/default/docker.
will the changes that were made to those images persist, or will a new instance of the image be spawned and changes be lost?
Containers will be stopped. Any modifications made to a container will still be present the next time it starts, which happens automatically when the docker daemon starts if -r=true is set as described above.
where is the docker configuration stored on the host system?
There is no configuration file per se. You can tune the upstart/init options in /etc/default/docker.
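As a sketch, the relevant line in /etc/default/docker on an upstart-based Ubuntu would look like the fragment below. Note that the -r flag applies to older Docker daemons; it was later superseded by per-container restart policies (docker run --restart=always ...), so treat which form applies as an assumption about your Docker version.

```shell
# /etc/default/docker -- options passed to the docker daemon at boot.
# Older daemons only: restart previously-running containers when the
# daemon itself starts.
DOCKER_OPTS="-r=true"
```

On newer Docker versions the equivalent is a restart policy set per container, e.g. docker run --restart=always, which survives both container exit and daemon restart.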
Related
I use docker-compose to start a set of unrelated docker containers. I use docker-compose for that because of the ease of configuration via docker-compose.yaml and the centralized configuration this file brings.
One problem I have is the update of images, or actually of containers after an image update. I update them via docker-compose pull but the containers previously spawned do not restart by themselves. I have two possible solutions, both doable but none ideal:
restart all the containers after a pull. This would introduce unavailability which is not a critical thing in my home environment but still (especially Home Assistant restarting is a pain as the lights are reset)
write some code to check which images IDs have changed during the pull and restart the relevant containers (removing them first). This is the solution I will be using if there is nothing better.
I was wondering if there was a better solution.
This is a home environment so I would like to avoid heavy duty solutions such as Kubernetes.
Swarm mode could work, but I have only just read about it and it looks more like a solution for ensuring state than a container manager (in the sense that it would restart containers based on the freshness of the image they were spawned from).
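For option 2, the image-ID comparison itself is simple bookkeeping. A hypothetical bash sketch (the changed_services function and the "service image_id" file format are my own invention, not docker-compose features) could look like:

```shell
#!/usr/bin/env bash
# Sketch: given two files containing lines of the form "service image_id",
# recorded before and after `docker-compose pull`, print the services whose
# image ID changed and therefore need their containers recreated.
changed_services() {
  local before="$1" after="$2"
  declare -A old_id                       # service -> image ID before pull
  while read -r svc id; do
    old_id["$svc"]="$id"
  done < "$before"
  while read -r svc id; do
    # A missing or different old ID means the image was updated (or new).
    if [[ "${old_id[$svc]}" != "$id" ]]; then
      echo "$svc"
    fi
  done < "$after"
}
```

The before/after files could be produced with docker inspect --format on each service's image, and each printed service could then be recreated individually, e.g. with docker-compose up -d --no-deps on just that service.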
After you docker pull the new images, docker-compose -f "docker-compose.yml" up -d will only recreate the containers for which a new version of the image was pulled. It will not touch the containers whose image stayed the same. This setup works fine for me.
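Assuming a standard compose setup, the update cycle described above is just the two commands below (a sketch, not verified against your particular stack):

```shell
# Pull newer images for all services defined in docker-compose.yml,
# then recreate only the containers whose image or config changed.
docker-compose pull
docker-compose -f docker-compose.yml up -d
```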
docker-compose up --force-recreate -d
if there are existing containers for a service, and the service’s
configuration or image was changed after the container’s creation,
docker-compose up picks up the changes by stopping and recreating the
containers (preserving mounted volumes). To prevent Compose from
picking up changes, use the --no-recreate flag.
If you want to force Compose to stop and recreate all containers, use
the --force-recreate flag.
From the docker-compose up CLI reference:
--force-recreate: Recreate containers even if their configuration and image haven't changed.
Trying to make sure I understand the proper usage of docker volumes. If I have a container running MongoDB that I plan to start and stop, do I need a volume configured when I first "docker run" it? My understanding is that if I use docker run once, then docker stop/start, my data is saved inside the container. The volume is more useful if multiple containers want access to the data. Is that accurate, or am I misunderstanding something?
Starting and stopping a container will not delete the container-specific data. However, you upgrade containers by replacing them with new containers. Any changes to the container-specific read/write layer will be lost when that happens, and the new container will go back to its initial state. If there are files inside your container that you want to preserve when the container is replaced, then you need to store those files in a volume, and then mount that same volume in the new container.
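A named-volume sketch for the MongoDB case (the volume name mongo-data is an arbitrary choice, and newer-tag is a placeholder; /data/db is the data path used by the official mongo image):

```shell
# Create the container with a named volume holding the data directory.
docker run -d --name mongo -v mongo-data:/data/db mongo

# Later, replace the container (e.g. after an image update) and
# re-attach the same volume so the data survives the replacement.
docker rm -f mongo
docker run -d --name mongo -v mongo-data:/data/db mongo:newer-tag
```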
We have installed Ephesoft in docker.
One file in particular, dcma-batch.properties in Ephesoft/Application/WEB-INF/classes/META-INF/dcma-batch/, is reset after the docker container exits and is relaunched. Is there any way of stopping this?
If you remove a docker container and run a new one from the image, all changes are lost and the new container is reset to the initial image. (Stopping and restarting the same container, by contrast, preserves its filesystem.)
If you want persistent storage, look into docker volumes here:
https://docs.docker.com/engine/admin/volumes/
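Concretely, one option is to bind-mount the directory containing dcma-batch.properties from the host so it survives container replacement. The host path /opt/ephesoft-conf below is an arbitrary example, and the container path assumes Ephesoft is installed at /Ephesoft inside the image; adjust both to your actual layout.

```shell
# Keep the properties directory on the host so a recreated container
# sees the same files instead of the image's pristine copy.
docker run -d \
  -v /opt/ephesoft-conf:/Ephesoft/Application/WEB-INF/classes/META-INF/dcma-batch \
  my-ephesoft-image
```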
Let's assume I have a Python program running inside a docker container
import time

counter = 0
while True:
    counter += 1
    print(counter)
    time.sleep(1)
What happens if I do a commit on that running container, and then use that new image to run a new container?
The docs state that a running container is paused (cgroups freezer) and unpaused again after committing. What state is the resulting image in? SIGKILL? I assume that the python program won't be running anymore when doing a docker run on that image, correct?
I'm asking because I have a couple of Java servers (Atlassian) running in the container, so I wonder if I'm doing daily backups via commit on that container, and I then "restore" (docker run ... backup/20160118) one of the backups, in what state will the servers be in?
Docker commit only commits the filesystem changes of a container, so any file that has been added, removed, or modified on the filesystem since the container was started.
Note that any volume (--volume or VOLUME inside the dockerfile) is not part of the container's filesystem, so won't be committed.
In-memory state: "Checkpoint and Restore"
Committing a container, including its current (in-memory) state, is a lot more complex. This process is called "checkpoint and restore". You can find more information about it at https://criu.org. There is currently a pull request to add basic support for checkpoint and restore to Docker (https://github.com/docker/docker/pull/13602), but that feature does not yet support "migrating" such containers to a different machine.
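For reference, this work later landed as the experimental docker checkpoint command (it requires CRIU installed and the daemon running in experimental mode; the container name and checkpoint name below are placeholders):

```shell
# Freeze the running container's in-memory state into a named checkpoint
# (the container stops as part of this, unless --leave-running is given).
docker checkpoint create mycontainer checkpoint1

# Start the container again from that checkpoint instead of from scratch,
# restoring the process state that was frozen.
docker start --checkpoint checkpoint1 mycontainer
```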
I have a running docker container with a base image fedora:latest.
I would like to preserve the state of my running applications, but still update a few packages which got security fixes (i.e. gnutls, openssl and friends) since I first deployed the container.
How can I do that without interrupting service or losing the current state?
So optimally I would like to get a bash/csh/dash/sh on the running container, or any fleet magic?
It's important to note that you may run into some issues with the container shutting down.
For example, imagine that you have a Dockerfile for an Apache container which runs Apache in the foreground. Imagine that you attach a shell to your container (via docker exec) and you start updating. You have to apply a fix to Apache and, in the process of updating, Apache restarts. The instant that Apache shuts down, the container will stop. You're going to lose the current state of the applications. This is going to require extremely careful planning and some luck, and some updates will probably not be possible.
The better way to do it is to rebuild the image upon which the container is based with all the appropriate updates, then re-run the container. There will be a (brief) interruption in service. However, for you to be able to save the state of your applications, you need to design the images so that any state information that must be preserved is stored in a persistent manner: either in the host file system by mounting a directory, or in a data container.
In short, if you're going to lose important information when your container shuts down, then your system is fragile and you're going to run into problems sooner or later. Better to redesign it so that everything that needs to be persistent is saved outside the container.
If the docker container has a running bash
docker attach <containerIdOrName>
Otherwise execute a new program in the same container (here: bash)
docker exec -it <containerIdOrName> bash