I run a specific docker image for the first time:
docker run [OPTIONS] image [CMD]
Some of the options I supply include --link (link with other containers) and -p (expose ports)
I noticed that if I kill that container and simply do docker start <container-id>, Docker honors all the options that I specified during the run command including the links and ports.
Is this behavior explicitly documented and can I always count on the start command to reincarnate the container with all the options I supplied in the run command?
Also, I noticed that killing/starting a container which is linked to another container updates the upstream container's /etc/hosts file automatically:
A--(link)-->B (A has an entry in /etc/hosts for B)
If I kill B, B will normally get a new IP address. I notice that when I start B, the entry for B in A's /etc/hosts file is automatically updated... This is very nice.
I read here that --link does not handle container restarts... Has this been updated recently? If not, why am I seeing this behavior?
(I'm using Docker version 1.7.1, build 786b29d)
Yes, things work as you describe :)
You can rely on the behaviour of docker start as it doesn't really "reincarnate" your container; it was always there on disk, just in a stopped state. It will also retain any changes to files, but changes in RAM, such as process state, will be lost. (Note that kill doesn't remove a container, it just stops it with a SIGKILL rather than a SIGTERM, use docker rm to truly remove a container).
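For example, a minimal sequence illustrating the difference (container name and image here are just placeholders, not from the question):
docker run -d --name web -p 8080:80 nginx   # first run: the ports (and any --link options) are stored in the container's config
docker kill web                             # stops the container with SIGKILL; it stays on disk in the Exited state
docker start web                            # starts the same container again with the same -p/--link settings
docker rm web                               # only this removes the container for good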
Links are now updated when a container changes IP address due to a restart. This didn't use to be the case. However, that's not what the linked question is about - they are discussing whether you can replace a container with a new container of the same name and have links still work. This isn't possible, but that scenario will be covered by the new networking functionality and "service" objects, which are currently in the Docker experimental channel.
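You can check the /etc/hosts update yourself with something like the following (using the container names A and B from the question):
docker exec A cat /etc/hosts     # shows an entry for B at its current IP
docker restart B                 # B may come back with a different IP
docker exec A cat /etc/hosts     # the entry for B now points at the new IP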
Related
TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file along with running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the term recreate at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I have already found answers about applying specific configuration changes, e.g. this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the term recreate at all.
docker-compose is a high-level tool; it performs in a single operation what would require multiple commands using the docker CLI. When the docker-compose documentation says, "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
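For instance, recreating a container just to change its published port might look like this (name and image are only placeholders):
docker stop web
docker rm web
docker run -d --name web -p 9090:80 nginx   # same image and options as before, except the new port mapping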
Let's say it recognizes a change in container1; it then does roughly the following (not literally these commands, as it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
Which is roughly equivalent to (depending on your compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes projectname_container1
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and possibly more.
So I would advise using the docker compose commands for single services instead of the docker CLI where suitable.
The issue is that the variables and settings are not exposed through any docker APIs. It may be possible by connecting directly to the docker socket, parsing the variables, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. You can just save the command you need to run into a text file, give it a .sh extension, make it executable (chmod +x), then run it. Then when you stop/delete the container, you can just rerun the shell script.
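A minimal sketch of such a script (container name, image and options are purely illustrative):
#!/bin/sh
# recreate-myapp.sh: remove any previous instance, then start a fresh one with a fixed set of options
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp -p 8080:80 -v "$PWD/data:/data" myimage:latest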
Another thing you can do would be to replace the docker command with a function (in something like your ~/.bashrc) that stores the arguments to a text file and then rechecks that text file when passed an argument (like "recreate" followed by a name). However, I'm more a fan of putting docker containers in their own shell scripts, as it's more portable. A minimal sketch of the wrapper-function idea is shown below.
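This sketch only covers the "store the arguments" part (the log file path is illustrative):
# in ~/.bashrc: log every docker invocation before running the real binary
docker() {
    echo "docker $*" >> "$HOME/.docker_command_log"
    command docker "$@"
}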
I am trying Docker for the first time and do not yet have a "mental model". Total beginner.
All the examples that I am looking at have included the --rm flag to run, such as
docker run -it --rm ...
docker container run -it --rm ...
Question:
Why do these commands include the --rm flag? I would think that if I were to go through the trouble of setting up or downloading a container with the good stuff in it, why remove it? I want to keep it to use again.
So, I know I have the wrong idea of Docker.
A container is merely an instance of the image you use to run it.
The mindset when creating a containerized app is not to take a fresh, clean Ubuntu container, for instance, download the apps and configuration you wish to have in it, and then let it run.
You should treat the container as an instance of your application, but your application is embedded into an image.
The proper usage would be creating a custom image, where you embed all your files, configurations, environment variables etc, into the image. Read more about Dockerfile and how it is done here
Once you have done that, you have an image that contains everything, and in order to use your application, you just run the image with the proper port settings or other dynamic variables, using docker run <your-image>
Running containers with the --rm flag is good for containers that you use only for a short while just to accomplish something, e.g. compiling your application inside a container or testing that something works. Since you know it's a short-lived container, you tell your Docker daemon that once it's done running, it should erase everything related to it and free the disk space.
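For example, a throwaway compile container might look like this (image tag and paths are only illustrative):
docker run --rm -v "$PWD":/src -w /src golang:1.22 go build ./...   # build runs, then the container is removed automatically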
The flag --rm is used when you need the container to be deleted after the task for it is complete.
This is suitable for small testing or POC purposes and saves the headache of housekeeping.
From https://docs.docker.com/engine/reference/run/#clean-up---rm
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag
In short, it's useful to keep the host clean from stopped and unused containers.
When you run a container from an image using a simple command like docker run -it ubuntu, it spins up a container. You attach to your container using docker attach container-name (or use docker exec for a different session).
So, when you're inside your container working on it and you leave it in any way other than ctrl+p+q (typing exit, ctrl+z, etc.), your container exits. That means your container has stopped, but it is still available on your disk and you can start it again with docker start container-name/ID.
But when you run the container with the --rm flag, on exit the container is deleted permanently.
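A quick way to see the difference (container name and image are just examples):
docker run -it --name demo ubuntu bash    # type exit: the container stops but stays on disk
docker start -ai demo                     # brings the same container back
docker run -it --rm ubuntu bash           # type exit: this container is removed immediately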
I use --rm when connecting to running containers to perform some actions such as database backup or file copy. Here is an example:
docker run -v $(pwd):/mnt --link app_postgres_1:pg --rm postgres:9.5 pg_dump -U postgres -h pg -f /mnt/docker_pg.dump1 app_db
The above will connect to a running container named 'app_postgres_1' and create a backup. Once the backup command completes, the container is fully deleted.
The "docker run rm " command makes us run a new container and later when our work is completed then it is deleted by saving the disk space.
The important thing to note is that a container is just like a class instance and is not meant for data storage. It is better to delete it once the work is complete; when we start again, it starts fresh.
The question then arises: if the container is deleted, what about the data in the container? The data is actually saved on the local system and linked into the container when it is started. This concept is called a volume (or shared volume).
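A small illustration of that idea (volume name and paths are just examples):
docker run --rm -v mydata:/data ubuntu sh -c 'echo hello > /data/file'   # the container is removed on exit
docker run --rm -v mydata:/data ubuntu cat /data/file                    # prints "hello": the named volume survived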
I succeeded in connecting to a remote server configured with Docker through VS Code. However, a list of containers from the past was shown in the Remote Explorer of VS Code. If you look at this list of containers, they are clearly containers made from images I downloaded a few days ago. I don't know why this is happening.
Presumably, it is a problem with the settings.json file or a problem with some log.
I pressed f1 in vscode and select Remote-Containers: Attach to Running Container...
Then the docker command was entered automatically in the terminal. Here, a container (b25ee2cb9162) appeared whose origin I do not know.
After running this container, a new window opens with the message Starting Dev Container.
This is the list of containers that I said were created from images downloaded a few days ago. This is what VS Code showed me.
What's the reason that this happened?
The containers you are seeing are the same ones you would see if you ran docker container ls -a. They have exited and are not automatically cleaned up by Docker unless the --rm option was specified when they were run.
The docs for the --rm option explain the reason for this nicely:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag:
Per this answer about non-running containers taking up system resources, you don't have to be concerned about these taking up much space beyond minimal disk space.
To remove those containers, you have a few options:
[Preemptive] Use --rm flag when running container
You can pass the --rm flag when you run a container with the Docker CLI to remove the container after it has exited, so old containers don't accumulate.
As the docs mention, the downside is after the container exits, it's difficult to debug why the container exited if something failed inside the container.
See the docs here if using docker run: https://docs.docker.com/engine/reference/run/#clean-up---rm
See this answer if using docker-compose run
Clean up existing containers from the command line
Use the docker container prune command to remove all stopped containers.
See the docs here: https://docs.docker.com/engine/reference/commandline/container_prune/
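For example (the filter value is just an illustration):
docker container prune                        # prompts, then removes all stopped containers
docker container prune --filter "until=24h"   # only remove stopped containers created more than 24 hours ago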
See this related SO answer if you're looking for other options:
Clean up containers from VSCode
The VSCode Docker Containers extension lets you clean up containers: open the command palette and enter Docker Containers: Remove.
Or you can simply right click those containers.
I'm dockerizing some of our services. For our dev environment, I'd like to make things as easy as possible for our developers and so I'm writing some scripts to manage the dockerized components. I want developers to be able to start and stop these services just as if they were non-dockerized. I don't want them to have to worry about creating and running the container vs stopping and starting an already-created container. I was thinking that this could be handled using Fig. To create the container (if it doesn't already exist) and start the service, I'd use fig up --no-recreate. To stop the service, I'd use fig stop.
I'd also like to ensure that developers are running containers built from the latest images. In other words, something would check to see if there was a later version of the image in our Docker registry. If so, this image would be downloaded and run to create a new container from that image. At the moment it seems like I'd have to use docker commands to list the contents of the registry (docker search) and compare that to existing local containers (docker ps -a) with the addition of some grepping and awking, or use the Docker API to achieve the same thing.
Any persistent data will be written to mounted volumes so the data should survive the creation of a new container.
This seems like it might be a common pattern so I'm wondering whether anyone else has given these sorts of scenarios any thought.
This is what I've decided to do for now for our Neo4j Docker image:
I've written a shell script around docker run that accepts command-line arguments for the port, database persistence directory on the host, log file persistence directory on the host. It executes a docker run command that looks like:
docker run --rm -it -p ${port}:7474 -v ${graphdir}:/var/lib/neo4j/data/graph.db -v ${logdir}:/var/log/neo4j my/neo4j
By default port is 7474, graphdir is $PWD/graph.db and logdir is $PWD/log.
--rm removes the container on exit, however the database and logs are maintained on the host's file system. So no containers are left around.
-it allows the container and the Neo4j service running within it to receive signals, so that the service can be gracefully shut down (the Neo4j server gracefully shuts down on SIGINT) and the container exited by hitting ^C, or by sending it a SIGINT if the developer puts it in the background. No need for separate start/stop commands.
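A sketch of what such a wrapper script might look like (argument handling simplified; image name and defaults taken from above):
#!/bin/sh
# run-neo4j.sh [port] [graphdir] [logdir] -- defaults match the description above
port=${1:-7474}
graphdir=${2:-$PWD/graph.db}
logdir=${3:-$PWD/log}
docker run --rm -it -p "${port}:7474" \
  -v "${graphdir}:/var/lib/neo4j/data/graph.db" \
  -v "${logdir}:/var/log/neo4j" \
  my/neo4j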
Although I certainly wouldn't do this in production, I think this is fine for a dev environment.
I am not familiar with fig but your scenario seems good.
Usually, I prefer to kill/delete + run my container instead of playing with start/stop, though. That way, if there is a new image available, Docker will use it. This works only for stateless services. As you are using volumes for persistent data, you could do something like this.
Regarding the image update, what about running docker pull <image> every N minutes and checking the status that the command returns? If the image is up to date, do nothing; otherwise, kill/rerun the container.
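A rough sketch of that check (image, container name and volume path are placeholders):
# rerun the container only when "docker pull" reports a newer image
if docker pull myrepo/myservice:latest | grep -q 'Downloaded newer image'; then
  docker rm -f myservice
  docker run -d --name myservice -v /srv/myservice/data:/data myrepo/myservice:latest
fi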