How can I reuse a Docker container as a service?

I already have running containers for both postgres and redis that I use for various things. However, I started those from the command line months ago. Now I'm trying to install a new application, and the recipe for this involves writing out a docker compose file which includes both postgres and redis as services.
Can the compose file be modified in such a way as to specify the already-running containers? Postgres already does a fine job of siloing any of the data, and I can't imagine that it would be a problem to reuse the running redis.
Should I even reuse them? It occurs to me that I could run multiple containers for both, and I'm not sure there would be any disadvantage to that (other than a cluttered docker ps output).
When I set container_name to the names of the existing containers, I get what I assume is a rather typical error:
Error response from daemon: Conflict. The container name "…" is already in use by container "cb7cb3e78dc50b527f71b71b7842e1a1c". You have to remove (or rename) that container to be able to reuse that name.
Followed by a few that complain that the ports are already in use (5432, 6379, etc.).
Other answers here on Stack Overflow suggest that if I had originally invoked these services from another compose file with the exact same details, I could do so here as well and it would reuse them. But the command I used to start them somehow never made it into my .bash_history, so I'm not even sure of the details (other than the names, ports, and restart always).

Are you looking for docker-compose's external_links keyword?
external_links allows you to reuse already running containers.
According to the docker-compose specification:
This keyword links to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services. external_links follow semantics similar to the legacy option links when specifying both the container name and the link alias (CONTAINER:ALIAS).
And here's the syntax:
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql
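For instance, a minimal sketch of how this could look for the setup in the question (assuming the existing containers are named postgres and redis; check docker ps for the actual names, and note that depending on your Compose version the containers may also need to share a network):
services:
  app:
    image: myapp   # hypothetical image for the new application
    external_links:
      - postgres:postgres
      - redis:redis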

You can give your container a name. If there is no container with the given name, then it is the first time the image is run. If the named container is found, restart the container.
In this way, you can reuse the container. Here is my sample script.
containerName="IamContainer"
if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerName}\$"; then
  docker restart ${containerName}
else
  docker run --name ${containerName} -d hello-world
fi

You probably don't want to keep using a container that you don't know how to create. The good news, however, is that you should be able to figure out how to create your container again by inspecting it with the command
$ docker container inspect ID
This will display all settings; the docker-compose specific ones will be under Config.Labels. For container reuse across projects, you'd be interested in the values of com.docker.compose.project and com.docker.compose.service, so that you can pass them to docker-compose --project-name and use them as the service's name in your docker-compose.yaml.
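For example, a hedged sketch of reading those labels off an existing container (postgres here is an assumed container name; substitute a name or ID from docker ps):
docker container inspect postgres --format '{{ index .Config.Labels "com.docker.compose.project" }}'
docker container inspect postgres --format '{{ index .Config.Labels "com.docker.compose.service" }}'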

Related

How to Recreate a Docker Container Without Docker Compose

TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file and then running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the term "recreate" at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I already found answers about applying specific configuration changes like e.g. this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the term "recreate" at all.
docker-compose is a high level tool; it performs in a single operation what would require multiple commands using the docker cli. When docker-compose says, "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
Let's say it recognizes a change in container1. It then does (not literally these commands, since it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
which is roughly equivalent to (depending on your compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes projectname_container1
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and maybe more.
So I would advise using the docker compose commands for single services instead of the docker CLI where suitable.
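For instance, to recreate just one service after changing its configuration or image (container1 being the service name from the example above):
docker compose up -d --force-recreate --build container1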
The issue is that the variables and settings are not exposed through any docker APIs. It may be possible by connecting directly to the docker socket, parsing the variables, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. You can just save the command you need to run into a text file, name it something.sh, set the executable bit on it (chmod +x), then run it. Then when you stop/delete the container, you can just rerun the shell script.
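A minimal sketch of such a script (the container name, port mapping, and image are made-up placeholders; substitute your actual docker run command):
#!/bin/sh
# remove the old container if it exists, then recreate it with the full configuration
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp --restart always -p 8080:80 nginx:alpine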
Another thing you can do is replace the docker command with a function (in something like your ~/.bashrc) that stores the arguments to a text file and rechecks that text file with a passed argument (like "recreate" followed by a name). However, I'm more a fan of keeping docker containers in their own shell scripts, as it's more portable.
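A rough sketch of that function idea (purely illustrative; the log file path is arbitrary):
# in ~/.bashrc: record every `docker run` invocation so it can be replayed later
docker() {
  if [ "$1" = "run" ]; then
    printf 'docker %s\n' "$*" >> ~/.docker_run_log
  fi
  command docker "$@"
}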

How to spawn an interactive container in an existing docker swarm?

Note: I've tried searching for existing answers in any way I could think of, but I don't believe there's any information out there on how to achieve what I'm after
Context
I have an existing swarm running a bunch of networked services across multiple hosts. The deployment is done via docker-compose build && docker stack deploy. Several of the services contain important state necessary for the functioning of the main service this stack is for, including when interacting with it via CLI.
Goal
How can I create an ad-hoc container within the existing stack running on my swarm for interactive diagnostics and troubleshooting of my main service? The service has a CLI interface, but it needs access to the other components for that CLI to function, thus it needs to be run exactly as if it were a service declared inside docker-compose.yml. Requirements:
I need to run it in an ad-hoc fashion. This is for troubleshooting by an operator, so I don't know when exactly I'll need it
It needs to be interactive, since it's troubleshooting by a human
It needs to be able to run an arbitrary image (usually the image built for the main service and its CLI, but sometimes other diagnostics might be needed through other containers I won't know ahead of time)
It needs to have full access to the network and other resources set up for the stack, as if it were a regular predefined service in it
So far the best I've been able to do is:
Find an existing container running my service's image
SSH into the swarm host on which it's running
docker exec -ti into it to invoke the CLI
This however has a number of downsides:
I don't want to be messing with an already running container: it has an important job that I don't want to accidentally interrupt, and its state might be unrelated to what I need to do, so I don't want to corrupt it
It relies on the service image also having the CLI installed. If I want to separate the two, I'm out of luck
It relies on some containers already running. If my service is entirely down and in a restart loop, I'm completely hosed because there's nowhere for me to exec in and run my CLI
I can only exec within the context of what I already have declared and running. If I need something I haven't thought to add beforehand, I'm sadly out of luck
Finding the specific host on which the container is running and going there manually is really annoying
What I really want is a version of docker run I could point to the stack and say "run in there", or docker stack run, but I haven't been able to find anything of the sort. What's the proper way of doing that?
Option 1
deploy a diagnostic service as part of the stack: a container with useful tools in it, with an entrypoint of tail -f /dev/null. Use a placement constraint to deploy this to a known node.
services:
  diagnostics:
    image: nicolaka/netshoot
    command: tail -f /dev/null
    deploy:
      placement:
        constraints:
          - node.hostname == host1
NB. You do NOT have to deploy this service with your normal stack. It can be in a separate stack.yml file. You can simply stack deploy this file to your stack later, and as long as --prune is not used, the services are cumulative.
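For example (diagnostics.yml and the stack name mystack are placeholders):
docker stack deploy -c diagnostics.yml mystack
# then, on host1:
docker exec -it $(docker ps -q -f name=mystack_diagnostics) bash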
Option 2
To allow regular containers to access your services, make your network attachable. If you haven't specified the network explicitly, you can just explicitly declare the default network.
networks:
  default:
    driver: overlay
    attachable: true
Now you can use docker run to attach a diagnostic container to the network:
docker -c manager run --rm --network <stack>_default -it nicolaka/netshoot
Option 3
The third option does not address the need to directly access the node running the service, and it does not address the need to have an instance of the service running, but it does allow you to investigate a service without affecting its state and without needing tooling in the container.
Start by executing the usual commands to discover the node and container name and id of the service task of interest:
docker service ps ${service} --no-trunc --format '{{.Node}} {{.Name}}.{{.ID}}' --filter desired-state=running
Then, assuming you have docker contexts matching your node names, pick one ${node}, ${container} pair from the {{.Node}} and {{.Name}}.{{.ID}} output and run a container such as ubuntu or netshoot, attaching it to the network namespace of the target container.
docker -c ${node} run --rm -it --network container:${container} nicolaka/netshoot
This container can be used to perform diagnostics in the context of the running service task, and then closed without affecting it.

Docker: How a container persists data without volumes in the container?

I'm running the official solr 6.6 container in a docker-compose environment without any relevant volumes.
If I modify a running solr container, the data survives a restart.
I don't see any volumes mounted, and it works for a plain solr container:
docker run --name solr_test -d -p 8983:8983 -t library/solr:6.6
docker exec -it solr_test /bin/bash -c 'echo woot > /opt/solr/server/solr/testfile'
docker stop solr_test
docker start solr_test
docker exec -it solr_test cat /opt/solr/server/solr/testfile
The above example prints 'woot'. I thought that a container doesn't persist any data? Also, the documentation mentions that the solr cores are persisted in the container.
All I found regarding container persistence is that I need to add volumes on my own, like mentioned here.
So I'm confused: do containers store the data changed within the container or not? And how does the solr container achieve this behaviour? The only option I see is that I misunderstood persistence in the case of docker, or that the build of the container can set some kind of option to achieve this which I don't know about and didn't see in the solr Dockerfile.
This is expected behaviour.
The data you create inside a container persists as long as you don't delete the container.
But think of containers with a throw-away mentality: normally you want to be able to remove the container with docker rm and spawn a new instance, including your modified config files. That's why you would need e.g. a named volume here, which survives the container's life cycle on your host.
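For example, a minimal sketch of the same solr container with a named volume (the data path is taken from the question; check the image documentation for the correct one):
docker volume create solr_data
docker run --name solr_test -d -p 8983:8983 -v solr_data:/opt/solr/server/solr library/solr:6.6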
The Dockerfile, since you mention it in your question, actually only defines the image. When you call docker run you create a container from it, exactly as defined in the image: a fresh instance without any modifications.
When you call docker commit on your container, you snapshot it (including the changes you made to the files) and create a new image out of it. That is another way to achieve data persistence.
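For example (the tag and second container name are arbitrary):
# snapshot the modified container into a new image, then start a fresh container from it
docker commit solr_test solr:modified
docker run --name solr_test2 -d -p 8984:8983 solr:modified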
The documentation you are referring to explains this in detail.

How to use Dockerfile to link main container to a db container?

Docker has a quite convenient way to link the main container to a db container with a command like:
docker run --link db:db user/main
This is very convenient already. However, I believe it's still clumsy compared to a command like:
docker run user/ultra
where ultra is a container that is already linking the main container to the db container.
Is it possible to achieve this by writing a good Dockerfile?
I suppose I can start the Dockerfile with
FROM user/main
but how do I get the second container involved and then link them with a Dockerfile?
Thanks.
FROM user/main
That would create an image (at build time) based on user/main, which is not at all the same as linking two containers together at runtime.
Plus, --link is now obsolete: see "Legacy container links"
Warning: The --link flag is a deprecated legacy feature of Docker. It may eventually be removed.
Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link
Use instead the links directive in a docker-compose.yml file.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified
You can make sure the containers are launched in the proper order with the depends_on directive.
Then a simple docker-compose up will launch the containers and link them on the same network.
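A minimal sketch of such a compose file (the db image is a placeholder; user/main is taken from the question):
services:
  db:
    image: mysql   # placeholder for whatever your db container runs
  main:
    image: user/main
    links:
      - db
    depends_on:
      - db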

How to autoremove linked Docker container?

I have a container with PHP and a linked container with a MySQL database, because I need the ability to run PHPUnit with a database (integration tests).
The basic command looks like this:
docker run -i --rm --link db binarydata/phpunit php script.php
I have created the db container and started it before running this command.
The binarydata/phpunit container gets removed after the command has run, but the db container stays up and running.
Question is: how can I achieve --rm functionality on a linked container, so it will be removed too after command was executed?
how can I achieve --rm functionality on a linked container, so it will be removed too after command was executed?
First, you don't have to use --link anymore with docker 1.10+. docker-compose will create a network for you in which all containers see each other.
And with a docker-compose alias, you can expose your database container as "db" for other containers to use.
Second, with that network in place, if you stop/remove the php container, it will be removed from said network, including its alias.
That differs from the old link mechanism (docker 1.8 and before), which modified the /etc/hosts of the container that needed it. In that case, removing the linked container would not, indeed, change /etc/hosts.
With the embedded docker-daemon DNS, there is no longer a need for that.
Matt suggests in the comments the following command and caveats:
docker-compose up --abort-on-container-exit --force-recreate, otherwise the command never returns and the db container would never be removed.
up messes with stdout a bit, though.
The exit status of the tests is lost too; it's printed to the screen instead.
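For reference, a hedged sketch of the compose file this implies (the service names and the mysql image are assumptions; binarydata/phpunit and script.php come from the question):
services:
  db:
    image: mysql
  phpunit:
    image: binarydata/phpunit
    command: php script.php
    depends_on:
      - db
Running docker-compose up --abort-on-container-exit --force-recreate and then docker-compose down will then remove both containers.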
