Reuse containers with `docker-compose` - docker

I have an app running on multiple Docker containers defined by docker-compose. Everything works fine when run from my user, and the docker-compose ps output looks like:
Name            Command                 State      Ports
---------------------------------------------------------
myuser_app_1    /config/bootstrap.sh    Exit 137
myuser_data_1   sh                      Exit 0
myuser_db_1     /run.sh                 Exit 143
Now I'm trying to run docker-compose up from supervisord (see the relevant part of supervisord.conf below), and the issue is that the containers are now named myapp_app_1, myapp_data_1 and myapp_db_1; that is, they're created from scratch, and all the customizations on the former containers are lost.
I tried renaming the containers, but it gives an error saying that there's already a container with that name.
Q: Is there some way to force docker-compose to reuse the existing containers instead of creating new ones based on their respective images?
supervisord.conf
...
[program:myapp]
command=/usr/local/bin/docker-compose -f /usr/local/app/docker-compose.yml up
redirect_stderr=true
stdout_logfile=/var/log/myapp_container.log
stopasgroup=true
user=myuser

Compose will always reuse containers as long as their config hasn't changed.
If you have any state in a container, you need to put that state into a volume. Containers should be ephemeral; you should be able to destroy them and recreate them at any time without losing anything.
If you need to initialize something, I would do it in the Dockerfile so that it's preserved in the image.
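A minimal sketch of the volume approach in docker-compose.yml (the db service, image, and path here are assumptions, not taken from your setup):
version: "2"
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql   # named volume: survives Compose recreating the container
volumes:
  db_data:
Separately, it looks like the myuser_/myapp_ prefix difference is the Compose project name changing between runs; if so, passing an explicit -p myuser (or setting COMPOSE_PROJECT_NAME) in the supervisord command keeps the names, and therefore the containers Compose manages, stable.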

Related

Docker - Change an existing container's settings without the docker run command

Is it possible to change the settings of a Docker container, like the entrypoint, ports or memory limits, without having to delete the container and run it again with the docker run command? For example: docker stop <container_id>, change the settings, and then docker start <container_id>?
When you use docker run -d image_name, some images try to initialize from scratch, and as a result I can't use the same volume.
Is it possible to change the settings by stopping the container instead of re-running it?
You need to stop, delete, and recreate the container.
# this is absolutely totally 100% normal and routine
docker stop my_container
docker rm my_container
# docker build -t image_name .
docker run -d -p 12345:8000 --name my_container image_name
This isn't specific to Docker. If you run any command in any Unix-like environment and you want to change its command-line parameters or environment variables, you need to stop the process and create a new one. A Docker container is a wrapper around a process with some additional isolation features, and for a great many routine things you're required to delete the container. In cluster container environments like Kubernetes, this is routine enough that changing any property of a Deployment object will cause all of the associated containers (Kubernetes Pods) to get recreated automatically.
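The Docker Compose equivalent of that routine is simply editing the file and re-running up; Compose recreates only the containers whose configuration changed:
# edit ports:, environment:, mem_limit:, etc. in docker-compose.yml, then:
docker-compose up -d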
There are a handful of Docker commands that exist but are almost never used in normal operation. docker start is among these; just skip over it in the documentation.
When you use docker run -d image_name, some images try to initialize from scratch, and as a result I can't use the same volume.
In fact, the normal behavior of docker run is that you're always beginning the program from a known "clean" initial state; this is easier to set up as an application developer than trying to recover from whatever state the previous run of the application might have been left in.
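A quick sketch of how that clean start coexists with persistent data, with a made-up volume name:
docker volume create app_data
docker run -d --name my_container -v app_data:/data image_name
docker rm -f my_container
docker run -d --name my_container -v app_data:/data image_name   # the program starts clean, but /data keeps its contents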
If you need to debug the image startup, an easy thing to do is to tell the container to run an interactive shell instead of its default command:
docker run --rm -it image_name /bin/sh
(Some images may have bash available which will be more comfortable to work in; some images may require an awkward docker run --entrypoint option.) From this shell you can try to manually run the container startup commands and see what happens. You don't need to worry about damaging the container code in any particular way, since anything you change in this shell will get lost as soon as the container exits.
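If the image does define an ENTRYPOINT, the override mentioned above looks something like this:
docker run --rm -it --entrypoint /bin/sh image_name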

How to use a container that exits immediately after startup in Docker?

As everyone knows, we can use docker start [dockerID] to start a stopped container.
But if this container exits immediately after startup, what should I do?
For example, I have a MySQL container that runs without any problems. But then the system went down. The next time I start this container, it tells me a file is corrupted, so the container exits immediately.
Now I want to delete this file, but the container cannot be started, so I can't get inside it to delete the file. What should I do?
And if I want to open bash in a container in this state, what should I do?
Delete the container and launch a new one.
docker rm dockerID
docker run --name dockerID ... mysql:5.7
Containers are generally treated as disposable; there are times you're required to delete and recreate a container (to change some networking or environment options; to upgrade to a newer version of the underlying image). The flip side of this is that containers' state is generally stored outside the container filesystem itself (you probably have a docker run -v or Docker Compose volumes: option) so it will survive deleting and recreating the container. I almost never use docker start.
Creating a new container gets you around the limitations of docker start:
If the container exits immediately but you don't know why, docker run or docker-compose up it without the -d option, so it prints its logs to the console
If you want to run a different command (like an interactive shell) as the main container command, you can do it the same as any other container,
docker run --rm -it -v ...:/var/lib/mysql/data mysql:5.7 sh
docker-compose run db sh
If the actual problem can be fixed with an environment variable or other setting, you can add that to the startup-time configuration, since you're already recreating the container
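For example, if the fix turns out to be a setting, it simply goes into the replacement container; the volume name and password here are only illustrative, and /var/lib/mysql is the official mysql image's data directory:
docker rm dockerID
docker run -d --name dockerID -v mysql_data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7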

How to understand Container states

I am trying to understand the life cycle of a container. I downloaded the alpine image and built containers using the "docker container run" command; all of those containers ran and are now in "Exited" status. When using the "docker container start" command, some of the containers stay in Up (running) status and some exit immediately. Any thoughts on why there is such a difference in behavior around statuses? One difference I observed is that the containers staying in Up status had their file structure modified with respect to the base image.
I hope I was able to describe the scenario with proper context. Help me understand the concept.
The long sequence is as follows:
You docker create a container with its various settings. Some settings may be inherited from the underlying image. It is in a "created" status; its filesystem exists but nothing is running.
You docker start the container. If the container has an entrypoint (Dockerfile ENTRYPOINT directive, docker create --entrypoint option) then that entrypoint is run, taking the command as arguments; otherwise the command (Dockerfile CMD directive, any options after the docker create image name) is run directly. This process gets process ID 1 in the container and the rights and responsibilities that go along with that. The container is in "running" status.
The main process exits, or an administrator explicitly docker stops it. The container is in "exited" status.
Optionally you can restart a stopped container (IME this is unusual though); go to step 2.
You docker rm the stopped container. Anything in the container filesystem is permanently lost, and it no longer shows up in docker ps -a or anywhere else.
Typically you'd use docker run to combine these steps together. docker run on its own does the first two steps together (creates a container and then starts it). If you docker run --rm it does everything listed above.
(All of these commands are identical to the docker container ... commands, but I'm used to the slightly shorter form.)
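Spelled out with individual commands (the container name and command here are only for illustration), the sequence looks like:
docker create --name demo alpine sleep 30    # "created": filesystem exists, nothing running
docker start demo                            # "running": sleep 30 is PID 1 inside the container
docker stop demo                             # "exited" (it would also exit on its own after 30 seconds)
docker rm demo                               # gone: no longer listed by docker ps -a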
The key point here is that there is some main process that the container runs. Typically this is some sort of daemon or server process, and generally specified in the image's Dockerfile. If you, for example, docker run ... nginx, then its Dockerfile ends with
CMD ["nginx", "-g", "daemon off;"]
and that becomes the main container process.
In early exploration it's pretty common to just run some base distribution image (docker run --rm -it alpine) but that's not really interesting: the end of the lifecycle sequence is removing the container and once you do that everything in the container is lost. In standard use you'd want to use a Dockerfile to build a custom image, and there's a pretty good Docker tutorial on the subject.
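A minimal sketch of such a Dockerfile, with made-up file names, would be:
FROM alpine:3.8
COPY ./app.sh /app.sh
CMD ["/bin/sh", "/app.sh"]    # becomes the main container process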

Run docker with just a copied file inside

I'm building an image that just contains a copied file, with the following Dockerfile:
FROM alpine:3.8
COPY ./somefile /srv/somefile
When I try to docker run the image, it exits immediately; that is, just after docker run I have:
Exited (0) 1 second ago.
I tried adding CMD ["/bin/sh"] or ENTRYPOINT ["/bin/sh"] but it doesn't change anything.
Is it possible to have such a container with just a copied file and keep it up and running until I stop it?
So there is really no problem: you have succeeded in running your container with the file inside. But since you didn't supply any additional job for your container, the run only took a second.
First, you should get acquainted with what is considered a "running" container in Docker terminology. A container is "running" as long as its main process (PID 1) is running. When that process exits, the container stops. If you want your container to remain running (as a service, for example), you need to keep the main process alive.
Second, what is your main process? It is the process launched when the container starts. It is combined from the ENTRYPOINT and CMD directives (with some rules). These directives are often given default values in the Dockerfile, but you can override them. If you just run docker run <image>, the default values are taken, but if you provide arguments after <image>, they override CMD.
So, for alpine you can simply run a shell, like docker run -it alpine sh. Until you exit the shell, your container is running.
And one last thing: the -it arguments connect both STDIN and STDOUT/STDERR to your console. So your sh process, which is the main process, stays alive until you close the console.
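If you really do want a container that holds only this file and stays up until you stop it, the usual trick is to give it a long-running, do-nothing main process (the image name here is whatever you tagged your build with):
docker run -d --name somefile-holder image_name tail -f /dev/null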

How to edit a file inside a docker container which has exited?

I edited a file in a running docker container and restarted it; unfortunately, my last edit was not correct. So every time I start the container with:
docker start <containerId>
It always exits immediately.
Now I cannot even revert my edit, since
docker exec -it <containerId> bash
can only be run against a running container.
The question is: how can I change it back and restart the container now? Or do I have to abandon it and start a new container from an existing image?
You didn't supply any details regarding your container's purpose, or what you modified. Conceptually, you could create the file that needs to be modified in a place on your filesystem and mount that file into the container as a volume when you start it, like:
docker run -it -v /Users/<path_to_file>:<container_path_to_file> <container>
However, this is bad form, as your container loses portability at that point without committing a new image.
Ideally, changes that need to be made inside of a Docker container are made in the Dockerfile, and the container image re-built. This way, your initial, working container state is represented in your Dockerfile code, making your configuration repeatable, portable, and immutable.
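A hedged sketch of that workflow, with placeholder file and image names:
# Dockerfile
FROM your_existing_image
COPY ./fixed_file <container_path_to_file>
# then rebuild and replace the container
docker build -t your_image:fixed .
docker run -d --name your_container your_image:fixed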
The file system of exited containers can still be changed. The preferable way is probably:
docker cp <fixedFile> <containerId>:<brokenFile>
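In practice that copy-out, fix, copy-back loop looks like this (paths are placeholders):
docker cp <containerId>:<brokenFile> ./brokenFile
# edit ./brokenFile locally, then put it back
docker cp ./brokenFile <containerId>:<brokenFile>
docker start <containerId>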
But you can also circumvent docker completely; see here.
