I have a docker-compose.yml file with a few containers defined:
database
web-service
I have depends_on defined in web-service so that it starts after database. Both containers are defined with restart: always.
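For reference, the compose file looks roughly like this (image names are placeholders):

version: "3"
services:
  database:
    image: postgres
    restart: always
  web-service:
    image: my/web-service
    restart: always
    depends_on:
      - database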
I've been googling and cannot find clear information on container startup order after a system reboot. Does the Docker daemon read the docker-compose.yml file and start database first and then web-service? Or how does it work?
The Docker daemon itself does not schedule container startup; the restart entry in the compose file controls when a container is restarted, e.g. after the app in the container crashes or its process exits. However, containers with restart: always are also started again when the Docker daemon itself starts, so as long as the daemon is enabled to run at boot (the default on most distributions), your containers come back after a reboot without any extra "scheduled" job such as a CRON entry.
See the restart policy details in the Docker docs: https://docs.docker.com/config/containers/start-containers-automatically/#restart-policy-details
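For example, on a systemd-based host it's enough to make sure the daemon itself starts at boot (docker is the standard service name):

sudo systemctl enable docker     # start the Docker daemon at boot
systemctl is-enabled docker      # verify; containers with restart: always then come back after a reboot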
Containers are started according to depends_on constraints, on reboot too. But you should not rely on it too much.
You can just let your web service crash when it has no access to the DB. Docker will restart it automatically and it will retry (it's cheap).
If you want to deal with it more safely/precisely, you can also wait for the port to be accessible using a script like this one: https://github.com/vishnubob/wait-for-it
Docker explains it in its documentation: https://docs.docker.com/compose/startup-order/
That way you guarantee much more than depends_on does, because depends_on only guarantees order, not that the service is ready or even working.
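For example, a compose service using wait-for-it might look like this (a sketch: it assumes wait-for-it.sh is baked into the web-service image, and the database port 5432 and the app command are placeholders):

web-service:
  depends_on:
    - database
  # block until the database port answers, then launch the app
  command: ["./wait-for-it.sh", "database:5432", "--", "python", "app.py"]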
I have to execute some maprcli commands on a daily basis, and the maprcli command needs to be executed as a special user. The maprcli command and the user both exist on the local host.
To schedule these tasks I need to use Airflow, which itself runs in a Docker container. I am facing 2 problems here:
the maprcli is not available in the Airflow Docker container
the user with whom it should be executed is not available in the container.
The first problem can be solved with a volume mapping, but is there maybe a cleaner solution?
Is there any way to use the needed local/host user during the execution of a Python script inside the Airflow Docker container?
The permissions depend on the availability of a mapr ticket that is normally generated by maprlogin.
Making this work correctly is much easier in Kubernetes than in bare Docker containers because of the more advanced handling of tickets.
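In plain Docker, the usual workaround is to mount both the MapR client installation and a ticket generated on the host into the container and run as the matching UID. A sketch, where every path, the UID 1000, and the image name are assumptions (MapR clients typically locate the ticket via MAPR_TICKETFILE_LOCATION):

# ticket created on the host beforehand with: maprlogin password
docker run --rm \
  -v /opt/mapr:/opt/mapr:ro \
  -v /tmp/maprticket_1000:/tmp/maprticket_1000:ro \
  --user 1000:1000 \
  -e MAPR_TICKETFILE_LOCATION=/tmp/maprticket_1000 \
  my-airflow-image maprcli node list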
I am using docker-compose to deploy an application combining a number of different images.
Using Docker version 18.09.2, build 6247962
Docker-compose 1.117
Primarily, I have
ZooKeeper
Kafka
MYSQLDb
I notice a strange problem where I could not start my application with docker-compose up due to a port already being assigned. I then checked docker stats and saw that there were three containers named:
test_ZooKeeper.1slehgaior
test_Kafka.kgjdorgsr
test_MYSQLDB.kgjdorgsr
I have tried killing the containers, removing them and pruning the system. Whenever I kill one of these containers, it instantly restarts and I cannot for the life of me determine where they are being created from!
Please help :)
If you look into your docker-compose.yaml, I'm pretty sure you'll find a restart: always somewhere. If you want to correctly shut down a running Docker container managed by docker-compose, one way is to use docker-compose down from the directory where your YAML sits.
More information on the subject:
https://docs.docker.com/config/containers/start-containers-automatically/
Otherwise, you might try stopping a single running container instead of killing it, which, if memory serves, tells Docker not to restart it again, while a killed container looks to the daemon like it just crashed. Not too sure about the last part, though.
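Side by side, the two ways of shutting things down (the container name is taken from the question, the project path is a placeholder):

cd /path/to/project               # the directory containing docker-compose.yaml
docker-compose down               # stop and remove the compose-managed containers and network

docker stop test_Kafka.kgjdorgsr  # ask the single container to shut down gracefully
docker kill test_Kafka.kgjdorgsr  # SIGKILL; the container "crashes" rather than stopping cleanly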
I'm facing the same problem since months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar and load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here does not work in my case: https://linuxconfig.org/docker-container-backup-and-recovery
Now I'm thinking about writing my own web application which will create some docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container? So that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
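For example, the individual settings you'd need to recreate the container can be pulled out with --format (mycontainer is a placeholder):

docker container inspect --format '{{json .Config.Env}}' mycontainer               # environment variables
docker container inspect --format '{{json .HostConfig.Binds}}' mycontainer         # volume mappings
docker container inspect --format '{{json .HostConfig.PortBindings}}' mycontainer  # published ports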
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the road of saving the instance's current status, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
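A sketch of that round trip (container and image names are placeholders):

docker export mycontainer > mycontainer.tar      # flattens the container's filesystem; volumes are excluded
docker import mycontainer.tar myimage:restored   # creates a new image from the tarball on the target host
# note: run-time settings (ports, env, restart policy) and even CMD/ENTRYPOINT are not carried over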
I'm dockerizing some of our services. For our dev environment, I'd like to make things as easy as possible for our developers, so I'm writing some scripts to manage the dockerized components. I want developers to be able to start and stop these services just as if they were non-dockerized. I don't want them to have to worry about creating and running a container vs. stopping and starting an already-created container. I was thinking that this could be handled using Fig: to create the container (if it doesn't already exist) and start the service, I'd use fig up --no-recreate. To stop the service, I'd use fig stop.
I'd also like to ensure that developers are running containers built from the latest images. In other words, something would check to see if there was a later version of the image in our Docker registry. If so, this image would be downloaded and run to create a new container from that image. At the moment it seems like I'd have to use docker commands to list the contents of the registry (docker search) and compare that to existing local containers (docker ps -a) with the addition of some grepping and awking, or use the Docker API to achieve the same thing.
Any persistent data will be written to mounted volumes so the data should survive the creation of a new container.
This seems like it might be a common pattern so I'm wondering whether anyone else has given these sorts of scenarios any thought.
This is what I've decided to do for now for our Neo4j Docker image:
I've written a shell script around docker run that accepts command-line arguments for the port, the database persistence directory on the host, and the log file persistence directory on the host. It executes a docker run command that looks like:
docker run --rm -it -p ${port}:7474 -v ${graphdir}:/var/lib/neo4j/data/graph.db -v ${logdir}:/var/log/neo4j my/neo4j
By default port is 7474, graphdir is $PWD/graph.db and logdir is $PWD/log.
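A minimal sketch of such a wrapper script (the defaults and the image name are the ones described above; the script name is made up):

#!/bin/sh
# usage: ./run-neo4j.sh [port] [graphdir] [logdir]
port="${1:-7474}"
graphdir="${2:-$PWD/graph.db}"
logdir="${3:-$PWD/log}"
exec docker run --rm -it \
  -p "${port}:7474" \
  -v "${graphdir}:/var/lib/neo4j/data/graph.db" \
  -v "${logdir}:/var/log/neo4j" \
  my/neo4j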
--rm removes the container on exit; however, the database and logs are preserved on the host's file system, so no containers are left around.
-it allows the container and the Neo4j service running within it to receive signals, so the service can be shut down gracefully (the Neo4j server shuts down cleanly on SIGINT) and the container exited by hitting ^C, or by sending it a SIGINT if the developer runs it in the background. No need for separate start/stop commands.
Although I certainly wouldn't do this in production, I think this is fine for a dev environment.
I am not familiar with Fig, but your scenario seems good.
Usually, I prefer to kill/delete + run my container instead of playing with start/stop, though. That way, if there is a new image available, Docker will use it. This works only for stateless services; as you are using volumes for persistent data, you could do something like this.
Regarding the image update, what about running docker pull <image> every N minutes and checking the "Status" that the command returns? If it is up to date, then do nothing; otherwise, kill/rerun the container.
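A sketch of that check (image and container names are placeholders):

if ! docker pull my/image | grep -q 'Image is up to date'; then
  docker rm -f myservice                    # kill/delete the old container
  docker run -d --name myservice my/image   # recreate it from the freshly pulled image
fi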
What happens when the docker host is shut down and restarted?
will the images that were running be restarted?
will the changes that were made to those images persist, or will a new instance of the image be spawned and changes be lost?
does docker have any configuration options, such as a list of images to be automatically executed at startup and the options to run those images with? Where? If not, I suppose only the docker command line can be used to alter Docker's state. Where is that state stored (somewhere in /var, I suppose)? This could be useful for backing up the Docker state.
(I'd have liked to find this in the FAQ)
will the images that were running be restarted?
Docker will restart containers when the daemon restarts if you pass -r=true in the startup options. On Ubuntu, you can make this permanent by setting DOCKER_OPTS="-r=true" in /etc/default/docker.
will the changes that were made to those images persist, or will a new instance of the image be spawned and changes be lost?
Containers will be stopped. Any modifications to the container will still be present when the container next starts, which will happen automatically when the docker daemon starts if -r=true is provided as mentioned above.
where is the docker configuration stored on the host system?
There is no configuration file per se. You can tune the upstart/init options in /etc/default/docker.
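Concretely, the suggestion above amounts to one line in that file (this applies to the upstart-era Docker packaging on Ubuntu):

# /etc/default/docker -- extra options passed to the Docker daemon at startup
DOCKER_OPTS="-r=true"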