I think I have an understanding of this but just would like some clarification.
I have a docker-compose file with all my services in it. I did a docker-compose up and everything is fine. One of my services is a worker that needs to be restarted whenever my files change, so for now I do a bind-mount from my host into the container. When I make changes on my local system, I restart the worker container and it should pick up the changes.
If I do docker-compose restart, it works and my changes are picked up.
If I do docker restart, it seems to just keep the old environment: the files my worker runs are the "old" ones, even though I can see the changed files when I ssh into the container.
I'm guessing it has something to do with docker-compose reloading its config or something? For now I'm just going to continue using docker-compose restart, but I'd like a better understanding of what's going on.
Thanks for any help.
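For reference, a minimal sketch of the kind of setup being described; the service name and paths here are hypothetical:

    services:
      worker:
        build: ./worker                  # hypothetical build context
        volumes:
          - ./worker/src:/app/src        # bind-mount: edits on the host appear inside the container

With a layout like this, docker-compose restart worker is the command that picks up the changes, while docker restart <container> is the one that appears to keep running the old files.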
I have an EC2 instance running a dockerized application using docker-compose.
Every time I run docker-compose up, many days' worth of logs are printed to stdout for all services. This means that I have to wait up to an hour before all the old logs have been printed and I start seeing recent ones.
Any ideas?
Your problem is that the old containers created by docker-compose are re-used.
Starting with docker-compose up --force-recreate should do the trick.
I remember this from the past, but for me this problem no longer happens, so it could also be something else.
Please make sure the following:
You are using a modern version of docker-compose (I am running 1.29; run docker-compose version to check)
Make sure the containers you are starting are not already running (docker-compose ps); if they are, docker-compose up attaches to them instead of starting them, and replaying all the logs in the container is the normal behavior in that case (see the sketch below).
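As a hedged sketch of the commands involved:

    # see whether the containers are already running
    docker-compose ps
    # force fresh containers so old state is not re-used
    docker-compose up --force-recreate -d
    # then follow only recent output instead of replaying the whole history
    docker-compose logs --tail=100 -f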
I have a docker compose yml file with a few containers defined:
database
web-service
I have 'depends_on' defined in 'web-service' so it starts after 'database'. Both containers are defined with 'restart: always'.
I've been googling and cannot find clear info on container startup order on system reboots. Does the docker daemon read the docker-compose yml file and start the database and then web-service? Or how does it work?
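For concreteness, the setup being described would look something like this; the images are placeholders:

    version: "3"
    services:
      database:
        image: mysql:8            # placeholder image
        restart: always
      web-service:
        image: my-web-app         # placeholder image
        restart: always
        depends_on:
          - database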
If you want to start the containers on system startup, you have to set up some kind of "scheduled" job, e.g. using Linux's cron daemon.
The Docker daemon itself is not responsible for waking up containers; the restart entry in the compose file refers to e.g. restarting after the app in the container crashes, after a job ends (which terminates the terminal), and so on.
See the restart policy explanation in the Docker docs: https://docs.docker.com/config/containers/start-containers-automatically/#restart-policy-details
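If you go the cron route, a sketch of such a job; the project path and compose binary location are hypothetical:

    # crontab entry (crontab -e): bring the stack up once at boot
    @reboot cd /home/user/my-project && /usr/local/bin/docker-compose up -d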
Containers are started according to depends_on constraints.
On reboot too.
But you should not rely on it too much.
You can just let your web service crash when it has no access to the DB; Docker will restart it automatically and it will retry (it's cheap).
If you want to handle it more safely/precisely, you can also wait for the port to be accessible using a script like this one:
https://github.com/vishnubob/wait-for-it
Docker explains it in its documentation: https://docs.docker.com/compose/startup-order/
That way you guarantee much more than with depends_on, because depends_on only guarantees start order, not that the service is ready or even working.
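A sketch of how wait-for-it is typically wired into a compose service, assuming the script has been copied into the image; the image and start script names are placeholders:

    services:
      web-service:
        image: my-web-app         # placeholder image
        restart: always
        depends_on:
          - database
        # block until the database answers on its port, then start the app
        command: ["./wait-for-it.sh", "database:3306", "--", "./start-app.sh"]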
I use docker-compose to start a set of unrelated docker containers. I use docker-compose for that because of the ease of configuration via docker-compose.yaml and the centralized configuration this file brings.
One problem I have is the update of images, or actually of containers after an image update. I update them via docker-compose pull but the containers previously spawned do not restart by themselves. I have two possible solutions, both doable but none ideal:
restart all the containers after a pull. This would introduce unavailability, which is not critical in my home environment, but still (restarting Home Assistant in particular is a pain, as the lights are reset)
write some code to check which images IDs have changed during the pull and restart the relevant containers (removing them first). This is the solution I will be using if there is nothing better.
I was wondering if there was a better solution.
This is a home environment, so I would like to avoid heavy-duty solutions such as Kubernetes.
Swarm mode could work, but I just read about it and it looks more like a solution to ensure state than a container manager (in the sense that it would restart containers based on the freshness of the image they were spawned from).
After you docker pull an image, docker-compose -f "docker-compose.yml" up -d will only recreate the containers for which there is a new version of the image after the pull. It will not touch the containers whose image stays the same. This setup works fine for me.
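As a sketch, the whole update pass is then just:

    # fetch newer images for every service in the compose file
    docker-compose pull
    # recreate only the containers whose image (or config) changed
    docker-compose up -d
    # optionally reclaim space from the superseded images
    docker image prune -f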
docker-compose up --force-recreate -d
If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
If you want to force Compose to stop and recreate all containers, use the --force-recreate flag.
From the docker-compose up CLI reference, the --force-recreate flag is documented as: "Recreate containers even if their configuration and image haven't changed."
I am using docker-compose to deploy an application combining a number of different images.
Using Docker version 18.09.2, build 6247962
Docker-compose 1.117
Primarily, I have
ZooKeeper
Kafka
MYSQLDb
I noticed a strange problem where I could not start my application with docker-compose up due to a port already being assigned. I then checked docker stats and saw that there were three containers named:
"test_ZooKeeper.1slehgaior"
"test_Kafka.kgjdorgsr"
"test_MYSQLDB.kgjdorgsr"
I have tried killing the containers, removing them, and pruning the system. Whenever I kill one of these containers, it instantly restarts, and I cannot for the life of me determine where they are being created from!
Please help :)
If you look into your docker-compose.yaml, I'm pretty sure you'll find a restart: always somewhere. If you want to correctly shut down a running docker container managed by docker-compose, one way is to use docker-compose down from the directory where your yaml sits.
More information on the subject:
https://docs.docker.com/config/containers/start-containers-automatically/
Otherwise, you might try stopping a single running container instead of killing it, which as far as I remember tells Docker not to restart it again, while a killed container looks to the restart policy like it just crashed. Not too sure about the last part though.
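A sketch of both approaches; the service name here is hypothetical:

    # tear down everything the compose file created: containers plus the default network
    docker-compose down
    # or stop one service's container without removing it; a manually stopped
    # container is not brought back by restart: always until the daemon restarts
    docker-compose stop zookeeper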
I'm trying to teach myself about Docker, using a docker-compose.yml to play around with images and the Compose file. I've got the Wordpress image up and running successfully using docker-compose up -d via the tutorial here: https://docs.docker.com/compose/wordpress/ but as soon as I make changes to the compose file and run docker-compose up -d again, I can't access the changes and have to completely delete images/containers/docker machines to get my changes to work.
What am I doing wrong, and what is the minimum I need to restart/delete to see my docker-compose.yml changes, so I can keep playing around with docker-compose.yml?
docker-compose stop to stop the stack
docker-compose start to start the stack
Both of the above will not remove your containers, but rather shut them down and start them again, without any losses, even on the container filesystem, not only the volumes.
docker-compose down will remove the containers of your services and all anonymous volumes assigned to them.
Be aware that not all changes in the docker-compose file can be applied using start/stop; most of the time you have to do a down/up. Things like volumes/ports cannot be hot-applied like this.
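Putting it together, a minimal-churn workflow could look like this; the wordpress service name is taken from the tutorial, so treat it as an assumption:

    # after editing docker-compose.yml, this recreates only the services whose config changed
    docker-compose up -d
    # if a change (e.g. ports or volumes) is not picked up, recreate that service explicitly
    docker-compose up -d --force-recreate wordpress
    # last resort: remove containers and anonymous volumes, then start fresh
    docker-compose down && docker-compose up -d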