If I change something in docker-compose.yml, or add a new service, I need to run docker-compose up --build. How can I not have downtime for services that were not changed?
You can specify the service you want to build/start. The other services won't be restarted or rebuilt, even if their configuration has changed:
$ docker-compose up --build <service-name>
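For example, assuming a hypothetical service named my-api, you can combine this with -d and --no-deps so that the services my-api depends on are not restarted or recreated either:
$ docker-compose up -d --no-deps --build my-api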
You would need to implement a load-balanced setup where you have more than one instance of the same service running. Your load balancer, e.g. nginx, would have to know about the various nodes/instances and split traffic between them.
For example, a my-api service would be duplicated with the same configuration as my-api-1 and my-api-2. Then you would rebuild your main image and restart one service at a time.
Doing this allows one service instance to continue replying to traffic while the other instance(s) restart with the new image.
There are other implementation details to consider when implementing load-balanced solutions (e.g. health checks), but this is the high-level solution.
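As a rough sketch of such a setup (the my-api service and image names follow the example above, and nginx.conf is assumed to define an upstream listing my-api-1 and my-api-2 as backends):
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # upstream pointing at my-api-1 and my-api-2
  my-api-1:
    image: my-api:latest
  my-api-2:
    image: my-api:latest
To roll out a new image you would rebuild it, then run something like docker-compose up -d --no-deps my-api-1 followed by the same for my-api-2, so one instance keeps serving traffic while the other restarts.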
Related
I've seen in some builds the use of supervisor to run the docker-compose up -d command, with the possibility to autostart and/or autorestart.
I'm wondering whether this cohabitation of supervisor and docker-compose works well. Don't the two autorestart options interfere with each other? Also, what is the benefit of using supervisor instead of a simple docker-compose, apart from starting at boot if the server was shut down?
Please share your experience if you have some on using these two tools.
Thank you
Running multiple single-process containers is almost always better than running a single multiple-process container; avoid supervisord when possible.
Mechanically, the combination should work fine. Supervisord will capture logs and take responsibility for restarting the process in the container. That means docker logs will have no interesting output, and you need to get the file content out of the container. If one of the managed processes fails then supervisord will restart it. The container itself will probably never be restarted, unless supervisord manages to crash somehow.
There are a couple of notable disadvantages to using supervisord:
As noted, it swallows logs, so you need a complex file-oriented approach to read them out.
If one of the processes fails then you'll have difficulty seeing that from outside the container.
If you have a code update you have to delete and recreate the container with the new image, which with supervisord means restarting every process.
In a clustered environment like Kubernetes, every process in the supervisord container runs on the same node, and you can't scale individual processes up to handle additional load.
Given the choice of tools you suggest, I'd pretty much always use Compose with its restart: policies, and not use supervisord at all.
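As a minimal sketch of that approach (the app-web and app-worker image names are made up for illustration), each process gets its own service and its own restart policy:
version: "3"
services:
  web:
    image: app-web            # one process per container
    restart: unless-stopped
  worker:
    image: app-worker
    restart: on-failure
With this layout, docker logs <container> shows each process's output directly, and Docker itself restarts a container whose main process exits.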
I have a docker compose yml file with a few containers defined:
database
web-service
I have depends_on defined on web-service so it starts after database. Both containers are defined with restart: always.
I've been googling and cannot find clear info on container startup order on system reboots. Does the docker daemon read the docker-compose yml file and start the database and then web-service? Or how does it work?
If you want to start the containers on system startup, you have to set up some kind of "scheduled" job using e.g. Linux's CRON daemon.
The Docker daemon itself is not responsible for waking up containers; the restart entry in the compose file refers to e.g. restarting when the app in the container crashes, after a job ends (which terminates the terminal), and so on.
Please see the explanation of restart policies in the Docker docs: https://docs.docker.com/config/containers/start-containers-automatically/#restart-policy-details
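A minimal sketch of such a job, assuming your compose project lives in /opt/myapp (a made-up path) and your cron implementation supports the @reboot schedule:
@reboot cd /opt/myapp && docker-compose up -d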
Containers are started according to depends_on constraints.
On reboot too.
But you should not rely on it too much.
You can just let your web service crash when it has no access to the db. Docker will restart it automatically and it will retry (it's cheap).
If you want to deal with it more safely/precisely, you can also wait for the port to be accessible using a script like this one:
https://github.com/vishnubob/wait-for-it
Docker explains it in its documentation: https://docs.docker.com/compose/startup-order/
That way you guarantee much more than depends_on does, because depends_on only guarantees order, not that the service is ready or even working.
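A rough sketch of the wait-for-it approach, using the database and web-service names from the question (the postgres image, the 5432 port, the build: . line and the java command are placeholders; wait-for-it.sh is assumed to be copied into the web-service image):
services:
  database:
    image: postgres:13
    restart: always
  web-service:
    build: .
    restart: always
    depends_on:
      - database
    # block until the database port accepts connections, then start the app
    command: ["./wait-for-it.sh", "database:5432", "--", "java", "-jar", "app.jar"]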
I'm currently running Docker version 18.03.1-ce, build 9ee9f40 on multiple nodes. My setup is a nginx service and multiple java restful API services running in a wildfly cluster.
For my API services I've configured a simple healthcheck to determine whether my API task is actually up:
HEALTHCHECK --interval=5m --timeout=3s \
--retries=2 --start-period=1m \
CMD curl -f http://localhost:8080/api/healthcheck || exit 1
But even with the use of HEALTHCHECK, my nginx sometimes gets an error, caused by the fact that the API is still not fully up and can't serve REST requests yet.
The only solution that I've managed to get working so far is increasing the --start-period by hand to a much longer period.
How does the docker service load balancer decide when to start routing requests to the new service?
Is setting a longer --start-period currently the only way to prevent the load balancer from routing traffic to a task that is not ready, or am I missing something?
I've seen "blue-green" deployment answers like this where you can achieve zero downtime, but I'm still hoping this can be done with plain docker services.
The routing mesh will start routing traffic on the "first successful healthcheck", even if future ones fail.
Whatever you put in the HEALTHCHECK command, it needs to only start returning exit 0 when things are truly ready. If it returns a good result too early, then it's not a good healthcheck command.
The --start-period only tells swarm when to kill the task if it has yet to receive a successful healthcheck in that time; it won't cause green healthchecks to be ignored during the start period.
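For example, if your API exposes an endpoint that only returns 200 once the application can actually serve REST requests (the /api/ready path here is hypothetical, unlike the /api/healthcheck from your Dockerfile), the healthcheck could look like:
HEALTHCHECK --interval=30s --timeout=3s \
  --retries=3 --start-period=1m \
  CMD curl -f http://localhost:8080/api/ready || exit 1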
Using docker stack deploy, I can see the following message:
Ignoring unsupported options: restart
Does it mean that restart policies are not in place?
Do they have to be specified outside the compose file?
You can see this message, for example, with the Joomla compose file available at the bottom of that page.
To start the compose file:
sudo docker swarm init
sudo docker stack deploy -c stackjoomla.yml joomla
A Compose YAML file is used by both the docker-compose tool, for local (single-host) dev and test scenarios, and Swarm Stacks, for production multi-host concerns.
There are many settings in the Compose file which only work in one tool or the other (docker-compose up vs. docker stack deploy) because some settings are specific to dev and others specific to production clusters. It's OK that they are there, and you'll see warnings in either tool when there are settings included that the specific tool will ignore. This is commonly seen for build: settings (which are docker-compose only) and deploy: settings (which are Swarm Stacks only).
The whole goal here is a single file you can use in both tools, and the relevant sections of the compose file are used in that scenario, while the rest are ignored.
All of this can be referenced for the individual setting in the compose file documentation. If you're often working in Compose YAML, I recommend always having a tab open on this page, as I've referenced it almost daily for years, as the spec keeps changing (we're on 3.4+ now).
docker-compose does not restart containers by default, but it can if you set the single-setting restart: as documented here. But that setting doesn't work for Swarm Stacks. It will show up as a warning in a docker stack deploy to remind you that the setting will not take effect in a Swarm Stack.
Swarm Stacks use restart_policy: under the deploy: setting, which gives finer control with multiple sub-settings. As with all Stack settings, the defaults don't have to be specified in the compose file, and you'll see the default values documented on that docs page.
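As a sketch, using the joomla service from your stack as an example (the values shown are illustrative, not required):
services:
  joomla:
    image: joomla
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s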
There is a list on that page of the settings that won't work in a Swarm Stack, but it looks incomplete as the restart: setting should be there too. I'll submit a PR to fix that.
Also, in the Joomla example you pointed us to, that README seems out of date as well, as it includes links: in the compose example, which are deprecated as of Compose version 2 and not needed anymore (because all containers on a custom virtual network can reach each other now).
If you docker-compose up your application on a Docker host in standalone mode, all that Compose will do is start containers. It will not monitor the state of these containers once they are created.
So it is up to you to ensure that your application will still work if a container dies. You can do this by setting a restart-policy.
If you deploy an application into a Docker swarm with docker stack deploy, things are different.
A stack is created that consists of service specifications.
Docker swarm then makes sure that for each service in the stack, at all times the specified number of instances is running. If a container fails, swarm will always spawn a new instance in order to match the service specification again. In this context, a restart-policy does not make any sense and the corresponding setting in the compose file is ignored.
If you want to stop the containers of your application in swarm mode, you either have to undeploy the whole stack with docker stack rm <stack-name> or scale the service to zero with docker service scale <service-name>=0.
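For illustration, a service specification with a replica count, and the corresponding scale-to-zero command (mystack and web are placeholder names; stack services are named <stack>_<service>):
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3        # swarm keeps exactly 3 tasks of this service running
$ docker service scale mystack_web=0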
Does anyone know what the best practice is for sharing a database between containers in Docker?
What I mean is: I want to create multiple containers in Docker. These containers will then execute CRUD operations on the same database with the same identity.
So far, I have two ideas. One is to create a separate container that only runs the database. The other is to install the database directly on the host machine where Docker is installed.
Which one is better? Or, is there any other best practice for this requirement?
Thanks
It is hard to answer a 'best practice' question, because it's a matter of opinion. And opinions are off topic on Stack Overflow.
So I will give a specific example of what I have done in a serious deployment.
I'm running ELK (Elasticsearch, Logstash, Kibana). It's containerised.
For my data stores, I have storage containers. These storage containers contain a local filesystem pass-through:
docker create -v /elasticsearch_data:/elasticsearch_data --name ${HOST}-es-data base_image /bin/true
I'm also using etcd and confd, to dynamically reconfigure my services that point at the databases. etcd lets me store key-values, so at a simplistic level:
CONTAINER_ID=`docker run -d --volumes-from ${HOST}-es-data elasticsearch-thing`
ES_IP=`docker inspect $CONTAINER_ID | jq -r .[0].NetworkSettings.Networks.dockernet.IPAddress`
etcdctl set /mynet/elasticsearch/${HOST}-es-0 $ES_IP
Because we register it in etcd, we can then use confd to watch the key-value store, monitor it for changes, and rewrite and restart our other container services.
I'm using haproxy for this sometimes, and nginx when I need something a bit more complicated. Both these let you specify sets of hosts to 'send' traffic to, and have some basic availability/load balance mechanisms.
That means I can be pretty lazy about restarting/moving/adding elasticsearch nodes, because the registration process updates the whole environment. A mechanism similar to this is what's used for openshift.
So to specifically answer your question:
DB is packaged in a container, for all the same reasons the other elements are.
Volumes for DB storage are storage containers passing through local filesystems.
'finding' the database is done via etcd on the parent host, but otherwise I've minimised my install footprint. (I have a common 'install' template for docker hosts, and try and avoid adding anything extra to it wherever possible)
It is my opinion that the advantages of docker are largely diminished if you're reliant on the local host having a (particular) database instance, because you've no longer got the ability to package-test-deploy, or 'spin up' a new system in minutes.
(The above example - I have literally rebuilt the whole thing in 10 minutes, and most of that was the docker pull transferring the images)
It depends. A useful thing to do is to keep the database URL and password in an environment variable and provide that to Docker when running the containers. That way you will be free to connect to a database wherever it may be located. E.g. running in a container during testing and on a dedicated server in production.
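A minimal sketch of that idea (the DB_URL and DB_PASSWORD variable names and the myapp image are made up):
$ docker run -d \
    -e DB_URL=postgres://db.example.com:5432/mydb \
    -e DB_PASSWORD=secret \
    myapp
The application then reads these variables at startup, so the same image works against a local container in testing and a dedicated server in production.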
The best practice is to use Docker Volumes.
Official doc: Manage data in containers. This doc details how to deal with a DB and containers. The usual way of doing so is to run the DB in its own container with its data in a volume; the other containers can then access this DB container to CRUD (or more) the data.
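As a sketch of that pattern with a named volume (the postgres image, the appdata volume name and the DB_HOST variable are only examples):
services:
  db:
    image: postgres:13
    volumes:
      - appdata:/var/lib/postgresql/data   # DB files live in the named volume
  app:
    image: myapp
    environment:
      - DB_HOST=db                         # other containers reach the DB by service name
volumes:
  appdata: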
Random article on "Understanding Docker Volumes"
Edit: I won't detail much further, as the other answer is well made.