Choose restart policy for my services in the docker-compose file - docker

I'm having trouble selecting the restart policy for my services.
My application needs two containers to run:
a Tomcat container with the deployed .war file
a PostgreSQL container with the PostgreSQL database
My questions:
Does anyone have experience with choosing a restart policy?
Do I need to use the same restart policy for both of my services?
My straightforward choice would be the on-failure policy: when something crashes in the container, it should immediately start back up again.
Official Docker documentation: https://docs.docker.com/config/containers/start-containers-automatically/

We have tried on-failure, and that was based on the particular needs of some projects and services.
I think it all depends on your needs. Have you considered what your needs are? Some questions here:
Do you need the linked containers to start in the correct order?
Keep in mind that restart policies by themselves don't guarantee start order; in Compose, depends_on controls the order in which linked services are started.
Are you planning to use a process manager to start containers?
Docker recommends that you use restart policies and avoid using process managers to start containers.
Have you tried experimenting with all the policies to determine which one covers what your project needs? (A minimal sketch follows the list of policies below.)
on-failure[:max-retries]
always
unless-stopped
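
As a minimal sketch of how these policies might be applied to the Tomcat/PostgreSQL setup described above (the image tags are placeholders, and the on-failure retry limit is the docker run form, e.g. --restart on-failure:5, while plain on-failure is the safe value in a compose file):

    services:
      tomcat:
        image: tomcat:9          # placeholder tag
        restart: on-failure      # restart only when the process exits with a non-zero code
        depends_on:
          - db
      db:
        image: postgres:15       # placeholder tag
        restart: unless-stopped  # restart on crashes and daemon restarts, but respect a manual docker stop

The two services do not have to share the same policy; a database you may want back under almost all circumstances, while an application container can be limited to failure cases.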

Related

Docker - Restart specific container if another restarts

Is it possible to restart a container if another container fails and restarts?
I have a server container and multiple client containers, and I want it so that if the server container fails and restarts, one of the client containers restarts as well.
I've already used the restart policies (always, on-failure, etc.), but what I need is to link two containers and trigger a restart of container A if container B restarts.
This question seems to be quite similar to, if not a duplicate of, this one.
TL;DR: There has been a shift away from defining complex restart policies in docker/docker-compose toward explicitly checking for dependencies from within your service, so that the behaviour is deployment agnostic. The recommendation is therefore to build specific checks into the container that 'depends' on other services and to crash cleanly when those dependencies are not met, so that a simple restart: always policy is all that is needed.
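
A minimal sketch of that recommendation, assuming a hypothetical server listening on port 8080 and a hypothetical client start script (/start-client); the client refuses to start, and keeps being restarted, until it can reach the server:

    services:
      server:
        image: my-server:latest    # hypothetical image
        restart: always
      client:
        image: my-client:latest    # hypothetical image
        restart: always
        depends_on:
          - server
        # hypothetical override: wait for the server, then exec the real start script;
        # requires nc (netcat) to be present in the image
        entrypoint: ["sh", "-c", "until nc -z server 8080; do echo waiting for server; sleep 2; done; exec /start-client"]

If the client also exits with a non-zero code whenever its connection to the server is lost, restart: always re-runs the same check after a server restart, which gives the "restart A when B restarts" behaviour without any special linking.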

Is it possible to switch port binding between docker containers without downtime?

Scenario:
There is a container running image version 1.0 with container port 8080 published on localhost port 80. A new version of the image is available and the versions need to be switched. No orchestration tool is running (Kubernetes, OpenShift, etc.).
Is it possible to start a container with version 1.1, make sure it runs without a problem, and only then switch the port binding over to it?
Please keep in mind that I want to keep it simple: no replication, etc.
Simply a Docker container with a port bound to localhost.
Questions:
1. Is it possible to switch the exposed port between containers without downtime?
2. If not, is there any mechanism built into Docker (free edition) to do such a switch?
Without downtime, you'd need a second replica of the service up and running, and a proxy in front of that service that listens for user requests and routes them from one replica to the other. Both Swarm Mode and Kubernetes provide this capability with similar tools; the exposed port is indirectly connected to the app via either an application reverse proxy, or some iptables rules and IPVS entries in the kernel.
Out of the box, recent versions of docker include support for Swarm Mode with nothing additional to install. You can run a simple docker swarm init to start a single node swarm cluster in less than a second. And then instead of docker-compose up you switch to docker stack deploy -c docker-compose.yml $stack_name to manage your projects with almost the same compose file. For swarm mode, you'll want to be on version 3 of the compose file syntax.
For a v3-syntax compose file in swarm mode that has no outage on an update, you'll want healthchecks defined in your image to monitor the application and report back when it's ready to receive requests. Then you'll want a deploy section in the compose file to either run multiple replicas for HA, or at least configure a single replica with a "start-first" policy to ensure the new service is up before the old one is stopped. See the compose docs for settings to adjust: https://docs.docker.com/compose/compose-file/#update_config
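A sketch of such a compose file (the image tag, port, and /health endpoint are assumptions, and the healthcheck assumes curl is present in the image):

    version: "3.8"
    services:
      app:
        image: my-app:1.1                 # assumed image/tag
        ports:
          - "80:8080"
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]  # assumed endpoint
          interval: 10s
          timeout: 3s
          retries: 3
        deploy:
          replicas: 1
          update_config:
            order: start-first            # bring the new task up before stopping the old one
            failure_action: rollback

Deployed with docker stack deploy -c docker-compose.yml $stack_name, an update then starts the new task, waits for its healthcheck to pass, and only afterwards shuts down the old one.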
For an application-based reverse proxy in Docker, I really do like Traefik, but more to allow me to run multiple HTTP-based container services with a single port opened. It lets me map requests to the right container based on the hostname/path/HTTP headers, while at the same time providing features to migrate between different versions by weighting which backend to use, so you can do more than simple round-robin load balancing during an upgrade.
There is no mechanism native to Docker that would allow you replace one container with another with no interruption. On the other hand, the duration of the interruption can probably be measured in milliseconds; whether or not this is really an issue for you depends entirely on your application.
You can get the behavior you want by introducing a dynamic reverse proxy such as Traefik into your configuration. The proxy binds to host ports and handles requests from remote systems, then distributes those requests to one or more backend containers.
You can create and remove backend containers as you please, and as long as at least one is running your application will be available. For your specific use case, this means that you can start the new version of your application first, then retire the old one, all without any interruption in service.
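
For the plain-Docker (no Swarm) route, a sketch of that setup with Traefik as the dynamic reverse proxy; the hostname, image names, and application port are placeholders:

    services:
      traefik:
        image: traefik:v2.10
        command:
          - "--providers.docker=true"
          - "--providers.docker.exposedbydefault=false"
          - "--entrypoints.web.address=:80"
        ports:
          - "80:80"                                   # only the proxy binds the host port
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      app:
        image: my-app:1.0                             # placeholder; start my-app:1.1 alongside, then remove this one
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.app.rule=Host(`app.localhost`)"         # placeholder hostname
          - "traefik.http.services.app.loadbalancer.server.port=8080"     # placeholder app port

Because only Traefik holds the host port, backend containers can come and go; as long as at least one container with matching labels is up, requests keep being served.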

Preserve old docker-compose containers until the new ones are ready

I am deploying my app via docker-compose; it has 2 services: the server app and nginx.
So in CI I created the following script of instructions:
docker-compose build # build new images
docker-compose down # stop and remove the old containers
docker-compose up -d # start new containers
But the server app container has its own startup time, so right after deployment I see a 502 page, because the server app is not yet ready to receive calls while nginx already is.
What I want to do is to keep the old containers running while I build and start the new ones, wait some time for the server app to be ready, and then somehow swap them over, so the whole operation would be seamless for users.
How can I do it?
It is not possible with docker-compose alone. But, yes, it is possible with container orchestration tools like Kubernetes, the most popular of them.
From the official Kubernetes site:
Automated rollouts and rollbacks
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
Self-healing
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
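
To make the quoted behaviour concrete, a sketch of a Kubernetes Deployment for the server app with a rolling update and a readiness probe (the name, image tag, port, and /health path are assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: server-app                  # assumed name
    spec:
      replicas: 2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0             # never take all instances down at once
          maxSurge: 1
      selector:
        matchLabels:
          app: server-app
      template:
        metadata:
          labels:
            app: server-app
        spec:
          containers:
          - name: server-app
            image: my-server-app:1.1    # assumed image/tag
            ports:
            - containerPort: 8080
            readinessProbe:             # traffic is only routed once this passes
              httpGet:
                path: /health           # assumed health endpoint
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 5

During an update, Kubernetes starts a new pod, waits for the readiness probe to succeed, and only then removes an old pod, which avoids the 502 window described above.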

Recommended way to run a Docker Compose stack in production?

I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production - to have the whole stack start on boot, and for containers to restart or be recreated if they crash. There aren't any volumes I care about and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using specifically docker-compose in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot up, but I don't think this is the way to do it with Docker containers, as they end up being ultimately supervised by the Docker daemon?
As a simple start I am thinking to just put restart: always on all my services and make an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case I don't have need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client tool, but when you run docker-compose up -d all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped, to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once and Docker takes care of the rest.
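A minimal sketch of that setup, with a placeholder image name; once the Docker daemon itself is enabled on boot, this is all the "supervision" the stack needs:

    services:
      web:
        image: my-django-web:latest   # placeholder image
        restart: unless-stopped       # restarted after crashes and daemon/host reboots,
                                      # but left down after an explicit docker stop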
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find more information about using docker-compose in production in their documentation. But, as they mention, Compose is primarily aimed at development and testing environments.
If you want to use your containers in production, I would suggest using a suitable tool to orchestrate them, such as Kubernetes.
If you can organize your Django application as a swarmkit service (Docker 1.11+), you can orchestrate the execution of your application with tasks.
SwarmKit has restart policies (see the swarmctl flags):
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
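With current Docker, the same SwarmKit restart policies are usually expressed in the deploy section of a version 3 compose file used with docker stack deploy; a sketch with a placeholder image:

    version: "3.8"
    services:
      web:
        image: my-django-app:latest   # placeholder image
        deploy:
          replicas: 2
          restart_policy:
            condition: on-failure     # none | on-failure | any
            delay: 5s
            max_attempts: 3
            window: 60s

The window setting is the time frame used to decide whether a restart attempt counted as successful, which is how SwarmKit limits "maximum number of attempts in a given time window".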
You say that you use AWS, so why not use ECS, which is built for exactly what you are asking? You create an application which is the package of your 5 containers, and you configure which EC2 instances, and how many of them, you want in your cluster.
You just have to convert your docker-compose.yml to the corresponding Dockerrun.aws.json, which is not hard.
AWS will start your containers when you deploy and also restart them in case of a crash.

Is it wrong to run a single process in docker without providing basic system services?

After reading the introduction of the phusion/baseimage I feel like creating containers from the Ubuntu image or any other official distro image and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facilities other than the messages written by mysqld to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on this issue. Basically, the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new container rather than trying to set up initialization etc. inside the containers. So if you are looking for the canonical best practice, the answer is: no, no basic Linux services. It also makes sense when you think in terms of many containers running on a single node - do you really want them all to run their own copies of these services?
That being said, the state of logging in the Docker service is famously broken. Even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and have a logrotate daemon etc. running in the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
Edit:
With Docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
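A sketch of the log-mounting approach described above, with a placeholder image and placeholder paths; logrotate then runs on the host against the mounted directory:

    services:
      app:
        image: my-app:latest              # placeholder image
        volumes:
          - /var/log/my-app:/var/log/app  # placeholder host and container log paths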
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of/requires multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is startup time: when creating a Docker provisioning system, it's essential to keep the startup time of a container to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, then I should go for it. But that's not always possible.
To answer your question directly. No, it's not wrong to run a single process in docker.
HTH
