Benefits of Docker stack vs docker service create

I have been curious since I just finished a course on Docker, where it was stated that docker stack deploy is the preferred practice for production.
Is there any benefit to using docker stack deploy for, say, 3 services, instead of creating each service manually with docker service create, apart from the fact that using docker stack makes it easier to create the services?
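For a concrete sense of the trade-off, here is a minimal sketch (service, network, and image names are hypothetical). Without a stack, the shared network and each service must be created by hand:

docker network create --driver overlay app-net
docker service create --name web --network app-net -p 80:80 example/web
docker service create --name api --network app-net example/api
docker service create --name cache --network app-net redis:alpine

With a stack, one command deploys (and later updates) all three from a single versioned file:

docker stack deploy --compose-file docker-compose.yml app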

Related

Extract docker-compose.yml from an existing Docker Swarm

I have an existing stack in a Swarm that I'd like to add some templated variables to. The Swarm is currently managed by, but was not created in, Portainer. I no longer have access to the original YML that created the stack and many edits have been made to the services since it was used anyway.
Portainer easily lets me add and remove services, but it seems the ability to associate a service with a stack requires the original YML.
Is there an automated way to extract a YML file from an existing stack? If not, is there a way to associate a new service with an existing stack without using docker stack deploy?
docker stack deploy can easily add new services to an existing stack namespace. Just don't use the --prune flag.
We use this a lot in our production and staging CD pipelines: we use stacks as environments, and so publish multiple microservices into a single stack, either by passing multiple --compose-file directives or by doing single --compose-file stack deploys when only one service is affected.
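As a hedged sketch of that workflow (stack and file names are hypothetical), deploying an extra compose file against the same stack name merges its services into the existing namespace:

docker stack deploy --compose-file new-service.yml my-stack

Without --prune, services already running in my-stack but absent from new-service.yml are left untouched.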
Docker's DNS resolution is even more interesting. Docker has no awareness of the stack namespace your services are deployed to: DNS is resolved by the networks your service is attached to and by the network aliases that have been implicitly and automatically assigned to your service.
Network aliases can be explicitly controlled at the service level, with no stacks at all, via docker service create --network name=my-network,alias=web1, and there is a similar alias syntax in the networks section of services in compose files.
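For reference, a minimal compose-file sketch of that alias syntax (service, network, and alias names are hypothetical):

version: "3.8"
services:
  web:
    image: example/web
    networks:
      my-network:
        aliases:
          - web1
networks:
  my-network:
    driver: overlay

Other containers on my-network can then resolve the service as either web or web1.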

docker-swarm vs. docker-compose on a single host in production

Is there a reason to use docker-swarm instead of docker-compose for deploying a single host in production?
I'm currently rewriting an existing application. My predecessors set up the application using docker-swarm. But I do not understand why: the application will only consist of a single host running a couple of services. These services will only supply some local information on the customer network via a REST-Api to a kubernetes cluster (so no real load or reason to add additional hosts).
I looked through the Docker website and could not find a reason to use docker-swarm to deploy a single host, apart from testing a deployment on a single host dev environment.
Are there benefits to using docker-swarm compared to docker-compose regarding deployment, networking, and so on?
Docker Swarm and Docker Compose are fundamentally different animals. Compose is a build tool that lets you define and configure a group of related containers, whereas swarm is an orchestration tool that manages multiple docker engines in a way that lets you treat them (somewhat) as a single unit. Swarm exposes an API that is mostly compatible with the Docker Remote API, which allows existing applications to use Swarm to scale horizontally without having to completely overhaul the existing interface to the container engine.
That said, much of the functionality in Docker Compose that overlaps with Docker Swarm has been added incrementally. Compose has grown over time, and the distinction between the two has narrowed a bit. Swarm was eventually integrated into the Docker engine, and Docker Stack was introduced, allowing compose.yml files to be read directly by Docker, without using Compose.
So the real question might be: what is the difference between docker compose and docker stack? Not a whole lot. Compose is actually a separate project, written in Python, that uses the Docker API under the hood. Stack does much the same thing as Compose but is integrated into Docker. Stack also wants pre-built images, while Compose will handle image builds for you, which makes Compose very handy for development.
What you are dealing with might be a product of a time when these 2 tools were a lot more distinct. Docker Swarm is part of Docker, and it allows for easy scaling if needed (even if you don't need it now, it might be good down the road). On the other hand, Compose (in my opinion anyway) is much more useful for development situations where you are making frequent tweaks to your images, and rebuilding.
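A small sketch of the pre-built-images point (image name and build context are hypothetical): Compose honors the build: key and builds the image for you, while docker stack deploy ignores build: and requires image: to reference an already-pushed image.

version: "3.8"
services:
  web:
    build: ./web                           # used by docker-compose up, ignored by docker stack deploy
    image: registry.example.com/web:1.0    # stack deploy pulls this pre-built image

A common flow is therefore docker-compose build && docker-compose push, followed by docker stack deploy.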

Adding a service to a stack after the stack has been deployed

I’m trying to add a service to a stack after the stack was already deployed. But this new service is having trouble communicating with services (redis) inside the stack.
This is my current understanding of stacks and services, please let me know if there are any inaccuracies.
Stacks are an abstraction on top of services that provide useful utilities like DNS, so services across a stack can communicate with one another. Stacks allow us to logically separate groups of services that might be running on the same swarm (so different development teams can share the same swarm).
I would like to first deploy a stack to a swarm (via a compose file) and then periodically add containers like the ones described in this article about one-shot containers. These containers are different because they perform long, stateful operations: they need to be spun up with some initial state, do their work, and then go away. They don't need to be replicated or load balanced.
Essentially what I’m trying to do is:
Launch a “stack” like this:
docker stack deploy --with-registry-auth --compose-file docker-compose.yml my-stack
And then some time later, when certain criteria are met, add a container like this:
docker service create --name statefulservice reponame/imagename
And this generally behaves as expected, except statefulservice isn’t able to talk to redis inside my-stack.
I’m confident that statefulservice is engineered correctly because when it’s added to the docker-compose.yml it behaves as expected.
A further detail that may or may not be relevant is that the command to create a new service is issued from a container within the swarm. This happens using the Go SDK for Docker, and I'm using it the way the one-shot container article describes.
The reason I suspect this isn't relevant: I still run into the issue when I perform the operation via the docker CLI only (and don't use the Docker SDK for Go).
When you deploy a stack like this:
docker stack deploy --with-registry-auth --compose-file docker-compose.yml my-stack
It creates a network called my-stack_default.
So to launch a service that can communicate with the services in that stack, you need to launch it like this:
docker service create --name statefulservice --network my-stack_default reponame/imagename
It would help to have a working example, but:
My guess is you don't have the services on the same Docker network. Even if you don't manually assign stack services to a network (which is fine), the stack command creates one that all services in that stack are attached to. You'll need to specify that overlay network in your subsequent docker service create commands so the services can find each other in DNS and communicate inside the swarm.
For example, if I create a stack called nginx, it will add all of its services (unless configured otherwise in the stack file) to an overlay network called nginx_default.
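If you would rather not depend on the generated <stack>_default name, one hedged alternative (the network name here is hypothetical) is to create an attachable overlay network up front and declare it external in the stack file:

docker network create --driver overlay --attachable shared-net

version: "3.8"
services:
  redis:
    image: redis:alpine
    networks:
      - shared-net
networks:
  shared-net:
    external: true

Later one-off services can then join it explicitly: docker service create --name statefulservice --network shared-net reponame/imagename.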

docker-compose.yml vs docker-stack.yml: what's the difference?

I am a new Docker user. In various manuals I usually find a docker-compose.yml file used to describe the Docker setup, but the Docker site uses a docker-stack.yml file for this purpose. What is the difference?
docker-compose.yml is for the docker-compose tool, which handles multi-container Docker applications on a single Docker engine.
It's invoked with
docker-compose up
docker-stack.yml is for Docker Swarm (which handles orchestration and scheduling across nodes).
It's invoked with
docker stack deploy
To add to Gabbax0r's reply:
Docker Swarm was a standalone component used to cluster Docker engines as a single one.
As of Docker 1.12 the standalone "Swarm" was integrated into the Docker engine (read the preamble on that page), and the standalone version is (or will become) legacy.
To reply to your original question: they are just different names for different use cases, but both are meant to serve the same purpose.
To reply to your comment question: use docker-compose when you have to orchestrate a multi-container app on a single node; if you have to worry about multiple nodes, load balancing, and all that advanced stuff, you are better off going with Swarm.
docker-stack.yml has these advantages over docker-compose.yml:
Update separately
When working with services, swarms, and docker-stack.yml files, keep in mind that the tasks (containers) backing a service can be deployed on any node in a swarm.
This may be applied to a different node each time the service is updated.
Deploy remotely
If you are running Docker Swarm on your private host, docker-stack.yml can be used to deploy your application to that host remotely over SSH.
You may even use a service such as Codefresh to do so.
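As a sketch of the swarm-only options such a file carries (replica counts and timings are hypothetical), the deploy: section is honored by docker stack deploy but ignored by docker-compose up:

version: "3.8"
services:
  web:
    image: example/web:1.0
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure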

Recommended way to run a Docker Compose stack in production?

I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production: the whole stack should start on boot, and containers should restart or be recreated if they crash. There aren't any volumes I care about, and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using specifically docker-compose in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot up, but I don't think this is the way to do it with Docker containers, as they end up being ultimately supervised by the Docker daemon?
As a simple start I am thinking to just put restart: always on all my services and make an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case I don't have need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client-side tool, but when you run docker-compose up -d all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped, to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once and Docker takes care of the rest.
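A minimal sketch of that setup (service and image names are hypothetical; the systemctl line assumes a systemd-based host):

version: "3"
services:
  web:
    image: example/django-app
    restart: unless-stopped

sudo systemctl enable docker    # make the daemon start on boot
docker-compose up -d            # run once; Docker restarts the containers from then on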
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find more information about using docker-compose in production in the documentation. But, as it mentions, Compose is primarily aimed at development and testing environments.
If you want to run your containers in production, I would suggest using a purpose-built orchestration tool, such as Kubernetes.
If you can organize your Django application as a SwarmKit service (Docker 1.11+), you can orchestrate the execution of your application with tasks.
SwarmKit has a restart policy (see the swarmctl flags):
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays, and limits (a maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
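In swarm-mode terms those policies map onto docker service create flags; a hedged sketch (service and image names are hypothetical):

docker service create \
  --name web \
  --restart-condition on-failure \
  --restart-delay 5s \
  --restart-max-attempts 3 \
  --restart-window 120s \
  example/django-app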
You say that you use AWS, so why not use ECS, which is built for exactly what you are asking? You create an application that packs your 5 containers together, and you configure which EC2 instance types, and how many, you want in your cluster.
You just have to convert your docker-compose.yml to the specific Dockerrun.aws.json format, which is not hard.
AWS will start your containers when you deploy, and will also restart them in case of a crash.
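As a hedged sketch of what such a conversion can produce (names, ports, and memory values are hypothetical; this is the multi-container Dockerrun.aws.json v2 format, read by Elastic Beanstalk, which runs the containers on ECS):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "reponame/django-app",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 8000 }
      ]
    }
  ]
}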
