I have an existing stack in a Swarm that I'd like to add some templated variables to. The Swarm is currently managed by, but was not created in, Portainer. I no longer have access to the original YML that created the stack, and in any case many edits have been made to the services since it was deployed.
Portainer easily lets me add and remove services, but it seems the ability to associate a service with a stack requires the original YML.
Is there an automated way to extract a YML file from an existing stack? If not, is there a way to associate a new service with an existing stack without using docker stack deploy?
docker stack deploy can easily add new services to an existing stack namespace. Just don't use the --prune flag.
We use this a lot in our production and staging CD pipelines: we use stacks as environments and publish multiple microservices into a single stack, either by passing multiple --compose-file directives or by running a single-file stack deploy when only one service is affected.
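For example, a later deploy against the same stack name, using a compose file that contains only the new service, adds that service to the existing namespace (a sketch; the file and stack names are hypothetical):

docker stack deploy --compose-file new-service.yml my-stack

Without --prune, the services already running in my-stack are left untouched.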
Docker's DNS resolution is even more interesting. Docker has no awareness of the stack namespace your services are deployed to: DNS is resolved by the networks your service is attached to and the network aliases that have been implicitly and automatically assigned to your service.
Network aliases can be explicitly controlled at the service level, with no stacks at all, via docker service create --network name=my-network,alias=web1, and there is a similar alias syntax for the networks section of services in compose files.
Related
I have been curious since I just finished a course on Docker, where it was stated that docker stack deploy is the preferred practice for production.
Is there any benefit to using docker stack deploy for, say, 3 services, instead of creating each service manually with docker service create, apart from the fact that docker stack makes it easier to create the services?
I have a multi-container web application which is defined in a docker-compose.yml file. In our test environment, I wish to run multiple instances of this stack on the same Docker Swarm host. The stacks will be identical save for some minor configuration details (e.g. each stack may use different databases and/or container image tags).
Specifically, I would like it to work as follows:
https://awesomeco.com/dev-app/ will point to the dev stack
https://awesomeco.com/test-app/ will point to the test stack
https://awesomeco.com/qa-app/ will point to the QA stack
All 3 stacks will use the same docker-compose.yml file.
I intend to use a uniquely-named Docker network for each stack to prevent naming collisions when containers communicate within a stack. However, it is not clear to me how I can put an Nginx reverse proxy in front of all of the stacks to route traffic to the correct stack, as each stack will have the same set of service names.
For example, if each stack has a service named web-app, and my nginx reverse proxy is connected to all the stack networks, how do I route traffic to the "right" instance of the web-app service?
Does anyone know how best to achieve this? And, if it is not possible and each stack must have uniquely named services, won't communication between stack containers suddenly become much harder? For example, the service names are currently hard-coded in individual services' source code e.g. web-app may communicate with the database via http://database-server. Would these names all need to become dynamic too?
This can be achieved using network aliases. These can be used to resolve the container by another name on the network it is connected to.
In a docker compose file they would be defined as follows:
services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias3
      other-network:
        aliases:
          - alias2
Accordingly, in my particular use case, each instance of the web-app service would need to have a unique alias in the network it shares with the reverse proxy. This will permit the reverse proxy to differentiate between the different instances of the web-app service.
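A minimal sketch of how that could look, assuming the alias is injected through an environment variable and the shared proxy network already exists (the variable, image, and network names are hypothetical). Note that docker stack deploy does not substitute ${...} variables itself, so the file is pre-rendered with envsubst first:

services:
  web-app:
    image: my-org/web-app
    networks:
      proxy-net:
        aliases:
          - ${WEB_APP_ALIAS}
networks:
  proxy-net:
    external: true

export WEB_APP_ALIAS=web-app-dev
envsubst < docker-compose.yml > dev-app.yml
docker stack deploy --compose-file dev-app.yml dev-app

The reverse proxy attached to proxy-net can then reach each instance by its per-stack alias (web-app-dev, web-app-test, and so on).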
Further links:
https://docs.docker.com/compose/compose-file/compose-file-v3/#aliases
https://docs.docker.com/engine/reference/commandline/network_connect/#create-a-network-alias-for-a-container
I’m trying to add a service to a stack after the stack was already deployed. But this new service is having trouble communicating with services (redis) inside the stack.
This is my current understanding of stacks and services, please let me know if there are any inaccuracies.
Stacks are an abstraction on top of services that provide useful utilities like DNS so services across a stack can communicate with one another. Stacks allow us to logically separate out groups of services that might be running on the same swarm (so different development teams can share the same swarm).
I would like to first deploy a stack to a swarm (via a compose file) and then periodically add containers like the one described in this article about one-shot containers. These containers are different because they perform long, stateful operations: they need to be spun up with some initial state, do their work, and then go away. They don't need to be replicated or load balanced.
Essentially what I’m trying to do is:
Launch a “stack” like this:
docker stack deploy --with-registry-auth --compose-file docker-compose.yml my-stack
And then some time later, when certain criteria are met, add a container like this:
docker service create --name statefulservice reponame/imagename
And this generally behaves as expected, except statefulservice isn’t able to talk to redis inside my-stack.
I’m confident that statefulservice is engineered correctly because when it’s added to the docker-compose.yml it behaves as expected.
A further detail that may or may not be relevant is that the command to create a new service is issued from a container within the swarm. This happens using the Go SDK for Docker, and I'm using it the way the one-shot container article describes.
The reason I suspect this isn't relevant: I still run into this issue when I do the operation via the docker CLI only (and don't use the Docker SDK for Go).
When you deploy a stack like this:
docker stack deploy --with-registry-auth --compose-file docker-compose.yml my-stack
it creates a network called my-stack_default.
So to launch a service that can communicate with the services in that stack, you need to launch it like this:
docker service create --name statefulservice --network my-stack_default reponame/imagename
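You can confirm the name of the network the stack created (assuming the stack name above):

docker network ls --filter name=my-stack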
It would help to have a working example, but:
My guess is you don't have the services on the same Docker network. Even if you don't manually assign stack services to a network (which is fine), the stack command creates one that all services in that stack are attached to. You'll need to specify that overlay network in your subsequent service create commands so they can find each other in DNS and communicate inside the swarm.
For example, if I create a stack called nginx, it will add all its services (unless configured otherwise in the stack file) to an overlay network called nginx_default.
I am a new Docker user. In various manuals I usually find a docker-compose.yml file used to describe the Docker setup, but the Docker site uses a docker-stack.yml file for the same purpose. What is the difference?
docker-compose.yml is for the docker-compose tool, which is for multi-container Docker applications on a single Docker engine.
It's invoked with:
docker-compose up
docker-stack.yml is for Docker swarm mode (orchestration and scheduling).
It's invoked with:
docker stack deploy --compose-file docker-stack.yml <stack-name>
To add to Gabbax0r's reply:
Docker Swarm was a standalone component used to cluster Docker engines as a single one.
As of Docker 1.12, standalone Swarm was integrated into the Docker engine as swarm mode (read the preamble at this page), and the standalone Swarm is (or will be) legacy.
To reply to your original question: they are just different names for different use cases, but both are meant to serve the same purpose.
To reply to your comment question: use docker-compose when you have to orchestrate a multi-container app on a single node; if you have to worry about multiple nodes, load balancing, and all that advanced stuff, you are better off going with Swarm.
docker-stack.yml has the following advantages over docker-compose.yml:
Update separately
When working with services, swarms, and docker-stack.yml files, keep in mind that the tasks (containers) backing a service can be deployed on any node in the swarm, and may land on a different node each time the service is updated.
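For instance, a single service inside a stack can be updated in place without redeploying the rest (a sketch; the image tag and stack name are hypothetical):

docker service update --image reponame/imagename:v2 my-stack_statefulservice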
Deploy remotely
If you are running Docker Swarm on a private host, docker-stack.yml can be used to deploy your application to that host remotely, e.g. over SSH.
You can even use a service like Codefresh to do so.
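A minimal sketch of such a remote deploy, assuming Docker 18.09+ and key-based SSH access to the swarm manager (the user and host names are hypothetical):

DOCKER_HOST=ssh://deploy@swarm-manager docker stack deploy --compose-file docker-stack.yml my-stack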
I am building a proof-of-concept Docker Swarm based application stack, intended to evolve a product that is currently deployed to many physical sites and backed by a distributed CDN. The Docker Compose system I've set up includes a number of different image types which I need to ensure are deployed to each physical location (for example, three copies of service A and two copies of service B at each site, each site being several collocated physical machines belonging to the swarm), and others which are deployed only to a central origin location. I'd like to find a way to deploy this with constraints on where the image types end up on the swarm. Is this possible?
Short answer, yes.
Long answer:
Use Docker Compose for managing your cluster; it will ease management.
After creating your swarm, you can make docker-compose target that swarm with:
docker-compose -H <docker-swarm-ip:port> up -d
If you want a container/service to run on a specific host, add the following entry in docker-compose.yml under the service you want to pin to that host:
environment:
  - "constraint:node==<host>"
This is the way I do it now.
I believe this is also available when you use the run command, though I never tried it.
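Note that the constraint:node== environment entry above is the legacy standalone-Swarm syntax. In swarm mode with docker stack deploy, the equivalent is a placement constraint under the deploy key. A sketch, assuming nodes have been labeled beforehand (e.g. with docker node update --label-add site=edge <node-name>); the service and label names are hypothetical:

services:
  service-a:
    image: reponame/service-a
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.labels.site == edge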