I have a 2-machine Swarm cluster. I have installed the simple Docker Compose demo from here on one of the machines. However, when I try to scale the application with the docker-compose scale web=5 command, it only scales on the current machine and does not create any of the new web containers on the other machine in the Swarm cluster as expected.
In every example I've seen from others, the scale command just works, and nothing was mentioned about additional configuration needed to get it to scale across multiple nodes.
Not sure what else to try. I get the same result when running the scale command from either machine.
Please let me know what further information I can provide.
I see now that there were two issues causing my scale commands to fail; however, it is still not working even with a proper multi-host networking setup.
When scaling a container from a Compose application that was linked to another container in that same Compose app: this was failing because I was joining the containers with the deprecated(?) "links" functionality rather than the newer multi-host networking functionality. Apparently, "links" only works on a single machine and cannot be scaled across multiple machines. (I'm fairly sure this is the case, but could be wrong.)
When attempting to scale an unlinked container: this was actually working as expected. I had forgotten that I had other containers running on the machine I was expecting Docker to scale my container out to, so the Swarm scheduler just put the newly scaled containers onto the current machine, since it was the least utilized. (This was on a 2-machine Swarm cluster.)
EDIT - Actual Solution
Okay, it looks like the final problem was that I cannot scale the part of the Compose app that uses build to create its image rather than specifying the image with image.
I suppose this makes sense because the machine it is trying to scale that container to doesn't have the build context available to create that image, but I had assumed Docker Compose/Swarm would be smart enough to figure that out and somehow copy it across machines.
So the solution is to build that image beforehand with docker build, push it either to the public Docker Hub or to your own private registry, and have the Docker Compose file specify that image with image rather than trying to create it with build.
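For example (a sketch only; myuser/web is a hypothetical image name, so substitute your own Docker Hub account or private registry address):
docker build -t myuser/web:latest .
docker push myuser/web:latest
Then in the Compose file:
web:
  image: myuser/web:latest
  # build: .  removed; the image is now pulled from the registry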
One thing you could do is label the web containers (such as com.mydomain.myapp.category=web) and make a soft anti-affinity rule for the label (such as affinity:com.mydomain.myapp.category!=~web). This would tell Swarm to first try to schedule another container with com.mydomain.myapp.category=web on a host that doesn't already contain such a container (but to schedule it on a host that already has one if no other host is available).
The modified Docker Compose file in that repository would be something like:
web:
  build: .
  volumes:
    - .:/code
  links:
    - redis
  expose:
    - "5000"
  environment:
    - "affinity:com.mydomain.myapp.category!=~web"
  labels:
    - "com.mydomain.myapp.category=web"
redis:
  image: redis
lb:
  image: tutum/haproxy
  links:
    - web
  ports:
    - "80:80"
  environment:
    - BACKEND_PORT=5000
    - BALANCE=roundrobin
There are a lot of applications which I launch on my workstation using docker-compose up.
Reasons:
They don't have an installer, or I don't want to use it
They require a dedicated storage engine to be present
They require a build process step
They are created by me and I want them to be easily launched on any workstation
etc.
So what I usually end up with is the following file structure:
myAppDir
- docker-compose.yml
- Dockerfile (not always)
- someConfigFile
And my docker-compose.yml is something like this:
(It can contain 2 or 3 services, but I provide the simplest form that I use)
version: '3.7'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306
Then when I need to launch the application I just perform:
docker-compose up # (or with --build)
Recently I tried to add:
deploy:
  resources:
    limits:
      cpus: '0.50'
      memory: 200M
and got a message:
Some services (mysql) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
So I tried:
docker stack deploy mystack --compose-file docker-compose.yml
and got message:
Ignoring unsupported options: restart
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
This seems more complex than docker-compose up.
I saw that I can use --compatibility flag e.g.
docker-compose --compatibility up
But the word compatibility means to me that I should soon switch to a new way of launching my apps locally.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
If you want to specify memory limits and similar constraints for local containers, you need to use a version 2 Compose file. This is called out in the documentation for the deploy: resources: section. docker/compose#4513 has some reasonably clear statements that Compose file version 2 is more targeted at local setups and version 3 more at Swarm installations, and that Docker intends to keep supporting both file versions.
Docker has put many options and functions specific to their Swarm cluster-installation mode into the core product. Anything that mentions a "stack", for example, is specific to a Swarm setup. One consequence of Swarm and plain-Docker things being combined together is that the deploy: Docker Compose options only have an effect in Swarm mode. The documentation for the deploy: key notes:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
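As a rough sketch (not a definitive recipe) of the same limits in a version 2 file, using the mysql service from the question and assuming Compose file format 2.2 or later, which supports cpus and mem_limit directly on the service:
version: '2.4'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    cpus: 0.5        # roughly equivalent to cpus: '0.50' under deploy.resources.limits
    mem_limit: 200m  # roughly equivalent to memory: 200M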
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
Docker Compose V3 is meant to be used with Docker Swarm deployments, so you need to run Docker in Swarm mode; otherwise, just keep using V2 and its simpler interface for localhost development.
For example, restart is ignored because that responsibility now belongs to Docker Swarm, not to Docker itself.
Using the --compatibility flag essentially converts your V3 compose file into a V2 compose file at runtime.
So, in short, use V3 if you want to run Docker in Swarm mode and take advantage of all its new features; it's kind of a Kubernetes in Docker land.
I'm setting up a CI/CD solution. I would like to run a docker application on a production machine that has no access to the internet.
The constraints are as follows:
Build needs to happen on machine A
Resulting image/container needs to be exported and transported to machine B
Optionally: Run the container again with a docker-compose file
I know about docker commit and registries, but sadly this is not an option, as the target server does not have access to the internet.
Here's the docker-compose.yaml; this is not set in stone and can change as necessary:
version: '2'
services:
  test_dev_app:
    image: testdevapp:latest
    container_name: test_dev_app
    hostname: test_dev_app
    environment:
      DJANGO_SETTINGS_MODULE: "settings.production"
      APPLICATION_RUN_TYPE: "uwsgi"
    volumes:
      - ./:/data/application
    ports:
      - "8000:8000"
      - "8080:8080"
I'd expect to be able to properly transport a container or image and use the same image on a different machine with docker-compose up.
Esteban is right about how to do it the registry way, but forgot to mention the "tar" way: you can save an image to a tar archive, then later load it into the local image store of another machine.
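A minimal sketch, using the image name from the compose file above:
# on machine A: write the image to a tar archive
docker save -o test_dev_app.tar testdevapp:latest
# copy test_dev_app.tar to machine B, then on machine B:
docker load -i test_dev_app.tar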
The way to transport the image is up to you!
Still, if you plan to do it often, I recommend following the private registry solution: it's definitely cleaner!
~~Pushing Docker images to a registry is the only way (at least supported by Docker out of the box) to share them between servers.~~
If internet access is not an option, then take a look at having your own private docker registry.
Deploy it in a network segment accessible from both the pushing and pulling machine.
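For instance, one common way (a sketch, not the only option) is to run the official registry image on a machine that both sides can reach:
docker run -d -p 5000:5000 --restart=always --name registry registry:2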
Then build your docker image including your registry's address and push it:
docker build -t <private_registry_address>/test_dev_app:latest .
docker push <private_registry_address>/test_dev_app:latest
When you push it, the Docker client will know that it has to use the specified address instead of the public registry.
Or, as mentioned by tgogos in the comment below, check his link on how to use docker save / docker load in air-gapped environments.
I've been trying for a few days to get a Docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MongoDB instead of MySQL.
Of course I started by looking on Docker Hub and trying to compose an image from others, and Googled for configs. The simplest one couldn't get past setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, even though the mentioned variables were set in the yml.
I tried to start with just the centos/mongodb image, install Apache, PHP and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run Apache within a Docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
but I can't attach to the container or reach localhost with the default settings, though apparently the compose stage goes OK.
Does anyone, by chance, have a working set of Dockerfiles / docker-compose files?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, write a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services in one container but to have a single container for a single service. All Docker tools are aligned with this.
To start your LAMP stack, you just have to download docker-compose, create a docker-compose.yml file with the 3 services defined, and run docker-compose up.
Docker Compose is an orchestration tool for containers, suited for a single machine.
You should take at least a short tour of this tool; as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    # ... here goes apache config ...
  db:
    image: mongo
    # ... here goes mongo config ...
  php:
    image: php
    # ... here goes php config ...
After you start this with docker-compose up, a network is created automatically for you and all services join it. They will see each other by their service names (for example, to connect to the database from php you would use db as the host name).
To connect to these services from the host PC, you will need to publish ports explicitly.
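For example, a hedged sketch of publishing the Apache port (the bitnami/apache image listens on 8080 by default; adjust the container port if your image differs):
apache:
  image: bitnami/apache:latest
  ports:
    - "8080:8080"   # host port : container port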
I will try to describe my desired functionality:
I'm running Docker Swarm with a docker-compose file.
In the docker-compose file I have services; for simplicity let's call them A, B, C.
Assume service C includes shared code modules that need to be accessible to services A and B.
My questions are:
1. Must each service that needs access to the shared volume mount the C volume to its own local folder (using the volumes section as below), or can it be accessible without mounting/copying it to a path in the local container?
2. In Docker Swarm, it can happen that the instances of services A and B reside on computer X, while service C resides on computer Y.
Is it true that, because the services are all maintained under the same Docker Swarm stack, they will communicate with service C without problems?
If not, which definitions are needed to achieve this?
My structure is something like this:
version: "3.4"
services:
A:
build: .
volumes:
- C:/usr/src/C
depends_on:
- C
B:
build: .
volumes:
- C:/usr/src/C
depends_on:
- C
C:
image: repository.com/C:1.0.0
volumes:
- C:/shared_code
volumes:
C:
If what you’re sharing is code, you should build it into the actual Docker images, and not try to use a volume for this.
You’re going to encounter two big problems. One is getting a volume correctly shared in a multi-host installation. The second is a longer-term issue: what are you going to do if the shared code changes? You can’t just redeploy the C module with the shared code, because the volume that holds the code already exists; you need to separately update the code in the volume, restart the dependent services, and hope they both work. Actually baking the code into the images makes it possible to test the complete setup before you try to deploy it.
Sharing code is an anti-pattern in a distributed model like Swarm. Like David says, you'll need that code in the image builds, even if there's duplicate data. There are lots of ways to have images built on top of others to limit the duplicate data.
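One possible sketch (the file names and the alpine base are assumptions, not your actual setup) is to bake the shared modules into a base image that A and B build on top of:
# Dockerfile.shared (hypothetical): bake C's shared modules into a base image
FROM alpine:3
COPY shared_code/ /usr/src/C/
# build and push it so every node can pull it, e.g.:
#   docker build -f Dockerfile.shared -t repository.com/shared-base:1.0.0 .
#   docker push repository.com/shared-base:1.0.0

# Dockerfile for service A (hypothetical): start from the shared base
FROM repository.com/shared-base:1.0.0
COPY . /usr/src/app/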
If you still need to share data between containers in swarm on a file system, you'll need to look at some shared storage like AWS EFS (multi-node read/write) plus REX-Ray to get your data to the right containers.
Also, depends_on doesn't work in swarm. Your apps in a distributed system need to handle the lack of connection to other services in a predictable way. Maybe they just exit (and swarm will re-create them) or go into a retry loop in code, etc. depends_on is meant for the local docker-compose cli in development, where you want to spin up an app and its dependencies by doing something like docker-compose up api.
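As a hedged illustration of the retry-loop idea (the service name C and port 8080 are assumptions; nc comes from netcat and may need to be installed in the image), an entrypoint wrapper could look like:
#!/bin/sh
# wait-for-c.sh (hypothetical): keep retrying until service C answers, then start the app
until nc -z C 8080; do
  echo "waiting for C..."
  sleep 2
done
exec "$@"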
I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
It does not understand build path, links, container names...
My file is located here:
https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up it works, but with stack deploy it does not.
How do I define the path for the Dockerfile?
docker stack deploy works only on images, not on builds.
This means that you will have to push your images (created with the build process) to an image registry; later, docker stack deploy will download the images and run them.
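With the compose file above, the workflow is roughly this sketch (assuming the image: names are changed to include a registry address or namespace you are allowed to push to):
docker-compose build      # build the images defined with build:
docker-compose push       # push them to the registry referenced in image:
docker stack deploy --compose-file=docker-compose.yml vertx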
Here you have an example of how it was done for a PHP application.
You have to pay attention to parts 1, 3 and 4.
The articles are about PHP, but they can easily be applied to any other language.
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers like with "docker run", and it is assumed that you will be doing this in a distributed environment more often than not.
I'll break down the answer by these specific things you listed.
It does not understand build path, links, container names...
Links
The link option has been deprecated for quite some time in favor of the network service discovery feature introduced alongside the "docker network" feature. You no longer need to specify specific links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and then they can discover each other by container name or "network alias".
docker-compose will put all your containers into the same network by default, and it sets up the compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
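For example, with the compose file from the question, another container on vertx-network could reach the database with something like this (credentials taken from the environment section above):
psql -h postgres-node -U postgres -d socnet   # password: socnet, from POSTGRES_PASSWORD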
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for network service discovery purposes, add a different alias or use the service name alias that you get when using docker-compose or docker stack deploy.
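A sketch of adding an extra alias (the alias name vertx is made up) in the compose file:
vertx-node:
  image: vertx-node
  networks:
    vertx-network:
      aliases:
        - vertx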
Build path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
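For a single-node swarm, something like this sketch (using the stack name from the question) may be enough, since the locally built images are already present on the only node:
docker-compose build
docker stack deploy --compose-file=docker-compose.yml vertx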