I am trying to deploy my app with Compose and Swarm. For now I don't want to upgrade my docker-compose.yaml from v2 to v3, so based on Stoneman's answer and the official Swarm documents, I can only do that with standalone (legacy) Swarm rather than Docker swarm mode.
Following the official instructions, I successfully set up a swarm cluster. I ran docker -H :4000 info on the swarm manager node to check the cluster status; there are two other worker nodes in this cluster. Next, I want to create an overlay network on this cluster and reference that network in my docker-compose.yaml. But when I ran docker -H :4000 network create -d overlay test on the swarm manager node to create the network, it reported this error: Error response from daemon: Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
So, how can I create a network with a swarm cluster (without docker-machine and VirtualBox)? Currently, the swarm manager and worker nodes are running as Docker containers.
Did you set up overlay networking with its own etcd backend first? https://docs.docker.com/network/overlay-standalone.swarm/
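With legacy (standalone) Swarm, overlay networking requires an external key-value store, and each Docker daemon has to be started pointing at it. A minimal sketch, assuming an etcd instance is already reachable at 192.168.5.11:2379 (the address and interface name are placeholders):
dockerd --cluster-store=etcd://192.168.5.11:2379 --cluster-advertise=eth0:2376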
Swarm "classic" is deprecated and replaced by docker swarm mode. Everything is harder in classic, including setting up overlay. I wouldn't recommend using it for anything new unless you had a hard requirement.
In swarm mode you run all commands on the swarm manager host. The same goes for creating networks, secrets, etc.
You can find the manager machine with:
$ docker node ls
The manager host is marked with Leader under MANAGER STATUS.
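Illustrative output (IDs and hostnames are made up):
ID                HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
1a2b3c4d *        manager1   Ready    Active         Leader
5e6f7g8h          worker1    Ready    Active
9i0j1k2l          worker2    Ready    Active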
After creating the network on the manager all nodes on that swarm should see the network.
"I ran docker -H :4000 network create -d overlay test"
It's better to declare the network inside the stack yml file, for faster and easier deployment. You can create the network and expose your ports in the yml file; no need to create them manually every time you run the stack.
Add the following block under the docker service:
services:
  ...
    # Network
    networks:
      - network-name-here
    ...
    # Exposed ports
    ports:
      - target: 4000
        published: 4000
At the end of the yml file, add the following block to declare the network, so it's created every time you run docker stack deploy:
networks:
  network-name-here:
    driver: overlay
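Put together, a minimal stack file might look like this (web and myapp-image are placeholders for your own service name and image; the long ports syntax needs compose file format 3.2+):
version: "3.2"
services:
  web:                     # placeholder service name
    image: myapp-image     # placeholder image
    networks:
      - network-name-here
    ports:
      - target: 4000
        published: 4000
networks:
  network-name-here:
    driver: overlay
Deploy it with docker stack deploy -c <file> <stack-name> and the overlay network is created automatically.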
I have a docker swarm node running a set of docker services connected by an overlay network. When needed, I dynamically add another docker node via Terraform: a separate EC2 instance set up and connected as a worker node to the existing swarm network.
I'll run a container from my manager, and the running container needs to talk to the existing services on the manager node. For example: connecting to the Postgres service and running a few queries.
docker -H <node ip> run --network <overlay network where services are running> <some image> <command>
The script running in the container fails with a "Name or service not known" error. I tried to ping manually by bashing into the container, and the ping succeeds after some 4 or 5 seconds. I tried this hundreds of times and I always get the same issue. It also doesn't matter when the node joined the swarm; every time I run the above command, I face the same issue.
Also, I don't have control over what script is run in the container, so I cannot add retries.
One more thing: sometimes some services can be reached immediately. For example, Postgres will fail, but another service exposing REST endpoints can be reached. But it's not always the case.
I was able to reproduce this issue with a bunch of test services:
Steps to reproduce the issue:
Create a docker swarm and add another machine as a worker node to the swarm.
Create an overlay network on node 1:
docker network create -d overlay --attachable platform
Create services on node 1:
for i in {1..25}; do
  docker service create --network platform -p :80 --name "service-${i}" dockerbogo/docker-nginx-hello-world
done
Create a task from node 1 to be run on node 2:
docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1
Docker daemon logs: https://pastebin.com/65Lihp8v
Any help?
I create a swarm and join a node, and everything works fine:
docker swarm init --advertise-addr 192.168.99.1
docker swarm join --token verylonggeneratedtoken 192.168.99.1:2377
I create 3 services on the swarm manager
docker service create --replicas 1 --name nginx --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --publish published=9000,target=9000 php:7.1-fpm
docker service create --replicas 1 --name postgres --publish published=5432,target=5432 postgres:9.5
All services boot up just fine, but if I customize the php image with my app and configure nginx to listen on the php-fpm socket, I can’t find a way to get these three services to communicate, even when I enter the containers using docker exec -it <container-id> bash and try to ping the container names or host names (I even tried to curl them).
What I am trying to say is that I don’t know how to configure nginx to connect to fpm, because I don’t know how one container communicates with another in swarm. Using docker-compose or docker run this is as simple as using the links option. I’ve read all the documentation around, spent hours on trial and error, and I just couldn’t wrap my head around this. I have read about the routing mesh, which will get the ports published, and it really does for the outside world, but I couldn’t figure out on which IP they are published for the internal containers. That also can't be a random IP, as that would cause problems managing my app's configuration, even the nginx configuration.
To have multiple containers communicate with each other, they need to be running on a user-created network. With swarm mode, you want to use an overlay network so containers can run on multiple hosts.
docker network create -d overlay mynet
Then run the services with that network:
docker service create --network mynet ...
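Applied to the three services from the question, that could look like this (a sketch; note that on an overlay network php and postgres don't need published ports at all, because nginx reaches them by service name, e.g. php:9000 and postgres:5432):
docker service create --replicas 1 --name nginx --network mynet --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --network mynet php:7.1-fpm
docker service create --replicas 1 --name postgres --network mynet postgres:9.5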
The easier solution is to use a compose.yml file to define each of the services. By default, the services in a stack are deployed on their own network:
docker stack deploy -c compose.yml stack-name
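For the three services above, a minimal compose.yml sketch (your customized images and nginx config are assumed; the services reach each other by service name, e.g. php:9000, on the stack's default network):
version: "3"
services:
  nginx:
    image: nginx            # or your customized nginx image
    ports:
      - "80:80"
  php:
    image: php:7.1-fpm      # or your customized app image
  postgres:
    image: postgres:9.5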
Or you can just write one docker-compose file and deploy it as a docker stack.
It's easier and more reliable to combine php-fpm and nginx in the same image. I know this goes against the official single-app-per-image approach, but for cases like php-fpm + nginx, where you must have both to serve a request, it's the best option. I have a WIP sample here: https://github.com/BretFisher/php-docker-good-defaults
I'm trying to set up some very simple networking between a pair of Docker containers, and so far all the documentation I've seen is far more complex than what I am trying to do.
My use case is simple:
Container 1 is already running and is listening on port 28016
Container 2 will start after container 1 and needs to connect to container 1 on port 28016.
I am aware I can set this up via Docker-Compose with ease, however Container 1 is long-lived and for this use case, I do not want to shut it down. Container 2 needs to start and automatically connect to container 1 via port 28016. Also, both containers are running on the same machine. I cannot figure out how to do this.
I've exposed 28016 in Container 1's Dockerfile, and I'm running it with -p 28016:28016. What do I need to do for Container 2 to connect to Container 1?
There are a few ways of solving this. Most don't require you to publish the ports.
Using a user-defined network
If you start your long-running container in a user-defined network, Docker will handle name resolution between the containers on that network:
docker network create service-network
docker run --net=service-network --name Container1 service-image
If you then start your ephemeral container in the same network, it will be able to refer to the long-running container by name. E.g.:
docker run --name Container2 --net=service-network ephemeral-image
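For example, a quick hypothetical connectivity check (assuming the alpine image, which ships with ping):
docker run --rm --net=service-network alpine ping -c 1 Container1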
Using the existing container network namespace
You can just run the ephemeral container inside the network namespace of the long-running container:
docker run --name Container2 --net=container:Container1 ephemeral-image
In this case, the service would be available via localhost:28016.
Accessing the service on the host
Since you've published the service on the host with -p 28016:28016, you can access it via the address of the host, which from inside the container is the default gateway. You can get that address with something like:
address=$(ip route | awk '$1 == "default" {print $3}')
And your service would be available on ${address}:28016.
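A hypothetical check from inside Container2, assuming an nc that supports -z (e.g. netcat-openbsd) is available in the image:
address=$(ip route | awk '$1 == "default" {print $3}')
nc -zv "$address" 28016   # succeeds if the published port is reachable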
Here are the steps to perform:
Create a network: docker network create my-net
Connect the already running container to the network: docker network connect my-net <container-name>
Start the new container with the --network my-net option, or with docker-compose add a networks property:
...
    networks:
      - my-net

networks:
  my-net:
    external: true
The containers should now be able to communicate, using the container name as a DNS host name.
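A quick hypothetical check, assuming ping is available in the new container's image:
docker exec -it <new-container> ping -c 1 <container-name>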
I want a shell inside a Docker Service / Swarm network. Specifically, I want to be able to connect to a database that's inside the network.
From the manager node, I tried:
# docker network ls
NETWORK ID     NAME         DRIVER    SCOPE
481c20b4039a   bridge       bridge    local
2fhe9rtim9mz   my-network   overlay   swarm
Then
docker run -it --network my-network alpine sh
But I get the error:
docker: Error response from daemon: swarm-scoped network (event-data-core-prod) is not compatible with docker create or docker run. This network can only be used by a docker service.
Is it possible to somehow start an interactive session that can connect to a network service?
Since Docker Engine v1.13 (as already mentioned by johnharris85), it has been possible for non-service containers to attach to swarm-mode overlay networks by using the --attachable command-line parameter when creating the network:
docker network create --driver overlay --attachable my-attachable-overlay-network
Regarding your followup question:
Is there a way to change this for an extant network?
Yes and no. As I already described in another question, you can make use of the docker service update feature:
To update an already running docker service:
Create an attachable overlay network:
docker network create --driver overlay --attachable my-attachable-overlay-network
Remove the non-attachable overlay network (in this example called my-non-attachable-overlay-network) from the service:
docker service update --network-rm my-non-attachable-overlay-network myservice
Add the attachable overlay network to the service:
docker service update --network-add my-attachable-overlay-network myservice
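After that, an interactive session on the network (as in the original question) should work:
docker run -it --network my-attachable-overlay-network alpine sh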
Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't find the right answer for my setup (perhaps it is already answered somewhere). Here is the architecture I have been struggling to get working.
I have three physical machines and I would like to setup the Docker swarm with multi-host networking so that I can run docker-compose.
For example:
Machine 1 (Docker Swarm Manager, contains Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
Machine 3 (Docker Swarm Node) (192.168.5.13)
And I need to run docker-compose from any other separate machine.
I have tried the Docker article, but in that article everything is set up on the same physical machine using docker-machine and VirtualBox. How can I achieve the above on three remote machines? Any help appreciated.
The latest version of Docker has Swarm Mode built in, so you don't need Consul.
To set this up on your boxes, make sure they all have Docker version 1.12 or higher; then you just need to initialise the swarm and join it.
On Machine 1 run:
docker swarm init --advertise-addr 192.168.5.11
The output from that will tell you the command to run on Machines 2 and 3 to join them to the swarm. You'll have a unique swarm token, and the command will be something like:
docker swarm join \
--token SWMTKN-1-49nj1... \
192.168.5.11:2377
Now you have a 3-node swarm. Back on Machine 1 you can create a multi-host overlay network:
docker network create -d overlay my-app
And then you run workloads in the network by deploying services, as sketched below. If you want to use Compose with Swarm Mode, you need to use distributed application bundles, which are currently only in the experimental build of Docker.
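For example (nginx is just an illustrative image):
docker service create --name my-web --network my-app nginx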
I figured this needs an update, as docker compose files are now supported in docker swarm.
Initialize the swarm on Machine 1 using
docker swarm init --advertise-addr 192.168.5.11
Join the swarm from Machine 2 & 3 using
docker swarm join \
  --token <swarm token from previous step> \
  --advertise-addr eth0 \
  192.168.5.11:2377

eth0 is the network interface on machines 2 & 3, and could be different based on your config. I found that without the --advertise-addr option, containers couldn't talk to each other across hosts.
To list all the nodes in the swarm and see their status:
docker node ls
After this, deploy the stack (a group of services or containers) from a compose file:
docker stack deploy -c <compose-file> my-app
This will create all the containers across the multiple hosts.
To list the services (containers) on the swarm, run docker service ls.
See the Docker docs: Getting started with swarm mode.