I am trying to test Docker Swarm with a private registry running as a service without TLS: 1 manager and 2 workers. On the manager:
docker service create --name registry --publish 5000:5000 registry:2
When I try to check it on the manager with
curl localhost:5000/v2/_catalog
or
curl 127.0.0.1:5000/v2/_catalog
curl just waits forever (at least an hour). Same on worker1, but on worker2 it works fine:
curl 127.0.0.1:5000/v2/_catalog
{"repositories":[]}
Then on manager
docker service ps registry
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
p6ngzdemfolu registry.1 registry:2 worker2 Running Running 14 hours ago
I can see that the registry is running on worker2. So you can only pull/push/query on the worker where the task is running!?
My version of docker is:
Docker version 17.03.0-ce, build 60ccb22
What am I doing wrong?
Answering my own question:
I moved the manager to Google and added one more worker on scenegroup. All the needed ports are open between the workers. So my combination is manager+worker on GCE, a worker on AWS, and a worker on SCENE.
Everything seems to work fine when the AWS worker is drained. When the AWS worker is active, there are problems with the swarm. My conclusion is that something between Google and Amazon is blocking the swarm traffic.
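A symptom like this, where a published port answers only on the node actually running the task, usually means the routing-mesh ports are blocked between some nodes. Swarm requires 2377/tcp (cluster management), 7946/tcp+udp (node communication), and 4789/udp (overlay network traffic) open between all nodes. A minimal sketch of opening them with ufw (ufw is an assumption here; the question does not name a firewall tool, and the UFW variable is parameterized so the sketch can be dry-run without root):

```shell
# Swarm ports that must be open between ALL nodes for the routing mesh:
#   2377/tcp  cluster management traffic (to managers)
#   7946/tcp+udp  node-to-node communication
#   4789/udp  overlay network (VXLAN) data traffic
open_swarm_ports() {
  for p in 2377/tcp 7946/tcp 7946/udp 4789/udp; do
    ${UFW:-ufw} allow "$p"
  done
}

# Dry run: print the commands instead of executing them.
UFW="echo ufw" open_swarm_ports
```

On a cross-cloud swarm (GCE + AWS), the cloud firewall / security group rules must allow these ports in addition to the host firewall.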
Related
I am running a docker service in swarm mode. When I want to restart it, there are 2 options I know:
from the swarm manager: docker service scale myservice=0 then docker service scale myservice=1
from the server running the service: docker ps, take the container ID of my service and run docker stop <containerId>
And this works fine. However, if I go with option #2 and run docker restart instead of docker stop, it will restart the current instance, but because the service is in swarm mode it will also start a new one. So I end up with 2 instances of the same service, even though my compose file specifies only 1 replica.
Is there any way to prevent docker restart and docker swarm from starting a 2nd instance while one is already running?
I am using docker 18.09.2 on ubuntu 18.04
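The scale-down/scale-up option from the question can be wrapped in a small helper run on the manager; docker service update --force <service> is another documented way to restart all tasks without changing the replica count. A sketch (myservice is the question's placeholder name; DOCKER is parameterized so the sketch can be dry-run without a swarm):

```shell
# Restart a swarm service by scaling it to 0 and back to N replicas
# (option #1 from the question, wrapped in a function).
restart_service() {
  svc="$1"; replicas="${2:-1}"
  ${DOCKER:-docker} service scale "$svc=0"
  ${DOCKER:-docker} service scale "$svc=$replicas"
}

# Dry run: print the commands instead of executing them.
DOCKER="echo docker" restart_service myservice 1
```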
I have a docker swarm node running a set of docker services connected by an overlay network. When needed, I dynamically add another docker node via Terraform. It is a separate EC2 instance, set up and connected as a worker node to the existing swarm.
I'll run a container from my manager, and the running container needs to talk to the existing services on the manager node. For example: connecting to the Postgres service and running a few queries.
docker -H <node ip> run --network <overlay network where services are running> <some image> <command>
The script running in the container fails with a "Name or service not known" error. I tried to ping manually by bashing into the container, and the ping succeeds after some 4 or 5 seconds. I have tried this hundreds of times and I always hit the same issue. It also doesn't matter how long ago the node joined the swarm; every time I run the above command, I face the same issue.
Also, I don't have control over what script is run in the container so I cannot add retries.
One more thing: sometimes some services can be reached immediately. For example, Postgres will fail but another service exposing REST endpoints can be reached. But it's not consistent.
I was able to reproduce this issue with a bunch of test services:
Steps to reproduce the issue:
Create a docker swarm and add another machine as a worker node to the swarm
Create an overlay network on node 1: docker network create -d overlay --attachable platform
Create services on node 1:
for i in {1..25}; do
  docker service create --network platform -p :80 --name "service-${i}" dockerbogo/docker-nginx-hello-world
done
Create a task from node 1 to be run on node 2: docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1
Docker daemon logs: https://pastebin.com/65Lihp8v
Any help?
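Since the script inside the container cannot be changed, one possible workaround is to wrap the container's command so it waits for the overlay DNS entry to appear before exec'ing the real command. A sketch (the helper name and timeout are assumptions, not from the question):

```shell
# Wait until a hostname resolves, giving up after a bounded number of
# attempts. Intended as a wrapper around the real command, e.g.:
#   docker run ... sh -c 'wait_for_host service-1 && exec <original command>'
wait_for_host() {
  host="$1"; tries="${2:-30}"; i=0
  until getent hosts "$host" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Failure path: an unresolvable name times out after "tries" attempts.
wait_for_host no-such-host.invalid 2 || echo "timed out"
```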
When do we use a docker service create command and when do we use a docker run command?
In short: docker service is mostly used when you have configured a master node with Docker swarm, so that containers run in a distributed environment and can be easily managed.
Docker run: The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
That is, docker run is equivalent to the API calls /containers/create followed by /containers/(id)/start.
source: https://docs.docker.com/engine/reference/commandline/run/#parent-command
Docker service:
A Docker service is the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service including:
the port where the swarm will make the service available outside the swarm
an overlay network for the service to connect to other services in the swarm
CPU and memory limits and reservations
a rolling update policy
the number of replicas of the image to run in the swarm
source: https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/#services-tasks-and-containers
The docker run command is used to create a standalone container.
The docker service create command creates instances (called tasks) of a service running in a cluster (called a swarm) of computers (called nodes). Those tasks are containers, of course, but not standalone containers. In a sense, a service acts as a template when instantiating tasks.
For example
docker service create --name MY_SERVICE_NAME --replicas 3 IMAGE:TAG
creates 3 tasks of the MY_SERVICE_NAME service, which is based on the IMAGE:TAG image.
More information can be found here
Docker run will start a single container.
With docker service you manage a group of containers (from the same image). You can scale them (start multiple containers) or update them.
You may want to read "docker service is the new docker run"
According to these slides, "docker service create" is like an "evolved" docker run. You need to create a "service" if you want to deploy a container to Docker Swarm
Docker services are like "blueprints" for containers. You can e.g. define a simple worker as a service, and then scale that service to 20 containers to go through a queue really quickly. Afterwards you scale that service down to 3 containers again. Also, via Swarm these containers could be deployed to different nodes of your swarm.
But yeah, I also recommend reading the documentation, just like @Tristan suggested.
You can use docker in two ways.
Standalone mode
When you use standalone mode, the docker daemon is installed on only one machine. Here you have the ability to create/destroy/run a single container or multiple containers on that single machine.
So when you run docker run, the docker CLI sends an API request to the dockerd daemon to run the specified container.
What you do with the docker run command only affects the single node/machine/host where you run the command. If you add a volume or network to the container, those resources are only available on the single node where you ran docker run.
Swarm mode (or cluster mode)
When you want or need the advantages of cluster computing, such as high availability, fault tolerance, or horizontal scalability, you can use swarm mode. With swarm mode you can have multiple nodes/machines/hosts in your cluster and distribute your workload across the cluster. You can even initiate swarm mode on a single-node cluster and add more nodes later.
Example
You can recreate the scenario for free here.
Suppose at this moment we have only one node, called node-01.dc.local, where we have run the following commands:
####### Initiating swarm mode ########
$ docker swarm init --advertise-addr eth0
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
####### create a standalone container #######
[node1] (local) root@192.168.0.8 ~
$ docker run -d --name app1 nginx
####### creating a service #######
[node1] (local) root@192.168.0.8 ~
$ docker service create --name app2 nginx
After a while, when you feel you need to scale your workload, you add another machine named node-02.dc.local, and you want to scale and distribute your service to the newly created node.
So we run the following command on the node-02.dc.local node:
####### Join the second machine/node/host in the cluster #######
[node2] (local) root@192.168.0.7 ~
$ docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
This node joined a swarm as a worker.
Now, from the first node, I run the following to scale up the service.
####### Listing services #######
[node1] (local) root@192.168.0.8 ~
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
syn9jo2t4jcn app2 replicated 1/1 nginx:latest
####### Scaling app2 from a single container to 10 containers #######
[node1] (local) root@192.168.0.8 ~
$ docker service update --replicas 10 app2
app2
overall progress: 10 out of 10 tasks
1/10: running [==================================================>]
2/10: running [==================================================>]
3/10: running [==================================================>]
4/10: running [==================================================>]
5/10: running [==================================================>]
6/10: running [==================================================>]
7/10: running [==================================================>]
8/10: running [==================================================>]
9/10: running [==================================================>]
10/10: running [==================================================>]
verify: Service converged
[node1] (local) root@192.168.0.8 ~
####### Verifying that app2's workload is distributed to both nodes #######
$ docker service ps app2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z12bzz5sop6i app2.1 nginx:latest node1 Running Running 15 minutes ago
8a78pqxg38cb app2.2 nginx:latest node2 Running Running 15 seconds ago
rcc0l0x09li0 app2.3 nginx:latest node2 Running Running 15 seconds ago
os19nddrn05m app2.4 nginx:latest node1 Running Running 22 seconds ago
d30cyg5vznhz app2.5 nginx:latest node1 Running Running 22 seconds ago
o7sb1v63pny6 app2.6 nginx:latest node2 Running Running 15 seconds ago
iblxdrleaxry app2.7 nginx:latest node1 Running Running 22 seconds ago
7kg6esguyt4h app2.8 nginx:latest node2 Running Running 15 seconds ago
k2fbxhh4wwym app2.9 nginx:latest node1 Running Running 22 seconds ago
2dncdz2fypgz app2.10 nginx:latest node2 Running Running 15 seconds ago
But if you need to scale app1, you can't, because you created that container in standalone mode.
I have a docker swarm cluster consisting of one manager and one worker node. Then I configured (TLS and DOCKER_HOST) a client on my laptop to get access to this cluster.
When I run docker ps I see only containers from the worker node (and not even all of the worker node's containers!).
For example, from my client:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129d9402aeb progrium/consul "/bin/start -rejoi..." 2 weeks ago Up 22 hours IP:8300-8302->8300-8302/tcp, IP:8400->8400/tcp, IP:8301-8302->8301-8302/udp, 53/tcp, 53/udp, IP:8500->8500/tcp, IP:8600->8600/udp hadoop1103/consul-agt2-hadoop
And when I run docker ps on the worker node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fec7fbf0b00 swarm "/swarm join --advert" 16 hours ago Up 16 hours 2375/tcp join
a129d9402aeb progrium/consul "/bin/start -rejoin -" 2 weeks ago Up 22 hours 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8400->8400/tcp, 0.0.0.0:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->8600/udp consul-agt2-hadoop
So, two questions: why doesn't docker ps show containers from the manager machine, and why not all containers from the worker node?
Classic swarm (run as a container) by default hides the swarm management containers from docker ps output. You can show these containers with a docker ps -a command instead.
This behavior may be documented elsewhere, but the one location I've seen the behavior documented is in the api differences docs:
GET "/containers/json"
Containers started from the swarm official image are hidden by default, use all=1 to display them.
The all=1 API syntax is the equivalent of docker ps -a on the CLI.
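The CLI flag and the API parameter map directly onto each other; a minimal sketch of both forms (the unix socket path is the engine's default, assumed here; DOCKER and CURL are parameterized so the sketch can be dry-run without a daemon):

```shell
# List all containers, including the hidden swarm management ones.
list_all_containers() {
  # CLI form:
  ${DOCKER:-docker} ps -a
  # Equivalent raw API call, with all=1 to un-hide swarm containers:
  ${CURL:-curl} --unix-socket /var/run/docker.sock \
    'http://localhost/containers/json?all=1'
}

# Dry run: print the commands instead of executing them.
DOCKER="echo docker" CURL="echo curl" list_all_containers
```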
I want to create a Docker Swarm cluster running an Elasticsearch instance, a MongoDB instance, and a Grails app, each on a separate machine. I'm using Docker Machine to set up my Docker Swarm cluster:
swarm-01:
mongodb
mongodb_ambassador
swarm-02:
elasticsearch
elasticsearch_ambassador
swarm-03:
mongodb_ambassador
elasticsearch_ambassador
grails
The last step of my setup, running the actual grails app, using the following command:
docker run -p 8080:8080 -d --name grails-master --volumes-from maven --link mongo:mongo-master --link es:es-master my-grails-image
fails with error:
Error response from daemon: Unable to find a node fulfilling all
dependencies: --volumes-from=maven --link=mongo:mongo-master
--link=es:es-master
The ambassador containers and the maven data container are all running on the same node.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74677dad09a7 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 9200/tcp, 9300/tcp swarm-03/es
98b38c4fc575 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 27107/tcp swarm-03/mongo
7d45fb82eacc debian:jessie "/bin/bash" 20 minutes ago swarm-03/maven
I'm not able to get the Grails app running on the Swarm cluster; any advice would be appreciated. Running all containers on a single machine works, so I guess I'm making a mistake linking the mongo and es instances to the grails app.
Btw I'm using latest Docker Toolbox installation on OS X.
"linking" is deprecated in docker. Don't use it. It's complicated and not flexible enough.
Just create an overlay network for swarm mode.
docker network create -d overlay mynetwork
In swarm mode (even with a single container), just add every service that should communicate with another service to the same network.
docker service create --network mynetwork --name mymongodb ...
Other services on the same network can reach your mongodb service simply via the hostname mymongodb. That's all: Docker swarm mode has batteries included.
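The answer's commands can be sketched end to end; the service and image names below are placeholders (not from the question), and DOCKER is parameterized so the sketch can be dry-run without a swarm:

```shell
# Create the overlay network, then attach each service to it so they can
# reach each other by service name via swarm's built-in DNS.
setup_overlay() {
  ${DOCKER:-docker} network create -d overlay mynetwork
  ${DOCKER:-docker} service create --network mynetwork --name mymongodb mongo
  ${DOCKER:-docker} service create --network mynetwork --name mygrails mygrailsimage
}

# Dry run: print the commands instead of executing them.
DOCKER="echo docker" setup_overlay
```

Inside the mygrails containers, the MongoDB service is then reachable at the hostname mymongodb, replacing the old --link flags.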