Docker Swarm Linking - docker

I want to create a Docker Swarm cluster running an Elasticsearch instance, a MongoDB instance, and a Grails app, each on a separate machine. I'm using Docker Machine to set up my Docker Swarm cluster:
swarm-01:
mongodb
mongodb_ambassador
swarm-02:
elasticsearch
elasticsearch_ambassador
swarm-03:
mongodb_ambassador
elasticsearch_ambassador
grails
The last step of my setup, running the actual Grails app with the following command,
docker run -p 8080:8080 -d --name grails-master --volumes-from maven --link mongo:mongo-master --link es:es-master my-grails-image
fails with the error:
Error response from daemon: Unable to find a node fulfilling all
dependencies: --volumes-from=maven --link=mongo:mongo-master
--link=es:es-master
The ambassador containers and the maven data container are all running on the same node.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74677dad09a7 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 9200/tcp, 9300/tcp swarm-03/es
98b38c4fc575 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 27107/tcp swarm-03/mongo
7d45fb82eacc debian:jessie "/bin/bash" 20 minutes ago swarm-03/maven
I'm not able to get the Grails app running on the Swarm cluster; any advice would be appreciated. Running all the containers on a single machine works, so I guess I'm making a mistake when linking the mongo and es instances to the Grails app.
By the way, I'm using the latest Docker Toolbox installation on OS X.

"linking" is deprecated in docker. Don't use it. It's complicated and not flexible enough.
Just create an overlay network for swarm mode:
docker network create -d overlay mynetwork
In swarm mode (even with a single container), just add every service that should communicate with another service to the same network:
docker service create --network mynetwork --name mymongodb ...
Other services on the same network can reach your mongodb service simply via the hostname mymongodb. That's all. Docker swarm mode comes with batteries included.
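For example, a minimal sketch (the mongo image and the myapp service name are illustrative; only my-grails-image is taken from the question above):
docker network create -d overlay mynetwork
docker service create --network mynetwork --name mymongodb mongo
docker service create --network mynetwork --name myapp -p 8080:8080 my-grails-image
# Inside any task of "myapp", the database is reachable by service name,
# e.g. with a connection string like mongodb://mymongodb:27017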

Related

Docker swarm - Port not accessible

I am trying out some things with docker and docker swarm and currently I am running into a problem.
If I create a container with:
docker run -d --name my_nginx -p 8080:80 nginx
everything went fine and I am able to access this port.
If I instead create a service with docker swarm (the container was removed beforehand), I am not able to reach that port:
docker service create -d --name my_service_nginx --replicas=1 -p 8080:80 nginx
It seems that the service does not create a port mapping:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3417b80036c nginx:latest "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp my_service.1.1l3fwcct1m9hoallkn0g9qwpd
Do you have any idea what I am doing wrong?
Launching a Docker swarm on LXC is not possible; see:
Docker swarm get access from outside network
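Note that in swarm mode a published port belongs to the service (it is exposed through the ingress routing mesh), not to an individual task's container, which is why it does not show up in the PORTS column of docker ps. A sketch of how the mapping can be checked at the service level, using the service name from the question:
docker service ls        # the PORTS column lists ports published through the routing mesh
docker service inspect --format '{{json .Endpoint.Ports}}' my_service_nginx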

What is the difference between Docker Service and Docker Container?

When do we use a docker service create command and when do we use a docker run command?
In short: docker service is used mostly when you have set up a manager node with Docker Swarm, so that containers run in a distributed environment and can be easily managed.
Docker run: The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
That is, docker run is equivalent to the API /containers/create then /containers/(id)/start
source: https://docs.docker.com/engine/reference/commandline/run/#parent-command
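As a rough sketch of those two API calls, made here with curl against the local daemon socket (the image and the <container-id> placeholder are illustrative):
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Image": "nginx"}' \
  http://localhost/containers/create
curl --unix-socket /var/run/docker.sock \
  -X POST \
  http://localhost/containers/<container-id>/start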
Docker service:
Docker service will be the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service including:
the port where the swarm will make the service available outside the swarm
an overlay network for the service to connect to other services in the swarm
CPU and memory limits and reservations
a rolling update policy
the number of replicas of the image to run in the swarm
source: https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/#services-tasks-and-containers
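Putting those options together, a docker service create call might look roughly like this (all names and values below are placeholders, not taken from the question):
# publish a port outside the swarm, attach an overlay network, set replicas,
# resource limits/reservations and a rolling-update policy
docker network create -d overlay app-net
docker service create \
  --name web \
  --publish published=8080,target=80 \
  --network app-net \
  --replicas 4 \
  --limit-cpu 0.5 \
  --limit-memory 256M \
  --reserve-memory 128M \
  --update-parallelism 2 \
  --update-delay 10s \
  nginx:alpine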
The docker run command is used to create a standalone container.
The docker service create command is used to create instances (called tasks) of a service running in a cluster (called a swarm) of computers (called nodes). Those tasks are containers, of course, but not standalone containers. In a sense, a service acts as a template for instantiating tasks.
For example
docker service create --name MY_SERVICE_NAME --replicas 3 IMAGE:TAG
creates 3 tasks of the MY_SERVICE_NAME service, which is based on the IMAGE:TAG image.
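To see where those tasks were scheduled, you could then run (same placeholder name):
docker service ps MY_SERVICE_NAME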
More information can be found here
Docker run will start a single container.
With docker service you manage a group of containers (from the same image). You can scale them (start multiple containers) or update them.
You may want to read "docker service is the new docker run"
According to these slides, "docker service create" is like an "evolved" docker run. You need to create a "service" if you want to deploy a container to Docker Swarm
Docker services are like "blueprints" for containers. You can e.g. define a simple worker as a service, and then scale that service to 20 containers to go through a queue really quickly. Afterwards you scale that service down to 3 containers again. Also, via Swarm these containers could be deployed to different nodes of your swarm.
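A sketch of that worker example (the service name and image are made up):
docker service create --name queue-worker --replicas 3 my-worker-image
docker service scale queue-worker=20   # burst through the queue quickly
docker service scale queue-worker=3    # scale back down afterwards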
But yeah, I also recommend reading the documentation, just like @Tristan suggested.
You can use Docker in two ways.
Standalone mode
When you are using standalone mode, the Docker daemon is installed on only one machine. There you have the ability to create/destroy/run a single container or multiple containers on that single machine.
So when you run docker run, the Docker CLI sends an API request to the dockerd daemon to run the specified container.
What you do with the docker run command only affects the single node/machine/host where you are running the command. If you add a volume or network to the container, those resources are only available on the single node where you ran the docker run command.
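For example, a minimal standalone sketch (the names are illustrative); the network and volume below exist only on the host where the commands are run:
docker network create my_local_net
docker volume create my_local_data
docker run -d --name my_app --network my_local_net -v my_local_data:/data nginx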
Swarm mode (or cluster mode)
When you want or need to take advantage of cluster computing features like high availability, fault tolerance, and horizontal scalability, you can use swarm mode. With swarm mode you can have multiple nodes/machines/hosts in your cluster and distribute your workload across the cluster. You can even initiate swarm mode on a single-node cluster and add more nodes later.
Example
You can recreate the scenario for free here.
Suppose at this moment we have only one node, called node-01.dc.local, where we have run the following commands:
####### Initiating swarm mode ########
$ docker swarm init --advertise-addr eth0
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
####### create a standalone container #######
[node1] (local) root@192.168.0.8 ~
$ docker run -d --name app1 nginx
####### creating a service #######
[node1] (local) root@192.168.0.8 ~
$ docker service create --name app2 nginx
After a while, when you feel that you need to scale your workload, you add another machine named node-02.dc.local, and you want to scale and distribute your service to the newly created node.
So we run the following command on the node-02.dc.local node:
####### Join the second machine/node/host in the cluster #######
[node2] (local) root@192.168.0.7 ~
$ docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
This node joined a swarm as a worker.
Now, from the first node, I have run the following commands to scale up the service.
####### Listing services #######
[node1] (local) root@192.168.0.8 ~
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
syn9jo2t4jcn app2 replicated 1/1 nginx:latest
####### Scaling app2 from a single container to 10 containers #######
[node1] (local) root@192.168.0.8 ~
$ docker service update --replicas 10 app2
app2
overall progress: 10 out of 10 tasks
1/10: running [==================================================>]
2/10: running [==================================================>]
3/10: running [==================================================>]
4/10: running [==================================================>]
5/10: running [==================================================>]
6/10: running [==================================================>]
7/10: running [==================================================>]
8/10: running [==================================================>]
9/10: running [==================================================>]
10/10: running [==================================================>]
verify: Service converged
[node1] (local) root@192.168.0.8 ~
####### Verifying that app2's workload is distributed to both of the nodes #######
$ docker service ps app2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z12bzz5sop6i app2.1 nginx:latest node1 Running Running 15 minutes ago
8a78pqxg38cb app2.2 nginx:latest node2 Running Running 15 seconds ago
rcc0l0x09li0 app2.3 nginx:latest node2 Running Running 15 seconds ago
os19nddrn05m app2.4 nginx:latest node1 Running Running 22 seconds ago
d30cyg5vznhz app2.5 nginx:latest node1 Running Running 22 seconds ago
o7sb1v63pny6 app2.6 nginx:latest node2 Running Running 15 seconds ago
iblxdrleaxry app2.7 nginx:latest node1 Running Running 22 seconds ago
7kg6esguyt4h app2.8 nginx:latest node2 Running Running 15 seconds ago
k2fbxhh4wwym app2.9 nginx:latest node1 Running Running 22 seconds ago
2dncdz2fypgz app2.10 nginx:latest node2 Running Running 15 seconds ago
But if you need to scale app1, you can't, because you created that container in standalone mode.
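A hypothetical way to bring app1 under swarm management would be to remove the standalone container and recreate it as a service, for example:
$ docker rm -f app1
$ docker service create --name app1 --replicas 3 nginx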

docker swarm and private registry

I am trying to test Docker Swarm with a private registry as a service, without TLS: 1 manager and 2 workers. On the manager:
docker service create --name registry --publish 5000:5000 registry:2
When trying to check on the manager with
curl localhost:5000/v2/_catalog
or
curl 127.0.0.1:5000/v2/_catalog
curl just waits forever (at least an hour). Same on worker1, but on worker2 it works OK!
curl 127.0.0.1:5000/v2/_catalog
{"repositories":[]}
Then on the manager:
docker service ps registry
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
p6ngzdemfolu registry.1 registry:2 worker2 Running Running 14 hours ago
I can see that the registry is running on worker2. So can you only pull/push/query on the worker where the container is running!?
My version of docker is:
Docker version 17.03.0-ce, build 60ccb22
What am I doing wrong?
I'm answering my own question:
I moved the manager to Google and added one more worker on scenegroup. Between the workers, all the needed ports are open. So my combination is manager+worker on GCE, a worker on AWS, and a worker on SCENE.
All seems to work OK when the AWS worker is drained. If the AWS worker is active, there are problems with the swarm. My own conclusion is that there is something between Google & Amazon.

client access to docker swarm

I have a docker swarm cluster consisting of one manager and one worker node. Then I configured (tls and DOCKER_HOST) a client from my laptop to get access to this cluster.
When I run docker ps I see only containers from the worker node (and not even all of the worker node's containers!).
For example, from my client:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129d9402aeb progrium/consul "/bin/start -rejoi..." 2 weeks ago Up 22 hours IP:8300-8302->8300-8302/tcp, IP:8400->8400/tcp, IP:8301-8302->8301-8302/udp, 53/tcp, 53/udp, IP:8500->8500/tcp, IP:8600->8600/udp hadoop1103/consul-agt2-hadoop
And when I run docker ps on the worker node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fec7fbf0b00 swarm "/swarm join --advert" 16 hours ago Up 16 hours 2375/tcp join
a129d9402aeb progrium/consul "/bin/start -rejoin -" 2 weeks ago Up 22 hours 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8400->8400/tcp, 0.0.0.0:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->8600/udp consul-agt2-hadoop
So, two questions: why doesn't docker ps show containers from the manager machine, and why doesn't it show all containers from the worker node?
Classic swarm (run as a container) by default hides the swarm management containers from docker ps output. You can show these containers with a docker ps -a command instead.
This behavior may be documented elsewhere, but the one place I've seen it documented is in the API differences docs:
GET "/containers/json"
Containers started from the swarm official image are hidden by default, use all=1 to display them.
The all=1 API parameter is the equivalent of docker ps -a on the CLI.
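For illustration, against a local daemon socket (the manager's TCP endpoint and the TLS options from the question are left out here):
# API equivalent of `docker ps -a`
curl --unix-socket /var/run/docker.sock "http://localhost/containers/json?all=1"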

Why can Docker containers run even though docker-machine is not running?

Apparently this is a silly question, but I hope someone can help me.
I thought Docker containers could run because docker-machine is running on my Mac OS X machine, like in this situation:
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v1.12.2
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abb8beb2a0fd httpd:2.4 "httpd-foreground" 48 minutes ago Up 47 minutes 0.0.0.0:80->80/tcp romantic_kare
But the container can still run, even in this situation:
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abb8beb2a0fd httpd:2.4 "httpd-foreground" 48 minutes ago Up 47 minutes 0.0.0.0:80->80/tcp romantic_kare
Is there no relationship between them?
Reference: https://docs.docker.com/machine/overview/
I installed Docker for Mac.
> docker --version
Docker version 1.12.1, build 6f9534c
This post is a duplicate of Default docker machine on Mac.
Docker 1.12 and onward no longer uses docker-machine to run containers. Instead it uses a native Docker engine for Mac/Windows.
