Docker Swarm Windows Worker with Traefik returns Gateway Timeout

The objective is to get a mixed-OS Docker swarm running using Linux servers and Windows 10 machines running Docker for Windows.
Currently, Windows workers are theoretically supported in mixed-OS swarms provided the --endpoint-mode flag is set to 'dnsrr'. This is explained here. However, attempts to use Traefik to route to a simple whoami image (stefanscherer/whoami) have failed.
Minimal Failing Example
# On the (Linux) manager node:
docker swarm init --advertise-addr <hostaddress> --listen-addr <hostaddress>:2377
# On the (Windows 10) worker node:
docker swarm join <jointoken>
# On the manager node:
docker network create --driver=overlay traefik-net
docker service create \
--name traefik \
--constraint=node.role==manager \
--publish 80:80 --publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik \
--docker \
--docker.swarmmode \
--docker.domain=traefik \
--docker.watch \
--web
docker service create \
--name whoami \
--label traefik.enable=true \
--label traefik.frontend.rule=Host:whoami.docker \
--label traefik.protocol=http \
--label traefik.docker.network=traefik-net \
--label traefik.backend.loadbalancer.method=drr \
--label traefik.backend=whoami \
--network traefik-net \
--mode global \
--label traefik.port=80 \
stefanscherer/whoami
Traefik successfully sets up the backend rules. To check the routing, I used the Traefik dashboard to find the URL that a rule routes to, e.g. '10.0.0.12:8080'. I then compared this with the IP address of each task; the tasks can be listed with docker service ps, and their addresses found using
docker inspect <taskID> \
--format '{{ range .NetworksAttachments }}{{ .Addresses }}{{ end }}'
The Problem
An HTTP request with the header 'Host: whoami.docker' sent to the IP of the manager succeeds when routed to the task on the manager, and fails with 504 Gateway Timeout when routed to the task on the Windows worker.
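For reference, this is roughly how the routing can be tested; the address is a placeholder for the manager's IP, not a value from the original setup:
# hypothetical test request against the Traefik entrypoint published on the manager
curl -H "Host: whoami.docker" http://<manager-ip>/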

You're missing the --endpoint-mode=dnsrr setting on your whoami service.
docker service create \
--name whoami \
--label traefik.enable=true \
--label traefik.frontend.rule=Host:whoami.docker \
--label traefik.protocol=http \
--label traefik.docker.network=traefik-net \
--label traefik.backend.loadbalancer.method=drr \
--label traefik.backend=whoami \
--network traefik-net \
--mode global \
--label traefik.port=80 \
--endpoint-mode=dnsrr \
stefanscherer/whoami
Setting --endpoint-mode to dnsrr disables the service's virtual IP (VIP), which is probably what is causing the issue.
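As a quick sanity check (a sketch, not part of the original answer), you can confirm which endpoint mode the service ended up with:
# should print "dnsrr" once the flag is applied; "vip" is the default
docker service inspect whoami --format '{{ .Spec.EndpointSpec.Mode }}'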

I had the same problem when using the stefanscherer/whoami image. Using microsoft/dotnet-samples:aspnetapp works though, so the error seems related to the image (a deployment sketch follows the setup list below).
I'm using the following setup:
Ubuntu 16.04
Docker 18.03.1-ce
Run as Manager
Runs traefik
Windows 1803
Docker 18.03.1-ee-2
Runs as Worker (joining as Manager did not work)
Runs microsoft/dotnet-samples:aspnetapp
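For completeness, a minimal sketch of how the working image could be deployed in this setup; the service name, frontend rule, Windows-only placement constraint, and the assumption that the sample listens on port 80 inside the container are mine, not from the original setup:
# sketch: deploy the .NET sample only on Windows nodes, with dnsrr endpoint mode as discussed above
docker service create \
--name aspnetapp \
--constraint node.platform.os==windows \
--network traefik-net \
--endpoint-mode dnsrr \
--label traefik.enable=true \
--label traefik.frontend.rule=Host:aspnetapp.docker \
--label traefik.docker.network=traefik-net \
--label traefik.port=80 \
microsoft/dotnet-samples:aspnetapp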

Related

How to access Gitlab's metrics (Prometheus and Grafana) from Docker installation?

I installed GitLab using the Docker image on an Ubuntu virtual machine running on a Mac M1 as follows (https://hub.docker.com/r/yrzr/gitlab-ce-arm64v8):
docker run \
--detach \
--restart always \
--name gitlab-ce \
--privileged \
--memory 4096M \
--publish 22:22 \
--publish 80:80 \
--publish 443:443 \
--hostname 127.0.0.1 \
--env GITLAB_OMNIBUS_CONFIG="nginx['redirect_http_to_https'] = true;" \
--volume /srv/gitlab-ce/conf:/etc/gitlab:z \
--volume /srv/gitlab-ce/logs:/var/log/gitlab:z \
--volume /srv/gitlab-ce/data:/var/opt/gitlab:z \
yrzr/gitlab-ce-arm64v8:latest
All seems to be working correctly on localhost, except that I can't access the metrics; I get an "unable to connect" error on:
Prometheus: http://localhost:9090
Grafana: http://localhost/-/grafana
I tried enabling metrics as described in the documentation, and ran docker exec -it gitlab-ce gitlab-ctl reconfigure.
What am I missing?
Thanks
When GitLab uses localhost, it resolves to localhost inside the container, not on the host (i.e. your Mac).
There are two options to solve this:
Use host.docker.internal instead of localhost (this resolves to the internal IP address used by the host) - see this doc for more info
Configure your container to use the host network by adding --network=host to the docker run command, which lets your container and the host share the same network stack (however, this is not supported by Docker Desktop for Mac according to this), as sketched below.
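A rough sketch of option 2 applied to the original command; the --publish flags are dropped because published ports are ignored with host networking, and some flags are trimmed for brevity:
docker run \
--detach \
--restart always \
--name gitlab-ce \
--privileged \
--network=host \
--volume /srv/gitlab-ce/conf:/etc/gitlab:z \
--volume /srv/gitlab-ce/logs:/var/log/gitlab:z \
--volume /srv/gitlab-ce/data:/var/opt/gitlab:z \
yrzr/gitlab-ce-arm64v8:latest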

Running gitlab and jenkins with https in docker swarm

Context: I want to run GitLab and Jenkins in Docker swarm with HTTPS. I succeeded in making them run on their default ports (8080 for Jenkins and 80 for GitLab over HTTP).
My problem is that when I try to run, for example, GitLab on port 443, I get nothing, even though I published my container on that port and modified the external URL in the gitlab.rb file (I've been following the official doc).
And for Jenkins it's even harder to make it run over HTTPS; it requires either adding a reverse proxy or an SSL certificate.
sudo docker service create -u 0 --name jenkins_stack \
--network devops-net --replicas 1 --publish 8443:8443 \
--publish 50000:50000 --mount src=jenkins-volume,dst=/var/jenkins_home \
--hostname jenkins jenkins/jenkins

sudo docker service create -u 0 --name gitlabstack \
--network devops-net --replicas 1 --publish 80:80 --publish 443:443 \
--mount src=gitlab-data,dst=/var/opt/gitlab \
--mount src=gitlab-logs,dst=/var/log/gitlab \
--mount src=gitlab-config,dst=/etc/gitlab \
--hostname gitlab gitlab/gitlab-ce
Above you will find the docker lines to create the services.
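For context, the gitlab.rb change mentioned above is roughly the following; the hostname is a placeholder, and the container name is an assumption (use docker ps to find the actual GitLab task container):
# in /etc/gitlab/gitlab.rb (hypothetical hostname):
#   external_url 'https://gitlab.example.com'
# then apply the change inside the running container:
docker exec -it <gitlab-container> gitlab-ctl reconfigure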
I'd really appreciate it if someone could share a video or tutorial on how to run GitLab/Jenkins on Docker swarm with HTTPS.
I'm sorry if I've been unclear.

Running a local kibana in a container

I am trying to use the Kibana console with my local Elasticsearch (container).
In the Elasticsearch documentation I see:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.2
Which lets me run the community edition in a quick one-liner.
Looking at the Kibana documentation I see only:
docker pull docker.elastic.co/kibana/kibana:6.2.2
Replacing pull with run, it looks for the X-Pack edition (I think that means it is not the community edition) and fails to find Elasticsearch:
Unable to revive connection: http://elasticsearch:9200/
Is there a one-liner that could easily set up Kibana locally in a container?
All I need is to work with the console (a Sense replacement).
If you want to use Kibana with Elasticsearch locally in Docker, they have to communicate with each other. To do so, according to the doc, you need to link the containers.
You can give a name to the elasticsearch container with --name:
docker run \
--name elasticsearch_container \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
And then link this container to kibana:
docker run \
--name kibana \
--publish 5601:5601 \
--link elasticsearch_container:elasticsearch_alias \
--env "ELASTICSEARCH_URL=http://elasticsearch_alias:9200" \
docker.elastic.co/kibana/kibana:6.2.2
The port 5601 is exposed locally to access it from your browser. You can check in the monitoring section that elasticsearch's health is green.
EDIT (24/03/2020):
The --link option may eventually be removed and is now a legacy feature of Docker.
The idiomatic way to reproduce the same thing is to first create a user-defined bridge network:
docker network create elasticsearch-kibana
And then create the containers inside it:
 Version 6
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_URL=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Version 7
As was pointed out, the environment variable changed in version 7. It is now ELASTICSEARCH_HOSTS.
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_HOSTS=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:7.6.2
User-defined bridges provide automatic DNS resolution between containers, which means the containers can reach each other by their names.
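As a quick check (a sketch, not part of the original answer), you can confirm both containers are attached to the bridge:
# lists the names of the containers connected to the user-defined bridge
docker network inspect elasticsearch-kibana --format '{{ range .Containers }}{{ .Name }} {{ end }}'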
It is convenient to use docker-compose as well.
For instance, the file below, stored in the home directory, allows you to start Kibana with one command: docker-compose up -d.
# docker-compose.yml
version: "2"
services:
  kibana:
    image: "docker.elastic.co/kibana/kibana:6.2.2"
    container_name: "kibana"
    environment:
      - "ELASTICSEARCH_URL=http://<elasticsearch-endpoint>:9200"
      - "XPACK_GRAPH_ENABLED=false"
      - "XPACK_ML_ENABLED=false"
      - "XPACK_REPORTING_ENABLED=false"
      - "XPACK_SECURITY_ENABLED=false"
      - "XPACK_WATCHER_ENABLED=false"
    ports:
      - "5601:5601"
    restart: "unless-stopped"
In addition, the Kibana service might be part of your project in a development environment (in case docker-compose is used).

Hyperledger fabricV1 on docker swarm

I have created a Docker swarm with one manager and two workers, and I am trying to deploy Hyperledger Fabric on top of it. For this I am using the command below:
docker service create --name orderer.nokia.com hyperledger/fabric-orderer orderer \
--env ORDERER_GENERAL_LOGLEVEL=debug \
--env ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
--env ORDERER_GENERAL_GENESISMETHOD=file \
--env ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block \
--env ORDERER_GENERAL_LOCALMSPID=OrdererMSP \
--env ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp \
--env ORDERER_GENERAL_TLS_ENABLED=true \
--env ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key \
--env ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt \
--env ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt] \
--mount type=bind,source=../channel-artifacts/genesis.block,destination=/var/hyperledger/orderer/orderer.genesis.block \
--mount type=bind,source=../crypto-config/ordererOrganizations/nokia.com/orderers/orderer.nokia.com/msp,destination=/var/hyperledger/orderer/msp \
--mount type=bind,source=../crypto-config/ordererOrganizations/nokia.com/orderers/orderer.nokia.com/tls/,destination=/var/hyperledger/orderer/tls \
--publish 7050:7050
but I get the error below:
Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
o8ftuvxa3szzhgphxc71w5fv9 * SwarmNode1-192 Ready Active Leader
sm7b4hf7oz9mkwksrxylu0ncq SwarmNode3-194 Ready Active
yag0gy3dlhu4fy8rl3iawro07 SwarmNode2-193 Ready Active
OS:Ubuntu
Docker version 17.06.1-ce, build 874a737
Had the same issue. In my case it was caused by the service names containing "." in them.
If you change it from --name orderer.nokia.com to --name orderernokiacom it should build correctly.
However, I am still trying to deploy chaincode successfully, so I'm not 100% sure.
EDIT:
I have it set up and running with no problems now.
Indeed, the error you are getting comes from the dots in the service names.
If for some reason you need your service names to contain ".", you can use network aliases instead (a sketch follows below).
To deploy in swarm mode, you first need to create an overlay network (if you are using compose this has to be created outside the compose file).
And then everything should work just fine. For an example, have a look at https://github.com/endimion/HL_V1_test/blob/master/docker-swarm-compose.yml
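A minimal sketch of the alias approach, assuming a Docker version recent enough to support the long-form --network syntax with an alias option; the network name and the trimmed set of flags are mine, not from the original deployment:
# create the overlay network first
docker network create --driver overlay fabric-net
# give the service a DNS-safe name but keep the dotted name as a network alias
docker service create \
--name orderernokiacom \
--network name=fabric-net,alias=orderer.nokia.com \
--publish 7050:7050 \
hyperledger/fabric-orderer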

Consul cluster automatic bootstrap on docker

I am working with configuration management tools like etcd and Consul. As far as I know, etcd has a discovery mechanism; I wonder whether Consul has something like that.
I am working with the official Consul Docker image, and when I set the advertise IPs and join IPs manually there is no problem, but I don't want to do this by hand. Docker containers' IPs can change, or a node could crash and a new node would need to replace it. How can I manage situations like that? In other words, is it possible to join the cluster without knowing the exact IPs of the nodes already in it?
You could start Consul with Docker swarm inside a dedicated subnet, like this:
docker network create --driver overlay --subnet 172.20.0.0/24 consul-net
docker service create \
--name consul \
--publish 8500:8500 \
--network consul-net \
--replicas 3 \
-e 'CONSUL_BIND_INTERFACE=eth0' \
-e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt":true}' \
consul agent -server -ui \
-client=0.0.0.0 \
-bootstrap-expect=3 \
-data-dir=consul/data \
-retry-join 172.20.0.3 \
-retry-join 172.20.0.4 \
-retry-join 172.20.0.5 \
-retry-interval 5s
You can also see Consul issue #66.
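As a rough way to verify that the cluster formed (a sketch; the container name is a placeholder, use docker ps to find one of the Consul tasks):
# lists the agents that have joined the cluster
docker exec -it <consul-container> consul members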
