Using Weave Net Docker Swarm Adapter to allow services to communicate

I have a Docker Swarm cluster setup as follows:
Setup on node 1
docker swarm init --advertise-addr ${NODE_1_IP} --data-path-port=7789
Setup on node 2
docker swarm join --advertise-addr ${NODE_2_IP} --token XXX ${NODE_1_IP}:2377
I have then installed weave on both nodes as follows.
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
docker plugin install weaveworks/net-plugin:latest_release
docker plugin disable weaveworks/net-plugin:latest_release
docker plugin set weaveworks/net-plugin:latest_release WEAVE_PASSWORD=XXX
docker plugin enable weaveworks/net-plugin:latest_release
I wanted to set a password because I need the network to be encrypted.
I then set up a network and a service. The constraint makes the service consist of one container running on node 2.
docker network create --driver=weaveworks/net-plugin:latest_release --attachable testnet_weave_encrypted
docker service create --network=testnet_weave_encrypted --name web_encrypted --publish 80 --replicas=1 --constraint 'node.labels.datastore001 == true' nginx:latest
Finally I test it inside another container running on node 1:
docker run --rm --name alpine --net=testnet_weave_encrypted -ti alpine:latest sh
apk add --no-cache curl
curl web_encrypted
This fails with the message:
curl: (7) Failed to connect to web_encrypted port 80: Host is unreachable
I know the service name web_encrypted is not the problem, because when I try a different name I get a different error.
After bashing my head against this wall for hours I have discovered that I can do the following on node 1:
curl web_encrypted.1.lsrdyz8n66jdotaqgdzk9u1uo
And it works!
But of course this is useless to me because the exact container name will change every time the service recreates it.
Is this a bug in the weave plugin or have I missed a step in setting this up?

If you specify endpoint_mode as DNSRR, you should be able to use the service name as well as any aliases you specify. The weavenet network in the snippet below was created using the Weave network plugin.
host#docker service create --network name=weavenet,alias=ngx --endpoint-mode=dnsrr --name nginx nginx
host#docker run -it -d --name alpine --network=weavenet alpine:curl
host#docker exec -it alpine sh
/ # curl nginx
<!DOCTYPE html>
<html>
....
/ # curl ngx
<!DOCTYPE html>
<html>
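Applied to the question's setup, the fix would be to re-create the service with DNS round-robin endpoint mode. This is a sketch, not tested against the asker's cluster; note that --publish 80 is dropped because a dnsrr service cannot use the ingress routing mesh:

```
docker service create --network=testnet_weave_encrypted \
  --endpoint-mode dnsrr \
  --name web_encrypted --replicas=1 \
  --constraint 'node.labels.datastore001 == true' \
  nginx:latest
```

With dnsrr, the service name resolves directly to the task containers' IPs instead of a virtual IP, which is what the Weave plugin's DNS can serve.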

Docker - Connecting to localhost - connection refused

This is my Dockerfile:
FROM sonatype/nexus3:latest
COPY ./scripts/ /bin/scripts/
RUN curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'
After running build:
docker build -t test .
The output is:
(7) Failed connect to localhost:8081; Connection refused
Why? Is it only possible to send requests to localhost (the container being built) after running it? Or should I add something to the Dockerfile?
Thanks for help :)
Why do you want to connect to the service while the image is being built?
At that point the service is not running yet; you need to start the container first.
Remove the curl from the Dockerfile and start the container first:
docker run test
A Dockerfile is a way to create images; containers are then created from images. Ports are only served once a container is up and running, so you can't curl the service while building the image.
Change Dockerfile to -
FROM sonatype/nexus3:latest
COPY ./scripts/ /bin/scripts/
Build image -
docker build -t test .
Create container from the image -
docker run -d --name nexus -p 8081:8081 test
Now see if your container is running & do a curl -
docker ps
curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'
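If the curl is meant for automation (e.g. a smoke test right after docker run), keep in mind that Nexus takes a while to start answering. A minimal retry sketch, assuming a POSIX shell; retry and the attempt count are made-up names/values for illustration:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, sleeping 1s between attempts,
# until it succeeds. Hypothetical helper for post-startup smoke tests.
retry() {
    attempts=$1; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0        # command succeeded
        i=$((i + 1))
        sleep 1                 # wait before the next attempt
    done
    return 1                    # all attempts failed
}

# Example: wait up to 60s for Nexus after `docker run -d -p 8081:8081 test`
# retry 60 curl -sf -u admin:admin123 'http://localhost:8081/service/rest/v1/repositories'
```

The curl line is left commented out since it only makes sense once the container is actually running.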

Running composer-playground in docker container doesn't connect to fabric network

I have set up a fabric network with 2 peers (with CouchDB), 1 orderer, and 1 CA. Now I want to run composer-playground in a Docker container, and I'm trying to run it with the following command:
docker run --network composer_default --name composer-playground -v ~/.composer:/home/composer/.composer --publish 8080:8080 --detach hyperledger/composer-playground
It launches the container, and I can see the PeerAdmin card as well as my network admin card, but when I try to connect with the network admin card, it hangs on the message "Please Wait: Connecting to Business Network avocado-network using connection profile hlfv1" and after some time it throws a REQUEST_TIMEOUT error.
Has anyone faced this issue? If yes, please enlighten me.
It's likely because your connection profile has 'localhost' definitions (and therefore the other Docker containers are not resolvable when contacted from inside your 'playground' container). See the sed sequence here -> hyperledger.github.io/composer/latest/tutorials/… (Step 9), which changes connection.json (this assumes a 'dev' environment setup; use as appropriate for your env).
The following 'one-liner' does the job for the localhost-based Composer dev environment setup (in this case my existing business network card is admin@trade-network, and I use it to update the connection profile):
sed -e 's/localhost:7051/peer0.org1.example.com:7051/' -e 's/localhost:7053/peer0.org1.example.com:7053/' -e 's/localhost:7054/ca.org1.example.com:7054/' -e 's/localhost:7050/orderer.example.com:7050/' < $HOME/.composer/cards/admin@trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/admin@trade-network/
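To see what the substitution does, here is the same rewrite applied to a minimal, made-up connection.json (the file contents below are invented for illustration; a real Composer connection profile has many more fields):

```shell
#!/bin/sh
# Write a tiny stand-in for the card's connection.json
cat > /tmp/connection.json <<'EOF'
{"peers": {"peer0.org1.example.com": {"url": "grpc://localhost:7051"}},
 "orderers": {"orderer.example.com": {"url": "grpc://localhost:7050"}}}
EOF

# Apply the same localhost -> container-name substitutions as the one-liner
sed -e 's/localhost:7051/peer0.org1.example.com:7051/' \
    -e 's/localhost:7050/orderer.example.com:7050/' \
    /tmp/connection.json
```

After the rewrite, the URLs point at the peer/orderer container names, which are resolvable on the composer_default Docker network.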

Logging to local syslog inside centos7 container

I would like my centos7 container to log messages to /var/log/messages:
[root@gen-r-vrt-057-009 ~]# docker exec -it rsyslog_base_centos7 "/bin/bash"
[root@gen-r-vrt-057-009 /]# logger "lior"
[root@gen-r-vrt-057-009 /]# cat /var/log/messages
[root@gen-r-vrt-057-009 /]#
I installed rsyslog and tried running the container in several ways:
docker run -dit --name rsyslog_base_centos7 --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host -v /dev/log:/dev/log --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
But nothing seems to do the trick.
Container OS and Docker version:
[root@gen-r-vrt-057-009 /]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@gen-r-vrt-057-009 /]# exit
[root@gen-r-vrt-057-009 ~]# docker -v
Docker version 17.03.2-ce, build f5ec1e2
Any ideas?
Thanks
If I understand you correctly, you want to run rsyslog inside the container but want to make rsyslog log data from the host machine. By default, this is not possible due to isolation.
It is an interesting use case, and probably worth tracking as an issue at https://github.com/rsyslog/rsyslog-docker.
You can probably achieve your goal by mounting /dev/log into the container, but depending on the host OS that requires some extra work there as well.
The rsyslog/rsyslog_base_centos7 is designed with the intent to provide a base container that you can use to make applications inside the container use rsyslog logging.
Please also have a look at this Twitter conversation: https://twitter.com/rgerhards/status/978183898776686592 - doc updates will be upcoming once we have the actual procedure.
Note: This answer was completely rewritten as I originally totally missed the point.
Smart people from rsyslog put the following Docker image together:
https://hub.docker.com/r/rsyslog/rsyslog_base_centos7
It allows for your use case:
c) want to run a client machine where rsyslog processes log messages
(the default CentOS 7 config does NOT work inside a container, but
this container has a corrected config!)
Here is a patch you can apply to the CentOS 7 rsyslog config to make it work inside Docker:
https://gist.github.com/oleksandriegorov/2718a7e35b8d17ada934b651d627ab97
Of course, restart rsyslogd to apply changes.
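For background, the usual reason the stock CentOS 7 config fails in a container is that it loads imjournal, and journald is normally absent inside the container. A sketch of the relevant rsyslog.conf change; this is an assumption about what the corrected config does, so check the image's actual config:

```
# /etc/rsyslog.conf inside the container (sketch)
module(load="imuxsock")   # accept local messages via /dev/log
# module(load="imjournal" StateFile="imjournal.state")  # disabled: no journald in the container
*.info;mail.none;authpriv.none;cron.none   /var/log/messages
```

With imuxsock active, `logger "lior"` writes to /dev/log and rsyslog delivers it to /var/log/messages.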

Adding additional docker node to Shipyard

I have installed Shipyard following the automatic procedure on their website. This works and I can access the UI. It's available on 172.31.0.179:8080. From the UI, I see a container called 'shipyard-discovery' which is exposing 172.31.0.179:4001.
I'm now trying to add an additional node to Shipyard. For that I use Docker Machine to install an additional host and on that host I'm using the following command to add the node to Shipyard.
curl -sSL https://shipyard-project.com/deploy | ACTION=node DISCOVERY=etcd://173.31.0.179:4001 bash -s
This additional node is not added to the Swarm cluster and is not visible in the Shipyard UI. On that second host I get the following output
-> Starting Swarm Agent
Node added to Swarm: 172.31.2.237
This indicates that the node was indeed not added to the Swarm cluster, as I was expecting something like: Node added to Swarm: 172.31.0.179
Any idea on why the node is not added to the Swarm cluster?
Following the documentation for manual deployment, you can add a Swarm Agent by specifying its host IP:
docker run \
-ti \
-d \
--restart=always \
--name shipyard-swarm-agent \
swarm:latest \
join --addr [NEW-NODE-HOST-IP]:2375 etcd://[IP-HOST-DISCOVERY]:4001
I've just managed to make Shipyard see the nodes in my cluster; you have to follow the instructions in Node Installation, creating a bash file that does the deploy for you with the discovery IP set up.

add hosts redirection in docker

I use GitLab in a virtual machine, and I want to use gitlab-ci (in the same VM) with Docker.
To access my GitLab instance I use the domain git.local (it redirects to my VM on my computer, and to 127.0.0.1 inside the VM).
When I launch the tests, they fail with:
fatal: unable to access 'http://gitlab-ci-token:xxxxxx@git.local/thib3113/ESCF.git/': Couldn't resolve host 'git.local'
So my question is: how do I add a redirection from git.local to the container IP? I see the -h <host> argument for docker, but I don't know how to tell GitLab to use it. Or maybe there is a configuration to tell Docker to use the container DNS?
I found this: How do I get a Docker Gitlab CI runner to access Git on its parent host? But it's the same problem; I don't know how to add the argument.
According to the GitLab CI Runner Advanced configuration, you can try to play with the extra_hosts param in your GitLab CI runner.
In /etc/gitlab-runner/config.toml :
[[runners]]
url = "http://localhost/ci"
token = "TOKEN"
name = "my_runner"
executor = "docker"
[runners.docker]
host = "tcp://<DOCKER_DAEMON_IP>:2375"
image = "..."
...
extra_hosts = ["localhost:192.168.0.39"]
With this example, when git inside the container running the test tries to clone from localhost, it will use 192.168.0.39 as the IP for that hostname.
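Under the hood, extra_hosts uses the same mechanism as Docker's --add-host flag: each host:ip pair becomes a line in the container's /etc/hosts. A tiny shell sketch of that mapping (hosts_entry is a made-up helper name, and the IP is the example's):

```shell
#!/bin/sh
# Turn a "host:ip" pair (extra_hosts / --add-host format) into the
# hosts(5) line Docker writes inside the container.
hosts_entry() {
    printf '%s\t%s\n' "${1#*:}" "${1%%:*}"   # ip<TAB>host
}

hosts_entry "git.local:192.168.0.39"
# prints: 192.168.0.39	git.local
```

So for the question's case, an entry like "git.local:<VM IP>" in extra_hosts would let the runner's containers resolve git.local.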
If you want to use DNS in Docker, use dns-gen. Follow these simple steps to assign hostnames to multiple Docker containers.
1. First, find your Docker bridge IP by running this command:
/sbin/ifconfig docker0 | grep "inet" | head -n1 | awk '{ print $2}' | cut -d: -f2
Note the output IP, then start the dns-gen container (don't forget to replace dockerip in dockerip:53:53 with the Docker IP you got from the command above):
docker run --detach \
--name dns-gen \
--publish dockerip:53:53/udp \
--volume /var/run/docker.sock:/var/run/docker.sock \
jderusse/dns-gen
Last thing: register your new DNS server in your resolv.conf:
echo "nameserver dockerip" | sudo tee --append /etc/resolvconf/resolv.conf.d/head
sudo resolvconf -u
Now you should be able to access your Docker containers in the browser: http://containername.docker
Hope it works. Thanks.
