How to determine bridge IP of docker swarm container - docker

I'm trying to set up a framework, using docker swarm, where I can connect from an external system (via ssh or whatever) into a specific service's container. So, I'm able to do this using something like:
ssh -o ProxyCommand="ssh ubuntu@10.0.0.18 nc 172.18.0.4 22" -l root foo
Here 10.0.0.18 is one of the swarm nodes and I then connect to the gateway bridge address (172.18.0.4) for that specific container.
In order to provide some automation around this, I'd like to be able to inspect whatever Docker object is necessary to map a container's ID to its bridge IP. I'd like to create a mapping something like:
{
  container_id: {
    swarm_node: <Swarm node IP>,
    bridge_ip: <Container's bridge IP>
  }
}
However, I cannot see any structure that shows the bridge info for a specific container. I can always exec into a given container and run ifconfig, but I was hoping to avoid that.
Any pointers appreciated!

Try starting with this:
docker service ls -q \
| xargs docker service ps -f desired-state=running -q \
| while read task_id; do
docker inspect -f '{{printf "%.12s" .Status.ContainerStatus.ContainerID }}:
{ swarm_node: {{.NodeID}},
bridge_ip: {{range .NetworksAttachments}}{{if ne "ingress" .Network.Spec.Name }}{{.Addresses}}{{end}}{{end}}
}' $task_id
done
You may need to clean up the container IP a bit, since it comes out as a list of addresses with the prefix length included. Also note that swarm_node is actually the node ID, not the node IP. I also hardcoded the exclusion for "ingress"; I'm not sure if there's a cleaner way.
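For instance, a hedged sketch of that cleanup (the bracketed sample value is hypothetical, mimicking how the template prints the address list):

```shell
# Sample raw value as the Go template prints it (hypothetical sample):
raw='[172.18.0.4/16]'
# Strip the surrounding brackets and the prefix length:
bridge_ip=$(echo "$raw" | tr -d '[]' | cut -d/ -f1)
echo "$bridge_ip"   # prints 172.18.0.4
```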
To map node IDs to IP addresses, here's another one to work with:
docker node ls -q \
| xargs docker node inspect -f '{ {{.ID}}: {{.Status.Addr}} }'
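To stitch the two mappings together, one rough approach is to save the node list to a file and look node IDs up from it. A sketch with hypothetical sample data (real pairs would come from the docker node inspect command above):

```shell
# Hypothetical "node_id node_ip" pairs, in the shape that
# docker node inspect -f '{{.ID}} {{.Status.Addr}}' might emit:
cat > nodes.txt <<'EOF'
abc123 10.0.0.18
def456 10.0.0.19
EOF

# Look up the IP for a given node ID:
node_id=abc123
node_ip=$(grep "^${node_id} " nodes.txt | cut -d' ' -f2)
echo "$node_ip"   # prints 10.0.0.18
```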

Related

Single script for Docker Swarm cluster setup

I have a Docker Swarm cluster set up on my preprod servers (3 manager nodes and 7 worker nodes); I would like to replicate the same setup on the production servers, but rather than running the commands by hand I would prefer to use a script.
At present I am using "docker swarm init" to initialize the swarm and then adding the workers and managers with the generated token.
I will have 30 servers and am planning for 7 manager and 23 worker nodes.
I have searched the net but could not find any script which can initialize the swarm automatically on all the servers.
Any help would be really appreciated.
The way I approached this was to have a common build script for all nodes, and to use Consul for sharing the docker swarm manager token with the rest of the cluster.
The first node (at 10.0.0.51) calls docker swarm init and places the token in the key value store, and the remaining nodes (at 10.0.0.52 onwards) read the token back and use it to call docker swarm join.
The bash looked something like this -
# Get the node id of this machine from the local IP address
privateNetworkIP=$(hostname -I | grep -o '10.0.0.5.')
nodeId=$(echo $privateNetworkIP | tail -c 2)
if [ "$nodeId" -eq 1 ]; then
    sudo docker swarm init
    MANAGER_KEY_IN=$(sudo docker swarm join-token manager -q)
    curl --request PUT --data $MANAGER_KEY_IN http://10.0.0.51:8500/v1/kv/docker-manager-key
else
    MANAGER_KEY_OUT=$(curl -s http://10.0.0.51:8500/v1/kv/docker-manager-key?raw)
    sudo docker swarm join --token $MANAGER_KEY_OUT 10.0.0.51:2377
fi
... and works fine provided that node 1 is built first.
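If you can't guarantee that ordering, a retry loop around the token read is one way to relax it. A minimal sketch; retry, RETRIES, and DELAY are hypothetical names here, not Docker or Consul features:

```shell
# Re-run a command until it succeeds, up to RETRIES attempts, DELAY seconds apart.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "${RETRIES:-30}" ]; then
            return 1
        fi
        sleep "${DELAY:-2}"
    done
}

# Nodes 2+ would then wait for node 1 to publish the token, e.g.:
#   retry curl -sf http://10.0.0.51:8500/v1/kv/docker-manager-key?raw
retry true && echo "command eventually succeeded"
```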
There are no built-in utilities for this, but you can create your own custom script, for example:
for i in `cat app_server.txt`; do
    echo $i
    ssh -i /path/to/your_key.pem $i "sudo docker swarm join --token your-token-here ip-address-of-manager:port"
done
Here app_server.txt contains the IP addresses of the worker nodes that you want to add to your swarm.
--token : the token generated by the manager on "docker swarm init"
Hope this helps.
You can also use Ansible for the same, but that requires the Ansible Docker modules to be installed on all the worker nodes.

Adding additional docker node to Shipyard

I have installed Shipyard following the automatic procedure on their website. This works and I can access the UI. It's available on 172.31.0.179:8080. From the UI, I see a container called 'shipyard-discovery' which is exposing 172.31.0.179:4001.
I'm now trying to add an additional node to Shipyard. For that I use Docker Machine to install an additional host and on that host I'm using the following command to add the node to Shipyard.
curl -sSL https://shipyard-project.com/deploy | ACTION=node DISCOVERY=etcd://173.31.0.179:4001 bash -s
This additional node is not added to the Swarm cluster and is not visible in the Shipyard UI. On that second host I get the following output
-> Starting Swarm Agent
Node added to Swarm: 172.31.2.237
This indicates that the node is indeed not added to the Swarm cluster, as I was expecting something like: Node added to Swarm: 172.31.0.179
Any idea on why the node is not added to the Swarm cluster?
Following the documentation for manual deployment, you can add a Swarm Agent by specifying its host IP:
docker run \
    -ti \
    -d \
    --restart=always \
    --name shipyard-swarm-agent \
    swarm:latest \
    join --addr [NEW-NODE-HOST-IP]:2375 etcd://[IP-HOST-DISCOVERY]:4001
I've just managed to make Shipyard see the nodes in my cluster. You have to follow the instructions in Node Installation, creating a bash file that does the deploy for you with the discovery IP set up.

add hosts redirection in docker

I use GitLab in a virtual machine, and I will use GitLab CI (in the same VM) with Docker.
To access my GitLab, I use the domain git.local (redirected to my VM on my computer, and to 127.0.0.1 inside the VM).
And when I launch the tests, they return:
fatal: unable to access 'http://gitlab-ci-token:xxxxxx@git.local/thib3113/ESCF.git/': Couldn't resolve host 'git.local'
So my question is: how do I add a redirection from git.local to the container IP? I see the -h <host> argument for docker, but I don't know how to tell GitLab to use this argument. Or maybe there is a configuration to tell Docker to use the container's DNS?
I saw this: How do I get a Docker Gitlab CI runner to access Git on its parent host?
but same problem: I don't know how to add the argument.
According to the GitLab CI Runner Advanced configuration, you can try to play with the extra_hosts param in your GitLab CI runner.
In /etc/gitlab-runner/config.toml :
[[runners]]
  url = "http://localhost/ci"
  token = "TOKEN"
  name = "my_runner"
  executor = "docker"
  [runners.docker]
    host = "tcp://<DOCKER_DAEMON_IP>:2375"
    image = "..."
    ...
    extra_hosts = ["localhost:192.168.0.39"]
With this example, when the test running inside the container makes git try to clone from localhost, it will use 192.168.0.39 as the IP for that hostname.
If you want to use DNS in Docker, you can use dns-gen. Follow these simple steps to assign hostnames to multiple Docker containers.
1. First, find your Docker IP by issuing this command:
/sbin/ifconfig docker0 | grep "inet" | head -n1 | awk '{ print $2}' | cut -d: -f2
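On the older ifconfig output format, that pipeline reduces like this (the inet line below is a hypothetical sample; real output varies by distribution):

```shell
# Hypothetical line from `/sbin/ifconfig docker0`:
sample='          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0'
# Grab the second field ("addr:172.17.0.1") and drop the "addr:" prefix:
echo "$sample" | grep "inet" | head -n1 | awk '{ print $2 }' | cut -d: -f2
# prints 172.17.0.1
```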
Now note the output IP. Time to start the dns-gen container (don't forget to substitute the Docker IP you got from the command above for dockerip in the --publish option):
docker run --detach \
    --name dns-gen \
    --publish dockerip:53:53/udp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    jderusse/dns-gen
Last thing: register your new DNS server in your resolv.conf:
echo "nameserver dockerip" | sudo tee --append /etc/resolvconf/resolv.conf.d/head
sudo resolvconf -u
Now you should be able to access your Docker containers in the browser at http://containername.docker
Hope it works!

How to get the hostname of the docker host from inside a docker container on that host without env vars

What are the ways get the docker host's hostname from inside a container running on that host besides using environment variables? I know I can pass the hostname as an environment variable to the container at container creation time. I'm wondering how I can look it up at run time.
foo.example.com (docker host)
bar (docker container)
Is there a way for container bar running in docker host foo.example.com to get "foo.example.com"?
Edit to add use case:
The container will create an SRV record for service discovery of the form
_service._proto.name. TTL class SRV priority weight port target.
-----------------------------------------------------------------
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo.example.com.
where 20003 is a dynamically allocated port on the docker host for a service listening on some fixed port in bar (docker handles the mapping from host port to container port).
My container will run a health check to make sure it has successfully created that SRV record as there will be many other bar containers on other docker hosts that also create their own SRV records.
_service._proto.name. TTL class SRV priority weight port target.
-----------------------------------------------------------------
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo.example.com. <--
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo2.example.com.
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo3.example.com.
The health check will loop through the SRV records looking for the first one above and thus needs to know its hostname.
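A minimal sketch of that health check, assuming the SRV answers come from something like dig +short SRV (the record set and hostnames below are sample data, not a live query):

```shell
# Sample `dig +short SRV _bar._http.example.com` output
# (fields: priority weight port target):
records='5000 5000 20003 foo.example.com.
5000 5000 20003 foo2.example.com.'

# The hostname this container believes its docker host has:
me="foo.example.com."

# Succeed only if a record whose target matches our host is present:
if echo "$records" | awk -v h="$me" '$4 == h { found = 1 } END { exit !found }'; then
    echo "SRV record present"
fi
```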
aside
I'm using Helios and just found out it adds an env var for me from which I can get the hostname. But I was just curious in case I was using docker without Helios.
You can easily pass it as an environment variable
docker run .. -e HOST_HOSTNAME=`hostname` ..
using
-e HOST_HOSTNAME=`hostname`
will call hostname and use its output as an environment variable called HOST_HOSTNAME; of course, you can customize the key as you like.
Note that this works in a bash shell; if you are using a different shell, you might need the alternative to backticks. For example, the fish shell equivalent would be
docker run .. -e HOST_HOSTNAME=(hostname) ..
I'm adding this because it's not mentioned in any of the other answers. You can give a container a specific hostname at runtime with the -h directive.
docker run -h=my.docker.container.example.com ubuntu:latest
You can use backticks (or whatever equivalent your shell uses) to get the output of hostname into the -h argument.
docker run -h=`hostname` ubuntu:latest
There is a caveat: the value of hostname will be taken from the host you run the command from. So if you want the hostname of a virtual machine that's running your Docker container, using hostname as the argument may not be correct if you are executing the docker commands from the host machine rather than the VM.
You can pass in the hostname as an environment variable. You could also mount /etc so you can cat /etc/hostname. But I agree with Vitaly, this isn't the intended use case for containers IMO.
Another option that worked for me was to bind the network namespace of the host to the container.
By adding:
docker run --net host
You can pass it as an environment variable like this. In swarm mode, the {{.Node.Hostname}} template resolves to the hostname of the node the task is running on:
docker service create -e 'FOO={{.Node.Hostname}}' nginx
Then you can do docker ps to get the container ID and look at the env:
$ docker exec -it c81640b6d1f1 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c81640b6d1f1
TERM=xterm
FOO=docker-desktop
NGINX_VERSION=1.17.4
NJS_VERSION=0.3.5
PKG_RELEASE=1~buster
HOME=/root
An example of usage would be with Metricbeat, so you know which node is having system issues; I put this in https://github.com/trajano/elk-swarm:
metricbeat:
  image: docker.elastic.co/beats/metricbeat:7.4.0
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /proc:/hostfs/proc:ro
    - /:/hostfs:ro
  user: root
  hostname: "{{.Node.Hostname}}"
  command:
    - -E
    - |
      metricbeat.modules=[
        {
          module:docker,
          hosts:[unix:///var/run/docker.sock],
          period:10s,
          enabled:true
        }
      ]
    - -E
    - processors={1:{add_docker_metadata:{host:unix:///var/run/docker.sock}}}
    - -E
    - output.elasticsearch.enabled=false
    - -E
    - output.logstash.enabled=true
    - -E
    - output.logstash.hosts=["logstash:5044"]
  deploy:
    mode: global
I think the reason I had the same issue is a bug in the latest Docker for Mac beta, but buried in the comments there I was able to find a solution that worked for me and my team. We're using this for local development, where we need our containerized services to talk to a monolith as we work to replace it. This is probably not a production-viable solution.
On the host machine, alias a known available IP address to the loopback interface:
$ sudo ifconfig lo0 alias 10.200.10.1/24
Then add that IP with a hostname to your docker config. In my case, I'm using docker-compose, so I added this to my docker-compose.yml:
extra_hosts:
  # configure your host to alias 10.200.10.1 to the loopback interface:
  # sudo ifconfig lo0 alias 10.200.10.1/24
  - "relevant_hostname:10.200.10.1"
I then verified that the desired host service (a web server) was available from inside the container by attaching to a bash session, and using wget to request a page from the host's web server:
$ docker exec -it container_name /bin/bash
$ wget relevant_hostname/index.html
$ cat index.html
OK, this isn't the hostname (as the OP was asking), but this will resolve to your Docker host from inside your container for connectivity purposes.
host.docker.internal
I was redirected here when googling for this.
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1 | sed -n 1p`
docker run --add-host=myhost:${HOSTIP} --rm -it debian
Now you can access the host under the alias "myhost"
The first line won't run on cygwin, but you can figure out some other way to obtain the local IP address using ipconfig.
You can run:
docker run --network="host"
to give the container the host's network stack, which also makes the machine's hostname visible inside the container.
I ran
docker info | grep Name: | xargs | cut -d' ' -f2
inside my container.
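That works because "docker info" prints a Name: line containing the host's name (it requires access to the Docker daemon from inside the container). On a sample fragment the pipeline reduces like this (the server name is hypothetical):

```shell
# Hypothetical fragment of `docker info` output:
sample=' Name: foo.example.com'
# xargs with no arguments trims the whitespace; cut takes the second field:
echo "$sample" | grep 'Name:' | xargs | cut -d' ' -f2
# prints foo.example.com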
I know it's an old question, but I needed this too, and I came up with another solution.
I used an entrypoint.sh to execute the following line, and define a variable with the actual hostname for that instance:
HOST=`hostname --fqdn`
Then, I used it across my entrypoint script:
echo "Value: $HOST"
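Put together, the entrypoint sketch might look like this (entrypoint.sh is my hypothetical file name; hostname --fqdn is GNU-specific, so a plain hostname fallback is included):

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: capture this container's FQDN at start-up.
HOST=$(hostname --fqdn 2>/dev/null || hostname)
echo "Value: $HOST"
# Hand off to the container's main process:
exec "$@"
```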
Hope this helps

Insert Docker parent host ip into container's hosts file

I'm new to Docker and trying to understand the best way to insert the Docker parent host's IP into a container's hosts file.
I'm using the following command in my Dockerfile
RUN /sbin/ip route|awk '/default/ { print $3,"\tdockerhost" }' >> /etc/hosts
but sometimes the host's IP changes, so the entry is no longer correct...
The reason for doing this, in case you're wondering, is that I need to access two other Docker containers (and --link does not offer this feature).
Thanks,
The --add-host option is made for this. So, in your docker run command, do something like:
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [my container]
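The awk extraction there can be sanity-checked against a sample ip route line (the gateway address below is sample data; the real value comes from your routing table):

```shell
# Hypothetical first line of `/sbin/ip route` output inside a container:
sample='default via 172.17.0.1 dev eth0'
echo "$sample" | awk '/default/ { print $3 }'
# prints 172.17.0.1
```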
The --add-host option can be used when you create/run your container, but since the /sbin/ip command isn't available on operating systems like OS X, we can use a more generic solution:
docker run --add-host=dockerhost:`docker network inspect \
--format='{{range .IPAM.Config}}{{.Gateway}}{{end}}' bridge` [IMAGE]
