Status of RabbitMQ in different docker container - docker

I am starting a docker container with RabbitMQ for testing purposes. I would like to start a second container which runs a short command and checks that RabbitMQ is actually running. The second container should block my build pipeline until it has determined that RabbitMQ has successfully started in the first container.
How can I specify to rabbitmqctl which hostname to use to get the status of RabbitMQ? I am linking the two containers together via docker, so port issues should not be a problem.
Example:
rabbitmqctl -n rabbitmq status # does not work, prints diagnostic info
Status of node rabbitmq@rabbitmq ...
Error: unable to perform an operation on node 'rabbitmq@rabbitmq'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
Target node is not running
In addition to the diagnostics info below:
See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
Consult server logs on node rabbitmq@rabbitmq
DIAGNOSTICS
attempted to contact: [rabbitmq@rabbitmq]
rabbitmq@rabbitmq:
* connected to epmd (port 4369) on rabbitmq
* epmd reports: node 'rabbitmq' not running at all
other nodes on rabbitmq: [rabbit]
* suggestion: start the node
Current node details:
* node name: rabbitmqcli52@e3ea1e73df02
* effective user's home directory: /var/lib/rabbitmq
* Erlang cookie hash: AB9AFN3zvcyAWBl6ZVVOJw==

Your second container needs to be aware of the first one:
docker run --link rabbitmq ...
The host will be available now from inside the container:
$ grep rabbitmq /etc/hosts
172.17.0.2 rabbitmq 01ad3098b423
$ ping rabbitmq
PING rabbitmq (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.073 ms
Keep in mind container linking is deprecated in favor of custom networks.
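As a sketch of how the checking container could then call rabbitmqctl over the link (the image tag and cookie value here are illustrative, not from the original answer; the CLI's Erlang cookie must match the server's for the status call to succeed):
# Run a throwaway container with the RabbitMQ CLI tools, linked to the broker
# container named "rabbitmq", and query the default node rabbit@rabbitmq.
docker run --rm --link rabbitmq:rabbitmq \
  -e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
  rabbitmq:3 \
  rabbitmqctl -n rabbit@rabbitmq status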

First create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1
I'm going to assume you use the rabbitmq management container, and I'll give it the hostname rab1. I'll give the name ubuntu1 to the other container from which you want to access rab1. So first start rab1 and add ubuntu1 to its hosts file:
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host ubuntu1:172.18.0.12 --name rab1con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
And after that start ubuntu1con with the hostname ubuntu1:
docker run -d --net mynet1 --ip 172.18.0.12 --hostname ubuntu1 --add-host rab1:172.18.0.11 --name ubuntu1con ubuntu
Now when you go into ubuntu1con you are able to access rab1 by name or IP address.
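For example, a quick sanity check from inside ubuntu1con might look like this (a sketch only; it assumes ubuntu1con was started with a long-running command so it is still up, and that ping is installed, e.g. via apt-get install -y iputils-ping):
docker exec -it ubuntu1con getent hosts rab1
docker exec -it ubuntu1con ping -c 2 rab1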

Assuming the two containers are linked/networked properly, can ping each other, and the RabbitMQ CLI tool rabbitmqctl is installed in the second container, the following should work:
docker exec -it <second container's name or ID> rabbitmqctl -n rabbit@rabbitmq status
Pass --hostname rabbitmq to docker run when starting the rabbitmq container, and make sure that hostname -s inside the rabbitmq container prints rabbitmq.
If the error is still seen, add <IP address of rabbitmq container> rabbitmq in /etc/hosts of the second container and re-try.
https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks has info about how to network two containers so that they can ping each other on the same network.
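To actually block the build pipeline, a small polling script along these lines could run inside the second container (a sketch only; the node name follows the answer above, and the retry count and sleep interval are arbitrary):
#!/bin/sh
# Poll the broker until rabbitmqctl reports it as up, or give up after ~60 seconds.
for i in $(seq 1 30); do
  if rabbitmqctl -n rabbit@rabbitmq status >/dev/null 2>&1; then
    echo "RabbitMQ is up"
    exit 0
  fi
  echo "Waiting for RabbitMQ..."
  sleep 2
done
echo "RabbitMQ did not start in time" >&2
exit 1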

Related

Docker internal DNS not resolving [duplicate]

I plan to split my monolithic server up into many small docker containers but haven't found a good solution for "inter-container communication" yet.
I know how to link containers together and how to expose ports, but none of these solutions are satisfying to me.
Is there any solution to communicate via hostnames (container names) between the containers like in a traditional server network?
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases
Edit: After Docker 1.9, the docker network command (see below https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.
My solution is to set up dnsmasq on the host and have the DNS records updated automatically: the "A" records carry the names of the containers and point to their IP addresses, refreshed automatically (every 10 seconds). The updating script is pasted here:
#!/bin/bash
# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}
declare -A service_map

while true
do
  changed=false
  while read -r line
  do
    # the container name is the last column of `docker ps`
    name=${line##* }
    ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
    if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP addr changed
    then
      service_map[$name]=$ip
      # write to file
      echo "$name has a new IP Address $ip" >&2
      echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
      changed=true
    fi
  done < <(${DOCKER} ps | ${TAIL} -n +2)
  # a change of IP address occurred, restart dnsmasq
  if [ "$changed" = true ]
  then
    systemctl restart dnsmasq
  fi
  ${SLEEP} "$INTERVAL"
done
Make sure your dnsmasq service is available on docker0. Then, start your container with --dns HOST_ADDRESS to use this mini dns service.
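For example, a minimal sketch (the docker0 address is commonly 172.17.0.1, but verify it first with ip addr show docker0; the container name being resolved is illustrative):
docker run --rm --dns 172.17.0.1 alpine nslookup some-container-name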
Reference: http://docs.blowb.org/setup-host/dnsmasq.html
That should be what --link is for, at least for the hostname part.
With docker 1.10, and PR 19242, that would be:
docker network create --net-alias=[]: Add network-scoped alias for the container
(see last section below)
That is what the "Updating the /etc/hosts file" documentation section details:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file.
For instance, launch an LDAP server:
docker run -t --name openldap -d -p 389:389 larrycai/openldap
And define an image to test that LDAP server:
FROM ubuntu
RUN apt-get -y install ldap-utils
RUN touch /root/.bash_aliases
RUN echo "alias lds='ldapsearch -H ldap://internalopenldap -LL -b
ou=Users,dc=openstack,dc=org -D cn=admin,dc=openstack,dc=org -w
password'" > /root/.bash_aliases
ENTRYPOINT bash
You can expose the 'openldap' container as 'internalopenldap' within the test image with --link:
docker run -it --rm --name ldp --link openldap:internalopenldap ldaptest
Then, if you type 'lds', that alias will work:
ldapsearch -H ldap://internalopenldap ...
That would return people, meaning internalopenldap is correctly reached from the ldaptest image.
Of course, docker 1.7 will add libnetwork, which provides a native Go implementation for connecting containers. See the blog post.
It introduced a more complete architecture, with the Container Network Model (CNM)
That will update the Docker CLI with new "network" commands, and document how the "--net" flag is used to assign containers to networks.
docker 1.10 has a new section Network-scoped alias, now officially documented in network connect:
While links provide private name resolution that is localized within a container, the network-scoped alias provides a way for a container to be discovered by an alternate name by any other container within the scope of a particular network.
Unlike the link alias, which is defined by the consumer of a service, the network-scoped alias is defined by the container that is offering the service to the network.
Continuing with the above example, create another container in isolated_nw with a network alias.
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
--alias=[]
Add network-scoped alias for the container
You can use the --link option to link another container with a preferred alias.
You can pause, restart, and stop containers that are connected to a network. Paused containers remain connected and can be revealed by a network inspect. When the container is stopped, it does not appear on the network until you restart it.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
$ docker network connect --link container1:c1 multi-host-network container2
EDIT: It is not bleeding edge anymore: http://blog.docker.com/2016/02/docker-1-10/
Original Answer
I battled with it the whole night.
If you're not afraid of bleeding edge, the latest version of Docker engine and Docker compose both implement libnetwork.
With the right config file (it needs to use version 2 of the compose file format), you will create services that can all see each other. And, as a bonus, you can scale them with docker-compose as well (you can scale any service you want that doesn't bind a port on the host).
Here is an example file
version: "2"
services:
router:
build: services/router/
ports:
- "8080:8080"
auth:
build: services/auth/
todo:
build: services/todo/
data:
build: services/data/
And the reference for this new version of compose file:
https://github.com/docker/compose/blob/1.6.0-rc1/docs/networking.md
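As a usage sketch (the service names refer to the example file above; docker-compose scale was the scaling command in Compose 1.x):
docker-compose up -d
docker-compose scale auth=3 todo=2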
As far as I know, by using only Docker this is not possible. You need some DNS to map container IPs to hostnames.
If you want an out-of-the-box solution, one option is, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks for each service, so every service can be reached at a service_name.kontena.local address.
Here is a simple example of a WordPress application's YAML file where the WordPress service connects to the MySQL server at the wordpress-mysql.kontena.local address:
wordpress:
  image: wordpress:4.1
  stateful: true
  ports:
    - 80:80
  links:
    - mysql:wordpress-mysql
  environment:
    - WORDPRESS_DB_HOST=wordpress-mysql.kontena.local
    - WORDPRESS_DB_PASSWORD=secret
mysql:
  image: mariadb:5.5
  stateful: true
  environment:
    - MYSQL_ROOT_PASSWORD=secret

Access container on second Docker host by name with overlay network

I have two Linux machines (host1 and host2) running Docker in swarm mode and configured an overlay network by running:
host1:$ docker network create -d overlay --attachable appnet
host1 is my manager there I have a web server and connected it to the network above:
host1:$ docker run --name web --network appnet --rm -d -p 8000:80 nginx
On host2 I created a simple container to test some connectivity:
host2:$ docker run -dit --network appnet --name alp1 alpine ash
host2:$ docker exec -it alp1 ash
Pinging the web container by name works fine:
# ping web -c 2
PING web (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.777 ms
64 bytes from 10.0.2.2: seq=1 ttl=64 time=0.473 ms
wget with the IP address of host1 also works:
# wget -O out <host1-ip>:8000
Connecting to <host1-ip>:8000 (<host1-ip>:8000)
...
But when I try to wget by container name the connection times out:
# wget -O out web
Connecting to web (10.0.2.2:80)
wget: can't connect to remote host (10.0.2.2): Operation timed out
Does someone know why this is happening? I would have expected this to work as I thought the whole point of connecting containers over overlays was to not have to publish a dozen ports.
The overlay network is a virtual network that all nodes in your swarm can communicate across. Containers that are associated with one of these networks get an IP address that belongs to that network which is different than the IP address of the host it's running on.
The only way your GET request to <host1-ip>:8000 would work would be if you exposed that port on the host. That's how traffic external to the swarm can hit one of your containers.
What you really want is a service.
# Get rid of your container named web on host1
host1:$ docker kill web
# Create a new service on a manage node named web that attaches to your network
host1:$ docker service create --name web --network appnet nginx
yqigiyn9yqhxqcw74voe0tdcg
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
At this point, you can reach the nginx container from any of the nodes on the port that it exposes (80 by default).
host2:$ curl http://web:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>
You can definitely get this working imperatively with the docker cli, but you'll have a much easier time managing changes and configuration if you look at docker compose. It's a declarative way of setting all of the switches and options you'll need for managing a service.
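A minimal sketch of what that could look like as a compose/stack file (file layout and names are illustrative; it assumes the pre-existing attachable appnet network from the question and would be deployed with docker stack deploy -c docker-compose.yml mystack):
version: "3"
services:
  web:
    image: nginx
    networks:
      - appnet
networks:
  appnet:
    external: true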

Docker - connecting to an open port in a container

I'm new to docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running docker container. I've pulled and run the rabbitmq container from the hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container should use port 5672 for clients to connect to.
After running the container (as instructed in the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (it is possible on a regular rabbitmq installation and should be on a container as well).
I've (at least I think I did) gotten the container IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
And the result I got was 172.17.0.2. When I try to access using telnet 172.17.0.2 5672 it's unsuccessful.
The address 172.17.0.2 seems strange to me because if I run ipconfig on my machine I don't see any interface using 172.17.0.x address. I do see Ethernet adapter vEthernet (DockerNAT) using the following ip: 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672) then I can telnet into this port using telnet localhost 5672 and immediately connect.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you need, because you are running the application over the default bridge network (on Windows, I guess).
From the official docker doc
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
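As a sketch of that container-to-container path on a user-defined bridge network (the network and container names are illustrative; the telnet client in Alpine comes from the busybox-extras package):
docker network create rabbitnet
docker run -d --hostname my-rabbit --name some-rabbit --network rabbitnet rabbitmq:3
docker run -it --rm --network rabbitnet alpine sh -c "apk add --no-cache busybox-extras && telnet some-rabbit 5672"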
Later in the rabbitmq hub there is a reference to a Management Plugin which is run by executing the command
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
which maps host port 8080 to the management UI (container port 15672), and which I think is what you may need.
You should also notice that they talk about clusters and nodes there, maybe they meant the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
Hope I could help somehow :)

Link containers in docker (RancherOS and command line)

I run RancherOS to run docker containers.
I created a container on the GUI to run my databases (image: mysql, name: r-mysql-e4e8df05). Different containers use it.
I can link other containers to it on the GUI
This time I would like to automate the creation and starting of a container via Jenkins, but the linking is not working well.
My command:
docker run -d --name=app-that-needs-mysql --link mysql:mysql myimages.mycompany.com/appthatneedsmysql
I get error:
Error response from daemon: Could not get container for mysql
I tried different things:
1)
--link r-mysql-e4e8df05:mysql
Error:
Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
2)
Try to use --net options
Running: docker network ls
NETWORK ID NAME DRIVER SCOPE
c..........e bridge bridge local
4..........c host host local
c..........a none null local
With --net none it succeeds but actually it is not working. The app cannot connect to the DB
With --net host error message conflicting options: host type networking can't be used with links. This would result in undefined behavior
With --net bridge error message: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
I also checked on rancher GUI where this mysql runs:
It gets a container IP starting with 10.X.X.X
I also tried to add --net managed but the error: network managed not found
I believe I am misunderstanding something in this docker linking process. Please give me some idea how I can make this work.
(previously it was working when I created the same container and linked to the mysql in the GUI)
Hey @Tomi, you can expose the mysql container on whatever port you like from Rancher. That way you don't have to link the container; your Jenkins-spawned container then connects to it on the exposed port on the host. You could also use Jenkins to spin up the container within Rancher, using the Rancher CLI. That way you don't have to surface mysql on the host's network... a few ways to skin that cat with Rancher.
At first glance it seems that Rancher uses a managed network, which docker network ls does not show.
Reproducing the problem
I used dummy alpine containers to reproduce this:
# create some network
docker network create your_invisible_network
# run a container belonging to this network
docker container run \
--detach \
--name r-mysql-e4e8df05 \
--net your_invisible_network \
alpine tail -f /dev/null
# trying to link this container
docker container run \
--link r-mysql-e4e8df05:mysql \
alpine ping mysql
Indeed I get docker: Error response from daemon: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
Possible Solution
A workaround would be to create a user-defined bridge network and simply add your mysql container to it:
# create a network
docker network create \
--driver bridge \
a_workaround_network
# connect the mysql to this network (and alias it)
docker network connect \
--alias mysql \
a_workaround_network r-mysql-e4e8df05
# try to ping it using its alias
docker container run \
--net a_workaround_network \
alpine \
ping mysql
# yay!
PING mysql (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.135 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.084 ms
As you can see in the output pinging the mysql container via its DNS name is possible.
Good to know:
With user-created bridge networks, DNS resolution works out of the box without having to explicitly --link containers :)
Containers can belong to several networks, this is why this works. In this case the mysql container belongs to both your_invisible_network and a_workaround_network
I hope this helps!

Docker ping container on other nodes

I have 2 virtual machines (VM1 with IP 192.168.56.101 and VM2 with IP 192.168.56.102, which can ping each other) and these are the steps I'm doing:
- Create consul container on VM1 with 'docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap'
- Create swarm manager on VM1 with 'docker run -d -p 3376:3376 swarm manage -H 0.0.0.0:3376 --advertise 192.168.56.101:3376 consul://192.168.56.101:8500'
- Create swarm agents on each VM with 'docker run -d swarm join --advertise <VM-IP>:2376 consul://192.168.56.101:8500'
If I run docker -H 0.0.0.0:3376 info I can see both nodes connected to the swarm and they are both healthy. I can also run containers and they are scheduled to the nodes. However, if I create a network and assign a few nodes to this network and then SSH into one node and try to ping every other node, I can only reach the nodes which are running on the same virtual machine.
Both Virtual Machines have these DOCKER_OPTS:
DOCKER_OPTS="--cluster-store=consul://192.168.56.101:8500 --cluster-advertise=<VM-IP>:0 -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock"
I don't have a direct quote, but from what I've read on Docker GitHub issue tracker, ICMP packets (ping) are never routed between containers on different nodes.
TCP connection to explicitly opened ports should work. But as of Docker 1.12.1 it is buggy.
Docker 1.12.2 has some bug fixes wrt establishing a connection to containers on other hosts. But ping is not going to work across hosts.
You can only ping containers on the same node because you attach them to a local scope network.
As suggested in the comments, if you want to ping containers across hosts (meaning from a container on VM1 to a container on VM2) using docker swarm (or docker swarm mode) without explicitly opening ports, you need to create an overlay network (or globally scoped network) and assign/start containers on that network.
To create an overlay network:
docker network create -d overlay mynet
Then start the containers using that network:
For Docker Swarm mode:
docker service create --replicas 2 --network mynet --name web nginx
For Docker Swarm (legacy):
docker run -itd --network=mynet busybox
For example, if we create two containers (on legacy Swarm):
docker run -itd --network=mynet --name=test1 busybox
docker run -itd --network=mynet --name=test2 busybox
You should be able to docker attach on test2 to ping test1 and vice-versa.
For more details you can refer to the networking documentation.
Note: If containers still can't ping each other after the creation of an overlay network and attaching containers to it, check the firewall configurations of the VMs and make sure that these ports are open:
data plane / vxlan: UDP 4789
control plane / gossip: TCP/UDP 7946
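A sketch of opening those ports with ufw on each VM (adapt to whatever firewall you actually use):
sudo ufw allow 4789/udp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp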
