Link containers in docker (RancherOS and command line)

I run RancherOS to run Docker containers.
I created a container in the GUI to run my databases (image: mysql, name: r-mysql-e4e8df05). Different containers use it.
I can link other containers to it in the GUI.
This time I would like to automate the creation and starting of a container from Jenkins, but the linking does not work.
My command:
docker run -d --name=app-that-needs-mysql --link mysql:mysql myimages.mycompany.com/appthatneedsmysql
I get error:
Error response from daemon: Could not get container for mysql
I tried different things:
1)
--link r-mysql-e4e8df05:mysql
Error:
Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
2)
Try to use --net options
Running: docker network ls
NETWORK ID NAME DRIVER SCOPE
c..........e bridge bridge local
4..........c host host local
c..........a none null local
With --net none it succeeds, but it does not actually work: the app cannot connect to the DB.
With --net host, the error message is: conflicting options: host type networking can't be used with links. This would result in undefined behavior.
With --net bridge, the error message is: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
I also checked in the Rancher GUI where this mysql container runs: it gets a container IP starting with 10.X.X.X.
I also tried to add --net managed, but the error is: network managed not found.
I believe I am misunderstanding something in this Docker linking process. Please give me some idea how I can make this work.
(Previously it worked when I created the same container and linked it to mysql in the GUI.)

Hey @Tomi, you can expose the mysql container on whatever port you like from Rancher. That way you don't have to link the containers; your Jenkins-spawned container can connect to it on the exposed port on the host. You could also use Jenkins to spin up the container within Rancher, using the Rancher CLI. That way you don't have to surface mysql on the host's network... a few ways to skin that cat with Rancher.
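For example, if the mysql container's port 3306 is published on the Rancher host, the Jenkins-spawned container can connect through the host instead of a link (a sketch; the host address placeholder and the DB_HOST/DB_PORT variable names are assumptions, not from the original setup):
# connect via the port exposed on the Rancher host instead of --link
docker run -d --name=app-that-needs-mysql \
  -e DB_HOST=<rancher-host-ip> \
  -e DB_PORT=3306 \
  myimages.mycompany.com/appthatneedsmysql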

At first glance it seems that Rancher uses a managed network, which docker network ls does not show.
Reproducing the problem
I used dummy alpine containers to reproduce this:
# create some network
docker network create your_invisible_network
# run a container belonging to this network
docker container run \
--detach \
--name r-mysql-e4e8df05 \
--net your_invisible_network \
alpine tail -f /dev/null
# trying to link this container
docker container run \
--link r-mysql-e4e8df05:mysql \
alpine ping mysql
Indeed I get docker: Error response from daemon: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
Possible Solution
A workaround would be to create a user-defined bridge network and simply add your mysql container to it:
# create a network
docker network create \
--driver bridge \
a_workaround_network
# connect the mysql to this network (and alias it)
docker network connect \
--alias mysql \
a_workaround_network r-mysql-e4e8df05
# try to ping it using its alias
docker container run \
--net a_workaround_network \
alpine \
ping mysql
# yay!
PING mysql (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.135 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.084 ms
As you can see in the output, pinging the mysql container via its DNS name is possible.
Good to know:
With user-created bridge networks, DNS resolution works out of the box without having to explicitly --link containers :)
Containers can belong to several networks, which is why this works. In this case the mysql container belongs to both your_invisible_network and a_workaround_network.
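Applied to the original question, the Jenkins job can therefore skip --link entirely and start the app on the shared network (a sketch, reusing the a_workaround_network created above):
# attach the existing mysql container under the alias "mysql"
docker network connect --alias mysql a_workaround_network r-mysql-e4e8df05
# start the app on the same network; it resolves the DB as "mysql"
docker run -d --name=app-that-needs-mysql \
  --net a_workaround_network \
  myimages.mycompany.com/appthatneedsmysql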
I hope this helps!

Related

Docker internal DNS not resolving [duplicate]

I plan to split my monolithic server up into many small docker containers but haven't found a good solution for "inter-container communication" yet.
I know how to link containers together and how to expose ports, but none of these solutions are satisfying to me.
Is there any solution to communicate via hostnames (container names) between the containers like in a traditional server network?
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases.
Edit: After Docker 1.9, the docker network command (see below https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.
My solution is to set up dnsmasq on the host so that DNS records are automatically updated: "A" records have the names of containers and point to the IP addresses of the containers, updated automatically (every 10 sec). The automatic updating script is pasted here:
#!/bin/bash
# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}
declare -A service_map

while true
do
    changed=false
    while read -r line
    do
        # the container name is the last field of each `docker ps` line
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP addr changed
        then
            service_map[$name]=$ip
            # write to file
            echo "$name has a new IP Address $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)
    # a change of IP address occurred, restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi
    ${SLEEP} "$INTERVAL"
done
Make sure your dnsmasq service is available on docker0. Then, start your container with --dns HOST_ADDRESS to use this mini DNS service.
Reference: http://docs.blowb.org/setup-host/dnsmasq.html
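For example, assuming dnsmasq listens on the default docker0 address 172.17.0.1 (an assumption; check yours with ip addr show docker0), a container started like this resolves other containers by name:
# my-other-container is a placeholder for one of your running containers
docker run --dns 172.17.0.1 -it --rm alpine ping my-other-container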
That should be what --link is for, at least for the hostname part.
With docker 1.10, and PR 19242, that would be:
docker run --net-alias=[]: Add network-scoped alias for the container
(see the last section below)
That is what the documentation section "Updating the /etc/hosts file" details:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file.
For instance, launch an LDAP server:
docker run -t --name openldap -d -p 389:389 larrycai/openldap
And define an image to test that LDAP server:
FROM ubuntu
RUN apt-get update && apt-get -y install ldap-utils
RUN touch /root/.bash_aliases
RUN echo "alias lds='ldapsearch -H ldap://internalopenldap -LL -b ou=Users,dc=openstack,dc=org -D cn=admin,dc=openstack,dc=org -w password'" > /root/.bash_aliases
ENTRYPOINT bash
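Assuming that Dockerfile is saved in the current directory, build it as the ldaptest image used below:
docker build -t ldaptest .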
You can expose the 'openldap' container as 'internalopenldap' within the test image with --link:
docker run -it --rm --name ldp --link openldap:internalopenldap ldaptest
Then, if you type 'lds', that alias will work:
ldapsearch -H ldap://internalopenldap ...
That would return people, meaning internalopenldap is correctly reached from the ldaptest image.
Of course, docker 1.7 will add libnetwork, which provides a native Go implementation for connecting containers. See the blog post.
It introduced a more complete architecture, with the Container Network Model (CNM)
That will update the Docker CLI with new "network" commands, and document how the "-net" flag is used to assign containers to networks.
docker 1.10 has a new section Network-scoped alias, now officially documented in network connect:
While links provide private name resolution that is localized within a container, the network-scoped alias provides a way for a container to be discovered by an alternate name by any other container within the scope of a particular network.
Unlike the link alias, which is defined by the consumer of a service, the network-scoped alias is defined by the container that is offering the service to the network.
Continuing with the above example, create another container in isolated_nw with a network alias.
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
--alias=[]
Add network-scoped alias for the container
You can use the --link option to link to another container with a preferred alias.
You can pause, restart, and stop containers that are connected to a network. Paused containers remain connected and can be revealed by a network inspect. When the container is stopped, it does not appear on the network until you restart it.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
$ docker network connect --link container1:c1 multi-host-network container2
EDIT: It is not bleeding edge anymore: http://blog.docker.com/2016/02/docker-1-10/
Original Answer
I battled with it the whole night.
If you're not afraid of the bleeding edge, the latest versions of Docker Engine and Docker Compose both implement libnetwork.
With the right config file (which needs to use the version 2 format), you will create services that can all see each other. And, as a bonus, you can scale them with docker-compose as well (you can scale any service that doesn't bind a port on the host).
Here is an example file:
version: "2"
services:
router:
build: services/router/
ports:
- "8080:8080"
auth:
build: services/auth/
todo:
build: services/todo/
data:
build: services/data/
And the reference for this new version of the compose file:
https://github.com/docker/compose/blob/1.6.0-rc1/docs/networking.md
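Within such a project, every service can reach the others by its service name; for instance (a sketch; it assumes the router image includes ping):
# from inside the router container, the auth service resolves by name
docker-compose exec router ping auth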
As far as I know, by using only Docker this is not possible. You need some DNS to map container IPs to hostnames.
If you want an out-of-the-box solution, one option is to use, for example, Kontena. It comes with the network overlay technology from Weave, which is used to create virtual private LAN networks for each service, and every service can be reached by a service_name.kontena.local address.
Here is a simple example of a WordPress application's YAML file where the WordPress service connects to the MySQL server with the wordpress-mysql.kontena.local address:
wordpress:
  image: wordpress:4.1
  stateful: true
  ports:
    - 80:80
  links:
    - mysql:wordpress-mysql
  environment:
    - WORDPRESS_DB_HOST=wordpress-mysql.kontena.local
    - WORDPRESS_DB_PASSWORD=secret
mysql:
  image: mariadb:5.5
  stateful: true
  environment:
    - MYSQL_ROOT_PASSWORD=secret

Docker network bridge

I'm trying to run multiple containers with the same ports on docker.
For this, I have created a network in bridge mode and specified a subnet.
docker network create -d bridge --subnet 192.168.99.0/24 mynetwork
Then connected the docker containers to it with a static IP.
docker run -i -t -d -p 2377:2377 -p 7946:7946 -p 4789:4789 --name container image
docker network connect --ip 192.168.99.98 mynetwork container
I did this with three containers (using different IPs); after starting the second one I got:
Error response from daemon: driver failed programming external connectivity on endpoint container(...): Bind for 0.0.0.0:7946 failed: port is already allocated
As far as I'm concerned, I should not be getting this error, due to bridge mode.
The docker run -p option allocates a port on the host system; those are shared across all containers, independently of what Docker-private network they’re using. These also will conflict with non-Docker processes running on the host.
If your goal is just to be able to communicate between containers on the same network, you don’t need a -p option at all. They can use each others’ --name and the port the service inside the container is listening on to connect.
If you’re trying to run multiple Docker container stacks at the same time, you need to decide which specific instance port 2377 on your host will route to, and change the other containers’ -p options.
Specifically setting the Docker-internal private IP addresses (or worrying about them at all) is almost never necessary. I’d delete those --subnet and --ip options. To communicate between containers, put them on the same network as described above; from outside you need a (unique) -p option.
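As a sketch of that first option (mynetwork and the image/container names follow the question; the listening port and the availability of ping inside the image are assumptions):
# one user-defined bridge network, no -p needed for container-to-container traffic
docker network create mynetwork
docker run -d --name instance1 --net mynetwork image
docker run -d --name instance2 --net mynetwork image
# instance2 reaches instance1 by name on whatever port the service listens on
docker exec instance2 ping instance1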

What goes in "some-network" placeholder in dockerized redis cli?

I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My docker run command in the section before did default port mapping: docker run -d -p 6379:6379, etc.
I'm starting my redis server with default docker network configuration, and see this is in use:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abcfa8a32de9 redis "docker-entrypoint.s…" 19 minutes ago Up 19 minutes 0.0.0.0:6379->6379/tcp some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge command and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
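Putting that together for the already-running some-redis container from the question (a sketch):
# create the network and attach the existing server container to it
docker network create some-network
docker network connect some-network some-redis
# the CLI container can now resolve the server by name
docker run -it --network some-network --rm redis redis-cli -h some-redis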
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.

Docker - connecting to an open port in a container

I'm new to docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running docker container. I've pulled and run the rabbitmq container from Docker Hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container uses port 5672 for clients to connect to.
After running the container (as instructed in the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (it is possible on a regular rabbitmq installation and should be on a container as well).
I've (at least I think I did) gotten the container IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
And the result I got was 172.17.0.2. When I try to access it using telnet 172.17.0.2 5672, it's unsuccessful.
The address 172.17.0.2 seems strange to me because if I run ipconfig on my machine I don't see any interface using a 172.17.0.x address. I do see an Ethernet adapter vEthernet (DockerNAT) using the following IP: 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672) then I can telnet into this port using telnet localhost 5672 and immediately connect.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you need, because you are running the application over the default bridge network (on Windows, I guess).
From the official docker docs:
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
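So, as a sketch of the user-defined-network route (rabbit-net is a hypothetical name), a second container can reach RabbitMQ's port 5672 directly, without any -p option:
docker network create rabbit-net
docker run -d --hostname my-rabbit --name some-rabbit --network rabbit-net rabbitmq:3
# busybox ships a telnet applet; the server resolves by container name
docker run -it --rm --network rabbit-net busybox telnet some-rabbit 5672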
Later on the rabbitmq hub page there is a reference to a Management Plugin, which is run by executing the command
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
which exposes port 8080 for management, and which I think is what you may need.
You should also notice that they talk about clusters and nodes there; maybe they meant the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
Hope I could help somehow :)

ipvlan L3 docker can't ping host

I was playing with ipvlan_mode=l3 by following the tutorial in the docker GitHub repo:
https://gist.github.com/nerdalert/28168b016112b7c13040#ipvlan-l3-mode-example-usage
After running the commands, my host and the containers are not able to ping each other.
However, two containers on different subnets using the same parent iface are able to ping each other.
Commands:
docker network create -d ipvlan \
--subnet=192.168.214.0/24 \
--subnet=10.1.214.0/24 \
-o ipvlan_mode=l3 ipnet210
# Test 192.168.214.0/24 connectivity
$ docker run --net=ipnet210 --ip=192.168.214.10 -itd alpine /bin/sh
$ docker run --net=ipnet210 --ip=10.1.214.10 -itd alpine /bin/sh
# Test L3 connectivity from 192.168.214.0/24 to 10.1.214.0/24
$ docker run --net=ipnet210 --ip=192.168.214.9 -it --rm alpine ping -c 2 10.1.214.10
# Test L3 connectivity from 10.1.214.0/24 to 192.168.214.0/24
$ docker run --net=ipnet210 --ip=10.1.214.9 -it --rm alpine ping -c 2 192.168.214.10
Is there anything I'm missing?
Thanks in advance.
You need to set up a static route on the host or on the upstream router to get a connection between the host and the docker subnets, as mentioned in the documentation at the end of the chapter:
In order to ping the containers from a remote Docker host or the container
be able to ping a remote host, the remote host or the physical network in
between need to have a route pointing to the host IP address of the
container’s Docker host eth interface.
For example (referring to the picture in the documentation), you have to create a route which points all traffic for the subnet 172.16.20.0/24 to the gateway 192.168.50.10.
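As a sketch, on the remote host or router (192.168.50.10 is the answer's example gateway address for the Docker host; substitute your own), routes for the question's two subnets would look like:
# route both container subnets via the Docker host's eth interface
ip route add 192.168.214.0/24 via 192.168.50.10
ip route add 10.1.214.0/24 via 192.168.50.10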
I found this Q after reading about the ipvlan L3 driver here:
https://docs.docker.com/network/ipvlan/#ipvlan-l3-mode-example
And I see the same behavior on ubuntu 18.04 and ubuntu 20.04, both with:
kernel 5.4.0-96-generic
docker-ce 20.10.12
I assume it is by design that the host can't even see those new networks with ip r.
I would be very interested to hear how external connectivity for containers is supposed to work; in the docs (link above) it's not explained, just a simple picture without any details... not helpful at all.
