Questions about the docker --link parameter

As we know, on a single host with a Docker daemon, containers connect to the docker0 bridge, and so containers can access each other by default.
Then what's the use of the --link option? Is it any different from accessing containers directly by IP?
What does it actually do?

From the Docker docs:
When you set up a link, you create a conduit between a source container and a recipient container. The recipient can then access select data about the source
When two containers are linked, Docker will set some environment variables in the target container to enable programmatic discovery of information related to the source container.
And some more:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file. Here's an entry for the web container:
So, basically --link creates a set of environment variables and adds some entries to the /etc/hosts file in order to ease communication. But, the containers are still directly accessed via IP.
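For illustration, a hedged sketch of what such a hosts entry might look like inside the recipient container (the names web and webdb come from the docs quotes above; the IP, short ID, and source name are illustrative):
docker exec web cat /etc/hosts
# ...
# 172.17.0.82    webdb 0fd099a04e3d db
The recipient can then simply connect to webdb, and this entry is updated if the source container restarts with a new IP, whereas the environment variables are not.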

When you create a container using the --link option, Docker exposes the linked container to the new one in two ways:
It creates an entry in /etc/hosts with the IP of the linked container and the alias given when creating the link.
It exposes some information about the linked container as environment variables. As the Docker documentation shows:
Docker will then also define a set of environment variables for each port that is exposed by the source container. The pattern followed is:
<name>_PORT_<port>_<protocol> will contain a URL reference to the port. Where <name> is the alias name specified in the --link parameter (e.g. webdb), <port> is the port number being exposed, and <protocol> is either TCP or UDP. The format of the URL will be: <protocol>://<container_ip_address>:<port> (e.g. tcp://172.17.0.82:8080). This URL will then be split into the following 3 environment variables for convenience:
<name>_PORT_<port>_<protocol>_ADDR will contain just the IP address from the URL (e.g. WEBDB_PORT_8080_TCP_ADDR=172.17.0.82).
<name>_PORT_<port>_<protocol>_PORT will contain just the port number from the URL (e.g. WEBDB_PORT_8080_TCP_PORT=8080).
<name>_PORT_<port>_<protocol>_PROTO will contain just the protocol from the URL (e.g. WEBDB_PORT_8080_TCP_PROTO=tcp).
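For example, a minimal sketch of inspecting these variables (db-image is a hypothetical image that EXPOSEs port 8080; the IP matches the docs example but will vary in practice):
docker run -d --name db db-image
docker run --rm --link db:webdb busybox env | grep '^WEBDB_'
# WEBDB_PORT=tcp://172.17.0.82:8080
# WEBDB_PORT_8080_TCP=tcp://172.17.0.82:8080
# WEBDB_PORT_8080_TCP_ADDR=172.17.0.82
# WEBDB_PORT_8080_TCP_PORT=8080
# WEBDB_PORT_8080_TCP_PROTO=tcp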
There is no difference compared to accessing via IP, but using links lets you configure the container without having to know the IP that the Docker daemon will assign. Check the Docker documentation for further information.

If you start the Docker daemon with the --icc=false option, containers can't communicate with each other by default. You must use --link to connect two containers.
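A hedged sketch of that setup (the dockerd flags are as documented; the image names are illustrative):
# daemon started with inter-container communication disabled
dockerd --icc=false --iptables=true &
docker run -d --name db db-image
# without a link, traffic from app-image to db would be dropped;
# --link inserts iptables ACCEPT rules for the exposed ports
docker run --rm --link db:db app-image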

Related

Understanding why ports need to be exposed for inter container communication on docker0

I was going through docker official docs to understand the difference between user-defined and default bridge. Link to specific page - https://docs.docker.com/network/bridge/
In first point of section "Differences between user-defined bridges and the default bridge", it is stated that
If you run the same application stack on the default bridge network,
you need to open both the web port and the database port, using the -p
or --publish flag for each.
I don't understand this specific text: why is it necessary to explicitly publish (-p) the database container's port when it will only be used by some other container connected to the same bridge?
My existing understanding is that, unless explicitly blocked, containers connected to docker0 can freely communicate with each other.
So, this extract has confused me. Can somebody help?
If you take away one thing from that page, it's that you should always docker network create a network and then docker run --net containers on that network, if you're using plain Docker commands. (Docker Compose does this automatically for you; Kubernetes's networking model is fundamentally different.)
If you docker run a container without a --net option then you wind up using a backwards-compatibility networking mode. In this mode (the "default bridge network" from the page you cite), containers cannot reach each other by name by default. Your two options are for the server to publish a port (docker run -p) and the client to connect to the published port on the host, or for the server to expose a port (almost always done with an EXPOSE directive in the Dockerfile) and the client to --link to it.
There's no real reason to be using this "default" mode at this point, and in practice the paragraph you cite shouldn't matter except for fairly old scripted Docker setups.
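A minimal sketch of the recommended pattern (the network and image names are illustrative):
docker network create mynet
docker run -d --net mynet --name db postgres
docker run -d --net mynet --name web -p 8080:80 my-web-image
# "web" reaches the database at db:5432 by name; only the web port is published to the host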

Can't resolve set hostname from another docker container in same network

I have db and server containers, both running in the same network. I can ping the db host by its container ID with no problem.
When I set a hostname for the db container manually (-h myname), it takes effect ($ hostname returns the set name), but I can't ping that hostname from another container in the same network. The container ID is still pingable.
However, it all works with no problem under Docker Compose.
What am I missing?
The hostname is not used by Docker's built-in DNS service. It's a counterintuitive exception, but since hostnames can change outside of Docker's control, it makes some sense. Docker's DNS will resolve:
the container id
container name
any network aliases you define for the container on that network
The easiest of these options is the last one, which is configured automatically when running containers with a compose file: the service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.
You need to be on a user-created network, not something like the default bridge, which has DNS disabled. This is done by default when running containers with a compose file.
Avoid using links since they are deprecated. I'd only recommend adding host entries for external static hosts that are not in any DNS; for container-to-container communication, or access to other hosts outside of Docker, DNS is preferred.
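A hedged demonstration of which names actually resolve (the network, container, and alias names are illustrative):
docker network create appnet
docker run -d --net appnet --name db --net-alias database -h myname postgres
docker run --rm --net appnet busybox ping -c1 db        # resolves: container name
docker run --rm --net appnet busybox ping -c1 database  # resolves: network alias
docker run --rm --net appnet busybox ping -c1 myname    # fails: --hostname is not in Docker's DNS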
I've found out that the problem can be solved without a user-defined network by using the --add-host option. The container's IP can be obtained using the inspect command.
But when containers are in the same network, they are able to access each other via their names.
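A sketch of that workaround on the default bridge (the container name db is illustrative):
DB_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' db)
docker run --rm --add-host mydb:"$DB_IP" busybox ping -c1 mydb
# note: the entry is static; if db restarts with a new IP it becomes stale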
As stated in the docker docs, if you start containers on the default bridge network, adding -h myname will add this information to
/etc/hosts
/etc/resolv.conf
and the bash prompt
of the container just started.
However, this will not have any effect to other independent containers. (You could use --link to add this information to /etc/hosts of other containers. However, --link is deprecated.)
On the other hand, when you create a user-defined bridge network, docker provides an embedded DNS server to make name lookups between containers on that network possible, see Embedded DNS server in user-defined networks. Name resolution uses the container names defined with --name. (You will not find another container by using its --hostname value.)
The reason why it works with docker-compose is that docker-compose creates a custom network for you and automatically names the containers.
The situation seems to be a bit different when you don't specify a name for the container yourself. The run reference says
If you do not assign a container name with the --name option, then the daemon generates a random string name for you. [...] If you specify a name, you can use it when referencing the container within a Docker network.
In agreement with your findings, this should be read as: If you don't specify a custom --name, you cannot use the auto-generated name to look up other containers on the same network.

Update Prometheus Host/Port in Docker

Question: How can I change a Prometheus container's host address from the default 0.0.0.0:9090 to something like 192.168.1.234:9090?
Background: I am trying to get a Prometheus container to install and start in a production environment on a remote server. Since the server uses an IP other than Prometheus's default (0.0.0.0), I need to update the host address that the Prometheus container uses. If I don't, I can't sign in to the UI and see any of the metrics. The IP of the remote server is provided by the user during the app's installation.
From what I understand from Prometheus's config document and the output of ./prometheus -h, the host address is immutable and therefore needs to be updated using the --web.listen-address= command-line flag. My problem is I don't know how to pass that flag to my Prometheus container; I can't simply run ./prometheus --web.listen-address="<remote-ip>:9090" because that's not a Docker command. And I can't pass it to the docker run ... command because Docker doesn't recognize that flag.
Environment:
Using SaltStack for config management
I cannot use Docker Swarm (i.e. each container must use its own Dockerfile)
You don't need to change the containerized Prometheus's listen address. Listening on 0.0.0.0 means "all interfaces" inside the container.
By default, it won't even be accessible from your host's network, let alone any surrounding networks (like the Internet).
You can map it to a port on one of the host's interfaces though. The command for that looks somewhat like this:
docker run --rm -p 8080:9090 prom/prometheus
which would make the service reachable at 127.0.0.1:8080 on your host (and, since Docker publishes to all interfaces by default, on the host's other addresses too)
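If you want it reachable only on a specific host address, such as the one from the question, you can bind the published port to that IP:
docker run --rm -p 192.168.1.234:9090:9090 prom/prometheus
# Prometheus still listens on 0.0.0.0:9090 inside the container;
# Docker maps host 192.168.1.234:9090 to it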
You can do that with a public (e.g. internet-facing) interface as well, although I'd generally advise against exposing containers like this, due to numerous operational implications which are somewhat beyond the scope of this answer. You should at least consider a reverse-proxy setup, where users are only allowed to talk to some heavy-duty web server which then communicates with Prometheus, instead of letting them access your backend directly, even if this is just a small development deployment.
For general considerations on productionizing container setups, I suggest this. Despite its clickbaity title, it is a useful read.

How to link docker host to container by name

I know how to link two containers, but can I link the host to a container in a similar way?
I have an nginx server on the host, and I want it to connect to a container named my-varnish, which is linked to my-apachephp, which is linked to my-mysql.
Currently I either map a port (-p 8080:80) or find the bridge IP address (which is different each time I destroy and rebuild the set of containers). I would like to reach the container's bridge IP by hostname, without adding a dyn-dns registration process to each container.
Thoughts?!
Use docker run's --add-host functionality. It adds the host mapping to the container's /etc/hosts file.
From the usage text:
--add-host value Add a custom host-to-IP mapping (host:ip) (default [])
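A hedged example for this setup (172.17.0.1 is the typical docker0 gateway address, but check yours with ip addr or docker network inspect bridge; the image name is illustrative):
docker run -d --add-host dockerhost:172.17.0.1 --name my-varnish my-varnish-image
# inside my-varnish, "dockerhost" now resolves to the host's bridge address via /etc/hosts
Note this covers container-to-host resolution; for the host's nginx to find the container, you would still publish a port or look up the container's IP with docker inspect.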

Meaning and usage of <alias>_PORT_<port>_<proto>_* env vars in linked containers

I can link docker containers with the --link name:alias parameter, which will create several environment variables within the container, for example ALIAS_PORT_1234_TCP_ADDR, ALIAS_PORT_1234_TCP_PORT, ALIAS_PORT_1234_TCP_PROTO.
The ALIAS_PORT_1234_TCP_ADDR var can be used to detect the IP of the linked container (even though it is advisable to use the /etc/hosts entry instead, because it will be updated on container restart, while the env vars do not change).
But it is not obvious to me what I could use the other two variables for. In the given example ALIAS_PORT_1234_TCP_PORT will be 1234 and ALIAS_PORT_1234_TCP_PROTO will be tcp, but both values are already contained in the names of the vars.
Could somebody enlighten me about the intended meaning and use of these variables?
Are there scenarios in which the exposed port of a linked container is different from what it declares with EXPOSE? I know that I can bind a container port to the host via -p, but my understanding was that this has no consequence for linked containers, because they talk directly to the port of the linked container and need not go through the host (which would be more difficult, because it is not so easy to get the IP of the host from inside a container).
Also, why would the protocol of a port change or need to be detected?
I have wondered about this for a long time myself. I think the environment variables are mainly just a form of documentation; you can call env to discover what ports the container exposes. I suppose one use case may be to parse the names of the variables from the environment in a linked container; this would allow you to choose at run-time which port or ports to use. (Even then it would seem to make more sense to have a variable such as ALIAS_PORTS rather than names with the values already encoded.)
Also, I think this may be changing in the future as the networking features of Docker evolve.
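A sketch of that parsing idea (assuming a link alias of webdb; run inside the linked container):
# list every TCP port the "webdb" alias exposes, by parsing the variable names
env | sed -n 's/^WEBDB_PORT_\([0-9][0-9]*\)_TCP=.*/\1/p'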
Are there scenarios, in which the exposed port of a linked container is different to what it declares with EXPORT
If you mean EXPOSE, then the main difference is that without a link, you would need to publish the port to the host in order for the other container to discover it and communicate back to the first container which exposed its port.
But with --link, you don't have to use -P or -p <hostport>:<containerport>; you can discover the port exposed by the first container through the environment variables. The host doesn't need to give up one of its ports to channel traffic back to the first container's port.
See "Communication between linked docker containers" as an example which would use all three environment variables.
If you want to invoke the URL udp://Container_1_IP:5043 from Container_2, you need --link to set all 3 variables in order to use the right IP, port, and protocol.
The OP comments:
Sure, I can construct "udp://Container_1_IP:5043" from env vars, but I cannot see why I would need to do:
$ALIAS_PORT_5043_UDP_PROTO://$ALIAS_PORT_5043_UDP_ADDR:$ALIAS_PORT_5043_UDP_PORT
That is true, but it doesn't take into account port ranges (with the PORT_START and PORT_END environment variables).
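For completeness, a runnable sketch of the construction from the comment above (the variable values are illustrative; in a real linked container Docker sets them for you):
ALIAS_PORT_5043_UDP_PROTO=udp
ALIAS_PORT_5043_UDP_ADDR=172.17.0.5
ALIAS_PORT_5043_UDP_PORT=5043
echo "${ALIAS_PORT_5043_UDP_PROTO}://${ALIAS_PORT_5043_UDP_ADDR}:${ALIAS_PORT_5043_UDP_PORT}"
# udp://172.17.0.5:5043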
