I need to know the hostnames (or IP addresses) of some containers running on the same machine.
As I already commented here (but with no answer yet), I use docker-compose. The documentation says Compose will automatically create a hostname entry for each container defined in the same docker-compose.yml file:
Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
But I can't see any host entries via docker exec -it my_container tail -20 /etc/hosts.
I also tried to add links to my container, but nothing changed.
Docker 1.10 introduced some new networking features, including an internal DNS server that handles host lookups.
On the default bridge network (docker0), lookups continue to function via /etc/hosts as they used to. /etc/resolv.conf will point to your host's resolvers.
On a user-defined network, Docker will use the internal DNS server. /etc/resolv.conf will have an internal IP address for the Docker DNS server. This setup allows bridge, custom, and overlay networks to work in a similar fashion, so an overlay network on Swarm will populate host data from across the swarm just as a local bridge network would.
The "legacy" setup was maintained so the new networking features could be introduced without impacting existing setups.
Discovery
The DNS resolver is able to provide IPs for a Docker Compose service via the name of that service.
For example, with web and db services defined, and the db service scaled to 3, all db instances will resolve:
$ docker-compose run --rm web nslookup db
Name: db
Address 1: 172.22.0.4 composenetworks_db_2.composenetworks_mynet
Address 2: 172.22.0.5 composenetworks_db_3.composenetworks_mynet
Address 3: 172.22.0.3 composenetworks_db_1.composenetworks_mynet
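(The scaling in this example can be done with docker-compose scale db=3, or docker-compose up -d --scale db=3 on newer versions; the composenetworks_* names in the output come from the compose project and network names.)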
Here is my situation:
First, I ran a MySQL container (IP: 172.17.0.2) on CentOS.
Then I ran a Nacos container on the same host, with its datasource pointing at the MySQL above; but instead of using the MySQL container's IP, I used the IP of the bridge gateway (172.17.0.1). Both containers are attached to the default bridge.
What surprised me was that Nacos works well; it can query config data from the MySQL container normally.
How did this happen? I have read some documentation but didn't find the answer. It really confuses me.
On modern Docker installations, try to avoid using the default bridge network. docker network create a network (it doesn't need any special options, but it does need to be created) and then launch your containers with --net pointing at that network. If you're using Compose, it creates a user-defined bridge network named default for you.
On your CentOS host, if you run ifconfig, you should see a docker0 interface with the 172.17.0.1 address. When you launch a container with the docker run -p option, that container is accessible via the first port number on all host interfaces, including the docker0 interface.
Meanwhile, inside a container (on the default bridge network), it sees that same IP address as the normal IPv4 gateway address (try docker run --rm busybox route -n). So, when you connect to 172.17.0.1:3306, you're connecting out to the host, and then connecting to the published port of the database container.
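The routing table inside such a container will look roughly like this (the exact addresses depend on your setup):

$ docker run --rm busybox route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0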
This isn't a totally standard way to connect between containers, though it will work. You should prefer using Docker named networks, which will let you connect to another container using the container's name without manually doing any IP-address lookups. If you really can't move off of the default bridge network, then the standard approach is to --link to the other container, but this entire path is considered outdated.
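For illustration, a minimal sketch of the named-network approach (the network name and credentials here are placeholders, not taken from your setup):

$ docker network create appnet
$ docker run -d --net appnet --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8
$ docker run -d --net appnet --name nacos nacos/nacos-server
$ docker run --rm --net appnet busybox nslookup mysql

Nacos can then be configured with mysql as its database host instead of 172.17.0.1, and Docker's DNS will resolve that name to the container's current IP address.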
I have a set of docker containers created with docker compose, which creates a "user-defined" bridge network.
One of these docker containers is running DNSmasq so we can define custom (internal) domain names to point to local IPs.
Trouble is, none of the other docker containers can resolve these domain names. I think the issue is that I can't get the docker DNS to forward its requests to the docker container running DNSmasq (i.e. it doesn't even know it exists).
As a test, I did a docker network inspect <network created by docker compose> and noted the IP address of the DNSmasq container. Then, in one of the other containers' /etc/resolv.conf, I set nameserver to that IP address. After that I could resolve all these internal domain names.
Sadly, putting the dnsmasq container's name in there doesn't work, despite the fact that the user-defined network has automatic service discovery enabled.
It seems that one way to make this work, then, is to force my DNSmasq container to always have the same IP, and then make sure the other docker containers point to that as their nameserver, e.g. by defining the network explicitly in the compose file.
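For reference, this is roughly the workaround I have in mind, sketched with placeholder names and a made-up subnet:

$ cat > docker-compose.yml <<'EOF'
version: "2.1"
services:
  dnsmasq:
    image: andyshinn/dnsmasq
    networks:
      internal:
        ipv4_address: 172.28.0.2
  app:
    image: busybox
    dns: 172.28.0.2
    networks:
      - internal
networks:
  internal:
    ipam:
      config:
        - subnet: 172.28.0.0/16
EOF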
Is there no other way? I'd rather not have to define the entire network to replicate this automatically created one when all I want is to know the IP address of one container.
I have db and server containers, both running in the same network. I can ping the db container by its container ID with no problem.
When I set a hostname for the db container manually (-h myname), it has an effect ($ hostname returns the set name), but I can't ping that hostname from another container in the same network. The container ID is still pingable.
Yet the same thing works with no problem under docker compose.
What am I missing?
Hostname is not used by Docker's built-in DNS service. It's a counterintuitive exception, but since hostnames can change outside of Docker's control, it makes some sense. Docker's DNS will resolve:
the container ID
the container name
any network aliases you define for the container on that network
The easiest of these options is the last one which is automatically configured when running containers with a compose file. The service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.
You need to be on a user-created network, not something like the default bridge, which has DNS disabled. This is done by default when running containers with a compose file.
Avoid using links, since they are deprecated. I'd only recommend adding host entries for external static hosts that are not in any DNS; for container-to-container communication, or access to other hosts outside of Docker, DNS is preferred.
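For example, a quick sketch (all names are arbitrary):

$ docker network create mynet
$ docker run -d --net mynet --name db1 --network-alias db -h myname busybox sleep 3600
$ docker run --rm --net mynet busybox nslookup db1     # container name: resolves
$ docker run --rm --net mynet busybox nslookup db      # network alias: resolves
$ docker run --rm --net mynet busybox nslookup myname  # hostname: does not resolve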
I've found that the problem can be solved without a user-defined network by using the --add-host option. The container's IP can be obtained using the inspect command.
But when containers are on the same network, they are able to access each other via their names.
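A sketch of the --add-host approach (the container name db is a placeholder):

$ DB_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' db)
$ docker run --rm --add-host db:"$DB_IP" busybox ping -c 1 db

Keep in mind that the IP can change whenever the db container is recreated, so name resolution on a shared network is the more robust option.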
As stated in the docker docs, if you start containers on the default bridge network, adding -h myname will add this information to
/etc/hosts
/etc/resolv.conf
and the bash prompt
of the container just started.
However, this will not have any effect to other independent containers. (You could use --link to add this information to /etc/hosts of other containers. However, --link is deprecated.)
On the other hand, when you create a user-defined bridge network, docker provides an embedded DNS server to make name lookups between containers on that network possible, see Embedded DNS server in user-defined networks. Name resolution takes the container names defined with --name. (You will not find another container by using its --hostname value.)
The reason why it works with docker-compose is that docker-compose creates a custom network for you and automatically names the containers.
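You can see this after docker-compose up; assuming a project directory named myproj (illustrative), Compose creates and attaches containers to a network like this:

$ docker network ls --filter name=myproj
NETWORK ID     NAME             DRIVER    SCOPE
0123456789ab   myproj_default   bridge    local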
The situation seems to be a bit different when you don't specify a name for the container yourself. The run reference says:
If you do not assign a container name with the --name option, then the daemon generates a random string name for you. [...] If you specify a name, you can use it when referencing the container within a Docker network.
In agreement with your findings, this should be read as: If you don't specify a custom --name, you cannot use the auto-generated name to look up other containers on the same network.
How do you access remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS approach (have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works, but it is very time-consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
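A minimal sketch of the publish-and-connect pattern (host name and ports are placeholders):

# on the remote host
$ docker run -d -p 8080:80 --name web nginx
# from any machine that can reach that host
$ curl http://remote-host.example.com:8080/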
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.
An application server is running as one Docker container and database running in another container. IP address of the database server is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the address of the host machine as the destination IP in the required docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
I should disclose that I am on the Weave engineering team.
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your db is a container on docker server 2? If so, you treat it like ordinary remote hosts. Your DB port needs to be exposed on docker server 2 and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host Docker subnetwork is a private network. It's perhaps possible to make these addresses routable, but it would be much pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in your Dockerfile and -p in your docker run). Then you just do host-to-host. You can resolve hosts by IP, environment variables, or good old DNS.
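Concretely, a sketch (host names, image names, and credentials are all placeholders):

# on docker server 2: publish the database port on the host
$ docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret --name db mysql:5.7
# on docker server 1: hand the app the other host's address, not a container IP
$ docker run -d -e DB_HOST=docker2.example.com -e DB_PORT=3306 my-app-image

Here my-app-image and the DB_* variables are hypothetical; the point is that the application reads the remote host's address and published port from its environment.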
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was disabled using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/