Docker internal DNS not resolving [duplicate] - docker

I plan to split my monolithic server up into many small Docker containers but haven't found a good solution for "inter-container communication" yet.
I know how to link containers together and how to expose ports, but none of these solutions are satisfying to me.
Is there any solution to communicate via hostnames (container names) between the containers like in a traditional server network?

The new networking feature allows you to connect to containers by
their name, so if you create a new network, any container connected to
that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases

Edit: After Docker 1.9, the docker network command (see below https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.
My solution is to set up dnsmasq on the host and have its DNS records updated automatically: "A" records carry the container names and point to the containers' IP addresses, refreshed every 10 seconds. The updating script is pasted below:
#!/bin/bash
# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}

declare -A service_map

while true
do
    changed=false
    while read -r line
    do
        # the container name is the last column of `docker ps` output
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP address changed
        then
            service_map[$name]=$ip
            # write a dnsmasq host-record file for this container
            echo "$name has a new IP address: $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)
    # an IP address changed, so restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi
    ${SLEEP} "$INTERVAL"
done
Make sure your dnsmasq service listens on docker0. Then start your containers with --dns HOST_ADDRESS to use this mini DNS service.
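For example, a minimal sketch (172.17.0.1 is the usual docker0 address, and web1 is a hypothetical container registered by the script above; adjust both to your setup):
$ ip -4 addr show docker0 | grep -oP 'inet \K[\d.]+'
172.17.0.1
$ docker run --rm --dns 172.17.0.1 alpine ping -c 1 web1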
Reference: http://docs.blowb.org/setup-host/dnsmasq.html

That should be what --link is for, at least for the hostname part.
With docker 1.10 and PR 19242, that would be:
docker network create
--net-alias=[]: Add network-scoped alias for the container
(see last section below)
That is what the "Updating the /etc/hosts file" documentation section details:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file.
For instance, launch an LDAP server:
docker run -t --name openldap -d -p 389:389 larrycai/openldap
And define an image to test that LDAP server:
FROM ubuntu
RUN apt-get -y install ldap-utils
RUN touch /root/.bash_aliases
RUN echo "alias lds='ldapsearch -H ldap://internalopenldap -LL -b
ou=Users,dc=openstack,dc=org -D cn=admin,dc=openstack,dc=org -w
password'" > /root/.bash_aliases
ENTRYPOINT bash
You can expose the 'openldap' container as 'internalopenldap' within the test image with --link:
docker run -it --rm --name ldp --link openldap:internalopenldap ldaptest
Then, if you type 'lds', that alias will work:
ldapsearch -H ldap://internalopenldap ...
That would return the people entries, meaning internalopenldap is correctly reachable from the ldaptest image.
Of course, docker 1.7 will add libnetwork, which provides a native Go implementation for connecting containers. See the blog post.
It introduced a more complete architecture, with the Container Network Model (CNM)
That will Update the Docker CLI with new “network” commands, and document how the “-net” flag is used to assign containers to networks.
docker 1.10 has a new section Network-scoped alias, now officially documented in network connect:
While links provide private name resolution that is localized within a container, the network-scoped alias provides a way for a container to be discovered by an alternate name by any other container within the scope of a particular network.
Unlike the link alias, which is defined by the consumer of a service, the network-scoped alias is defined by the container that is offering the service to the network.
Continuing with the above example, create another container in isolated_nw with a network alias.
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
--alias=[]
Add network-scoped alias for the container
You can use the --link option to link another container with a preferred alias.
You can pause, restart, and stop containers that are connected to a network. Paused containers remain connected and can be revealed by a network inspect. When the container is stopped, it does not appear on the network until you restart it.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
$ docker network connect --link container1:c1 multi-host-network container2

EDIT: It is not bleeding edge anymore: http://blog.docker.com/2016/02/docker-1-10/
Original Answer
I battled with it the whole night.
If you're not afraid of bleeding edge, the latest version of Docker engine and Docker compose both implement libnetwork.
With the right config file (it needs to be in version 2 format), you will create services that can all see each other. And, as a bonus, you can scale them with docker-compose as well (you can scale any service that doesn't bind a port on the host).
Here is an example file
version: "2"
services:
router:
build: services/router/
ports:
- "8080:8080"
auth:
build: services/auth/
todo:
build: services/todo/
data:
build: services/data/
And the reference for this new version of compose file:
https://github.com/docker/compose/blob/1.6.0-rc1/docs/networking.md
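As a rough illustration (a sketch assuming the example file above, and that the router image ships with ping), any service can reach the others by service name:
$ docker-compose exec router ping -c 1 auth
$ docker-compose exec router ping -c 1 data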

As far as I know, by using only Docker this is not possible. You need some DNS to map container IPs to hostnames.
If you want an out-of-the-box solution, one option is Kontena. It comes with overlay network technology from Weave, which is used to create virtual private LAN networks for each service, and every service can be reached at a service_name.kontena.local address.
Here is a simple example of a WordPress application's YAML file, where the WordPress service connects to the MySQL server at the wordpress-mysql.kontena.local address:
wordpress:
  image: wordpress:4.1
  stateful: true
  ports:
    - 80:80
  links:
    - mysql:wordpress-mysql
  environment:
    - WORDPRESS_DB_HOST=wordpress-mysql.kontena.local
    - WORDPRESS_DB_PASSWORD=secret
mysql:
  image: mariadb:5.5
  stateful: true
  environment:
    - MYSQL_ROOT_PASSWORD=secret

Related

Accessing a service on the PROPER IP running in docker container on a Linux host

My problem is specific to k6 and InfluxDB, but I think the root cause is more general.
I'm using the official k6 distribution and its docker-compose.yml to run Grafana and InfluxDB, which I start with the docker-compose up -d influxdb grafana command.
The Grafana dashboard is accessible at localhost:3000, but when running k6 with the recommended command $ docker run -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db - <script.js (following this guide), k6 throws the following error (on Linux and macOS alike):
level=error msg="InfluxDB: Couldn't write stats" error="Post \"http://localhost:8086/write?consistency=&db=myk6db&precision=ns&rp=\": dial tcp 127.0.0.1:8086: connect: connection refused"
I tried the command with both localhost and 127.0.0.1 for InfluxDB, and also with the IP addresses returned by docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' k6_influxdb_1. It either failed with the above error or silently didn't work, meaning k6 didn't complain but no data appeared in InfluxDB.
However, if I look up the "internal IP" address that the network interface is using (with the ifconfig command) and use that IP (192.168.1.66), everything works fine:
docker run -i loadimpact/k6 run --out influxdb=http://192.168.1.66:8086/k6db - <test.js
So my questions are:
Why does Grafana work fine with localhost:3000 and InfluxDB with localhost:8086 doesn't?
Why does only the "internal IP" work and no other IP?
I know there is a similar question, but that doesn't answer mine.
Docker containers run in an isolated network space. Docker can maintain internal networks, and there is Compose syntax to create them.
If you're making a call to a Docker container from outside Docker space but on the same host, you can usually connect to it as localhost, and the first port number listed in the Compose ports: section. If you look at the docker-compose.yml file you link to, it lists ports: [3000:3000], so port 3000 on the host forwards to port 3000 in the container; and if you're calling http://localhost:3000 from a browser on the same host, that will reach that forwarded port.
Otherwise, calls from one container to another can generally use the container's name (as in docker run --name) or the Compose service name; but, they must be on the same Docker network. That docker-compose.yml file also lists
services:
  influxdb:
    networks:
      - k6
      - grafana
so you can reach http://influxdb:8086 using the service's normal port number, provided the calling container is on one of those two networks. If the service has ports:, they're not considered for inter-container calls.
In the Docker documentation, Networking in Compose has more details on this setup.
There's one final trick that can help you run the specific command you're trying to run. docker-compose run will run a one-off command, using the setup for some container in the docker-compose.yml, except without its ports: and replacing its command:. The docker-compose.yml file you reference includes a k6 container, on the k6 network, running the loadimpact/k6 image. So you can probably run
docker-compose run k6 \
run --out influxdb=http://influxdb:8086/myk6db - \
<script.js
(And probably the K6_OUT environment variable in the docker-compose.yml can supply that --out option for you.)
You shouldn't ever need to look up the container-private IP addresses. They aren't usable in a variety of common scenarios, and in between Docker networking for calls between containers and published ports for calls from outside Docker, there are better ways to make calls to containers.

Link containers in docker (RancherOS and command line)

I run a RancherOS to run docker containers
I created a container on the GUI to run my databases (image: mysql, name: r-mysql-e4e8df05). Different containers use it.
I can link other containers to it on the GUI
This time I would like to automate the creation and starting of a container in Jenkins, but the linking is not working well.
My command:
docker run -d --name=app-that-needs-mysql --link mysql:mysql myimages.mycompany.com/appthatneedsmysql
I get error:
Error response from daemon: Could not get container for mysql
I tried different things:
1)
--link r-mysql-e4e8df05:mysql
Error:
Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
2)
Try to use --net options
Running: docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
c..........e   bridge   bridge   local
4..........c   host     host     local
c..........a   none     null     local
With --net none the run succeeds, but it does not actually work: the app cannot connect to the DB.
With --net host, the error message is: conflicting options: host type networking can't be used with links. This would result in undefined behavior
With --net bridge, the error message is: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
I also checked on rancher GUI where this mysql runs:
It gets a container IP starting with 10.X.X.X.
I also tried to add --net managed but the error: network managed not found
I believe I am misunderstanding something in this Docker linking process. Please give me some idea of how I can make this work.
(Previously it was working when I created the same container and linked it to the mysql container in the GUI.)
Hey @Tomi, you can expose the mysql container on whatever port you like from Rancher. That way you don't have to link the container; your Jenkins-spawned container then connects to it on the exposed port on the host. You could also use Jenkins to spin up the container within Rancher, using the Rancher CLI. That way you don't have to surface mysql on the host's network... a few ways to skin that cat with Rancher.
At first glance it seems that Rancher uses a managed network, which docker network ls does not show.
Reproducing the problem
I used dummy alpine containers to reproduce this:
# create some network
docker network create your_invisible_network
# run a container belonging to this network
docker container run \
--detach \
--name r-mysql-e4e8df05 \
--net your_invisible_network \
alpine tail -f /dev/null
# trying to link this container
docker container run \
--link r-mysql-e4e8df05:mysql \
alpine ping mysql
Indeed I get docker: Error response from daemon: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
Possible Solution
A workaround would be to create a user-defined bridge network and simply add your mysql container to it:
# create a network
docker network create \
--driver bridge \
a_workaround_network
# connect the mysql to this network (and alias it)
docker network connect \
--alias mysql \
a_workaround_network r-mysql-e4e8df05
# try to ping it using its alias
docker container run \
--net a_workaround_network \
alpine \
ping mysql
# yay!
PING mysql (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.135 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.084 ms
As you can see in the output, pinging the mysql container via its DNS name is possible.
Good to know:
With user-created bridge networks, DNS resolution works out of the box without having to explicitly --link containers :)
Containers can belong to several networks, which is why this works. In this case the mysql container belongs to both your_invisible_network and a_workaround_network.
I hope this helps!

Status of RabbitMQ in different docker container

I am starting a docker container with RabbitMQ for testing purposes. I would like to start a second container which runs a short command and checks that the Rabbitmq is actually running. The second container should block my build pipeline until it has determined that RabbitMQ has successfully started in the first container.
How can I specify to rabbitmqctl which hostname to use to get the status of RabbitMq? I am linking the two containers together via docker so port issues should not be a problem.
Example:
rabbitmqctl -n rabbitmq status # does not work, prints diagnostic info
Status of node rabbitmq@rabbitmq ...
Error: unable to perform an operation on node 'rabbitmq@rabbitmq'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
Target node is not running
In addition to the diagnostics info below:
See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
Consult server logs on node rabbitmq@rabbitmq
DIAGNOSTICS
attempted to contact: [rabbitmq@rabbitmq]
rabbitmq@rabbitmq:
* connected to epmd (port 4369) on rabbitmq
* epmd reports: node 'rabbitmq' not running at all
other nodes on rabbitmq: [rabbit]
* suggestion: start the node
Current node details:
* node name: rabbitmqcli52@e3ea1e73df02
* effective user's home directory: /var/lib/rabbitmq
* Erlang cookie hash: AB9AFN3zvcyAWBl6ZVVOJw==
Your second container needs to be aware of the first one:
docker run --link rabbitmq ...
The host will be available now from inside the container:
$ grep rabbitmq /etc/hosts
172.17.0.2 rabbitmq 01ad3098b423
$ ping rabbitmq
PING rabbitmq (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.073 ms
Keep in mind container linking is deprecated in favor of custom networks.
First create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1
I'm going to assume you use the rabbitmq management container, and I'll call it (hostname) rab1. I'll give the name ubuntu1 to the other container from which you want to access rab1. So first start rab1 and add ubuntu1 to its hosts file:
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host ubuntu1:172.18.0.12 --name rab1con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
And after that, start ubuntu1con with hostname ubuntu1:
docker run -d --net mynet1 --ip 172.18.0.12 --hostname ubuntu1 --add-host rab1:172.18.0.11 --name ubuntu1con ubuntu
Now when you go into ubuntu1con you are able to access rab1 by name or IP address.
Assuming the two containers are linked/networked properly, can ping each other, and the Rabbitmq client rabbitmqctl is installed in the second container, the following should work:
docker exec -it <second container's name or ID> rabbitmqctl -n rabbit@rabbitmq status
Pass --hostname rabbitmq to docker run when starting the rabbitmq container, and make sure that hostname -s inside the rabbitmq container prints rabbitmq.
If the error is still seen, add <IP address of rabbitmq container> rabbitmq in /etc/hosts of the second container and re-try.
https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks has info about how to network two containers so that they can ping each other on the same network.
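As a minimal sketch of that setup (the network name, container names, and cookie value here are placeholders; both containers share the same Erlang cookie so the CLI tool can authenticate, as in the example further up):
docker network create rabbit-net
docker run -d --net rabbit-net --name rabbitmq --hostname rabbitmq \
  -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3
docker run --rm --net rabbit-net \
  -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3 \
  rabbitmqctl -n rabbit@rabbitmq status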

accessing a docker container from another container

I created two Docker containers based on two different images: one for the db and another for the webserver. Both containers are running on my Mac (OS X).
I can access the db container from the host machine and, likewise, can access the webserver from the host machine.
However, how do I access the db from the webserver container?
The way I started db container is
docker run --name oracle-db -p 1521:1521 -p 5501:5500 oracle/database:12.1.0.2-ee
I started wls container as
docker run --name oracle-wls -p 7001:7001 wls-image:latest
I can access db on host by connecting to
sqlplus scott/welcome1@//localhost:1521/ORCLCDB
I can access wls on host as
http://localhost:7001/console
It's easy.
If you have two or more running containers, complete the following steps:
docker network create myNetwork
docker network connect myNetwork web1
docker network connect myNetwork web2
Now you can connect from the web1 container to web2, or the other way round.
Use the internal network IP addresses which you can find by running:
docker network inspect myNetwork
Note that only internal IP addresses and ports are accessible to the containers connected by the network bridge.
So for example, assuming that the web1 container was started with docker run -p 80:8888 web1 (meaning that its server is running on port 8888 internally), and inspecting myNetwork shows that web1's IP is 172.0.0.2, you can connect from web2 to web1 using curl 172.0.0.2:8888.
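To list just the names and internal IPs of the containers attached to that network, a Go template helps (a sketch using the myNetwork name from above):
docker network inspect -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' myNetwork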
The easiest way is to use --link; however, newer versions of Docker are moving away from it, and in fact that switch will be removed soon.
The link below offers a nice how-to on connecting two containers. You can skip the attach portion, since that is just a useful how-to on adding items to images.
https://web.archive.org/web/20160310072132/https://deis.com/blog/2016/connecting-docker-containers-1/
The part you are interested in is the communication between two containers. The easiest way is to refer to the DB container by name from the webserver container.
Example:
Say you named the db container db1 and the webserver container web0. The containers should both be on the same network, which means the web container can connect to the DB container by referring to its name.
So if you have a web config file for your app, then for the DB host you will use the name db1.
If you are using an older version of Docker, you should use --link.
Example:
Step 1: docker run --name db1 oracle/database:12.1.0.2-ee
Then when you start the web app, use:
Step 2: docker run --name web0 --link db1 webapp/webapp:3.0
and the web app will be linked to the DB. However, as I said, the --link switch will be removed soon.
I'd use Docker Compose instead, which will build a network for you. However, you will need to install Docker Compose for your system: https://docs.docker.com/compose/install/#prerequisites
An example setup looks like this:
The file name is base.yml:
version: "2"
services:
webserver:
image: moodlehq/moodle-php-apache:7.1
depends_on:
- db
volumes:
- "/var/www/html:/var/www/html"
- "/home/some_user/web/apache2_faildumps.conf:/etc/apache2/conf-enabled/apache2_faildumps.conf"
environment:
MOODLE_DOCKER_DBTYPE: pgsql
MOODLE_DOCKER_DBNAME: moodle
MOODLE_DOCKER_DBUSER: moodle
MOODLE_DOCKER_DBPASS: "m#0dl3ing"
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
db:
image: postgres:9
environment:
POSTGRES_USER: moodle
POSTGRES_PASSWORD: "m#0dl3ing"
POSTGRES_DB: moodle
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
This will give the network a generic name (I can't remember off the top of my head what that name is), unless you use the --name switch, e.g. docker-compose --name setup1 up base.yml.
NOTE: if you use the --name switch, you will need to use it whenever calling docker-compose, e.g. docker-compose --name setup1 down. This is so you can have more than one instance of webserver and db, and so docker-compose knows which instance you want to run commands against; it also lets you have more than one running at once. Great for CI/CD if you are running tests in parallel on the same server.
Docker Compose also has the same commands as docker, so: docker-compose --name setup1 exec webserver do_some_command
The best part is, if you want to change DBs or something like that for unit tests, you can include an additional .yml file in the up command and it will overwrite any items with similar names; I think of it as a key=>value replacement.
Example:
db.yml
version: "2"
services:
webserver:
environment:
MOODLE_DOCKER_DBTYPE: oci
MOODLE_DOCKER_DBNAME: XE
db:
image: moodlehq/moodle-db-oracle
Then call docker-compose --name setup1 up base.yml db.yml
This will overwrite the db service with a different setup. When connecting to these services from each container, you use the name set under services, in this case webserver and db.
I think this might actually be a more useful setup in your case, since you can set all the variables you need in the yml files and just run the docker-compose command when you need them started. So it's more of a start-it-and-forget-it setup.
NOTE: I did not use the ports option, since exposing ports is not needed for container-to-container communication. It is needed only if you want the host, or applications outside the host, to connect to the container. If you expose a port, that port is open to all communication the host allows; exposing web on port 80 is the same as starting a webserver on the physical host and will allow outside connections, if the host allows it.
Also, if you want to run more than one web app at once, for whatever reason, exposing port 80 will prevent you from running additional web apps on that port. So, for CI/CD it is best not to expose ports at all, and if you use docker-compose with the --name switch, all containers will be on their own network so they won't collide. You will pretty much have a container of containers.
UPDATE: After using these features further and seeing how others have done it for CI/CD programs like Jenkins, a user-defined network is also a viable solution.
Example:
docker network create test_network
The above command will create a "test_network" to which you can attach other containers. This is made easy with the --network switch.
Example:
docker run \
--detach \
--name db1 \
--network test_network \
-e MYSQL_ROOT_PASSWORD="${DBPASS}" \
-e MYSQL_DATABASE="${DBNAME}" \
-e MYSQL_USER="${DBUSER}" \
-e MYSQL_PASSWORD="${DBPASS}" \
--tmpfs /var/lib/mysql:rw \
mysql:5
Of course, if you have proxy network settings, you should still pass those into the containers using the "-e" or "--env-file" switches, so the containers can communicate with the internet. Docker says the proxy settings should be absorbed by the container in newer versions of Docker; however, I still pass them in out of habit. This is the replacement for the "--link" switch, which is going away. Once the containers are attached to the network you created, you can refer to them from other containers using the container 'name'. Per the example above that would be db1. You just have to make sure all containers are connected to the same network, and you are good to go.
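For example, a quick connectivity check (a sketch reusing the db1 name from above with a throwaway busybox container):
docker run --rm --network test_network busybox ping -c 1 db1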
For a detailed example of using a network in a CI/CD pipeline, you can refer to this link:
https://git.in.moodle.com/integration/nightlyscripts/blob/master/runner/master/run.sh
It is the script that is run in Jenkins for huge integration tests for Moodle, but the idea/example can be used anywhere. I hope this helps others.
You will have to access the db through the IP of the host machine, or, if you want to access it via localhost:1521, run the webserver like this:
docker run --net=host --name oracle-wls wls-image:latest
See here
Using docker-compose, services are exposed to each other by name by default. Docs.
You could also specify an alias, like:
version: '2.1'
services:
  mongo:
    image: mongo:3.2.11
  redis:
    image: redis:3.2.10
  api:
    image: some-image
    depends_on:
      - mongo
      - solr
    links:
      - "mongo:mongo.openconceptlab.org"
      - "solr:solr.openconceptlab.org"
      - "some-service:some-alias"
And then access the service using the specified alias as a host name, e.g. mongo.openconceptlab.org for mongo in this case.
Environment: Windows 10, Docker Desktop version 4.5.1.
Use hostname host.docker.internal to access services running on your host machine from inside a container.
See: https://docs.docker.com/desktop/windows/networking/#use-cases-and-workarounds
I run PostgreSQL in one container and my app in a separate container.
I configure the app database connection to use host.docker.internal as the hostname and it just works.
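As a sketch of what that looks like (the user, password, database name, and image tag are placeholders; this assumes Docker Desktop, where host.docker.internal resolves to the host):
docker run --rm postgres:14 \
  psql "postgresql://myuser:mypassword@host.docker.internal:5432/mydb" -c 'SELECT 1'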
Consider Example
We create two containers here: a PostgreSQL server and pgAdmin (a tool for accessing servers, like phpMyAdmin, SQL Studio, or Workbench).
Exposed ports:
PostgreSQL ---> 5436
pgAdmin ---> 5050
After adding a server in pgAdmin with the hostname localhost, it will show a connection error, because inside the pgAdmin container localhost refers to the container itself; we need the PostgreSQL container's IP to solve the problem.
docker network create con
docker network connect con app1
docker network connect con app2
This command shows the connected containers' IP addresses and other details:
docker network inspect con
Now you can see the IP addresses in the network inspect output. Choose the Postgres container's IP. You can access other exposed ports through this IP; here only Postgres port 5432 is exposed. Now set the hostname to that container IP and it will work.
You can use the default Docker network. If you don't want to set up any Docker networking yourself, you can do this:
Copy the IP address of the Docker subnet from Resources > Network in Docker Preferences on Mac:
[Docker Preferences screenshot]
As you can see from the screenshot, the IP address is
192.168.65.0
You just need to replace "localhost" in your container's config file with "192.168.65.1" (i.e. the subnet address + 1).
You can start your containers and should be set for local development/testing.
For some more details, you can see my article:
Connect Docker containers the easy way
In my case, connecting from one container to another by using the IP provided by the bridge as the host in the application's connection settings didn't work.
But it works with the name of the container.
So you can replace the IP with the container name.

How to get the IP address of the docker host from inside a docker container [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
As the title says, I need to be able to retrieve the IP address of the Docker host and the port maps from the host to the container, and to do that from inside the container.
/sbin/ip route|awk '/default/ { print $3 }'
As @MichaelNeale noticed, it makes no sense to use this method in a Dockerfile (except when we need this IP during build time only), because the IP will be hardcoded at build time.
As of version 18.03, you can use host.docker.internal as the host's IP.
Works in Docker for Mac, Docker for Windows, and perhaps other platforms as well.
This is an update from the Mac-specific docker.for.mac.localhost, available since version 17.06, and docker.for.mac.host.internal, available since version 17.12, which may also still work on that platform.
Note, as in the Mac and Windows documentation, this is for development purposes only.
For example, I have environment variables set on my host:
MONGO_SERVER=host.docker.internal
In my docker-compose.yml file, I have this:
version: '3'
services:
  api:
    build: ./api
    volumes:
      - ./api:/usr/src/app:ro
    ports:
      - "8000"
    environment:
      - MONGO_SERVER
    command: /usr/local/bin/gunicorn -c /usr/src/app/gunicorn_config.py -w 1 -b :8000 wsgi
Update: On Docker for Mac, as of version 18.03, you can use host.docker.internal as the host's IP. See aljabear's answer. For prior versions of Docker for Mac the following answer may still be useful:
On Docker for Mac the docker0 bridge does not exist, so other answers here may not work. All outgoing traffic, however, is routed through your parent host, so as long as you try to connect to an IP it recognizes as itself (and that the docker container doesn't think is itself) you should be able to connect. For example, run this from the parent machine:
ipconfig getifaddr en0
This should show you the IP of your Mac on its current network and your docker container should be able to connect to this address as well. This is of course a pain if this IP address ever changes, but you can add a custom loopback IP to your Mac that the container doesn't think is itself by doing something like this on the parent machine:
sudo ifconfig lo0 alias 192.168.46.49
You can then test the connection from within the docker container with telnet. In my case I wanted to connect to a remote xdebug server:
telnet 192.168.46.49 9000
Now when traffic comes into your Mac addressed for 192.168.46.49 (and all the traffic leaving your container does go through your Mac), your Mac will assume that IP is itself. When you are finished using this IP, you can remove the loopback alias like this:
sudo ifconfig lo0 -alias 192.168.46.49
One thing to be careful about is that the docker container won't send traffic to the parent host if it thinks the traffic's destination is itself. So check the loopback interface inside the container if you have trouble:
sudo ip addr show lo
In my case, this showed inet 127.0.0.1/8 which means I couldn't use any IPs in the 127.* range. That's why I used 192.168.* in the example above. Make sure the IP you use doesn't conflict with something on your own network.
AFAIK, in the case of Docker for Linux (standard distribution), the IP address of the host will always be 172.17.0.1 (on the main network of docker, see comments to learn more).
The easiest way to get it is via ifconfig (interface docker0) from the host:
ifconfig
From inside a container, use the following command: ip -4 route show default | cut -d" " -f3
You can run it quickly in a container with the following command line:
# 1. Run an ubuntu docker
# 2. Updates dependencies (quietly)
# 3. Install ip package (quietly)
# 4. Shows (nicely) the ip of the host
# 5. Removes the docker (thanks to `--rm` arg)
docker run -it --rm ubuntu:22.04 bash -c "apt-get update > /dev/null && apt-get install iproute2 -y > /dev/null && ip -4 route show default | cut -d' ' -f3"
For those running Docker in AWS, the instance meta-data for the host is still available from inside the container.
curl http://169.254.169.254/latest/meta-data/local-ipv4
For example:
$ docker run alpine /bin/sh -c "apk update ; apk add curl ; curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ; echo"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/community/x86_64/APKINDEX.tar.gz
v3.3.1-119-gb247c0a [http://dl-cdn.alpinelinux.org/alpine/v3.3/main]
v3.3.1-59-g48b0368 [http://dl-cdn.alpinelinux.org/alpine/v3.3/community]
OK: 5855 distinct packages available
(1/4) Installing openssl (1.0.2g-r0)
(2/4) Installing ca-certificates (20160104-r2)
(3/4) Installing libssh2 (1.6.0-r1)
(4/4) Installing curl (7.47.0-r0)
Executing busybox-1.24.1-r7.trigger
Executing ca-certificates-20160104-r2.trigger
OK: 7 MiB in 15 packages
172.31.27.238
$ ifconfig eth0 | grep -oP 'inet addr:\K\S+'
172.31.27.238
The only way is to pass the host information as environment variables when you create the container:
docker run --env <key>=<value>
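For instance, a minimal sketch that captures the host's primary IP on a Linux host and hands it to the container:
HOST_IP=$(hostname -I | awk '{print $1}')
docker run --rm --env HOST_IP="$HOST_IP" alpine sh -c 'echo "the host is $HOST_IP"'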
The --add-host option could be a cleaner solution (but without the port part; only the host can be handled with this solution). So, in your docker run command, do something like:
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [my container]
(From https://stackoverflow.com/a/26864854/127400 )
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
It's possible to retrieve it using docker network inspect
The standard best practice for most apps looking to do this automatically is: you don't. Instead you have the person running the container inject an external hostname/ip address as configuration, e.g. as an environment variable or config file. Allowing the user to inject this gives you the most portable design.
Why would this be so difficult? Because containers will, by design, isolate the application from the host environment. The network is namespaced to just that container by default, and details of the host are protected from the process running inside the container which may not be fully trusted.
There are different options depending on your specific situation:
If your container is running with host networking, then you can look at the routing table on the host directly to see the default route out. From this question the following works for me e.g.:
ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'
An example showing this with host networking in a container looks like:
docker run --rm --net host busybox /bin/sh -c \
"ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"
For current versions of Docker Desktop, they injected a DNS entry into the embedded VM:
getent hosts host.docker.internal | awk '{print $1}'
With the 20.10 release, the host.docker.internal alias can also work on Linux if you run your containers with an extra option:
docker run --add-host host.docker.internal:host-gateway ...
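A quick way to verify that alias on Linux (a sketch; requires Docker 20.10 or later):
docker run --rm --add-host host.docker.internal:host-gateway alpine ping -c 1 host.docker.internal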
If you are running in a cloud environment, you can check the metadata service from the cloud provider, e.g. the AWS one:
curl http://169.254.169.254/latest/meta-data/local-ipv4
If you want your external/internet address, you can query a remote service like:
curl ifconfig.co
Each of these have limitations and only work in specific scenarios. The most portable option is still to run your container with the IP address injected as a configuration, e.g. here's an option running the earlier ip command on the host and injecting it as an environment variable:
export HOST_IP=$(ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p')
docker run --rm -e HOST_IP busybox printenv HOST_IP
TLDR for Mac and Windows
docker run -it --rm alpine nslookup host.docker.internal
... prints the host's IP address ...
nslookup: can't resolve '(null)': Name does not resolve
Name: host.docker.internal
Address 1: 192.168.65.2
Details
On Mac and Windows, you can use the special DNS name host.docker.internal.
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
If you want the real IP address (not a bridge IP) on Windows and you have Docker 18.03 (or more recent), do the following:
Run a shell in a container from the host, where the image name is nginx (this works with the Alpine Linux distribution):
docker run -it nginx /bin/ash
Then run inside container
/ # nslookup host.docker.internal
Name: host.docker.internal
Address 1: 192.168.65.2
192.168.65.2 is the host's IP, not the bridge IP as in the accepted answer from spinus.
I am using host.docker.internal here:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
In Linux you can run:
HOST_IP=`hostname -I | awk '{print $1}'`
In macOS your host machine is not the Docker host. Docker installs its host OS in VirtualBox.
HOST_IP=`docker run busybox ping -c 1 docker.for.mac.localhost | awk 'FNR==2 {print $4}' | sed s'/.$//'`
I have Ubuntu 16.03. For me
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [image]
does NOT work (the wrong IP was generated).
My working solution was that:
docker run --add-host dockerhost:`docker network inspect --format='{{range .IPAM.Config}}{{.Gateway}}{{end}}' bridge` [image]
Docker for Mac
I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
The gateway is also reachable as gateway.docker.internal.
https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
If you enabled the Docker remote API (via -H tcp://0.0.0.0:4243 for instance) and know the host machine's hostname or IP address, this can be done with a lot of bash.
Within my container's user's bashrc:
export hostIP=$(ip r | awk '/default/{print $3}')
export containerID=$(awk -F/ '/docker/{print $NF;exit;}' /proc/self/cgroup)
export proxyPort=$(
curl -s http://$hostIP:4243/containers/$containerID/json |
node -pe 'JSON.parse(require("fs").readFileSync("/dev/stdin").toString()).NetworkSettings.Ports["DESIRED_PORT/tcp"][0].HostPort'
)
The second line grabs the container ID from your local /proc/self/cgroup file.
Third line curls out to the host machine (assuming you're using 4243 as docker's port) then uses node to parse the returned JSON for the DESIRED_PORT.
My solution:
docker run --net=host
then in docker container:
hostname -I | awk '{print $1}'
Here is another option for those running Docker in AWS. This option avoids having to use apk to add the curl package and saves a precious 7 MB of space. Use the built-in wget (part of the monolithic BusyBox binary):
wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4
Use the hostname -I command in the terminal.
Try this:
docker run --rm -i --net=host alpine ifconfig
So... if you are running your containers using a Rancher server (Rancher v1.6; not sure if 2.0 has this), containers have access to http://rancher-metadata/, which has a lot of useful information.
From inside the container the IP address can be found here:
curl http://rancher-metadata/latest/self/host/agent_ip
For more details see:
https://rancher.com/docs/rancher/v1.6/en/rancher-services/metadata-service/
This is a minimalistic implementation in Node.js for those running the host on AWS EC2 instances, using the aforementioned EC2 metadata endpoint:
const cp = require('child_process');

const ec2 = function (callback) {
  const URL = 'http://169.254.169.254/latest/meta-data/local-ipv4';
  // we make it silent and time out after 1 sec
  const args = [URL, '-s', '--max-time', '1'];
  const opts = {};
  cp.execFile('curl', args, opts, (error, stdout) => {
    if (error) return callback(new Error('ec2 ip error'));
    else return callback(null, stdout);
  })
  .on('error', (error) => callback(new Error('ec2 ip error')));
}//ec2
and used as
ec2(function(err, ip) {
  if (err) console.log(err)
  else console.log(ip);
})
If you are running a Windows container on a Service Fabric cluster, the host's IP address is available via the environment variable Fabric_NodeIPOrFQDN. Service Fabric environment variables
Here is how I do it. In this case, it adds a hosts entry into /etc/hosts within the Docker image, pointing taurus-host to my local machine's IP:
TAURUS_HOST=`ipconfig getifaddr en0`
docker run -it --rm -e MY_ENVIRONMENT='local' --add-host "taurus-host:${TAURUS_HOST}" ...
Then, from within the Docker container, a script can use the host name taurus-host to reach my local machine, which hosts the Docker container.
Maybe the container I've created is useful as well: https://github.com/qoomon/docker-host
You can then simply use the container name DNS entry to access the host system, e.g. curl http://dockerhost:9200, with no need to hassle with any IP address.
The solution I use is based on a "server" that returns the external address of the Docker host when it receives a http request.
On the "server":
1) Start jwilder/nginx-proxy
# docker run -d -p <external server port>:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
2) Start ipify container
# docker run -e VIRTUAL_HOST=<external server name/address> --detach --name ipify osixia/ipify-api:0.1.0
Now when a container sends a http request to the server, e.g.
# curl http://<external server name/address>:<external server port>
the IP address of the Docker host is returned by ipify via http header "X-Forwarded-For"
Example (ipify server has name "ipify.example.com" and runs on port 80, docker host has IP 10.20.30.40):
# docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# docker run -e VIRTUAL_HOST=ipify.example.com --detach --name ipify osixia/ipify-api:0.1.0
Inside the container you can now call:
# curl http://ipify.example.com
10.20.30.40
On Ubuntu, the hostname command can be used with the following options:
-i, --ip-address addresses for the host name
-I, --all-ip-addresses all addresses for the host
For example:
$ hostname -i
172.17.0.2
To assign to the variable, the following one-liner can be used:
IP=$(hostname -i)
Another approach is based on traceroute, and it works for me on a Linux host, e.g. in a container based on Alpine:
traceroute -n 8.8.8.8 -m 4 -w 1 | awk '$1~/\d/&&$2!~/^172\./{print$2}' | head -1
It takes a moment, but it lists the first hop whose IP does not start with 172. If there is no successful response, try increasing the limit on the tested hops via the -m argument.
With Docker Machine (https://docs.docker.com/machine/install-machine/):
a) $ docker-machine ip
b) Get the IP address of one or more machines:
$ docker-machine ip host_name
$ docker-machine ip host_name1 host_name2
