How to access a Docker container's internal IP address from the host?

I have 2 docker containers running on my Mac host - container 1 is Jenkins from Docker Hub and container 2 is SonarQube from Docker Hub. I have both containers running successfully. I can access Jenkins from my host by going to http://localhost:8080/ and I can access my SonarQube by going to http://localhost:9000/.
The Jenkins container was started like this:
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:latest
The SonarQube container was started like this:
docker run -d -p 9000:9000 sonarqube
Now I want the two containers to communicate with each other, so I need to provide each container with the other's IP address.
I got the IP address of each container by executing this:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name_or_id
This returns an IP address of 172.17.0.2 for the Jenkins container and 172.17.0.3 for the SonarQube container. But when I try and access the Jenkins container from my host by going to http://172.17.0.2:8080 I get a request timeout. The same thing happens when I try and access the SonarQube container from my host by going to http://172.17.0.3:9000
Is this normal behavior?
Shouldn't I be able to access each container from my host by their internal IP address?
And how can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?

Is this normal behavior? Shouldn't I be able to access each container from my host by their internal IP address?
What you describe is normal behavior: you can't directly reach the Docker-internal IP addresses from a macOS host. See "Per-container IP addressing is not possible" in the Docker for Mac docs.
How can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
This isn't something I normally "test" per se. Start up both processes and have them make their normal (HTTP) connections; if it works you'll see appropriate log messages, and if it doesn't work you'll see complaints. (Getting a root shell in a container to send ICMP packets from one container to another seems to be a popular option but doesn't prove much.)
Also: don't make this connection by explicit IP address. As you've noticed already the Docker-internal IP addresses aren't usable in some contexts, and they'll change whenever you restart containers. Instead, Docker provides an internal DNS service that can resolve host names when communicating between containers, but you need to explicitly set up a non-default bridge network. That setup would look like:
docker network create jenkinsnet
docker run --name sonarqube -d --net jenkinsnet \
-p 9000:9000 \
sonarqube
docker run --name jenkins -d --net jenkinsnet \
-p 8080:8080 -p 50000:50000 \
-e SONARQUBE_URL=http://sonarqube:9000 \
jenkins/jenkins:latest
So I've explicitly created a network; started both containers connected to it; and told the client container (via an environment variable) where the server container is. You don't have to publish ports with docker run -p to reach them this way; whether you do or not, use the port the server process is listening on (the second port number in the docker run -p option).
From the host, your only (portable, reliable) path to reach the container is via its published ports.
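To make both paths concrete, a quick check could look like this (a sketch, assuming the container names and published ports above, and that curl is available inside the Jenkins image):
# from the host: only the published ports are reachable
curl http://localhost:8080/   # Jenkins
curl http://localhost:9000/   # SonarQube
# between containers on jenkinsnet: use the container name and the container port
docker exec jenkins curl -s http://sonarqube:9000/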

It looks like you are using the default bridge network. Under bridge networking, the internal IPs are meant for the containers to talk to each other; you cannot access them from the host.
There are multiple options for you.
You can configure http://172.17.0.3:9000 as your SonarQube endpoint in Jenkins.
You can configure http://172.17.0.2:8080 as your Jenkins endpoint in SonarQube.
If you don't want to hard-code these IPs, both of your containers can also talk to each other through the Docker default gateway IP (172.17.0.1) and the published host port, so essentially you can configure http://172.17.0.1:9000 (or :8080) as well.
Note - the default gateway IP can change if you define a user-defined bridge network.
https://docs.docker.com/v17.09/engine/userguide/networking/#the-default-bridge-network
https://docs.docker.com/network/network-tutorial-standalone/
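As a quick sanity check of that container-to-container path (a sketch; it assumes the IPs from the question and that curl exists inside the Jenkins image):
docker exec -it <jenkins-container> curl -sI http://172.17.0.3:9000/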
If you want to spin up both containers using docker-compose, you can reach each container by its service name. Just follow Networking in Compose.
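A minimal docker-compose.yml sketch for the Jenkins/SonarQube pair might look like the following; Compose puts both services on a shared default network where each is reachable by its service name (the SONARQUBE_URL variable is illustrative, not something the stock images necessarily read):
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
  jenkins:
    image: jenkins/jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - SONARQUBE_URL=http://sonarqube:9000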

The accepted answer (https://stackoverflow.com/a/53992787/7730554) already provides valid options of which I personally usually prefer using docker compose.
But as you are running Docker on Mac, you could also use host.docker.internal in combination with the published host port. Docker will take care that host.docker.internal resolves to the corresponding IP even if your host IP changes.
See https://docs.docker.com/desktop/mac/networking/.
Note that this is for development mode only and works when you use Docker Desktop.
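For example, from inside the Jenkins container on Docker Desktop for Mac, SonarQube would be reachable through the host's published port (a sketch, assuming curl is present in the image):
docker exec -it jenkins curl http://host.docker.internal:9000/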

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a Docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to the server in another container. The other container contains a server that is listening on port 8081. I have verified both containers are on the same network, and when I log into an interactive shell on each container I tested successfully that I am able to ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the Docker network
docker network create -d bridge jakeypoo
How I start the containers
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach out to the server be http://172.17.0.2:8081/?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Remappings with docker run -p options are ignored, and you don't need a -p option to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
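A quick way to verify the name-based path from the proxy container (a hypothetical check, assuming curl exists in the idpproxy image):
docker exec -it idpproxy curl -sI http://geoserver:8080/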
Most probably this is the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. Refer to this link:
https://docs.docker.com/compose/

How to ping Docker-Container inside host network?

I've been working with Docker containers for a while now but can't figure out how to ping Docker containers which are part of my host network.
So until now I created my containers specifying the name and network flags like described in many tutorials, for example: https://www.digitalocean.com/community/questions/how-to-ping-docker-container-from-another-container-by-name
Where I am able to create a network and afterwards run my containers in these networks for example like:
docker run -d --name web1 --network testnetwork <image>
docker run -d --name web2 --network testnetwork <image>
That would enable me to ping my containers from each other with:
docker exec -it web1 bash # enter container
ping web2 #ping second container
Now I have to use a given application which only runs in the "host" network for now. To access this container from my other containers, they have to be in the same network (== "host").
But it seems like I can't ping my containers from each other anymore. I'm also unable to ping my containers from my host machine using their name.
Did I overlook something?
Any help would be appreciated!
Best regards
If you set --network host, you basically disable Docker's entire networking stack. Among other things, that disables normal inter-container communications: if you're using host networking you can't call another container by its name. Host networking is very rarely necessary (and doesn't work well on some host platforms); the first thing I'd look at is whether you can switch back to standard (bridged) networking.
If you do run a container with --network host, it's indistinguishable from other processes running on that host. That means you can't directly send ICMP packets to it, any more than you can ping(1) your ssh daemon or Web browser. You need to connect to the container using the host's IP address or DNS name, even from other containers on the same host. From inside of a Docker container, how do I connect to the localhost of the machine? discusses several ways to do this.
(I don't think you can customize the behavior of Docker or Linux when a container receives an ICMP ECHO packet; ping(1) a container doesn't seem that useful.)
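If the host-networked application can't be moved onto a bridge network, one workaround sketch is to call it through the host instead. On Docker Desktop host.docker.internal resolves to the host out of the box; on Linux you would add --add-host=host.docker.internal:host-gateway to docker run. The port 9999 below is hypothetical, standing in for whatever the host-networked app listens on (and curl is assumed to exist in the image):
docker exec -it web1 curl http://host.docker.internal:9999/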

Communicating between a Windows and a Linux docker container on the same host

This may seem trivial, but after some trial and error I come to the SO community for a little help!
I create a network, call it docker-net.
I have a Linux container, let's call it LC1, that has a published port of 6789 (so when created it had the parameter -p 6789:6789), and I make it join the docker-net network (--network docker-net).
This works fine, through my host, I can communicate with it no problem.
I switch to the Windows containers and check that LC1 is still running. It is! Amazing.
I create a container, let's call it WC1. It also publishes a port of 9000 that maps internally to 80 (-p 9000:80)
The application inside WC1 tries to connect to LC1 using the IP assigned from the network (docker inspect LC1) and I can't communicate.
There's probably a concept that I can't get my head around.
I understand that the WC1 and LC1 have different gateways and subnets. Could that be the culprit?
Any help to get me to make that work is appreciated !
EDIT:
Here are the commands I ran for the scenario above:
docker network create docker-net
docker run -d -p 6789:6789 --name LC1 --network docker-net LC1
docker inspect LC1
The IP is 172.18.0.2
switch to the Windows container
docker run -d -p 9000:80 --name WC1 WC1
The docker network connect documentation states that you can assign an IP to a container; the same should work with docker run --network <name> --ip <address>. Then use that IP to access the container.
Specify the IP address a container will use on a given network
You can specify the IP address you want to be assigned to the container’s interface.
$ docker network connect --ip 10.10.36.122 multi-host-network container2
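Translated to the scenario above, a docker run sketch could look like this; the subnet and address are illustrative, and picking the subnet yourself at network-creation time makes the static --ip predictable:
docker network create --subnet 172.25.0.0/16 docker-net
docker run -d -p 6789:6789 --name LC1 --network docker-net --ip 172.25.0.10 LC1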
I have found these:
a deleted question on Server Fault about the same issue; see the Google-cached version: Connect Windows container to Linux container running on same Docker host [closed]
an article: Run Linux and Windows Containers on Windows 10
and I think that the only way to make the 2 containers communicate is through the host and by exposing ports. For example, LC1 would use -p [your app port]:8080 and WC1 -p [your app port]:9090.
By saying [your app port] I mean that it is up to you to decide what to use (a tcp/udp listening socket, a REST API, ...).
As docker evolves maybe there will be a better solution in the near future.
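For example, on Docker Desktop (where host.docker.internal resolves to the host from both container types), WC1 could reach LC1 through the host's published port. This is only a sketch and assumes a PowerShell-capable Windows image:
docker exec -it WC1 powershell -Command "Invoke-WebRequest -UseBasicParsing http://host.docker.internal:6789/"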

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I just simply want to perform a http request from inside one docker container, to another docker container, via the host, using the host's public ip, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly answer to the question but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead create an overlay network using docker swarm. You can find the full guide here but in essence you do the following:
# create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# start container A
docker run -d --name=A --network=my-net producer:latest
# start container B
docker run -d --name=B --network=my-net consumer:latest
# magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) Bit more difficult initial setup but they introduce a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can try to curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the docker containers behave as described in the question
Docker is there to provide lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately, this was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine now also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
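A minimal sketch on Linux (Docker 20.10+), reusing CONT_B and the published port 3000 from the question; the alpine image is only an illustration:
docker run -d --name CONT_B --add-host=host.docker.internal:host-gateway alpine sleep infinity
docker exec CONT_B wget -qO- http://host.docker.internal:3000/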
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some PHP script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got host unreachable after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked - but this is a hack; as far as I know such manual updates are not encouraged. You should use extra_hosts in docker compose, which requires an explicit IP address instead of a container name to specify the address.
I tried to use the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded here. If I ever learn how to do this I promise to update this answer.
Finally I used curl's ability to specify the IP address of the server, and I pass the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work.
(here web is the name of my nginx container)
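A slightly cleaner variant of the same trick is curl's --resolve option, which pins the hostname to an IP for the request instead of overriding the Host header by hand (the 172.18.0.5 address is illustrative):
curl --resolve app1.example.com:80:172.18.0.5 http://app1.example.com/some_maintance.php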

How to access a docker container via its IP from the host

I want to be able to access a docker container via its IP, e.g. the one I can see when I do docker container inspect foo.
The reason is that I am using ZooKeeper inside a docker container to manage two other docker containers running Solr. My code (not in docker, and I don't at this stage want it to be) calls ZooKeeper to get the URLs of the Solr servers, which ZooKeeper reports as the docker containers' IPs. My code then falls over, because calling the docker container's IP from the host fails when it should be calling localhost.
So how can I allow a call to the docker container's IP from the host to be routed correctly? (I am using Docker native for Mac.)
I'm not using Docker for Mac, so I'm not sure whether the newest version of Docker for Mac is still based on docker-machine (which is based on VirtualBox) or not.
If you can confirm your Docker for Mac is based on VirtualBox, then you could probably get the inet IP of the vboxnet0 network interface via the ifconfig command. This IP should be used as your calling IP.
Besides, you should know the port number of your ZooKeeper container. Normally the published port of a container can be configured in the docker run command, for example:
docker run -p 5000:5001 -i -t ubuntu /bin/bash
where -p indicates the published port of the container.
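To see which host port actually reaches a container, docker port is handy, and ZooKeeper answers the four-letter ruok command on its client port if those commands are enabled (the container name and the 2181 mapping below are assumptions):
docker port zookeeper-container
# e.g. prints: 2181/tcp -> 0.0.0.0:2181
echo ruok | nc localhost 2181   # replies "imok" if ZooKeeper is healthy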
