docker container port accessed from another container - docker

I have a container1 running service1 on port1.
I also have a container2 running service2 on port2.
How can I access service2:port2 from service1:port1?
I should mention that the containers are linked together.
Is there a way to do this without going through the docker0 IP (where the port is visible)?
thanks

The preferred solution is to place both containers on the same network, use the built-in DNS discovery to reach the other container by name, and access it on the container port rather than the host-published port. From the CLI, that looks like:
docker network create testnet
docker run -d --net testnet --name web nginx
docker run -it --rm --net testnet busybox wget -qO - http://web
The busybox container is a sample client connecting to the nginx container by the name web, on port 80. Note that this port did not need to be published to be reachable from other containers.
Setting up multi-container environments with their own network is a common task for docker-compose, so I'd recommend looking into this tool if you find yourself doing this a lot.
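As a sketch of the docker-compose equivalent of the commands above (the service names web and client are illustrative; Compose creates a project network automatically, and services resolve each other by service name):

```yaml
# docker-compose.yml — minimal sketch of the same two-container setup
services:
  web:
    image: nginx
  client:
    image: busybox
    command: wget -qO - http://web   # "web" resolves via Compose's DNS, container port 80
```

Running `docker compose up` would bring both services up on a shared network with no ports: needed for the container-to-container request.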

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container holds a server that is listening on port 8081. I have verified both containers are on the same network, and when I log into an interactive shell on each container I tested successfully that I am able to ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach out to the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Remappings from docker run -p options don't apply to container-to-container traffic, and you don't need a -p option at all to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
Most probably this is the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use docker compose if you can in your context. Refer to this link:
https://docs.docker.com/compose/
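For reference, a hedged sketch of what a compose file might look like for the setup in this question (the image names idpproxy:latest and geoserver:1.1.0 are taken from the question; Apache inside idpproxy would then proxy to http://geoserver:8080/):

```yaml
# docker-compose.yml — sketch using the images named in the question
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"     # only needed to reach the proxy from the host
  geoserver:
    image: geoserver:1.1.0
    # no ports: entry needed; idpproxy reaches it at geoserver:8080
```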

Docker - connecting to an open port in a container

I'm new to docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running docker container. I've pulled and run the rabbitmq container from hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container uses port 5672 for clients to connect to.
After running the container (as instructed in the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (it is possible on a regular rabbitmq installation and should be on a container as well).
I've (at least I think I did) gotten the container IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
And the result I got was 172.17.0.2. When I try to access using telnet 172.17.0.2 5672 it's unsuccessful.
The address 172.17.0.2 seems strange to me because if I run ipconfig on my machine I don't see any interface using 172.17.0.x address. I do see Ethernet adapter vEthernet (DockerNAT) using the following ip: 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672) then I can telnet into this port using telnet localhost 5672 and immediately connect.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you want, because you are running the application over the default bridge network (on Windows, I guess).
From the official docker docs:
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
Later on the rabbitmq hub page there is a reference to a Management Plugin, which is run by executing the command
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
This maps host port 8080 to the management interface on container port 15672, which I think is what you may need.
You should also notice that they talk about clusters and nodes there, maybe they meant the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
Hope I could help somehow :)
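As a sketch of the container-to-container alternative (no port publishing needed), assuming a user-defined network named rabbitnet and reusing the telnet test from the question via a busybox client:

```shell
# Create a user-defined network and attach rabbitmq to it
docker network create rabbitnet
docker run -d --hostname my-rabbit --name some-rabbit --network rabbitnet rabbitmq:3

# From another container on the same network, the broker is reachable
# by container name on the container port, without any -p flags
docker run --rm -it --network rabbitnet busybox telnet some-rabbit 5672
```

This works because Docker's embedded DNS resolves `some-rabbit` for every container on the same user-defined network; the host-side telnet still requires `-p 5672:5672`.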

How to use name of container to resolve to its IP in nginx's upstream server?

I'm running 2 Docker containers on a host. In my first container, I started it this way:
docker run -d --name site-a -p 127.0.0.1:3000:80 nginx
This maps the container's port 80 to the host machine's port 3000. It also has the name site-a, which I want to use in another container.
Then in my other container, which is the main reverse proxy container, I configured nginx's configuration to have an upstream pointing to the first container (site-a):
upstream my-site-a {
server site-a:80;
}
I then run the reverse proxy container this way:
docker run -d --name reverse-proxy -p 80:80 nginx
So that my reverse-proxy container will serve from site-a container.
However, there are 2 problems here:
The upstream in my nginx configuration doesn't work when I use server site-a:80;. How can I get nginx to resolve the alias "site-a" to the IP of the site-a container?
When starting the site-a container, I followed an answer here and bound it to the host machine's port 3000 with this: -p 127.0.0.1:3000:80. Is this necessary?
In order for your containers to be mutually reachable via their name, you need to add them to the same network.
First create a network with this command:
docker network create my-network
Then, when running your containers, add the --network flag like this:
docker run -d --name site-a -p 127.0.0.1:3000:80 --network my-network nginx
Of course you need to do the same thing to both containers.
As per your second question, there's no need to map the port on your host with the -p flag as long as you don't want to reach site-a's container directly from your host.
Of course you still need to use the -p flag on the reverse proxy container in order to make it reachable.
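If the containers are already running, they can also be attached to the network in place rather than recreated (a sketch assuming the network and container names used above):

```shell
docker network create my-network            # skip if it already exists
docker network connect my-network site-a
docker network connect my-network reverse-proxy
# nginx inside reverse-proxy can now resolve "site-a" by name
```

Note that nginx resolves upstream names when its configuration is loaded, so the reverse-proxy container may need a reload after being connected.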
If you combine multiple containers into a more complex infrastructure, it's time to move to more complex technologies. Basically you have the choice between docker-compose and docker stack. Kubernetes could also be an option, but it's more complicated.
These technologies provide solutions for container discovery and internal name resolution.
I suggest using docker stack: unlike compose, it has no additional requirements besides docker.

With docker --net="host" i can access the ports

When I run a container for a web application that listens on port 8090
with
docker run -p 8090:8090 -h=%ComputerName% mycontainer
then I can access the services on http://localhost:8090
If I start the container with
docker run --net="host" -h=%ComputerName% mycontainer
then I can't access the services on http://localhost:8090
Why??
Isn't the container supposed to share the network of the host with --net="host"? Then why can't I access http://localhost:8090 when using --net="host"?
This is not what --net=host does.
In your first example, you are mapping the ports of the container to your host, which allows you to reach the services of the container.
In the second example, you removed the -p option, so no ports are mapped.
What --net=host does is allow your container to see ports on the host machine as if they were local to the container. Say you had a database running on port 5000 of your host machine, outside any Docker container: you would be able to access it from the container via localhost:5000. (Note: there are some caveats to this; for example, Docker for Mac would actually need docker.for.mac.localhost.)

When to use --hostname in docker?

Is --hostname like a domain name system entry in the docker container environment, something that can replace --ip when referring to other containers?
The --hostname flag only changes the hostname inside your container. This may be needed if your application expects a specific value for the hostname. It does not change DNS outside of docker, nor does it change the networking isolation, so it will not allow others to connect to the container with that name.
You can use the container name or the container's (short, 12 character) id to connect from container to container with docker's embedded dns as long as you have both containers on the same network and that network is not the default bridge.
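A quick sketch of that embedded-DNS lookup (the names appnet and db are illustrative):

```shell
docker network create appnet
docker run -d --network appnet --name db redis

# Docker's embedded DNS resolves the container name for peers
# on the same user-defined network
docker run --rm --network appnet busybox nslookup db
```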
--hostname is a parameter which can be given along with the docker run command and which sets the specified name as the container's hostname, whereas --ip is a parameter to set a specific IP address (IPv4) for that particular container.
docker run --hostname test --ip 10.1.2.3 ubuntu:14.04
This command creates a docker container from the ubuntu:14.04 base image with hostname test and container IP address 10.1.2.3 (note that a user-assigned --ip only takes effect on a user-defined network).
If you need to change the hostname in a way that other containers from the same network will see it, just use --net-alias=${MY_NEW_DNS_NAME}
For example:
docker run -d --net-alias=${MY_NEW_DNS_NAME} --net=my-test-env --name=my-docker-name-test <docker-container>
Please see: Difference between --link and --alias in overlay docker network?
This is not a direct answer, I just want to summarise something that is not immediately clear.
To get containers to talk to each other,
Create a non default network:
docker network create MyNetwork
Connect containers to this network at run time:
docker run --network MyNetwork --name Container1 Image1
docker run --network MyNetwork --name Container2 Image2
Now, if Container1 is for example a web server running on port 80, processes inside Container2 will be able to resolve it using the hostname Container1 and port 80.
Further if Container1 is set up like this:
docker run --network MyNetwork --name Container1 -p 8080:80 Image1
Then
Container2 can access Container1:80
the Host can access 127.0.0.1:8080
This is summarised from here https://jaaq.medium.com/making-docker-containers-talk-to-each-other-by-hostname-using-container-networking-94835a6f6a5b
You can also confirm containers are connected and check their internal IP addresses using this:
docker network inspect MyNetwork
