Docker container talks to another Docker container on the same local host? [duplicate]

This question already has answers here:
accessing a docker container from another container
(8 answers)
Closed 1 year ago.
I am confused right now. I have tried many things I found on the web, but none of them solved it. I have Windows 10 and Docker Desktop installed, using WSL 2 to host Linux containers. I use the following command to start the Jenkins website:
docker run --name jenkins-master-c -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
This works fine. I can access the website using http://localhost:8080/
The problem is, when I try to curl http://localhost:8080 from another Alpine Docker container, I don't get the web page back; it says connection refused. I also tried my own tiny web service on my Windows machine, without Docker, with the same result: I can access the web service from a web browser on Windows 10, but from inside a container I cannot reach it on localhost.
I know I am missing something really basic, because the web doesn't seem to cover this topic. I am just on my own computer, without anything fancy, so I just want to use localhost. The web says the default is the bridge network, on which containers should be able to talk to each other easily, but it is not working for me. What am I missing? Maybe I shouldn't type localhost? But what else should I use?
Thank you.
Edit: just to explain what I did to solve my problem. Creating a network (--network my-network-name) was what I originally did, and it only failed because the way I curled the web page was wrong. I used --name jenkins-master-c merely to make my container easy to locate in docker ps. But, as suspected in my question, localhost was the culprit, which the solution confirmed. Instead of localhost, I do curl http://jenkins-master-c:8080, which works. Thanks.

localhost is always a question of perspective: it refers to the current machine. This means that if you call localhost from inside a container, the container talks to itself, not to the machine you see as localhost. If you want to call a service running on that machine, you have to use its real IP address.

You can think of Docker containers as individual virtual machines: they have their own localhost, and they are isolated from your host PC and from other containers.
Now, if you want two or more Docker containers to communicate, you can use a bridge network. From Docker's perspective, a bridge network allows containers connected to the same bridge network to communicate, while providing isolation from containers that are not connected to it. See the Docker docs on bridge networks.
On the other hand, if you want to communicate with your Docker container from your host, you need to port-forward, i.e. open/expose a port to connect to the container (which you did with -p 8080:8080).
Another way to bring your containers under a shared localhost is Kubernetes: in Kubernetes you can run one or more containers in a pod, and containers in a pod share the same network namespace. See the Kubernetes pod docs.
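Incidentally, Docker itself has a similar namespace-sharing mode (plain Docker, not Kubernetes, but the same idea): --network container:<name> starts a new container inside an existing container's network namespace, so the two share localhost. A minimal sketch against the Jenkins container from the question (the alpine image and the curl install are just for illustration):
docker run --rm -it --network container:jenkins-master-c alpine sh
# inside this shell, localhost is shared with jenkins-master-c
apk add --no-cache curl
curl http://localhost:8080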

These two containers are probably not on the same network, so they cannot see or talk to each other.
First of all, create a network with docker network create SOMENAME, and then run the containers again (both of them):
docker run --name jenkins-master-c --network SOMENAME -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
Now the containers should be able to talk to each other.
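A quick way to verify, assuming the network is named SOMENAME as above (curlimages/curl is just a convenient throwaway curl client image; Docker's embedded DNS on a user-defined network resolves container names):
docker run --rm --network SOMENAME curlimages/curl http://jenkins-master-c:8080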

Related

Cannot make http requests from docker container to outside

I have set up a new Ubuntu 22.04.1 server with Docker version 20.10.21, using Docker images built from the exact same Dockerfiles that work without any problems on another Ubuntu server (20.04, though).
In my new Docker installation I have problems reaching into the Docker containers, and I cannot reach the outside world from within the containers either.
For example, issuing this from a bash within the docker container:
# wget google.com
Resolving google.com (google.com)... 216.58.212.142, 2a00:1450:4001:82f::200e
Connecting to google.com (google.com)|216.58.212.142|:80...
That's all, it just hangs there forever. Doing the same in the other installation works just fine. So I suspect there is some significant difference between those installations, but I can't find out what it is.
I'm also running a reverse-proxy Docker container within the same Docker network, and it cannot reach the app container in the broken environment. However, I feel that if I knew what blocks my outgoing requests, that would explain the other issues as well.
How can I find out what causes the docker container requests to be blocked?
This is my docker network setup:
Create the network
docker network create docker.mynet --driver bridge
Connect container #1
docker network connect docker.mynet container1
Run and connect container 2
docker run --name container2 -d -p 8485:8080 \
--network docker.mynet \
$IMAGE:$VERSION
Now
I can always wget outside from container1
I can wget outside from container2 on the old server, but not on the new one
It turned out that, while the default bridge worked as expected, any user-defined network (although defined with the bridge driver) did not work at all:
requests from container to outside world not possible
requests into container not possible
requests between containers in the same network not possible
Because container1 was created first and only then connected to the user-defined network, it was still connected to the default bridge too, and thus was able to connect to the outside world, while container2 wasn't.
The solution is actually in the Docker docs under Enable forwarding from Docker containers to the outside world:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I don't think I had to make these changes on my Ubuntu 20.04 server, but I'm not 100% sure. However, after applying these changes, the connection issues were resolved.
I'm still looking into how to make these configuration changes permanent (so they survive a reboot). Once I know, I'll update this answer.
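For reference, the usual way to persist such settings on Ubuntu is a sysctl drop-in file plus saved iptables rules; a sketch, not verified on this particular setup:
echo "net.ipv4.conf.all.forwarding=1" | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl --system                       # reload all sysctl drop-ins now
sudo apt-get install iptables-persistent   # offers to save the current rules so they survive a reboot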

How to assign public IPs to docker containers?

I want to run 5 Docker containers with the same app; it uses port 50505, and I want to expose this port to the internet.
My server runs Ubuntu 18.04
Assigned IPs
204.12.240.210-214
So it looks like I got 4 IP addresses.
I can ssh to any of them and it works.
Now, I am a bit fresh with Docker, still learning it.
Could anyone give the commands to create a network with these IPs and then start instances with those IPs?
I believe it's possible.
Normally I start the app like this:
docker run -d -P --net=host -v /mnt/chaindata:/go-matrix/chaindata --name matrix --restart always disarmm/matrix
But when you start a 2nd instance, it will crash because the ports are already used by the first one.
So I could fix that with the IPs.
This doesn't really seem to be a Docker question. If I were you, I would set up Nginx as a proxy/load balancer and pass traffic to the backend service, be it a service running in Docker or elsewhere.
You can also run Nginx itself in Docker.
The next question is then how you assign the IP and find the port Nginx is listening on.
To be honest, I don't know how to handle it with --net=host.
But one possible solution is: don't use the host network; use the default bridge (docker0) network instead. You need to know which ports your application uses and expose them yourself. Then, when exposing the ports, you can specify the IP.
Something like the following:
docker run -d -p 204.12.240.210:80:80 -p 204.12.240.210:8080:8080 disarmm/matrix
docker run -d -p 204.12.240.211:80:80 -p 204.12.240.211:8080:8080 disarmm/matrix
Then you can see that on the host you have two sets of 80/8080 ports open, bound to different IPs on the same machine.
As mentioned above, the limitation is: you have to know exactly which ports your app uses.
See this for format.
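To double-check which IP:port pairs actually got bound on the host, the listening sockets and per-container mappings can be inspected (the container name below is a placeholder):
sudo ss -tlnp | grep -E ':(80|8080)\b'   # shows which addresses each port is bound to
docker port <container-name>             # shows the published mappings for one container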

Communicating between a windows and linux docker container on the same host

This may seem trivial, but after some trial and error I come to the SO community for a little help!
I create a network, call it docker-net.
I have a Linux container, let's call it LC1, that has a published port of 6789 (so when it was created it had the parameter -p 6789:6789), and I make it join the docker-net network (--network docker-net).
This works fine, through my host, I can communicate with it no problem.
I switch to Windows containers and check that LC1 is still running. It is! Amazing.
I create a container, let's call it WC1. It also publishes a port, 9000, which maps internally to 80 (-p 9000:80).
The application inside WC1 tries to connect to LC1 using the IP assigned by the network (docker inspect LC1), and they can't communicate.
There's probably a concept here that I can't get my head around.
I understand that the WC1 and LC1 have different gateways and subnets. Could that be the culprit?
Any help to get me to make that work is appreciated !
EDIT:
Here are the commands I ran for the scenario above:
docker network create docker-net
docker run -d -p 6789:6789 --name LC1 --network docker-net LC1
docker inspect LC1
The IP is 172.18.0.2
switch to the windows container
docker run -d -p 9000:80 --name WC1 WC1
The docker network connect documentation states that you can assign an IP to a container; the same should work with docker run --network <name> --ip <address>. Then use that IP to access the container.
Specify the IP address a container will use on a given network
You can specify the IP address you want to be assigned to the
container’s interface.
$ docker network connect --ip 10.10.36.122 multi-host-network container2
I have found these:
a deleted question on Server Fault about the same issue; see the Google-cached version: Connect Windows container to Linux container running on same Docker host [closed]
an article: Run Linux and Windows Containers on Windows 10
and I think that the only way to make the two containers communicate is through the host, by exposing ports. For example, LC1 would use -p [your app port]:8080 and WC1 -p [your app port]:9090.
By saying [your app port] I mean that it is up to you to decide what to use (a tcp/udp listening socket, a REST API, ...).
As docker evolves maybe there will be a better solution in the near future.
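A sketch of that host-mediated approach, reusing the published ports from the question (the host IP below is a placeholder; any HTTP client available inside WC1 will do):
docker run -d -p 6789:6789 --name LC1 LC1
docker run -d -p 9000:80 --name WC1 WC1
# from inside WC1, target the Docker host's address rather than LC1's 172.18.0.2:
# curl http://<docker-host-ip>:6789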

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, it just times out. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker swarm. You can find the full guide here, but in essence you do the following:
//create network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
//Start Container A
docker run -d --name=A --network=my-net producer:latest
//Start Container B
docker run -d --name=B --network=my-net consumer:latest
//Magic has occurred
docker exec -it B /bin/bash
> curl A:3000 //MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start scaling etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production... forget about links & overlay networks altogether... use Kubernetes :-) It is a bit more difficult to set up initially, but it introduces a bunch of concepts & tools that make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP is needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can try to curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question.
Docker is there to provide a lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
And here is how to fix the problem without Docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting with version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
However, this won't work out of the box on Linux, because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
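A quick sanity check on Linux, assuming Docker 20.10+ and a service listening on port 3000 on the host (curlimages/curl is just a convenient curl client image):
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl http://host.docker.internal:3000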
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use Docker Compose. I wanted to use curl from cron to web from time to time, to execute some PHP script in one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got "host unreachable" after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked. But this is a hack, and as far as I know such manual updates are discouraged; you should use extra_hosts in Docker Compose, which likewise requires an explicit IP address rather than a container name.
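To get the web container's current IP for such an entry in the first place, docker inspect can print it (the Go template below is standard docker inspect syntax):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web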
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded here. If I ever learn how to do it, I promise to update this answer.
Finally, I used curl's ability to address the server directly by its container name while passing the domain name in a separate Host header:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it works.
(here web is the name of my nginx container)

Docker for Windows container connect to host in NAT mode (modern docker not boot2docker)

Please note that this is NOT a dupe of previous questions that at first seem similar, at least none that I could find.
I'm running Docker for Windows, a recent version, trying to get a container to be able to connect to localhost for testing purposes. I have good reasons to use VirtualBox in NAT mode, such as avoiding exposing the database to the larger network.
My setup process needs to be entirely automatic, which is another point in favor of leaving VBox in NAT mode, and it is intermediate in complexity.
My build process includes lines like this:
docker run --volumes-from mystore -p 5671:5671 -p 5672:5672 -p 15672:15672 -p 15671:15671 -p 15670:15670 --name myrmq -d myrmqimage
docker run -p 80:80 -p 443:443 --link mypg:mypg --link myrmq:myrmq --name myserver -d myserverimage
In order to allow the outside world to connect, it also does things like this:
vboxmanage controlvm "myvm" natpf1 "tcp-port15672,tcp,,15672,,15672" (not exact; this comes from a Python script with lots of quotes)
Thanks to the NAT routing, this allows only specific ports on the computer to be accessed from outside, on whatever container I want, even though more ports are exposed (such as the Postgres ports in another container). So far, so good: only the ports I want are exposed to the outside world, and it works well!
Also, there's a service running on the machine that makes connections with the docker containers, so it's handy having all the -p ports exposed locally without having, say, the above-mentioned database exposed to the network.
Only now I need to allow software running like this in Docker to connect to a Django server running on localhost. Is there a way to do this without setting VBox's networking to bridged mode and throwing the whole config out the window? If I have to go bridged, can it be done reliably in an automated fashion? Alternatively, as a workaround purely for testing, could I have it connect to another computer on my local network (i.e. my MacBook) running said server?
Thanks so much for help & guidance.
For anyone who finds this in the future:
since it was purely for testing, I added a bridged network adapter as a third interface, temporarily. It appears to be impossible with a pure NAT network.
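For the record, adding such a bridged adapter can also be scripted with VBoxManage while the VM is powered off (the host interface name here is machine-specific and purely illustrative):
vboxmanage controlvm "myvm" poweroff
vboxmanage modifyvm "myvm" --nic3 bridged --bridgeadapter3 "Ethernet"
vboxmanage startvm "myvm" --type headless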
