Docker inter-container communication on CentOS 7

I am setting up a microservices architecture using Docker for each service. I am also using the Kong API gateway, running in its own Docker container. The Docker host is CentOS 7 running in a VM with the IP 192.168.222.76.
On the host command line, I can access the starter service on port 7000 fine. However, from within the kong container, I can ping the IP address but cannot access the service; as you can see from the output below, curl reports "Host is unreachable".
I am starting Docker with --icc=true and --iptables=true, and I have made several of the suggested changes to firewalld (rich rules, etc.), but I still cannot reach the other container from within the kong container.
I am starting the kong container on a named network, "kong-net"; the kong database instance is on the same Docker network and those two seem to be able to communicate. I have added my starter service container to the same network on startup and still no joy. The kong container CAN access the outside world, just not other Docker containers on the same host.
Output is below:
[root@docker ~]# curl 192.168.222.76:7000/starter/hello
Hello Anonymous Person!!
[root@docker ~]# docker exec -it kong /bin/ash
# curl 192.168.222.76:7000/starter/hello
curl: (7) Failed to connect to 192.168.222.76 port 7000: Host is unreachable
# curl www.google.com
HTML returned properly...
Any help on this appreciated!

You have to reach the other container by its container name.
Try this:
docker exec -t kong curl servicename:7000/starter/hello
The Kong container and the service containers must share the same network.
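A rough sketch of what that looks like, assuming the starter service container is named starter (an illustrative name), listens on port 7000 inside the container, and kong-net is the user-defined network from the question:
# attach the service container to the same user-defined network as kong (skip if it is already attached)
docker network connect kong-net starter
# container names resolve via Docker's embedded DNS on user-defined networks
docker exec -t kong curl http://starter:7000/starter/hello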

I was able to get ICC working by disabling firewalld altogether (stop, disable, mask with systemctl) and opening up everything in iptables. Now it's just a matter of setting up rules to block inbound access except on the API gateway and SSH.
Thanks!

I have come across this problem before. If disabling the firewall fixes the problem, DO NOT leave the firewall disabled; this is a very big security concern. The proper way to go about it is to first reactivate the firewall and then add a network masquerade.
firewall-cmd --zone=public --add-masquerade --permanent
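Since the rule is added with --permanent, it typically only takes effect after a reload; something like this should apply and verify it:
firewall-cmd --reload
# confirm masquerading is now active in the public zone
firewall-cmd --zone=public --query-masquerade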


Cannot make http requests from docker container to outside

I have set up a new Ubuntu 22.04.1 server with Docker version 20.10.21, using Docker images built from the exact same Dockerfiles that work without any problems on another Ubuntu server (20.04, though).
In my new Docker installation, I have problems reaching into the Docker containers, and I cannot reach the outside world from within the containers either.
For example, issuing this from a bash within the docker container:
# wget google.com
Resolving google.com (google.com)... 216.58.212.142, 2a00:1450:4001:82f::200e
Connecting to google.com (google.com)|216.58.212.142|:80...
That's all, it just hangs there forever. Doing the same in the other installation works just fine. So I suspect there is some significant difference between those installations, but I can't find out what it is.
I'm also running a reverse proxy docker container within the same docker network, and it cannot reach the app container in the broken environment. However, I feel that if I knew what blocks my outgoing requests, that would explain the other issues as well.
How can I find out what causes the docker container requests to be blocked?
This is my docker network setup:
Create the network
docker network create docker.mynet --driver bridge
Connect container #1
docker network connect docker.mynet container1
Run and connect container 2
docker run --name container2 -d -p 8485:8080 \
--network docker.mynet \
$IMAGE:$VERSION
Now
I can always wget outside from container1
I can wget outside from container2 on the old server, but not on the new one
It turned out that, while the default bridge worked as expected, any user-defined network (although defined with the bridge driver) did not work at all:
requests from container to outside world not possible
requests into container not possible
requests between containers in the same network not possible
Because container1 was created first and then connected to the user-defined network, it was still connected to the default bridge too, and thus was able to connect to the outside while container2 wasn't.
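A quick way to check which networks each container is actually attached to (a diagnostic sketch, not part of the original troubleshooting):
# prints every network the container is connected to, e.g. both "bridge" and "docker.mynet" for container1
docker inspect container1 --format '{{json .NetworkSettings.Networks}}'
docker inspect container2 --format '{{json .NetworkSettings.Networks}}'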
The solution is actually in the Docker docs under Enable forwarding from Docker containers to the outside world:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I don't think I had to make these changes on my Ubuntu 20.04 server, but I'm not 100% sure. However, after applying these changes, the connection issues were resolved.
I'm still looking into how to make these configuration changes permanent (so they survive a reboot). Once I know, I'll update this answer.
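One way to persist both settings, assuming a systemd-based Ubuntu with the iptables-persistent package available (a sketch, not something verified on the servers described above):
# keep IP forwarding enabled across reboots
echo 'net.ipv4.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-docker-forwarding.conf
sudo sysctl --system
# save the current iptables rules (including the FORWARD policy) so they are restored at boot
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save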

How to set up a docker squid container and host to route requests from the container host to the internet?

Right now I have a docker container running squid listening on a range of ports. I ran it using the following command so that the port range is published to the host as well.
docker run -ti --name squid -v /var/log/squid:/var/log/squid -p 3133-3168:3133-3168 my_image/squid_test4
I am trying to set this up so clients can hit the container host on a port within the port range described above and still get out to the internet.
From the container host I can run curl -x http://172.17.x.x:3134 http://ipinfo.io and get out, no problem. Whenever I try to use the host's IP (i.e. curl -x http://host_ip:3134 http://ipinfo.io) from a client, it hangs and times out. I can see the request hit the host via tcpdump, but nothing is returned.
When I run netstat -tlpn on the host, I can see entries showing that Docker is listening on the port range I specified. When I am on a client and do something like telnet host_ip 3134, it connects and tells me something is listening there.
Do I need to set up iptables PREROUTING NAT on the host to forward traffic to those ports, or could I use something like HAProxy on the host and set the squid container up as a backend? Kind of stumped here...
Ugh, a simple check showed iptables/UFW was not running. However, when I ran iptables -L or iptables-save, they still showed the current in-memory rules. Restarted UFW and all is good now... now I feel dumb.
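If UFW ends up filtering the published range after being re-enabled, explicitly allowing the squid ports is one option (a sketch based on the 3133-3168 range from the question):
sudo ufw allow 3133:3168/tcp
sudo ufw reload
# confirm the rule is listed
sudo ufw status verbose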

Docker container can't connect to host application using IP whitelist

I have an application running on my host with the following behavior: it listens on port 4001 (configurable) and only accepts connections from a whitelist of trusted IP addresses (127.0.0.1 only by default; other addresses can be added, but one by one, not using a mask).
(It's the interactive brokers gateway application which is run in java but I don't think that's important)
I have another application running inside a docker container which needs to connect to the host application.
(It's a python application accessing the IB API, but again I don't think that matters)
Ultimately I will have multiple containers on multiple machines trying to do the same thing, but I can't even get it working with one running on the same machine.
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(No response from IB Gateway on host machine)
IDEALLY I'd be able to set up the docker containers / bridge so that all the docker containers appear as if they are on a specific IP address, add it to the whitelist, and voila.
What I've tried:
1) using -p and EXPOSE
sudo docker run -t -p 4001:4001 myimage
Bind for 0.0.0.0:4001 failed: port is already allocated.
(No response from gateway)
This either doesn't work or leads to a "port already in use" conflict. I gather that these settings are designed for the opposite problem (the host can't see a particular port on the container).
2) setting --net=host
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
This should work since the docker container should now look like it's 127.0.0.1... but it doesn't.
3) setting --net=host and adding the local host's real IP address 192.168.0.12 (as suggested in comments) to the whitelist
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
4) adding 172.17.0.1, ...2, ...3 to the whitelist on the host application (the bridge network is 172.17.0.0 and subsequent containers get allocated in this range)
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(no response from host)
This is horribly hacky but doesn't work either.
PS Note this is different from the problem of trying to run the host application IB Gateway inside a container - I am not doing that.
I don't want to run the host application inside another container, although in some ways that might be a neater solution.
Running the IB gateway is tricky on a number of different levels, including connecting to it, and especially if you want to automate the process.
We took a close look at connecting to it from other IPs and finally gave up on it; a gateway bug as far as we could tell. There is a setting to whitelist IPs that can connect to the gateway, but it does not work and cannot be scripted.
In our build process we create a docker base image, then add the gateway and any/all of the gateway's clients to that image. Then we run that final image.
(Posted on behalf of the OP).
The fix was setting --net=host and changing the port from 4001 so it doesn't conflict with a live version of the gateway on the same network. The only IP address required in the whitelist is 127.0.0.1.
sudo docker run -t --net=host myimage
Use socat to forward the gateway's port to a new port that can listen on any address. For example, set the gateway to listen on port 4002 (localhost only) and use the following command in the container
socat tcp-listen:4001,reuseaddr,fork tcp:localhost:4002
to forward the port to 4001.
Then you can connect to the gateway from outside of the container using port 4001 when running the container with parameter -p 4001:4001.
In case this one is useful for another person: I tried a couple of the suggestions that were put here to connect from my Python app running in a Docker container to a TWS IB Gateway instance running on another server, and none of them were 100% working. The socat option was connecting, but then the connection was being dropped due to an issue with the socat buffer that we couldn't fix.
The solution we found was to create an ssh tunnel from the machine that is running the Docker container to the machine that is running the TWS IBGateway.
ssh -i ib-gateway.pem <ib-gateway-server-user>@<ib-gateway-server-ip> -f -N -L 4002:127.0.0.1:4001
After you establish this ssh tunnel, you can test it by running
telnet 127.0.0.1 4002
If this command runs successfully, your ssh tunnel is ready. The next step is to configure your Python application to connect to 127.0.0.1 on port 4002 and to start your Docker container with --net=host so it can access the ssh tunnel running on the Docker host machine.
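If the tunnel needs to survive network drops, autossh can wrap the same forwarding (a sketch, assuming autossh is installed on the Docker host; the placeholders are the same as above):
# -M 0 disables autossh's monitor port; ServerAliveInterval lets ssh itself detect dead connections
autossh -M 0 -f -N -i ib-gateway.pem -o ServerAliveInterval=30 -L 4002:127.0.0.1:4001 <ib-gateway-server-user>@<ib-gateway-server-ip>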

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one docker container to another docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
//create network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
//Start Container A
docker run -d --name=A --network=my-net producer:latest
//Start Container B
docker run -d --name=B --network=my-net consumer:latest
//Magic has occurred
docker exec -it B /bin/bash
> curl A:3000 //MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling, etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
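Legacy links are deprecated, though; on a single host, a plain user-defined bridge network gives you the same name-based resolution without swarm or links. A sketch using the same illustrative image names (my-bridge-net is an arbitrary name):
docker network create my-bridge-net
docker run -d --name=A --network=my-bridge-net producer:latest
docker run -d --name=B --network=my-bridge-net consumer:latest
# container names resolve via Docker's embedded DNS on user-defined networks
docker exec -it B curl A:3000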
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) Bit more difficult initial setup but they introduce a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the docker containers behave as described in the question
Docker is there to provide a lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
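Putting that together for the setup in the question, a sketch might look like this (CONT_B and port 3000 are from the question; the image name is illustrative, and the host-gateway value requires Docker 20.10+):
# give the container a host.docker.internal entry pointing at the host's gateway IP
docker run -d --name CONT_B --add-host=host.docker.internal:host-gateway myimage
# reach CONT_A's published port through the host rather than the public IP
docker exec -it CONT_B curl http://host.docker.internal:3000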
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some PHP script in one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got "host unreachable" after some time.
First solution was to update /etc/hosts in cron container, and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked, but this is a hack. Also, as far as I know, such manual updates are not encouraged; you should use extra_hosts in docker compose, which requires an explicit IP address instead of a container name.
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never got it to work. If I ever learn how to do this, I promise to update this answer.
Finally, I used curl's ability to target the server directly, passing the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
not very beautiful but does work.
(here web is the name of my nginx container)
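For reference, the user-defined network approach mentioned above would look roughly like this (a sketch, assuming the containers are named web and cron as in the answer; app-net is an arbitrary name):
docker network create app-net
docker network connect app-net web
docker network connect app-net cron
# "web" now resolves inside the network; the Host header selects the right nginx server block
docker exec -it cron curl -H 'Host: app1.example.com' http://web/some_maintance.php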

docker-machine: created and ran an nginx container, but it is not exposing port 80

From this article it was really easy to "docker-machine create" a VM host on Google Compute Engine. My problem is that when I use the IP (from docker-machine ip NameOfVM) to access a running nginx container, it does not respond.
I can see that nginx is running when I SSH into the VM and run "curl localhost".
I can ping the VM, but curl and the browser get no response.
Do you know what I am missing?
ifconfig shows a docker0 adapter and an eth0.
Do I have to configure Docker any further? As I understand it, Docker does not run any boot2docker VMs on a Linux machine?
Thanks
You may want to read the firewall section of the Google Compute Engine docs to configure your firewall:
You can also create a firewall rule that allows HTTP traffic from anywhere to all instances on the example-network network. Execute the following:
$ gcloud compute firewall-rules create web --network example-network --allow tcp:80
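To confirm the rule took effect, you can list or describe it afterwards (same rule name as above):
$ gcloud compute firewall-rules list
$ gcloud compute firewall-rules describe web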
