Docker container can't connect to host IP - docker

I have deployed a Netflix Hystrix dashboard with Turbine in a Docker container. I can access http://ip:8081/hystrix, but when I try to monitor the Turbine stream it freezes and returns no information. I tested with curl inside the container: both curl http://localhost:8081/turbine.stream and curl http://containername:8081/turbine.stream work perfectly, but when I use the host IP, as in curl http://hostip:8081/turbine.stream, curl fails with Failed to connect to hostip port 8081: No route to host. I can't find a solution. Can someone help me with this issue?
Thanks in advance.

In order to access the container through the host IP you need to ensure the following:
1. The port mapping allows connections through the host/public IP itself, not only localhost. You can check this by executing docker ps on the Docker host and looking at the PORTS column; the default should look like 0.0.0.0:8081->8081/tcp, which means the port accepts connections from any interface, whether public, private, or localhost.
2. The firewall is not blocking connections on port 8081. By default the host's firewall should be managed by the Docker daemon itself, so port 8081 will be allowed, but you might have a different case: either Docker is not managing the host firewall, or there is an extra layer that prevents the connection.
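A quick checklist sketch for both points (port 8081 is taken from this question; firewalld is an assumption, adjust to whatever firewall your host runs):

# 1. Verify the port is published on all interfaces (look for 0.0.0.0:8081 in PORTS):
docker ps --format '{{.Names}} -> {{.Ports}}'

# 2. If the host runs firewalld and Docker is not managing it, allow the port manually:
sudo firewall-cmd --permanent --add-port=8081/tcp
sudo firewall-cmd --reload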

Related

Docker not exposing the port

There is a Python application which I'm trying to run inside a Docker container.
Inside the container, when I curl, I can see the output, but when I try to curl from my host machine it says
curl: (56) Recv failure: Connection reset by peer
and I'm not able to see any output in the browser either.
The port is exposed on 8050.
The host machine is CentOS 7; the firewall and SELinux are disabled.
It would help if you posted the docker command / docker-compose file you use.
From what you say, it looks like you used the expose option (or the container image was built exposing that port).
I find the name "expose" a bit misleading.
Exposing a port simply means that the container listens on that port. It does not mean that this port is available ("exposed") to the host.
For that, you need to publish it (-p <host port>:<container port>).
How did you run the container?
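A minimal sketch of the difference (myapp is a hypothetical image listening on 8050):

# Exposing only: the port is reachable from other containers, not from the host:
docker run -d --expose 8050 myapp

# Publishing: the container port is mapped to a host port and reachable from the host:
docker run -d -p 8050:8050 myapp
curl http://localhost:8050/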
A connection reset from a Docker container usually indicates that you've defined a port mapping for the container that does not point to a listening application.
So, if you've defined a mapping of 8050:8050, check that your process inside the Docker instance is in fact listening on port 8050 (netstat -an | grep LISTEN).
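You can run that check from the host without entering the container (the container name myapp is an assumption):

# Show listening sockets inside the running container:
docker exec myapp netstat -an | grep LISTEN
# If netstat isn't installed in the image, ss usually is:
docker exec myapp ss -tln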

Dockerized Telnet over SSH Reverse Tunnel

I know that the title might be confusing, so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4.5
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A to communicate with Server B. Let's assume the port is 1337.
As mentioned, to access Server B I am sending packets to 127.0.0.1:1337.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to use Telnet OVER the SSH reverse tunnel.
Moreover, we created an Alpine container with BusyBox inside of it to eliminate any access to the host. And here is our problem.
The tunnel is created on the host, yet the Telnet client is inside a Docker container. Those are two separate systems.
I can share my host network with the container using --network=host, but that defeats the encapsulation idea of the Docker container.
Also, binding the container to the host like this, -p 127.0.0.1:1337:1337, complains that the port is already in use and it can't bind to it (duh, SSH is using it).
Mapping ports from the host to the container is also not working, since the Telnet client isn't forwarding the traffic to a specific port, so we can't just "sniff" it out.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and trying to configure iptables rules to limit the container's functionality over the network, but my iptables skills aren't really great.
The port forward does not work because that is basically the wrong direction. -p 127.0.0.1:1337:1337 means "take everything that's coming in on that host port, and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps (the following requires at least Docker v20.10):
1. On the host: bind your tunnel to the docker0 interface on the host (might require that you figure out the IP of that interface first). In other words, referring to your example, ensure that the local side of the tunnel does not end at 127.0.0.1:1337 but at <ip of host interface docker0>:1337.
2. On the host: add --add-host host.docker.internal:host-gateway to your docker run command.
3. Inside your container: telnet to host.docker.internal (a magic DNS name) on the port you bound in step 1 (i.e. 1337).
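A sketch of the whole flow. Assumptions: docker0's IP is 172.17.0.1 (check with ip addr show docker0) and Telnet on Server B listens on port 23; binding a remote forward to a non-loopback address also requires GatewayPorts clientspecified in Server A's sshd_config:

# On Server B: open the reverse tunnel so its end on Server A binds to docker0:
ssh -N -R 172.17.0.1:1337:localhost:23 user@serverA

# On Server A: run the container with the host-gateway alias (Docker 20.10+):
docker run --rm -it --add-host host.docker.internal:host-gateway alpine sh

# Inside the container:
telnet host.docker.internal 1337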

How to block external access to a Docker container (Linux CentOS 7)

I have a MongoDB Docker container and I only want access to it from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, it seems that Docker is still reachable from the public internet.
I am using Linux CentOS 7
and docker-compose for setting up Docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after Docker startup.
Another thing to do is to use a non-default port for mongod; modify docker-compose.yml accordingly (remember to add --port=XXX in the command directive).
For better security I suggest putting your server behind an external firewall.
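A hedged docker-compose sketch of the non-default-port idea (27117 is an arbitrary example; binding the published port to 127.0.0.1 is an extra safeguard beyond what this answer describes):

version: "3"
services:
  mongo:
    image: mongo
    command: ["mongod", "--port=27117"]
    ports:
      - "127.0.0.1:27117:27117"   # loopback-only, not reachable from outside the host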
If you have your application in one container and MongoDB in another container, what you need to do is connect them together using a network that is set to be internal.
See the documentation on the internal option:
"By default, Docker also connects a bridge network to it to provide external connectivity. If you want to create an externally isolated overlay network, you can set this option to true."
See also this question.
Here's the tutorial on networking (not covering internal, but good for understanding).
You may also limit traffic to MongoDB by Configuring Linux iptables Firewall for MongoDB.
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
Read more on Wikipedia.
You may connect a container to more than one network, so typically an application container is connected both to the outside-world network (external) and to the internal network. The application communicates with the database on the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (internet).
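A minimal docker-compose sketch of this two-network layout (service, image, and network names are illustrative):

version: "3"
services:
  app:
    image: myapp                 # hypothetical application image
    ports:
      - "8080:8080"              # reachable from the outside
    networks:
      - frontnet
      - backnet
  mongo:
    image: mongo
    networks:
      - backnet                  # no published ports; reachable only by "app"
networks:
  frontnet:
  backnet:
    internal: true               # Docker keeps this network isolated from the outside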
I found a post that may help; posting it here for people who need it in the future.
For security we need both the hardware firewall and the OS firewall enabled and configured properly. I found that firewall protection is ineffective for ports opened by a Docker container listening on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation was:
A server with CentOS 7.9 and Docker version 20.10.17 installed
A Docker container running with port 3000 open on 0.0.0.0
The firewalld service started with the command systemctl start firewalld
Only port 22 allowed access from outside the server, per the firewall configuration
It was expected that no one else could access port 3000 on that server, but the testing result was the opposite: port 3000 was reachable from any other server. Thanks to the blog post, I now have my server protected by the firewall.
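This happens because Docker programs its published ports into its own iptables chains, which inbound traffic hits before firewalld's INPUT rules ever see it. A hedged way to inspect and mitigate (eth0 is an assumption; port 3000 is taken from this post):

# Published container ports appear in Docker's own chains, not the INPUT chain:
sudo iptables -t nat -L DOCKER -n
sudo iptables -L DOCKER -n

# As in the first answer above, drop external traffic at the top of the DOCKER chain:
sudo iptables -I DOCKER 1 -i eth0 -p tcp --dport 3000 -j DROP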

Docker container doesn't connect to another docker container on server

I'm using a DigitalOcean Docker droplet and have 3 Docker containers: one for the front-end, one for the back-end, and one for other tools with different dependencies, let's call it back-end 2.
The front-end calls back-end 1, which in turn calls back-end 2. The back-end 2 container exposes a gRPC service on port 50051. Locally, by running the following command, I was able to identify the Docker service as running with the IP 127.17.0.1:
docker network inspect bridge --format='{{json .IPAM.Config}}'
Therefore, I understand that my gRPC server should be accessible at 127.17.0.1:50051 from within the server.
Unfortunately, the gRPC server refuses connections when running from the Docker droplet, while it works perfectly well when running locally.
Any idea what may be different?
You should generally set up a Docker private network and communicate between containers using their container names; see e.g. How to communicate between Docker containers via "hostname". The Docker-internal IP addresses are subject to change if you delete and recreate a container, aren't reachable from off-host, and trying to find them generally isn't a best practice.
172.17.0.0/16 is a typical default for the Docker-internal IP network (127.0.0.0/8 is the reserved IPv4 loopback network), and it looks like you might have typoed the address you got from docker network inspect.
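A minimal sketch of the name-based approach (the network and image names are illustrative):

# Create a user-defined bridge network and attach both back-ends to it:
docker network create appnet
docker run -d --name backend2 --network appnet backend2-image
docker run -d --name backend1 --network appnet backend1-image

# Inside backend1, the gRPC target is simply the container name:
#   backend2:50051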
Try docker run with the following command:
docker run -d -p {server ip}:12345:50051 {back-end 2 image}
This publishes the container's port on that IP and host port, making it accessible from other servers.
Note: also check the firewall rules, in case the firewall is blocking access.
You could run Docker bound to an IP and port as shown by Aakash. Please also restrict access to this specific IP and port so it can only be reached from the other container's IP and port; this helps keep the service private and doesn't allow others (even other Docker containers/instances within your network) to connect.
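One hedged way to enforce that restriction with iptables (eth0, the allowed source IP 10.0.0.5, and port 50051 are assumptions for illustration):

# Drop traffic to the published port unless it comes from the allowed peer:
sudo iptables -I DOCKER 1 -i eth0 -p tcp --dport 50051 ! -s 10.0.0.5 -j DROP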

Docker network settings and iptables NAT

I have a server running inside a Docker container, listening on a UDP port, let's say 1234. This port is exposed in the Dockerfile.
I also have an external server helping with NAT traversal; basically, it just sends the addresses of the registered server and a client to each other, and allows connecting to a server by the name it sent during registration.
Now, if I run my container with the -P option, my port gets published as some random port, e.g. 32774. But the helper server sees my server connecting to it from port 1234, so it can't send a correct address to a client, and the client can't connect at all.
If I run my container explicitly publishing my server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now the helper server sees my server connecting to it from port 1236, and again it can't send the correct port to a client.
How can this be resolved? My aim is to require as little additional configuration as possible from people who will use my Docker image.
EDIT: So, I need either to know my external port number from inside the container so I can send it to the discovery server, which, as I understand it, is not possible at the moment, right? Or I need outgoing connections from the container and my port to use the same external port as the one configured for incoming connections. Is that possible?
The ports are managed by Docker and the Docker network adapter. When using solely -P, the port is exposed Docker-internally and is accessible through Docker linking. When using "1234:1234", the port is mapped to a host port and is directly available to a client, and also available for linking.
Start the helper server with a link option (--link <server container name>). The helper server will then connect to host "server" on port 1234; the correct IP address will be managed by Docker.
Enable Docker to change your iptables configuration, which is the Docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should provide the host IP, not the Docker container IP address. The Docker container IP address only works inside the host where the Docker network adapter is running.
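Regarding the EDIT: the container can't discover its published port by itself, but two hedged workarounds are to pin the port and pass it in, or to look it up from the host (the image name and the PUBLIC_PORT variable are assumptions):

# Pin the host port and tell the server about it via an environment variable:
docker run -d -p 1234:1234/udp -e PUBLIC_PORT=1234 myserver-image

# Or, with -P, read the randomly assigned host port from the host afterwards:
docker port <container> 1234/udp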
