Dockerized Telnet over SSH Reverse Tunnel

I know the title might be confusing, so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A to communicate with Server B. Let's assume the port is 1337.
As mentioned, to access Server B I send packets to 127.0.0.1:1337.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to run Telnet OVER the SSH reverse tunnel.
Moreover, we created an Alpine container with BusyBox inside it to eliminate any access to the host. And here is our problem:
The tunnel is created on the host, yet the Telnet client is inside a Docker container. Those are two separate systems.
I can share my host network with the container via --network=host, but that defeats the whole encapsulation idea of the Docker container.
Also, binding the container to the host like this, -p 127.0.0.1:1337:1337, screams that the port is already in use and it can't bind to it (duh, SSH is using it).
Mapping ports from host to container also doesn't work, since the Telnet client isn't sending its traffic to one fixed port we could intercept.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and configuring iptables rules to restrict what the container can reach over the network, but my iptables skills aren't really great.

The port forward does not work because that is basically the wrong direction: -p 127.0.0.1:1337:1337 means "take everything that's coming in on that host port and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps:
The following steps require at least Docker v20.10:
On the host: bind your tunnel to the docker0 interface on the host (you might need to figure out the IP of that interface first). In other words, referring to your example, make sure the local side of the tunnel does not end at 127.0.0.1:1337 but at <ip of host interface docker0>:1337.
On the host: add --add-host host.docker.internal:host-gateway to your docker run command.
Inside your container: telnet to host.docker.internal (a magic DNS name) on the port you bound in step 1 (i.e. 1337).
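Putting the three steps together, a minimal sketch (assuming the default docker0 address 172.17.0.1, a Telnet service listening on Server B's port 23, and placeholder user/host names; your values may differ):
# On Server A, allow SSH clients to choose the bind address for reverse tunnels (sshd_config):
#   GatewayPorts clientspecified
# On Server B, open the reverse tunnel, ending its local side on docker0 instead of 127.0.0.1:
ssh -N -R 172.17.0.1:1337:localhost:23 user@server-a
# On Server A, start the container with the magic host entry:
docker run --rm -it --add-host host.docker.internal:host-gateway alpine sh
# Inside the container (recent Alpine ships the telnet applet in busybox-extras):
apk add busybox-extras
telnet host.docker.internal 1337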

Related

docker swarm causing problems with iptables

I'm having an issue with docker swarm modifying iptables when I need control over the ports. I am trying to use UFW to make my own rules.
My setup has an nginx proxy that routes all traffic on designated ports to containers on different nodes, targeting specific ports on a local network interface. So on all of the node servers I want to block every single port on the public interface, so that all traffic has to come via the proxy server.
The problem is docker opens the ports on the public interface. Say I have a container published as 80:80: it opens port 80 on the public interface, when I don't want that server to be directly accessed; traffic needs to come in via the reverse proxy and down through the private network interface only.
I read that with docker compose you can bind the port to the 127.0.0.1 IP address instead of letting docker bind to 0.0.0.0, like this:
"127.0.0.1:80:80"
However this doesn't work with a docker swarm yaml config; when I try it, it gives this error:
error decoding 'Ports': Invalid hostPort: 127.0.0.1
This is causing me a headache. I don't want docker touching my iptables rules at all, but I can't find a solid answer on how to stop it.
At the moment I am using the OVH firewall directly on the IP addresses, but this isn't an ideal solution, as OVH's firewall is basic and doesn't allow me to set the port ranges I need.
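For reference, the loopback binding mentioned above does work outside swarm mode; a quick sketch, with nginx as a stand-in image:
# Publish port 80 on loopback only; plain docker run (or docker compose) accepts this:
docker run -d --name web -p 127.0.0.1:80:80 nginx
# Reachable from the host itself, but not via the public interface:
curl -s http://127.0.0.1/ | head -n 1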

how to block external access to docker container linux centos 7

I have a mongodb docker container and I only want access to it from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, it seems that docker is still reachable from the public side.
I am using Linux CentOS 7
and docker-compose for setting up docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after docker startup (docker recreates its chains when the daemon starts, so rules added earlier are lost).
Another thing to do is to use a non-default port for mongod; modify docker-compose.yml accordingly (remember to add --port=XXX in the command directive).
For better security I suggest putting your server behind an external firewall.
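The non-default-port suggestion as a docker run sketch (27117 is an arbitrary example; in docker-compose.yml the same flag goes into the command directive):
# The official mongo image passes arguments after the image name straight to mongod:
docker run -d --name mongo -p 27117:27117 mongo --port=27117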
If you have your application in one container and MongoDB in another container, what you need to do is connect them together using a network that is set to be internal.
See Documentation:
Internal
By default, Docker also connects a bridge network to it to provide
external connectivity. If you want to create an externally isolated
overlay network, you can set this option to true.
See also this question
Here's the tutorial on networking (it doesn't cover internal networks, but it's good for understanding).
You may also limit traffic on MongoDb by Configuring Linux iptables Firewall for MongoDB
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
Read more on Wikipedia.
You may connect a container to more than one network, so typically an application container is connected both to the outside-world (external) network and to the internal network. The application communicates with the database on the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (the internet).
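A minimal sketch of that layout with the docker CLI (network and image names are placeholders):
# An internal network for the database, an ordinary bridge network for the outside world:
docker network create --internal backend
docker network create frontend
# The database is attached only to the internal network:
docker run -d --name mongo --network backend mongo
# The application is published externally and attached to both networks:
docker run -d --name app --network frontend -p 8080:8080 my-app-image
docker network connect backend app
# "app" can now reach "mongo" by name over the internal network,
# while "mongo" stays unreachable from outside the host.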
I found a post that may help; I'm reposting the gist here for people who need it in the future.
For security we need both the hardware firewall and the OS firewall enabled and configured properly. I found that firewall protection is ineffective for ports opened by a docker container listening on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation is:
A server with CentOS 7.9 and Docker version 20.10.17 installed
A docker container running with port 3000 opened on 0.0.0.0
The firewalld service started with the command systemctl start firewalld
Only port 22 allowed access from outside the server, as the firewall was configured.
It was expected that nobody else could access port 3000 on that server, but the test showed the opposite: port 3000 was reachable from any other server. Thanks to the blog post, my server is now protected by the firewall.
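A common fix for exactly this situation (possibly what that post described) is Docker's DOCKER-USER chain, which docker evaluates before its own rules and never overwrites; a sketch for the port 3000 case:
# Drop external traffic to the published port; unlike rules inserted into the DOCKER
# chain, DOCKER-USER rules survive docker restarts:
iptables -I DOCKER-USER -i eth0 -p tcp --dport 3000 -j DROP
# If the published and container ports differ, match the original destination port
# instead, e.g. with: -m conntrack --ctorigdstport 3000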

Docker container can't connect to ip host

I have deployed a Netflix Hystrix dashboard with Turbine in a Docker container. I can access http://ip:8081/hystrix, but when I try to monitor the Turbine stream it freezes and doesn't return any information. Testing with curl inside the container, both curl http://localhost:8081/turbine.stream and curl http://containername:8081/turbine.stream work perfectly, but when I use the host IP, curl http://hostip:8081/turbine.stream throws Failed to connect to hostip port 8081: No route to host. I can't find a solution; can someone help me with this issue?
Thanks in advance.
In order to access the container through the host IP you need to ensure the following:
The port mapping allows connections through the host/public IP itself, not only localhost.
You can check this by executing docker ps on the docker host and looking at the PORTS column; the default should look like 0.0.0.0:8081->8081/tcp, which means it accepts connections from any interface, whether public, private or localhost.
The firewall is not blocking the connection on port 8081.
By default the host firewall should be managed by the Docker daemon itself, so port 8081 will be allowed; but you might have a different case: either Docker is not managing the host's firewall, or there is an extra layer that prevents the connection.
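A few quick checks on the docker host covering both points (a sketch):
# 1. Confirm the mapping listens on all interfaces, not only 127.0.0.1:
docker ps --format '{{.Names}} -> {{.Ports}}'   # expect 0.0.0.0:8081->8081/tcp
# 2. Look for firewall rules touching the port:
iptables -L -n | grep 8081
# Then retry from the host itself:
curl http://hostip:8081/turbine.stream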

Port forwarding Ubuntu - Docker

I have the following problem:
Assume that I started two Docker containers on the host machine, A and B:
docker run -ti -p 2000:2000 A
docker run -ti -p 2001:2001 B
I want to be able to get to each of this containers FROM INTERNET by:
http://example.com:2000
http://example.com:2001
How do I achieve that?
The rest of the equation here is just normal TCP/IP flow. You'll need to make sure of the following:
If the host has an implicit deny for incoming traffic on its physical interface, you will need to open up ports 2000 and 2001, just like you would for any service (Docker or not).
If the host is behind a NAT or other external means of routing, you'll need to punch holes for those ports there as well.
You'll need the external IP address (either the one attached to the host or the one in front of the NAT allowing access to the ports).
As far as Docker is concerned, you've done what is required to open the ports to the service running in that container correctly.
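On an Ubuntu host with ufw, for example, the first point might look like this (a sketch; adapt to whatever firewall the host actually runs):
# Allow the two published ports through the host firewall:
ufw allow 2000/tcp
ufw allow 2001/tcp
# Then, from the internet (once any NAT holes are punched as well):
curl http://example.com:2000/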

Docker network settings and iptables NAT

I have a server running inside a docker container, listening on a UDP port, let's say 1234. This port is exposed in the Dockerfile.
Also, I have an external server helping with NAT traversal, basically, just sending addresses of the registered server and a client to each other, and allowing to connect to a server by the name it sent during registration.
Now, if I run my container with -P option, my port is getting published as some random port, e.g. 32774. But on the helper server I see my server connected to it from port 1234, and so it can't send a correct address to a client. And a client can't connect at all.
If I run my container explicitly publishing my server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now on the helper server I see my server connected to it from port 1236, and again it can't send the correct port to a client.
How can this be resolved? My aim is to require as little additional configuration as possible from the people who will use my docker image.
EDIT: So I need either to know my external port number from inside the container, to send it to the discovery server (which, as I understand it, is not possible at the moment, right?), or to make outgoing connections from the container and my port use the same external port as configured for incoming connections. Is that possible?
The ports are managed by docker and the docker network adapter. When using solely -P, the exposed port is published on a random host port; inside Docker it is also reachable directly through linking. When using "1234:1234", the port is mapped to the same host port, directly available to a client, and also available for linking.
Start the helper server with a link option: --link <server-container-name>:server. The helper server can then connect to host "server" on port 1234; the correct IP address will be managed by docker.
Allow docker to change your iptables configuration, which is the docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should hand out the host IP and not the docker container IP address; the container IP only works inside the host where the docker network adapter is running.
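A sketch of the linking approach with placeholder names (server-image and helper-image are assumptions; --link is legacy, but it still works):
# Run the game server, publishing its UDP port one-to-one:
docker run -d --name gameserver -p 1234:1234/udp server-image
# Run the helper with a link; inside it, the hostname "server" resolves to that container:
docker run -d --name helper --link gameserver:server helper-image
# e.g. inside "helper": nc -u server 1234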
