How to link an internal port to an outside port in Docker?

I am not sure I understand the Docker port concept. Say I have an application inside a container that listens on port 6000 for TCP connections. This container is on server A.
I want to connect to the application from another server, B. I also want to start multiple instances of the same container on server A; the internal port should stay 6000, but the external port should change per instance.
E.g.:
container 1: 6000 -> 9660
container 2: 6000 -> 9661
...
So from the outside the application should be reachable on 9660, 9661, ... Is this possible? I tried:
docker run -p 9660:6000 ...
but the client could not connect. Any ideas?

I forgot to
EXPOSE 6000
inside my Dockerfile. Now it works :)
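For reference, a sketch of the full setup once it works, assuming the image is named my-app (a hypothetical name); each docker run publishes the same internal port 6000 on a different host port:
docker run -d -p 9660:6000 my-app
docker run -d -p 9661:6000 my-app
A client on server B then connects to serverA:9660 or serverA:9661, and Docker forwards each connection to port 6000 inside the respective container.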

Related

Docker not exposing the port

There is a Python application which I'm trying to run inside a Docker container.
Inside the container, curl shows the expected output, but when I curl from my host machine it says
curl: (56) Recv failure: Connection reset by peer
and I'm not able to see any output in the browser either.
The port exposed is 8050.
The host machine is CentOS 7.
Firewall and SELinux are disabled.
It would help if you posted the docker command / docker-compose file you use.
From what you say, it looks like you used the expose option (or the container image was built exposing that port).
I find the name "expose" a bit misleading.
Exposing a port simply means that the container listens on that port. It does not mean that this port is made available ("exposed") to the host.
For that, you need to use publish (-p <host port>:<container port>).
How did you run the container?
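To illustrate the difference, a hedged example, assuming the application listens on 8050 inside the container and the image is named my-app (hypothetical):
docker run -p 8050:8050 my-app
With the port published this way, curl http://localhost:8050 on the host should reach the container.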
Connection Reset on a Docker container usually indicates that you've defined a port mapping for the container that does not point to a listening application.
So, if you've defined a mapping of 8050:8050, check that your process inside the container is in fact listening on port 8050 (netstat -an | grep LISTEN).
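A quick way to run that check from the host, sketched under the assumption that the container is named web (a hypothetical name):
docker exec web netstat -an | grep LISTEN
If the output shows 127.0.0.1:8050 rather than 0.0.0.0:8050, the process only accepts connections from inside the container; for a Flask app that means starting it with app.run(host='0.0.0.0', port=8050).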

Dockerized Telnet over SSH Reverse Tunnel

I know that the title might be confusing so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A to communicate with Server B. Let's assume the port is 1337.
As I mentioned, to access Server B I send packets to 127.0.0.1:1337.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to use Telnet over the SSH reverse tunnel.
Moreover, we created an Alpine container with BusyBox inside it to eliminate any access to the host. And here is our problem.
The tunnel is created on the host, yet the Telnet client is inside a Docker container. Those are two separate systems.
I can share my host network with the container using --network=host, but that eliminates the encapsulation idea of the Docker container.
Binding the container to the host with -p 127.0.0.1:1337:1337 complains that the port is already in use and can't be bound (of course, SSH is using it).
Mapping ports from the host to the container also doesn't work, since the Telnet client isn't forwarding the traffic to a specific port, so we can't just "sniff" it out.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and trying to configure iptables rules to limit the docker functionality over the network but my iptables skills aren't really great.
The port forward does not work because it goes in the wrong direction: -p 127.0.0.1:1337:1337 means "take everything that comes in on that host port and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps (they require at least Docker v20.10, which introduced the host-gateway value):
1. On the host: bind your tunnel to the docker0 interface (this might require figuring out the IP of that interface first). In other words, referring to your example, ensure that the local side of the tunnel does not end at 127.0.0.1:1337 but at <ip of host interface docker0>:1337.
2. On the host: add --add-host host.docker.internal:host-gateway to your docker run command.
3. Inside your container: telnet to host.docker.internal (a magic DNS name) on the port you bound in step 1 (i.e. 1337).
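A sketch of those three steps, with the unstated details assumed: docker0 is taken to have its default address 172.17.0.1, the Telnet service on Server B listens on port 23, and the image name telnet-client is hypothetical. Note that binding the remote end of an SSH tunnel to a non-loopback address usually also requires GatewayPorts clientspecified in sshd_config on Server A:
# on Server B: open the reverse tunnel, terminating on Server A's docker0 address
ssh -N -R 172.17.0.1:1337:localhost:23 user@serverA
# on Server A: run the container with the host-gateway alias and connect
docker run -it --add-host host.docker.internal:host-gateway telnet-client telnet host.docker.internal 1337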

Can the same docker port be used for two different applications?

If we have two applications, app1.py and app2.py, both running in Docker containers as Flask services with the following commands:
docker run -p 5000:5002 app1.py
docker run -p 9000:5002 app2.py
Is it possible to keep the same docker port 5002 for both containers?
Secondly, if I use app.run(host='0.0.0.0', port=5000, debug=True) in the Flask endpoint.py file used when building the image, is port=5000 the port inside the container or the port available externally on the host?
Yes, each container runs in an isolated network namespace, so every one can listen on the same port and they won’t conflict. Inside your application code the port you listen on is the container-internal port, and unless you get told in some other way (like an HTTP Host: header) you have no way of knowing what ports you’ve been remapped to externally.
Is it possible to keep the same docker port 5002 for both containers?
Yes, of course. Every container runs in an isolated network namespace, which means containers cannot conflict with each other's ports unless they are configured to do so. What may confuse you is that containers can still communicate with each other by default; that is thanks to Docker's default network settings. You can read more about the Container Network Model and network namespaces here.
Is port=5000 the port in the container or the port valid externally on the host?
It's the port in the container, no doubt. It is a user-defined argument to Flask's run() function, and since the Flask application runs in the container, 5000 is the port the Flask app will listen on inside the container.
You have to map it out if you want to access it from the host (outside the container). The -p flag does exactly that.
Hope this helps~
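A minimal sketch of the setup in the question, assuming both images serve Flask on container port 5002 (the image names app1 and app2 are hypothetical):
docker run -d -p 5000:5002 app1
docker run -d -p 9000:5002 app2
curl http://localhost:5000   # reaches app1
curl http://localhost:9000   # reaches app2
Both containers listen on 5002 internally without conflict; only the host-side ports need to differ.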

How can I inverse EXPOSE a port to a container?

I am new to Docker, and I may not be looking in the right place in the documentation, because I couldn't find a way to do what I call "inverse EXPOSE".
For example, I have a web application that EXPOSEs port 80. That same application uses a PostgreSQL database. During local development it works fine because I connect to localhost:5432, but when I containerize the app it says something like "connection refused". I think the Docker philosophy is to containerize as much as possible and make those containers communicate with each other through a Docker network. But I am curious whether it is actually possible to say that localhost:5432 in my container refers to port 5432 on the machine that hosts my container.
Localhost inside a container is not your docker host, it's a namespaced network inside the container. So if you try to communicate with localhost or 127.0.0.1 inside a container, it's only going to communicate with other apps running inside that container.
Instead, you should use the routeable IP of the host, so that requests can come out of the container and back into the docker host interface to reach applications running outside of a container.
When the app is running in the container, you should use the host's IP, e.g. 192.168.99.100:5432, and not localhost.
localhost in the container refers to the container's own localhost (127.0.0.1), not the host's.
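On newer Docker versions there is also the host-gateway alias mentioned earlier. A hedged sketch, assuming Docker 20.10+, an image named my-web-app (hypothetical), and PostgreSQL on the host configured to listen on a non-loopback interface (listen_addresses in postgresql.conf, with pg_hba.conf permitting the connection):
docker run --add-host host.docker.internal:host-gateway my-web-app
The app then connects to host.docker.internal:5432 instead of localhost:5432.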

How to obtain the published ports from within a docker container?

I wrote a simple peer-to-peer system in which, when starting a node,
the node looks for a free port and then makes its service accessible on that port, and
it registers its URL, including the port, with a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that Docker is useful for, so I thought of making a container for these peers (my previous experience with Docker has only been writing a hello-world container).
Ideally I would map (publish) my exposed port to a host port from within the container, using the same code that I am running now. But I can imagine that is simply not possible, and I could get around it by starting the image with a script that checks for available ports and then runs the container on an appropriate free host port. If the first approach is possible, however, that would be even better. To be explicit, I do something like the following in Python:
import socket

port = 5001
# probe upward until connect fails, i.e. nothing is listening on the port
while socket.socket().connect_ex(("127.0.0.1", port)) == 0:
    port += 1
The second part really has to be taken care of from within the container. Assume it has been started with the command docker run -p 5005:80 p2p-node; then, from within the container, I need to find out that the exposed port 80 is mapped to the published port 5005.
Searching this site and the internet it looks like more people are interested in doing the same, but I couldn't find a solution, nor an affirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea:
Start container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from inside the container:
echo $HOST_PORT
There is also docker-py, a Python client library for Docker.
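If you are willing to mount the Docker socket into the container (a security trade-off), you can also query the Docker Engine API from inside. A hedged sketch; the hostname inside a container defaults to its short container ID, which the API accepts:
docker run -p 5005:80 -v /var/run/docker.sock:/var/run/docker.sock p2p-node
# then, inside the container:
curl --unix-socket /var/run/docker.sock http://localhost/containers/$HOSTNAME/json
The JSON response contains the mapping under NetworkSettings.Ports, e.g. "80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "5005"}].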
It is not clear why you want the host port. In Docker, containers can communicate with each other without having to expose ports on the host machine.
As long as the peer apps are containerized, you don't need the published port at all. The containers can be connected via a Docker network, and the internal port can be used for communication between the containers.
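A sketch of that approach, reusing the p2p-node image name from above with a hypothetical network name p2p-net; on a user-defined network, Docker's built-in DNS lets containers reach each other by name:
docker network create p2p-net
docker run -d --network p2p-net --name node1 p2p-node
docker run -d --network p2p-net --name node2 p2p-node
# node2 can now reach node1 at node1:80, no published ports needed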
