I have a server running inside a Docker container, listening on a UDP port, let's say 1234. This port is exposed in the Dockerfile.
I also have an external server helping with NAT traversal; basically, it just sends the addresses of a registered server and a client to each other, and allows a client to connect to a server by the name the server sent during registration.
Now, if I run my container with the -P option, my port gets published as some random host port, e.g. 32774. But on the helper server I see my server connecting from port 1234, so it can't send the correct address to a client, and a client can't connect at all.
If I run my container explicitly publishing my server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now on the helper server I see my server connecting from port 1236, so again it can't send the correct port to a client.
How can this be resolved? My aim is to require as little additional configuration as possible from the people who will use my Docker image.
EDIT: So, I either need to know my external port number from inside the container, to send it to the discovery server, which, as I understand it, is not possible at the moment, right? Or I need outgoing connections from the container and my port to use the same external port as the one configured for incoming connections. Is that possible?
The ports are managed by Docker and the Docker network adapter. When using only -P, each exposed port is published on a random host port and is also accessible through Docker linking. When using -p 1234:1234, the port is mapped to a fixed host port, directly available to a client, and also available for linking.
Start the helper server with a link option, --link <server container name>:server. The helper server can then connect to the host name server on port 1234; the correct IP address is managed by Docker.
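For example, a minimal sketch (the container and image names here are hypothetical):
docker run -d --name gameserver -p 1234:1234/udp my/game-server
docker run -d --link gameserver:server my/helper-server
# inside the helper container, the hostname "server" now resolves to gameserver's IP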
Enable Docker to change your iptables configuration, which is the Docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should provide the host IP, not the Docker container IP address: the container IP address only works inside the host where the Docker network adapter is running.
I know that the title might be confusing so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A to communicate with Server B. Let's assume the port is 1337.
As I mentioned, to access Server B I am sending packets to 127.0.0.1:1337.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to use telnet OVER the ssh reverse tunnel.
Moreover, we created an alpine container with busybox inside of it to eliminate any access to the host. And here is our problem.
The tunnel is created on the host, yet the telnet client is inside a docker container. Those are two separate systems.
I can share my host network with the container with --network=host, but it eliminates the encapsulation idea of the Docker container.
Also, binding the container to the host like this, -p 127.0.0.1:1337:1337, fails complaining that the port is already in use and it can't bind to it (duh, ssh is using it).
Mapping ports from the host to the container also doesn't work, since the telnet client isn't forwarding the traffic to a specific port, so we can't just "sniff" it out.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and trying to configure iptables rules to limit the Docker functionality over the network, but my iptables skills aren't really great.
The port forward does not work because that is basically the wrong direction: -p 127.0.0.1:1337:1337 means "take everything that's coming in on that host port, and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps:
The following steps require at least Docker v20.10.
1. On the host: bind your tunnel to the docker0 interface on the host (this might require figuring out the IP of that interface first). In other words, referring to your example, ensure that the local side of the tunnel does not end at 127.0.0.1:1337 but at <ip of host interface docker0>:1337.
2. On the host: add --add-host host.docker.internal:host-gateway to your docker run command.
3. Inside your container: telnet to host.docker.internal (a magic DNS name) on the port you bound in step 1 (i.e. 1337).
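Put together, a rough sketch (the docker0 address 172.17.0.1 and the user/host names are assumptions):
# on Server B: reverse tunnel whose far end binds to docker0 on Server A
# (needs "GatewayPorts clientspecified" in Server A's sshd_config)
ssh -N -R 172.17.0.1:1337:localhost:23 user@serverA
# on Server A: run the container with the magic host entry and telnet through the tunnel
docker run --rm -it --add-host host.docker.internal:host-gateway busybox \
  telnet host.docker.internal 1337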
I want to expose the container IP to the external network where the host is running, so that I can directly ping the Docker container IP from an external machine.
If I ping the Docker container IP from an external machine, where the machine hosting Docker and the machine from which I am pinging are in the same network, I need to get a response from these machines.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the web server in this container (depending on firewall settings etc., of course...).
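As a quick end-to-end check (the host address is a placeholder):
docker run -d -p 80:80 httpd:alpine
curl http://<docker host ip>/    # from another machine; should return httpd's default page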
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that’s able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable via the published port via the host’s IP address or DNS name. In a Kubernetes context, you need to set up a Service that’s able to route traffic to the Pod (or Pods) that are running your container, and you ultimately reach containers via that Service.
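As a minimal sketch, assuming a Deployment named web already exists (the name and ports are placeholders):
kubectl expose deployment web --port=80 --target-port=8080 --type=NodePort
# clients then reach the pods via <any node ip>:<assigned NodePort>, not the pod IPs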
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very low-level debugging tool that uses a network protocol called ICMP. If sending packets using ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes. I suspect it isn't. Don't worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers are using a bridge network of 172.17.0.0/16, you add a static entry for the 172.17.0.0/16 network, with your Docker physical LAN IP as the gateway. You might need to also allow this forwarding in your Docker OS firewall configuration.
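For example, on a Linux client machine (the 192.168.1.10 LAN address of the Docker host and the 192.168.1.0/24 client subnet are assumptions):
# route the Docker bridge subnet via the Docker host's LAN IP
sudo ip route add 172.17.0.0/16 via 192.168.1.10
# on the Docker host, permit the forwarded traffic (DOCKER-USER is Docker's iptables hook chain)
sudo iptables -I DOCKER-USER -s 192.168.1.0/24 -j ACCEPT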
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped-port), you can remove port mapping from the container entirely.
You need to create a new bridge Docker network and attach the container to this network. You should be able to connect this way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10 my-new-bridge-network container-name
If the problem persists, just reload the Docker daemon or restart the service. It is a known issue.
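On a systemd-based host, for example:
sudo systemctl restart docker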
I have deployed a Netflix Hystrix dashboard with Turbine on a Docker container. I can access http://ip:8081/hystrix, but when I try to monitor the Turbine stream it freezes and doesn't return any information. I tested using curl inside the container: both curl http://localhost:8081/turbine.stream and curl http://containername:8081/turbine.stream work perfectly, but when I use the host IP, as in curl http://hostip:8081/turbine.stream, curl throws Failed to connect to hostip port 8081: No route to host. I can't find a solution; can someone help me with this issue?
Thanks in advance.
In order to access the container through the host IP you need to ensure the following:
The port mapping accepts connections through the host/public IP itself, not only localhost.
You can check this by executing docker ps on the Docker host and looking at the PORTS column. The default should be 0.0.0.0:8081->8081/tcp, which means it accepts connections from any interface, whether public, private, or localhost.
The firewall is not blocking the connection on port 8081.
By default the host's firewall should be managed by the Docker daemon itself, so port 8081 will be allowed through the firewall, but you might have a different case: either Docker is not managing the host's firewall, or there is an extra layer that prevents the connection.
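For example, a quick check, plus an explicit allow rule assuming the host firewall is ufw:
docker ps --format '{{.Names}} {{.Ports}}'   # expect something like 0.0.0.0:8081->8081/tcp
sudo ufw allow 8081/tcp                      # open the port if the host firewall blocks it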
I'm using a Digital Ocean docker droplet and have 3 docker containers: 1 for front-end, 1 for back-end and 1 for other tools with different dependencies, let's call it back-end 2.
The front-end calls back-end 1, and back-end 1 in turn calls back-end 2. The back-end 2 container exposes a gRPC service over port 50051. Locally, by running the following command, I was able to identify the Docker service as running with the IP 127.17.0.1:
docker network inspect bridge --format='{{json .IPAM.Config}}'
Therefore, I understand that my gRPC server is accessible at 127.17.0.1:50051 from within the server.
Unfortunately, the gRPC server refuses connections when running from the docker droplet while it works perfectly well when running locally.
Any idea what may be different?
You should generally set up a Docker private network to communicate between containers using their container names; see e.g. How to communicate between Docker containers via "hostname". The Docker-internal IP addresses are subject to change if you delete and recreate a container and aren't reachable from off-host, and trying to find them generally isn't a best practice.
172.17.0.0/16 is a typical default for the Docker-internal IP network (127.0.0.0/8 is the reserved IPv4 loopback network) and it looks like you might have typoed the address you got from docker network inspect.
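A minimal sketch of that setup (the network and image names here are hypothetical):
docker network create app-net
docker run -d --name backend2 --network app-net my/backend2-image
docker run -d --name backend1 --network app-net my/backend1-image
# inside backend1, the gRPC service is now reachable at backend2:50051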
Try docker run with the following command:
docker run -d -p {server ip}:12345:50051 {back-end 2 image}
This publishes the container's gRPC port on the given host IP and port, making it accessible from other servers.
Note: also check the firewall rules, in case the firewall is blocking access.
You could run Docker binding to an IP and port as shown by Aakash. Please restrict access to this specific IP and port so that it can be reached only from the other Docker instance's IP and port; this helps keep the containers private and doesn't allow others (even other Docker instances within your network) to connect.
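One way to sketch that restriction, assuming the other server's address is 10.0.0.5 (DOCKER-USER is the iptables chain Docker reserves for user rules):
sudo iptables -I DOCKER-USER -p tcp --dport 50051 ! -s 10.0.0.5 -j DROP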
I am deploying a Eureka server on a VM (say the host's external IP is a.b.c.d) as a Docker image. I am trying this in 2 ways.
1. I run the Docker image without explicit port mapping: docker run -p 8761 test/eureka-server
Then running the docker ps command shows the port mapping as: 0.0.0.0:32769->8761/tcp
Trying to access the Eureka server from outside the VM with http://a.b.c.d:32769, it's not available.
2. I run the Docker image with explicit port mapping: docker run -p 8761:8761 test/eureka-server
Then running docker ps command shows the port mapping as : 0.0.0.0:8761->8761/tcp
Trying to access the Eureka server from outside the VM with http://a.b.c.d:8761, it's available.
Why, in the first case, is the Eureka server not available from outside the host machine, even though there is a random port (32769) assigned by Docker?
Is it necessary to have explicit port mapping for a Docker app to be available from an external network?
Since you're looking for access from the outside world to the host via the mapped port, you'll need to ensure that the source traffic is allowed to reach that port on the host for the given protocol. I'm not a network security specialist, but I'd suggest that opening up an entire range of ports simply because you don't know which port Docker will pick would be a bad idea. If you can, pick a port, explicitly map it, and ensure the firewall allows access to that port from the appropriate source address(es), e.g. ALLOW TCP/8761 in from 10.0.1.2/32; obviously your specific address range will vary with your network configuration. Docker Compose may help you keep this consistent (as will other orchestration technologies like Kubernetes). In addition, if you use cloud hosting services like AWS, you may be able to leverage VPC security groups to whitelist source traffic to the port without knowing all possible source IP addresses ahead of time.
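For instance, with ufw on the Docker host (reusing the example client address 10.0.1.2):
sudo ufw allow proto tcp from 10.0.1.2 to any port 8761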
Either you have a firewall blocking this port, or, wherever you are making the requests from, outgoing traffic to certain destination ports is disabled, so your requests never leave your machine.
Some companies do this: they leave ports 80, 443, and a couple more open for their intranet, and disable all other destination ports.