Cannot access Docker container remotely

So I have a custom Docker image based on Debian (it essentially just installs some extra packages and copies a few files to configure the web server). However, I have been unable to access the web server externally.
The command I use to run the image after being built is:
docker container run --publish 8080:80 --name ipcast -t -d ipcast
Of course, the first thing I assumed was that the port was not open in the firewall, but I double-checked and it seems to be open through all hops to the Docker host.
Furthermore, I ran nmap -p 8080 ourdockerserver.com on my remote machine and it yielded:
PORT STATE SERVICE
8080/tcp closed http-proxy
Which implies that the port is open in the firewall (since it's closed, not filtered); it's just that nothing is listening on it.
Nevertheless, when I run netstat -tulpn on the Docker host, I can see docker-proxy listening on port 8080:
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 579620/docker-proxy
And running nmap -p 8080 localhost on the Docker host seems to work fine:
PORT STATE SERVICE
8080/tcp open http-proxy
I'm not particularly knowledgeable about iptables; however, I've never seen a port that is in use internally show up as closed externally without being filtered by a firewall. I understand this may have something to do with the port-forwarding magic that Docker does on the host, but I cannot get it to work even after scouring the internet.
I also tried creating a bridged network with docker network create -d bridge my-net and then running my container with:
docker container run --network my-net --publish 8080:80 --name ipcast -t -d ipcast
This did not work either...
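For what it's worth, a couple of checks that usually narrow this down (a rough sketch using the hostname from above; <host-ip> is a placeholder for the Docker host's public address):
# Confirm the name actually resolves to the Docker host you are testing
dig +short ourdockerserver.com
# Watch on the Docker host for the remote probe arriving; if nothing shows up here,
# whatever is sending the RST sits in front of the host (NAT, upstream firewall)
sudo tcpdump -ni any tcp port 8080
# Try the host's address directly from outside, bypassing DNS
curl -v http://<host-ip>:8080/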

Related

Unable to access jenkins on port 8080 when running docker network host

I have Jenkins running in a Docker container on a Linux EC2 instance. I am running Testcontainers within it and I want to expose all ports to the host. For that I am using the host network.
When I run the Jenkins container with -p 8080:8080 everything works fine and I am able to access Jenkins at {ec2-ip}:8080:
docker run -id -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
However, if I run the same image using --network=host, as I want to expose all ports to the host:
docker run -id --network=host jenkins/jenkins:lts
{ec2-ip}:8080 becomes unreachable. I can curl it locally within the container at localhost:8080, but accessing Jenkins from the browser doesn't work.
I am not sure how host networking would change the way I access Jenkins on port 8080. Shouldn't the application still be available on port 8080 on the host IP address?
Check whether you have enabled port 8080 in the security group for the instance.
When a Docker container is running in the host network mode using the --network=host option, it shares the network stack with the Docker host. This means that the container is not isolated and uses the same network interface as the host.
In your case, you should be able to access Jenkins from the browser at ec2-ip:8080.
I tested it by running Jenkins with the following command:
docker run -id --name jenkins --network=host jenkins/jenkins:lts
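You can then sanity-check it locally on the instance before trying the browser (plain curl, nothing Jenkins-specific assumed):
curl -I http://localhost:8080   # any HTTP response (even a 403) means Jenkins is listening on the host network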
If the issue still persists, you can check the following (rough commands for each check are sketched below the list):
make sure the container is running
make sure that no other process is already running on port 8080
make sure that you have enabled port 8080 in the security group of your EC2 instance
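Rough commands for those checks (the container name, port, and security-group ID are placeholders for your setup):
docker ps --filter name=jenkins                                      # is the container actually up?
sudo ss -tlnp | grep ':8080'                                         # is anything else already listening on 8080?
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0    # does the group allow inbound 8080?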
AFAIU --network doesn't do what you expect it to do. The --network flag allows you to connect the container to a network. For example, when you do --network=host your container will be able to use the Docker host's network stack. Not the other way around. Take a look at the official documentation.
Figured it out. I needed to update iptables to allow port 8080 when using the host network.
sudo iptables -I INPUT -i eth0 -p tcp -m tcp --dport 8080 -m comment --comment "# jenkins #" -j ACCEPT
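To double-check the rule and keep it across reboots, something like this should work (assuming a Debian/Ubuntu host with the iptables-persistent package installed):
# Exits with 0 if the rule exists
sudo iptables -C INPUT -i eth0 -p tcp -m tcp --dport 8080 -m comment --comment "# jenkins #" -j ACCEPT
# Save the current rules so they survive a reboot
sudo netfilter-persistent save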

How to Connect Docker Container to localhost [duplicate]

I've created a Docker container from the Ubuntu image. Inside it I've created a React app, and when I try to run the app I can't see it running on localhost, even though the container's terminal says it is running on the port. How can I connect a Docker container to my localhost?
If you have a Dockerfile, just pass the port to docker run with -p 3001:3000.
If you have a Docker Compose file, set the port with:
ports:
- 3001:3000
and run docker-compose up -d
Finally, navigate to localhost:{Port}.
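For reference, a minimal docker-compose.yml along those lines might look like this (the service and image names are placeholders, and it assumes the app listens on 3000 inside the container):
services:
  web:
    image: my-react-app   # placeholder image name
    ports:
      - 3001:3000         # host port 3001 -> container port 3000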
From the official documentation: docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in Docker.
Then run docker ps and verify it's running and the ports are published.
Also check your firewall; it may be blocking ports.
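A quick way to double-check the mapping on a running container (the container name is a placeholder):
docker port my-container   # lists each published container port and the host address it maps to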

Docker network bridge not working from outside

I am trying to start a Jenkins container. Container port 8080 is mapped to host port 80.
docker run -p 80:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
I can curl the Jenkins app from the host, however I cannot reach it from outside the host.
curl localhost:80 works on the host, but
curl <fqdn or ip address>:80 does not work from outside the host (same network as the host) and times out.
If I set --net=host, the above command works, but I would like to not use the host network in case I need to add more services in the future.
I don't think it is a firewall issue, because with the network set to host the link works.
With tcpdump I can see the requests on the host's main interface; however, no packets are forwarded to the docker0 interface. Is this normal behaviour?
What can I do to resolve the issue and reach Jenkins from outside the host?
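A couple of host-side things worth checking in a case like this (a sketch, assuming an iptables-based setup on the Docker host):
sysctl net.ipv4.ip_forward      # must be 1 or the host will not forward packets towards docker0
sudo iptables -S DOCKER-USER    # a DROP/REJECT rule here blocks exactly this forwarded path
sudo iptables -L FORWARD -n -v  # check the chain policy and whether any rule counters increase during a test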

Docker - connecting to an open port in a container

I'm new to Docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running Docker container. I've pulled and run the rabbitmq container from Docker Hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container uses port 5672 for clients to connect to.
After running the container (as instructed on the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (it is possible on a regular rabbitmq installation and should be on a container as well).
I've (at least I think I did) gotten the container IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
The result I got was 172.17.0.2, but when I try to access it using telnet 172.17.0.2 5672 it's unsuccessful.
The address 172.17.0.2 seems strange to me, because if I run ipconfig on my machine I don't see any interface using a 172.17.0.x address. I do see an Ethernet adapter vEthernet (DockerNAT) using the IP 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672) then I can telnet into this port using telnet localhost 5672 and immediately connect.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you want, because you are running the application over the default bridge network (on Windows, I guess).
From the official Docker docs:
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
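For example, two containers on the same user-defined bridge can reach each other directly by name, without any -p mappings (the network and container names here are just placeholders):
docker network create my-bridge
docker run -d --name some-rabbit --network my-bridge rabbitmq:3
docker run -it --rm --network my-bridge busybox telnet some-rabbit 5672   # reaches the broker over the bridge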
Later on the rabbitmq hub page there is a reference to a Management Plugin, which is run by executing the command:
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
This publishes the management UI (container port 15672) on host port 8080, which I think may be what you need.
You should also notice that they talk about clusters and nodes there; maybe they meant for the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
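If what you actually want is the client port reachable from the host for your telnet test, you could publish both the AMQP port and the management UI (the host-side port numbers here are just a choice):
docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management
# then: telnet localhost 5672 for AMQP, and http://localhost:15672 for the management UI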
Hope I could help somehow :)

Docker port exposed to outside world

I've installed Docker in a VM which is publicly available on the internet, and I've installed MongoDB in a Docker container in the VM. MongoDB is listening on port 27017.
I set it up using the following command:
docker run -p 27017:27017 --name da-mongo -v ~/mongo-data:/data/db -d mongo
The container's port is redirected to the host using the -p flag, but that means port 27017 is exposed on the internet, which I don't want.
Is there any way to fix it?
Well, if you want it available only to certain hosts then you need a firewall. But if all you need is for it to work on localhost (your VM), then you don't need to publish/bind the port on the host. I suggest you run the container without the -p option and then run the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' your_container_id_or_name
That will display an IP address: the IP of the container you've just run (yes, Docker uses an internal virtual network connecting your containers and your host machine).
After that, you can connect to it using the IP and port combination, something like:
172.17.0.2:27017
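For example, from the VM itself you could then point a client at that address (assuming the mongo shell is installed on the host):
mongo --host 172.17.0.2 --port 27017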
When you publish the port, you can select which host interface to publish on:
docker run -p 127.0.0.1:27017:27017 --name da-mongo \
-v ~/mongo-data:/data/db -d mongo
That publishes container port 27017 to port 27017 on the host's 127.0.0.1 interface. You can only choose the interface on the host side; the container itself must still bind to 0.0.0.0.
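You can confirm the narrower binding on the host afterwards, for example:
sudo netstat -tulpn | grep 27017   # should now show 127.0.0.1:27017 instead of 0.0.0.0:27017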
