Access docker container from Local Area Network devices - docker

I'm brand new to Docker; I am running Docker Desktop for Mac and I have a container with the IP 192.168.73.10.
I set up port forwarding to ports 80 and 443 during initial setup. I can access the web service on this container from the local host (my Mac) just fine; however, all devices connected to my LAN are on a 10.20.0.0/24 subnet.
How exactly do I access the web service on the container from devices on my LAN (10.20.0.0/24 subnet)? I have ports 80 and 443 open on my Mac. Haven't been able to find any helpful answers on the forum. Please help!

There are a couple of ways. Let's say, for example, you started the container like this:
docker run --restart always -p 9017:80 -d --name organizr --net=my-bridge organizrtools/organizr-v2
In the above case you can connect to the site on port 9017, since you published that port on your machine. So, if your machine's IP is, for example, 10.20.0.1, you'd use http://10.20.0.1:9017. You can use that from any machine on your LAN.
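Applied to the original question above, a minimal sketch might look like this (the image name is a placeholder, not something from the question):
# Hypothetical sketch: publish the container's web ports on the Mac so devices
# on the 10.20.0.0/24 LAN can reach them via the Mac's own address
docker run -d --name webapp -p 80:80 -p 443:443 your-web-image
# then from any LAN device: http://<your Mac's 10.20.0.x address>/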
OR if you don't open up a port on your machine, and just go with the port setup within the container, you can call it by its hostname, which by default is also the container's name.
So for example, you created the container like this:
docker run --restart always -d --name organizr --net=my-bridge organizrtools/organizr-v2
Since the default port within the container is port 80, you'd get to the page like this: http://organizr:80. That needs to be called from within one of your Docker networks though.
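For instance, from another container attached to the same network you could check it like this (the curl image is used purely for illustration):
# Hedged example: reach the organizr container by name from inside the my-bridge network
docker run --rm --net=my-bridge curlimages/curl http://organizr:80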

It was actually a firewall issue on my network. Thanks for the suggestions and responses.

Related

Exposing a docker container to the internet

I deployed a ghost blogging platform on my server using docker. Now I want to expose it to the internet but I'm having some difficulties doing so.
I opened port 8000 on my router and forwarded it to port 32769, which is the one assigned to that container. Using port 32769 inside my network I can access the website fine, but when I try to access it from the internet it gives a "took too long to respond" error.
Local IP + PORT: http://10.0.0.140:32769/
Screenshots: Docker port config, port tester, router settings.
This post was also added to Super User, since it was suggested it would get better responses there.
Let's say your application inside Docker is listening on port 8000.
You want to expose your application to the internet.
The request would go: internet -> router -> physical computer (host machine) -> Docker.
You first need to make your application reachable from your host machine. The EXPOSE 8000 instruction in the Dockerfile only documents which port the application uses; it doesn't publish it by itself.
The port should be accessible from your host machine first, so when starting the container from your Docker image you should add the -p parameter, such as
sudo docker run -d -it -p 8000:8000 --name docker_container_name docker_image_name
From now on, your Docker application can be accessed from your host machine, i.e. your physical computer.
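Before touching the router, you can verify this step from the host itself, assuming the application speaks plain HTTP:
# Sanity check from the host machine; assumes the app answers HTTP on port 8000
curl http://localhost:8000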
Forward the port from your router to your host machine.
This time, you can do what you already did in your question.
Access your application from the internet.
If I am thinking correctly, the IP address 10.0.0.140 is just your computer's LAN IP address; it cannot be reached from the internet.
You can only connect to your app via a public internet IP. To find it, check your router for its WAN IP address, which is assigned by your internet service provider, or just google "what is my IP".
What works for me, more or less, is setting up Apache2 as a reverse proxy, redirecting a path in Apache2 to the port of the Docker container. This could probably also be done with NGINX, for example.
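A rough sketch of what that Apache2 setup can look like on a Debian/Ubuntu-style layout (the /blog/ path and host port 32769 are illustrative assumptions, not taken from this answer):
# enable the proxy modules and add a ProxyPass rule pointing at the published container port
sudo a2enmod proxy proxy_http
printf 'ProxyPass "/blog/" "http://127.0.0.1:32769/"\nProxyPassReverse "/blog/" "http://127.0.0.1:32769/"\n' | sudo tee /etc/apache2/conf-available/docker-proxy.conf
sudo a2enconf docker-proxy
sudo systemctl reload apache2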
This way the traffic from the net gets proxied to the container and back to the net, and I see the WordPress site. So regarding the OP's question, the Docker container is now exposed to the internet.
However 1: This still doesn't explain why I don't get return traffic from the Docker container if I access it directly from the net.
However 2: Not all the url's in the WordPress site are correct, but that seems to be a WordPress issue and not a Docker / routing issue.

How to expose the docker container ip to the external network?

I want to expose the container IP to the external network where the host is running, so that I can directly ping the Docker container IP from an external machine.
If I ping the Docker container IP from an external machine, where the machine hosting Docker and the machine I am pinging from are on the same network, I need to get a response.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the web server in this container (depending on firewall settings etc., of course...).
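From another machine on the same network you could then verify it with something like this (192.0.2.10 stands in for the Docker host's LAN IP):
# Hedged check from a second machine; replace the address with your actual host IP
curl http://192.0.2.10/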
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that’s able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable via the published port via the host’s IP address or DNS name. In a Kubernetes context, you need to set up a Service that’s able to route traffic to the Pod (or Pods) that are running your container, and you ultimately reach containers via that Service.
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very-low-level debugging tool that uses a network protocol called ICMP. If sending packets using ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes. I suspect you aren’t actually. Don’t worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
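On the Kubernetes side, a minimal sketch of that Service step with kubectl might look like this (the deployment name and ports are placeholders):
# Hypothetical sketch: expose a Deployment named my-app through a NodePort Service
kubectl expose deployment my-app --port=80 --target-port=8080 --type=NodePort
kubectl get service my-app   # shows the assigned node port reachable from outside the cluster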
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers are using a bridge network of 172.17.0.0/16, you add a static route for the 172.17.0.0/16 network with your Docker host's physical LAN IP as the gateway. You might also need to allow this forwarding in your Docker host's OS firewall configuration.
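For example, on another Linux machine on the LAN, the static route might look like this (192.168.1.50 is a placeholder for the Docker host's LAN IP):
# Hedged example: send traffic for the Docker bridge subnet via the Docker host
sudo ip route add 172.17.0.0/16 via 192.168.1.50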
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped port), you can remove the port mapping from the container entirely.
You need to create a new bridge Docker network and attach the container to this network. You should then be able to connect this way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10 my-new-bridge-network container-name
If the problem persists, reload the Docker daemon or restart the service; this is a known issue.

Docker container doesn't connect to another docker container on server

I'm using a Digital Ocean docker droplet and have 3 docker containers: 1 for front-end, 1 for back-end and 1 for other tools with different dependencies, let's call it back-end 2.
The front-end calls the back-end 1, the back-end 1 in turn calls the back-end 2. The back-end 2 container exposes a gRPC service over port 50051. Locally, by running the following command, I was able to identify the docker service to be running with the IP 127.17.0.1:
docker network inspect bridge --format='{{json .IPAM.Config}}'
Therefore, I understand that my gRPC server is accessible from the following url 127.17.0.1:50051 within the server.
Unfortunately, the gRPC server refuses connections when running from the docker droplet while it works perfectly well when running locally.
Any idea what may be different?
You should generally set up a Docker private network to communicate between containers using their container names; see e.g. How to communicate between Docker containers via "hostname". The Docker-internal IP addresses are subject to change if you delete and recreate a container and aren't reachable from off-host, and trying to find them generally isn't a best practice.
172.17.0.0/16 is a typical default for the Docker-internal IP network (127.0.0.0/8 is the reserved IPv4 loopback network) and it looks like you might have typoed the address you got from docker network inspect.
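A minimal sketch of that approach, with placeholder names rather than the actual ones from the question:
# Hypothetical sketch: put both back ends on one user-defined network and address
# back-end 2 by its container name instead of a bridge IP
docker network create app-net
docker run -d --name backend2 --net=app-net backend2-image
docker run -d --name backend1 --net=app-net backend1-image
# back-end 1 can then dial backend2:50051 as its gRPC target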
Try docker run with the following command:
docker run -d -p {server ip}:12345:50051 {back-end 2 image}
This publishes the container's gRPC port on the given host IP and port, so it will be accessible from other servers.
Note: also check your firewall rules, in case the firewall is blocking access.
You could run Docker bound to an IP and port as shown by Aakash. Please restrict access to that specific IP and port so it can only be reached from the other container's IP and port - this keeps the service private and doesn't allow others in (even other Docker instances within your network).
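One rough way to do that restriction, assuming ufw on the droplet (the peer IP is a placeholder):
# allow only the other server to reach the published gRPC port, deny everyone else;
# note that Docker's own iptables rules can bypass ufw for published ports, so verify the result
sudo ufw allow from 203.0.113.7 to any port 50051 proto tcp
sudo ufw deny 50051/tcp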

Unable to setup networking to access docker container IPs from outside?

Context:
I have a web server hosting a UI from which users can request for emulator instances for my product. Each emulator instance is a webapp running on nodejs. When a user requests an emulator instance from the UI, I spawn a docker container. I would like to return to the user an IP address(+port) from which this emulator container can be accessed.
Note: Presently, docker and the webserver facing the user are running on the same system.
Problems:
1) The default container on the docker0 network is accessible only with its local IP address on the host, e.g. http://172.17.0.5. I can't access the container with http://localhost:32768 (the container was started with -P and was assigned the port 32768). I get a message that the site can't be reached.
2) I can't use the Docker host network driver because the emulator uses ports internally which I don't want to expose on the host network.
3) I don't want to use the macvlan driver because I will be using up too many IPs.
Is it possible to map various ports on the host to IPs on the docker0 subnet? If yes, how do I go about this? If this is possible I could expose the host IP and the container-specific port to the user.
What is best way to give users access to the containers?
How about an nginx container acting as a proxy? Make your containers always follow the same naming pattern.
Serve new app instance:
docker run -d --rm --name=static_prefix__unique_id your_image
Have a wildcard domain:
unique_id.yourdomain.com
Or simply:
yourdomain.com/unique_id
You can dynamically proxy the request (I assume you're using port 3000 for the nodejs app):
proxy_pass http://static_prefix__$extractedNameFromRequestUri:3000
Docker will do the hard work for you and route traffic from outside to the static_prefix__unique_id container.
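For the name-based routing to work, the nginx proxy and the app containers need to share a user-defined network, roughly like this (the network and image names are placeholders):
# Hedged sketch: Docker's embedded DNS resolves static_prefix__unique_id only on a
# user-defined network that both the proxy and the app containers are attached to
docker network create proxy-net
docker run -d --rm --name=static_prefix__unique_id --net=proxy-net your_image
docker run -d --name=edge-proxy --net=proxy-net -p 80:80 -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx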

Port forwarding Ubuntu - Docker

I have following problem:
Assume that I started two Docker containers on the host machine: A and B.
docker run -ti -p 2000:2000 A
docker run -ti -p 2001:2001 B
I want to be able to get to each of these containers FROM THE INTERNET by:
http://example.com:2000
http://example.com:2001
How to reach that?
The rest of the equation here is just normal TCP / IP flow. You'll need to make sure of the following:
If the host has an implicit deny for incoming traffic on its physical interface, you will need to open up ports 2000 and 2001, just like you would for any service (Docker or not); a minimal firewall example follows below.
If the host is behind a NAT or other external means of routing, you'll need to punch holes for those ports there as well.
You'll need the external IP address (either the one attached to the host or the one in front of the NAT allowing access to the ports).
As far as Docker is concerned, you've done what is required to open the ports to the service running in that container correctly.
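For the first point, on an Ubuntu host with ufw, opening those ports might look like this (a minimal example, not specific to the setup above):
# allow incoming traffic to the two published ports on the host firewall
sudo ufw allow 2000/tcp
sudo ufw allow 2001/tcp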
