I've read this post and tried adding ports: "7080:7080" in docker-compose.yml, but I still can't connect to the container at 172.18.0.2:7080 (btw, I'm a Docker newbie).
The container is one of several in a DockStation project on Windows 10. The image I'm using is for OpenLiteSpeed with WordPress.
The docker-compose.yml file contents are below:
version: '2'
services:
  gnome-3-28-1804:
    image: ubuntudesktop/gnome-3-28-1804
  firefox:
    image: jlesage/firefox
  browser-box:
    image: jim3ma/browser-box
  openlitespeed:
    image: litespeedtech/openlitespeed
    ports:
      - "7080:7080"
Any ideas please?
UPDATE: IP 172.17.0.1 appears to be the default bridge gateway IP, so I assume 172.18.0.2 for this container is in some way related to that. Docker and DockStation are both running locally on host 10.0.0.10. Not sure if the setup should even be using a bridge. http://localhost:7080/ says ERR_CONNECTION_REFUSED.
UPDATE 2: I'm using Docker for Windows (Docker Desktop). I tried turning off the Windows firewall, but it makes no difference: still ERR_CONNECTION_REFUSED for http://localhost:7080/ and http://10.0.0.10:7080/. There are 3 other containers in the project, but they are not running; only the LiteSpeed one is.
UPDATE 3: I created a new project, installed tutum/hello-world, and ran the new container. The hello-world container is running and I've found no errors in the logs, but neither localhost nor 10.0.0.10 will connect; the error in Chrome is ERR_CONNECTION_REFUSED. Same if I run docker run -d -p 80 tutum/hello-world in the Windows command prompt.
What does this IP (172.18.0.2) represent? Is it a remote machine where DockStation is running?
If this is the case, check whether that port is publicly available on that machine. You added a ports section to docker-compose.yml, which maps the container's port to the machine's port - but whether e.g. a firewall blocks outside access to that port is another matter.
I would first troubleshoot by trying to access localhost:7080 from the 172.18.0.2 machine itself - if that works, your Docker configuration is good and you need to look for the problem in that machine's configuration (e.g. the firewall).
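For example, from the machine running Docker (the OpenLiteSpeed admin console may be served over HTTPS, hence the -k to skip certificate checks):
curl -k https://localhost:7080/
curl http://localhost:7080/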
I tried your Compose file on my system and it works as expected - I can access port 7080 using both my host system's IP/hostname and the container's IP, and I can access ports 80 and 443 using only the container's IP (since they're not mapped to any of the host's ports).
You did not specify whether you're using Docker for Windows or Docker Toolbox - DockStation works with both, but if you're using Docker Toolbox, then you'll have to use the virtual machine's IP or hostname to access port 7080, instead of localhost. If you're using Docker for Windows, then I do not understand what is going on - are you sure the containers are running?
As for where those IPs you mentioned come from: 172.17.0.1 is most likely your host's IP on Docker's default bridged network. docker-compose, by default, creates its own bridged network for every project, so in your project's network your host's IP would be 172.18.0.1. You can view Docker's networks with the command docker network ls and their details with docker network inspect <network-name>.
You should not use any of those IPs for anything, since there's no guarantee they'll remain the same. If you need to connect from outside, map internal container ports to your Docker host's ports, as you did with port 7080. If you need containers to connect to each other, with docker-compose you can use service names as hostnames; without it, you have to connect the containers to the same non-default bridged Docker network and use their container names as hostnames.
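As a sketch of the service-name approach (the app service and its alpine image are illustrative, not from your file):
version: '2'
services:
  openlitespeed:
    image: litespeedtech/openlitespeed
    ports:
      - "7080:7080"   # host port 7080 -> container port 7080
  app:
    image: alpine
    # on the project's bridge network, the hostname "openlitespeed"
    # resolves to that service's container
    command: ping openlitespeed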
This solution worked for me.
docker run -d -p 127.0.0.1:80:80 tutum/hello-world
Apparently you have to specify that you want the port exposed on localhost. Then localhost entered in the browser address bar loaded the Hello World page - hurrah!
Once I changed the ports entry in docker-compose.yml to '127.0.0.1:80:80', it also worked when run from DockStation.
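For reference, the equivalent docker-compose.yml entry would look something like this (the hello-world service name is my own placeholder):
services:
  hello-world:
    image: tutum/hello-world
    ports:
      - "127.0.0.1:80:80"   # bind the host's loopback port 80 to container port 80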
Related
I was doing some devops work, writing a script to turn my current setup (nginx directly on my dev host and on my server) into nginx running inside Docker on both, so I can keep directories etc. the same between them.
The problem is that any ports I expose on a Docker container are only accessible on the host itself, not from any other machines on the host's network.
When I browse to 192.168.0.2 from a machine such as 192.168.0.3, it just says the site took too long to respond, yet browsing to 192.168.0.2 from 192.168.0.2 itself brings up the 'Welcome to nginx' page. The interesting part is that I did a Wireshark capture on en0, port 80, and some packets are actually coming through.
See pastebins of packet inspections:
LAN to docker: https://pastebin.com/4qR2d1GV
Host to docker: https://pastebin.com/Wbng9nDB
I've tried docker run -p 80:80 nginx, docker run -p 192.168.0.2:80:80 nginx, and docker run -p 127.0.0.1:80:80 nginx, but none of these seem to fix anything.
I should see 'Welcome to nginx' when connecting from 192.168.0.3 to 192.168.0.2.
This is my dev environment, which is a macOS 10.13.5 system.
When I push this to my Ubuntu 16.04 server, it works just fine: the containerized nginx is accessible from the WWW, and when I run nginx on the host without Docker, I can connect from external machines on the network too.
Your description is a bit confusing. The 127.0.0.1 in the port mapping binds the published port to localhost only - you won't be able to reach the container from another machine. Remove the IP address and you should be able to access the container from outside localhost.
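For example, using the official nginx image:
# reachable from other machines on the LAN (binds to 0.0.0.0 on the host)
docker run -d -p 80:80 nginx
# reachable only from the host itself
docker run -d -p 127.0.0.1:80:80 nginx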
I am running Docker on my Windows 10 machine with Docker for Windows installed. When I run the Selenium hub image with exposed ports it works fine - I can view the Selenium hub console at localhost:4444 (4444 is the exposed port). Now I want other machines connected to the same network to be able to connect to my Selenium hub container.
How can I achieve this?
I have exposed the port using -p 4444:4444, but that only seems to cover traffic between the host machine and the Docker container:
hub:
  image: selenium/hub:latest
  ports:
    - "4444:4444"
You have already done everything you need, as #mostafa mentioned in the comments.
When you publish ports like this, you map internal Docker network ports onto the host's ports, thus making your services available from the host.
The only thing you have to care about is the host interface to which the services are bound. By default, when you write -p 123:123, Docker binds to 0.0.0.0, which means the service becomes available on all networks the host is connected to.
You can explicitly specify an address in the form -p <host-ip>:123:456 to make the port visible only on the specified interface.
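Applied to your Selenium hub, for illustration (192.168.1.50 stands in for whatever your host's LAN address actually is):
# available on every host interface, so other LAN machines can reach it
docker run -d -p 4444:4444 selenium/hub:latest
# restricted to a single interface
docker run -d -p 192.168.1.50:4444:4444 selenium/hub:latest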
I have a situation where I need to let several jobs inside a single docker container orchestrated by docker-compose 1.16.1 communicate with a legacy system.
The legacy system runs in a Vagrant box on the same host and binds to three ports (7880, 58608, and 58709). I understand that Docker's default configuration allows accessing the host as 172.17.0.1, but for obscure technical reasons stemming from network differences I need the host's ports available on "localhost".
So, how do I make "localhost port 7880" as seen from inside the docker container port forward to the host port 7880?
I have full control of the docker instance and invocation.
Just add a network_mode: host entry to the service in your docker-compose file; the container then shares the host's network stack, so "localhost" is the same for the container and the host.
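A minimal sketch (the jobs service name and image are placeholders; note host networking only works on Linux hosts):
version: '2'
services:
  jobs:
    image: my-jobs-image   # placeholder
    network_mode: host     # share the host's network stack
    # localhost:7880 inside the container now reaches the host's port 7880;
    # "ports:" mappings don't apply in host mode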
I'm a bit of a Docker beginner. I couldn't find any clear, in-depth description of what this option to the docker run command does, and I'm a bit confused about it.
Can we use it to access applications running in Docker containers without specifying a port? For example, if I run a webapp deployed via a Docker image on port 8080 using the option -p 8080:8080 in the docker run command, I know I will have to access it on port 8080 at the Docker container's IP /theWebAppName. But I cannot really work out how the --net=host option works.
After the docker installation you have 3 networks by default:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f3be8b1ef7ce        bridge              bridge              local
fbff927877c1        host                host                local
023bb5940080        none                null                local
I'm trying to keep this simple. If you start a container, by default it will be created inside the bridge (docker0) network.
$ docker run -d jenkins
1498e581cdba        jenkins        "/bin/tini -- /usr..."    3 minutes ago     Up 3 minutes     8080/tcp, 50000/tcp    friendly_bell
In the Dockerfile of jenkins, ports 8080 and 50000 are exposed. Those ports are opened for the container on its bridge network, so everything inside that bridge network can access the container on ports 8080 and 50000. Everything in the bridge network is in the private range "Subnet": "172.17.0.0/16". If you want to access the container from the outside, you have to map the ports with -p 8080:8080. This maps the container's port onto a port of your real server (the host network), so accessing your server on port 8080 routes to the bridge network on port 8080.
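For instance, publishing both Jenkins ports would look like this:
$ docker run -d -p 8080:8080 -p 50000:50000 jenkins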
Now you also have the host network, which does not containerize the container's networking. If you start a container on the host network, it will look like this (it's the first one):
CONTAINER ID        IMAGE          COMMAND                   CREATED           STATUS           PORTS                  NAMES
1efd834949b2        jenkins        "/bin/tini -- /usr..."    6 minutes ago     Up 6 minutes                            eloquent_panini
1498e581cdba        jenkins        "/bin/tini -- /usr..."    10 minutes ago    Up 10 minutes    8080/tcp, 50000/tcp    friendly_bell
The difference is in the ports. Your container is now inside the host network, so if you open port 8080 on your host you will reach the container immediately.
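For reference, that first container was started on the host network along these lines:
$ docker run -d --net=host jenkins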
$ sudo iptables -I INPUT 5 -p tcp -m tcp --dport 8080 -j ACCEPT
I've opened port 8080 in my firewall, and when I now access my server on port 8080 I reach my Jenkins. I think this blog is also useful for understanding it better.
The --net=host option is used to make the programs inside the Docker container look like they are running on the host itself, from the perspective of the network. It allows the container greater network access than it can normally get.
Normally you have to forward ports from the host machine into a container, but when the containers share the host's network, any network activity happens directly on the host machine - just as it would if the program was running locally on the host instead of inside a container.
While this does mean you no longer have to expose ports and map them to container ports, it means you have to edit your Dockerfiles to adjust the ports each container listens on, to avoid conflicts as you can't have two containers operating on the same host port. However, the real reason for this option is for running apps that need network access that is difficult to forward through to a container at the port level.
For example, if you want to run a DHCP server then you need to be able to listen to broadcast traffic on the network, and extract the MAC address from the packet. This information is lost during the port forwarding process, so the only way to run a DHCP server inside Docker is to run the container as --net=host.
Generally speaking, --net=host is only needed when you are running programs with very specific, unusual network needs.
Lastly, from a security perspective, Docker containers can listen on many ports, even though they only advertise (expose) a single port. Normally this is fine, as you only forward the single expected port; however, if you use --net=host you'll get all the container's ports listening on the host, even those that aren't listed in the Dockerfile. This means you need to check the container closely (especially if it's not yours, e.g. an official one provided by a software project) to make sure you don't inadvertently expose extra services on the machine.
Remember one point: the host networking driver only works on Linux hosts; it is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
You can also create your own network with --net="anyname".
This is done to isolate services in different containers from each other.
Suppose the same service is running in different containers with the same port mapping: the first container starts fine, but the same service in the second container will fail. To avoid this, either change the port mappings or create a network.
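A quick sketch of that approach ("anyname" and the nginx image are just examples):
# create a user-defined bridge network
docker network create anyname
# each container gets its own IP on that network, so the same internal
# port doesn't clash; containers reach each other by container name
docker run -d --net=anyname --name web1 nginx
docker run -d --net=anyname --name web2 nginx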
What could be the reason for Docker containers not being able to connect via ports to the host system?
Specifically, I'm trying to connect to a MySQL server that is running on the Docker host machine (172.17.0.1 on the Docker bridge). However, for some reason port 3306 is always closed.
The steps to reproduce are pretty simple:
Configure MySQL (or any service) to listen on 0.0.0.0 (bind-address=0.0.0.0 in ~/.my.cnf)
run
$ docker run -it alpine sh
# apk add --update nmap
# nmap -p 3306 172.17.0.1
That's it. No matter what I do it will always show
PORT     STATE  SERVICE
3306/tcp closed mysql
I've tried the same with an ubuntu image, a Windows host machine, and other ports as well.
I'd like to avoid --net=host if possible, simply to make proper use of containerization.
It turns out the IPs weren't correct. Nothing was blocking the ports, and the services were running fine too. ping and nmap showed the IP as online, but for some reason it wasn't the host system.
Lesson learned: don't rely on route inside the container to return the correct host address. Instead, check ifconfig or ipconfig on the Linux or Windows host respectively and pass that IP into the container via an environment variable.
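For illustration (HOST_IP is a name I've picked, and 192.168.1.100 stands in for whatever address ifconfig/ipconfig actually reports):
# pass the host's real LAN address into the container
docker run -it -e HOST_IP=192.168.1.100 alpine sh
# inside the container, the service is then reachable at $HOST_IP:
# nmap -p 3306 $HOST_IP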
Right now I'm transitioning to docker-compose and have put all the required services into containers, so the host system doesn't need to be involved and I can simply rely on Docker's DNS. This is much more satisfying.