For context - I am attempting to deploy OKD in an air-gapped environment, which requires mirroring an image registry. This private, secured registry is then pulled from by other machines in the network during the installation process.
To describe the environment: the host machine where the registry container runs is on CentOS 7.6. The other machines are all VMs running Fedora CoreOS under libvirt. The VMs and the host are connected by a libvirt virtual network whose DHCP settings (dnsmasq) give the VMs static IPs. The host machine also hosts the DNS server, which, as far as I can tell, is configured properly: I can ping every machine from every other machine by its fully qualified domain name and reach specific ports (such as the port the Apache server listens on). Podman is used instead of Docker for container management for OKD, but as far as I can tell the commands are exactly the same.
I have the registry running in the air-gapped environment using the following command:
sudo podman run --name mirror-registry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e REGISTRY_AUTH=htpasswd \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.pem -e REGISTRY_HTTP_TLS_KEY=/certs/registry-key.pem \
-d docker.io/library/registry:latest
It is accessible using curl -u username:password https://host-machine.example.local:5000/v2/_catalog, which returns {"repositories":[]}. I believe this confirms that my TLS and authentication configurations are correct. However, if I transfer the ca.pem file (used to sign the SSL certificates the registry uses) over to one of the VMs on the virtual network and attempt the same curl command, I get an error:
connect to 192.168.x.x port 5000 failed: Connection refused
Failed to connect to host-machine.example.local port 5000: Connection refused
Closing connection 0
This is quite strange to me, as I've been able to use this method to communicate with the registry from the VMs in the past, and I'm not sure what has changed.
After some further digging, it seems there is some sort of issue with the port itself, but I can't be sure where the issue stems from. For example, if I run sudo netstat -tulpn | grep LISTEN on the host, I see a line indicating that Podman (conmon) is listening on the correct port:
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 48337/conmon
but if I test whether the port is accessible from the VM (nc -zvw5 192.168.x.x 5000), I get a similar error: Ncat: Connection refused. The same test against any of the other listening ports on the host connects successfully.
Please note, I have completely disabled firewalld, so as far as I know, all ports are open.
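For completeness, checks along these lines can confirm the firewall state (even with firewalld stopped, stale iptables rules could in principle still be loaded; the commands are illustrative):
sudo systemctl status firewalld   # confirm the service really is inactive
sudo iptables -L -n -v            # list any rules still loaded in the filter table
sudo tcpdump -i any port 5000     # if tcpdump is installed: watch whether the VM's packets arrive at all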
I'm not sure if the issue is with my DNS settings, or the virtual network, or with the registry itself and I'm not quite sure how to further diagnose the issue. Any insight would be much appreciated.
I am running a basic web application (PHP) inside Docker on a Debian VM, using Docker Compose.
When performing sudo docker-compose up -d, all containers start running just fine.
I have my ports set up as follows: 8007 for the application itself, 8008 for phpMyAdmin, and 9009 for Portainer.
The IP of the Debian VM is 192.168.56.102
When browsing to http://192.168.56.102:8007 using curl inside the VM, the page loads without issues.
However, when browsing to the same URL on my Windows 10 host (Chrome) I get a connection timeout.
Pinging 192.168.56.102 from host to VM and vice versa works fine, and so does SSH.
Does anyone know why I can't browse to these pages, even though everything works fine within the VM, and the host and VM are clearly able to communicate?
Thanks.
I managed to fix the problem.
As suggested by @larsks, some firewall rule inside Debian was blocking the connection. I had already tried /sbin/iptables -F, which flushes the rules in the default (filter) table and which I thought was enough, but it turns out that's not the case.
After running all of these commands, the Debian firewall was completely reset and the issue was fixed:
iptables -F                 # flush all chains in the filter table
iptables -X                 # delete user-defined chains in the filter table
iptables -t nat -F          # same for the nat table
iptables -t nat -X
iptables -t mangle -F       # and the mangle table
iptables -t mangle -X
iptables -P INPUT ACCEPT    # reset the default policies to ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
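Note that these rules are not persistent across reboots. To double-check what is currently loaded, and optionally save the clean state (the rules.v4 path assumes the iptables-persistent package on Debian):
iptables-save                            # print the currently loaded rules
iptables-save > /etc/iptables/rules.v4   # persist them across reboots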
Hey, I'm quite new to this Docker stuff. I tried to start a Docker container with Bitbucket, but I get this output:
root@rv1175:~# docker run -v bitbucketVolume:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server
6da32052deeba204d5d08518c93e887ac9cc27ac10ffca60fa20581ff45f9959
docker: Error response from daemon: driver failed programming external connectivity on endpoint bitbucket (55d12e0e4d76ad7b7e8ae59d5275f6ee85c8690d9f803ec65fdc77a935a25110): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 7999 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)).
root@rv1175:~#
I get the same output every time I try to start any Docker container. Can someone help me?
P.S. One more question: what does 172.17.0.2 mean? I can only say that this is not my IP.
172.17.0.2 would be the IP assigned to the container within the default Docker bridge network (docker0 virtual interface). These are not reachable from the outside, though you are instructing the Docker engine to "publish" (in Docker terminology) two ports.
To do so, the engine creates port forwarding rules with iptables, which forward (in your case) all incoming traffic to ports tcp/7990 and tcp/7999 on all interfaces of the host to the same ports at 172.17.0.2 on the docker0 interface (where the process in the container is hopefully listening).
It looks like the DOCKER iptables chain where this happens is not present. Maybe you have other tools manipulating iptables that might be erasing what the Docker engine is doing. Try to identify them and restart the Docker engine (it should re-create everything on startup).
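A quick sanity check, and a way to let the engine rebuild its rules (assuming a systemd-based host):
sudo iptables -t filter -L DOCKER   # prints "No chain/target/match by that name" if the chain is missing
sudo systemctl restart docker       # the engine re-creates its chains on startup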
You can also instruct the engine not to manipulate iptables by configuring the Docker daemon appropriately. You would then need to set things up yourself if you want to use the network bridge driver (though you could also use the host driver). Here is a good example of doing so.
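For reference, that setting is a one-liner in the daemon configuration (typically /etc/docker/daemon.json; restart the daemon afterwards):
{
  "iptables": false
}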
I am running the latest Docker CE, 17.09, under Windows 10 Pro, and with two different examples I am getting Permission denied.
Docker site example:
docker run -d -p 80:80 --name webserver nginx
AWS site Docker example:
docker run -p 80:80 hello-world
Both returned the same error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint XXXXX: Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied.
I solved my issue on Windows 10 Pro: it turned out I somehow had the World Wide Web Publishing Service turned on. It took me a while to find that, after noting via netstat -a -n that I had a :80 listener somewhere. Silly me. Shut it down, and I was fine with port 80.
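For anyone hitting the same thing, the command-line version of those steps would be roughly as follows (W3SVC is the usual short name of that service; adjust if your install differs):
netstat -a -n | findstr ":80"
net stop W3SVC
sc config W3SVC start= disabled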
Change the port using these commands, as follows:
docker container ls    # show container info; note the PORTS column
docker stop webserver  # stop the current webserver container
docker rm webserver    # and remove it
docker run -d -p 8080:80 --name webserver nginx   # (or 8000:80)
Finally, browse to localhost:8080 to check whether the connection succeeds.
The problem is that well-known ports like 80, 443, and 22 (in general, ports below 1024) are system-protected, so you need privileges to use them; here it should be enough to be a system administrator and execute the command as administrator.
If it doesn't have to be :80, try using another port, like :8080. If that doesn't help and the error doesn't change, the problem goes deeper.
On macOS Mojave Version 10.14.2 this command worked for me:
sudo apachectl stop
Before executing this command, run
sudo lsof -i -P | grep "LISTEN"
and check whether httpd is listed as the listener on :80.
If it is, then it's actually the built-in macOS Apache that causes the problem.
The first course of action you should take is to run the command:
netstat -aon | findstr [port#]
This will tell you if a process is running on the given port. If that is the case then you can kill the process with the command:
taskkill /PID [PID] /F
This will kill the process using that port. You will then be able to bind a new process to the port.
I have also come across a case where netstat -aon did not report any process running on the port I wanted to use, but something certainly had a process on it and wasn't allowing me to bind a new one. I was able to remedy the problem with the following:
Start Windows in Safe Mode with Networking
In powershell/cmd run the command:
netsh int ipv4 add excludedportrange protocol=tcp startport=[PORT] numberofports=1
This will reserve the port so that when you boot back into normal Windows mode, no application can steal the port before you can use it.
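Should you need the port back later, the reservation can be removed with the matching delete command:
netsh int ipv4 delete excludedportrange protocol=tcp startport=[PORT] numberofports=1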
The reason I got this error was that the port was already in use. I changed to a different port and no longer received the error.
On Windows 10 Pro, running the Docker command from a CMD window as Administrator, I still had the issue (as per @mikael-chudinov above).
I really want to use port 80 so the other answers are not suitable for me.
Please see this blog post by Jens at www.jens79.de
From PowerShell, run the command:
Get-NetTCPConnection -LocalPort 80 | Format-List
For me this showed a single process with PID 4.
In the system monitor this appears as the "System" process, but as per the article linked above, it is actually IIS running as the "World Wide Web Publishing Service".
Assuming that you don't need IIS running, in the Windows Services console, Stop and Disable "World Wide Web Publishing Service", then try again.
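If you prefer PowerShell over the Services console, the equivalent would be roughly (W3SVC being the service's short name on a standard install):
Stop-Service -Name W3SVC
Set-Service -Name W3SVC -StartupType Disabled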
The port is in use by Visual Studio running without debugging. Close VS, then reopen.
I recently came across this issue while trying to load up a docker environment to maintain an old project. In my case, the default instance of Apache was running on my Mac after a recent OS update, and needed to be shut down before port 80 was available. You can shut it down with this command:
sudo /usr/sbin/apachectl stop
If you're still having trouble, you could use the following command to see the PIDs of what's running on a given port (in this case, 80):
lsof -t -i :80
You can attempt to shut down whatever is running on those ports with the kill command; just be sure you aren't going to kill anything important!
kill $(lsof -t -i :80)
This helped me. The port mentioned in the error message was indeed within one of the reserved port ranges: Windows can't bind to ports above 49690.
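You can list the currently reserved ranges and check whether your port falls inside one of them with:
netsh int ipv4 show excludedportrange protocol=tcp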
I had the same issue: my container just would not start, displaying the following error message when trying to start it:
Error response from daemon: driver failed programming external connectivity on endpoint ..... Error starting userland proxy: Bind for 0.0.0.0:1521: unexpected error Permission denied.
with the following command for starting an Oracle container:
docker run -d -p 1521:1521 ...
For me, I think it was the result of an Oracle instance that was not properly uninstalled, with its port still held or something similar. But simply changing to another port fixed the issue, as shown below:
docker run -d -p 1523:1521 ...
Listening on a privileged port (lower than 1024) requires special capabilities from the kernel.
You have two options:
1) Run your container as root - don't do it.
2) Give the container only the relevant capability - in your case the NET_BIND_SERVICE capability, which allows binding a socket to privileged ports.
So if the image you use runs as root by default, first make sure to create a non-root user and attach it to a group (and switch to it with a USER nginx instruction afterwards, so the process actually runs as that user) - add this to your Dockerfile:
RUN set -x \
&& addgroup --system --gid 101 nginx \
&& adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos "nginx user" --shell /bin/false --uid 101 nginx
And run the container with net_bind_service only:
docker run -it -p 8080:80 --cap-drop all --cap-add net_bind_service <image-name>:<tag>
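To confirm the container really ends up with only that capability, one rough check (this assumes the image contains grep and has no interfering entrypoint; decode the mask with capsh --decode if needed):
docker run --rm --cap-drop all --cap-add net_bind_service <image-name>:<tag> grep CapEff /proc/1/status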
I also had the same issue. If a proxy is already installed on your system, the container's port is wrapped by that proxy, and you need to run the container through the proxy once; you will not need to do this the next time.
The problem is that you do not have permission to run the image on port 80. To fix this, add --user root to your docker run command. This will provide root privileges and it will run.
Check whether nginx is running on the host machine and stop it:
sudo service nginx stop
1) docker run -p 80:80 nginx
If command 1 doesn't work, then try command 2:
2) docker run -d -p 8080:80 --name webserver nginx
After that, go to the browser and type localhost:8080.
The above should solve it.
The main goal is to do real NAT instead of NAPT. Note that the normal docker run -p ip:port2:port1 command actually does NAPT (address + port translation) instead of NAT (address-only translation). Is it possible to map the address only, but keep all exposed ports the same as the container's, like docker run -p=ip1:*:* ..., instead of one by one or as a range?
P.S. 1: My port range is rather big (22-50070, ssh through HDFS), so the port-range approach won't work.
P.S. 2: Maybe I need a swarm of virtual machines and to join the host into the swarm.
P.S. 3: I raised a feature request on GitHub. Not sure if they will accept it, but currently there are 2000+ open issues (it's that popular).
Solution
On Linux, you can access any container by IP and port without any binding (no -p) out of the box. Docker version: CE 17+.
If your host is Windows and Docker is running on a Linux VM like mine, the only thing you need to do to access the containers is add a route on Windows: route add -p 172.16.0.0 mask 255.240.0.0 ip_of_your_vm. Now you can access all containers by IP:port, without any port mapping, from both the Windows host and the Linux VM.
There are a few options. One is to decide which port range you want to map, then use that range in your docker run:
docker run -p 192.168.33.101:80-200:80-200 <your image>
The above will map all ports from 80 to 200 on your container, assuming 192.168.33.101 is a free IP on your host. Unfortunately it is not possible to map a much larger port range, as Docker spawns an iptables process per port while setting up the tables and exhausts memory. It would raise an error like the one below:
docker: Error response from daemon: driver failed programming external connectivity on endpoint zen_goodall (0ae6cec360831b46fe3668d6aad9f5f72b6dac5d26cc6c817452d1402d12f02c): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8513 -j DNAT --to-destination 172.17.0.3:8513 ! -i docker0: (fork/exec /sbin/iptables: resource temporarily unavailable)).
This is arguably not the right way for Docker to handle the mapping, but it doesn't seem to be a use case the maintainers would embrace, so the above issue may never be fixed. The next option is to run your container without any port publishing and use the iptables rules below:
DOCKER_IP=172.17.0.2   # the container's IP on the docker0 bridge
ACTION=A               # A = append the rules, D = delete them
IP=192.168.33.101      # the host IP to dedicate to the container
sudo iptables -t nat -$ACTION DOCKER -d $IP -j DNAT --to-destination $DOCKER_IP ! -i docker0
sudo iptables -t filter -$ACTION DOCKER ! -i docker0 -o docker0 -p tcp -d $DOCKER_IP -j ACCEPT
sudo iptables -t nat -$ACTION POSTROUTING -p tcp -s $DOCKER_IP -d $DOCKER_IP -j MASQUERADE
ACTION=A adds the rules and ACTION=D deletes them. This sets up forwarding of all traffic from your IP to DOCKER_IP. It is only good for a testing server - not recommended on staging or production, because Docker normally adds a lot more rules to prevent other containers from poking into your container, and this setup offers no such protection.
I don't think there is a direct way to do what you are asking.
If you use the -P option with docker run, all ports that are exposed using EXPOSE in the Dockerfile automatically get published to random ports on the host. With the -p option, the only way is to specify the option multiple times, once per port.