Docker iptables Error - docker

Hey, I'm quite new to this Docker stuff. I tried to start a Docker container with Bitbucket, but I get this output:
root@rv1175:~# docker run -v bitbucketVolume:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server
6da32052deeba204d5d08518c93e887ac9cc27ac10ffca60fa20581ff45f9959
docker: Error response from daemon: driver failed programming external connectivity on endpoint bitbucket (55d12e0e4d76ad7b7e8ae59d5275f6ee85c8690d9f803ec65fdc77a935a25110): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 7999 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)).
root@rv1175:~#
I get the same output every time I try to start any Docker container. Can someone help me?
P.S. One more question: what does 172.17.0.2 mean? I can only say that it is not my IP.

172.17.0.2 is the IP assigned to the container within the default Docker bridge network (the docker0 virtual interface). Addresses on that network are not reachable from outside the host, though you are instructing the Docker engine to "publish" (in Docker terminology) two ports.
To do so, the engine creates port-forwarding rules with iptables, which forward (in your case) all incoming traffic on ports tcp/7990 and tcp/7999 on any host interface to the same ports at 172.17.0.2 on the docker0 interface (where the process in the container is hopefully listening).
It looks like the DOCKER iptables chain where this happens is not present. Perhaps another tool on your system is manipulating iptables and erasing what the Docker engine sets up. Try to identify it, then restart the Docker engine (it re-creates everything on startup).
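A minimal sketch of that recovery, assuming a systemd-based host:
sudo systemctl restart docker          # the daemon re-creates its chains on startup
sudo iptables -t filter -L DOCKER -n   # the DOCKER chain should now exist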
You can also instruct the engine not to manipulate iptables at all by configuring the Docker daemon appropriately. You would then need to set things up yourself if you want to use the bridge network driver (though you could also use the host driver). Here is a good example of doing so.
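For reference, a sketch of that daemon configuration ("iptables" is a documented daemon option; restart the daemon after editing the file):
# /etc/docker/daemon.json
{
  "iptables": false
}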

Related

Docker: browsing to container on host computer not working

I am running a basic web application (PHP) inside Docker on a Debian VM, using Docker Compose.
When performing sudo docker-compose up -d, all containers start running just fine.
I have my ports setup as follows: 8007 for the application itself, 8008 for PHPMyAdmin, 9009 for Portainer.
The IP of the Debian VM is 192.168.56.102
When browsing using curl inside the VM to http://192.168.56.102:8007 the page loads without issues.
However, when browsing to the same URL on my Windows 10 host (Chrome) I get a connection timeout.
Pinging 192.168.56.102 from host to VM and vice versa works fine, and so does SSH.
Does anyone know why I can't browse to these pages, even though everything works fine within the VM and the host and VM are clearly able to communicate?
Thanks.
I managed to fix the problem.
As suggested by @larsks, some firewall rule inside Debian was blocking the connection. I had already tried /sbin/iptables -F, which flushes the firewall rules and which I thought was enough, but it turns out that's not the case.
After running all these commands, the Debian firewall was completely reset and the issue was fixed:
iptables -F                  # flush all rules in the filter table
iptables -X                  # delete all user-defined chains in the filter table
iptables -t nat -F           # flush the nat table
iptables -t nat -X           # delete user-defined chains in the nat table
iptables -t mangle -F        # flush the mangle table
iptables -t mangle -X        # delete user-defined chains in the mangle table
iptables -P INPUT ACCEPT     # reset the default policies to ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
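Note that this flush also deletes the chains Docker itself created (DOCKER, DOCKER-USER, the isolation chains), so restart the Docker daemon afterwards so it can re-create them, e.g. on a systemd host:
systemctl restart docker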

How to map a docker container ip to a host ip (NAT instead of NAPT)?

The main goal is to do real NAT instead of NAPT. Note that the normal docker run -p ip:port2:port1 command actually does NAPT (address + port translation) instead of NAT (address translation only). Is it possible to map the address only, but keep all exposed ports the same as the container's, like docker run -p=ip1:*:* ..., instead of one by one or as a range?
ps.1. My port range is rather big (22-50070, ssh-hdfs), so the port-range approach won't work.
ps.2. Maybe I need a swarm of virtual machines and to join the host into the swarm.
ps.3. I raised a feature request on GitHub. Not sure if they will accept it, but currently there are 2000+ open issues (it's that popular).
Solution
On Linux, you can access any container by IP and port without any binding (no -p) out of the box. Docker version: CE 17+.
If your host is Windows and Docker is running on a Linux VM (like mine), the only thing you need to do to access the containers is add a route on Windows: route add -p 172.16.0.0 mask 255.240.0.0 ip_of_your_vm. Now you can access all containers by IP:port, without any port mapping, from both the Windows host and the Linux VM.
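For example, from the Linux VM or the routed Windows host (the container IP and port here are hypothetical):
curl http://172.17.0.2:8080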
There are a few options. One is to decide which port range you want to map, then use that range in your docker run:
docker run -p 192.168.33.101:80-200:80-200 <your image>
The above maps ports 80 to 200 of your container, assuming the host IP you want to bind is 192.168.33.101. But unfortunately it is not possible to map a much larger port range, because Docker spawns one iptables process per port to set up the rules, which exhausts memory. It raises an error like the one below:
docker: Error response from daemon: driver failed programming external connectivity on endpoint zen_goodall (0ae6cec360831b46fe3668d6aad9f5f72b6dac5d26cc6c817452d1402d12f02c): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8513 -j DNAT --to-destination 172.17.0.3:8513 ! -i docker0: (fork/exec /sbin/iptables: resource temporarily unavailable)).
This is not the right way to do the mapping in Docker, but it is also not a use case the Docker developers are likely to support, so the above issue may never be fixed. The next option is to run your container without publishing any ports and add the iptables rules below yourself:
DOCKER_IP=172.17.0.2   # the container's IP on the docker0 bridge
ACTION=A               # A adds the rules, D deletes them
IP=192.168.33.101      # the host IP to map to the container
sudo iptables -t nat -$ACTION DOCKER -d $IP -j DNAT --to-destination $DOCKER_IP ! -i docker0
sudo iptables -t filter -$ACTION DOCKER ! -i docker0 -o docker0 -p tcp -d $DOCKER_IP -j ACCEPT
sudo iptables -t nat -$ACTION POSTROUTING -p tcp -s $DOCKER_IP -d $DOCKER_IP -j MASQUERADE
ACTION=A adds the rules and ACTION=D deletes them. This forwards all traffic from your IP to DOCKER_IP. It is only good for a testing server and not recommended on staging or production: Docker adds a lot more rules to prevent other containers from poking into your container, and this setup offers no such protection whatsoever.
I don't think there is a direct way to do what you are asking.
If you use the -P option with docker run, all ports that are exposed using EXPOSE in the Dockerfile automatically get published on random ports of the host. With the -p option, the only way is to specify the option multiple times for multiple ports.

Relationship between docker0, Docker Bridge Driver and Containers

I was watching a YouTube video on Docker networking and saw this slide:
And I'm trying to make sense of it. From the docker0 docs:
"By default, the Docker server creates and configures the host system’s docker0 a network interface called docker0, which is an ethernet bridge device. If you don’t specify a different network when starting a container, the container is connected to the bridge and all traffic coming from and going to the container flows over the bridge to the Docker daemon, which handles routing on behalf of the container."
But I'm still a little confused on the flow of traffic here. Let's say I install Docker on a new host. I assume docker0 is created & configured at installation time. So now my host has this docker0 ethernet bridge on it.
Now let's say I start a container on my new Docker host:
docker run -it -p 9200:9200 -d --name myapp myapp
Since I didn't specify a network driver, bridge is selected for me by default. According to the docs blurb above, the container should now be sending and receiving traffic over that docker0 bridge. However, the diagram indicates that there is no traffic flowing between docker0 and the bridge-based containers (C4, C5, C6), and I'm wondering: why?! Any ideas? Thanks in advance!
You are right, that diagram doesn't exactly match what is happening. I didn't see the video; maybe that picture is a snapshot of a particular moment. We would probably need to watch the video to understand the context.
Anyway, when Docker creates the docker0 interface, it also creates some iptables rules using new chains (DOCKER and DOCKER-ISOLATION). By default, Docker containers are only accessible from your host. Using the -p option on the docker run command, you map ports from your host to the container directly, so that reaching a certain port on your host actually reaches the container. You can check the NAT table before and after running the container using iptables -t nat -L; you'll see the difference and the rule for the mapping.
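For example (the DNAT line is illustrative output for the docker run above; exact formatting varies by iptables version):
iptables -t nat -L DOCKER -n
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:9200 to:172.17.0.2:9200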
And yes, the containers are created on the same network and can try to communicate with each other on that network. By default, the network range used by Docker is 172.17.0.0/16, so your first container will be 172.17.0.2, the second 172.17.0.3, and so on (172.17.0.1 is your docker0 IP).
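You can confirm a container's bridge IP directly; a quick sketch (myapp is the container name from the question):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myapp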

Communicating between Docker containers in different networks on the same host

Any possibility to make containers in different networks within the same host to communicate? Please note that I am not using docker-compose at the moment.
The following is a summary of what I did. I created two networks using the following commands:
docker network create --driver bridge mynetwork1
docker network create --driver bridge mynetwork2
Then I ran two containers on each of these created networks using the commands:
docker run --net=mynetwork1 -it --name=mynet1container1 mycontainerimage
docker run --net=mynetwork1 -it --name=mynet1container2 mycontainerimage
docker run --net=mynetwork2 -it --name=mynet2container1 mycontainerimage
docker run --net=mynetwork2 -it --name=mynet2container2 mycontainerimage
I then identified the IP addresses of each of the containers in these networks using:
docker network inspect mynetwork1
docker network inspect mynetwork2
Using those I was able to communicate between the containers in the same network, but I could not communicate between the containers across the networks. Communication was possible only by adding the containers to the same network.
Much thanks...
Containers in different networks cannot communicate with each other because iptables drops such packets. This is implemented in the DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2 chains of the filter table.
sudo iptables -t filter -vL
Rules can be added to the DOCKER-USER chain to allow communication between different networks. In the above scenario, the following commands will allow ANY container in mynetwork1 to communicate with ANY container in mynetwork2.
First, find the bridge interface names of the two networks (mynetwork1 and mynetwork2). They usually look like br-07d0d51191df or br-85f51d1cfbf6 and can be found using ifconfig or ip link show. Since there are multiple bridge interfaces, to identify the correct ones for the networks of interest, the inet address of the bridge interface (shown by ifconfig) should match the subnet shown by docker network inspect mynetwork1.
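A quicker way to find them, assuming Docker's default naming scheme (br- followed by the first 12 characters of the network ID):
docker network inspect -f '{{.Id}}' mynetwork1 | cut -c1-12
docker network inspect -f '{{.Id}}' mynetwork2 | cut -c1-12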
sudo iptables -I DOCKER-USER -i br-########1 -o br-########2 -j ACCEPT
sudo iptables -I DOCKER-USER -i br-########2 -o br-########1 -j ACCEPT
The rules can be fine-tuned to allow only communication between specific IPs, e.g.,
sudo iptables -I DOCKER-USER -i br-########1 -o br-########2 -s 172.17.0.2 -d 172.19.0.2 -j ACCEPT
sudo iptables -I DOCKER-USER -i br-########2 -o br-########1 -s 172.19.0.2 -d 172.17.0.2 -j ACCEPT
Issue
Two containers cannot communicate because they are not on the same network.
Solution a)
Connect one container to the other container's network (this may not meet the constraints you have).
Solution b)
Create a third network and plug both containers into it.
How to
The docker run command accepts only one occurrence of the --net option, so what you have to do is docker start the containers and then docker network connect them to a shared network, as sketched below.
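A minimal sketch of solution b), reusing the container names from the question (mynetwork3 is a hypothetical name for the shared network):
docker network create mynetwork3
docker network connect mynetwork3 mynet1container1
docker network connect mynetwork3 mynet2container1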
The answer you are looking for is here: https://stackoverflow.com/a/34038381/5321002
According to the Docker docs, containers can only communicate within networks, not across them. You can, however, attach a container to two networks and communicate that way.
Edit: although at that point, why have two networks in the first place?
Here's the link:
https://docs.docker.com/engine/userguide/networking/dockernetworks/
-Bruce

Remote access to webserver in docker container

I've started using docker for dev, with the following setup:
Host machine - ubuntu server.
Docker container - webapp w/ tomcat server (using https).
As far as host-container access goes - everything works fine.
However, I can't manage to access the container's webapp from a remote machine (though still within the same network).
When running
docker port <container-id> 443
the output is as expected:
172.16.*.*:<random-port>
so Docker's port binding seems fine.
Any ideas?
Thanks!
I figured out what I missed, so here's a simple flow for accessing a Docker container's webapp from remote machines:
Step #1: Bind physical host ports (e.g. 22, 443, 80, ...) to the container's virtual ports. Possible syntax:
docker run -p 127.0.0.1:443:3444 -d <docker-image-name>
(see docker docs for port redirection with all options)
Step #2: Redirect the host's physical port to the container's allocated virtual port. Possible (Linux) syntax:
iptables -t nat -A PREROUTING -i <host-interface-device> -p tcp --dport <host-physical-port> -j REDIRECT --to-port <container-virtual-port>
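For example, with hypothetical values matching the mapping above (eth0 as the host interface, host port 443, container port 3444):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3444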
That should cover the basic use case.
Good luck!
Correct me if I'm wrong, but as far as I'm aware the Docker host creates a private network for its containers which is inaccessible from the outside. That said, your best bet would probably be to access the container at {host_IP}:{mapped_port}.
If your container was built with a Dockerfile that has an EXPOSE statement, e.g. EXPOSE 443, then you can start the container with the -P option (as in "publish" or "public"). The port will be made available to connections from remote machines:
$ docker run -d -P mywebservice
If you didn't use a Dockerfile, or if it didn't have an EXPOSE statement (it should!), then you can also do an explicit port mapping:
$ docker run -d -p 80 mywebservice
In both cases, the result will be a publicly-accessible port:
$ docker ps
9bcb… mywebservice:latest … 0.0.0.0:49153->80/tcp …
Last but not least, you can force the port number if you need to:
$ docker run -d -p 8442:80 mywebservice
In that case, connecting to your Docker host IP address on port 8442 will reach the container.
There are some alternatives for accessing Docker containers from an external device (on the same network); check out this post for more information: http://blog.nunes.io/2015/05/02/how-to-access-docker-containers-from-external-devices.html
