I'm running docker behind nginx, with the registry container and my own container running a gunicorn django webapp.
The Django webapp runs fine outside the Docker container. However, as soon as I try to run it from within the container, it fails with this message from nginx:
2018/03/20 15:39:30 [error] 14767#0: *360 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.38.181.123, server: app.ukrdc.nhs.uk, request: "POST /convert/pv-to-rda/ HTTP/1.1", upstream: "http://127.0.0.1:9300/convert/pv-to-rda/", host: "app.ukrdc.nhs.uk"
when I do a GET on the webapp.
The registry container works fine.
I've exposed the right port in the Dockerfile. The run command is:
docker run -ti -p 9300:9300 ukrdc/ukrdc-webapi
I've added the port to iptables. Output from iptables -S:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9300 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.20/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9300 -j ACCEPT
The signs point to something being wrong with my container and/or firewall rules, but I'm not sure what. I think I'm doing the same as the registry container.
Running on CentOS 6.9 with Docker version 1.7.1, build 786b29d/1.7.1.
The answer is to run the Django app with:
exec gunicorn mysite.wsgi \
- -b 127.0.0.1:9300 \
+ -b 0.0.0.0:9300 \
--name ukrdc_django \
--workers 3 \
--log-level=info \
I'd bound it to the local loopback address. It's now bound to all addresses, and it works.
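A quick way to confirm what the app is actually bound to, from inside the running container (a sketch; whether netstat or ss is present in the image is an assumption):
docker exec -it <container-id> sh -c 'netstat -tlnp || ss -tlnp'
# 127.0.0.1:9300 -> only reachable from inside the container
# 0.0.0.0:9300   -> reachable from the host via the published port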
Try adding -P to the run command:
docker run -P <container>
That will automatically publish the exposed ports. Note the difference: exposing a port makes it available to other containers on the Docker network, whereas publishing a port makes it available to the host machine as well as to other containers on the network.
I think you're using EXPOSE when you really want a -P or -p flag on your docker run command, where "P" is for "publish". According to the Docker docs, EXPOSE is just for linking ports between containers, whereas docker run -P <container> or docker run -p 1234:1234/tcp <container> will actually make a port or ports available outside the container so nginx can reach it from the host machine. Another option is to run nginx in a container on the same network (there is an easy-to-use standard nginx container out there); nginx could then reach all of the exposed ports on that network, although you would still need to publish at least one of the nginx container's ports.
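For illustration, the two publishing styles might look like this (the image name is taken from the question; docker port just reports what ended up published):
docker run -d -P ukrdc/ukrdc-webapi            # publish every EXPOSEd port on a random high host port
docker run -d -p 9300:9300 ukrdc/ukrdc-webapi  # publish one specific host:container port
docker port <container-id>                     # list the resulting mappings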
Here's another SO post that helped me a lot with expose vs. publish:
Difference between "expose" and "publish" in docker
Related
I have installed a docker service:
docker run --rm --name some-nginx -d -p 8080:80 nginx:stable-alpine
Then I added an iptables rule for it:
PORT=8080
/sbin/iptables -A OUTPUT -p tcp --dport $PORT -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p tcp --sport $PORT -m state --state ESTABLISHED -j ACCEPT
When I check whether the port is open, it says it is:
$ netstat -anp | grep 8080
tcp6 0 0 :::8080 :::* LISTEN 11211/docker-proxy
When I investigate further, it still seems open:
iptables-save | grep 8080
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.3:80
-A INPUT -p tcp -m tcp --sport 8080 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
I see Docker added some rules, but my rules are present too.
The problem is I can't connect to nginx on the given port!
telnet XX.XX.XX.XX 8080
Trying XX.XX.XX.XX...
telnet: Unable to connect to remote host: Connection timed out
When I turn off iptables it works, and the browser even shows the nginx start page.
How do I get iptables and Docker to work together on the given port?
Here are some pages you can read:
https://fralef.me/docker-and-iptables.html
https://riptutorial.com/docker/topic/9201/iptables-with-docker
I also have a Linux box; try this:
iptables -A INPUT -p tcp -m tcp --dport xxx -j ACCEPT
iptables -A OUTPUT -p tcp --sport xxx -m state --state ESTABLISHED -j ACCEPT
or
iptables -A OUTPUT -o eth0 -p tcp --dport xxx -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport xxx -m state --state ESTABLISHED -j ACCEPT
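One extra check that is often useful here (a suggestion of mine, not part of the original answer): traffic to a published port is DNATed to the container, so it traverses the FORWARD chain rather than INPUT, and that is usually where it gets dropped. Watching the packet counters can confirm it:
iptables -vnL FORWARD --line-numbers
iptables -vnL DOCKER-USER --line-numbers   # only if your Docker version creates the DOCKER-USER chain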
Let me know if it helps.
TL;DR: I added some iptables rules to a Docker container to limit its internet access. It's working fine, except that now I am unable to access the container's app from the host machine, though I can do so from within the container itself.
I have a container running a webapp. This container uses mysql, redis, etc. Every dependency is remote, accessible by an IP address on a particular port.
So, for instance, mysql is accessible on ip 13.255.255.255
What I want is to allow the container to reach only the mysql IP address and no other. There are a few curl requests originating from within the code that I do not want to go beyond my host machine's network.
I've added an entrypoint script in Docker which adds some iptables rules inside the container.
ALLOWED_CIDR1=172.0.0.0/16
ALLOWED_CIDR2=13.255.255.255 #For mysql access
#iptables -P FORWARD DROP # we aren't a router
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 127.0.0.1 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.1 -j ACCEPT
iptables -P INPUT DROP # Drop everything we don't accept
iptables -A INPUT -s 0.0.0.0 -j ACCEPT
iptables -A INPUT -s ::1 -j ACCEPT
iptables -A OUTPUT -d ::1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP
When I run the container and do
docker-compose exec <container-name> curl http://google.com
I get the following in response:
curl: (6) Could not resolve host: google.com
which is expected. Now, when I do
docker-compose exec <container-name> curl http://0.0.0.0
I get the following response:
"Hello World!"
Which again is expected. However, when I do curl http://0.0.0.0 from my host machine, the following is the output:
* Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0.0.0.0 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: 0.0.0.0
> User-Agent: curl/7.62.0
> Accept: */*
> // Hangs here
So, I am unable to connect to http://0.0.0.0 from the host machine, but I can connect from inside the Docker container.
I was utterly stupid to overlook the iptables rules I had posted myself.
Wrong set of rules
ALLOWED_CIDR1=172.0.0.0/16
ALLOWED_CIDR2=13.255.255.255 #For mysql access
#iptables -P FORWARD DROP # we aren't a router
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 127.0.0.1 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.1 -j ACCEPT
iptables -P INPUT DROP # Drop everything we don't accept
iptables -A INPUT -s 0.0.0.0 -j ACCEPT
iptables -A INPUT -s ::1 -j ACCEPT
iptables -A OUTPUT -d ::1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP
You can see, in the above section:
There is no OUTPUT ACCEPT rule for IP 0.0.0.0.
There is no OUTPUT ACCEPT rule for 192.168.x.x, which is the IP address of my docker0 network interface.
The container and the host machine communicate over the docker0 network interface when bridge network mode is used to launch the container (which happened to be my case).
Another thing I noticed: I didn't need the 0.0.0.0 or 127.0.0.1 rules at all. Since the entrypoint script adds the iptables rules inside the Docker container, we may never want to access the webapp from within the container itself. So why bother with 127.0.0.1?
All in all, here is what I did:
Got my IP address for the docker0 network with ip addr show docker0. The output was 192.168.144.1/20.
Added 192.168.0.0/16 to the ACCEPT rules in my entrypoint iptables rules, which covers the IP address from step 1.
Now I can access my container from outside.
My iptables rules look like this now:
ALLOWED_CIDR1=172.0.0.0/16
ALLOWED_CIDR2=13.255.255.255
ALLOWED_CIDR3=192.168.0.0/16
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -P INPUT DROP # Drop everything we don't accept
iptables -P OUTPUT DROP
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR2 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR3 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR1 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR3 -j ACCEPT
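For completeness, here is a minimal sketch of how such an entrypoint is usually wired up (the file name and the final exec line are my assumptions, not from the question); note that running iptables inside a container needs the NET_ADMIN capability, e.g. docker run --cap-add=NET_ADMIN or a cap_add entry in docker-compose:
#!/bin/sh
# entrypoint.sh -- hypothetical wrapper: install the firewall rules above,
# then hand control to the container's real command (gunicorn, etc.).
set -e
# ... the ACCEPT/DROP rules listed above go here ...
exec "$@"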
I am running a Google Cloud container instance (cos-beta-70-11021-29-0) and I run nginx:
docker run --name xx -d -p 80:80 nginx
I can access the nginx welcome page despite port 80 not being open in iptables:
$ sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 23 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
Why is that so?
In order to publish a port, the internal Docker network has to be connected to the external one, so Docker adds its own DOCKER chain to iptables and manages it itself. When you publish a port on a container using the -p 80:80 option, Docker adds a rule to that chain.
On your rules list you can find:
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
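That ACCEPT sits in the FORWARD path: the packet is first DNATed to the container by the nat table's DOCKER chain and then forwarded over docker0, so it never traverses INPUT and your INPUT DROP policy doesn't apply to it. You can see the DNAT side like this (a sketch; the rule mirrors the ACCEPT above):
sudo iptables -t nat -S DOCKER
# expected output, something like:
# -A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80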
If you don't want Docker to fiddle with iptables, you can add the --iptables=false argument to your Docker daemon invocation, but then the publish part of your docker command probably won't work automatically, and you might need to add some additional iptables rules yourself. I haven't tested that.
You can set that option in /etc/default/docker or in /etc/systemd/system/docker.service.d, depending on whether you're using systemd, upstart, or something else.
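As a sketch of the systemd variant (the drop-in file name is illustrative, and the dockerd path may differ on your distro):
# /etc/systemd/system/docker.service.d/no-iptables.conf
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd --iptables=false
sudo systemctl daemon-reload
sudo systemctl restart docker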
You might want to check either of these links:
https://docs.docker.com/config/daemon/systemd/
https://docs.docker.com/engine/reference/commandline/dockerd//#daemon-configuration-file
I am trying Rancher (v1.2.3) and I am not able to run the agent on the nodes.
1) I've installed the Rancher server on one node with the following command:
sudo docker run -d --restart=unless-stopped -p 80:8080 rancher/server:v1.2.3
2) Then I go to Add Host, and Rancher gives me the command to add it.
3) I go to Node 1 and run the following:
sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.1.2 http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
4) The command line returns
docker: Error response from daemon: containerd: container did not start before the specified
I don't know what is going wrong. I think the container cannot access the Rancher server, but if I do a
curl http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
I can access it. In addition, this is my iptables:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N CATTLE_FORWARD
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -j CATTLE_FORWARD
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o docker_gwbridge -j DOCKER
-A FORWARD -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
-A FORWARD -i docker_gwbridge -o docker_gwbridge -j DROP
-A CATTLE_FORWARD -m mark --mark 0x668a0 -j ACCEPT
-A DOCKER-ISOLATION -i docker_gwbridge -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o docker_gwbridge -j DROP
-A DOCKER-ISOLATION -j RETURN
Ubuntu v14.04
Docker v1.12.3
It would be greatly appreciated if you could help me.
Thanks
The full error is presumably "containerd: container did not start before the specified timeout", which means Docker isn't starting the container. Rebooting the host will probably help.
If the nodes you are using, i.e. the one where you start rancher/server:v1.2.3 and the one where you start the agent, are the same, then there could be an internal port access issue.
Rancher uses UDP ports such as 500 for internal communication. These must be permitted, perhaps by adding them to firewalld zones, etc. Issues might occur if you use managed networking.
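As a sketch of permitting such ports with plain iptables or firewalld (4500 is the usual IPsec NAT-traversal companion to 500; check the Rancher docs for your version before relying on this exact list):
iptables -A INPUT -p udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp --dport 4500 -j ACCEPT
# or, with firewalld:
firewall-cmd --permanent --add-port=500/udp --add-port=4500/udp
firewall-cmd --reload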
When the Docker daemon starts, it adds a couple of rules to iptables.
When all rules are deleted via iptables -F, I have to stop and restart the Docker daemon to re-create Docker's rules.
Is there a way to have Docker re-add its additional rules?
The best way is to restart your Docker service; it will then re-add the Docker rules to iptables (on deb-based systems: sudo service docker restart).
However, if you just want to restore those rules without restarting the service, I saved mine so you can inspect them, adjust them to work for you, and then load them using sudo iptables-restore ./iptables-docker-ports.backup.
Edit and save this as ./iptables-docker-ports.backup:
# Generated by iptables-save v1.4.21 on Thu Apr 30 20:48:42 2015
*nat
:PREROUTING ACCEPT [18:1080]
:INPUT ACCEPT [18:1080]
:OUTPUT ACCEPT [22:1550]
:POSTROUTING ACCEPT [22:1550]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.1/32 -d 172.17.0.1/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 3001 -j DNAT --to-destination 172.17.0.1:80
COMMIT
# Completed on Thu Apr 30 20:48:42 2015
# Generated by iptables-save v1.4.21 on Thu Apr 30 20:48:42 2015
*filter
:INPUT ACCEPT [495:53218]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [480:89217]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
# Completed on Thu Apr 30 20:48:42 2015
If you're running Ubuntu on the host, you can use the iptables-save utility to save the iptables rules to a file after you start the Docker daemon. Then, once you flush the old rules, you can simply restore the original Docker rules using iptables-restore and the saved rules file.
If you don't want to restore all the old iptables rules, you can alter the saved rules file to keep only the ones you need.
If you're running another operating system, you might find a similar alternative.
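A minimal sketch of that flow (the backup path is illustrative):
sudo iptables-save > /root/iptables-with-docker.rules    # taken after the Docker daemon has added its rules
# ... later, after the rules have been flushed ...
sudo iptables-restore < /root/iptables-with-docker.rules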
Docker in its default configuration, when running in bridge mode, does manipulate iptables (a lot) unless you disable that behaviour (in which case you would have to configure your own NAT rules).
The default network-related configuration is probably the following, although the config file /etc/docker/daemon.json might not exist (and, as of now, you can't print the effective configuration):
{
"userland-proxy": true,
"iptables": true,
"ip-forward": true,
"ip-masq": true,
"ipv6": false
}
After the Docker daemon starts, it injects the following rules (in the filter table):
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
In order to understand what Docker does, here is a list of Docker-generated iptables rules with a short explanation. If you flush the iptables rules while the Docker daemon and some containers are running, you might break access to existing containers (but probably won't break anything; more about this below).
After service docker restart, all default rules are injected into the firewall (you can check by running iptables-save, or iptables -S and iptables -S -t nat). Let's assume you want to keep your containers running and only regenerate the missing NAT rules.
docker ps gives us the list of running containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
865569da8d36 nginx "nginx -g 'daemon of…" 17 hours ago Up 17 hours 0.0.0.0:4564->80/tcp, 0.0.0.0:32237->80/tcp jovial_sammet
And from docker inspect we can obtain the port mapping:
$ docker inspect -f '{{.NetworkSettings.Ports}}' 865569da8d36
map[80/tcp:[{0.0.0.0 4564} {0.0.0.0 32237}]]
Now we just need the internal IP address of the Docker container:
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' 865569da8d36
172.17.0.2
Now, using some bash/jq, we can generate the dynamic iptables rules:
$ bash docker_iptables --noop
iptables -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 4564 -j DNAT --to-destination 172.17.0.2:80
iptables -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 32237 -j DNAT --to-destination 172.17.0.2:80
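The docker_iptables helper itself isn't reproduced here; a rough sketch of what such a script could look like, assuming jq is installed and all containers sit on the default docker0 bridge, might be:
#!/usr/bin/env bash
# docker_iptables -- hypothetical regenerator for the per-container rules.
# Pass --noop to print the commands instead of executing them.
set -euo pipefail
if [ "${1:-}" = "--noop" ]; then run="echo"; else run="eval"; fi
for id in $(docker ps -q); do
  ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' "$id")
  docker inspect "$id" |
    jq -r '.[0].NetworkSettings.Ports // {} | to_entries[]
           | .key as $cport | (.value // [])[]
           | "\(.HostPort) \($cport)"' |
    while read -r hostport cport; do
      proto=${cport#*/}; dport=${cport%/*}
      # filter rule: allow forwarded traffic to the container port
      $run "iptables -A DOCKER -d $ip/32 ! -i docker0 -o docker0 -p $proto -m $proto --dport $dport -j ACCEPT"
      # nat rule: DNAT the published host port to the container
      $run "iptables -t nat -A DOCKER ! -i docker0 -p $proto -m $proto --dport $hostport -j DNAT --to-destination $ip:$dport"
    done
done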
So the answer to the question is: no, not without stopping all containers. But the rules can be re-added manually (note: this script doesn't cover all Docker functionality, e.g. exposing a service that runs on a network other than the default Docker bridge).
When you start a Docker container with published ports (-p):
docker run --rm -d -p 32237:80 -p 4564:80 nginx
Docker also spins up docker-proxy. What's that?
$ netstat -tulpn | grep docker-proxy
tcp 0 0 0.0.0.0:32237 0.0.0.0:* LISTEN 20487/docker-proxy
tcp 0 0 0.0.0.0:4564 0.0.0.0:* LISTEN 20479/docker-proxy
The Linux kernel does not allow the routing of loopback traffic, and therefore it's not possible to apply netfilter NAT rules to packets originating from 127.0.0.0/8. docker-proxy is generally considered an inelegant solution to such problems.
When you restore iptables without the Docker rules, the container ports might still be available via docker-proxy. However, this might introduce some networking performance issues, as docker-proxy isn't as fast as the kernel's netfilter.
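If the userland proxy is a concern, it can be turned off via the userland-proxy key shown in the daemon.json snippet earlier, so that publishing relies on netfilter alone. A sketch (ports published on 127.0.0.1 behave differently without the proxy, so test before rolling this out):
# /etc/docker/daemon.json
#   { "userland-proxy": false }
sudo systemctl restart docker    # or: sudo service docker restart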