I am trying Rancher (v1.2.3) and I am not able to run the agent on the nodes.
1) I've installed the Rancher server on one node with the following command:
sudo docker run -d --restart=unless-stopped -p 80:8080 rancher/server:v1.2.3
2) Then I go to Add Host, and Rancher gives me the command to add it.
3) I go to Node 1 and run the following:
sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.1.2 http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
4) The command line returns
docker: Error response from daemon: containerd: container did not start before the specified
I don't know what is going wrong. I think the container cannot access the Rancher server, but if I do a
curl http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
I can access it. In addition, this is my iptables configuration:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N CATTLE_FORWARD
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -j CATTLE_FORWARD
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o docker_gwbridge -j DOCKER
-A FORWARD -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
-A FORWARD -i docker_gwbridge -o docker_gwbridge -j DROP
-A CATTLE_FORWARD -m mark --mark 0x668a0 -j ACCEPT
-A DOCKER-ISOLATION -i docker_gwbridge -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o docker_gwbridge -j DROP
-A DOCKER-ISOLATION -j RETURN
Ubuntu v14.04
Docker v1.12.3
It would be greatly appreciated if you could help me.
Thanks
The full error is presumably "containerd: container did not start before the specified timeout", which means Docker isn't starting the container. Rebooting the host will probably help.
If the nodes you are using are the same, i.e. the one where you start rancher/server:v1.2.3 is also the one where you start the agent, then there could be an internal port access issue.
Rancher uses UDP ports such as 500 and 4500 (IPsec) for internal communication between hosts. These must be permitted, for example by adding them to your firewalld zones. If they are blocked, you will run into issues with the managed networking.
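If you manage the firewall with plain iptables on the hosts, a minimal sketch of allowing that traffic could look like this (the 10.0.0.0/24 source subnet is a placeholder for your host network; adjust or drop it as needed):
sudo iptables -A INPUT -p udp -s 10.0.0.0/24 --dport 500 -j ACCEPT   # IKE
sudo iptables -A INPUT -p udp -s 10.0.0.0/24 --dport 4500 -j ACCEPT  # IPsec NAT-T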
Related
I am trying to understand the iptables configuration inserted by Docker on my host.
Below is the output of sudo iptables -t nat -S DOCKER
For the output below, what does the ! after the chain name do?
-N DOCKER
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-d98117320157 -j RETURN
-A DOCKER ! -i br-d98117320157 -p tcp -m tcp --dport 9202 -j DNAT --to-destination 172.23.0.3:9200
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8034 -j DNAT --to-destination 172.17.0.2:8004
I am running a Google Cloud container instance (cos-beta-70-11021-29-0) and I run nginx:
docker run --name xx -d -p 80:80 nginx
I can access the nginx welcome page despite port 80 not being open in iptables:
$ sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 23 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
Why is that so?
In order to expose a port to the outside, you have to connect the internal Docker network with the external one, so Docker adds its own DOCKER chain to iptables and manages it itself. When you publish a port on a container, using the -p 80:80 option, Docker adds a rule to that chain. Because that traffic is DNATed and forwarded into the container rather than delivered to a local process, it traverses the FORWARD chain (and therefore the DOCKER chain) instead of INPUT, which is why your INPUT DROP policy doesn't block it.
On your rules list you can find:
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
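The matching DNAT rule lives in the nat table; on a setup like yours it would typically look something like the following (illustrative, the container IP will differ), which you can check with sudo iptables -t nat -S DOCKER:
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80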
If you don't want Docker to fiddle with iptables, you can add the --iptables=false argument to your Docker daemon startup options, but then the publish (-p) part of your docker run command will probably not work automatically, and you might need to add some iptables rules yourself. I haven't tested that.
You can set that option in /etc/default/docker or in a drop-in under /etc/systemd/system/docker.service.d/, depending on whether you're using systemd, upstart, or something else.
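If your Docker version supports the /etc/docker/daemon.json config file, the same switch can be set there instead of passing a flag; a minimal sketch (restart the daemon afterwards, e.g. sudo systemctl restart docker):
{
  "iptables": false
}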
You might want to check either of these links:
https://docs.docker.com/config/daemon/systemd/
https://docs.docker.com/engine/reference/commandline/dockerd//#daemon-configuration-file
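If what you actually want is to keep Docker's iptables management but still filter access to published ports, the intended hook for that is the DOCKER-USER chain you can see in your own output. A sketch (eth0 is a placeholder for your external interface; note that the port matched here is the container-side port, since the rule is evaluated after the DNAT, which in your 80:80 mapping is the same number):
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 80 -j DROP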
I'm running Docker behind nginx, with the registry container and my own container running a gunicorn Django web app.
The Django web app runs fine outside the Docker container. However, as soon as I try to run it from within the container, it fails with this message from nginx:
2018/03/20 15:39:30 [error] 14767#0: *360 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.38.181.123, server: app.ukrdc.nhs.uk, request: "POST /convert/pv-to-rda/ HTTP/1.1", upstream: "http://127.0.0.1:9300/convert/pv-to-rda/", host: "app.ukrdc.nhs.uk"
when I do a GET on the web app.
The registry container works fine.
I've exposed the right port in the Dockerfile
Run command is:
docker run -ti -p 9300:9300 ukrdc/ukrdc-webapi
I've added the port to iptables.
(output from iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9300 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.20/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9300 -j ACCEPT)
The signs point to something being wrong with my container and/or firewall rules, but I'm not sure what. I think I'm doing the same as the registry container.
Running on CentOS 6.9 with Docker version 1.7.1, build 786b29d/1.7.1
The answer is:
run the Django app with
exec gunicorn mysite.wsgi \
- -b 127.0.0.1:9300 \
+ -b 0.0.0.0:9300 \
--name ukrdc_django \
--workers 3 \
--log-level=info \
I'd bound it to the local loopback address. It's now bound to all addresses, and it works.
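A quick way to spot this kind of loopback-only binding is to talk to the container IP directly from the host, bypassing the published port (the IP below is the one visible in the question's iptables rules; look yours up with docker inspect):
docker inspect -f '{{.NetworkSettings.IPAddress}}' <container>
curl -v http://172.17.0.20:9300/   # connection refused while the app is bound to 127.0.0.1; works once it listens on 0.0.0.0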
Try adding -P to the run command:
docker run -P <container>
That will automatically publish the exposed ports. Note the difference: exposing a port makes it available to other containers on the Docker network, whereas publishing the port makes it available to the host machine as well as to other containers on the network.
I think you're using EXPOSE when you really want a -P or -p flag on your docker run command, where "P" is for "publish". According to the Docker docs, EXPOSE is just for linking ports between containers, whereas docker run -P <container> or docker run -p 1234:1234/tcp <container> will actually make a port or ports available outside the container so nginx can reach it from the host machine. Another option is to run nginx in a container on the same network (there is an easy-to-use standard nginx container out there); then nginx could reach all of the exposed ports on that network, but you would still need to publish at least one of the nginx container's ports in that case.
Here's another SO post that helped me a lot with expose vs. publish:
Difference between "expose" and "publish" in docker
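As a small illustration of the difference (the image name and port numbers are made up):
# In the Dockerfile, EXPOSE 8000 only documents the port and makes it
# reachable from other containers on the same Docker network.
docker run -d my-web-image
# Publish explicitly: host port 8080 -> container port 8000
docker run -d -p 8080:8000 my-web-image
# Or publish all EXPOSEd ports to random host ports, then look them up
docker run -d -P my-web-image
docker port <container>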
When using Docker, we start with an image, and I created a container with:
docker run --name register -d -p 1180:5000 registry
iptables rules can be listed by running iptables-save:
# Generated by iptables-save v1.4.21 on Mon Oct 16 14:01:03 2017
*nat
:PREROUTING ACCEPT [129:14002]
:INPUT ACCEPT [129:14002]
:OUTPUT ACCEPT [25:1792]
:POSTROUTING ACCEPT [25:1792]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 1180 -j DNAT --to-destination 172.17.0.2:5000
COMMIT
# Completed on Mon Oct 16 14:01:03 2017
# Generated by iptables-save v1.4.21 on Mon Oct 16 14:01:03 2017
*filter
:INPUT ACCEPT [2721358:1990060388]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2726902:1992988803]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT
# Completed on Mon Oct 16 14:01:03 2017
I don't understand this rule.
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
My best guess is that the rule is there to fix an edge case: when the nat POSTROUTING chain defaults to denying any packets that don't match a rule, this lets connections from the container to itself on a mapped port (the "hairpin" case) through. In normal operation the rule does nothing functional.
I think this is the pull request (#7003) that added the MASQ rule, but there's no documentation as to why it was added. The commit was labeled "Create tests for pkg/iptables". The work was generally around fixing Docker on distros that have default DENY policies.
There is a suggestion in issue #12632 that the rule won't actually be used unless the userland port mapping proxy is turned off.
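As an illustration of the traffic that rule is meant to cover: a connection from the container back to its own published port on the host, for example (the host IP is a placeholder, and this assumes busybox wget is available in the registry image). With the userland proxy disabled, this is the path that gets DNATed back into the container and then masqueraded by that rule:
docker exec register wget -qO- http://<host-ip>:1180/v2/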
This is illustrated in "Bind container ports to the host"
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers. Each outgoing connection will appear to originate from one of the host machine’s own IP addresses thanks to an iptables masquerading rule on the host machine that the Docker server creates when it starts:
$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
...
The Docker server creates a masquerade rule that lets containers connect to IP addresses in the outside world.
You can see in this thread what happens when those rules are not generated (they need to be restored).
You can study those same options in "Build your own bridge".
When the Docker daemon starts, it adds a couple of rules to iptables.
When all rules are deleted via iptables -F, I have to stop and restart the Docker daemon to re-create Docker's rules.
Is there a way to have Docker re-add its additional rules?
The best way is to restart your Docker service; it will then re-add the Docker rules to iptables (on Debian-based systems: sudo service docker restart).
However, if you just want to restore those rules without restarting the service, I saved mine below so you can inspect them, adjust them to your setup, and then load them using sudo iptables-restore ./iptables-docker-ports.backup
Edit and save this to ./iptables-docker-ports.backup:
# Generated by iptables-save v1.4.21 on Thu Apr 30 20:48:42 2015
*nat
:PREROUTING ACCEPT [18:1080]
:INPUT ACCEPT [18:1080]
:OUTPUT ACCEPT [22:1550]
:POSTROUTING ACCEPT [22:1550]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.1/32 -d 172.17.0.1/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 3001 -j DNAT --to-destination 172.17.0.1:80
COMMIT
# Completed on Thu Apr 30 20:48:42 2015
# Generated by iptables-save v1.4.21 on Thu Apr 30 20:48:42 2015
*filter
:INPUT ACCEPT [495:53218]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [480:89217]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
# Completed on Thu Apr 30 20:48:42 2015
If you're running Ubuntu on the host, you can use the iptables-save utility to save the iptables rules to a file after you start the Docker daemon. Then, once you flush the old rules, you can simply restore the original Docker rules using iptables-restore and the saved rules file.
If you don't want to restore all the old iptables rules, you can alter the saved rules file to keep only the ones you need.
If you're running another operating system, you might find a similar alternative.
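A minimal sketch of that workflow (the file name is arbitrary):
sudo iptables-save > /root/iptables-with-docker.rules   # right after the Docker daemon has started
# ... later, after the rules have been flushed ...
sudo iptables-restore < /root/iptables-with-docker.rules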
Docker in default configuration, when running in bridge mode, does manipulate iptables (a lot) unless you disable it (then you would have to configure your own NAT rules).
The default network-related configuration is probably the following, although the config file /etc/docker/daemon.json might not exist (and as of now you can't print the effective configuration):
{
"userland-proxy": true,
"iptables": true,
"ip-forward": true,
"ip-masq": true,
"ipv6": false
}
After the Docker daemon starts, it injects the following rules (in the filter table):
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
In order to understand what Docker does, here is a list of Docker-generated iptables rules with a short explanation. If you flush the iptables rules while the Docker daemon and some containers are running, you might break access to existing containers (but probably won't break anything else; more about this below).
After service docker restart, all default rules are injected into the firewall again (you can check this by running iptables-save, or iptables -S and iptables -S -t nat). Let's assume you want to keep your containers running and only regenerate the missing per-container NAT rules.
docker ps gives us the list of running containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
865569da8d36 nginx "nginx -g 'daemon of…" 17 hours ago Up 17 hours 0.0.0.0:4564->80/tcp, 0.0.0.0:32237->80/tcp jovial_sammet
And from docker inspect we can obtain the port mappings:
$ docker inspect -f '{{.NetworkSettings.Ports}}' 865569da8d36
map[80/tcp:[{0.0.0.0 4564} {0.0.0.0 32237}]]
Now we just need the internal IP address of the container:
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' 865569da8d36
172.17.0.2
Now, using some bash/jq, we can generate the dynamic iptables rules:
$ bash docker_iptables --noop
iptables -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 4564 -j DNAT --to-destination 172.17.0.2:80
iptables -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 32237 -j DNAT --to-destination 172.17.0.2:80
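The docker_iptables script itself is not included above; a minimal sketch of what such a helper might look like, assuming the default docker0 bridge, jq being installed, and the script being run as root (pass --noop to only print the commands):
#!/usr/bin/env bash
# Sketch only: regenerate the per-container iptables rules that Docker
# normally creates for published ports on the default bridge (docker0).
run=""
if [ "${1:-}" = "--noop" ]; then run="echo"; fi

for id in $(docker ps -q); do
  ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' "$id")
  docker inspect "$id" |
    jq -r '.[0].NetworkSettings.Ports // {} | to_entries[]
           | select(.value != null)
           | .key as $k | .value[] | "\($k) \(.HostPort)"' |
    while read -r portproto hostport; do
      port=${portproto%/*}
      proto=${portproto#*/}
      # filter table: allow forwarded traffic to the container port
      $run iptables -A DOCKER -d "$ip/32" ! -i docker0 -o docker0 \
        -p "$proto" -m "$proto" --dport "$port" -j ACCEPT
      # nat table: DNAT the published host port to the container
      $run iptables -t nat -A DOCKER ! -i docker0 -p "$proto" -m "$proto" \
        --dport "$hostport" -j DNAT --to-destination "$ip:$port"
    done
done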
So the answer to the question is: no, not without stopping all containers. But the rules can be re-added manually (note: this script doesn't cover all Docker functionality, e.g. services exposed from a network other than the default Docker bridge).
When you start a Docker container with published ports (-p):
docker run --rm -d -p 32237:80 -p 4564:80 nginx
Docker also spins up docker-proxy. What's that?
$ netstat -tulpn | grep docker-proxy
tcp 0 0 0.0.0.0:32237 0.0.0.0:* LISTEN 20487/docker-proxy
tcp 0 0 0.0.0.0:4564 0.0.0.0:* LISTEN 20479/docker-proxy
The Linux kernel does not allow the routing of loopback traffic, and therefore it's not possible to apply netfilter NAT rules to packets originating from 127.0.0.0/8. docker-proxy is generally considered an inelegant solution to such problems.
When you restore iptables without the Docker rules, the container ports might still be available via docker-proxy. However, this can bring some networking performance issues, as docker-proxy is not as fast as the kernel's netfilter.
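If you want the kernel to handle this instead, the userland proxy can be turned off in /etc/docker/daemon.json (followed by a daemon restart); Docker then sets up hairpin NAT rules in place of docker-proxy:
{
  "userland-proxy": false
}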