We have a container that needs to contact the ECS container agent introspection endpoint at runtime.
The ECS task is using bridge networking mode.
The default iptables configuration on our Amazon Linux 2 instances contains the following INPUT chain:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:51678
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
I've added the rule ACCEPT tcp -- anywhere anywhere tcp dpt:51678 in an attempt to allow our containers to access the introspection endpoint.
However, it doesn't work.
If I delete REJECT all -- anywhere anywhere reject-with icmp-port-unreachable, I can access the ECS container agent introspection endpoint with no issues at all.
Removing the REJECT all rule feels wrong from a security standpoint. Am I wrong? Is my attempt incorrect?
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-introspection.html
If you're wondering, this is how we are hitting the endpoint at runtime from within our container:
EC2_INSTANCE_ID=$(curl --silent ${ECS_CONTAINER_METADATA_URI_V4}/taskWithTags | jq -r '.ContainerInstanceTags.instanceid')
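For reference, per the linked AWS docs the introspection endpoint itself listens on port 51678 and can be queried from the instance like so (from a bridge-mode container, localhost would be replaced by the docker0 gateway IP, typically 172.17.0.1):
curl http://localhost:51678/v1/metadata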
Help is greatly appreciated.
The Amazon Linux 2 base AMI we used had a reject-all rule saved in the iptables INPUT chain.
Our old Amazon Linux 1 instances didn't have this rule in their iptables.
To resolve this I did an iptables --flush, then added my desired rules back and saved them.
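A minimal sketch of that sequence, assuming an Amazon Linux 2 host with the iptables-services package installed (the exact rule set is illustrative; adjust it to your needs):
# Flush the INPUT chain, discarding the saved REJECT-all rule
sudo iptables -F INPUT
# Re-add the rules we actually want
sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p icmp -j ACCEPT
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 51678 -j ACCEPT
# Persist across reboots (requires the iptables-services package)
sudo service iptables save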
Related
I have two Docker-Compose configs running on the SAME host but DIFFERENT Docker bridges. Each is on a different subnet, so their traffic must be routed. One Docker-Compose config is for a containerized website, while the other is for a Zabbix Agent that monitors the website config.
Although the gateway router has routes to both subnets, the Linux Docker host itself is also configured as a router, so it should route traffic between the subnets it hosts.
Why can't traffic pass between different bridges on the SAME Docker host?
Intro:
Before implementing containerized monitoring, I had no prior requirement to pass traffic between Docker bridges on the same host. I'm a Linux and network engineer, and this wasted an hour of my life trying to understand how things were breaking; if you're not a network engineer, you'll waste a lot more time, or fail completely. So I felt it was worth a moment to document.
Short Answer:
Docker was being "helpful", again, by automagically inserting iptables rules into the DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2 chains (referenced from the FORWARD chain), breaking connectivity. Delete these rules, and connectivity between containers raised on different bridges assigned to different subnets on the same host can be achieved.
Longer Answer w/ Proofs:
Diagnostics:
I re-cut the image for the Zabbix Agent to include some diagnostic tools (traceroute, inetutils-ping and iproute2) and, after logging into the container with docker exec -u root -it <container ID> bash, found that the Agent's container couldn't ping the containers on the other bridge, even though ip route list showed a correct route out of the Agent's container.
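The session looked roughly like this (the target address is illustrative, standing in for a container on the other bridge):
docker exec -u root -it <container ID> bash
ip route list           # confirms a route out of the Agent's container
ping -c 1 172.17.0.2    # fails; the traffic never crosses the bridges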
A review of the Docker host's firewall rules revealed that passing traffic between the Docker Bridges is DISALLOWED by design:
iptables -nvx -L --line-numbers
<SNIP>
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 530374 174564169 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 3559 5117334 DOCKER-ISOLATION-STAGE-2 all -- br-2dfcb90fe695 !br-2dfcb90fe695 0.0.0.0/0 0.0.0.0/0
3 1229457 499057258 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 16 960 DROP all -- * br-2dfcb90fe695 0.0.0.0/0 0.0.0.0/0
3 533917 179680543 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
<SNIP>
If traffic enters on our Docker bridge (br-2dfcb90fe695) but is destined to leave via a different interface, it matches rule 2 in Chain DOCKER-ISOLATION-STAGE-1, which passes it to Chain DOCKER-ISOLATION-STAGE-2, where it matches rule 2 and is dropped.
We know this rule is having an effect because its packet counters are incrementing; traffic is indeed being dropped. So rule 2 in Chain DOCKER-ISOLATION-STAGE-2 is the offender.
Solution:
Print the rules so we can determine the numbers of the rules busting our connectivity:
sudo iptables -nvx -L --line-numbers
Then delete the problematic rules by their respective numbers. Note that the final "2" at the end of each iptables command below is the rule number you want to delete. We'll delete both the referring rule and its target:
sudo iptables -D DOCKER-ISOLATION-STAGE-1 2
sudo iptables -D DOCKER-ISOLATION-STAGE-2 2
WARNING: Although restarting containers will NOT cause the deleted iptables rules to be recreated, doing a docker-compose down followed by an up WILL.
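So after any down/up cycle, re-list the chains and watch the packet counters to confirm whether the rules are back, and re-delete them if needed:
sudo iptables -nvx -L DOCKER-ISOLATION-STAGE-1 --line-numbers
sudo iptables -nvx -L DOCKER-ISOLATION-STAGE-2 --line-numbers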
Hope this saves others wasted cycles figuring out broken container network connectivity...
I would like to block direct access to the Docker containers from outside. I use HAProxy and want to allow access only to ports 80 and 443.
I added the following rules to iptables, but I can still access the Docker containers through other ports.
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
COMMIT
This is probably due to the DOCKER chain:
# iptables -L
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (4 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:http
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
What rules would I need to create to block direct access?
Rather than doing this with iptables, you could use the docker network create NETWORK command to create a network and connect your apps to it, as well as your proxy. Also, don't expose the apps on any ports; the only container you should expose is your proxy. From within the proxy you can then route traffic using the container name as a hostname, since each container on the same network can be reached by the other containers.
For example:
I have container A, which has the name my-service, a service running on port 3000, and no ports published to the host.
Container B is a proxy running on port 80, published to the host. My proxy can pass requests to http://my-service:3000 and it will route traffic to the container.
If I try to go to http://mydomain:3000, this won't work, as the port has not been published and the only way to reach the app is via the proxy on port 80.
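A minimal sketch of that setup with docker run (the image names and network name are placeholders):
docker network create my-network
# App container: joined to the network, no -p flags, so unreachable from outside
docker run -d --name my-service --net my-network my-app-image
# Proxy container: the only one with ports published to the host
docker run -d --name proxy --net my-network -p 80:80 -p 443:443 my-proxy-image
# Inside the proxy, upstream requests go to http://my-service:3000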
I'd suggest taking a read of https://docs.docker.com/engine/userguide/networking/work-with-networks/ as this explains how to get started with networking.
Full disclosure: I run this kind of setup on my personal VPS and cannot access my containers via ports directly. Using the built-in Docker networking will probably play better than messing around with your iptables rules.
Hope this is useful.
Dylan
Edit
I have generalised the process as I do not know the specifics of your setup with regards to proxies, network restrictions etc. I have also not gone into specific commands as the link above covers it better than I would.
I realize I'm responding to an old thread, but I've spent most of a morning frustrated by this problem. This post shows at the top of a Google search, but I feel the accepted answer does not answer the OP's question; instead it offers a different design as a way of avoiding the problem stated in the original question. That solution requires standing up a new Docker image to act as a gateway to the original one.
It is possible the following information was not available at the time of the original question, but what I found from Docker.com is this link:
https://docs.docker.com/network/iptables/
which appears to answer the original question when it states:
"
By default, all external source IPs are allowed to connect to the Docker daemon. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER filter chain. For example, the following rule restricts external access to all IP addresses except 192.168.1.1:
$ iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP"
and
"If you need to add rules which load before Docker’s rules, add them to the DOCKER-USER chain."
But regrettably, I have attempted that solution, and it too does not appear to work for me on Docker version 17.05.0-ce.
As @dpg points out, this problem is frustrating if you need to tackle it from a newbie point of view.
The main problem for me (as I also tried to resolve the problems in @dpg's answer) is that the Docker documentation is confusing in the two pages that address this (link1 and link2).
To summarize, and to save time for others: if you don't have a lot of knowledge and land on the "Docker and iptables" page, the answer is there, it's just that they have glossed over this detail: ext_if is the name of the interface providing external connectivity to the host.
The "Understand container communication" page, on the other hand, does contain a short note pointing out exactly that: ext_if should be the host's external network interface.
So, to limit access to a Docker-exposed port (e.g. 6782) to a certain IP (e.g. 192.27.27.90) and block all others (which means the DOCKER-USER chain needs to be modified, not the usual INPUT chain), I need to do this, which works in my case:
sudo iptables -I DOCKER-USER -p tcp -i eth0 ! -s 192.27.27.90 --dport 6782 -j REJECT
(Here I suppose that the network interface that communicates with the outside world is eth0 and that you want to REJECT instead of DROP).
If more clarification is needed, I will be glad to assist.
Addition to the comment by @Ezarate11 (since I don't have enough rep to comment): make sure --dport is the port being forwarded to (the container port), not the port exposed on the host.
For example, if your configuration is 0.0.0.0:64743->80, then you would need to do
sudo iptables -I DOCKER-USER -p tcp -i eth0 ! -s 192.27.27.90 --dport 80 -j REJECT
This detail alone took me a while to figure out; I didn't see it mentioned anywhere else.
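A quick way to check that mapping is docker ps (the container name here is a placeholder):
$ docker ps --format '{{.Names}}: {{.Ports}}'
web: 0.0.0.0:64743->80/tcp
The port on the left of the arrow is the published host port; the one on the right is the container port, which is what --dport must match in DOCKER-USER.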
I am using Docker version 1.9.1 and docker-compose 1.5.2 with --x-networking (experimental networking).
I start a trivial node application with docker-compose up; this application maps port 8000 to port 9999 on the host.
From the host I can curl http://localhost:9999, http://[host-ip]:9999, or any of the 172.x.0.1 addresses that the host has, and they all work.
I start another application with docker-compose up. If I attempt to curl http://[host-ip]:9999 or any of the http://172.x.0.1 addresses from this container, the packet is dropped due to iptables entries, in particular the entry that specifies DROP from this container's subnet to the first container's subnet.
I understand that container-to-container communication may not be allowed, but how can my second container talk to the first via the port mapped on the host?
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DROP all -- 172.17.0.0/16 172.19.0.0/16
DROP all -- 172.19.0.0/16 172.17.0.0/16
DROP all -- 172.18.0.0/16 172.19.0.0/16
DROP all -- 172.19.0.0/16 172.18.0.0/16
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DROP all -- 172.17.0.0/16 172.18.0.0/16
DROP all -- 172.18.0.0/16 172.17.0.0/16
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (3 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:8000
Container-to-container communication is allowed by default, of course; you could forbid it with firewall rules, etc.
What you actually need is to have these two containers in the same subnet. So you need to create a network with:
docker network create --subnet=172.18.0.0/16 mySubNet
then run the containers with:
docker run --net mySubNet <image>
And that is it. Additionally, when running you could assign a static IP to a container with --ip, assign a hostname with --hostname, or add another host entry with --add-host.
EDIT: I see your Docker version now, so I have to say that what I wrote here works with Docker 1.10.x.
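Putting it together, a minimal end-to-end sketch (the image names are placeholders, and per the edit above this assumes Docker 1.10.x):
docker network create --subnet=172.18.0.0/16 mySubNet
docker run -d --net mySubNet --name app1 --ip 172.18.0.10 <image-one>
docker run -d --net mySubNet --name app2 <image-two>
# From app2, app1 is now reachable directly at its static IP
docker exec app2 ping -c 1 172.18.0.10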
Subnet solution
You can create a subnet for your containers, but to keep things clean you will need one subnet per distributed application in order to isolate them. Not the easiest nor the simplest way of doing it, though it works.
--link solution
Another solution is to link your containers. I suggest you read this comment, so I don't copy/paste its content ;)
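For reference, a minimal sketch of the legacy --link approach (container and image names are placeholders; --link is deprecated in later Docker releases):
docker run -d --name web my-web-image
# 'web' now resolves as a hostname inside the monitor container
docker run -d --name monitor --link web:web my-monitor-image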
Looking at the iptables of my Docker host I get something like this:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:http-alt
I was able to assign a durable IP address like this:
sudo ip addr add 10.0.0.99/8 dev eth0
docker run -d -p 10.0.0.99:8888:8080 tomcat:8
but that address is only reachable on this machine, as in I need to SSH into it and ping it from this box.
Reading through this, it looks like I need to add a custom bridge:
Custom Docker Bridge
Is there a way to make the bridge hand out fresh IPs from the DHCP server? For example, if my DHCP server assigns addresses from 10.1.1.x, I want to assign those addresses to Docker containers.
Would this involve a generic *nix way of pushing my iptables /etc/hosts IP addresses and DNS names to a DNS server so other machines outside of the Docker cluster can ping those machines?
I have port forwarding working but need to do the same with the IP addresses, as Zookeeper only tracks the internal Docker IP addresses and hostnames as I've defined them in my docker-compose.yml.
I'm having trouble accessing JIRA after a fresh install.
If I wget localhost:8080 from the machine where JIRA runs, I get an HTML file.
If I try to access IP_ADDRESS:8080 from another computer, the browser responds with "Can't connect to..."
If I nmap my JIRA machine, it says the following:
Starting Nmap 5.51 ( http://nmap.org ) at 2014-04-29 11:28 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000017s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
8080/tcp open http-proxy
I'm not a Linux expert, so I don't know much about iptables and the like.
I also checked the access_log of the Tomcat installation, but the file is empty.
Does anyone know what to do?
It was a firewall problem:
[root@testing logs]# iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere
2 ACCEPT icmp -- anywhere anywhere
3 ACCEPT all -- anywhere anywhere
4 ACCEPT tcp -- anywhere anywhere
5 REJECT all -- anywhere anywhere --> I removed this line
You should have inserted an ACCEPT rule above the catch-all REJECT instead of removing it (appending with -A would place the rule after the REJECT, where it would never match):
iptables -I INPUT 5 -p tcp -m tcp --dport 8080 -j ACCEPT
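To verify the rule's placement and keep it across reboots (a sketch; saving this way assumes the RHEL/CentOS-style iptables init service, which the [root@testing] prompt above suggests):
iptables -L INPUT --line-numbers
service iptables save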