How to prevent docker containers from accessing my local network

I would like to prevent docker containers connected to a bridge network from accessing my local network, to add extra security since they will be reachable from outside (in case a container is compromised). I saw that I should probably use ebtables or the physdev module of iptables, but I can't create a rule that works. Thanks in advance to anyone who can help.

After some research, and in case anyone is interested: it is possible to do this with ebtables.
# Authorize DNS queries to the gateway
ebtables -A INPUT -p IPv4 --ip-protocol tcp --ip-destination-port 53 --ip-destination 192.168.1.1 --ip-source 172.18.0.0/16 -j ACCEPT
ebtables -A INPUT -p IPv4 --ip-protocol udp --ip-destination-port 53 --ip-destination 192.168.1.1 --ip-source 172.18.0.0/16 -j ACCEPT
# Drop all other packets to the local network
ebtables -A INPUT -p IPv4 --ip-destination 192.168.1.0/24 --ip-source 172.18.0.0/16 -j DROP
Do not forget to replace the 172.18.0.0/16 subnet with the one to which your containers are connected.
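A quick way to verify the rules (the network name my-bridge and the LAN host 192.168.1.10 are placeholders for your own values):
# From a container on the bridge network, a LAN host should now be unreachable...
docker run --rm --network my-bridge alpine ping -c 1 -W 2 192.168.1.10
# ...while DNS against the gateway should still work
docker run --rm --network my-bridge alpine nslookup example.com 192.168.1.1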

I was stumbling through this myself and found that one solution is to insert (-I) a new rule into the DOCKER-USER chain.
Please see this answer: https://stackoverflow.com/a/73994723/20189349
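For reference, a minimal sketch of that approach, assuming the container subnet and LAN range from the question above (adjust both to your setup):
# Block new connections from the container subnet to the local network,
# while still allowing replies to connections initiated from the LAN side
iptables -I DOCKER-USER -s 172.18.0.0/16 -d 192.168.1.0/24 -m conntrack --ctstate NEW -j DROP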

Related

How to grant internet access to application servers through load balancer

I have set up an environment in Jelastic including a load balancer (tested both Apache and Nginx with the same results) with a public IP, and an application server running the Univention UCS DC Master docker image (I have also tried a simple Ubuntu 20.04 install).
Now the application server has a private IP address and is correctly reachable from the internet, and I can correctly SSH into both the load balancer and the app server.
The one thing I can't seem to achieve is to have the app server access the internet (outbound traffic).
I have tried setting up the network in the app server and tried a few Nginx load-balancing configurations, but to be honest I've never used a load balancer before, and I feel that configuring load balancing will not resolve my issue (I might be wrong).
Of course my intention is to learn load balancing but if someone could just point me in the right direction I would be so grateful.
Question: what needs to be configured in Jelastic or in the servers to have the machines behind the load balancer access the internet?
Thank you for your time.
Cristiano
I was able to resolve the issue by simply detaching and re-attaching the public IP address to the server, so it was not a setup problem; something in Jelastic just got stuck.
Thanks all!
Edit: Actually, to effectively resolve the issue, I had to detach the public IP address from the univention/ucs docker image, attach it to another node in the environment (i.e. an Ubuntu server I have), then attach the public IP back to the univention docker image. I can't really figure out why, but it works for me.
To have the machines access the internet, you should add a route in them using your load balancer as the gateway, like this:
Destination     Gateway     Genmask
0.0.0.0         <LB IP>     0.0.0.0
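On a Linux VM this usually comes down to a single ip route command (the load balancer address below is a placeholder):
# Replace 10.0.0.254 with your load balancer's private IP
sudo ip route add default via 10.0.0.254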
Your VMs' firewalls should not block ports 80 and 443 for inbound/outbound traffic; using iptables:
# Allow the VM to initiate outbound HTTP/HTTPS connections
sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# Allow the replies back in
sudo iptables -A INPUT -p tcp -m multiport --sports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT
On your load balancer you should masquerade outgoing traffic (rewrite the source IP) and forward inbound traffic to your VMs' subnet using the LB interface connected to that subnet:
sudo iptables --table nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
sudo iptables -A FORWARD -p tcp --dport 80 -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -p tcp --dport 443 -i eth0 -o eth1 -j ACCEPT
You should also enable IP forwarding on your load balancer:
echo 1 > /proc/sys/net/ipv4/ip_forward
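Note that the echo takes effect immediately but does not survive a reboot; to persist it, add a sysctl entry (the exact file path may vary by distribution):
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system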

Inside a container, how to resolve DNS on the host, on a specific port

I have an instance running a Consul agent & Docker. The Consul agent can be used to resolve DNS queries on 0.0.0.0:8600. I'd like to use this from inside a container.
A manual test works: running dig @172.17.0.1 -p 8600 rabbitmq.service.consul inside a container resolves properly.
A first solution is to run with --network host. It works, and I'll use it until I find something better. But I don't like it, security-wise.
Another idea: use Docker's --dns and associated options. Even if I can script grabbing the IP, I can't see how to specify port 8600. Maybe in --dns-opt, but how?
Along the same line, writing the container's resolv.conf could do. But again, how to specify the port? I saw no hints in man resolv.conf; I believe it's not possible.
Last, I could set up dnsmasq inside the container, or in a sidecar container, along the lines of this Q/A. But it's a bit heavy.
Can anyone help with this one?
You can achieve this with the following configuration.
Configure each Consul container with a static IP address.
Use Docker's --dns option to provide these IPs as resolvers to other containers.
Create an iptables rule on the host system which redirects traffic destined for port 53 of the Consul server to port 8600.
For example:
$ sudo iptables --table nat --append PREROUTING --in-interface docker0 --proto udp \
--dst 192.0.2.4 --dport 53 --jump DNAT --to-destination 192.0.2.4:8600
# Repeat for TCP
$ sudo iptables --table nat --append PREROUTING --in-interface docker0 --proto tcp \
--dst 192.0.2.4 --dport 53 --jump DNAT --to-destination 192.0.2.4:8600
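With that redirect in place, containers can simply be pointed at the Consul address as an ordinary port-53 resolver (alpine and the 192.0.2.4 address are just the examples from above):
$ docker run --rm --dns 192.0.2.4 alpine nslookup rabbitmq.service.consul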

iptables: Access from local machine to docker container is not possible

I have an issue with my iptables setup. My goal is to reach the HTTPS webserver inside a docker container from the server machine itself.
The setup is the following:
The server is connected to the internet via eth0 and serves HTTPS on port 443.
Any users from the outside (internet) reach the server via the IP address 1.2.3.4.
It is connected to the internal network via eth1 and serves DHCP, DNS and some more services.
Any users from the inside (intranet) reach the server via the IP address 10.0.0.1.
The docker container is connected via docker1 on the server. The latter has the IP address 10.8.0.2 inside the docker network.
The docker container serves the webserver on port 1443, but iptables forwards (NAT) requests on port 443 to its address 10.8.0.1 and the destination port 1443.
What is working:
The webserver is perfectly reachable from the internet and the intranet.
The webserver can be reached from the server itself using the address 10.8.0.1:1443.
What is not working:
Any client working directly on the server can not reach the docker webserver using https://example.com:443. Using https://10.8.0.1:1443 would work, but fails due to a certificate error, and skipping the certificate check is not an acceptable workaround.
Excerpt of the iptable configuration:
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i docker1 -o docker1 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.8.0.1:1443
iptables -A FORWARD -o docker1 -p tcp --dport 1443 -j ACCEPT
iptables -A INPUT -i docker1 -j DROP
iptables -A FORWARD -i docker1 -j DROP
Due to that "complicated" setup, I am no longer able to understand which of the iptables rules and chains need to be applied to make this work, so I am seeking your help to solve the issue.
Brainstorming about the issue using a simplified model and my understanding of the iptables chains, the path of the packets might/should look like this:
Origin is a local application (wget).
The packets go through the OUTPUT chain.
The packets go through the POSTROUTING chain.
Magic happens...
The packets arrive again in the PREROUTING chain.
The packets might go through INPUT again.
The packets might arrive at the target application (webserver).
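One gotcha that may explain the failure: packets generated locally on the server never traverse the nat PREROUTING chain, only nat OUTPUT, so the DNAT rule above does not apply to them. A hedged sketch of an additional rule covering local clients, reusing the addresses from the excerpt:
# DNAT locally generated traffic to the public IP, mirroring the PREROUTING rule
iptables -t nat -A OUTPUT -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.8.0.1:1443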

Logspout can't connect to papertrail

I can't get logspout to connect to papertrail. I get the following error:
!! lookup logs5.papertrailapp.com on 127.0.0.11:53: read udp 127.0.0.1:46185->127.0.0.11:53: i/o timeout
where 46185 changes every time I run the container. It seems like a DNS error, but nslookup logs5.papertrailapp.com gives the expected output, as does docker run busybox nslookup logs5.papertrailapp.com.
Beyond that, I don't even know how to interpret that error message, let alone address it. Any help debugging this would be hugely appreciated.
My Docker Compose file:
version: '2'
services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  sleep:
    image: benwhitehead/env-loop
Where 12345 is the actual Papertrail port. The result is the same whether using syslog:// or syslog-tls://.
From https://docs.docker.com/engine/userguide/networking/configure-dns/:
the docker daemon implements an embedded DNS server which provides built-in service discovery for any container
It looks like your container is unable to connect to this DNS server. If your container is on the default bridge network, it won't reach the embedded DNS server. You can either set --dns to an outside source or update /etc/resolv.conf. It doesn't sound like a Papertrail issue at all.
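For example, pinning an external resolver in the Compose file (Google's 8.8.8.8 here purely as an illustration):
version: '2'
services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    dns:
      - 8.8.8.8
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock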
Docker and iptables got in a fight. So I spun up a new machine, failed to set up iptables, and the problem was solved: no firewall at all to get in the way of Docker's connections!
Just kidding, don't do that. I got a toy database hacked that way.
Fortunately, it's now relatively easy to get iptables and Docker to live in harmony, using the DOCKER-USER iptables chain.
The solution, excerpted from my blog:
Configure Docker with iptables=true (its default), and append the following to your iptables configuration:
iptables -A DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -j DROP
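To verify the rules are in place and actually matching traffic, check the per-rule packet counters:
# List the DOCKER-USER chain with packet/byte counters, numeric output
sudo iptables -L DOCKER-USER -n -v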

RoR: Securing a Closed API

I have two rails apps on separate virtual servers, but in the same facility. Both apps can communicate via local ip addresses.
This is a two part question:
1) How do I check where the request is originating and limit requests only to those from that location?
2) Do you think this would be secure enough?
My gut is telling me this isn't secure enough because of IP spoofing, but I'm thinking OAuth or similar is a little too hardcore for my needs. Though, maybe not.
This is the first time I've approached something like this and I'm looking for anyone that can push me in the right direction here.
Thanks.
Depending on who's hosting you, the local network (to which your local addresses belong) could be a private network only accessible to your instances or, more likely, it is shared with other virtual machines that do not belong to you. You would not be open to direct external attacks, but any compromised virtual machine sharing your local network can be a springboard for an attack, so your concerns are absolutely valid.
Answering, in order, your two concerns:
Configure iptables for the local interfaces to only accept requests coming on specific ports from specific local IPs (read a tutorial for a better understanding of iptables configuration.) All other virtual machines on the local network should not be able to probe you, although they might be able to intercept your traffic (addressed below.)
No; you should use SSL over all intra-node connections. This will protect you in two ways: firstly, it protects you from spoofing (an attacker will be rejected if he does not have your certificate, even if he bypasses iptables by spoofing his address, or because your iptables config gets overwritten by an admin); secondly, it protects your data from prying eyes (e.g. an attacker will not be able to snoop traffic for your passwords). Some applications (e.g. most database engines, net-snmpd set up in v3 mode, etc.) support SSL natively. Alternatively, establish and use SSH tunnels, or use stunnel.
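As an illustration of the SSH tunnel option (the hostname, user and local port below are placeholders): forward a local port to MySQL on www1, so the application connects to 127.0.0.1:3307 and the traffic crosses the local network encrypted:
# Forward local port 3307 to MySQL (3306) on www1 over SSH; -N skips running a remote command
ssh -N -L 3307:127.0.0.1:3306 deploy@www1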
Sample base iptables configuration allowing basic services (HTTP, SSH, etc.) on the public (internet) interface, as well as allowing www1 and www2 to connect to this node's MySQL on port 3306 on the eth0 interface (www1 and www2 are defined in /etc/hosts so they resolve to the appropriate IP addresses):
# *raw
#
# Allows internal traffic without loading conntrack
# -A PREROUTING -i lo -d 127.0.0.0/8 -j NOTRACK
*filter
# Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
# Accepts all established inbound connections (TCP, UDP, ICMP incl. "network unreachable" etc.)
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allows all outbound traffic
# You can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
# Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Allows SSH connections
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
# Allow ping
-A INPUT -p icmp -m icmp --icmp-type echo-request -j ACCEPT
# Allows MySQL within our cluster ONLY (must come before the catch-all REJECT below)
-A INPUT -p tcp -s www1 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p udp -s www1 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p tcp -s www2 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p udp -s www2 -i eth0 --dport 3306 -j ACCEPT
# log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level debug
# Reject all other inbound - default deny unless explicitly allowed policy
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
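The above is in iptables-save format, so it can be loaded in one shot after saving it to a file (the path here is just an example):
sudo iptables-restore < /etc/iptables.rules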
This doesn't really sound like a Rails question; it's more of a question about web architecture. I'm assuming that both machines are accessible to the outside world via HTTP. If that's the case, you may want to consider putting a firewall in front of both machines to create a local network that the two machines are on.
Once you've done that, you should be able to configure the firewall to disallow requests based on any criteria you specify. Given that this is a Rails application I'm going to assume that the API is a set of resources. If that's the case you could configure your firewall to filter requests to the private API.
This way, the machines on the local network can communicate freely as their requests to one another aren't going through the firewall.
