CoreOS security - Docker

I'm playing with coreos and digitalocean, and I'd like to start allowing internal communication between my containers.
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
I've explored a lot of options for this, but none of them seem satisfactory:
Using the '-p' option, I can ensure Docker binds only to a specific interface, but this has two downsides:
I can't easily test services by SSHing in, because that traffic originates from localhost
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
I tried using flannel, but it doesn't make the traffic private (or I didn't set it up right)
I considered using iptables on the containers to prevent external access, but that doesn't seem as secure
I tried using iptables on the CoreOS hosts, but it's tricky, and I couldn't get it working.
When I tried to configure iptables on the host, I used the method here: https://docs.docker.com/articles/networking/#communication-between-containers-and-the-wider-world, adding a DROP rule to the DOCKER chain, but it didn't work and packets still got through.
So what's the best approach? I'll invest time in making it work.
Overall, I guess I need to find something that I can:
Roll out to all the hosts reliably
Something that is reasonably flexible going forward
Something that allows for 'edge machines' which are accessible from the wider internet.
Solution
I'll go into how I ended up solving this. Thanks to larsks for their help; in the end, their approach was the correct one. It's tricky on CoreOS because there aren't really stable addresses, as larsks assumes. The whole point of CoreOS is to be able to forget about IP addresses.
I solved this by finding a not-too-bad way to inject the IP address into the command in the service file. The tricky thing is that systemd unit files don't really support a lot of the shell features I expected. What I wanted to do was assign the IP address of the machine to a variable and then inject it into the command:
ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');
/usr/bin/docker run -p $ip:7000:7000 ...
But, as mentioned, that doesn't work. So what to do? Get the shell!
ExecStart=/usr/bin/sh -c "\
export ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
echo $ip;\
/usr/bin/docker run -p $ip:7000:7000"
I hit a few problems along the way.
Because the trailing backslashes join everything onto one line, there aren't newlines in that command, so I had to add the ';' characters to separate the statements.
When you test the above sh -c command in a shell, it behaves very differently from when systemd runs it. In the shell you need to escape the '$' characters, while in systemd unit files you don't.
I included the echo so that I could see what the command thought the IP was.
When I was doing all this, I actually added a small webserver to the Docker image, so that I could just test using curl.
The downside of this approach is that it's tied to the way ifconfig formats its output, and to IPv4. In fact, it doesn't work on my Linux Mint laptop, where ifconfig produces differently formatted output. The lesson here is to prefer tools that emit structured output like JSON or YAML, so that shell tools can pull out values reliably.
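For example, pulling the address out of structured output is less fragile; a sketch using iproute2's JSON output and jq (assumes a reasonably recent iproute2 and jq available on the host, with eth1 as before):
ip=$(ip -4 -json addr show eth1 | jq -r '.[0].addr_info[0].local')
echo "$ip"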

Instead of grepping for the IP address, you can use CoreOS's environment file to get the IP addresses (both public and private) of the host the service gets scheduled on. This lets you bind your container ports to either the public or the private address in a simple way.
Like so:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run --name myservice \
-p ${COREOS_PUBLIC_IPV4}:80:80 \
-p ${COREOS_PRIVATE_IPV4}:3306:3306 \
ubuntu /bin/bash
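On CoreOS the cloud provider metadata populates /etc/environment, which looks roughly like this (the addresses below are placeholders):
COREOS_PRIVATE_IPV4=10.132.0.10
COREOS_PUBLIC_IPV4=203.0.113.50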

I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
This is exactly the behavior that you get with the -p option when you specify an ip address. Let's say I have a host with two external interfaces, eth0 (with address 10.0.0.10) and eth1 (with address 192.168.0.10), and the docker0 bridge at 172.17.42.1/16.
If I start a container like this:
docker run -p 192.168.0.10:80:80 -d larsks/mini-httpd
This will start a container that is accessible over the eth1 interface at 192.168.0.10, port 80. This service is also accessible -- from the host on which the container is located -- at the address assigned to the container on the docker0 network. This would be something like 172.17.0.39, port 80.
This seems to meet your goals:
The container port is exposed over the "private" eth1 interface.
The container port is accessible from the host.
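You can verify both paths with curl, for example (using the illustrative addresses above; the container's docker0 address will differ on your host):
curl http://192.168.0.10:80/      # from another machine on the private network
curl http://172.17.0.39:80/       # from the host itself, via the docker0 bridge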
I can't easily test services by SSHing in, because that traffic originates from localhost.
If you were running ssh inside a container, you would ssh to it at the "internal" address assigned by Docker. That said, you may want to consider not running sshd in your containers at all and relying on tools like docker exec instead.
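For example, rather than running sshd in the container, you can open a shell in it directly from the host:
docker exec -it <container name or id> /bin/sh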
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
With this solution, there is no need to inject the machine's IP into the container.

Related

Enable forwarding from Docker containers to the outside world

I've been wondering why the Docker installation does not enable port forwarding to containers by default.
To save you a click, what I mean is:
$ sudo sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I assume it is some sort of security risk, but I just wonder what the risk is.
Basically I want to create some piece of code that enables this by default, but I want to know what is the bad that can happen.
I googled this and couldn't find anything.
Generally FORWARD ACCEPT seems to be considered too permissive (?)
If so, what can I change to make this more secure?
My network is rather simple: it is a bunch of PCs in a local LAN (10.0.0.0/24) with an OpenVPN server, and those PCs may deploy Docker hosts (I'm doing this by hand, not using Docker Compose or Swarm or anything, because nodes change) that need to see each other. So there is no real outside access. Another detail is that I am not using an overlay network, which I could do without Swarm, but the writer of the post warns it could be deprecated soon, so I also wonder if I should just start using Docker Swarm straight away.
EDIT: My question here is maybe more theoretical than it may seem at first. I want to know why they decided not to do this. I pretty much need/want full communication between Docker instances: they need to be ssh'd into and to open up a bunch of different ports to talk to each other (and here is the limit of my networking knowledge; I don't know how this really works, I suppose they are all high ports, but are those also blocked by Docker?). I am not sure Docker Swarm would help me much here either. It is aimed at micro-services, and I maybe need interactive sessions from time to time, but this is probably asking too much in a single question.
Maybe the simplest version of this question is: "if I put that code up there as a script to load each time my computer boots up, how can someone abuse it".
Each Docker container runs on a local bridge network, with IPs generally in the 172.16.0.0/12 range (172.17.0.0/16 by default). You can get the IP address by running:
docker inspect <container name> | jq -r ".[].NetworkSettings.Networks[].IPAddress"
You can run your container exposing and publishing the specific container ports on the host running the containers.
Alternatively, you can use iptables to redirect traffic to a specific port from outside:
iptables -t nat -I PREROUTING -i <incoming interface> -p tcp -m tcp --dport <host listening port> -j DNAT --to-destination <container ip address>:<container port>
Change tcp to udp if the port is listening on a UDP socket.
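As a concrete instance of that rule (interface, ports, and container address are illustrative), redirecting TCP port 8080 arriving on eth0 to a container listening on port 80:
iptables -t nat -I PREROUTING -i eth0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80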
If you want to redirect all traffic you can still use the same approach, but may need to specify a secondary ip address on your host (e.g., 192.168.1.x) and redirect any traffic coming to that address to your container.

Add docker containers to hosts automatically by name

A common workflow for me is: I run docker-compose up during development in a web project, I run docker inspect repo_app_1 | grep IPAddress, and then go to the IP address in the browser.
Instead of fetching container's IP, I want to add the name of this container with its IP to the hosts file.
What would be the best way to do that? It's certainly possible. I can think of one way: hijack the docker and docker-compose commands so that after each execution we run a script which pipes the docker output through awk, appends the result to the hosts file, and also deletes the older entries.
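For reference, a rough sketch of that hijack script might look like this (assumes sudo access and that each container sits on a single network):
# refresh /etc/hosts entries for all running containers
for c in $(docker ps -q); do
  name=$(docker inspect -f '{{.Name}}' "$c" | sed 's#^/##')
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$c")
  sudo sed -i "/ $name\$/d" /etc/hosts          # drop any stale entry for this name
  echo "$ip $name" | sudo tee -a /etc/hosts > /dev/null
done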
One possibility is to use Traefik, a Docker-aware reverse proxy that includes its own monitoring dashboard.
See for instance "Traefik on Docker for Web Developers - With bonus Let's Encrypt SSL!", from Juan Treminio, which shows how to automatically register your containers and access them through a pre-defined URL.
Juan describes how to solve the "port dance":
If port 80 is mapped to web-server-A you must choose another port to bind for web-server-B and web-server-C.
This can quickly get old because you must remember that http://localhost goes to A, http://localhost:81 goes to B, and http://localhost:82 goes to C.
He points out:
On virtual machines this problem does not really occur because you can assign a static IP address to your servers, and bind it to your system’s hosts file (/etc/hosts).
Containers are ephemeral by nature and do not normally get created on your host's network, but rather on private networks with their own random IP addresses within special ranges. However, you must edit /etc/hosts for every VM you spin up, and the list grows with the number of projects you handle.
Træfik solves both of these problems, first by removing the need to use ports in URLs and second by not needing you to edit /etc/hosts at all.
A new container registers itself on the Traefik Docker network (created with docker network create --driver bridge traefik_webgateway) like this:
docker run -d --name some-mailhog \
--network traefik_webgateway \
--label traefik.docker.network=traefik_webgateway \
--label traefik.frontend.rule=Host:mailhog.localhost \
--label traefik.port=8025 \
mailhog/mailhog
The URL becomes simply http://mailhog.localhost.
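For completeness, the proxy itself also runs as a container on that network; a minimal sketch for Traefik 1.x (image tag and published ports are assumptions):
docker run -d --name traefik \
  --network traefik_webgateway \
  -p 80:80 -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  traefik:1.7 --docker --api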

Update Prometheus Host/Port in Docker

Question: How can I change a Prometheus container's host address from the default 0.0.0.0:9090 to something like 192.168.1.234:9090?
Background: I am trying to get a Prometheus container to install and start in a production environment on a remote server. Since the server uses an IP other than Prometheus's default (0.0.0.0), I need to update the host address that the Prometheus container uses. If I don't, I can't sign in to the UI or see any of the metrics. The IP of the remote server is provided by the user during the app's installation.
From what I understand from Prometheus's config document and the output of ./prometheus -h, the listen address is one of the immutable system parameters, so it can only be set with the --web.listen-address command-line flag, not in the config file. My problem is I don't know how to pass that flag to my Prometheus container; I can't simply run ./prometheus --web.listen-address="<remote-ip>:9090" because that's not a Docker command. And I can't pass it to the docker run ... command because Docker doesn't recognize that flag.
Environment:
Using SaltStack for config management
I cannot use Docker Swarm (i.e. each container must use its own Dockerfile)
You don't need to change the containerized Prometheus's listen address. Binding to 0.0.0.0 just means it listens on all interfaces inside the container.
By default, it won't even be accessible from your host's network, let alone any surrounding networks (like the Internet).
You can map it to a port on a hosts interface though. The command for that looks somewhat like this:
docker run --rm -p 127.0.0.1:8080:9090 prom/prometheus
which would expose the service at 127.0.0.1:8080 on your host.
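A quick check from the host (Prometheus 2.x exposes a /-/healthy endpoint):
curl http://127.0.0.1:8080/-/healthy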
You can do that with a public (e.g. internet-facing) interface as well, although I'd generally advise against exposing containers like this, due to numerous operational implications that are somewhat beyond the scope of this answer. You should at least consider a reverse-proxy setup, where users are only allowed to talk to a heavy-duty webserver which then communicates with Prometheus, instead of letting them access your backend directly, even if this is just a small development deployment.
For general considerations on productionizing container setups, I suggest this. Despite its clickbaity title, it is a useful read.
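If you really do need to pass flags to the Prometheus binary, anything after the image name in docker run is handed to the container's entrypoint; a sketch (re-supplying the default config path, since overriding the arguments replaces the image's defaults):
docker run --rm -p 127.0.0.1:8080:9090 prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --web.listen-address=0.0.0.0:9090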

Port forwarding Ubuntu - Docker

I have following problem:
Assume that I started two Docker containers on the host machine: A and B.
docker run -ti -p 2000:2000 A
docker run -ti -p 2001:2001 B
I want to be able to get to each of these containers FROM THE INTERNET by:
http://example.com:2000
http://example.com:2001
How can I achieve that?
The rest of the equation here is just normal TCP/IP flow. You'll need to make sure of the following:
If the host has an implicit deny for incoming traffic on its physical interface, you will need to open up ports 2000 and 2001, just like you would for any service (Docker or not); see the sketch after this list.
If the host is behind a NAT or other external means of routing, you'll need to punch holes for those ports there as well.
You'll need the external IP address (either the one attached to the host or the one in front of the NAT allowing access to the ports).
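For example, on an Ubuntu host where ufw is the active firewall, opening those ports might look like this (a sketch; adapt if you manage iptables directly):
sudo ufw allow 2000/tcp
sudo ufw allow 2001/tcp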
As far as Docker is concerned, you've done what is required to open the ports to the service running in that container correctly.

Docker container linking via port forwarding?

It seems that the preferred way to expose services to other Docker containers is container linking, which sets some environment variables that you then have to use in your application code to look up host names and port numbers:
psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT
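For context, those variables come from legacy container links; a sketch of how they get set (names, image, and password are illustrative):
docker run -d --name pg -e POSTGRES_PASSWORD=secret postgres
docker run --rm -it --link pg:pg my-app-image
# inside the second container, PG_PORT_5432_TCP_ADDR and PG_PORT_5432_TCP_PORT
# point at the pg container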
Is there a reason this is not done via port forwarding in a way that is transparent to the application? So that in the same way that I can just run my web server inside the container on standard port 80 and have Docker figure out what actual port to use, I could just be doing
psql -h 0.0.0.0 # no -p necessary, we use the default port
The port forwarding would be set up when I start docker, just like with server ports.
This is possible! It has actually been proposed by the CoreOS team; you can read more in the following blog post:
http://coreos.com/blog/Jumpers-and-the-software-defined-localhost/
Docker will soon allow you to start a container sharing the network namespace of another container; it will help with those scenarios (and in the short term, it will let you do what you suggest very easily).
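In current Docker this is the --network container:<name> mode; a quick sketch (image names are illustrative):
docker run -d --name web nginx
docker run --rm --network container:web curlimages/curl -s http://127.0.0.1/
# the second container shares web's network namespace, so nginx answers on 127.0.0.1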
Project Atomic is also following this approach:
http://www.projectatomic.io/docs/inter-container-networking/
Geard uses iptables to enable containers to connect to each other. Network namespaces allow adding iptables rules to the network namespace of a container. The basic idea is to make remote endpoints appear as if they were local to a container. For example, the database container could be made to appear to be running locally inside the application container.
