Set Docker Container iptables Rules

I have a Linux server that kicks off a variety of Docker containers on boot. I'd like to implement container-specific firewall rules for those containers. The containers connect to servers with known static IPs and ports. I've considered creating the rules inside the container's network namespace using netns, but the namespace is created via CNI, and I would like the rules to be implemented when the namespace gets created rather than configuring the firewall of an already created container.
Most of what I can find about setting up iptables via CNI refers to configuring the host firewall. Does Docker have some way of defining container-specific firewall rules that get applied when a container is run? I'd prefer not to use third-party products to accomplish this.

I made a tool a year ago for the same reason. There is no documentation, but if you run it with -h you will get a good indication of how it works. You can use it as a service on Linux with a configuration file and your rules in YAML. I hope it is still up to date with the Docker API (I just made it public). You can look at the code; it's pretty simple.
here: https://github.com/tr4cks/docker-netns
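For anyone who wants to see the mechanism such a tool automates, here is a minimal sketch of entering a running container's network namespace and adding rules with nsenter (the container name, destination address, and port are placeholders, not from the question):
$ pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
$ # allow traffic to one known server, plus loopback and reply packets
$ sudo nsenter -t "$pid" -n iptables -A OUTPUT -d 203.0.113.10 -p tcp --dport 5432 -j ACCEPT
$ sudo nsenter -t "$pid" -n iptables -A OUTPUT -o lo -j ACCEPT
$ sudo nsenter -t "$pid" -n iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ sudo nsenter -t "$pid" -n iptables -P OUTPUT DROP
This configures an already created container, which is what the question wants to avoid, but it is the same operation any namespace-aware tool performs once the namespace exists.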

Related

Docker container not accessible through server public IP (working only with UFW disabled)

I bought a VPS on Vultr (host system Ubuntu 22.04) with the example IP identified as 123.123.123 and tried to launch a new container with the following command:
docker run -d -p 8081:80 nginx:alpine
Knowing the public IP of my server, I should theoretically be able to access it in the browser at http://123.123.123:8081. However, it isn't working, at least publicly, because if I stop UFW on the host (Ubuntu 22.04):
service ufw stop
Then I'm able to access it without any problem (or via cURL over SSH without disabling UFW).
But after re-enabling the Uncomplicated Firewall with:
service ufw start
the host is unreachable again.
These are the current rules of UFW (screenshot not reproduced here).
I also have a Portainer instance running through Docker, which likewise works only when UFW is disabled.
I also tried Nginx Proxy Manager, but I'm unable to make even something as simple as this basic nginx container work. Any help is appreciated, and I'd be happy to provide more information if necessary.
Surprisingly, Docker does not work out of the box with Linux's Uncomplicated Firewall (UFW). They both modify the same iptables configuration, and this can lead to misconfigurations exposing containers that weren't supposed to be public.
There is a quick fix in Docker's official documentation, but it isn't recommended for most users, and many other users advise against it as well; please read more about that below.
Prevent Docker from manipulating iptables
It is possible to set the iptables key to false in the Docker engine's configuration file at /etc/docker/daemon.json, but this option is not appropriate for most users. It is not possible to completely prevent Docker from creating iptables rules, and creating them after-the-fact is extremely involved and beyond the scope of these instructions. Setting iptables to false will more than likely break container networking for the Docker engine.
This works; however, it is only half a solution. It disables Docker's ability to manage its own networking and can cause containers to lose internet access entirely out of the box. It can still work, but you'll need to manually maintain iptables rules for Docker containers and custom networks, which is complicated, annoying, and defeats the purpose of UFW's simplicity.
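For reference, the setting the documentation warns about is a single key in /etc/docker/daemon.json, followed by a daemon restart (again: not recommended for most users):
{
  "iptables": false
}
$ sudo systemctl restart docker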
Another solution, which requires a bit more effort, can be found in this GitHub repo detailing the problem and the steps to fix it:
https://github.com/chaifeng/ufw-docker
Also linking here a related question from Stack Overflow.
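Independent of the repo above, it's worth knowing that Docker's documented hook for custom filtering is the DOCKER-USER iptables chain, which Docker creates but never flushes. A minimal sketch (the external interface and the allowed source network are assumptions):
$ sudo iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP
This drops traffic to published container ports arriving on eth0 except from the allowed network, regardless of what UFW shows.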

Docker networks in Kubernetes/Rancher

I've been trying to convert my SimpleLogin Docker containers to Kubernetes using Rancher. However, one of the steps requires me to create a network.
sudo docker network create -d bridge \
--subnet=240.0.0.0/24 \
--gateway=240.0.0.1 \
sl-network
I couldn't really find a way to do this on Kubernetes/Rancher.
How do I set up an equivalent network like the above command in Kubernetes?
If you want more information about what this network should do, you can find it here.
You don't. Kubernetes has its own network ecosystem, which mostly acts as though every Pod and Service is on the same network. You can't create separate subnets within that; there's no way to create a separate network per logical application. You also can't control the IP range of networks in Kubernetes (it shouldn't usually be necessary in Docker either).
Generally you can communicate between Kubernetes Pods by putting a Service in front of each, and then using the Service's DNS name as a host name. If all of the parts were running in the same Namespace, and the Service in front of the database were named sl-db, then the webapp Pod could use sl-db as the host name part of the DB_URI setting.
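As a minimal sketch of that pattern (the image, names, and port are placeholders, not taken from the SimpleLogin setup):
$ kubectl create deployment sl-db --image=postgres:15
$ kubectl expose deployment sl-db --port=5432
# other Pods in the same Namespace can now reach the database at sl-db:5432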
Reading through the documentation you link to, you will probably need to do some extra work to get the Postfix MTA set up. Note that it looks like it runs outside of Docker in this setup; either you will have to port the setup to run inside Kubernetes or configure its mynetworks settings to include the network that contains the Kubernetes nodes. You will also need to set up Kubernetes ConfigMaps and Secrets to hold the various configuration files and certificates this setup needs.

Usecase for running docker container with none network type

I know Docker has a none network type, in which no IP address is assigned to the container and no network interfaces are created except the loopback interface.
I want to know of any use case for the none network type, i.e., in which scenarios we would need to run a container without an IP or interfaces.
I referred to this question, but it didn't give an exact use case.
First, as mentioned in weaveworks/weave issue 1394:
the meaning of --net=none is not clear. It could mean "no networking at all" (in which case weave networking should be disabled too), or "no Docker networking" (which is the interpretation I am choosing here).
That is what the Weave network is doing:
We create a network namespace with --net=none, inject the weave interface and then start the actual container.
We also run a default gateway within the weave network that has external connectivity.
The reason we are doing this is that services tend to bind to the wrong interface. It's important to have a single interface that is created before the container is started.
You find a similar use case with Calico:
This tutorial describes how to set up a Calico cluster in a Docker environment without Docker networking (i.e. --net=none).
With this option, Docker creates a container with its own network stack, but does not take any steps to configure its network.
Rather than have Docker configure the network, in this tutorial we use the calicoctl command line tool to add a container into a Calico network: adding the required interface and routes into the container, and configuring Calico with the correct endpoint information.
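You can see the blank slate these plugins start from with a quick check (the image choice is arbitrary):
$ docker run --rm --net=none alpine ip addr
# prints only the loopback interface: no eth0, no IP address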
Thanks @VonC, I got an idea from your answer.
It seems Docker's none network type is used by other network plugins like Calico, Weave, etc. to provide their own network stack.
Docker's networking model is based on CNM (Container Network Model), while there is another model called CNI (Container Network Interface), which is used by network plugins like Calico, Weave, etc.
I guess these network plugins (the ones that use the CNI model) use the none network type so that a container gets created with no networking, on top of which the CNI plugin provides its own networking.

Use Eureka despite having random external port of docker containers

I am writing an application that is composed of a few Spring Boot based microservices with a Zuul based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use Docker for the services; however, this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app in the container doesn't know the outside port (or IP).
The Netflix tools match what I would want for writing an efficient microservice architecture, and conceptually I really like Docker.
As far as I can see, it would be very troublesome to start the container, gather the outside port on the host, and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use Eureka with Docker-based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The Eureka server itself can run in Docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is, for example, https://localhost:8080/, but due to dynamic port assignment it is really only accessible via https://localhost:54321/.
So Eureka will return the wrong URLs for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which may not be the best solution, but it works for me...
When you start Docker with --net=host (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot, and Spring randomizes the port for me; since the app is using the host's networking stack, there is no translation to a different port (or IP).
There are some drawbacks though:
When you use host networking you can't use the link feature for these containers, either as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps.
A lot of time has passed, and I think I should elaborate on this a little further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier (see the sketch after this list).
If you have a public facing service then you would use a fixed port anyway.
For local starts via Maven or the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs surrounding random ports and service registration).
If, for whatever reason, you want or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
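A sketch of the fixed-port setup (the network, image, and service names are made up; 8080 is an assumed container port):
$ docker network create app-net
$ docker run -d --name users-service --net app-net my-users-service
$ docker run -d --name orders-service --net app-net my-orders-service
# both listen on 8080 inside their containers; each has its own IP,
# so each can register with Eureka as <container-name>:8080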
You can set up a directory for each Docker instance, share it between the host and the instance, and then write the port and IP address to a file in that directory.
$ instanceName=app-$RANDOM                       # any unique name will do
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ # 8080 is an assumed container port; adjust it to your image
$ echo "$(hostname -I | awk '{print $1}') $(docker port $instanceName 8080/tcp)" > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.

How to link Docker services across hosts?

Docker allows servers in multiple containers to connect to each other via links and service discovery. However, from what I can see, this service discovery is host-local. I would like to implement a service that uses other services hosted on a different machine.
There have been several approaches to solving this problem in Docker, such as CoreOS's jumpers, host-local services that essentially proxy to the other machine, and a whole bunch of GitHub projects for managing Docker deployments that appear to have attempted to support this use case.
Given the pace of development it is hard to follow what current best practices are. Therefore my question is essentially:
What (if any) is the current predominant method for linking across hosts in Docker, and
Are there any plans for supporting this functionality directly in the Docker system?
Update
Docker has recently announced a new tool called Swarm for Docker orchestration.
Swarm allows you to "join" multiple Docker daemons: you first create a swarm, start a swarm manager on one machine, and have Docker daemons "join" the swarm manager using the swarm's identifier. The Docker client connects to the swarm manager as if it were a regular Docker server.
When a container is started with Swarm, it is automatically assigned to a free node that meets any constraints that have been defined. The following example is taken from the blog post:
$ docker run -d -P -e constraint:storage=ssd mysql
One of the supported constraints is "node", which allows you to pin a container to a specific hostname. The swarm also resolves links across nodes.
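For illustration, pinning a container to a node in this standalone Swarm looked like the storage example above, with a node constraint instead (the hostname is a placeholder):
$ docker run -d -e constraint:node==node1 nginx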
In my testing I got the impression that Swarm doesn't yet work with volumes at a fixed location very well (or at least the process of linking them is not very intuitive), so this is something to keep in mind.
Swarm is now in beta phase.
Until recently, the Ambassador Pattern was the only Docker-native approach to remote-host service discovery. This pattern can still be used and doesn't require any magic beyond plain Docker in that the pattern consists of one or more additional containers that act as proxies.
Additionally, there are several third-party extensions to make Docker cluster-capable. Third-party solutions include:
Connecting the Docker network bridges on two hosts: lightweight and varied solutions exist, but generally with some caveats
DNS-based discovery e.g. with skydock and SkyDNS
Docker management tools such as Shipyard, and Docker orchestration tools. See this question for an extensive list: How to scale Docker containers in production
UPDATE 3
Libswarm has been renamed to Swarm and is now a separate application.
Here is the GitHub page demo to use as a starting point:
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the other nodes can reach it, it is fine.
$ swarm join --token=6856663cdefdec325839a4b7e1de38e8 --addr=<node_ip:2375>
# start the manager on any machine or your laptop
$ swarm manage --token=6856663cdefdec325839a4b7e1de38e8 --addr=<swarm_ip:swarm_port>
# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...
# list nodes in your cluster
$ swarm list --token=6856663cdefdec325839a4b7e1de38e8
http://<node_ip:2375>
UPDATE 2
The official approach is now to use libswarm; see a demo here.
UPDATE
There is a nice gist for host-to-host communication in Docker using Open vSwitch, following the same approach.
To allow service discovery there is an interesting approach based on DNS called skydock.
There is also a screencast.
Here is also a nice article using the same pieces of the puzzle, but also adding VLANs on top:
http://fbevmware.blogspot.it/2013/12/coupling-docker-and-open-vswitch.html
The patching has nothing to do with the robustness of the solution. Docker is really just a sort of DSL on top of Linux containers, and both solutions in these articles simply bypass some of Docker's automatic settings and fall back directly to Linux containers.
So you can use these solutions safely, and wait until you can do it in a simpler way once Docker implements it.
Weave is a new Docker virtual network technology that acts as a virtual ethernet switch over TCP/UDP - all you need is a Docker container running Weave on your host.
What's interesting here is:
Instead of links, use static IPs/hostnames in your virtual network
Hosts don't need full connectivity; a mesh is formed based on what peers are available, and packets will be routed multi-hop to where they need to go
This leads to interesting scenarios like:
Create a virtual network across the WAN; none of the Docker containers will know or care what actual network they sit in
Move your containers to different physical Docker hosts; Weave will detect the peers accordingly
For example, there's an example guide on how to create a multi-node Cassandra cluster across your laptop and a few cloud (EC2) hosts with two commands per host. I launched a CoreOS cluster with AWS CloudFormation, installed weave on each node in /home/core, plus my laptop's Vagrant Docker VM, and got a cluster up in under an hour. My laptop is firewalled, but Weave seemed to be okay with that; it just connects out to its EC2 peers.
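To give a flavor of the workflow, a rough sketch with a later version of the weave CLI (the peer address and image are placeholders):
$ weave launch 203.0.113.20     # point each host at at least one existing peer
$ eval $(weave env)             # route subsequent docker commands through weave
$ docker run -d --name cass1 cassandra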
Update
Docker 1.12 contains the so-called swarm mode and also adds a service abstraction. These features probably aren't mature enough for every use case, but I suggest keeping them under observation. Swarm mode at least helps in a multi-host setup, which doesn't necessarily make linking easier. The Docker-internal DNS server (since 1.11) should help you address containers by name, if those names are well known - meaning that the generated names in a Swarm context won't be so easy to address.
With the Docker 1.9 release you get built-in multi-host networking. They also provide an example script to easily provision a working cluster.
You'll need a K/V store (e.g. Consul) which allows sharing state across the different Docker engines on every host. Every Docker engine needs to be configured with that K/V store, and you can then use Swarm to connect your hosts.
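For reference, pointing each engine at the K/V store used daemon flags like these (the Consul address and interface are placeholders; the flags are specific to this legacy overlay setup):
$ dockerd --cluster-store=consul://203.0.113.5:8500 --cluster-advertise=eth0:2376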
Then you create a new overlay network like this:
$ docker network create --driver overlay my-network
Containers can now be run with the network name as a run parameter:
$ docker run -itd --net=my-network busybox
They can also be connected to a network while already running:
$ docker network connect my-network my-container
More details are available in the documentation.
The following article describes nicely how to connect docker containers on multiple hosts: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
It is possible to bridge several Docker subnets together using Open vSwitch or Tinc. I have prepared Gists to show how to do it:
Open vSwitch: https://gist.github.com/noteed/8656989
Tinc: https://gist.github.com/noteed/11031504
The advantage I see in using this solution instead of the --link option and the ambassador pattern is that I find it more transparent: there is no need for additional containers and, more importantly, no need to expose ports on the host. Actually, I think of the --link option as a temporary hack until Docker gets a nicer story about multi-host (or multi-daemon) setups.
Note: I know there is another answer pointing to my first Gist but I don't have enough karma to edit or comment on that answer.
As mentioned above, Weave is definitely a viable solution for linking Docker containers across hosts. Based on my own experience with it, it is fairly straightforward to set up. It now also has a DNS service, with which you can address containers by their DNS names.
On the other hand, there are CoreOS's Flannel and Juniper's OpenContrail for wiring containers across hosts.
It seems like docker swarm 1.14 allows you to:
assign a hostname to a container using the --hostname flag, but I haven't been able to make it work; containers are not able to ping each other by their assigned hostnames
assign services to a machine using --constraint 'node.hostname == <host>' (see the sketch below)
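For completeness, the service-level form of that constraint looks like this (the hostname is a placeholder):
$ docker service create --name web --constraint 'node.hostname == worker1' nginx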
