Access a docker container in a different subnet (bridge)

I would like to reach a container that is in another subnet (a different bridge). The source and destination bridges are connected via a veth pair.
This is needed for a test setup in which I would like to manipulate the connection properties (rate, latency, etc.) between those bridges. The VMs in these bridges are able to ping each other, but the containers are not (neither the VMs nor the other containers, if they are connected to the other bridge).
First I started the containers without any network configuration and tried to connect their veth counterparts on the host to those bridges, which I also created manually.
Actually, I created those bridges indirectly with
docker network create --subnet 192.168.1.0/26 \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.driver.mtu"="1500" \
-o "com.docker.network.bridge.name"="br-side-a" \
br-side-a
docker network create --subnet 192.168.1.64/29 \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.driver.mtu"="1500" \
-o "com.docker.network.bridge.name"="br-side-b" \
br-side-b
and connected them with
ip link add dev vsidea type veth peer name vsideb
brctl addif br-side-a vsidea
brctl addif br-side-b vsideb
ip addr add 192.168.1.10/26 dev vsidea
ip addr add 192.168.1.66/29 dev vsideb
ip link set vsidea up
ip link set vsideb up
VMs that I connected to those bridges (with IPs of the connected subnets) are able to ping each other.
My containers are started like this:
docker run -ti --network br-side-a --ip 192.168.1.20 -p 10001:10000 --name csidea --privileged debian bash
docker run -ti --network br-side-b --ip 192.168.1.67 -p 10002:10000 --name csideb --privileged debian bash
From both containers I can ping everything in their subnets (gateway IPs, vsidea/vsideb, ...), but not the IPs I assigned to the other container. Nor can the VMs reach the container IPs.
I think docker does some routing/filtering that I must turn off, but I have no idea how.

So I found a solution to my problem. As suspected, docker does filtering, I now know.
Docker automatically creates iptables rules to restrict network access between the bridges it creates. To show them, just use iptables [-L|-S]; there should be three specific rule chains: DOCKER-USER, DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2.
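For example, to show just those chains and their rules:
sudo iptables -S | grep -E 'DOCKER-(USER|ISOLATION)'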
The rules in those isolation-stage chains prevent networking between my networks. They are of the form:
-I DOCKER-ISOLATION-STAGE-1 -i <my_network> ! -o <my_network> -j DOCKER-ISOLATION-STAGE-2
-I DOCKER-ISOLATION-STAGE-2 -i <my_network> ! -o <my_network> -j DROP
I first changed the last rule from DROP to ACCEPT, just to verify my discovery, and indeed the networking between those nets worked.
So I searched for a way to prevent docker from creating those rules, but you can only disable the creation of any iptables entries by docker, not just some of them. It is also not recommended to change the isolation chains, but the DOCKER-USER chain exists for exactly this purpose. It is evaluated before any other docker rules, so you can accept these packets there instead of having them dropped. Add the following rule for every subnet you want to allow to communicate: iptables -I DOCKER-USER -i <my_bridge_network> ! -o <my_bridge_network> -j ACCEPT.
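For the two bridges created above, that means (a sketch following the rule form just described):
iptables -I DOCKER-USER -i br-side-a ! -o br-side-a -j ACCEPT
iptables -I DOCKER-USER -i br-side-b ! -o br-side-b -j ACCEPT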
PS: Sorry for my English. I hope it is understandable, but if there are unbearable mistakes, feel free to give me a hint on how I could do better.

Related

Docker IP-TABLES Error

Hey, I'm quite new to this docker stuff. I tried to start a docker container with Bitbucket, but I get this output:
root@rv1175:~# docker run -v bitbucketVolume:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server
6da32052deeba204d5d08518c93e887ac9cc27ac10ffca60fa20581ff45f9959
docker: Error response from daemon: driver failed programming external connectivity on endpoint bitbucket (55d12e0e4d76ad7b7e8ae59d5275f6ee85c8690d9f803ec65fdc77a935a25110): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 7999 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)).
root@rv1175:~#
I get the same output every time I try to start any docker container. Can someone help me?
P.S. One more question: what does 172.17.0.2 mean? I can only say that it is not my IP.
172.17.0.2 would be the IP assigned to the container within the default Docker bridge network (docker0 virtual interface). These are not reachable from the outside, though you are instructing the Docker engine to "publish" (in Docker terminology) two ports.
To do so, the engine creates port forwarding rules with iptables, which forward (in your case) all incoming traffic to ports tcp/7990 and tcp/7999 on all interfaces of the host to the same ports at 172.17.0.2 on the docker0 interface (where the process in the container is hopefully listening).
It looks like the DOCKER iptables chain where this happens is not present. Maybe you have other tools manipulating iptables that might be erasing what the Docker engine is doing. Try to identify them and restart the Docker engine (it should re-create everything on startup).
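On a systemd-based host, for example, that restart would be:
sudo systemctl restart docker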
You can also instruct the engine not to manipulate iptables by configuring the Docker daemon appropriately. You would then need to set things up yourself if you want to use the network bridge driver (though you could also use the host driver). Here is a good example of doing so.
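For reference, the daemon setting behind this is a one-liner in /etc/docker/daemon.json (a sketch; it assumes there is no existing daemon.json whose contents you would need to merge):
{
  "iptables": false
}
Restart the daemon afterwards for the change to take effect.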

How to map a docker container IP to a host IP (NAT instead of NAPT)?

The main goal is to do real NAT instead of NAPT. Note that the normal docker run -p ip:port2:port1 command actually does NAPT (address + port translation) instead of NAT (address translation only). Is it possible to map the address only, but keep all exposed ports the same as the container's, like docker run -p=ip1:*:* ..., instead of one by one or as a range?
ps.1. My port range is rather big (22-50070, ssh through HDFS), so the port-range approach won't work.
ps.2. Maybe I need a swarm of virtual machines and to join the host into the swarm.
ps.3. I raised a feature request on GitHub. Not sure if they will accept it, but currently there are 2000+ open issues (it's that popular).
Solution
On Linux, you can access any container by IP and port without any binding (no -p) out of the box. Docker version: CE 17+.
If your host is Windows and docker is running on a Linux VM, like mine, the only thing needed to access the containers is to add a route on Windows: route add -p 172.16.0.0 mask 255.240.0.0 ip_of_your_vm. Now you can access all containers by IP:port, without any port mapping, from both the Windows host and the Linux VM.
There are a few options. One is to decide which port range you want to map, then use it in your docker run:
docker run -p 192.168.33.101:80-200:80-200 <your image>
The above maps all ports from 80 to 200 to the same ports on your container, assuming 192.168.33.101 is a spare IP on your host. Unfortunately, it is not possible to map a larger port range: docker forks an iptables process per port to set up the rules, which exhausts memory and raises an error like the one below.
docker: Error response from daemon: driver failed programming external connectivity on endpoint zen_goodall (0ae6cec360831b46fe3668d6aad9f5f72b6dac5d26cc6c817452d1402d12f02c): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8513 -j DNAT --to-destination 172.17.0.3:8513 ! -i docker0: (fork/exec /sbin/iptables: resource temporarily unavailable)).
This is not the right way to map ports in docker anyway, and it is not a use case the docker team is likely to support, so they may never fix the above issue. The next option is to run your container without any port publishing and to add the iptables rules below yourself:
DOCKER_IP=172.17.0.2   # the container's internal IP
ACTION=A               # A appends the rules; D deletes them
IP=192.168.33.101      # the host IP to dedicate to the container
# DNAT everything arriving for $IP to the container
sudo iptables -t nat -$ACTION DOCKER -d $IP -j DNAT --to-destination $DOCKER_IP ! -i docker0
# accept the forwarded traffic onto the docker0 bridge
sudo iptables -t filter -$ACTION DOCKER ! -i docker0 -o docker0 -p tcp -d $DOCKER_IP -j ACCEPT
# masquerade hairpin traffic from the container to itself
sudo iptables -t nat -$ACTION POSTROUTING -p tcp -s $DOCKER_IP -d $DOCKER_IP -j MASQUERADE
ACTION=A adds the rules, and ACTION=D deletes them. This forwards all traffic for your chosen IP to the DOCKER_IP. It is only good for a testing server; it is not recommended on staging or production. Docker adds a lot more rules to prevent other containers from poking into your container, and this setup offers no such protection.
I don't think there is a direct way to do what you are asking.
If you use the -P option with docker run, all ports that are exposed using EXPOSE in the Dockerfile automatically get published on random host ports. With the -p option, the only way is to specify the option multiple times for multiple ports.
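For example (a sketch; myimage stands for any image whose Dockerfile contains EXPOSE entries):
docker run -d -P --name=mycontainer myimage
docker port mycontainer
The second command lists which random host ports were assigned to the exposed container ports.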

Communicating between Docker containers in different networks on the same host

Is there any possibility of making containers in different networks on the same host communicate? Please note that I am not using docker-compose at the moment.
The following is a summary of what I did. I created two networks using the following commands
docker network create --driver bridge mynetwork1
docker network create --driver bridge mynetwork2
Then I ran two containers on each of these created networks using the commands:
docker run --net=mynetwork1 -it --name=mynet1container1 mycontainerimage
docker run --net=mynetwork1 -it --name=mynet1container2 mycontainerimage
docker run --net=mynetwork2 -it --name=mynet2container1 mycontainerimage
docker run --net=mynetwork2 -it --name=mynet2container2 mycontainerimage
I then identified the IP Addresses of each of the containers from the networks created using
docker network inspect mynetwork1
docker network inspect mynetwork2
Using those I was able to communicate between the containers in the same network, but I could not communicate between the containers across the networks. Communication was possible only by adding the containers to the same network.
Much thanks...
Containers in different networks cannot communicate with each other because iptables drops such packets. This is done by the DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2 chains in the filter table.
sudo iptables -t filter -vL
Rules can be added to the DOCKER-USER chain to allow communication between different networks. In the above scenario, the following commands will allow ANY container in mynetwork1 to communicate with ANY container in mynetwork2.
The bridge interface names of the networks (mynetwork1 and mynetwork2) need to be found first. They usually look like br-07d0d51191df or br-85f51d1cfbf6 and can be found using "ifconfig" or "ip link show". Since there are multiple bridge interfaces, to identify the correct ones for the networks of interest, the inet address of the bridge interface (shown in ifconfig) should match the subnet shown by 'docker network inspect mynetwork1'.
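Alternatively, since the default bridge interface name is br- followed by the first 12 characters of the network ID, it can be derived directly (a sketch; it assumes the name was not overridden with the com.docker.network.bridge.name option):
echo br-$(docker network inspect -f '{{.Id}}' mynetwork1 | cut -c1-12)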
sudo iptables -I DOCKER-USER -i br-########1 -o br-########2 -j ACCEPT
sudo iptables -I DOCKER-USER -i br-########2 -o br-########1 -j ACCEPT
The rules can be fine-tuned to allow only communication between specific IPs, e.g.:
sudo iptables -I DOCKER-USER -i br-########1 -o br-########2 -s 172.17.0.2 -d 172.19.0.2 -j ACCEPT
sudo iptables -I DOCKER-USER -i br-########2 -o br-########1 -s 172.19.0.2 -d 172.17.0.2 -j ACCEPT
Issue
Two containers cannot communicate because they are not on the same network.
Solution a)
Connect one container to the other container's network (this may not meet the constraints you have).
Solution b)
Create a third network and plug both containers into this network.
How to
The docker run command accepts only one occurrence of the --net option, so what you have to do is docker start the containers and then docker network connect them to a shared network.
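For example, using the container names from the question (a sketch):
docker network create mynetwork3
docker network connect mynetwork3 mynet1container1
docker network connect mynetwork3 mynet2container1
The two containers can then reach each other over mynetwork3, even by name, since user-defined networks provide DNS resolution.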
The answer you are looking for is here: https://stackoverflow.com/a/34038381/5321002
According to the Docker docs, containers can only communicate within networks, not across networks. You can attach a container to two networks and be able to communicate that way.
Edit: although at that point, why have two networks in the first place?
Here's the link:
https://docs.docker.com/engine/userguide/networking/dockernetworks/
-Bruce

Limit Network access but allow a specific IP for a Running Docker Container

I am trying to use a docker container in which only one specific IP address is reachable from inside the running container.
iptables only works in privileged docker containers, but then the user can change the iptables rules themselves.
A nice idea would be to create a docker image whose Dockerfile sets up iptables, but there is no option for privileged rights while building an image.
Anyone have an idea how to solve this issue?
Best
Each docker container has a unique IP address, so if you want a container with address 172.17.0.21 to be able to access only the address 8.8.8.8, you could do something like:
iptables -A FORWARD -s 172.17.0.21 -d 8.8.8.8 -j ACCEPT
iptables -A FORWARD -s 172.17.0.21 -j REJECT --reject-with icmp-host-prohibited
It is also possible to modify the iptables rules of an unprivileged container from the host, using the nsenter command. For example, if you start a Docker container:
docker run --name example -d myimage
You can get the PID of that container like this:
pid=$(docker inspect -f '{{.State.Pid}}' example)
And then use nsenter to run commands inside that container's network namespace:
nsenter -t $pid -n iptables ...
These commands run without the capability restrictions that apply to commands run inside the container.
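For example, to restrict an unprivileged container to a single destination IP along the lines of the rules above (a sketch; 8.8.8.8 is the example target and $pid is the variable set earlier):
nsenter -t $pid -n iptables -A OUTPUT -o lo -j ACCEPT
nsenter -t $pid -n iptables -A OUTPUT -d 8.8.8.8 -j ACCEPT
nsenter -t $pid -n iptables -A OUTPUT -j REJECT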

Exposing a port on a live Docker container

I'm trying to create a Docker container that acts like a full-on virtual machine. I know I can use the EXPOSE instruction inside a Dockerfile to expose a port, and I can use the -p flag with docker run to assign ports, but once a container is actually running, is there a command to open/map additional ports live?
For example, let's say I have a Docker container that is running sshd. Someone else using the container ssh's in and installs httpd. Is there a way to expose port 80 on the container and map it to port 8080 on the host, so that people can visit the web server running in the container, without restarting it?
You cannot do this via Docker, but you can access the container's un-exposed port from the host machine.
If you have a container with something running on its port 8000, you can run
wget http://container_ip:8000
To get the container's IP address, run these two commands:
docker ps
docker inspect container_name | grep IPAddress
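A tidier alternative to the grep is Docker's format template, which prints only the address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name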
Internally, Docker shells out to call iptables when you run an image, so maybe some variation on this will work.
To expose the container's port 8000 on your localhost's port 8001:
iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000
One way you can work this out is to setup another container with the port mapping you want, and compare the output of the iptables-save command (though, I had to remove some of the other options that force traffic to go via the docker proxy).
NOTE: this is subverting docker, so should be done with the awareness that it may well create blue smoke.
OR
Another alternative is to look at the (new? post 0.6.6?) -P option - which will use random host ports, and then wire those up.
OR
With 0.6.5, you could use the LINKs feature to bring up a new container that talks to the existing one, with some additional relaying to that container's -p flags? (I have not used LINKs yet.)
OR
With docker 0.11? you can use docker run --net host ... to attach your container directly to the host's network interfaces (i.e., the network is not namespaced), and thus all ports you open in the container are exposed.
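A quick way to see the effect (a sketch using the small alpine image):
docker run --rm --net host alpine ip addr
This prints the host's interfaces, because the container shares the host's network namespace.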
Here's what I would do:
Commit the live container.
Run the container again with the new image, with ports open (I'd recommend mounting a shared volume and opening the ssh port as well)
sudo docker ps
sudo docker commit <containerid> <foo/live>
sudo docker run -i -p 22 -p 8000:80 -v /data:/data -t <foo/live> /bin/bash
While you cannot expose a new port of an existing container, you can start a new container in the same Docker network and get it to forward traffic to the original container.
# docker run \
--rm \
-p $PORT:1234 \
verb/socat \
TCP-LISTEN:1234,fork \
TCP-CONNECT:$TARGET_CONTAINER_IP:$TARGET_CONTAINER_PORT
Worked Example
Launch a web-service that listens on port 80, but do not expose its internal port 80 (oops!):
# docker run -ti mkodockx/docker-pastebin # Forgot to expose PORT 80!
Find its Docker network IP:
# docker inspect 63256f72142a | grep IPAddress
"IPAddress": "172.17.0.2",
Launch verb/socat with port 8080 exposed, and get it to forward TCP traffic to that IP's port 80:
# docker run --rm -p 8080:1234 verb/socat TCP-LISTEN:1234,fork TCP-CONNECT:172.17.0.2:80
You can now access the pastebin on http://localhost:8080/; your requests go to socat:1234, which forwards them to pastebin:80, and the responses travel the same path in reverse.
iptables hacks don't work, at least not on Docker 1.4.1.
The best way is to run another container with the exposed port and to relay with socat. This is what I did to (temporarily) connect to the database with SQLPlus:
docker run -d --name sqlplus --link db:db -p 1521:1521 sqlplus
Dockerfile:
FROM debian:7
RUN apt-get update && \
apt-get -y install socat && \
apt-get clean
USER nobody
CMD socat -dddd TCP-LISTEN:1521,reuseaddr,fork TCP:db:1521
Here's another idea. Use SSH to do the port forwarding; this has the benefit of also working in OS X (and probably Windows) when your Docker host is a VM.
docker exec -it <containerid> ssh -R5432:localhost:5432 <user>@<hostip>
To add to the accepted answer's iptables solution, I had to run two more commands on the host to open it to the outside world.
HOST> iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
HOST> iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
HOST> iptables -A DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https
Note: I was opening port https (443), my docker internal IP was 172.17.0.2
Note 2: These rules are temporary and will only last until the container is restarted.
I had to deal with this same issue and was able to solve it without stopping any of my running containers. This solution is up to date as of February 2016, using Docker 1.9.1. Anyway, this answer is a detailed version of @ricardo-branco's answer, in more depth for new users.
In my scenario, I wanted to temporarily connect to MySQL running in a container, and since other application containers are linked to it, stopping, reconfiguring, and re-running the database container was a non-starter.
Since I'd like to access the MySQL database externally (from Sequel Pro via SSH tunneling), I'm going to use port 33306 on the host machine. (Not 3306, just in case another MySQL instance is running on the host.)
About an hour of tweaking iptables proved fruitless.
Step by step, here's what I did:
mkdir db-expose-33306
cd db-expose-33306
vim Dockerfile
Edit the Dockerfile, placing this inside:
# Exposes port 3306 on linked "db" container, to be accessible at host:33306
# (Recommended to use the same base as the DB container)
FROM ubuntu:latest
RUN apt-get update && \
apt-get -y install socat && \
apt-get clean
USER nobody
EXPOSE 33306
CMD socat -dddd TCP-LISTEN:33306,reuseaddr,fork TCP:db:3306
Then build the image:
docker build -t your-namespace/db-expose-33306 .
Then run it, linking it to your running container. (Use -d instead of --rm to keep it in the background until explicitly stopped and removed. I only want it running temporarily in this case.)
docker run -it --rm --name=db-33306 --link the_live_db_container:db -p 33306:33306 your-namespace/db-expose-33306
You can use SSH to create a tunnel and expose your container in your host.
You can do it both ways, from container to host and from host to container. But you need an SSH tool like OpenSSH in both (a client in one and a server in the other).
For example, in the container, you can do
$ yum install -y openssh openssh-server.x86_64
service sshd restart
Stopping sshd: [FAILED]
Generating SSH2 RSA host key: [ OK ]
Generating SSH1 RSA host key: [ OK ]
Generating SSH2 DSA host key: [ OK ]
Starting sshd: [ OK ]
$ passwd # You need to set a root password..
You can find the container IP address from this line (in the container):
$ ifconfig eth0 | grep "inet addr" | sed 's/^[^:]*:\([^ ]*\).*/\1/g'
172.17.0.2
Then in the host, you can just do:
sudo ssh -NfL 80:0.0.0.0:80 root@172.17.0.2
Based on Robm's answer I have created a Docker image and a Bash script called portcat.
Using portcat, you can easily map multiple ports to an existing Docker container. An example using the (optional) Bash script:
curl -sL https://raw.githubusercontent.com/archan937/portcat/master/script/install | sudo bash
portcat my-awesome-container 3456 4444:8080
And there you go! Portcat is mapping:
port 3456 to my-awesome-container:3456
port 4444 to my-awesome-container:8080
Please note that the Bash script is optional; the following commands are equivalent:
ipAddress=$(docker inspect my-awesome-container | grep IPAddress | grep -o '[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}' | head -n 1)
docker run -p 3456:3456 -p 4444:4444 --name=alpine-portcat -it pmelegend/portcat:latest $ipAddress 3456 4444:8080
I hope portcat will come in handy for you guys. Cheers!
There is a handy HAProxy wrapper.
docker run -it -p LOCALPORT:PROXYPORT --rm --link TARGET_CONTAINER:EZNAME -e "BACKEND_HOST=EZNAME" -e "BACKEND_PORT=PROXYPORT" demandbase/docker-tcp-proxy
This creates an HAProxy proxy to the target container. Easy peasy.
Here are some solutions:
https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/12
The solution is to map the port while running the container:
docker run -d --net=host myvnc
That will expose and map the ports to your host automatically, since the container shares the host's network.
In case no answer works for you: check whether your target container is already running in a docker network:
CONTAINER=my-target-container
docker inspect $CONTAINER | grep NetworkMode
"NetworkMode": "my-network-name",
Save it for later in the variable $NET_NAME:
NET_NAME=$(docker inspect --format '{{.HostConfig.NetworkMode}}' $CONTAINER)
If so, you should run the proxy container in the same network.
Next look up the alias for the container:
docker inspect $CONTAINER | grep -A2 Aliases
"Aliases": [
"my-alias",
"23ea4ea42e34a"
Save it for later in the variable $ALIAS:
ALIAS=$(docker inspect --format '{{index .NetworkSettings.Networks "'$NET_NAME'" "Aliases" 0}}' $CONTAINER)
Now run socat in a container in the network $NET_NAME to bridge to the $ALIASed container's exposed (but not published) port:
docker run \
--detach --name my-new-proxy \
--net $NET_NAME \
--publish 8080:1234 \
alpine/socat TCP-LISTEN:1234,fork TCP-CONNECT:$ALIAS:80
You can use an overlay network like Weave Net, which assigns a unique IP address to each container and implicitly exposes all ports to every container that is part of the network.
Weave also provides host network integration. It is disabled by default, but if you also want to access the container IP addresses (and all their ports) from the host, you can simply run weave expose.
Full disclosure: I work at Weaveworks.
It's not possible to do live port mapping, but there are multiple ways to give a Docker container what amounts to a real interface, like a virtual machine would have.
Macvlan Interfaces
Docker now includes a Macvlan network driver. This attaches a Docker network to a "real world" interface and allows you to assign that network's addresses directly to containers (like a virtual machine's bridged mode).
docker network create \
-d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
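A container can then be given one of those addresses directly (a sketch; the address is just an example from the subnet above):
docker run --net pub_net --ip 172.16.86.10 -ti debian bash
Note that, by design, the host itself cannot reach macvlan containers through the parent interface; other machines on the network can.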
pipework can also map a real interface into a container, or set up a sub-interface, on older versions of Docker.
Routing IPs
If you have control of the network, you can route additional subnets to your Docker host for use in the containers.
You then assign such a subnet to a Docker network, attach the containers to it, and set up your Docker host to route the packets via the docker bridge.
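A sketch of that approach (all addresses are examples; it assumes 10.1.2.0/24 is free and 192.168.0.10 is the Docker host):
# on the upstream router
ip route add 10.1.2.0/24 via 192.168.0.10
# on the Docker host
sysctl -w net.ipv4.ip_forward=1
docker network create --subnet 10.1.2.0/24 \
  -o "com.docker.network.bridge.enable_ip_masquerade"="false" routed-net
Disabling masquerading makes the containers appear with their real routed addresses instead of being NATed behind the host.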
Shared host interface
The --net host option allows the host interface to be shared with a container, but this is probably not a good setup for running multiple containers on one host, due to the shared nature.
Read Ricardo's response first. It worked for me.
However, there is a scenario where it won't work: when the running container was started with docker-compose, because docker-compose (I'm running docker 1.17) creates a new network. In that case, look the network up with
docker network ls
and then append --net network_name to the command:
docker run -d --name sqlplus --net network_name --link db:db -p 1521:1521 sqlplus
docker run -i --expose=22 b5593e60c33b bash
ref: https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/5
This may help you.
