Docker with public IP as a client

I have a host with 10.1.1.2 and I'd like to create a Docker container on it that will have the IP address 10.1.1.3 and that will be able to ping (and later send its syslog to) an external machine on the same network (e.g. 10.1.1.42). I'd also like the packets to arrive with source address 10.1.1.3, so as far as I understand, no NAT.
I am not interested in inbound network connections to the Docker container, only outbound.

There is apparently an unresolved issue for this feature right now, so the only current solution is to manually create the necessary iptables rules after launching your container. E.g., something like:
iptables -t nat -I POSTROUTING 1 -s <container_ip> -j SNAT --to-source 10.1.1.3
You will also need to add that address to an interface on your host:
ip addr add 10.1.1.3/24 dev eth0
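Putting the pieces together, a minimal end-to-end sketch (assuming the container runs on the default bridge and your LAN interface is eth0; the name mycontainer is hypothetical):
# add the extra address to the host's LAN interface
ip addr add 10.1.1.3/24 dev eth0
# look up the container's bridge IP
CONTAINER_IP=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer)
# rewrite the source address of the container's outbound traffic
iptables -t nat -I POSTROUTING 1 -s "$CONTAINER_IP" -j SNAT --to-source 10.1.1.3
# verify: if the image ships ping, 10.1.1.42 should see requests from 10.1.1.3
docker exec mycontainer ping -c 1 10.1.1.42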

Related

Assigning multiple IP Addresses to Docker Container

I have multiple private IPv4 addresses on my machine (each one bound to a separate public IP address):
10.0.0.4
10.0.0.5
10.0.0.6
10.0.0.7
10.0.0.8
When I run my application, which uses each of these IP addresses to perform requests, everything works fine and as expected. However, when I try to run it in Docker, my application claims that it failed to bind to the IP address. I believe this is because Docker networking is isolated.
I'm wondering how I can "expose" these IPv4 addresses to my service via a docker-compose.yml file.
You're right that Docker's network isolation is involved: your application will see a single unpredictable IP address, and Docker provides a NAT layer that translates the host's network addresses to this.
The most common way to set this up is to set your application to bind to 0.0.0.0, "all interfaces". The Compose ports: setting takes an optional IP address part, which also defaults to 0.0.0.0. You can have multiple ports: targeting the same container port, so long as the host IP and port pairs don't conflict with other bound ports or non-Docker services.
As a hypothetical example:
version: '3.8'
services:
  app:
    image: registry.example.com/app
    environment:
      # Tell the application to listen on all interfaces, port 8080
      BIND_ADDR: '0.0.0.0:8080'
    ports:
      # As the default HTTP service on the first IP address
      - '10.0.0.4:80:8080'
      # On its own port on the last IP address
      - '10.0.0.8:8080:8080'
      # And not on any of the other IP addresses at all
An alternative is to disable Docker's networking stack with network_mode: host. In this mode your application will see all of the host interfaces directly, and if it has specific logic to selectively bind to them, that will work just as if the program wasn't running in a container. However, this also disables all other Docker networking functionality: you cannot hide or remap ports, and you cannot communicate with other containers by hostname, only via their published ports. I'd generally discourage host networking, but it might be a reasonable approach to this particular scenario.
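For completeness, a hypothetical host-networking variant of the same service (note that with network_mode: host the ports: section is ignored, so the application's own bind logic decides which addresses it listens on):
version: '3.8'
services:
  app:
    image: registry.example.com/app
    network_mode: host
    environment:
      # the application binds directly to the host's interfaces
      BIND_ADDR: '10.0.0.4:8080'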
You can configure your Docker container to use multiple IP addresses in at least two ways:
Add additional IP addresses inside the container manually:
container # ip address add 172.17.1.4/32 dev eth0
container # ip address add 172.17.1.5/32 dev eth0
...
Note: these addresses probably need to belong to the container's subnet, though I'm not sure. docker network inspect bridge prints the default bridge network's subnet; it's 172.17.0.0/16 for me.
(source: Multiple ip on same interface in Docker container)
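If you only want the subnet, docker network inspect also accepts a Go template; a sketch:
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'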
or
Create multiple bridge networks, each with a different subnet (IP range), then attach your container to these multiple networks.
For details, see
https://docs.docker.com/engine/reference/commandline/network_create/#specify-advanced-options
https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks
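A sketch of the second approach (network names, image name, and subnets are hypothetical; with this variant the SNAT rules below would match the container's address on each of these networks rather than 172.17.1.x):
# one user-defined bridge per extra subnet
docker network create --driver bridge --subnet 172.18.0.0/24 net_a
docker network create --driver bridge --subnet 172.19.0.0/24 net_b
# start the container on the first network, then attach the second
docker run -d --name mycontainer --network net_a myimage
docker network connect net_b mycontainer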
Then you can configure your docker host to route (packets from) these different container IP addresses via your different host IP addresses:
host # iptables -t nat -I POSTROUTING -p all -s 172.17.1.4/32 -j SNAT --to-source 10.0.0.4
host # iptables -t nat -I POSTROUTING -p all -s 172.17.1.5/32 -j SNAT --to-source 10.0.0.5
...
(source: https://serverfault.com/a/686107)
The end result is that outgoing traffic from your container via the different container IPs is routed via the different host IPs. You can confirm this, e.g., with:
container # curl -v --interface 172.17.1.4 <some destination that will show which host IP is used>
Regarding docker compose, I don't know enough about it to answer that part of your question.

Docker: Access host service from container

I've seen several variants of this apparently common scenario, but all the offered solutions are specific to each case (for example, if the service you want to share is, say, MySQL, share the sock file).
I have a service on the host. For simplicity's sake let's say it's netcat listening on 127.0.0.1:5000 TCP.
I need to connect from a container to that address using another netcat, or telnet, and be able to send and receive data.
The host service needs to be on 127.0.0.1 (and not another IP) for a few reasons (including security), and the service is unable to bind to more than one interface. Using 0.0.0.0 is really not an option.
In theory this seems like something that iptables should be able to solve, but I haven't been able to get it to work, so I assume it's Docker filtering out packets before the rules on the host have a chance to process them (iptables logging doesn't show any packet being received on the host).
Edit: If possible I'd like to avoid the host network driver.
Figured it out. As usual, things are really easy once you know them.
Anyway, these are the three things I had to do:
1) In the firewall, accept connections from the bridge interfaces
iptables -A INPUT -i br+ -p TCP --dport 5000 -j ACCEPT
2) In PREROUTING, change the destination IP:
iptables -t nat -I PREROUTING -d 172.17.0.1 -p tcp --dport 5000 -j DNAT --to 127.0.0.1:5000
3) Allow non-local IPs to connect to the loopback:
sysctl -w net.ipv4.conf.all.route_localnet=1
The last one is probably a bit unsafe as-is; it should be scoped to just the bridge interfaces (instead of "all").
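For example, to limit it to Docker's default bridge instead of all interfaces (assuming the bridge is named docker0):
sysctl -w net.ipv4.conf.docker0.route_localnet=1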
After doing this, the containers can connect to 172.17.0.1:5000, and the service running on the host, listening only on 127.0.0.1:5000, handles the connection correctly.
From inside of a Docker container, how do I connect to the localhost of the machine?
According to this, you should be able to reach the host's 127.0.0.1 by pointing your container at host.docker.internal.
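Note that on Linux, host.docker.internal is not defined inside containers by default; on Docker 20.10 and later you can map it yourself with the special host-gateway value (the image name here is just a placeholder):
docker run --add-host host.docker.internal:host-gateway myimage
This resolves to the host's address on the Docker bridge (typically 172.17.0.1), so a service bound strictly to 127.0.0.1 still needs something like the DNAT/route_localnet setup above to answer.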
Why would you like to avoid the host network driver? And why don't you put the host's service into a container as well? That would solve a lot of your problems.

Set docker container IP to global IP

I am currently trying to set an IP for a container that can be accessed by any other machine within the network. The network my docker-host machine is in is 9.158.143.0/24, with a gateway of 9.158.143.254.
I tried setting the IP (any free one) within the subnet 9.158.143.0/24 with the network type as bridge. It did not work (I was not even able to ping the container from the docker-host machine).
Then I created a user-defined network with subnet 9.10.10.0/24 and the bridge network driver. The container created is pingable, but only from the docker-host machine.
Is there any way that this container can be made accessible from all the machines within the network (not just the docker-host machine)?
PS: I do not want to expose the port. (Is changing routes helpful? I know very little networking.)
I also tried a user-defined network with the macvlan driver. In this case I am able to ping the container from other machines, but not from the docker-host machine where the container is present.
To do this, you need to:
1- dedicate a new address for this container, on your local network;
2- configure this address as a secondary address on your docker engine host;
3- enable routing between local interfaces on your docker engine host;
4- use iptables to redirect the connections to this new address to your container.
Here is an example:
We suppose:
your container instance is named mycontainer
your LAN network interface is named eth0
you have chosen the available network address 192.168.1.178 on your LAN
So, you need to do the following:
start your container (in this example, we start a web server):
docker run --rm -t -i --name=mycontainer httpd:2.2-alpine
enable local IP routing:
sysctl -w net.ipv4.ip_forward=1
add a secondary address for your container:
ip addr add 192.168.1.178/24 dev eth0
redirect connections for this address to your container:
First, find your container local address:
% docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
172.17.0.2
Now, use this local address with iptables:
iptables -t nat -A PREROUTING -i eth0 -d 192.168.1.178/32 -j DNAT --to 172.17.0.2
Finally, you can check it works by accessing http://192.168.1.178/ from any other host on your LAN.
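When the container goes away, you can mirror the steps above to clean up (a sketch):
# remove the DNAT rule and the secondary address
iptables -t nat -D PREROUTING -i eth0 -d 192.168.1.178/32 -j DNAT --to 172.17.0.2
ip addr del 192.168.1.178/24 dev eth0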

How to map a docker container IP to a host IP (NAT instead of NAPT)?

The main goal is to do real NAT instead of NAPT. Note that the normal docker run -p ip:port2:port1 command is actually doing NAPT (address+port translation) instead of NAT (address translation). Is it possible to map the address only, but keep all exposed ports the same as the container's, like docker run -p=ip1:*:* ..., instead of one by one or as a range?
ps.1. My port range is rather big (22-50070, ssh-hdfs), so the port-range approach won't work.
ps.2. Maybe I need a swarm of virtual machines and join the host into the swarm.
ps.3 I raised a feature request on GitHub. Not sure if they will accept it, but currently there are 2000+ open issues (it's that popular).
Solution
On Linux, you can access any container by IP and port without any binding (no -p) out of the box. Docker version: CE 17+.
If your host is Windows and Docker is running on a Linux VM, as in my case, the only thing you need to do to access the containers is add the route on Windows: route add -p 172.16.0.0 mask 255.240.0.0 ip_of_your_vm. Now you can access all containers by IP:port, without any port mapping, from both the Windows host and the Linux VM.
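A quick sanity check on the Linux side (the container name web is hypothetical; httpd:2.2-alpine is the same image used elsewhere on this page):
docker run -d --name web httpd:2.2-alpine
WEB_IP=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web)
# should return the httpd welcome page without any -p mapping
curl -s "http://$WEB_IP/"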
You have a few options. One is to decide which port range you want to map, then use that in your docker run:
docker run -p 192.168.33.101:80-200:80-200 <your image>
The above will map all ports from 80 to 200 on your container, assuming the spare IP on your host is 192.168.33.101. Unfortunately, it is not possible to map a much larger port range, as Docker forks an iptables process per port to set up the rules and exhausts memory. It would raise an error like the one below:
docker: Error response from daemon: driver failed programming external connectivity on endpoint zen_goodall (0ae6cec360831b46fe3668d6aad9f5f72b6dac5d26cc6c817452d1402d12f02c): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8513 -j DNAT --to-destination 172.17.0.3:8513 ! -i docker0: (fork/exec /sbin/iptables: resource temporarily unavailable)).
This is not the right way to do the mapping in Docker, but it's also not a use case the maintainers are likely to support, so the above issue may never be fixed. The next option is to run your Docker container without any port publishing and use the iptables rules below:
# container IP (from docker inspect) and the host IP you want to map to it
DOCKER_IP=172.17.0.2
IP=192.168.33.101
# ACTION=A adds the rules; ACTION=D deletes them
ACTION=A
# DNAT everything arriving for $IP (except from docker0) to the container
sudo iptables -t nat -$ACTION DOCKER -d $IP -j DNAT --to-destination $DOCKER_IP ! -i docker0
# accept the forwarded traffic into the container
sudo iptables -t filter -$ACTION DOCKER ! -i docker0 -o docker0 -p tcp -d $DOCKER_IP -j ACCEPT
# masquerade hairpin traffic from the container back to itself
sudo iptables -t nat -$ACTION POSTROUTING -p tcp -s $DOCKER_IP -d $DOCKER_IP -j MASQUERADE
ACTION=A will add the rules and ACTION=D will delete them. This sets up forwarding of all traffic from your IP to the DOCKER_IP. This is only good if you are doing it on a testing server; it is not recommended on staging or production. Docker adds a lot more rules to prevent other containers from poking into your container, but this setup offers no such protection whatsoever.
I don't think there is a direct way to do what you are asking.
If you use the "-P" option with "docker run", all ports that are exposed using "EXPOSE" in the Dockerfile will automatically get published on random ports on the host. With the "-p" option, the only way is to specify the option multiple times for multiple ports.
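For example (the container name is hypothetical, and the host port Docker picks will vary):
docker run -d -P --name demo httpd:2.2-alpine
docker port demo
# e.g.: 80/tcp -> 0.0.0.0:32768   (host port chosen by Docker)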

How to make docker only use a eth1 interface to communicate with other hosts?

I'm using DigitalOcean. The interface "eth1" is private and "eth0" is public. How can I make the bridge created by Docker, docker0, use only the private interface eth1?
The bridge created by docker isn't attached to any physical interface. External access is mediated by layer 3 forwarding and NAT rules in your iptables nat table.
This means that you can control which interface is used by Docker containers by manipulating your routing table and/or firewall rules. For example, to prevent your containers from forwarding traffic out eth0:
iptables -A FORWARD -i docker0 -o eth0 -j DROP
This would drop any traffic from containers that would go out eth0.
Of course, if (a) your container is trying to access an external host and (b) the only route to that host is via your default gateway, which is probably out eth0, then your container is now out of luck.
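Conversely, to steer container traffic toward specific hosts out eth1, you can add an explicit route for the private network (the 10.132.0.0/16 subnet here is hypothetical; use your actual private range):
# send traffic for the private network out eth1 rather than the default gateway
ip route add 10.132.0.0/16 dev eth1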
