I've seen several variants of this apparently common scenario, but all the offered solutions are specific to each case (for example, if the service you want to share is, say, MySQL, share the socket file).
I have a service on the host. For simplicity's sake let's say it's netcat listening on 127.0.0.1:5000 TCP.
I need to connect from a container to that address using another netcat, or telnet, and be able to send and receive data.
The host service needs to listen on 127.0.0.1 (and not another IP) for a few reasons (including security), and the service is unable to bind to more than one interface. Using 0.0.0.0 is really not an option.
In theory this seems like something iptables should be able to solve, but I haven't been able to get it to work, so I assume Docker is filtering out packets before the rules on the host have a chance to process them (iptables logging doesn't show any packet being received on the host).
Edit: If possible I'd like to avoid the host network driver.
Figured it out. As usual, things are really easy once you know them.
Anyway, these are the three things I had to do:
1) In the firewall, accept connections from the bridge interfaces:
iptables -A INPUT -i br+ -p tcp --dport 5000 -j ACCEPT
2) In PREROUTING, change the destination IP:
iptables -t nat -I PREROUTING -d 172.17.0.1 -p tcp --dport 5000 -j DNAT --to 127.0.0.1:5000
3) Allow non-local IPs to connect to the loopback:
sysctl -w net.ipv4.conf.all.route_localnet=1
The last one is probably a bit unsafe as is; it should be scoped to just the bridge interfaces instead of "all".
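For example, to scope it to the default bridge only (a sketch; per-interface sysctl keys take a concrete interface name, so the br+ wildcard from the firewall rule can't be used here):
sysctl -w net.ipv4.conf.docker0.route_localnet=1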
After doing this, the containers can connect to 172.17.0.1:5000 and the service which is running on the host listening only to 127.0.0.1:5000 handles the connection correctly.
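A quick way to test this from a throwaway container (assuming the busybox image and the default bridge gateway address 172.17.0.1):
docker run --rm busybox sh -c 'echo hello | nc 172.17.0.1 5000'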
From inside of a Docker container, how do I connect to the localhost of the machine?
According to this, instead of 127.0.0.1 you should be able to point the container at host.docker.internal.
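For example (a sketch: host.docker.internal exists out of the box on Docker Desktop for Mac and Windows; on a Linux host you have to add it yourself with --add-host, available since Docker 20.10, and port 8000 here is just a placeholder):
docker run --rm --add-host=host.docker.internal:host-gateway busybox wget -qO- http://host.docker.internal:8000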
Why would you like to avoid the host network driver? And why don't you put the host's service into a container as well? That would solve a lot of your problems.
Some programs in my Docker container are making unwanted requests to e.g. Google Analytics and other tracking software, sharing my information. I want to block all this traffic while still being able to access the container from outside.
I tried adding --network=host; this worked correctly, only allowing localhost access from inside the container, but it also blocked all external incoming connections.
Is there a way to limit the outgoing connections to the localhost only, while still allowing incoming external connections? I only want to enforce this on a specific docker container, not for my entire system.
Any feedback is appreciated.
I found a working solution for my problem in another thread:
docker network create --subnet 172.19.0.0/16 no-internet
sudo iptables --insert DOCKER-USER -s 172.19.0.0/16 -j REJECT --reject-with icmp-port-unreachable
sudo iptables --insert DOCKER-USER -s 172.19.0.0/16 -m state --state RELATED,ESTABLISHED -j RETURN
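Note the ordering: because both rules use --insert, the RELATED,ESTABLISHED rule ends up above the REJECT rule, which is what lets reply traffic for inbound connections through. You can check the final order with:
sudo iptables -L DOCKER-USER -n --line-numbers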
When starting a Docker container, add:
--network no-internet
After this, I cannot connect to the internet from inside the container. However, I can still access the container ports from the outside.
I am using docker-compose and just found out that all the ports exposed in my docker-compose.yml are actually added to iptables to allow world access. I had no idea; this leaves me with a huge security hole.
The Docker docs say to run: iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP
but that does nothing for me. I can still access my db server remotely without tunneling.
I'm not sure what my local IP address would be. I still want to allow connections from the host OS itself to those ports, just not from the world.
If you don't want the exposed ports to be publicly available, there are easier solutions than mucking about with Docker's iptables rules.
Just don't expose them.
You don't need to expose ports just to access a service. You can access any open container ports simply by connecting to the container ip address.
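For example, to find a container's IP address (assuming a container named mycontainer):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer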
Expose them only on localhost.
Instead of writing -p 8080:8080, which exposes container port 8080 on host port 8080 on all interfaces, write -p 127.0.0.1:8080:8080, which will expose the port only on the loopback address. Now you can reach it on your host at localhost:8080, but it won't be available to anyone else.
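The same works in docker-compose.yml; a sketch, assuming a db service listening on 5432:
services:
  db:
    ports:
      - "127.0.0.1:5432:5432"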
Hey, I'm quite new to this Docker stuff. I tried to start a Docker container with Bitbucket, but I get this output.
root@rv1175:~# docker run -v bitbucketVolume:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server
6da32052deeba204d5d08518c93e887ac9cc27ac10ffca60fa20581ff45f9959
docker: Error response from daemon: driver failed programming external connectivity on endpoint bitbucket (55d12e0e4d76ad7b7e8ae59d5275f6ee85c8690d9f803ec65fdc77a935a25110): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 7999 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)).
root@rv1175:~#
I get the same output every time I try to start any Docker container. Can someone help me?
P.S. One more question: what does 172.17.0.2 mean? I can only say that it is not my IP.
172.17.0.2 would be the IP assigned to the container within the default Docker bridge network (docker0 virtual interface). These are not reachable from the outside, though you are instructing the Docker engine to "publish" (in Docker terminology) two ports.
To do so, the engine creates port forwarding rules with iptables, which forward (in your case) all incoming traffic to ports tcp/7990 and tcp/7999 on all interfaces of the host to the same ports at 172.17.0.2 on the docker0 interface (where the process in the container is hopefully listening).
It looks like the DOCKER iptables chain where this happens is not present. Maybe you have other tools manipulating iptables that might be erasing what the Docker engine is doing. Try to identify them and restart the Docker engine (it should re-create everything on startup).
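On a systemd-based host, for example:
sudo systemctl restart docker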
You can also instruct the engine not to manipulate iptables by configuring the Docker daemon appropriately. You would then need to set things up yourself if you want to use the network bridge driver (though you could also use the host driver). Here is a good example of doing so.
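For reference, the daemon setting in question lives in /etc/docker/daemon.json (followed by a daemon restart):
{
  "iptables": false
}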
I have a host with IP 10.1.1.2 and I'd like to create a Docker container on it that will have the IP address 10.1.1.3 and that will be able to ping (and later send its syslog to) an external machine on the same network (e.g. 10.1.1.42). I'd also like the packets to arrive with source address 10.1.1.3, so as far as I understand, no NAT.
I am not interested in inbound network connections to the container, only outbound.
There is apparently an unresolved issue for this feature right now, so the only current solution is to manually create the necessary iptables rules after launching your container. E.g., something like:
iptables -t nat -I POSTROUTING 1 -s <container_ip> -j SNAT --to-source 10.1.1.3
You will also need to add that address to an interface on your host:
ip addr add 10.1.1.3/24 dev eth0
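Putting it together, a sketch assuming a container named mycontainer and a host interface eth0:
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer)
sudo ip addr add 10.1.1.3/24 dev eth0
sudo iptables -t nat -I POSTROUTING 1 -s "$CONTAINER_IP" -j SNAT --to-source 10.1.1.3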
The main goal is to do real NAT instead of NAPT. Note that the normal docker run -p ip:port2:port1 command actually does NAPT (address + port translation) rather than NAT (address translation only). Is it possible to map the address only, keeping all exposed ports the same as in the container, something like docker run -p=ip1:*:* ..., instead of one by one or as a range?
P.S. 1: My port range is rather big (22-50070, ssh through HDFS), so the port-range approach won't work.
P.S. 2: Maybe I need a swarm of virtual machines and to join the host into the swarm.
P.S. 3: I raised a feature request on GitHub. Not sure if they will accept it, but currently there are 2000+ open issues (it's that popular).
Solution
On Linux, you can access any container by IP and port without any binding (no -p) out of the box. Docker version: CE 17+.
If your host is Windows and Docker is running in a Linux VM (as in my case), the only thing you need to do to access the containers is add a route on Windows:
route add -p 172.16.0.0 mask 255.240.0.0 ip_of_your_vm
Now you can access all containers by IP:port without any port mapping, from both the Windows host and the Linux VM.
There are a few options. One is to decide which port range you want to map and then use that in your docker run:
docker run -p 192.168.33.101:80-200:80-200 <your image>
The above will map all ports from 80 to 200 on your container, assuming 192.168.33.101 is a spare IP on your host. Unfortunately, it is not possible to map a larger port range: Docker forks an iptables process per port to set up the rules, which exhausts memory and raises an error like the one below.
docker: Error response from daemon: driver failed programming external connectivity on endpoint zen_goodall (0ae6cec360831b46fe3668d6aad9f5f72b6dac5d26cc6c817452d1402d12f02c): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8513 -j DNAT --to-destination 172.17.0.3:8513 ! -i docker0: (fork/exec /sbin/iptables: resource temporarily unavailable)).
This is not the intended way of mapping ports in Docker, and since it is not a use case the maintainers are likely to support, the above issue may never be fixed. The next option is to run your container without publishing any ports and use the iptables rules below:
# Container IP, action (A = append rules, D = delete rules), and host IP to forward
DOCKER_IP=172.17.0.2
ACTION=A
IP=192.168.33.101
# DNAT everything arriving for $IP (from outside docker0) to the container
sudo iptables -t nat -$ACTION DOCKER -d $IP -j DNAT --to-destination $DOCKER_IP ! -i docker0
# Accept the forwarded traffic into the container
sudo iptables -t filter -$ACTION DOCKER ! -i docker0 -o docker0 -p tcp -d $DOCKER_IP -j ACCEPT
# Masquerade hairpin traffic from the container to itself
sudo iptables -t nat -$ACTION POSTROUTING -p tcp -s $DOCKER_IP -d $DOCKER_IP -j MASQUERADE
ACTION=A will add the rules and ACTION=D will delete them. This sets up complete traffic forwarding from your IP to the DOCKER_IP. It is only good if you are doing it on a testing server; it is not recommended for staging or production. Docker adds a lot more rules to prevent other containers from poking into your container, but this setup offers no such protection whatsoever.
I don't think there is a direct way to do what you are asking.
If you use "-P" option with "docker run", all ports that are exposed using "EXPOSE" in Dockerfile will automatically get exposed with random ports in the host. With "-p" option, the only way is to specify the option multiple times for multiple ports.