Terraform Docker Provider: "Bind" port to specific network

I'm launching a Docker container that attaches to three networks: the default bridge (OAM), a user-defined bridge (Data In), and a macvlan network (Data Out). The problem is that the Docker provider seems to set up all the defined ports on the default bridge. How do I tell Terraform to bind the defined ports to a specific Docker network?
The .tf file snippet:
ports {
  # Data-In (Bridge)
  internal = 881
  external = 881
}
ports {
  # SSH Access on Default Bridge
  internal = 22
  external = 222
}
networks_advanced {
  name = "bridge"
}
networks_advanced {
  name = "data-in-net"
}
networks_advanced {
  name = "data-out-net"
}
The Docker networks:
# docker network list
NETWORK ID     NAME           DRIVER    SCOPE
1c2441b0b530   bridge         bridge    local
c68892f0c6e5   host           host      local
bb45d9dcbad1   none           null      local
a318d3bf3075   data-out-net   macvlan   local
af806334c7bf   data-in-net    bridge    local
Port 222 works, port 881 does not.
IPTables from the host OS running Docker:
Chain DOCKER (2 references)
target   prot opt  source     destination
ACCEPT   tcp  --   anywhere   172.17.0.2     tcp dpt:9000
ACCEPT   tcp  --   anywhere   172.17.0.2     tcp dpt:8000
ACCEPT   tcp  --   anywhere   192.168.32.2   tcp dpt:601
ACCEPT   udp  --   anywhere   192.168.32.2   udp dpt:syslog
ACCEPT   tcp  --   anywhere   172.17.0.3     tcp dpt:881
ACCEPT   tcp  --   anywhere   172.17.0.3     tcp dpt:https
ACCEPT   tcp  --   anywhere   172.17.0.3     tcp dpt:ssh
The tcp dpt:881 line needs to have a destination of 192.168.32.3. The syslog container ONLY uses the Data-In Docker Network, and thus has a correct IP address.
Any suggestions/workarounds? Thanks! :)
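One knob that may help (a sketch, not a confirmed fix): the ports block of the Docker provider's docker_container resource also accepts an ip argument, which controls the host-side address a published port binds to. Which container network the DNAT rule targets is chosen by the Docker daemon itself, not by Terraform, so this only steers the host side. Assuming the host owns an address on the data-in bridge (192.168.32.1 below is illustrative, not from the original post):

ports {
  # Data-In (Bridge): publish only on the host's data-in address
  internal = 881
  external = 881
  ip       = "192.168.32.1" # illustrative host-side address
}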

Related

Cannot connect to Protonmail Bridge SMTP (host machine) from a Docker container

My setup is:
Debian, Docker
Host machine running Protonmail Bridge as a service
Docker container running Discourse with their default recommended setup
Issue: From the Docker container, I cannot connect to the SMTP server exposed by the Protonmail Bridge on the host machine.
I checked open ports on the host machine, all good:
ss -plnt
State    Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN   0        4096     127.0.0.1:1025       0.0.0.0:*           users:(("proton-bridge",pid=953,fd=12))
How I test
Host machine:
openssl s_client -connect 127.0.0.1:1025 -starttls smtp
Works.
Docker container:
openssl s_client -connect 172.17.0.1:1025 -starttls smtp
Connection refused.
I'm wondering whether the Protonmail Bridge service that's listening on 127.0.0.1:1025 is refusing connections from the Docker container because they are not coming from 127.0.0.1 exactly. If this is the problem, how do I validate and fix it? If this is not the problem, what am I doing wrong?
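One quick way to test that hypothesis (a sketch; 172.17.0.1 is the usual docker0 address): if the service is bound only to 127.0.0.1, connecting to any other local address should be refused even from the host itself:

# on the host machine, not in the container
openssl s_client -connect 172.17.0.1:1025 -starttls smtp

A "connection refused" here would confirm the listener is bound to loopback only, rather than the container's source address being the issue.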
Other tests
nmap 127.0.0.1 on the host machine outputs:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
1025/tcp open NFS-or-IIS
1042/tcp open afrog
Note that it lists the open port 1025.
nmap 172.17.0.1 in the Docker container does not list port 1025 at all. I'm not sure if this is the problem either.
Output of route in the Docker container:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
This may be impossible currently, but should be solved by this pull request.
If you're comfortable compiling the proton-bridge package from source, you only have to change one line in the internal/bridge/constants.go file, from
Host = "127.0.0.1"
to
Host = "0.0.0.0"
Then recompile with make build-nogui (to build the "headless" version), and you should be good to go!
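If recompiling isn't an option, a lighter workaround (a sketch, assuming socat is installed on the host) is to relay the loopback-only listener onto the docker0 address:

socat TCP-LISTEN:1025,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:1025

Containers on the default bridge can then reach 172.17.0.1:1025, while proton-bridge itself keeps listening on 127.0.0.1 only.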

Iptables rules with dockerized web application - can't block incoming traffic

I'm hosting a dockerized web application bound to port 8081 on my remote server.
I want to block that web application for external IPs, as I already did with port 8080, which hosts a plain Jenkins server.
Here's what I've tried:
iptables -A INPUT -d <my-server-ip> -p tcp --dport 8081 -j DROP
As I did with port 8080.
Here is the iptables -nv -L INPUT output:
Chain INPUT (policy ACCEPT 2836 packets, 590K bytes)
 pkts bytes target prot opt in out source      destination
  495 23676 DROP   tcp  --  *  *   0.0.0.0/0   <my-ip-addr>   tcp dpt:8080
    0     0 DROP   tcp  --  *  *   0.0.0.0/0   <my-ip-addr>   tcp dpt:8081
Could it possibly have something to do with the DOCKER chain in iptables?
Chain DOCKER (1 references)
 pkts bytes target prot opt in       out     source      destination
    9   568 ACCEPT tcp  --  !docker0 docker0 0.0.0.0/0   <container-eth1-addr>   tcp dpt:8080
Are there more specific rules I need to add?
Aren't my server's INPUT rules supposed to be applied before those listed in the DOCKER chain?
UPDATE - SOLVED
Thanks to larsks's comments I found the solution.
The goal here was to block TCP traffic to the port bound to the Docker container (published as 8081) while still being able to use SSH tunneling as a "poor man's" VPN (so not publishing the port was not an option).
I just had to add this rule:
iptables -I DOCKER-USER 1 -d <container-eth-ip> ! -s 127.0.0.1 -p tcp --dport 8080 -j DROP
Note that the rule matches the container-side port (8080): by the time a packet traverses DOCKER-USER it has already been DNATed, so --dport sees the container's internal port, not the published host port 8081.
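An alternative that matches on the published host port instead (a sketch following the conntrack approach described in Docker's packet-filtering documentation): the conntrack match can test the destination port of the original connection, i.e. before Docker's DNAT rewrote it:

iptables -I DOCKER-USER ! -s 127.0.0.1 -p tcp -m conntrack --ctorigdstport 8081 -j DROP

This avoids having to know the container-side port at all.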

Docker networking, baffled and puzzled

I have a simple Python application which stores and searches its data in an Elasticsearch instance. The Python application runs in its own container, just as Elasticsearch does.
Elasticsearch exposes its default ports 9200 and 9300, the Python application exposes port 5000. The network type used for Docker is a user-defined bridge network.
When I start both containers the application starts up nicely, both containers see each other by container name and communicate just fine.
But from the Docker host (Linux) it's not possible to connect to the exposed port 5000. So a simple curl http://localhost:5000/ ends in a time-out. The Docker tips from this documentation: https://docs.docker.com/network/bridge/ did not solve this.
After a lot of struggling I tried something completely different: I tried connecting from outside the Docker host to the Python application. I was baffled; from anywhere in the world I could do curl http://<fqdn>:5000/ and was served the application.
So that means the real problem is solved, because I'm able to serve the application to the outside world. (And yes, the application inside the container listens on 0.0.0.0, which is the solution for 90% of the network problems reported by others.)
But that still leaves me puzzled: what causes this strange behavior? On my development machine (Windows 10, WSL, Docker Desktop, Linux containers) I am able to connect to the service on localhost:5000, 127.0.0.1:5000, etc. On my Linux (production) machine everything works except connecting from the Docker host to the containers.
I hope someone can shed a light on this; I'm trying to understand why this is happening.
Some relevant information
Docker host:
# ifconfig -a
br-77127ce4b631: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
[snip]
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
[snip]
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 1xx.1xx.199.134 netmask 255.255.255.0 broadcast 1xx.1xx.199.255
# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED        STATUS        PORTS                                            NAMES
1e7f2f7a271b   pplbase_api           "flask run --host=0.…"   20 hours ago   Up 19 hours   0.0.0.0:5000->5000/tcp                           pplbase_api_1
fdfa10b1ce99   elasticsearch:7.5.1   "/usr/local/bin/dock…"   21 hours ago   Up 19 hours   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   pplbase_elastic_1
# docker network ls
NETWORK ID NAME DRIVER SCOPE
[snip]
77127ce4b631 pplbase_pplbase bridge local
# iptables -L -n
[snip]
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9300
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9200
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Docker compose file:
version: '3'
services:
  api:
    build: .
    links:
      - elastic
    ports:
      - "5000:5000"
    networks:
      - pplbase
    environment:
      - ELASTIC_HOSTS=elastic localhost
      - FLASK_APP=app.py
      - FLASK_ENV=development
      - FLASK_DEBUG=0
    tty: true
  elastic:
    image: "elasticsearch:7.5.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - pplbase
    environment:
      - discovery.type=single-node
    volumes:
      - ${PPLBASE_STORE}:/usr/share/elasticsearch/data
networks:
  pplbase:
    driver: bridge
After more digging, the riddle is getting bigger and bigger. Using netcat I can establish a connection:
Connection to 127.0.0.1 5000 port [tcp/*] succeeded!
Checking with netstat when no clients are connected, I see:
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
While trying to connect from the Docker host, the connection is made:
tcp 0 1 172.20.0.1:56866 172.20.0.3:5000 SYN_SENT 27824/docker-proxy
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
tcp6 0 0 ::1:58900 ::1:5000 ESTABLISHED 31642/links
tcp6 592 0 ::1:5000 ::1:58900 ESTABLISHED 27824/docker-proxy
So I'm now suspecting some network voodoo on the Docker host.
The Flask instance is running at 0.0.0.0:5000.
Have you tried curl http://0.0.0.0:5000/?
It might be that your host configuration maps localhost to 127.0.0.1 rather than 0.0.0.0.
So as I was slowly working towards a solution, I found that my last suggestion was right after all. In the firewall (iptables) I logged all dropped packets, and yes, the packets between the Docker bridge (not docker0, but the br-* interface) and the container (veth) were being dropped by iptables. Adding a rule allowing traffic between those interfaces resolved the problem.
In my case: sudo iptables -I INPUT 3 -s 172.20.0.3 -d 172.20.0.1 -j ACCEPT
Where 172.20.0.0/16 is the bridged network generated by Docker.
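For reference, the logging that exposed the drops can be done with a plain LOG rule (a sketch; the interface wildcard and rule position are illustrative):

# log traffic arriving from any Docker-created bridge before the filter rules drop it
sudo iptables -I INPUT 1 -i br-+ -j LOG --log-prefix "docker-bridge in: " --log-level 4

br-+ matches any interface whose name starts with br-, i.e. the bridges Docker creates for user-defined networks; the log lines end up in the kernel log (dmesg or journalctl -k).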

Docker expose a port only to localhost

I want to restrict my database access to 127.0.0.1, so I executed the following command:
docker run -it -p 127.0.0.1:3306:3306 --name db.mysql mysql:5.5
But I have some confusion...
You can see here that the forwarded port is bound only on 127.0.0.1:
; docker ps
mysql:5.5 127.0.0.1:3306->3306/tcp db.mysql
Interestingly, I cannot find this restriction in iptables:
; iptables -L
Chain FORWARD (policy DROP)
DOCKER all -- anywhere anywhere
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 192.168.112.2 tcp dpt:mysql
The source of this rule is anywhere.
Incoming traffic goes like this:
incoming packet to the host's network -> iptables forwards it to the container
Your restriction is not in iptables; it lives in the host's network stack. You bound host port 3306 on 127.0.0.1, not 0.0.0.0, so of course you see nothing about it in iptables. 127.0.0.1:3306:3306 means hostIp:hostPort:containerPort.
You can confirm it with netstat -oanltp | grep 3306: no 0.0.0.0 binding is there, so no foreign host can reach your host machine on that port, and thus none can reach your container either.
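To see the difference the bind address makes (a sketch; names, ports, and the password are illustrative):

docker run -d --name db-public -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:5.5
docker run -d --name db-local -e MYSQL_ROOT_PASSWORD=secret -p 127.0.0.1:3307:3306 mysql:5.5
netstat -oanltp | grep 330

The first container shows a local address of 0.0.0.0:3306 (reachable from other hosts), the second 127.0.0.1:3307 (loopback only).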

Docker - exposing IP addresses to DNS server

Looking at the iptables of my Docker host, I get something like this:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:http-alt
I was able to assign a durable IP address like this:
sudo ip addr add 10.0.0.99/8 dev eth0
docker run -d -p 10.0.0.99:8888:8080 tomcat:8
but that address is only available on this machine, as in I have to ssh into it and ping it from this box.
Reading through this, it looks like I need to add a custom bridge:
Custom Docker Bridge
Is there a way to make the bridge hand out fresh IPs from the DHCP server? For example, if my DHCP server assigns addresses from 10.1.1.x, I want to assign those addresses to Docker containers.
Would this involve some generic *nix way of pushing the IP addresses and DNS names from my /etc/hosts to a DNS server, so other machines outside of the Docker cluster can ping those containers?
I have port forwarding working, but I need to do the same with the IP addresses, as Zookeeper only tracks the internal Docker IP addresses and hostnames as I've defined them in my docker-compose.yml.
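Docker's built-in IPAM does not lease addresses from an external DHCP server, but a macvlan network comes close to the goal of giving containers first-class addresses on the LAN (a sketch; subnet, gateway, parent interface, and names are illustrative and must match your environment):

docker network create -d macvlan \
  --subnet=10.0.0.0/8 --gateway=10.0.0.1 \
  -o parent=eth0 pub_net
docker run -d --network pub_net --ip 10.1.1.50 tomcat:8

Addresses are still assigned statically (via --ip, which you'd reserve in the DHCP server), and registering them in DNS would still have to happen out of band; note also that with macvlan the Docker host itself cannot reach its own containers directly.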
