Docker swarm mode routing mesh not working as expected

I tried to create services in docker swarm mode by following this document.
I created two nodes in the swarm:
Then I created and deployed the service; I used jwilder/whoami here instead of the nginx in the document:
docker service create --name my-web --publish published=8888,target=8000 --replicas 2 jwilder/whoami
Seems like they started successfully:
As the document said:
When you access port 8080 on any node, Docker routes your request to
an active container.
So in my opinion, I should be able to access the my-web service from any of the nodes; however, I found that only one node works:
What's going on?

This can be caused by ports being blocked between the nodes. The swarm mesh networking uses the "ingress" network to connect the published port to a VIP for the service. That ingress network is an overlay network implemented with vxlan. For that you need:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Reference: https://docs.docker.com/network/overlay/
It's possible for these ports to be blocked at many levels, including iptables, firewalls on the routers, and I've even seen VMware block this with their NSX tool, which also implements vxlan.
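A quick way to check at least the TCP ports from another node is something like the following (a rough sketch; <other-node-ip> is a placeholder, nc must be installed, and the UDP ports cannot reliably be probed this way, so a packet capture on the receiving side is more dependable):
nc -zv <other-node-ip> 2377
nc -zv <other-node-ip> 7946
# For the UDP ports (7946, 4789), watch for arriving traffic on the receiving node instead:
tcpdump -ni any udp port 4789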
For iptables, I typically use the following commands:
iptables -A INPUT -p tcp -m tcp --dport 2376 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 2377 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 4789 -j ACCEPT
iptables -A INPUT -p 50 -j ACCEPT
The above will differ if you use firewalld or need to change firewall rules on the network routers.
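If firewalld is in use, a roughly equivalent sketch would be the following (zone defaults vary by distribution, and --add-protocol may not exist on very old firewalld releases):
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-protocol=esp
firewall-cmd --reload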

Related

Inside a container, how to resolve DNS on the host, on a specific port

I have an instance running a Consul agent & docker. The Consul agent can be used to resolve DNS queries on 0.0.0.0:8600. I'd like to use this from inside a container.
A manual test works: running dig @172.17.0.1 -p 8600 rabbitmq.service.consul inside a container resolves properly.
A first solution is to run the container with --net=host. It works, and I'll do this until I find something better, but I don't like it security-wise.
Another idea: use docker's --dns and associated options. Even if I can script grabbing the IP, I can't see how to specify port 8600. Maybe in --dns-opt, but how?
Along the same line, writing the container's resolv.conf could do. But again, how to specify the port? I saw no hints in man resolv.conf; I believe it's not possible.
Last, I could set up dnsmasq inside the container or in a sidecar container, along the lines of this Q/A. But it's a bit heavy.
Can anyone help with this one?
You can achieve this with the following configuration.
1. Configure each Consul container with a static IP address.
2. Use Docker's --dns option to provide these IPs as resolvers to other containers.
3. Create an iptables rule on the host system which redirects traffic destined for port 53 of the Consul server to port 8600.
For example, for step 3:
$ sudo iptables --table nat --append PREROUTING --in-interface docker0 --proto udp \
--dst 192.0.2.4 --dport 53 --jump DNAT --to-destination 192.0.2.4:8600
# Repeat for TCP
$ sudo iptables --table nat --append PREROUTING --in-interface docker0 --proto tcp \
--dst 192.0.2.4 --dport 53 --jump DNAT --to-destination 192.0.2.4:8600
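And a rough sketch for steps 1 and 2 (the network name, subnet, and image invocation are only examples; note that a static --ip is only honoured on user-defined networks, so the --in-interface in the rule above would then need to match that network's bridge interface rather than docker0):
# Step 1: pin the Consul container to 192.0.2.4 on a network with a known subnet
docker network create --subnet 192.0.2.0/24 consul-net
docker run -d --name consul --network consul-net --ip 192.0.2.4 consul agent -dev -client=0.0.0.0
# Step 2: hand that address to other containers as their resolver
docker run --rm --dns 192.0.2.4 busybox nslookup rabbitmq.service.consul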

iptables: Access from local machine to docker container is not possible

I have an issue with my iptables setup. My goal is to reach the HTTPS-based webserver inside a docker container from the server machine itself.
The setup is the following:
The server is connected to the internet via eth0 and serves HTTPS on port 443.
Any users from the outside (internet) reach the server via the ip address 1.2.3.4.
It is connected to the internal network via eth1 and serves dhcp, dns and some more services.
Any users from the inside (intranet) reach the server via the ip address 10.0.0.1.
The docker container is connected via docker1 on the server; the latter has the IP address 10.8.0.2 inside the docker network.
The docker container serves the webserver on port 1443, and iptables forwards (DNAT) requests arriving on port 443 to its address 10.8.0.1, destination port 1443.
What is working:
The webserver is perfectly reachable from the internet and the intranet.
The webserver can be reached from the server itself using the address 10.8.0.1:1443.
What is not working:
Any client working directly on the server cannot reach the docker webserver using https://example.com:443. Using https://10.8.0.1:1443 would connect, but fails due to a certificate error; skipping the certificate check is not an acceptable workaround.
Excerpt of the iptable configuration:
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i docker1 -o docker1 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.8.0.1:1443
iptables -A FORWARD -o docker1 -p tcp --dport 1443 -j ACCEPT
iptables -A INPUT -i docker1 -j DROP
iptables -A FORWARD -i docker1 -j DROP
Due to that "complicated" setup I am no longer able to understand which iptables rules and chains need to be applied to make this work, so I am asking for your help to solve this issue.
Brainstorming about the issue using a simplified model and my understanding of the iptables chains, the path of the packets might/should look like this:
Origin is a local application (wget).
The packets go through the OUTPUT chain.
The packets go through the POSTROUTING chain.
Magic happens...
The packets arrive again in the PREROUTING chain.
The packets might go through INPUT again.
The packets might arrive at the target application (webserver).
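One step is missing from that model, and it matters here: packets generated on the server itself never traverse PREROUTING; they enter the nat table through the OUTPUT chain instead. So a DNAT rule that only exists in nat PREROUTING never sees local wget traffic. A hedged sketch of the kind of rules that target locally generated traffic (addresses taken from the setup above; a sketch, not a drop-in fix):
# DNAT locally generated traffic; PREROUTING is skipped for packets originating on the host
iptables -t nat -A OUTPUT -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.8.0.1:1443
# Rewrite the source so replies return via the host instead of going straight back to 1.2.3.4
iptables -t nat -A POSTROUTING -o docker1 -p tcp -d 10.8.0.1 --dport 1443 -j MASQUERADE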

Docker-Swarm: Join a docker-swarm from another subnet

I have 4 virtual machines in the same subnet, which are part of a docker-swarm.
Now I want to connect another node (a virtual machine) that is located in a different country (not in the same subnet).
I am a networking noob and it is hard for me to set up an overlay network in docker that can handle this connection.
Which aspects do I need to keep in mind when setting up this kind of docker swarm?
You need the following ports open between your swarm nodes:
2377/tcp: Swarm mode API
7946/tcp and 7946/udp: Overlay networking control plane
4789/udp: Overlay networking data plane
Protocol 50 (ESP) for the IPsec (secure) option of overlay networking
The following iptables commands can be used for this (you may want to limit the source host to only your other docker swarm nodes):
iptables -A INPUT -p tcp -m tcp --dport 2377 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 4789 -j ACCEPT
iptables -A INPUT -p 50 -j ACCEPT
This needs to be configured on all of your swarm nodes if they have a restrictive host firewall, and on the network firewalls protecting your subnets.
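Beyond the open ports, when nodes sit on different subnets the address each node advertises matters: auto-detection tends to pick a local address the remote side cannot reach, so set it explicitly. A rough sketch (the addresses and token below are placeholders):
# On the manager, advertise an address reachable from the remote node
docker swarm init --advertise-addr 198.51.100.10
# On the remote node, join via that address and advertise its own reachable address
docker swarm join --advertise-addr 203.0.113.20 --token <worker-token> 198.51.100.10:2377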

Logspout can't connect to papertrail

I can't get logspout to connect to papertrail. I get the following error:
!! lookup logs5.papertrailapp.com on 127.0.0.11:53: read udp 127.0.0.1:46185->127.0.0.11:53: i/o timeout
where 46185 changes every time I run the container. It seems like a DNS error, but nslookup logs5.papertrailapp.com gives the expected output, as does docker run busybox nslookup logs5.papertrailapp.com.
Beyond that, I don't even know how to interpret that error message, let alone address it. Any help debugging this would be hugely appreciated.
My Docker Compose file:
version: '2'
services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  sleep:
    image: benwhitehead/env-loop
Where 12345 is the actual papertrail port. Result is the same whether using syslog:// or syslog-tls://.
From https://docs.docker.com/engine/userguide/networking/configure-dns/:
the docker daemon implements an embedded DNS server which provides built-in service discovery for any container
It looks like your container is unable to connect to this DNS server. If your container is on the default bridge network, it won't reach the embedded DNS server. You can either set --dns to point at an outside resolver or update /etc/resolv.conf. It doesn't sound like a Papertrail issue at all.
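For example, to check whether an outside resolver fixes the lookup (8.8.8.8 is just an example; in a compose file the equivalent is the dns: key on the service):
docker run --rm --dns 8.8.8.8 busybox nslookup logs5.papertrailapp.com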
Docker and iptables got in a fight. So I spun up a new machine, failed to set up iptables, and the problem was solved: no firewall at all to get in the way of Docker's connections!
Just kidding, don't do that. I got a toy database hacked that way.
Fortunately, it's now relatively easy to get iptables and Docker to live in harmony, using the DOCKER-USER iptables chain.
The solution, excerpted from my blog:
Configure Docker with iptables=true, and append the following to your iptables configuration:
iptables -A DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -j DROP
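For completeness: iptables=true is already the dockerd default, and the DOCKER-USER rules above are not persistent on their own. A sketch for checking and persisting them (Debian/Ubuntu tooling shown; other distributions differ):
# Verify the rules landed in the DOCKER-USER chain
iptables -L DOCKER-USER -n -v --line-numbers
# Persist them across reboots
apt-get install -y iptables-persistent
netfilter-persistent save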

how to remove 8080 from URL

Could someone please tell me what I can do to give my application a simple URL. Right now I call my app with this URL -
http://localhostname:8080/MyProject
I would like to call it with this URL -
http://localhostname/MyProject
I'm using JBoss 7.1.0.Final.
As said above in Alexander Pavlov's comments, the easiest way to go about this is to use port 80.
Application servers normally have a config file (usually XML) in which you specify the port for your application to use. The default for most application servers is port 8080, so your URL will look like this: http://<server IP or name>:8080.
If you modify your application server's configuration to make the default port 80, then you will only have to do this: http://<server IP or name>
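For example, in JBoss AS 7 the HTTP port lives in standalone.xml; a rough sketch of where to look (paths assume a default standalone install):
# The HTTP connector port is defined in the socket-binding group:
grep 'socket-binding name="http"' $JBOSS_HOME/standalone/configuration/standalone.xml
#   <socket-binding name="http" port="8080"/>
# Change port="8080" to port="80" and restart; binding below port 1024 needs root,
# which is why the iptables REDIRECT approach described next is often preferred.
$JBOSS_HOME/bin/standalone.sh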
I was in the process of researching this very topic (for a single instance though) and came across a recommendation from the RedHat Discussions.
This is Linux-specific and for a single instance. The OP didn't specify an environment, but this should point them down the right path if using Linux.
Using ports 80 and 443 requires running the JBoss instance as root. Chances are the SA isn't going to grant this to the user, so an alternative is to have the SA modify the iptables rules. Credit goes to PixelDrift.NET Support over in the RedHat Discussions for the great lead.
iptables -I INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
iptables -I INPUT -i eth0 -p tcp --dport 8443 -j ACCEPT
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
My SA modified the iptables to our needs.
iptables -I INPUT -p tcp --dport 8380 -j ACCEPT
iptables -I INPUT -p tcp --dport 8443 -j ACCEPT
iptables -I INPUT -p tcp --dport 9990 -j ACCEPT
iptables -I INPUT -p tcp --dport 9443 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8380
iptables -A PREROUTING -t nat -p tcp --dport 443 -j REDIRECT --to-port 8443
iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
After the changes were applied, I was able to successfully access the application at http://bar.foo/baz without having to include the port number.
