Expose nginx ingress controller as DaemonSet - docker

I am trying to install and use nginx-ingress to expose services running in a Kubernetes cluster, and I am following these instructions.
In step 4 it's noted that:
If you created a daemonset, ports 80 and 443 of the Ingress controller container are mapped to the same ports of the node where the container is running. To access the Ingress controller, use those ports and an IP address of any node of the cluster where the Ingress controller is running.
That means the DaemonSet pods should be listening on ports 80 and 443 and forwarding the incoming traffic to the service mapped by an ingress.yaml config file.
But after running instruction 3.2, kubectl apply -f daemon-set/nginx-ingress.yaml, the DaemonSet was created, yet nothing was listening on 80 or 443 on any of the cluster's nodes.
Is there a problem with the install instructions, or am I missing something?

It is not the typical listening socket that you would see in the output of netstat. The ports are "listened on" by iptables. The following are the iptables rules for the ingress controller on one of my cluster nodes.
-A CNI-DN-0320b4db24e84e16999fd -s 10.233.88.110/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-0320b4db24e84e16999fd -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.233.88.110:80
-A CNI-DN-0320b4db24e84e16999fd -s 10.233.88.110/32 -p tcp -m tcp --dport 443 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-0320b4db24e84e16999fd -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.233.88.110:443
10.233.88.110 is the IP address of the ingress controller pod running on that node.
$ kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-5rh26 1/1 Running 1 77d 10.233.83.110 k8s-master3 <none> <none>
ingress-nginx-controller-9nnwl 1/1 Running 1 77d 10.233.88.110 k8s-master2 <none> <none>
ingress-nginx-controller-ckkb2 1/1 Running 1 77d 10.233.68.111 k8s-master1 <none> <none>
Edit
When a request comes in on port 80/443, iptables applies a DNAT rule to it, which rewrites the destination IP to the IP address of the ingress controller. The actual listening happens inside the ingress controller container.

As mentioned by Hang du (+1)
According to the default setting for --proxy-mode in your cluster:
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs'. If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
Assuming you have an nginx-ingress-controller-xxx pod in the kube-system namespace, you can use this command to verify these parameters on your side:
sudo iptables-save | grep $(kubectl get pods -n kube-system -o wide | grep nginx-ingress-controller-xxx | awk '{print $6}')
You can find more information about iptables/netfilter here and here.
Additional resources:
Network Plugins
Introducing kube-iptables-tailer
Update:
HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
So in addition to the above answer:
In order to bind ports 80 and 443 directly to the Kubernetes nodes' network interfaces, you can set hostNetwork: true (but it's not recommended):
Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.
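For reference, a minimal sketch of what that looks like in the DaemonSet's pod template (the container name and image below are placeholders, not taken from the install guide; keep whatever your existing manifest uses):
spec:
  template:
    spec:
      hostNetwork: true                    # bind the pod to the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep in-cluster DNS resolution working with hostNetwork
      containers:
        - name: nginx-ingress              # placeholder name
          image: nginx/nginx-ingress       # placeholder image
          ports:
            - containerPort: 80
            - containerPort: 443
With hostNetwork enabled, netstat/ss on the node will actually show nginx listening on 80 and 443, since there is no CNI hostPort DNAT in between anymore.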


How to grant internet access to application servers through load balancer

I have set up an environment in Jelastic including a load balancer (tested both Apache and Nginx with the same results) with a public IP, and an application server running the Univention UCS DC Master docker image (I have also tried a simple Ubuntu 20.04 install).
The application server has a private IP address and is correctly reachable from the internet, and I can SSH into both the load balancer and the app server.
The one thing I can't seem to achieve is to have the app server access the internet (outbound traffic).
I have tried setting up the network on the app server and tried a few Nginx load-balancing configurations, but to be honest I've never used a load balancer before, and I feel that configuring load balancing will not resolve my issue (I might be wrong).
Of course my intention is to learn load balancing but if someone could just point me in the right direction I would be so grateful.
Question: what needs to be configured in Jelastic or in the servers to have the machines behind the load balancer access the internet?
Thank you for your time.
Cristiano
I was able to resolve the issue by simply detaching and re-attaching the public IP address to the server, so it was not a setup problem; something in Jelastic just got stuck.
Thanks all!
Edit: Actually, to effectively resolve the issue, I have to detach the public IP address from the univention/ucs docker image, attach it to another node in the environment (i.e. an Ubuntu server I have), then attach the public IP back to the univention docker image. Can't really figure out why, but it works for me.
To have the machines access the internet, you should add a route in them using your load balancer as the gateway, like this:
Destination     Gateway     Genmask
0.0.0.0         <LB IP>     0.0.0.0
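As a concrete sketch, assuming the load balancer's private address is 192.168.0.10 (a placeholder) and the app server's interface is eth0, that route can be added with:
# Run on the app server; replaces any existing (non-working) default route
sudo ip route replace default via 192.168.0.10 dev eth0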
Your VMs' firewalls should not block ports 80 and 443 for inbound/outbound traffic. Using iptables:
sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT
On your load balancer you should masquerade outgoing traffic (change the source IP) and forward incoming traffic to your VMs' subnet using the LB interface connected to that subnet:
sudo iptables --table nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
sudo iptables -A FORWARD -p tcp --dport 80 -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -p tcp --dport 443 -i eth0 -o eth1 -j ACCEPT
You should also enable IP forwarding on your load balancer:
echo 1 > /proc/sys/net/ipv4/ip_forward
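That echo only lasts until the next reboot; to make it persistent, something like the following sysctl snippet can be used (assuming a distro that reads /etc/sysctl.d):
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system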

Inside a container, how to resolve DNS on the host, on a specific port

I have an instance running a Consul agent and Docker. The Consul agent can be used to resolve DNS queries on 0.0.0.0:8600. I'd like to use this from inside a container.
A manual test works: running dig @172.17.0.1 -p 8600 rabbitmq.service.consul inside a container resolves properly.
A first solution is to run with --network host. It works, and I'll do this until I find something better, but I don't like it security-wise.
Another idea: use Docker's --dns and associated options. Even if I can script grabbing the IP, I can't figure out how to specify port 8600. Maybe in --dns-opt, but how?
Along the same line, writing the container's resolv.conf could do it. But again, how do I specify the port? I saw no hints in man resolv.conf; I believe it's not possible.
Lastly, I could set up dnsmasq inside the container or in a sidecar container, along the lines of this Q/A. But it's a bit heavy.
Can anyone help with this one?
You can achieve this with the following configuration.
Configure each Consul container with a static IP address.
Use Docker's --dns option to provide these IPs as resolvers to other containers.
Create an iptables rule on the host system which redirects traffic destined to port 53 of the Consul server to port 8600.
For example:
$ sudo iptables --table nat --append PREROUTING --in-interface docker0 --proto udp \
--dst 192.0.2.4 --dport 53 --jump DNAT --to-destination 192.0.2.4:8600
# Repeat for TCP
$ sudo iptables --table nat --append PREROUTING --in-interface docker0 --proto tcp \
--dst 192.0.2.4 --dport 53 --jump DNAT --to-destination 192.0.2.4:8600
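With those rules in place, pointing a container at the Consul address as its resolver should be enough; a quick check, reusing the placeholder address 192.0.2.4 from above:
# The container resolves via 192.0.2.4:53, which the host DNATs to :8600
docker run --rm --dns 192.0.2.4 busybox nslookup rabbitmq.service.consul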

Unable to access service from Kubernetes master node

[root@kubemaster ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1deployment-c8b9c74cb-hkxmq 1/1 Running 0 12s 192.168.90.1 kubeworker1 <none> <none>
[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root@kubemaster ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m <none>
pod1service ClusterIP 10.101.174.159 <none> 80/TCP 16s creator=sai
Curl on master node:
[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
Curl on worker node 1 is successful for the cluster IP (this is the node where the pod is running):
[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
Curl fails on the other worker node as well:
[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
I was facing the same issue so this is what I did and it worked:
Brief: I am running 2 VMs for a 2-node cluster: 1 master node and 1 worker node. A Deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a service which exposed that set of pods inside the cluster.
Issue: After deploying the service, kubectl get service gave me the ClusterIP of that service and a port (BTW I used type NodePort instead of ClusterIP when writing the service.yaml). But curling that IP address and port just hung and eventually gave a timeout.
Solution: Then I looked at the hierarchy. First I need to contact the node on which the pod is running, on the port given by the NodePort (i.e. the one between 30000-32767). So I did kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the NodePort, and curled that. My curl command was curl http://10.0.1.4:30669 and I was able to get the output.
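For context, a NodePort service of the kind described above might look roughly like this (all names and ports are illustrative placeholders, not the poster's actual service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # must match the labels on the deployment's pods
  ports:
    - port: 80           # the ClusterIP port
      targetPort: 8080   # the port the container listens on
      nodePort: 30669    # optional; auto-assigned from the 30000-32767 range if omitted
The service is then reachable at http://<node-internal-ip>:<nodePort> from outside the pod network.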
First of all, you should always be using the Service DNS name instead of cluster/dynamic IPs to access the deployed application. The service DNS name would be <service-name>.<service-namespace>.svc.cluster.local, where cluster.local is the default Kubernetes cluster domain unless it has been changed.
Now coming to service accessibility, it may be a DNS issue. What you can do is check the kube-dns pod logs in the kube-system namespace. Also, try to curl from a standalone pod to see whether that works:
kubectl run --generator=run-pod/v1 bastion --image=busybox --command -- sleep 3600
kubectl exec -it bastion -- sh
# busybox ships with wget rather than bash/curl
wget -qO- http://pod1service.default.svc.cluster.local
If not, the next questions would be: where is the cluster running, and how was it created?
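For reference, those checks might look like this (the label selectors assume a standard kubeadm-style cluster; adjust them if your DNS add-on is labelled differently):
# DNS add-on logs (CoreDNS/kube-dns both carry this label on kubeadm clusters)
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
# ClusterIP routing is programmed by kube-proxy, so its logs are worth a look too
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50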

Docker swarm mode routing mesh not working as expected

I tried to create services in Docker swarm mode by following this document.
I created two nodes in the swarm:
Then I created and deployed the service; I used jwilder/whoami here instead of the nginx from the document:
docker service create --name my-web --publish published=8888,target=8000 --replicas 2 jwilder/whoami
Seems like they started successfully:
As the document said:
When you access port 8080 on any node, Docker routes your request to
an active container.
So in my opinion, I should be able to access the my-web service from any of the nodes; however, I found that only one node works:
What's going on?
This can be caused by ports being blocked between the nodes. The swarm mesh networking uses the "ingress" network to connect the published port to a VIP for the service. That ingress network is an overlay network implemented with vxlan. For that you need:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Reference: https://docs.docker.com/network/overlay/
It's possible for these ports to be blocked at many levels, including iptables, firewalls on the routers, and I've even seen VMware block this with their NSX tool that also implements vxlan.
For iptables, I typically use the following commands:
iptables -A INPUT -p tcp -m tcp --dport 2376 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 2377 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 4789 -j ACCEPT
iptables -A INPUT -p 50 -j ACCEPT
The above will differ if you use firewalld or need to change firewall rules on the network routers.
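A quick way to sanity-check the ports between two nodes (a sketch; <other-node-ip> is a placeholder, and the UDP ports cannot be probed reliably this way):
# On each node, confirm dockerd is listening on the swarm control/gossip ports
sudo ss -lntu | grep -E ':(2377|7946)'
# From another node, test TCP reachability (2377 is only open on managers)
nc -zv <other-node-ip> 2377
nc -zv <other-node-ip> 7946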

Logspout can't connect to papertrail

I can't get logspout to connect to papertrail. I get the following error:
!! lookup logs5.papertrailapp.com on 127.0.0.11:53: read udp 127.0.0.1:46185->127.0.0.11:53: i/o timeout
where 46185 changes every time I run the container. It seems like a DNS error, but nslookup logs5.papertrailapp.com gives the expected output, as does docker run busybox nslookup logs5.papertrailapp.com.
Beyond that, I don't even know how to interpret that error message, let alone address it. Any help debugging this would be hugely appreciated.
My Docker Compose file:
version: '2'
services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  sleep:
    image: benwhitehead/env-loop
Where 12345 is the actual Papertrail port. The result is the same whether using syslog:// or syslog-tls://.
From https://docs.docker.com/engine/userguide/networking/configure-dns/:
the docker daemon implements an embedded DNS server which provides built-in service discovery for any container
It looks like your container is unable to connect to this DNS server. If your container is on the default bridge network, it won't reach the embedded DNS server. You can either set --dns to an outside source or update /etc/resolv.conf. It doesn't sound like a Papertrail issue at all.
(source)
Docker and iptables got in a fight. So I spun up a new machine, failed to set up iptables, and the problem was solved: no firewall at all to get in the way of Docker's connections!
Just kidding, don't do that. I got a toy database hacked that way.
Fortunately, it's now relatively easy to get iptables and Docker to live in harmony, using the DOCKER-USER iptables chain.
The solution, excerpted from my blog:
Configure Docker with iptables=true, and append the following to your iptables configuration:
iptables -A DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -j DROP
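The iptables=true part normally lives in the daemon configuration; a minimal /etc/docker/daemon.json sketch (iptables is also enabled by default, so the key may simply be absent):
{
  "iptables": true
}
Restart the Docker daemon after changing daemon.json for the setting to take effect.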
