Running an nginx forward proxy in kubernetes, getting connection timeout - docker

I am trying to run an nginx forward proxy in Kubernetes but I'm facing some issues.
My configuration:
nginx configured as a forward proxy using the HTTP CONNECT module, running in Docker: Dockerfile - listens on 8888 (a rough sketch of such a config follows this list)
Kubernetes (AKS) with Istio 1.4: Deployment, Service, Gateway and VirtualService configuration, host exposed on port 36088
Firefox for testing
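For context, a forward-proxy nginx.conf of this kind typically looks roughly like the sketch below. It assumes the third-party ngx_http_proxy_connect_module is compiled in (the question only says "HTTP CONNECT module"), and the resolver, timeouts and allowed ports are placeholders, not taken from the question:

server {
    listen 8888;
    resolver 8.8.8.8;                    # DNS resolver used for proxied hostnames
    proxy_connect;                       # enable CONNECT tunnelling (HTTPS traffic)
    proxy_connect_allow 443 563;         # ports clients are allowed to CONNECT to
    proxy_connect_connect_timeout 10s;
    proxy_connect_read_timeout 10s;
    proxy_connect_send_timeout 10s;

    location / {
        proxy_pass http://$host;         # plain-HTTP forward proxying
        proxy_set_header Host $host;
    }
}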
My steps:
For local testing, I'm configuring the connection settings in Firefox to point to the instance of nginx running in Docker on localhost:8888. With this configuration, the proxy is behaving as expected, denying and allowing traffic as per the nginx.conf.
For testing my pod in K8S, I can run kubectl port-forward name-of-my-pod 8888:8080 and configure Firefox to use the proxy forwarded on localhost:8080. As per point 1, the proxy works as expected and I can see the traffic hitting my pod in the logs.
Finally, to test my Istio/AKS configuration, I can hit https://proxy.mydomain.net:36088 (defined in the gateway) with a web browser. The URL responds just fine and I can see the pod outputting some logs.
Sadly, though, when I configure Firefox to use proxy.mydomain.net:36088 as its proxy, I get a connection timeout; the traffic never actually hits my pod and I don't get any logs.
In other words, the proxy doesn't seem to work when I use it as a proxy, but it responds fine when I access its URL as a normal website.
Given that the traffic doesn't seem to hit my pod, I guess that I need to configure something else in Istio/AKS so that my service/pod works as a proxy, but I don't know if that guess is right. Is there anything obvious that I am missing?

After a lot of digging, we managed to make this work.
This is what our templates look like: deployment, gateway, service and virtualservice.
The crucial points were:
Adding the .Values.istio.port.number to the istio-ingressgateway
Configuring .Values.istio.port.protocol to TCP
Configuring the gateway as PASSTHROUGH
A sketch of what this can look like follows.
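One plausible shape for the passthrough Gateway and VirtualService is sketched below. The actual templates are the linked files; the resource names and the backing service name nginx-proxy here are assumptions, only the ports come from the question, and the extra 36088 entry added to the istio-ingressgateway Service itself is a plain TCP port:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: proxy-gateway                # assumed name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 36088                  # must also be added as a port on the istio-ingressgateway service
      name: tls-proxy
      protocol: TLS
    tls:
      mode: PASSTHROUGH              # the gateway forwards the bytes without terminating the connection
    hosts:
    - proxy.mydomain.net
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: proxy-virtualservice         # assumed name
spec:
  hosts:
  - proxy.mydomain.net
  gateways:
  - proxy-gateway
  tls:
  - match:
    - port: 36088
      sniHosts:
      - proxy.mydomain.net
    route:
    - destination:
        host: nginx-proxy            # ClusterIP service in front of the nginx pod (assumed name)
        port:
          number: 8888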

Related

HAProxy running inside Docker container always logging same client IP

I have a Docker container running HAProxy, and while it seems to work well, I'm running into a problem: the client IP that reaches the frontend, and shows up in the HAProxy logs, is always the same, only with a different port. This IP seems to be the IPv4 IPAM gateway of the network the container is running in (192.168.xx.xx).
The problem is that, since every request reaching the proxy has the same client IP regardless of the machine or network it came from, it's easy for someone with bad intentions to trigger the security restrictions: that single IP gets banned, and no request gets through until the proxy is reset, because every request appears to come from the same banned IP.
This is my current HAProxy config (reduced to the bare minimum, without the restriction rules, timeouts, etc., for ease of understanding; I'm testing with this setup and the problem is still present):
global
    log stdout format raw local0 info

defaults
    mode http
    log global
    option httplog
    option forwardfor

frontend fe
    bind :80
    default_backend be

backend be
    server foo_server $foo_server_IP_and_Port

backend be_abuse_table
    stick-table type ip size 1m expire 15m store conn_rate(3s),conn_cur,gpc0,http_req_rate(15s),http_err_rate(20s)
I have tried setting and adding headers, and I've also tried running the container on the host network, but then the request does not reach the backend server because it's on a different network. Besides, I would like to keep the container on the network it's currently on, alongside the other containers.
Also, does the backend server configuration influence this problem in any way? My understanding is that since the problem is already present when the request reaches the frontend, the backend configuration shouldn't matter here.
Any suggestions? This has been driving me crazy for 2 days now. Thank you so much!
Turns out the problem was that I was using Docker in rootless mode. You can either use normal Docker, or stay rootless, install the slirp4netns package and change the port forwarder, following these steps (the section about changing the port forwarder): https://rootlesscontaine.rs/getting-started/docker/#changing-the-port-forwarder
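For reference, the change described on that page amounts to switching the RootlessKit port driver for the user-level Docker service, roughly like this (double-check the exact steps against the linked docs for your version):

# ~/.config/systemd/user/docker.service.d/override.conf
[Service]
Environment="DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns"

# then reload and restart the rootless daemon
systemctl --user daemon-reload
systemctl --user restart docker

With the slirp4netns port driver, forwarded connections keep their real source address, so HAProxy no longer sees every client as the rootless network gateway.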

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another container using pure Docker, specifically with the --net=container:something option (example trimmed):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, all containers in the pod will have access to the network via the VPN.
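A minimal sketch of a two-container pod along those lines. The image names are placeholders, and the NET_ADMIN capability plus the /dev/net/tun mount are assumptions based on what VPN clients typically need, not something stated in the question:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
  - name: vpn-client                        # hypothetical image running the Kerio VPN client
    image: example/kerio-vpn-client:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]                  # VPN clients usually need to create tun devices and routes
    volumeMounts:
    - name: dev-net-tun
      mountPath: /dev/net/tun
  - name: app                               # the .NET Core program; it shares the pod's network
    image: example/dotnet-app:latest        # namespace, so its traffic follows the VPN routes
  volumes:
  - name: dev-net-tun
    hostPath:
      path: /dev/net/tun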
Based on your comment, I think I can suggest two methods.
Private GKE Cluster with CloudNAT
In this setup, you should use a private GKE cluster with Cloud NAT for external communication. You would need to use a manually allocated external IP.
This scenario uses a specific external IP for the VPN connection, but your customer would need to whitelist access for this IP. A rough sketch of the Cloud NAT side follows below.
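Roughly, reserving a static egress IP and routing the cluster's outbound traffic through it with Cloud NAT could look like this (region, network and resource names are placeholders; check the current gcloud flags before using):

# reserve a static external IP for egress
gcloud compute addresses create vpn-egress-ip --region=europe-west1

# create a Cloud Router and a NAT config that uses the reserved IP
gcloud compute routers create nat-router --network=my-vpc --region=europe-west1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=europe-west1 \
    --nat-external-ip-pool=vpn-egress-ip \
    --nat-all-subnet-ip-ranges

The customer would then whitelist that reserved address, and all egress from the private cluster appears to come from it.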
Site to site VPN using CloudVPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stackoverflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx, and I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it should be enough to run the VPN in the background in the nginx container, and requests should reach the backend via localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 timeout in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

No outbound networking on Kubernetes pods

I am running a one-node Kubernetes cluster in a VM for development and testing purposes. I used Rancher Kubernetes Engine (RKE, Kubernetes version 1.18) to deploy it and MetalLB to enable the LoadBalancer service type. Traefik is version 2.2, deployed via the official Helm chart (https://github.com/containous/traefik-helm-chart). I have a few dummy containers deployed to test the setup (https://hub.docker.com/r/errm/cheese).
I can access the Traefik dashboard just fine through the node's IP (so MetalLB seems to work). It registers the services and routes for the test containers. Everything looks fine, but when I try to access the test containers in my browser I get a 502 Bad Gateway error.
Some probing showed that there seems to be an issue with outbound traffic from the pods. When I SSH into the node, I can reach all pods by their service or pod IP, and DNS from node to pod works as well. However, if I start an interactive busybox pod, I can't reach any other pod or host from there. When I wget any other container (all in the default namespace) I only get wget: can't connect to remote host (10.42.0.7): No route to host. The same is true for servers on the internet.
I have not installed any network policies and there are none installed by default that I am aware of.
I have also gone through this: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service
Everything in the guide is working fine, except that the pods don't seem to have any network connectivity whatsoever.
My RKE config is standard, except that I turned off the standard Nginx ingress and enabled etcd encryption-at-rest.
Any ideas?
Maybe just double-check that your node's IP forwarding is turned on: sysctl net.ipv4.ip_forward
If for some reason it doesn't return:
net.ipv4.ip_forward = 1
Then you can set it with:
sudo sysctl -w net.ipv4.ip_forward=1
And to make it permanent:
edit /etc/sysctl.conf
add or uncomment net.ipv4.ip_forward = 1
and reload via sysctl -p /etc/sysctl.conf
Ok, so I was being stupid (or rather: a noob). I had an old iptables rule lying around on the host dropping all traffic on the FORWARD chain... removing that rule fixed the problem.
I feel a bit uneasy just removing that rule, but I have to admit that I don't fully understand the security implications of this. That might take some further research, but it's another topic. And since I'm not currently planning to run this cluster in production but rather use a hosted cluster, it's not really a problem anyway.
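For anyone hitting the same symptom, a quick way to spot and remove such a rule (the rule number shown is just an example; use whatever number appears on your host):

# list FORWARD chain rules with their numbers and packet counters
sudo iptables -L FORWARD -v -n --line-numbers

# delete the offending DROP rule by its number (e.g. rule 1)
sudo iptables -D FORWARD 1

Note that Docker and many CNI plugins manage their own FORWARD rules, so it's worth checking where a rule came from before removing it for good.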

Service IP & Port discovery with Kubernetes for external App

I'm creating an App that will have to communicate with a Kubernetes service, via REST APIs. The service hosts a docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I expose my deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
it then creates a service named app-service.
To then locally test this, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/ which, when I enter it in my browser, works fine (I get the JSON responses I'm expecting).
However, it seems the IP & port for that service are always different whenever I start this service up.
Which leads to my question - how is a mobile App supposed to find that new IP:Port then if it's subject to change? Is the common way to work around this to register that combination with a DNS server (such as Google Cloud's DNS system)?
Or am I missing a step here with setting up Kubernetes public services?
Which leads to my question - how is a mobile App supposed to find that new IP:Port then if it's subject to change?
minikube is not meant for production use; it is only meant for development purposes. You should create a real Kubernetes cluster and use a LoadBalancer-type service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice. A minimal sketch is below.
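A minimal sketch of such an Ingress, assuming the app-service name from the question, a hypothetical host api.example.com, and an ingress controller already running in the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com            # stable DNS name the mobile app can use
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service        # the service created by kubectl expose
            port:
              number: 8080           # assumed service port

The DNS record for api.example.com then points at the ingress controller's external IP, which stays the same no matter how often the pods or services behind it are recreated.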

Getting a connection timeout issue with port forwarding in Kubernetes?

I'm running a k8s cluster on Docker for Mac. To allow a connection from my database client to my MySQL pod, I use the following command: kubectl port-forward mysql-0 3306:3306. It works great, however a few hours later I get the following error: E0201 18:21:51.012823 51415 portforward.go:233] lost connection to pod.
I check the actual mysql pod, and it still appears to be running. This happens every time I run the port-forward command.
I've seen the following answer here: kubectl port forwarding timeout issue and the solution is to use the following flag --streaming-connection-idle-timeout=0 but the flag is now deprecated.
So, following on from there, it appears that I have to set that parameter via a kubelet config file (config file)? I'm unsure how I could achieve this, as Docker for Mac runs as a daemon and I don't manually start the cluster.
Could anyone send me a code example or instructions as to how I could configure kubectl to set that flag so my port forwarding won't have timeouts?
Port forwards are generally for short-term debugging, not “hours”. What you probably want is a NodePort-type service, which you can then connect to directly. A sketch is below.
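A minimal sketch of what that could look like for the MySQL pod; the selector label is an assumption and has to match whatever labels mysql-0 actually carries (check with kubectl get pod mysql-0 --show-labels):

apiVersion: v1
kind: Service
metadata:
  name: mysql-external
spec:
  type: NodePort
  selector:
    app: mysql             # assumed label on the MySQL pod/StatefulSet
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306        # fixed port in the 30000-32767 range

The database client then connects to the node's address on port 30306 instead of relying on a long-lived kubectl port-forward session.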
