Getting a connection timeout issue with port forwarding in Kubernetes? - docker

I'm running a k8s cluster on Docker for Mac. To allow a connection from my database client to my mysql pod, I use the following command: kubectl port-forward mysql-0 3306:3306. It works great; however, a few hours later I get the following error: E0201 18:21:51.012823 51415 portforward.go:233] lost connection to pod.
I check the actual mysql pod, and it still appears to be running. This happens every time I run the port-forward command.
I've seen the following answer here: kubectl port forwarding timeout issue, and the solution is to use the flag --streaming-connection-idle-timeout=0, but that flag is now deprecated.
So following on from there, it appears that I have to set that parameter via a kubelet config file (config file)? I'm unsure how I could achieve this, as Docker for Mac runs as a daemon and I don't manually start the cluster.
Could anyone send me a code example or instructions as to how I could configure kubectl to set that flag so my port forwarding won't have timeouts?

Port forwards are generally for short-term debugging, not "hours". What you probably want is a NodePort-type service, which you can then connect to directly.
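For example, a minimal sketch using kubectl expose (the pod name comes from the question; the service name is an assumption):

kubectl expose pod mysql-0 --type=NodePort --port=3306 --name=mysql-external
kubectl get service mysql-external    # note the assigned node port (3XXXX range)

On Docker for Mac the node port is typically reachable on localhost, so your database client can connect to localhost:<nodePort> without keeping a port-forward alive.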

Related

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform HTTP requests towards a server behind a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect; I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection breaks again, even for the host machine.
NOTE: I already checked the IP routes and there seems to be no conflict between the Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue: I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection appears to have failed. I'm going through the documentation on DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
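If restarting the daemon after every VPN connect becomes tedious, one workaround (a sketch, assuming you know which resolvers your VPN pushes; the addresses below are placeholders) is to pin DNS servers in Docker's daemon config so containers use them no matter when the daemon started:

# "dns" is a standard dockerd option; substitute your VPN's resolvers
echo '{ "dns": ["10.0.0.2", "8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker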

How to run a docker image on kubernetes that accepts command line arguments?

I have a docker image that contains a Python file that reads its input from stdin (sys.stdin). I can run the image using the following command:
cat file.csv | docker run -i my_image
It pipes the contents of file.csv to the container, and I get the output as expected.
Now I want to deploy this image to Kubernetes. I can run the image on the server using Docker without any problems. But if I curl it, I don't get the response back that I should, because there is no web server listening on any port. I went ahead and built a deployment using the following command:
kubectl run -i my_deployment --image=gcr.io/${PROJECT_ID}/my_image:v1 --port 8080
It built the deployment and I can see the pods running. Then I expose it:
kubectl expose deployment my_deployment --type=LoadBalancer --port 80 --target-port 8080
But if I try to access it with curl using the allocated IP,
curl http://allocated_ip
I get "connection refused".
How can I deploy this Docker image as a service on Kubernetes and send the contents of a file as input to the service? Do I need a web server for that?
Kubernetes generally assumes the containers it deploys are long-lived and autonomous. If you're deploying something in a Pod, particularly via a Deployment, it should be able to run on its own without any particular inputs. If it immediately exits, Kubernetes will restart it, and you'll quickly wind up in the dreaded CrashLoopBackOff state.
In short: you need to redesign your container to not use stdin and stdout as its primary interface.
Your instinct to add a network endpoint to the service is probably correct, but Kubernetes won't do that on its own. If you rebuild your application to have, say, a Flask server listening on a port, that's something you can readily deploy to Kubernetes. If the application expects data to come in on stdin and its results to go to stdout, adding the Kubernetes networking metadata won't help anything: in your example, if nothing is listening inside the container on port 8080, then a network connection will never go anywhere.
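As a rough sketch of that rebuild (all of this is an assumption, not the asker's actual code; the CSV processing is a placeholder that just counts lines), the data could arrive over HTTP instead of stdin:

cat > app.py <<'EOF'
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_csv():
    # The POST body replaces what used to arrive on stdin
    data = request.get_data(as_text=True)
    # Placeholder for the real CSV processing
    return "received %d lines\n" % len(data.splitlines())

app.run(host="0.0.0.0", port=8080)
EOF

# With that listening on 8080, the old "cat file.csv | docker run -i my_image"
# becomes a plain HTTP call against the exposed service:
curl --data-binary @file.csv http://allocated_ip/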
I am assuming Kubernetes is running on premises. I would do the following.
Create an nginx or Apache deployment. Using Helm, it is pretty easy with
helm install stable/nginx-ingress
Create a deployment with port 8080, or whatever port you would expose when running it with Docker. The actual deployment would have an API to which I could send content via a POST.
Create a service with port 8080 and targetPort 8080. It should be type ClusterIP (see the sketch after these steps).
Create an ingress with the hostname and a servicePort of 8080.
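A minimal sketch of the Service from step 3 (the name and selector are assumptions and must match your deployment's pod labels):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-deployment         # assumed name
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: my-deployment        # assumed label; must match the pods
EOF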
Since you are passing the file as an argument when running a command, this makes me think that once the content is in the container you do not need to update the contents of the CSV.
The best approach for reading that file would be to ADD it in your Dockerfile and then open it using Python's open function.
You would have a line like
ADD file.csv /home/file.csv
And in your Python code, something like:
file_in = open('/home/file.csv', 'r')
Note that if you want to change the file, you will need to update the Dockerfile, build again, push to the registry, and re-deploy to GKE. If you do not want to follow this process, you can use a ConfigMap instead, as sketched below.
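A sketch of the ConfigMap route (names and the mount point are assumptions):

# Store the CSV in a ConfigMap instead of baking it into the image
kubectl create configmap file-csv --from-file=file.csv
# Then mount it in the pod spec so it shows up as /home/file.csv:
#   volumes:
#   - name: file-csv
#     configMap:
#       name: file-csv
#   ...and in the container:
#   volumeMounts:
#   - name: file-csv
#     mountPath: /home

Updating the file then means updating the ConfigMap and restarting the pod, rather than rebuilding and pushing the image.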
Also, if this answers your question, make sure to link your same question on Server Fault.

Connection refused when trying to connect to services in Kubernetes

I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant, where the master has the IP address 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers on different machines can connect to each other without port mapping, and for Flannel to work we need etcd, because Flannel uses this datastore to put and get its data.
I installed etcd on the master node and set the Flannel network on it with the command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'.
To enable IP masquerading and to use the private network interface of the virtual machine, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in the /etc/sysconfig/flannel file.
To make Docker use the Flannel network, I added --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values of the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configuration, I changed the KUBE_SERVICE_ADDRESSES value in the /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16 and the KUBELET_API_SERVER value in the /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
This is the link to the k8s-tutorial project repository with the complete files.
After all these efforts, all the services start successfully and work fine. There are clearly 3 nodes running when I use the command kubectl get nodes. I can successfully create an nginx pod with the command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with the command kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod".
The command kubectl describe service service-pod outputs the following results:
Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.39.222:8000, I get curl: (7) Failed connect to 10.33.39.222:8000; Connection refused, but if I try curl 10.33.72.2:80 I get the default nginx page. Also, I can't ping 10.33.39.222; all the packets get lost.
Some suggested stopping and disabling firewalld, but it wasn't running on the nodes at all. Since Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but that didn't help either. I eventually tried changing the CIDR and using different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?
The only conflict I can see is between your pod CIDR and the CIDR you are using for the services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and on the kube-apiserver you have --service-cluster-ip-range=10.33.0.0/16. That's the same range, and it should be different: you have kube-proxy setting up services for 10.33.0.0/16 while your overlay thinks it needs to route that same range to the pods. I would start by choosing completely non-overlapping CIDRs for your pods and services.
For example, on my cluster (I'm using Calico) I have a pod CIDR of 192.168.0.0/16 and a service CIDR of 10.96.0.0/12.
Note: you wouldn't be able to ping 10.33.39.222 anyway, since ICMP is not allowed to these virtual service IPs.
Your service is of type ClusterIP, which means it can only be accessed from within the cluster. To achieve what you are trying to do, consider switching to a service of type NodePort. You can then connect to it using the command curl <node-IP-address>:<exposedServicePort>.
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
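A sketch using the names from the question (the node port is assigned by Kubernetes; look it up with kubectl get service):

kubectl delete service service-pod
kubectl expose pod nginx-pod --port=8000 --target-port=80 --type=NodePort --name="service-pod"
kubectl get service service-pod    # shows something like 8000:3XXXX/TCP
curl 172.17.8.102:<assigned-node-port>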

Cannot curl Kubernetes service IP from the host

I am having the same problem as mentioned here: Cannot access kubernetes service via outside network. I have tried the solution mentioned using Ingress, but without any success.
My pods are up and running, along with my service.
I can curl any of the endpoints successfully from within a pod, but not able to curl from the host.
When I use Ingress, the address field is blank, and when I try to curl the hostname, I get Could not resolve host.
I am using Kubernetes on Docker Edge, on a MacBook Pro.
How do I curl the service endpoint from the host?
First of all, please note that Kubernetes on macOS runs a separate virtual machine to run the Docker containers and Kubernetes itself. It is important to understand that you can therefore have problems connecting from macOS to some Kubernetes resources: TCP connections are not realized the same way they are in a cloud environment, and they depend on how the networking between macOS and the VM running the Kubernetes stack is configured (NAT, bridge, or host-only).
I suppose that you chose a NodePort Service. In that kind of configuration, you need to know both the IP address of a node and the port where Kubernetes started listening for incoming connections. Ingress, in this case, analyses the HTTP Host header to determine the route for the traffic, so it is similar to a type: NodePort Service: you need to call the proper Ingress service, and it is not obvious that the service is listening on a well-known port. In fact, it is a bit tricky, and it may not be easy to connect from macOS to a type: NodePort service without knowing where Kubernetes created the listening socket and being sure that macOS actually has a route to that TCP port on the VM.
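As a concrete starting point (a sketch; my-service is a placeholder for your service name), you can ask Kubernetes which node port was assigned and then try it from the host. On Docker for Mac, node ports are typically reachable on localhost:

kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'
curl http://localhost:<nodePort-printed-above>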

Docker: able to telnet to remote machines from host but not from container

We have a couple of Docker containers deployed on ECS. The application inside the containers uses remote services, so it needs to access them using their 10.X.X.X private IPs.
We are using Docker 1.13 with CentOS 7 and docker/alpine as our base image. We are also using networkMode: host for our containers. The problem is that we can successfully run telnet 10.X.X.X 9999 from the host machine, but if we run the same command from inside the container, it just hangs and is not able to connect.
In addition, we have net.ipv4.ip_forward enabled on the host machines (where the containers run) but disabled on the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network modes, 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS: Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is using netcat as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket - it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
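For instance, a sketch for an Alpine-based image (the package name is an assumption; run apk search telnet to see what your Alpine version actually offers):

# Replace busybox's telnet with a standalone implementation, then retry
apk add --no-cache inetutils-telnet
telnet 10.X.X.X 9999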
