Connection refused when trying to connect to services in Kubernetes - docker

I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant where the master has IP address of 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers on different machines can connect to each other without port mapping, and for Flannel to work we need etcd, because Flannel uses this datastore to put and get its data.
I installed etcd on the master node and set the Flannel network on it with the command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'.
To enable IP masquerading and to use the private network interface of the virtual machines, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in the /etc/sysconfig/flannel file.
In order to make Docker use the Flannel network, I added --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values for the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
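For reference, the file flanneld writes looks roughly like this; the subnet value below is only an illustration, since each host is handed its own /24 out of the /16:

```shell
# /run/flannel/subnet.env -- written by flanneld; values are illustrative
FLANNEL_NETWORK=10.33.0.0/16
FLANNEL_SUBNET=10.33.72.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
```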
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configuration, I changed the KUBE_SERVICE_ADDRESSES value in the /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16
and the KUBELET_API_SERVER value in the /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
This is the link to k8s-tutorial project repository with the complete files.
After all these efforts, all the services start successfully and work fine. It's clear that 3 nodes are running when I use the command kubectl get nodes. I can successfully create an nginx pod with the command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with the command kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod".
The command kubectl describe service service-pod outputs the following results:
Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.39.222:8000 I get curl: (7) Failed connect to 10.33.39.222:8000; Connection refused, but if I try curl 10.33.72.2:80 (the pod endpoint) I get the default nginx page. Also, I can't ping 10.33.39.222 and all the packets get lost.
Some suggested stopping and disabling firewalld, but it wasn't running at all on the nodes. Since Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but that didn't help either. I eventually tried changing the CIDR and using different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?

The only conflict I can see is between the pod CIDR and the CIDR you are using for the services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and the kube-apiserver gets --service-cluster-ip-range=10.33.0.0/16. That's the same range, and they should be different: you have kube-proxy setting up services on 10.33.0.0/16 while your overlay thinks it needs to route to pods running on 10.33.0.0/16. I would start by choosing completely non-overlapping CIDRs for your pods and services.
For example, on my cluster (I'm using Calico) I have a pod CIDR of 192.168.0.0/16 and a service CIDR of 10.96.0.0/12.
Note: you wouldn't be able to ping 10.33.39.222 either way, since a ClusterIP is a virtual address and ICMP to it is not allowed in this case.
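Concretely, the split might look like this; the ranges below are just one common non-overlapping pair, not the only valid choice:

```shell
# Pod network seeded into etcd for Flannel (example range)
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16"}'

# Service network in /etc/kubernetes/apiserver -- deliberately disjoint
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
```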

Your service is of type ClusterIP, which means it can only be accessed from within the cluster. To achieve what you are trying to do, consider switching to a service of type NodePort. You can then connect to it with the command curl <node-IP-address>:<nodePort>.
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
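Assuming the pod and service names from the question, the switch might look like this; the node port is whatever Kubernetes allocates from its default 30000-32767 range:

```shell
# Recreate the service as NodePort instead of ClusterIP
kubectl delete svc service-pod
kubectl expose pod nginx-pod --port=8000 --target-port=80 \
  --type=NodePort --name=service-pod

# Look up the allocated node port, then curl any node's address on it
kubectl get svc service-pod -o jsonpath='{.spec.ports[0].nodePort}'
curl 172.17.8.102:30080   # substitute the port printed above
```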

Related

How to access k3d Kubernetes cluster from inside a docker container?

I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a python script that uses the Kubernetes client API and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary python modules installed and have direct access to my local k8s cluster. My goal is to containerize it so that the same script runs successfully for my colleagues on their systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550; both give the same No route to host error (with the respective IP address in HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the docker image I need to use does not have socat installed. However, I get the feeling the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a user-defined network.
You can pass --network to k3d to attach it to an existing Docker network, and pass the same flag to docker run to do the same for another container.
https://k3d.io/internals/networking/
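A minimal sketch of that approach; the network and image names here are made up:

```shell
# Create a user-defined bridge network and start the k3d cluster on it
docker network create k3s-net
k3d cluster create dev --network k3s-net

# Run the client container on the same network; from inside it, the
# apiserver is reachable through the k3d server container's name rather
# than the 0.0.0.0 address in the host kubeconfig
docker run --rm --network k3s-net my-script-image
```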

Docker container ports are clashing in Kubernetes

I am deploying docker containers on a kubernetes cluster with 2 nodes. The docker containers need to have port 50052 open. My understanding was that I just needed to define a containerPort (50052) and have a service that points to it.
But when I deploy this, only the first 2 pods spin up successfully. After that, I get the following message, presumably because the new pods are trying to open port 50052, which is already in use.
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
I thought that multiple pods with the same requested port could be scheduled on the same node? Or is this not right?
Thanks, I figured it out -- I had set hostNetwork to true in my kubernetes deployment. Changing this back to false fixed my issue.
You are right, multiple pods with the same port can exist in a cluster. The service in front of them should have type: ClusterIP.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
To avoid port clashes you should not use NodePort as the service type, because if you have 2 nodes and 4 pods, more than one pod will exist on each node, causing a port clash.
Depending on how you want to reach your cluster, you then have different options...
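For completeness, this is roughly what the fixed deployment from the question could look like; the names and image are placeholders, and the point is that hostNetwork stays at its default of false so replicas can share a containerPort on one node:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server
spec:
  replicas: 4
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      # hostNetwork defaults to false; setting it to true binds each pod
      # to the node's network namespace, which caused the port clash
      containers:
      - name: server
        image: example.com/grpc-server:latest   # placeholder image
        ports:
        - containerPort: 50052
EOF
```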

Accessing a k8s service with cluster IP in default namespace from a docker container

I have a server that is orchestrated using k8s; its service looks like below:
➜ installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜ helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a docker image with an env variable that requires the URL of this server.
I have 2 questions from here.
How can the docker image get the URL or access the URL?
How can I access the same URL in my terminal so I can make some curl requests through it?
I hope I am clear on the explanation.
If your docker container is outside the kubernetes cluster, then it's not possible to access your ClusterIP service.
As you could guess by its name, ClusterIP type services are only accessible from within the cluster.
By within the cluster I mean any resource managed by Kubernetes.
A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.
So, in order to achieve what you want, you'll have those possibilities :
Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs. Keep this usage for very specific cases.
Switch your service to NodePort instead of ClusterIP. This way, you'll be able to access it using a node IP + the node port.
Use a LoadBalancer type of service, but this solution needs some configuration and is not straightforward.
Use an Ingress along with an IngressController but just like the load balancer, this solution needs some configuration and is not that straightforward.
Depending on what you do and if this is critical or not, you'll have to choose one of these solutions.
1 & 2 for debug/dev
3 & 4 for prod, but you'll have to work with your k8s admin
You can use the name of the service oxd-server from any other pod in the same namespace to access it i.e., if the service is backed by pods that are serving HTTPS, you can access the service at https://oxd-server:8443/.
If the client pod that wants to access this service is in a different namespace, then you can use oxd-server.<namespace> name. In your case that would be oxd-server.default since your service is in default namespace.
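Assuming the service serves HTTPS on 8443 as in the question, from another pod that would look like:

```shell
# From a pod in the same (default) namespace
curl -k https://oxd-server:8443/

# From a pod in a different namespace, qualify the name with the namespace
curl -k https://oxd-server.default:8443/
```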
To access this service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make requests, and they will be forwarded to the service.
If you want to access this service from outside the cluster for production use, you can make the service as type: NodePort or type: LoadBalancer. See service types here.

Kubernetes using Gitlab installing Ingress returns "?" as external IP

I have successfully connected my Kubernetes cluster with Gitlab, and I was able to install Helm through the Gitlab UI (Operations->Kubernetes).
My problem is that when I click the "Install" button for Ingress, Gitlab creates all the necessary resources for the Ingress controller, but one thing is missing: the external IP, which is shown as "?".
And If I run this command:
kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
It shows nothing, as if I don't have a load balancer that exposes an external IP.
Kubernetes Cluster
I installed Kubernetes through kubeadm, using flannel as CNI
kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Is there something that I have to configure before installing Ingress? Do I need an external load balancer (my thought was that Gitlab would create that service for me)?
One more hint: after installation, the Nginx ingress controller service stays in the pending state because it is not able to get an external IP. I also modified the service's yaml file and manually added an externalIPs entry. After that it was not pending anymore, but I still couldn't find an external IP by typing the above command, and Gitlab couldn't find any external IP either.
EDIT:
This happens after installation:
see picture
EDIT2:
By running the following command:
kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps
I get the following result:
see picture
In the event log you will see that I switched the type to "NodePort" once and then back to "LoadBalancer", and that I added the "externalIPs: -192.168.50.235" line in the yaml file. As you can see there is an external IP, but Gitlab is not detecting it.
Btw, I'm not using any cloud provider like AWS or GCE, and I found out that LoadBalancer does not work that way without one. But there must be a solution for this without a cloud load balancer.
I would consider looking at MetalLB as the main provisioner of load-balancing services in your cluster. If you don't use any cloud provider to obtain the entry point (external IP) for the Ingress resource, MetalLB is an option for bare-metal environments: it creates Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, so it can also be used for the NGINX Ingress Controller.
Generally, MetalLB can be installed via a Kubernetes manifest file or using the Helm package manager as described here.
MetalLB deploys its own services across the Kubernetes cluster, and it might require reserving a pool of IP addresses in order to be able to take ownership of the ingress-nginx service. This pool can be defined in a ConfigMap called config, located in the same namespace as the MetalLB controller:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
An external IP will be assigned to your LoadBalancer once the ingress service obtains an IP address from this address pool.
Find more details about the MetalLB implementation for the NGINX Ingress Controller in the official documentation.
After some research I found out that this is a Gitlab issue. As I said above, I successfully established a connection to my cluster. Since I'm using Kubernetes without a cloud provider, it is not possible to use the type "LoadBalancer". Therefore you need to add an external IP or change the type to "NodePort". This way you can make your ingress controller accessible from outside.
Check this out: kubernetes service external ip pending
I just continued the Gitlab tutorial and it worked.
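For the record, both workarounds (NodePort, or a manually pinned external IP) can be applied with kubectl patch against the service Gitlab created; the IP below is the one from the question and should be a reachable node address in your network:

```shell
# Option A: switch the controller service to NodePort
kubectl -n gitlab-managed-apps patch svc ingress-nginx-ingress-controller \
  -p '{"spec": {"type": "NodePort"}}'

# Option B: keep type LoadBalancer but pin an external IP manually
kubectl -n gitlab-managed-apps patch svc ingress-nginx-ingress-controller \
  -p '{"spec": {"externalIPs": ["192.168.50.235"]}}'
```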

I can not access my Container Docker Image by HTTP

I created an image with apache2 running locally in a docker container via a Dockerfile exposing port 80, then pushed it to my DockerHub repository.
I created a new Container Engine instance in my project on Google Cloud. Within it I have two machines, the master and node1.
Then I created a Pod, specifying the name of my image on DockerHub and configuring the ports containerPort and hostPort as 6379 and 80 respectively.
I accessed node1 via SSH and ran $ sudo docker ps -l; my docker container is there.
I created a service for the instance, configuring the ports as in the Pod: containerPort and hostPort as 6379 and 80 respectively.
I checked that the firewall allows access to port 80. Even though I didn't deem it necessary, I also created a rule to allow access through port 6379.
But when I enter http://IP_ADDRESS:PORT, the server is not available.
Any idea what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
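A sketch of such a load-balanced service; the name and selector are assumptions, since the question doesn't show the pod's labels:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: apache-frontend          # hypothetical name
spec:
  type: LoadBalancer             # GCE provisions the external load balancer
  selector:
    app: apache                  # must match the pod's labels
  ports:
  - port: 80
    targetPort: 80
EOF

# Then open the firewall so external traffic can reach the load balancer
gcloud compute firewall-rules create allow-apache-http --allow tcp:80
```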
