Accessing a minikube LoadBalancer service using the VM host IP - docker

I have a VM (IP: 10.157.156.176) with Linux 7 installed, and I am able to SSH into it using the VM IP.
I have successfully installed kubectl and Minikube, and created a LoadBalancer service with 2 pods.
[10-157-156-176 ~]$ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   14h   v1.21.2
[10-157-156-176 ~]$ kubectl get svc
NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
customers-engagement-service   LoadBalancer   10.106.146.66   <pending>     80:30654/TCP   14h
kubernetes                     ClusterIP      10.96.0.1       <none>        443/TCP        14h
[10-157-156-176 ~]$ kubectl get pods -o wide
NAME                                            READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
customers-engagement-service-6f75f4df4b-vlpb8   1/1     Running   0          13h     172.17.0.6   minikube   <none>           <none>
customers-engagement-service-6f75f4df4b-zdjmd   1/1     Running   0          4h22m   172.17.0.5   minikube   <none>           <none>
[10-157-156-176 ~]$ minikube service customers-engagement-service --url
http://192.168.49.2:30654
I am able to access the service from within my VM (10.157.156.176) using the service URL:
[10-157-156-176 ~]$ curl -v http://192.168.49.2:30654/customers
* Trying 192.168.49.2:30654...
* TCP_NODELAY set
* Connected to 192.168.49.2 (192.168.49.2) port 30654 (#0)
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
I would like to access the service from a different machine (which has connectivity to the host VM) using the host VM IP (10.157.156.176) instead of the Minikube VM IP (192.168.49.2).
What changes do I have to make to achieve that?

For type LoadBalancer, you will see that the external IP is stuck in the pending state; you need to run minikube tunnel to expose it. To use the host IP you need to use a NodePort.
Here is a detailed document : https://minikube.sigs.k8s.io/docs/handbook/accessing/
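Since the Minikube node IP (192.168.49.2) lives on a Docker network that only the host VM can reach, one way to expose the service to other machines is to forward a port on all of the host's interfaces to the service. A minimal sketch, using the service and port from the question (the local port 8080 is an arbitrary choice):
[10-157-156-176 ~]$ kubectl port-forward --address 0.0.0.0 service/customers-engagement-service 8080:80
Then, from the other machine:
curl http://10.157.156.176:8080/customers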

Related

How to connect to the master node using NodePort in Kubernetes

I have two pods running on two different VMs in the cluster, one on the master node and the other on the worker node. My Dockerfile exposes port 31700 on the server side; the IP address of the server VM node is 192.168.56.105, and the client VM's IP address is 192.168.56.106.
Dockerfile
EXPOSE 31700
Server file
from socket import socket
sock = socket()
sock.bind(('0.0.0.0', 31700))
sock.listen(1)
Client file
from socket import socket
sock = socket()
sock.connect(('192.168.56.105', 31700))
Pod : kubectl get pods
NAME      STATUS   ROLES    AGE   VERSION
kmaster   Ready    master   25h   v1.19.3
knode     Ready    worker   25h   v1.19.3
Service : kubectl get services
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        25h
myapp-service   NodePort    10.108.144.147   <none>        80:31700/TCP   49m
Details of the service are described below:
kubectl describe services myapp-service
Name:                     myapp-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=edge-server
Type:                     NodePort
IP:                       10.108.144.147
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31700/TCP
Endpoints:                192.168.189.5:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
When I try to connect with the command below, I get Connection refused on both VMs:
curl -v https://192.168.56.105:31700
I am able to ping the two pods. Please help me connect the server and the client.
You need to use port-forwarding to access applications in your cluster
(see https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
You can forward a local port to a port on the Pod with:
kubectl port-forward service/kubernetes <local-port>:443
myapp-service is exposing (listening on) 31700 as its NodePort, but it forwards to port 80 in the container.
In your Dockerfile it should be "EXPOSE 80" instead of "EXPOSE 31700" (assuming your container is listening on port 80).
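For illustration, here is a minimal Service manifest sketch in which all three ports line up the way this answer assumes (container listening on 80, NodePort kept at 31700); it is a hypothetical reconstruction, not the asker's actual YAML:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: edge-server
  ports:
  - port: 80          # Service (ClusterIP) port inside the cluster
    targetPort: 80    # port the container must actually listen on (hence EXPOSE 80)
    nodePort: 31700   # reachable externally as 192.168.56.105:31700
With that in place, plain HTTP should answer on curl -v http://192.168.56.105:31700 (note http, not https: nothing in this setup terminates TLS).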

Why can't I access a kubernetes pod from other nodes' IPs?

I've installed a kubernetes cluster with the help of Kubespray.
The cluster has 3 nodes (2 masters & 1 worker):
node1 - 10.1.10.110,
node2 - 10.1.10.111,
node3 - 10.1.10.112
$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   12d   v1.18.5
node2   Ready    master   12d   v1.18.5
node3   Ready    <none>   12d   v1.18.5
I deployed this pod on node1 (10.1.10.110) and exposed a NodePort service as shown.
NAMESPACE   NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
default     pod/httpd-deployment-598596ddfc-n56jq   1/1     Running   0          7d21h   10.233.64.15   node1   <none>           <none>
---
NAMESPACE   NAME                    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
default     service/httpd-service   NodePort   10.233.16.84   <none>        80:31520/TCP   12d   app=httpd
Service description
$ kubectl describe services -n default httpd-service
Name:                     httpd-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=httpd
Type:                     NodePort
IP:                       10.233.16.84
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31520/TCP
Endpoints:                10.233.64.15:80
Session Affinity:         None
External Traffic Policy:  Cluster
Question:
I am able to access the service from node1:31520 (where the pod is actually deployed), but I can't access the same service from the other nodes (node2:31520 or node3:31520).
$ curl http://10.1.10.110:31520
<html><body><h1>It Works!</h1></body></html>
but if I curl using another node's IP, the request times out:
$ curl http://10.1.10.111:31520
curl: (7) Failed connect to 10.1.10.111; Connection timed out
$ curl http://10.1.10.112:31520
curl: (7) Failed connect to 10.1.10.112; Connection timed out
Can anyone suggest what I am missing?
Ideally you should be able to access a pod via NodePort using any node's IP. If kube-proxy or the CNI plugin (Calico etc.) is not working properly in your cluster, it can cause exactly this problem, where a pod is not reachable via the IP of a node it is not scheduled on.
Check this related question: kubernetes: cannot access NodePort from other machines
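A quick way to narrow this down is to check kube-proxy and the NodePort rules on one of the failing nodes; a rough sketch, assuming kube-proxy runs as a DaemonSet in kube-system as Kubespray deploys it (<kube-proxy-pod-on-node2> is a placeholder for the actual pod name):
kubectl get pods -n kube-system -o wide | grep kube-proxy   # is kube-proxy Running on node2 and node3?
kubectl logs -n kube-system <kube-proxy-pod-on-node2>       # look for iptables/ipvs sync errors
sudo iptables-save | grep 31520                             # run on node2: are the NodePort rules installed?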
Because you have only one pod, on 10.1.10.110.
Your curl is wrong: you didn't deploy a pod on the 111 and 112 nodes, which is why those endpoints aren't working. Just execute curl http://10.1.10.110:31520 on the other nodes and it will work.

Unable to access service from kubernetes master node

[root@kubemaster ~]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
pod1deployment-c8b9c74cb-hkxmq   1/1     Running   0          12s   192.168.90.1   kubeworker1   <none>           <none>
[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root@kubemaster ~]# kubectl get service -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   13m   <none>
pod1service   ClusterIP   10.101.174.159   <none>        80/TCP    16s   creator=sai
Curl on master node:
[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
Curl on worker node 1 is successful for the cluster IP (this is the node where the pod is running):
[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
Curl fails on the other worker node as well:
[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
I was facing the same issue, so this is what I did and it worked:
Brief: I am running 2 VMs for a 2-node cluster: 1 master node and 1 worker node. A Deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a service on the worker node, which then exposed that set of pods inside the cluster.
Issue: After deploying the service and doing kubectl get service, it provided me with the ClusterIP of that service and a port (BTW I used NodePort instead of ClusterIP when writing the service.yaml). But when curling that IP address and port, it just hung and then eventually timed out.
Solution: Then I looked at the hierarchy: I first need to contact the node the pod is running on, on the port assigned by the NodePort (i.e. the one between 30000-32767). So first I did kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the port (the one between 30000-32767), and curled it. My curl command was curl http://10.0.1.4:30669 and I was able to get the output.
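The same sequence as plain commands (the IP 10.0.1.4 and port 30669 are the values from this answer's own cluster; substitute your node's INTERNAL-IP and your service's NodePort):
kubectl get nodes -o wide     # note the INTERNAL-IP of the node running the pod
kubectl get service -o wide   # note the NodePort in the 30000-32767 range
curl http://10.0.1.4:30669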
First of all, you should always use the Service DNS name instead of cluster/dynamic IPs to access the application deployed. The Service DNS name would be <service-name>.<service-namespace>.svc.cluster.local; cluster.local is the default cluster domain, unless it has been changed.
Now, coming to the service accessibility: it may be a DNS issue. What you can do is check the kube-dns pod logs in the kube-system namespace. Also, try to curl from a standalone pod to see whether that works:
kubectl run --generator=run-pod/v1 bastion --image=busybox -- sleep 3600
kubectl exec -it bastion -- sh
# busybox ships neither bash nor curl; wget serves the same purpose here
wget -qO- pod1service.default.svc.cluster.local
If that does not work either, the further questions would be: where is the cluster running, and how was it created?

127.0.0.1:8001 refused to connect when using kubectl proxy to access the kubernetes dashboard

I deployed a cluster (neo4j) with kubeadm based on this guide. Now I have these pods:
NAME           READY   STATUS    RESTARTS   AGE
neo4j-core-0   1/1     Running   0          20h
neo4j-core-1   1/1     Running   0          20h
neo4j-core-2   1/1     Running   0          20h
and these services:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP             60d
neo4j        ClusterIP   None         <none>        7474/TCP,6362/TCP   20h
nginx        ClusterIP   None         <none>        80/TCP              25h
Then I installed the kubernetes dashboard:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
So when I do kubectl proxy to access the dashboard via the link below, it says 127.0.0.1 refused to connect:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
What should I do to access the dashboard?
I also created a sample user following this guide.
The Kubernetes dashboard fully relies on the apiserver, so "refused to connect" means there is an issue communicating with the apiserver. Please see https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above#kubectl-proxy
You can also try running
kubectl proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*'
and check whether it works on another interface (port 8002) rather than 127.0.0.1.
Quick fix: edit the kubernetes-dashboard yaml and change the service type from "ClusterIP" to "NodePort" if you are running on localhost,
then visit "https://master_ip:exposed_port".
I think this helps.
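A minimal sketch of that quick fix with kubectl patch, assuming the dashboard Service is named kubernetes-dashboard and lives in kube-system, as in the manifest installed above:
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get service kubernetes-dashboard   # read the exposed port from PORT(S), e.g. 443:3xxxx/TCP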

kubernetes master and worker nodes getting different IP ranges

I have set up a local kubernetes cluster using Vagrant, and have assigned 2 network interfaces, public and private, to each Vagrant box.
kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster   Ready    master   14h   v1.12.2   192.168.33.10   <none>        Ubuntu 16.04.5 LTS   4.4.0-137-generic   docker://17.3.2
kubenode2    Ready    <none>   14h   v1.12.2   10.0.2.15       <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
While initializing kubeadm on the master, I set the advertise address to the master's IP, 192.168.33.10.
My real issue is that I am not able to log in to any pod:
kubectl exec -ti web /bin/bash
error: unable to upgrade connection: pod does not exist
It's because Vagrant, in its default configuration, will have a NAT interface, usually eth0, and then any additional network interfaces, such as what is likely a host-only interface on 192.168.33.10.
You need to change the kubelet configuration (and possibly your CNI provider) to bind and advertise the IP address of kubenode2 that is in a subnet your machine can reach. Unidirectional traffic from kubenode2 can likely reach kubemaster over the NAT IP, but almost by definition your machine cannot reach anything behind the NAT IP, hence the connection failure when trying to reach the kubelet port.
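On kubeadm-based installs, a common way to do that is to pin the kubelet's --node-ip to the host-only interface on each node. A sketch, assuming kubenode2's host-only address is 192.168.33.11 (hypothetical; the question only shows its NAT IP 10.0.2.15):
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.33.11' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
kubectl get nodes -o wide   # INTERNAL-IP should now show the host-only address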
