minikube: not able to connect to a locally deployed nginx service - docker

I have installed minikube on my Ubuntu 16.04 machine and started a cluster, with the message
"Kubernetes is available at https://192.168.99.100:443"
Next, I deployed an nginx service with the following command:
> kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80 --expose
> kubectl.sh get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
my-nginx-2494149703-8jnh4 1/1 Running 0 13m 127.0.0.1
my-nginx-2494149703-q09be 1/1 Running 0 13m 127.0.0.1
> kubectl.sh get services -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes 10.0.0.1 <none> 443/TCP 14m <none>
my-nginx 10.0.0.83 <none> 80/TCP 13m run=my-nginx
> kubectl.sh get nodes -o wide
NAME STATUS AGE
127.0.0.1 Ready 16m
Questions:
1) Is node 127.0.0.1 my local development machine? This has confused me the most.
2) Is my following understanding correct: The cluster (nodes, kubernetes API server) has internal IP addresses in 10.0.0.x and their corresponding external IP addresses are 192.168.99.x. The 2 pods will then have IPs in the range like 10.0.1.x and 10.0.2.x ?
3) Why is the external IP for the services not there, not even for the kubernetes service? Isn't 192.168.99.100 the external IP here?
4) Most importantly, how do I connect to the nginx service from my laptop?

1) Is node 127.0.0.1 my local development machine? This has confused me the most.
When a node registers, you provide the IP or name to register with. By default, the node just registers 127.0.0.1. This refers to your VM running Linux, not your host machine.
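To confirm what that node name refers to, you can describe it; the Addresses section will show the minikube VM's address rather than your laptop's:
kubectl describe node 127.0.0.1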
2) Is my following understanding correct: The cluster (nodes,
kubernetes API server) has internal IP addresses in 10.0.0.x and their
corresponding external IP addresses are 192.168.99.x. The 2 pods will
then have IPs in the range like 10.0.1.x and 10.0.2.x ?
Yup, the 10.0.0.x network is your overlay network. The 192.168.99.x are your "public" addresses which are visible outside of the cluster.
3) Why is the external IP for the services not there, not even for the
kubernetes service? Isn't 192.168.99.100 the external IP here?
The external IP is typically used to ingress traffic via a specific IP. The kubernetes service is using the ClusterIP service type, which means it's only visible inside the cluster.
4) Most importantly, how do I connect to the nginx service from my
laptop?
The easiest way to view your nginx service is to make it type NodePort and redeploy the service. After that, describe the service to get the port that was assigned (the output when you create it will tell you as well). Then hit the IP of your VM with the auto-assigned NodePort.
e.g. http://192.168.99.100:30001
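A minimal sketch of that, assuming kubectl run created a deployment named my-nginx (on very old clusters it may be a replicationcontroller, in which case expose rc my-nginx instead):
kubectl delete service my-nginx        # drop the ClusterIP service created by --expose
kubectl expose deployment my-nginx --port=80 --type=NodePort
kubectl describe service my-nginx      # note the assigned NodePort
curl http://192.168.99.100:<assigned-node-port>
Minikube can also print the reachable URL for you with minikube service my-nginx --url.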

Related

Cannot access NodePort service outside Kubernetes cluster

I am on Windows and used Docker Desktop to deploy a local Kubernetes cluster using WSL 2. I tried to deploy a pod and expose it through a NodePort service so I could access it outside the cluster, but it is not working.
Here are the commands to reproduce the scenario:
kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment echoserver --type=NodePort --port=8080
Trying to open NODE_IP:EXPOSED_PORT in the browser or running the netcat command nc NODE_IP EXPOSED_PORT and trying to send a message (from either WSL or Windows) does not work.
NODE_IP is the internal IP of the Docker Desktop K8S node (obtained by seeing the INTERNAL-IP column on the command kubectl get nodes -o wide)
EXPOSED_PORT is the node port exposed by the service (obtained by seeing the field NodePort on command kubectl describe service echoserver)
Opening the URL in the browser should return the echoserver response page. However, you will get a generic error response saying the browser couldn't reach the URL.
Sending a message with the netcat command should be met with a 400 Bad Request response, as it will not be a properly formatted HTTP request. However, you will not get any response at all, or the TCP connection may not even be made in the first place.
Trying to communicate with the service and/or the pod from inside the cluster, for example, through another pod, works perfectly.
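For example, an in-cluster check like this should succeed (a throwaway busybox pod; the service name and port come from the expose command above):
kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://echoserver:8080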
Using the command kubectl port-forward deployment/echoserver 2311:8080 to port forward the deployment locally and then either accessing localhost:2311 in the browser or through netcat also work perfectly (in both WSL and Windows).
If you want to access it without using localhost, you should use <windows_host's_IP>:<NodePort>.
So having your deployment and service deployed:
$ kubectl get svc,deploy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/echoserver NodePort 10.105.169.2 <none> 8080:31570/TCP 4m12s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m3s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/echoserver 1/1 1 1 4m19s
You can access it either by using localhost:31570 or <windows_host's_IP>:<NodePort>.
In my case 192.168.0.29 is my Windows host's IP:
curl.exe 192.168.0.29:31570
CLIENT VALUES:
client_address=192.168.65.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.0.29:8080/

Container docker & Kubernetes apache tomcat 8.5.56 http status 404

Please, I'm running a .war application on Apache Tomcat 8.5.56 in a Docker container and everything works well, but when I deploy the container on Kubernetes I can't access my application's welcome page; I get the error message
HTTP Status 404 – Not Found
Type Status Report
Message The requested resource [/SmartClass] is not available
Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.56
Does anyone know how to solve it?
For the deployment I have just copied the .war file into
/opt/apache-tomcat/webapps/ and I have copied my server.xml file into /opt/apache-tomcat/conf/
It looks like the problem is related to the connection to the application.
Create a Service object that exposes your Tomcat deployment:
kubectl expose deployment tomcat-example --type=NodePort --name=example-service
Display information about the Service:
kubectl describe services example-service
The output is similar to this:
Name: example-service
Namespace: default
Labels: run=example
Annotations: <none>
Selector: run=example
Type: NodePort
IP: 10.32.0.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30000/TCP
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
Session Affinity: None
Events: <none>
Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 30000.
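If you just want the port by itself, a jsonpath query works as well:
kubectl get service example-service -o jsonpath='{.spec.ports[0].nodePort}'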
List the pods that are running the Tomcat application:
kubectl get pods --selector="run=example" --output=wide
The output is similar to this:
NAME READY STATUS ... IP NODE
tomcat-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1
tomcat-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2
Get the public IP address of one of your nodes that is running a Tomcat pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see the node address by running kubectl cluster-info. If you are using Google Compute Engine instances, you can use the gcloud compute instances list command to see the public addresses of your nodes.
On your chosen node, create a firewall rule that allows TCP traffic on your node port. For example, if your Service has a NodePort value of 30000, create a firewall rule that allows TCP traffic on port 30000. Different cloud providers offer different ways of configuring firewall rules.
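On Google Compute Engine, for instance, such a rule could look like this (the rule name is illustrative):
gcloud compute firewall-rules create tomcat-nodeport --allow=tcp:30000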
Use the node address and node port to access the Tomcat application:
curl http://<public-node-ip>:<node-port>
where <public-node-ip> is the public IP address of your node, and <node-port> is the NodePort value for your service.
Please adjust the above commands to the names and values you actually used.

Unable to access service from kubernetes master node

[root@kubemaster ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1deployment-c8b9c74cb-hkxmq 1/1 Running 0 12s 192.168.90.1 kubeworker1 <none> <none>
[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root@kubemaster ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m <none>
pod1service ClusterIP 10.101.174.159 <none> 80/TCP 16s creator=sai
Curl on master node:
[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
Curl on worker node 1 is successful for the cluster IP (this is the node where the pod is running):
[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
Curl fails on the other worker node as well:
[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
I was facing the same issue, so this is what I did, and it worked:
Brief: I am running 2 VMs for a 2-node cluster: 1 master node and 1 worker node. A deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a service which then exposed that set of pods inside the cluster.
Issue: After deploying the service and doing kubectl get service, it provided me with the ClusterIP of that service and a port (BTW, I used NodePort instead of ClusterIP when writing the service.yaml). But when curling that IP address and port, it just hung and then timed out after some time.
Solution: Then I looked at the hierarchy. I need to contact the node on which the pod is located, on the port given by the NodePort (i.e. the one between 30000-32767). So first I did kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the port, and curled it. My curl command was curl http://10.0.1.4:30669 and I was able to get the output, as sketched below.
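Condensed into commands, the steps above look like this (the IP and port are the ones from the example):
kubectl get nodes -o wide      # INTERNAL-IP of the node running the pod, e.g. 10.0.1.4
kubectl get service -o wide    # NodePort in the 30000-32767 range, e.g. 30669
curl http://10.0.1.4:30669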
First of all, you should always use the Service DNS name instead of cluster/dynamic IPs to access the deployed application. The Service DNS name would be <service-name>.<service-namespace>.svc.cluster.local; cluster.local is the default Kubernetes cluster domain, if not changed.
Now coming to service accessibility, it may be a DNS issue. Try checking the kube-dns pod logs in the kube-system namespace, and try to curl from a standalone pod to see if that works:
kubectl run --generator=run-pod/v1 bastion --image=busybox -- sleep 3600
kubectl exec -it bastion -- sh
# busybox ships wget rather than curl or bash:
wget -qO- http://pod1service.default.svc.cluster.local
If not, the further questions would be: where is the cluster running, and how was it created?
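To check the kube-dns logs mentioned earlier, the standard label selector should work (CoreDNS also carries the k8s-app=kube-dns label):
kubectl logs -n kube-system -l k8s-app=kube-dns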

kubernetes master and worker nodes getting different ip range

I have set up a local Kubernetes cluster using Vagrant, and have assigned 2 network interfaces to each Vagrant box, public and private.
kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster  Ready    master   14h   v1.12.2   192.168.33.10   <none>        Ubuntu 16.04.5 LTS   4.4.0-137-generic   docker://17.3.2
kubenode2   Ready    <none>   14h   v1.12.2   10.0.2.15       <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
While initializing kubeadm on the master, I set the advertise address (--apiserver-advertise-address) to the master's IP, 192.168.33.10.
My real issue is that I am not able to log in to any pod.
kubectl exec -ti web /bin/bash
error: unable to upgrade connection: pod does not exist
It's because Vagrant, in its default configuration, will have a NAT network interface, usually eth0, and then any additional network interfaces -- such as what is likely a host-only interface on 192.168.33.10.
You need to change the kubelet configuration -- and possibly your CNI provider -- to bind and advertise the IP address of kubenode2 that's in a subnet your machine can reach. Unidirectional traffic from kubenode2 can likely reach kubemaster over the NAT IP, but almost by definition your machine cannot reach anything behind the NAT IP, thus the connection failure when trying to reach the kubelet port.
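A common fix on Vagrant setups is to pin the kubelet to the host-only interface via its --node-ip flag; a sketch, with an illustrative address for kubenode2 (the drop-in file location varies by distro):
# /etc/default/kubelet (or /etc/sysconfig/kubelet on RHEL-based systems)
KUBELET_EXTRA_ARGS=--node-ip=192.168.33.11
# then restart the kubelet:
systemctl restart kubelet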

Kubernetes, Flannel and exposing services

I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config returns {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Hostnode IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-px6uy 1/1 Running 0 4m
[root@kubemaster ~]# kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S) AGE
kubernetes component=apiserver,provider=kubernetes <none> 172.16.0.1 443/TCP 31m
nginx run=nginx run=nginx 172.16.84.166 9000/TCP 3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME LABELS SELECTOR IP(S) PORT(S) AGE
nginx run=nginx run=nginx 9000/TCP 292y
I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) -- have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud provider provided load balancer, so NodePort is the only option I have, I think?
So the issue here was that there's a missing piece of the puzzle when you use NodePort.
I was also making a mistake with the commands.
Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx
Selector: run=nginx
Type: NodePort
IP: 172.16.92.8
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32033/TCP
Endpoints: 10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
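For reference, assigning the port up front looks something like this in the service manifest (the name, selector, and 30080 are illustrative; nodePort must be in the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # pinned instead of randomly assigned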
Yes, you would need to use NodePort.
When you hit the service, the destination port should be equal to the NodePort.
The destination IP for the service should be one the nodes consider local, e.g. the host IP of one of the nodes.
A load balancer helps because it would handle situations where your node went down but other nodes could still serve the service.
If you're running a cluster on bare metal, or at a provider that doesn't supply a load balancer, you can also define the port to be a hostPort on your pod.
You define your container and its ports:
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
    hostPort: 80
    name: http
This will bind the container to the host's networking and use the port defined.
The two obvious limitations here are:
1) You can have at most one of these pods on each host.
2) The IP is the host IP of the node it binds to.
This is essentially how the cloud provider load balancers work, in a way.
Using the new DaemonSet features, it's possible to define which node the pod will land on and thus fix the IP. However, that necessarily impairs the high-availability aspect, but at some point there is not much choice, as DNS load balancing will not avoid forwarding to dead nodes.
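A rough sketch of that idea with the current apps/v1 API (the names and the role=edge label are illustrative): a DaemonSet restricted by a nodeSelector, so the hostPort pod lands only on known machines with known IPs.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-edge
spec:
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      nodeSelector:
        role: edge            # only nodes carrying this label run the pod
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80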
