Kubernetes: Why does my NodePort service not get an external IP? - docker

Environment information:
Cluster detail: one master node and four slave nodes, all running CentOS Linux release 7.8.2003 (Core).
Kubernetes version: v1.18.0.
Zero to JupyterHub version: 0.9.0.
Helm version: v2.11.0
Recently, I tried to deploy "Zero to JupyterHub" on Kubernetes. My JupyterHub config file is shown below:
config.yaml
proxy:
  secretToken: "2fdeb3679d666277bdb1c93102a08f5b894774ba796e60af7957cb5677f40706"
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
    capacity: 10Gi
Note: I set the service type to NodePort because I do not have any cloud provider (the cluster is deployed on my lab servers), and I also tried nginx-ingress but it failed; that is why I am not using LoadBalancer.
But when I use this config file to install JupyterHub via Helm, I cannot access JupyterHub from a browser, even though all Pods are running. The pod details are below:
kubectl get pod --namespace jhub
NAME READY STATUS RESTARTS AGE
continuous-image-puller-8gxxk 1/1 Running 0 27m
continuous-image-puller-8tmdh 1/1 Running 0 27m
continuous-image-puller-lwdcx 1/1 Running 0 27m
continuous-image-puller-pszsr 1/1 Running 0 27m
hub-7b9cbbcf59-fbppq 1/1 Running 0 27m
proxy-6b699b54c8-2pxmb 1/1 Running 0 27m
user-scheduler-65f4cbb9b7-9vmfr 1/1 Running 0 27m
user-scheduler-65f4cbb9b7-lqfrh 1/1 Running 0 27m
and the services look like this:
kubectl get service --namespace jhub
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 10.10.55.78 <none> 8081/TCP 28m
proxy-api ClusterIP 10.10.27.133 <none> 8001/TCP 28m
proxy-public NodePort 10.10.97.11 <none> 443:30443/TCP,80:30080/TCP 28m
It seems to work well, right? (I guessed.) But in fact I cannot use the IP 10.10.97.11 to access the JupyterHub main page, and I did not get any external IP either.
So, my questions are:
Is there anything wrong with my config?
How do I get an external IP?
Finally, thank you so much for saving my day!

For a NodePort service you will not get an EXTERNAL-IP. You cannot use the CLUSTER-IP to access the service from outside the Kubernetes cluster, because the CLUSTER-IP is only reachable from inside the cluster, typically from another pod. To access the service from outside the cluster you need to use NodeIP:NodePort, where NodeIP is the IP address of one of your Kubernetes nodes.
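As an illustration, here is a minimal sketch of reaching the proxy-public service from outside the cluster, assuming the nodePorts 30080/30443 from the config above (the node address shown is hypothetical; substitute the INTERNAL-IP of any of your nodes):
kubectl get nodes -o wide            # read the INTERNAL-IP column
curl http://192.168.0.10:30080       # hypothetical node IP plus the http nodePort
curl -k https://192.168.0.10:30443   # https nodePort; -k because the certificate is likely self-signed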

Related

Kubernetes - how to solve secret exchange problems during pod creation

This question belongs to the problem
Deployment of Ingress-controller with Helm failed
but I also want to understand more about the background.
The basic situation is: pod creation fails with this error:
{"err":"Get "https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission": dial tcp 10.96.0.1:443: i/o timeout","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:232","time":"2022-02-22T10:47:49Z"}
I can see that the pod tries to get something from my Kubernetes cluster IP, which listens on 443:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 121d
default nextcloud-service ClusterIP 10.98.154.93 <none> 82/TCP 13d
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 120d
My questions are:
Can I somehow check with a command whether that URL path really exists?
When will this secret be created, and how can I observe this?
Can I configure the cluster to use another port for this, like 8080 (non-secure)?
When I check my secrets with the command kubectl get secrets -A I see the following results:
NAMESPACE NAME TYPE DATA AGE
default default-token-95b8q kubernetes.io/service-account-token 3 122d
ingress-nginx default-token-fbvmd kubernetes.io/service-account-token 3 21h
ingress-nginx ingress-nginx-admission-token-cdfbf kubernetes.io/service-account-token 3 11m
Can I somehow tell the deployment script (in values.yaml) the exact name of this secret?
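Regarding the first question (whether the URL path really exists): as a rough sketch, assuming you have kubectl access with sufficient permissions, you can ask the API server for exactly that path. Note that this does not explain the timeout itself, which is a network problem between the pod and 10.96.0.1:443.
kubectl get --raw /api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission
# prints the secret as JSON if the path exists, or a NotFound error otherwise
kubectl auth can-i get secrets -n ingress-nginx --as=system:serviceaccount:ingress-nginx:ingress-nginx-admission
# checks whether the admission job's service account may read secrets (the account name is an assumption based on the error message)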

Having problems accessing a deployed application in a multi-node Kubernetes environment in VirtualBox

I have created a multi-node Kubernetes environment, and my node details are:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
16-node-121 Ready <none> 32m v1.14.1 192.168.0.121 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://18.9.2
master-16-120 Ready master 47m v1.14.1 192.168.0.120 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://18.9.2
I created and exposed the service using the following command:
$ kubectl expose deployment hello-world --port=80 --target-port=8080
service/hello-world exposed
The service is created and exposed. Its details are:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world ClusterIP 10.105.7.156 <none> 80/TCP 33m
Unfortunately, when I try to access the service with curl I get a timeout error.
The full service description is as follows:
master-16-120#master-16-120:~$ kubectl describe service hello-world
Name: hello-world
Namespace: default
Labels: run=hello-world
Annotations: <none>
Selector: run=hello-world
Type: ClusterIP
IP: 10.105.7.156
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 192.168.1.2:8080
Session Affinity: None
Events: <none>
curl http://10.105.7.156:80
curl: (7) Failed to connect to 10.105.7.156 port 80: Connection timed out
Here I'm using Calico for the cluster network, which I installed as follows:
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
My Pod networking specification is:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
At last I got the solution. Thanks to Daniel's comment, which helped me reach it.
I changed my Kubernetes pod network CIDR and the Calico configuration as follows:
--pod-network-cidr=10.10.0.0/16
And I also configured the hosts file (/etc/hosts) on the master, master-16-120:
192.168.0.120 master-16-120
192.168.0.121 16-node-121
And the same on the node, 16-node-121 (/etc/hosts):
192.168.0.120 master-16-120
192.168.0.121 16-node-121
Now my Kubernetes cluster is ready to go.
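For completeness, here is a rough sketch of keeping the pod CIDR and Calico in sync, assuming the calico.yaml manifest uses the CALICO_IPV4POOL_CIDR environment variable (check your Calico version; the variable name may differ):
sudo kubeadm init --pod-network-cidr=10.10.0.0/16   # pod CIDR must not overlap the host network 192.168.0.x
# in calico.yaml, set the pool to the same range before applying:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.10.0.0/16"
kubectl apply -f calico.yaml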

Routing between Kubernetes cluster and Docker container in the VM

I have set up a Kubernetes cluster in a VM (Ubuntu 18.04.1 LTS) on the Azure cloud using preconfigured scripts.
A MongoDB Docker container is running alongside the K8s cluster. My aim is to connect MongoDB to a CMS container which is running inside K8s.
Docker containers:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3883f7b397cf mongo "docker-entrypoint.s…" 5 hours ago Up 5 hours 0.0.0.0:27017->27017/tcp mongodb
299239d90cbb mirantis/kubeadm-dind-cluster:v1.12 "/sbin/dind_init sys…" 27 hours ago Up 27 hours 8080/tcp kube-node-2
34c8bd5fad2e mirantis/kubeadm-dind-cluster:v1.12 "/sbin/dind_init sys…" 27 hours ago Up 27 hours 8080/tcp kube-node-1
15a2d6521e6e mirantis/kubeadm-dind-cluster:v1.12 "/sbin/dind_init sys…" 27 hours ago Up 27 hours 127.0.0.1:32768->8080/tcp kube-master
Kubernetes services:
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h <none>
mycms LoadBalancer 10.97.53.114 <pending> 80:31664/TCP 112s app=mycms,tier=frontend
Kubernetes pods:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mycms-dc4978ffc-khvj2 1/1 Running 0 4m8s 10.244.2.13 kube-node-1 <none>
The MongoDB container's IP address is 172.17.0.2.
The Kubernetes master container's IP address is 10.192.0.2.
The Kubernetes node 1 container's IP address is 10.192.0.3.
The Kubernetes node 2 container's IP address is 10.192.0.4.
The CMS pod is running on 10.244.2.13, which is inside the K8s node container.
For testing, I installed mongo-client on the host and tested the connection, which works well.
But the CMS cannot reach the MongoDB container (I am passing the Mongo connection string to the pod in an environment variable).
The CMS pod's log:
MongoError: failed to connect to server [172.17.0.2:27017] on first connect [MongoError: connect EHOSTUNREACH 172.17.0.2:27017]
How do I route between the MongoDB container and the CMS container? Is anything wrong or missing in my approach?
Please let me know if you need further information. Thanks!
You need to use the IP address of the host where Docker is installed, not the internal IP address of the MongoDB container, to connect to MongoDB from the Kubernetes cluster or from any other host. According to the output of your docker ps -a, you have published port 27017 for the MongoDB container, therefore <hostIP>:27017 should be used, not 172.17.0.2:27017.
By default in Kubernetes, there are no restrictions on connecting to addresses outside the cluster.
Also, you may have firewall rules in Azure that forbid connections between hosts.
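As an illustration only, here is a sketch of how the connection string could be passed to the CMS pod, assuming the Docker host's address is 10.0.0.4 and the variable name MONGO_URL is whatever your CMS image actually expects (both are assumptions, not taken from the question):
# fragment of the CMS Deployment's pod spec
env:
  - name: MONGO_URL                          # hypothetical variable name
    value: "mongodb://10.0.0.4:27017/cms"    # host IP and published port, not the container IP 172.17.0.2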

Where is kube-apiserver located

Base question: When I try to use kube-apiserver on my master node, I get a "command not found" error. How can I install/configure kube-apiserver? Any link to an example will help.
$ kube-apiserver --enable-admission-plugins DefaultStorageClass
-bash: kube-apiserver: command not found
Details: I am new to Kubernetes and Docker and was trying to create a StatefulSet with volumeClaimTemplates. My problem is that the automatic PVs are not created and I get this message in the PVC events: "persistentvolume-controller waiting for a volume to be created". I am not sure if I need to define a DefaultStorageClass, and therefore needed kube-apiserver to define it.
Name: nfs
Namespace: default
StorageClass: example-nfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=example.com/nfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m (x2401 over 10h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Here is the kubectl get pvc result:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Pending example-nfs 10h
And the StorageClass description:
$ kubectl describe storageclass example-nfs
Name: example-nfs
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/nfs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
How can I troubleshoot this issue (e.g. logs for why the storage was not created)?
You are asking two different questions here, one about kube-apiserver configuration, one about troubleshooting your StorageClass.
Here's an answer for your first question:
kube-apiserver is running as a Docker container on your master node. Therefore, the binary is within the container, not on your host system. It is started by the master's kubelet from a file located at /etc/kubernetes/manifests. The kubelet watches this directory and will start any Pod defined there as a "static pod".
To configure kube-apiserver command line arguments you need to modify /etc/kubernetes/manifests/kube-apiserver.yaml on your master.
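For example, a minimal sketch of adding the flag from the question, assuming a kubeadm-style setup (the exact set of existing flags in your manifest will differ):
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# under spec.containers[0].command, add or extend:
#   - --enable-admission-plugins=NodeRestriction,DefaultStorageClass
# after saving, the kubelet notices the change and recreates the kube-apiserver static pod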
I'll refer to the question regarding the location of the api-server.
Basic answer (specific to the question title):
The kube-apiserver is located on the master node (known as the control plane).
It can be executed:
1 ) Via the host's init system (like systemd).
2 ) As a pod (I'll explain below).
In both cases it will be located on the control plane:
If it's running under systemd you can run systemctl status api-server to see the path to the configuration (drop-in) file.
If it is running as a pod you can view it in the kube-system namespace with all the other control plane components (plus kube-proxy and maybe a network solution like Weave, as below):
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-lpdlc 1/1 Running 1 2d22h
coredns-f9fd979d6-vcs7g 1/1 Running 1 2d22h
etcd-my-master 1/1 Running 1 2d22h
kube-apiserver-my-master 1/1 Running 1 2d22h #<----Here
kube-controller-manager-my-master 1/1 Running 1 2d22h
kube-proxy-kh2lc 1/1 Running 1 2d22h
kube-scheduler-my-master 1/1 Running 1 2d22h
weave-net-59r5b 2/2 Running 3 2d22h
You can run:
kubectl describe pod/kube-apiserver-my-master -n kube-system
in order to get more details about the pod.
A bit more advanced answer:
(regarding the location of /etc/kubernetes/manifests)
Let's say we have no idea where to find the relevant path for the kube-apiserver config file.
But we need to remember two important things:
1 ) The kube-apiserver runs on the master node.
2 ) The kubelet isn't running as a pod, and when the control plane components (plus kube-proxy) are executed as static pods, it is the kubelet on the master node that starts them.
So we can start our journey towards the manifests path by investigating the kubelet logs.
If the kubelet has been running for a long time the log will be very large and we'll need to dump it somewhere and go to the beginning, or, if the kubelet was started 5 minutes ago, we can run:
sudo journalctl -u kubelet --since -5m >> kubelet_5_minutes.log
A quick search for "api-server" will bring us to the two lines below, where the path of the manifests is mentioned:
my-master kubelet[71..]: 00:03:21 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
my-master kubelet[71..]: 00:03:21 kubelet.go:273] Watching apiserver
We can also see that the kubelet is trying to create the kube-apiserver pod under the my-master node and inside the kube-system namespace:
my-master kubelet[71..]: 00:03:29.05 kubelet.go:1576] ..
Creating a mirror pod for "kube-apiserver-my-master_kube-system
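As an alternative shortcut (a sketch, assuming a kubeadm-provisioned node where the kubelet configuration lives at /var/lib/kubelet/config.yaml), the manifests path can also be read directly from the kubelet configuration:
grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests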
To make the storage class "example-nfs" the default, you need to run the command below:
kubectl patch storageclass example-nfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

Kubernetes: cannot access Grafana and Prometheus from Google Cloud Platform

I have followed this link to install Grafana/Prometheus on Google Cloud Kubernetes. I believe it is deployed successfully; please find the following output for reference.
Services created successfully:
kubectl --namespace=monitoring get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.27.249.8 <none> 3000:32703/TCP 1h
prometheus NodePort 10.27.249.233 <none> 9090:31939/TCP 1h
Namespace created successfully:
kubectl get namespaces
NAME STATUS AGE
default Active 19h
kube-public Active 19h
kube-system Active 19h
monitoring Active 1h
Pods response:
kubectl --namespace=monitoring get pods
NAME READY STATUS RESTARTS AGE
grafana-1323121529-8614m 1/1 Running 0 1h
node-exporter-lw8cr 0/1 CrashLoopBackOff 17 1h
node-exporter-nv85s 0/1 CrashLoopBackOff 17 1h
node-exporter-r2rfl 0/1 CrashLoopBackOff 17 1h
prometheus-3259208887-x2zjc 1/1 Running 0 1h
Now I am trying to expose an external IP for Grafana and Prometheus, but I can't; I keep getting the following error:
kubectl --namespace=monitoring expose deployment/prometheus --type=LoadBalancer
Error from server (AlreadyExists): services "prometheus" already exists
Edited
kubectl -n monitoring edit service prometheus
Edit cancelled, no changes made.
You have already deployed the Prometheus service manifest in the monitoring namespace, and now you are trying to create another service with the same name in the same namespace. That is not allowed, as two Services cannot co-exist in the same namespace with the same name.
Solutions for your problem:
I would use the following command to edit the already deployed Service:
kubectl -n monitoring edit service prometheus
Then your favourite text editor will pop up; you just need to update the service type to
type: LoadBalancer
and save. Your service will then be updated in place.
Edited
If you are not able to use the above command, then follow these steps:
Edit the Prometheus service manifest file and update it with type: LoadBalancer, as shown in the sketch below.
Then apply it: kubectl apply -f prometheus-service.yaml
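Here is a sketch of what the relevant part of prometheus-service.yaml could look like after that change; the port comes from the service listing above, while the selector labels are assumptions and must match your actual Prometheus deployment:
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  type: LoadBalancer        # changed from NodePort
  ports:
    - port: 9090
      targetPort: 9090
  selector:
    app: prometheus         # hypothetical label; match your Prometheus pods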
