How to deploy multiple Jenkins jobs on the same node? - jenkins

I have deployed a Jenkins master in a GKE cluster, with a node pool called jenkins that autoscales up to 2 nodes. When I run a job in Jenkins it always uses that node pool, which is what I want. The problem I have is that when I run jobs, Jenkins uses one node per job instead of packing two or more jobs onto the same node: if I run kubectl describe node <nodename>, I can see only one Jenkins agent deployed on each Kubernetes node.
How can I fix this so that one node can run more than one Jenkins agent at the same time? Right now I am underusing my Jenkins nodes, because a single job only uses about half of a node's resources.
Example of kubectl describe node on a Jenkins node (you can see that it only has 1 Jenkins pod):
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
jenkins atlas-test-atlas-full-tests-2-mrc47-2h7fb-t1vdn 850m (21%) 1250m (31%) 1536Mi (54%) 2560Mi (91%) 118s
kube-system fluentbit-gke-f296j 100m (2%) 0 (0%) 200Mi (7%) 500Mi (17%) 5m8s
kube-system gke-metadata-server-nc58q 100m (2%) 100m (2%) 100Mi (3%) 100Mi (3%) 5m7s
kube-system gke-metrics-agent-q6xl4 3m (0%) 0 (0%) 50Mi (1%) 50Mi (1%) 5m8s
kube-system kube-proxy-gke-develop-jenkins-eb1faad2-9m00 100m (2%) 0 (0%) 0 (0%) 0 (0%) 5m7s
kube-system netd-s6v8s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m7s
Thanks in advance

You can use the Kubernetes plugin for Jenkins; with it, a Kubernetes Pod is created for each agent that is started:
The Kubernetes plugin allocates Jenkins agents in Kubernetes pods.
Within these pods, there is always one special container jnlp that is
running the Jenkins agent. Other containers can run arbitrary
processes of your choosing, and it is possible to run commands
dynamically in any container in the agent pod.
Example:
pod.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8.1-jdk-8
    command:
    - sleep
    args:
    - 99d
  - name: golang
    image: golang:1.16.5
    command:
    - sleep
    args:
    - 99d
Jenkinsfile
podTemplate(yaml: readTrusted('pod.yaml')) {
    node(POD_LABEL) {
        // ...
    }
}
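Note also that the Kubernetes scheduler packs agent pods onto nodes based on their resource requests: in your describe output the agent requests 850m CPU and 1536Mi memory (54% of the node), so a second agent simply doesn't fit. A sketch of sizing the agent container in pod.yaml so that two agents fit per node (the values are illustrative assumptions; tune them to your node's allocatable resources):
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp            # overrides the default Jenkins agent container
    resources:
      requests:
        cpu: 500m         # two such agents fit on a node with >1 CPU allocatable
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 1536Mi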

Related

Kubernetes: Why does my NodePort not get an external IP?

Environment information:
Computer detail: One master node and four slave nodes. All are CentOS Linux release 7.8.2003 (Core).
Kubernetes version: v1.18.0.
Zero to JupyterHub version: 0.9.0.
Helm version: v2.11.0
Recently, I tried to deploy "Zero to JupyterHub" on Kubernetes. My JupyterHub config file is below:
config.yaml
proxy:
  secretToken: "2fdeb3679d666277bdb1c93102a08f5b894774ba796e60af7957cb5677f40706"
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
    capacity: 10Gi
Note: I set the service type to NodePort because I don't have any cloud provider (this is deployed on my lab's server cluster). I also tried nginx-ingress and it failed, which is why I am not using LoadBalancer.
But when I use this config file to install JupyterHub via Helm, I cannot access JupyterHub from a browser, even though all the pods are running. The pod details are below:
kubectl get pod --namespace jhub
NAME READY STATUS RESTARTS AGE
continuous-image-puller-8gxxk 1/1 Running 0 27m
continuous-image-puller-8tmdh 1/1 Running 0 27m
continuous-image-puller-lwdcx 1/1 Running 0 27m
continuous-image-puller-pszsr 1/1 Running 0 27m
hub-7b9cbbcf59-fbppq 1/1 Running 0 27m
proxy-6b699b54c8-2pxmb 1/1 Running 0 27m
user-scheduler-65f4cbb9b7-9vmfr 1/1 Running 0 27m
user-scheduler-65f4cbb9b7-lqfrh 1/1 Running 0 27m
and its services like this:
kubectl get service --namespace jhub
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 10.10.55.78 <none> 8081/TCP 28m
proxy-api ClusterIP 10.10.27.133 <none> 8001/TCP 28m
proxy-public NodePort 10.10.97.11 <none> 443:30443/TCP,80:30080/TCP 28m
It seems to work well, right? (I guessed.) But the fact is that I cannot use the IP 10.10.97.11 to access the JupyterHub main page, and I did not get any external IP either.
So, my questions are:
Is there anything wrong with my config?
How do I get an external IP?
Finally, thank you for saving my day!
For a NodePort service you will not get an EXTERNAL-IP. You cannot use the CLUSTER-IP to access it from outside the Kubernetes cluster, because the CLUSTER-IP only works from inside the cluster, typically from another pod. To access it from outside the cluster you need to use NodeIP:NodePort, where NodeIP is the IP address of one of your Kubernetes nodes.
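For example, with the services above (the node IP is a placeholder):
# list node addresses (INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide
# proxy-public maps port 80 to NodePort 30080, so:
curl http://<node-ip>:30080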

Is there a way to syslog from container to underlying k8s node?

I want to syslog from a container to the host node, targeting the fluentd instance (127.0.0.1:5140) that runs on the node: https://docs.fluentd.org/input/syslog
For example, I want to send syslog output from the hello-server container to the fluentd running on the node which hosts all of these namespaces (127.0.0.1:5140).
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-server-7d8589854c-r4xfr 1/1 Running 0 21h
kube-system event-exporter-v0.2.4-5f7d5d7dd4-lgzg5 2/2 Running 0 6d6h
kube-system fluentd-gcp-scaler-7b895cbc89-bnb4z 1/1 Running 0 6d6h
kube-system fluentd-gcp-v3.2.0-4qcbs 2/2 Running 0 6d6h
kube-system fluentd-gcp-v3.2.0-jxnbn 2/2 Running 0 6d6h
kube-system fluentd-gcp-v3.2.0-k58x6 2/2 Running 0 6d6h
kube-system heapster-v1.6.0-beta.1-7778b45899-t8rz9 3/3 Running 0 6d6h
kube-system kube-dns-autoscaler-76fcd5f658-7hkgn 1/1 Running 0 6d6h
kube-system kube-dns-b46cc9485-279ws 4/4 Running 0 6d6h
kube-system kube-dns-b46cc9485-fbrm2 4/4 Running 0 6d6h
kube-system kube-proxy-gke-test-default-pool-040c0485-7zzj 1/1 Running 0 6d6h
kube-system kube-proxy-gke-test-default-pool-040c0485-ln02 1/1 Running 0 6d6h
kube-system kube-proxy-gke-test-default-pool-040c0485-w6kq 1/1 Running 0 6d6h
kube-system l7-default-backend-6f8697844f-bxn4z 1/1 Running 0 6d6h
kube-system metrics-server-v0.3.1-5b4d6d8d98-k7tz9 2/2 Running 0 6d6h
kube-system prometheus-to-sd-2g7jc 1/1 Running 0 6d6h
kube-system prometheus-to-sd-dck2n 1/1 Running 0 6d6h
kube-system prometheus-to-sd-hsc69 1/1 Running 0 6d6h
For some reason k8s does not let us use the built-in syslog driver (docker run --log-driver syslog).
Also, k8s does not let me connect to the underlying host using --network="host".
Has anyone tried anything similar? Maybe it would be easier to syslog remotely, rather than trying to use the syslog running on every underlying node?
What you are actually looking at is the Stackdriver Logging Agent. According to the documentation at https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/#prerequisites:
If you’re using GKE and Stackdriver Logging is enabled in your cluster, you cannot change its configuration, because it’s managed and supported by GKE. However, you can disable the default integration and deploy your own.
The documentation then gives an example of running your own fluentd DaemonSet with a custom ConfigMap. You would need to run your own fluentd so that you can configure a syslog input per https://docs.fluentd.org/input/syslog.
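A minimal syslog source for that custom ConfigMap could look like this sketch (the tag is an arbitrary name of mine):
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  tag node.syslog
</source>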
Then, since fluentd is running as a DaemonSet, you would configure a Service to expose it to other pods and allow them to connect to it. If you are running the official upstream DaemonSet from https://github.com/fluent/fluentd-kubernetes-daemonset then the service might look like:
apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    k8s-app: fluentd-logging
  ports:
  - protocol: UDP
    port: 5140
    targetPort: 5140
Then your applications can log to fluentd.kube-system:5140 (see using DNS at https://kubernetes.io/docs/concepts/services-networking/service/#dns).
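For example, assuming a Python application, a minimal sketch using only the standard library (UDP, matching the Service above):
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("hello-server")
logger.setLevel(logging.INFO)
# resolves through cluster DNS; SysLogHandler uses UDP by default
logger.addHandler(SysLogHandler(address=("fluentd.kube-system", 5140)))
logger.info("hello from the pod")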

Initializing Tiller for Helm with Kubeadm - Kubernetes

I'm using Kubeadm to create a cluster of 3 nodes
One Master
Two Workers
I'm using Weave as the pod network
The status of my cluster is this:
NAME STATUS ROLES AGE VERSION
darthvader Ready <none> 56m v1.12.3
jarjar Ready master 60m v1.12.3
palpatine Ready <none> 55m v1.12.3
Then I tried to initialize Helm and Tiller in my cluster:
helm init
The result was this:
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
And the status of my pods is this:
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-8q6j7 1/1 Running 0 54m
coredns-576cbf47c7-kkvd8 1/1 Running 0 54m
etcd-jarjar 1/1 Running 0 54m
kube-apiserver-jarjar 1/1 Running 0 54m
kube-controller-manager-jarjar 1/1 Running 0 53m
kube-proxy-2lwgd 1/1 Running 0 49m
kube-proxy-jxwqq 1/1 Running 0 54m
kube-proxy-mv7vh 1/1 Running 0 50m
kube-scheduler-jarjar 1/1 Running 0 54m
tiller-deploy-845cffcd48-bqnht 0/1 ContainerCreating 0 12m
weave-net-5h5hw 2/2 Running 0 51m
weave-net-jv68s 2/2 Running 0 50m
weave-net-vsg2f 2/2 Running 0 49m
The problem is that Tiller is stuck in the ContainerCreating state.
I ran
kubectl describe pod tiller-deploy -n kube-system
to check the status of Tiller, and I found the following errors:
Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Pod sandbox changed, it will be killed and re-created.
How can I create the tiller-deploy pod successfully? I don't understand why the pod sandbox is failing.
Maybe the problem is in the way you deployed Tiller. I just recreated this and had no issues using Weave and Compute Engine instances on GCP.
You should retry with a different method of installing Helm, as maybe there was some issue (you did not provide details on how you installed it).
Reset Helm and delete the Tiller pod:
helm reset --force (if Tiller persists, check the name of the Tiller ReplicaSet with kubectl get all --all-namespaces and delete it with kubectl delete rs/<name>, as in the sketch below)
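As a concrete sketch of that cleanup (the ReplicaSet name is a placeholder):
helm reset --force
# if the Tiller ReplicaSet survives the reset:
kubectl get all --all-namespaces | grep tiller
kubectl delete rs/<tiller-replicaset-name> -n kube-system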
Now try deploying Helm and Tiller using a different method, for example by running the install script:
As explained here.
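If that link is gone: for Helm v2 the installer script was fetched roughly like this (URL as I remember it from the v2 docs, treat it as an assumption):
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh    # installs the helm client; then run: helm init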
You can also run Helm without Tiller.
It looks like you are running into this.
Most likely your node cannot pull the container image because of a network connectivity problem: an image like gcr.io/kubernetes-helm/tiller:v2.3.1, or the pause container gcr.io/google_containers/pause (unlikely if your other pods are running). You can try logging into your nodes (darthvader, palpatine) and manually debugging with:
$ docker pull gcr.io/kubernetes-helm/tiller:v2.3.1   # use the version from your tiller pod spec or deployment (tiller-deploy)
$ docker pull gcr.io/google_containers/pause

Getting ImageInspectError when trying to run an OpenFaas function on Raspberry Pi 3B+

I'm trying to deploy a function with the OpenFaas project on a Kubernetes cluster running on 2 Raspberry Pi 3B+.
Unfortunately, the pod that should handle the function goes into the ImageInspectError state...
I tried to run the function, which is contained in a Docker image, directly with Docker, and everything works fine.
I opened an issue on the OpenFaas GitHub and the maintainer told me to ask the Kubernetes community directly to get some clues.
My first question is: what does ImageInspectError mean, and where does it come from?
And here is all the information I have:
Expected Behaviour
Pod should run.
Current Behaviour
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-masternode 1/1 Running 1 1d
kube-system kube-apiserver-masternode 1/1 Running 1 1d
kube-system kube-controller-manager-masternode 1/1 Running 1 1d
kube-system kube-dns-7f9b64f644-x42sr 3/3 Running 3 1d
kube-system kube-proxy-wrp6f 1/1 Running 1 1d
kube-system kube-proxy-x6pvq 1/1 Running 1 1d
kube-system kube-scheduler-masternode 1/1 Running 1 1d
kube-system weave-net-4995q 2/2 Running 3 1d
kube-system weave-net-5g7pd 2/2 Running 3 1d
openfaas-fn figlet-7f556fcd87-wrtf4 1/1 Running 0 4h
openfaas-fn testfaceraspi-7f6fcb5897-rs4cq 0/1 ImageInspectError 0 2h
openfaas alertmanager-66b98dd4d4-kcsq4 1/1 Running 1 1d
openfaas faas-netesd-5b5d6d5648-mqftl 1/1 Running 1 1d
openfaas gateway-846f8b5686-724q8 1/1 Running 2 1d
openfaas nats-86955fb749-7vsbm 1/1 Running 1 1d
openfaas prometheus-6ffc57bb8f-fpk6r 1/1 Running 1 1d
openfaas queue-worker-567bcf4d47-ngsgv 1/1 Running 2 1d
The testfaceraspi pod doesn't run.
Logs from the pod:
$ kubectl logs testfaceraspi-7f6fcb5897-rs4cq -n openfaas-fn
Error from server (BadRequest): container "testfaceraspi" in pod "testfaceraspi-7f6fcb5897-rs4cq" is waiting to start: ImageInspectError
Pod describe:
$ kubectl describe pod -n openfaas-fn testfaceraspi-7f6fcb5897-rs4cq
Name: testfaceraspi-7f6fcb5897-rs4cq
Namespace: openfaas-fn
Node: workernode/10.192.79.198
Start Time: Thu, 12 Jul 2018 11:39:05 +0200
Labels: faas_function=testfaceraspi
pod-template-hash=3929761453
Annotations: prometheus.io.scrape=false
Status: Pending
IP: 10.40.0.16
Controlled By: ReplicaSet/testfaceraspi-7f6fcb5897
Containers:
testfaceraspi:
Container ID:
Image: gallouche/testfaceraspi
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImageInspectError
Ready: False
Restart Count: 0
Liveness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qhnn (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-5qhnn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qhnn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning DNSConfigForming 2m (x1019 over 3h) kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
And the event logs :
$ kubectl get events --sort-by=.metadata.creationTimestamp -n openfaas-fn
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
14m 1h 347 testfaceraspi-7f6fcb5897-rs4cq.1540db41e89d4c52 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
4m 1h 75 figlet-7f556fcd87-wrtf4.1540db421002b49e Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9ed8b91865 Pod Normal Scheduled default-scheduler Successfully assigned testfaceraspi-7f6fcb5897-d6z78 to workernode
10m 10m 1 testfaceraspi-7f6fcb5897.1540df9ed6eee11f ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: testfaceraspi-7f6fcb5897-d6z78
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9eef3ef504 Pod Normal SuccessfulMountVolume kubelet, workernode MountVolume.SetUp succeeded for volume "default-token-5qhnn"
4m 10m 27 testfaceraspi-7f6fcb5897-d6z78.1540df9eef5445c0 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
8m 9m 8 testfaceraspi-7f6fcb5897-d6z78.1540df9f670d0dad Pod spec.containers{testfaceraspi} Warning InspectFailed kubelet, workernode Failed to inspect image "gallouche/testfaceraspi": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2/l: invalid argument
9m 9m 7 testfaceraspi-7f6fcb5897-d6z78.1540df9f670fcf3e Pod spec.containers{testfaceraspi} Warning Failed kubelet, workernode Error: ImageInspectError
Steps to Reproduce (for bugs)
Deploy OpenFaas on a 2 node k8s cluster
Create function with faas new testfaceraspi --lang python3-armhf
Add the following code in handler.py:
import json

def handle(req):
    jsonl = json.loads(req)
    return ("Found " + str(jsonl["nbFaces"]) + " faces in OpenFaas Function on raspi !")
Change the gateway and image in the .yml:
provider:
  name: faas
  gateway: http://127.0.0.1:31112
functions:
  testfaceraspi:
    lang: python3-armhf
    handler: ./testfaceraspi
    image: gallouche/testfaceraspi
Run faas build -f testfacepi.yml
Log in to DockerHub with docker login
Run faas push -f testfacepi.yml
Run faas deploy -f testfacepi.yml
Your Environment
FaaS-CLI version ( Full output from: faas-cli version ):
Commit: 3995a8197f1df1ecdf524844477cffa04e4690ea
Version: 0.6.11
Docker version ( Full output from: docker version ):
Client:
Version: 18.04.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:25:24 2018
OS/Arch: linux/arm
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.04.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:21:25 2018
OS/Arch: linux/arm
Experimental: false
Operating System and version (e.g. Linux, Windows, MacOS):
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
Thanks in advance, and tell me if you need any more information.
Gallouche
I've seen this error when the Docker version wasn't supported by Kubernetes. As of Kubernetes 1.11, the supported Docker versions are 1.11.2 to 1.13.1 and 17.03.x; your environment above shows Docker 18.04.0-ce, which falls outside that range.
I couldn't test the solution with OpenFaaS.
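One quick way to check what each node is actually running (the CONTAINER-RUNTIME column shows something like docker://18.4.0):
kubectl get nodes -o wide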

Where is kube-apiserver located

Base question: When I try to use kube-apiserver on my master node, I get a "command not found" error. How can I install/configure kube-apiserver? Any link to an example will help.
$ kube-apiserver --enable-admission-plugins DefaultStorageClass
-bash: kube-apiserver: command not found
Details: I am new to Kubernetes and Docker and was trying to create a StatefulSet with volumeClaimTemplates. My problem is that the automatic PVs are not created and I get this message in the PVC events: "persistentvolume-controller waiting for a volume to be created". I am not sure if I need to define a DefaultStorageClass, which is why I needed kube-apiserver. Here is the description of my PVC:
Name: nfs
Namespace: default
StorageClass: example-nfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=example.com/nfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m (x2401 over 10h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Here is get pvc result:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Pending example-nfs 10h
And get storageclass:
$ kubectl describe storageclass example-nfs
Name: example-nfs
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/nfs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
How can I troubleshoot this issue (e.g. logs for why the storage was not created)?
You are asking two different questions here, one about kube-apiserver configuration, one about troubleshooting your StorageClass.
Here's an answer for your first question:
kube-apiserver is running as a Docker container on your master node, so the binary is inside the container, not on your host system. It is started by the master's kubelet from a manifest file located in /etc/kubernetes/manifests. The kubelet watches this directory and will start any pod defined there as a "static pod".
To configure kube-apiserver command line arguments you need to modify /etc/kubernetes/manifests/kube-apiserver.yaml on your master.
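For example, the flag from your question would go into the command list of that manifest. A trimmed sketch (every other existing flag stays as it is):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=DefaultStorageClass
    # ...the rest of the existing flags...
The kubelet notices the file change and restarts the static pod automatically.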
I'll refer to the question regarding the location of the api-server.
Basic answer (specific to the question title):
The kube-apiserver runs on the master node (known as the control plane).
It can be executed:
1 ) Via the host's init system (like systemd).
2 ) As a pod (I'll explain below).
In both cases it will be located on the control plane:
If it's running under systemd you can run systemctl status kube-apiserver to see the path to the configuration (drop-in) file.
If it is running as a pod you can view it in the kube-system namespace with all the other control plane components (plus kube-proxy and perhaps a network solution like Weave, shown below):
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-lpdlc 1/1 Running 1 2d22h
coredns-f9fd979d6-vcs7g 1/1 Running 1 2d22h
etcd-my-master 1/1 Running 1 2d22h
kube-apiserver-my-master 1/1 Running 1 2d22h #<----Here
kube-controller-manager-my-master 1/1 Running 1 2d22h
kube-proxy-kh2lc 1/1 Running 1 2d22h
kube-scheduler-my-master 1/1 Running 1 2d22h
weave-net-59r5b 2/2 Running 3 2d22h
You can run:
kubectl describe pod/kube-apiserver-my-master -n kube-system
In order to get more details regarding the pod.
A bit more advanced answer:
(regarding the location of /etc/kubernetes/manifests)
Let's say we have no idea where to find the relevant path for the kube-apiserver config file.
But we need to remember two important things:
1 ) The kube-api-server is running on the master node.
2 ) The kubelet itself isn't running as a pod; when the control plane components (plus kube-proxy) are executed as static pods, it is the kubelet on the master node that starts them.
So we can start our search for the manifests path by investigating the kubelet logs.
If the kubelet has been running for a long time its log will be very large, and we'll need to dump it somewhere and go to the beginning; but if the kubelet was started 5 minutes ago we can run:
sudo journalctl -u kubelet --since -5m >> kubelet_5_minutes.log
A quick search for "api-server" will bring us to the two lines below, where the path of the manifests is mentioned:
my-master kubelet[71..]: 00:03:21 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
my-master kubelet[71..]: 00:03:21 kubelet.go:273] Watching apiserver
We can also see that the kubelet is trying to create the kube-apiserver pod under the my-master node, inside the kube-system namespace:
my-master kubelet[71..]: 00:03:29.05 kubelet.go:1576] ..
Creating a mirror pod for "kube-apiserver-my-master_kube-system
To make the storage class "example-nfs" the default, you need to run the command below:
kubectl patch storageclass example-nfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
