Unable to create Kubernetes cluster with HA etcd on Ubuntu 20.04 - Docker

For two days I have been fighting with a Kubernetes setup on Ubuntu 20.04. I created a so-called template VM on vSphere and cloned three VMs from it.
I have the following configuration on each master node:
/etc/hosts
127.0.0.1 localhost
127.0.1.1 kubernetes-master1
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.255.200 kubernetes-cluster.homelab01.local
192.168.255.201 kubernetes-master1.homelab01.local
192.168.255.202 kubernetes-master2.homelab01.local
192.168.255.203 kubernetes-master3.homelab01.local
192.168.255.204 kubernetes-worker1.homelab01.local
192.168.255.205 kubernetes-worker2.homelab01.local
192.168.255.206 kubernetes-worker3.homelab01.local
The 127.0.1.1 entry is kubernetes-master1 on the first master, kubernetes-master2 on the second, and kubernetes-master3 on the third.
I am using Docker 19.03.11, which is the latest version supported by Kubernetes according to the documentation.
Docker
Client: Docker Engine - Community
Version: 19.03.11
API version: 1.40
Go version: go1.13.10
Git commit: 42e35e61f3
Built: Mon Jun 1 09:12:34 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 42e35e61f3
Built: Mon Jun 1 09:11:07 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
I used the following commands to install Docker:
sudo apt-get update && sudo apt-get install -y \
containerd.io=1.2.13-2 \
docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
I put all the necessary packages on hold:
sudo apt-mark hold kubelet kubeadm kubectl docker-ce containerd.io docker-ce-cli
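For what it is worth, the kubeadm documentation recommends running Docker with the systemd cgroup driver. This is roughly the daemon.json from the official container-runtimes guide; I cannot confirm the template VM contains exactly this, so treat it as a sketch rather than what is actually on disk:
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker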
Some details about the VMs:
Master1
sudo cat /sys/class/dmi/id/product_uuid
f09c3242-c8f7-c97e-bc6a-b2065c286ea9
IP: 192.168.255.201
Master2
sudo cat /sys/class/dmi/id/product_uuid
b4fe3242-ba37-a533-c12f-b30b735cbe9f
IP: 192.168.255.202
Master3
sudo cat /sys/class/dmi/id/product_uuid
c3cc3242-4115-8c38-8e46-166190620249
IP: 192.168.255.203
IP addresses and name resolution work flawlessly on all hosts:
192.168.255.200 kubernetes-cluster.homelab01.local
192.168.255.201 kubernetes-master1.homelab01.local
192.168.255.202 kubernetes-master2.homelab01.local
192.168.255.203 kubernetes-master3.homelab01.local
192.168.255.204 kubernetes-worker1.homelab01.local
192.168.255.205 kubernetes-worker2.homelab01.local
192.168.255.206 kubernetes-worker3.homelab01.local
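A quick sanity check for resolution that works on every node (plain getent/ping, nothing special):
getent hosts kubernetes-cluster.homelab01.local
ping -c 1 kubernetes-master2.homelab01.local
ping -c 1 kubernetes-worker1.homelab01.local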
Keepalived.conf
This is the file from master1. On master2 it has $STATE=BACKUP and priority 100; on master3, $STATE=BACKUP and priority 89.
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
$STATE=MASTER
$INTERFACE=ens160
$ROUTER_ID=51
$PRIORITY=255
$AUTH_PASS=Kub3rn3t3S!
$APISERVER_VIP=192.168.255.200/24
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state $STATE
interface $INTERFACE
virtual_router_id $ROUTER_ID
priority $PRIORITY
authentication {
auth_type PASS
auth_pass $AUTH_PASS
}
virtual_ipaddress {
$APISERVER_VIP
}
track_script {
check_apiserver
}
}
check_apiserver.sh
/etc/keepalived/check_apiserver.sh
#!/bin/sh
APISERVER_VIP=192.168.255.200
APISERVER_DEST_PORT=6443
errorExit() {
echo "*** $*" 1>&2
exit 1
}
curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
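The script is executable and can be exercised by hand; this is the manual check I use (exit code 0 means the API server answered on port 6443):
sudo chmod +x /etc/keepalived/check_apiserver.sh
sudo /etc/keepalived/check_apiserver.sh && echo "apiserver OK" || echo "apiserver check failed"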
Keepalived service
sudo service keepalived status
● keepalived.service - Keepalive Daemon (LVS and VRRP)
Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-01-06 16:41:38 CET; 1min 26s ago
Main PID: 804 (keepalived)
Tasks: 2 (limit: 4620)
Memory: 4.7M
CGroup: /system.slice/keepalived.service
├─804 /usr/sbin/keepalived --dont-fork
└─840 /usr/sbin/keepalived --dont-fork
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: Registering Kernel netlink reflector
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: Registering Kernel netlink command channel
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: Opening file '/etc/keepalived/keepalived.conf>
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: WARNING - default user 'keepalived_script' fo>
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: (Line 29) Truncating auth_pass to 8 characters
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: SECURITY VIOLATION - scripts are being execut>
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: (VI_1) ignoring tracked script check_apiserve>
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: Warning - script check_apiserver is not used
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: Registering gratuitous ARP shared channel
Jan 06 16:41:38 kubernetes-master1 Keepalived_vrrp[840]: (VI_1) Entering MASTER STATE
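The warnings about the default script user and the ignored tracked script suggest keepalived wants script security enabled before it will run check_apiserver.sh. A sketch of the global_defs block with those two options added; I have not verified this is the cause, so take it as an assumption rather than the fix:
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}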
haproxy.cfg
# /etc/haproxy/haproxy.cfg
#
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log /dev/log local0
log /dev/log local1 notice
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 1
timeout http-request 10s
timeout queue 20s
timeout connect 5s
timeout client 20s
timeout server 20s
timeout http-keep-alive 10s
timeout check 10s
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
bind *:8443
mode tcp
option tcplog
default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
option httpchk GET /healthz
http-check expect status 200
mode tcp
option ssl-hello-chk
balance roundrobin
server kubernetes-master1 192.168.255.201:6443 check
server kubernetes-master2 192.168.255.202:6443 check
server kubernetes-master3 192.168.255.203:6443 check
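For completeness, this is the syntax check I run after editing haproxy.cfg (standard haproxy check mode), followed by a restart:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy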
haproxy service status
sudo service haproxy status
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-01-06 16:41:38 CET; 3min 12s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 847 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUC>
Main PID: 849 (haproxy)
Tasks: 3 (limit: 4620)
Memory: 4.7M
CGroup: /system.slice/haproxy.service
├─849 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/hapro>
└─856 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/hapro>
Jan 06 16:41:38 kubernetes-master1 haproxy[856]: Server apiserver/kubernetes-master1 is DOWN, reason: >
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: [WARNING] 005/164139 (856) : Server apiserver/kuberne>
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: Server apiserver/kubernetes-master2 is DOWN, reason: >
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: Server apiserver/kubernetes-master2 is DOWN, reason: >
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: [WARNING] 005/164139 (856) : Server apiserver/kuberne>
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: [ALERT] 005/164139 (856) : backend 'apiserver' has no>
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: Server apiserver/kubernetes-master3 is DOWN, reason: >
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: Server apiserver/kubernetes-master3 is DOWN, reason: >
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: backend apiserver has no server available!
Jan 06 16:41:39 kubernetes-master1 haproxy[856]: backend apiserver has no server available!
I am creating the first Kubernetes control-plane node with the following command:
sudo kubeadm init --control-plane-endpoint kubernetes-cluster.homelab01.local:8443 --upload-certs
This works well, and I apply the Calico CNI plugin with:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
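To confirm Calico is up before joining the other masters, something like the following can be used (assuming the manifest's usual k8s-app=calico-node label):
kubectl -n kube-system get pods -l k8s-app=calico-node
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=180s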
After that I attempt the join from master2.
Keepalived works perfectly fine; I tested it on all three nodes by stopping the service and watching the failover to the other nodes. Once Kubernetes was created on the first node, master1, HAProxy reported that the backend had become visible.
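The failover test was essentially the following (interface and VIP taken from the keepalived config above; exact commands reconstructed from memory):
# on master1: stop the current VIP holder
sudo systemctl stop keepalived
# on master2/master3: the VIP should appear within a few seconds
ip addr show ens160 | grep 192.168.255.200
# restore master1 afterwards
sudo systemctl start keepalived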
Kubernetes cluster bootstrap process
sudo kubeadm init --control-plane-endpoint kubernetes-cluster.homelab01.local:8443 --upload-certs
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-cluster.homelab01.local kubernetes-master1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.255.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master1 localhost] and IPs [192.168.255.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master1 localhost] and IPs [192.168.255.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.539325 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
57abea9f00357a4459c852249ac0170633c9a0f2327cde191e529a1689ea158b
[mark-control-plane] Marking the node kubernetes-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node kubernetes-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2cu336.rjxs8i0svtna27ke
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join kubernetes-cluster.homelab01.local:8443 --token 2cu336.rjxs8i0svtna27ke \
--discovery-token-ca-cert-hash sha256:eb0668ca16acec622e4a97d69e0d4c42e64b1a61ffea13a3787956817021ca54 \
--control-plane --certificate-key 57abea9f00357a4459c852249ac0170633c9a0f2327cde191e529a1689ea158b
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kubernetes-cluster.homelab01.local:8443 --token 2cu336.rjxs8i0svtna27ke \
--discovery-token-ca-cert-hash sha256:eb0668ca16acec622e4a97d69e0d4c42e64b1a61ffea13a3787956817021ca54
Everything is up and running on master1:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-744cfdf676-mks4d 1/1 Running 0 36s
kube-system pod/calico-node-bnvmz 1/1 Running 0 37s
kube-system pod/coredns-74ff55c5b-skdzk 1/1 Running 0 3m11s
kube-system pod/coredns-74ff55c5b-tctl9 1/1 Running 0 3m11s
kube-system pod/etcd-kubernetes-master1 1/1 Running 0 3m4s
kube-system pod/kube-apiserver-kubernetes-master1 1/1 Running 0 3m4s
kube-system pod/kube-controller-manager-kubernetes-master1 1/1 Running 0 3m4s
kube-system pod/kube-proxy-smmmx 1/1 Running 0 3m11s
kube-system pod/kube-scheduler-kubernetes-master1 1/1 Running 0 3m4s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3m17s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3m11s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 38s
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 3m11s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 38s
kube-system deployment.apps/coredns 2/2 2 2 3m11s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-744cfdf676 1 1 1 37s
kube-system replicaset.apps/coredns-74ff55c5b 2 2 2 3m11s
Immediately after I attempt to join master2 to the cluster, Kubernetes on master1 dies.
wojcieh@kubernetes-master2:~$ sudo kubeadm join kubernetes-cluster.homelab01.local:8443 --token 2cu336.rjxs8i0svtna27ke \
> --discovery-token-ca-cert-hash sha256:eb0668ca16acec622e4a97d69e0d4c42e64b1a61ffea13a3787956817021ca54 \
> --control-plane --certificate-key 57abea9f00357a4459c852249ac0170633c9a0f2327cde191e529a1689ea158b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-cluster.homelab01.local kubernetes-master2 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.255.202]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master2 localhost] and IPs [192.168.255.202 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master2 localhost] and IPs [192.168.255.202 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
Broadcast message from systemd-journald@kubernetes-master2 (Wed 2021-01-06 16:53:04 CET):
haproxy[870]: backend apiserver has no server available!
Broadcast message from systemd-journald@kubernetes-master2 (Wed 2021-01-06 16:53:04 CET):
haproxy[870]: backend apiserver has no server available!
^C
wojcieh@kubernetes-master2:~$
Here are some logs which might be relevant:
Logs from master1 https://pastebin.com/Y1zcwfWt
Logs from master2 https://pastebin.com/rBELgK1Y
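If it helps, these are the kinds of checks that can be run on master1 immediately after the failed join; all of them are standard Docker/kubectl/journalctl calls, and the container ID is a placeholder to be substituted:
# see whether the etcd / kube-apiserver containers are restarting or exited
sudo docker ps -a | grep -E 'etcd|kube-apiserver'
# inspect the local etcd logs (substitute the real container ID)
sudo docker logs --tail 50 <etcd-container-id>
# kubelet messages from around the join attempt
sudo journalctl -u kubelet --since "10 minutes ago" --no-pager | tail -n 50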

Related

Minikube will not start on M1 MacBook Pro (Monterey) - times out

I am trying to run minikube start on my M1 MacBook Pro; however, I am getting the following error logs:
initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0927 10:05:39.532492 1011 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: missing optional cgroups: blkio
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0927 10:09:44.005406 3933 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: missing optional cgroups: blkio
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0927 10:09:44.005406 3933 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: missing optional cgroups: blkio
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
I have tried uninstalling minikube by running brew uninstall minikube and reinstalling it via the website using curl.
I have also reinstalled Docker Desktop and enabled Kubernetes.
I have run rm -rf ~/.minikube and minikube delete numerous times.
So I am at a standstill.

I can't start minikube

Hi, I have installed minikube, but when I run minikube start I get this error:
😄 minikube v1.17.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running docker "minikube" container ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.0 ...
🤦 Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
[
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
I can't understand what the problem is here. It was working before; then I got a similar error that says:
🐳 Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...| ❌ Unable to load cached images: loading cached images: stat /home/feiz-nouri/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4: no such file or directory
I uninstalled it and then reinstalled it, but I still get the error.
Can someone help me fix this?
You can use minikube delete to delete the old cluster. After that, start minikube again using minikube start.
I followed the below steps:
$ docker system prune
$ minikube delete
$ minikube start --container-runtime=containerd
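Afterwards, the new cluster can be verified with the usual commands:
minikube status
kubectl get nodes
kubectl get pods -A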

Cannot initialize Kubernetes cluster on Ubuntu 18.04 (VirtualBox)

I am struggling to initialize a simple Kubernetes cluster using Ubuntu on VirtualBox. I tried both the server and desktop versions, following the official documentation:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
I also tried to follow some other guides, thinking the issue was caused by my use of VirtualBox VMs, like this one:
https://medium.com/@gunjangarge/create-kubernetes-cluster-using-kubeadm-on-ubuntu-virtualbox-step-by-step-68a3eeb1f74c
But every time I have the same issue with port 6443 not being exposed. Sometimes the process starts correctly, giving me the join command:
kubeadm init --pod-network-cidr=192.168.0.0/16
W1029 08:47:53.841460 11540 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.192:6443 --token ztnoww.t8ng5a3jo2kx5cb2 \
--discovery-token-ca-cert-hash sha256:907dde6cc6d72ed4cd7fe7e7f252e2cf657dd3256fba6ee5ec92027132a9c5af
Sometimes it does not start at all and times out:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Anyway, even when it starts, port 6443 is never exposed, and the kubelet is not happy about it:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2020-10-29 08:48:15 CET; 20s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 13262 (kubelet)
Tasks: 14 (limit: 4666)
CGroup: /system.slice/kubelet.service
└─13262 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-contai
Okt 29 08:48:22 master kubelet[13262]: E1029 08:48:22.588386 13262 controller.go:136] failed to ensure node lease exists, will retry in 800ms, error: Get "https://192.168.1.192:6443/apis/coordination.k8s.io/v1/names
Okt 29 08:48:22 master kubelet[13262]: E1029 08:48:22.785951 13262 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.192:644
Okt 29 08:48:23 master kubelet[13262]: I1029 08:48:23.022354 13262 kubelet_node_status.go:70] Attempting to register node master
Okt 29 08:48:24 master kubelet[13262]: I1029 08:48:24.188510 13262 request.go:645] Throttling request took 1.097264312s, request: POST:https://192.168.1.192:6443/api/v1/namespaces/kube-system/pods
Okt 29 08:48:25 master kubelet[13262]: I1029 08:48:25.678880 13262 kubelet_node_status.go:108] Node master was previously registered
Okt 29 08:48:25 master kubelet[13262]: I1029 08:48:25.679004 13262 kubelet_node_status.go:73] Successfully registered node master
Okt 29 08:48:25 master kubelet[13262]: W1029 08:48:25.765981 13262 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Okt 29 08:48:27 master kubelet[13262]: E1029 08:48:27.148246 13262 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: c
Okt 29 08:48:30 master kubelet[13262]: W1029 08:48:30.767511 13262 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Okt 29 08:48:32 master kubelet[13262]: E1029 08:48:32.164211 13262 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: c
I have to say I don't know what to do now. I tried for hours with different Ubuntu versions, trying to find solutions on the Internet, but I didn't find any. I also went through the logs and found that maybe the config file is not created correctly for some reason:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml
but I found nothing about it, except "try to init the cluster again", which I did several times.
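A few checks narrow down whether the API server ever came up at all. This is only a sketch, assuming Docker as the runtime and reusing the 192.168.1.192 address from the output above; the container ID is a placeholder:
sudo ss -lntp | grep 6443                    # is anything listening on the API server port?
sudo docker ps -a | grep kube-apiserver      # did the kube-apiserver container start at all?
sudo docker logs <apiserver-container-id>    # and why did it exit, if it did?
curl -k https://192.168.1.192:6443/healthz   # does the API server answer?
sudo journalctl -u kubelet -f                # watch the kubelet trying to start the static pods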
Thank you in advance for your help!
OK, I think I finally found the problem. I tried the same process on another PC and everything worked smoothly, so for any of you having a similar issue: just don't try to use VirtualBox and WSL at the same time (even if WSL is shut off).
I just did what's explained here: https://stackoverflow.com/a/63229718/2428805 and now everything's fine...

Kubernetes not showing nodes

I initialized the master node and joined the worker nodes to the cluster with kubeadm. According to the logs, the worker nodes successfully joined the cluster.
However, when I list the nodes on the master using kubectl get nodes, the worker nodes are absent. What is wrong?
[vagrant@localhost ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready master 12m v1.13.1
Here are the kubeadm logs:
PLAY [Alusta kubernetes masterit] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]

TASK [kubeadm reset] ***********************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:01.078073",
"end":"2019-01-05 07:06:59.079748",
"rc":0,
"start":"2019-01-05 07:06:58.001675",
"stderr":"",
"stderr_lines":[
],
...
}

TASK [kubeadm init] ************************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
"delta":"0:01:05.163377",
"end":"2019-01-05 07:08:06.229286",
"rc":0,
"start":"2019-01-05 07:07:01.065909",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"stdout_lines":[
"[init] Using Kubernetes version: v1.13.1",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Activating the kubelet service",
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
"[apiclient] All control plane components are healthy after 19.504023 seconds",
"[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
"[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
"[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6",
"[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
"[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
"[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
"[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
"[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
"[addons] Applied essential addon: CoreDNS",
"[addons] Applied essential addon: kube-proxy",
"",
"Your Kubernetes master has initialized successfully!",
"",
"To start using your cluster, you need to run the following as a regular user:",
"",
" mkdir -p $HOME/.kube",
" sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
" sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"",
"You should now deploy a pod network to the cluster.",
"Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
" https://kubernetes.io/docs/concepts/cluster-administration/addons/",
"",
"You can now join any number of machines by running the following on each node",
"as root:",
"",
" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
]
}

TASK [set_fact] ****************************************************************
ok: [k8s-n1] => {
"ansible_facts":{
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
},
"changed":false
}

TASK [debug] *******************************************************************
ok: [k8s-n1] => {
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
}

TASK [Aseta ymparistomuuttujat] ************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"cp /etc/kubernetes/admin.conf /home/vagrant/ && chown vagrant:vagrant /home/vagrant/admin.conf && export KUBECONFIG=/home/vagrant/admin.conf && echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc",
"delta":"0:00:00.008628",
"end":"2019-01-05 07:08:08.663360",
"rc":0,
"start":"2019-01-05 07:08:08.654732",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}

PLAY [Konfiguroi CNI-verkko] ***************************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]

TASK [sysctl] ******************************************************************
ok: [k8s-n1] => {
"changed":false
}

TASK [sysctl] ******************************************************************
ok: [k8s-n1] => {
"changed":false
}

TASK [Asenna Flannel-plugin] ***************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"export KUBECONFIG=/home/vagrant/admin.conf ; kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml",
"delta":"0:00:00.517346",
"end":"2019-01-05 07:08:17.731759",
"rc":0,
"start":"2019-01-05 07:08:17.214413",
"stderr":"",
"stderr_lines":[
],
"stdout":"clusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\ndaemonset.extensions/kube-flannel-ds-arm64 created\ndaemonset.extensions/kube-flannel-ds-arm created\ndaemonset.extensions/kube-flannel-ds-ppc64le created\ndaemonset.extensions/kube-flannel-ds-s390x created",
"stdout_lines":[
"clusterrole.rbac.authorization.k8s.io/flannel created",
"clusterrolebinding.rbac.authorization.k8s.io/flannel created",
"serviceaccount/flannel created",
"configmap/kube-flannel-cfg created",
"daemonset.extensions/kube-flannel-ds-amd64 created",
"daemonset.extensions/kube-flannel-ds-arm64 created",
"daemonset.extensions/kube-flannel-ds-arm created",
"daemonset.extensions/kube-flannel-ds-ppc64le created",
"daemonset.extensions/kube-flannel-ds-s390x created"
]
}

TASK [shell] *******************************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"sleep 10",
"delta":"0:00:10.004446",
"end":"2019-01-05 07:08:29.833488",
"rc":0,
"start":"2019-01-05 07:08:19.829042",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}

PLAY [Alusta kubernetes workerit] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [k8s-n3]
ok: [k8s-n2]

TASK [kubeadm reset] ***********************************************************
changed: [k8s-n3] => {
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.085388",
"end":"2019-01-05 07:08:34.547407",
"rc":0,
"start":"2019-01-05 07:08:34.462019",
"stderr":"",
"stderr_lines":[
],
...
}
changed: [k8s-n2] => {
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.086224",
"end":"2019-01-05 07:08:34.600794",
"rc":0,
"start":"2019-01-05 07:08:34.514570",
"stderr":"",
"stderr_lines":[
],
"stdout":"[preflight] running pre-flight checks\n[reset] no etcd config found. Assuming external etcd\n[reset] please manually reset etcd to prevent further issues\n[reset] stopping the kubelet service\n[reset] unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]\n[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually.\nFor example: \niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.",
"stdout_lines":[
"[preflight] running pre-flight checks",
"[reset] no etcd config found. Assuming external etcd",
"[reset] please manually reset etcd to prevent further issues",
"[reset] stopping the kubelet service",
"[reset] unmounting mounted directories in \"/var/lib/kubelet\"",
"[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]",
"[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
"[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
"",
"The reset process does not reset or clean up iptables rules or IPVS tables.",
"If you wish to reset iptables, you must do so manually.",
"For example: ",
"iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X",
"",
"If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
"to reset your system's IPVS tables."
]
}

TASK [kubeadm join] ************************************************************
changed: [k8s-n3] => {
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:01.988676",
"end":"2019-01-05 07:08:38.771956",
"rc":0,
"start":"2019-01-05 07:08:36.783280",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}
changed: [k8s-n2] => {
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:02.000874",
"end":"2019-01-05 07:08:38.979256",
"rc":0,
"start":"2019-01-05 07:08:36.978382",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}

PLAY RECAP *********************************************************************
k8s-n1 : ok=24   changed=16   unreachable=0   failed=0
k8s-n2 : ok=16   changed=13   unreachable=0   failed=0
k8s-n3 : ok=16   changed=13   unreachable=0   failed=0
.
[vagrant@localhost ~]$ kubectl get events -a
Flag --show-all has been deprecated, will be removed in an upcoming release
LAST SEEN TYPE REASON KIND MESSAGE
3m15s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 72f6776d-c267-4e31-8e6d-a4d36da1d510
3m16s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 2d68a2c8-e27a-45ff-b7d7-5ce33c9e1cc4
4m2s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 0213bbdf-f4cd-4e19-968e-8162d95de9a6
By default, the nodes (the kubelet) identify themselves using their hostnames. It seems that your VMs' hostnames are not set: they all register as localhost.localdomain, so every join overwrites the same Node object.
In the Vagrantfile, set the hostname value to a different name for each VM (see the sketch after the link below).
https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname
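A minimal sketch of that change, reusing the host names that already appear in the play above (all other box and network settings are assumed to stay as they are in your existing Vagrantfile):
# Vagrantfile (excerpt) - give every VM its own hostname so each kubelet
# registers as a distinct node instead of localhost.localdomain
Vagrant.configure("2") do |config|
  ["k8s-n1", "k8s-n2", "k8s-n3"].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
    end
  end
end
After reloading the VMs and re-running the reset/init/join play, kubectl get nodes should then list three differently named nodes.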

How to configure polipo's HTTP proxy for docker to deploy Kubernetes

I am a beginner with Docker. I wanted to know whether this is good practice and what the best way to do it would be.
System: Ubuntu 16.04.2 LTS
I want to deploy Kubernetes on my server, which is behind a proxy. Because of some problems, I used polipo to convert a SOCKS5 proxy into an HTTP proxy. The HTTP proxy works fine in the terminal. Then I found this:
https://docs.docker.com/engine/admin/systemd/#http-proxy
and I added the HTTP_PROXY environment variable in /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8123/"
and then did this:
$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://127.0.0.1:8123/
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
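Note that with only HTTP_PROXY set, requests to the cluster's own addresses can also end up going through polipo (the preflight warning further down points at exactly this). A common remedy is to add a NO_PROXY line next to HTTP_PROXY and export the same value in the shell that runs kubeadm; in this sketch the node IP 59.64.78.138 is taken from the log below and the service CIDR 10.96.0.0/12 is only the kubeadm default, so adjust both to your setup:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8123/"
Environment="NO_PROXY=localhost,127.0.0.1,59.64.78.138,10.96.0.0/12"

# in the shell, before kubeadm init:
export NO_PROXY=localhost,127.0.0.1,59.64.78.138,10.96.0.0/12
sudo systemctl daemon-reload && sudo systemctl restart docker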
and then I successfully installed kubelet, kubeadm, kubectl and kubernetes-cni, and ran this command:
# kubeadm init
Here is the result of the operation:
root@ubuntu16:~# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: Connection to "https://59.64.78.138:6443" uses proxy "http://127.0.0.1:8123/". If that is not intended, adjust your proxy settings
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ubuntu16 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 my_server_IP]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
Then it does not make any further progress, so I ran:
# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Sun 2017-11-05 21:17:37 CST; 9min ago
Docs: http://kubernetes.io/docs/
Main PID: 19363 (kubelet)
Tasks: 14
Memory: 39.9M
CPU: 14.229s
CGroup: /system.slice/kubelet.service
└─19363 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/k
Nov 05 21:26:28 ubuntu16 kubelet[19363]: W1105 21:26:28.959628 19363 cni.go:196] Unable to update cni config: No ne
Nov 05 21:26:28 ubuntu16 kubelet[19363]: E1105 21:26:28.960538 19363 kubelet.go:2095] Container runtime network not
Nov 05 21:26:33 ubuntu16 kubelet[19363]: W1105 21:26:33.962500 19363 cni.go:196] Unable to update cni config: No ne
Nov 05 21:26:33 ubuntu16 kubelet[19363]: E1105 21:26:33.963407 19363 kubelet.go:2095] Container runtime network not
Nov 05 21:26:38 ubuntu16 kubelet[19363]: W1105 21:26:38.974986 19363 cni.go:196] Unable to update cni config: No ne
Nov 05 21:26:38 ubuntu16 kubelet[19363]: E1105 21:26:38.975851 19363 kubelet.go:2095] Container runtime network not
Nov 05 21:26:43 ubuntu16 kubelet[19363]: W1105 21:26:43.977879 19363 cni.go:196] Unable to update cni config: No ne
Nov 05 21:26:43 ubuntu16 kubelet[19363]: E1105 21:26:43.978806 19363 kubelet.go:2095] Container runtime network not
Nov 05 21:26:48 ubuntu16 kubelet[19363]: W1105 21:26:48.992642 19363 cni.go:196] Unable to update cni config: No ne
Nov 05 21:26:48 ubuntu16 kubelet[19363]: E1105 21:26:48.993587 19363 kubelet.go:2095] Container runtime network not
Now I am confused about how to solve this problem. I have been searching the net for a long time, but with no luck. Please help or give me some ideas on how to achieve this.
Thanks in advance.
When installing it, I had to make this change to align the cgroup driver between Docker and the kubelet:
docker info | grep -i cgroup
In the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
modify the corresponding line so that it reads:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Bye

Resources