Got only one node after installing a Kubernetes cluster - docker

I followed the official guide to install a Kubernetes cluster with kubeadm on Vagrant.
https://kubernetes.io/docs/getting-started-guides/kubeadm/
master
node1
node2
Master
# kubeadm init --apiserver-advertise-address=192.168.33.200
# sudo cp /etc/kubernetes/admin.conf $HOME/
# sudo chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
Node1 and Node2
# kubeadm join --token <token> 192.168.33.200:6443
...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Up to this point, everything succeeded.
But when I check kubectl get nodes on the master host, it returns only one node:
# kubectl get nodes
NAME STATUS AGE VERSION
localhost.localdomain Ready 25m v1.6.4
Sometimes, it returns:
# kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
Edit: Set a unique hostname on each of the hosts.
Then check kubectl get nodes again from the master:
[root@master ~]# kubectl get nodes
NAME STATUS AGE VERSION
localhost.localdomain Ready 4h v1.6.4
master Ready 12m v1.6.4
Setting the hostname just registered the master again under its new name; every node needs a unique hostname before it joins, otherwise they all register as localhost.localdomain.
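For example, before running kubeadm init/join, each VM can be given its own name (a sketch; the names follow the Vagrant setup above, not the exact commands used):
sudo hostnamectl set-hostname master   # on the master; use node1/node2 on the workers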

Related

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers, so I am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock.
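For reference, the sibling container is launched along these lines (the image name here is a placeholder, not the real one):
docker run -it -v /var/run/docker.sock:/var/run/docker.sock my-ci-image:latest /bin/bash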
The basic docker setup seems to be working on the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Because minikube does not complete the cluster creation, this answer uses kind to run Kubernetes in a (sibling) Docker container; the same networking fix applies to both.
Since the (sibling) container does not know enough about its setup, the networking is slightly broken: kind (and minikube) write a loopback IP into the kubeconfig on cluster creation, even though the cluster container actually sits on a different IP in the host docker network.
To correct this, the (sibling) container needs to be connected to the docker network that actually hosts the Kubernetes container. The procedure is illustrated below:
1.) Create a kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
βœ“ Preparing nodes πŸ“¦
βœ“ Writing configuration πŸ“œ
βœ“ Starting control-plane πŸ•ΉοΈ
βœ“ Installing CNI πŸ”Œ
βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify that the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's IP. Note the "-control-plane" suffix added to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not reachable from the (sibling) container. To connect the container to the correct network, first retrieve the docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally connect the (sibling) container ID (which should be stored in the $HOSTNAME environment variable) with the cluster docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify that the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns Kubernetes control plane and CoreDNS URL, as shown in the last step above, the configuration has succeeded.
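For convenience, steps 3.) through 7.) can be collected into one block (the same commands as above, with the same assumed cluster name "acluster"):
export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
docker network connect $MASTER_NET $HOSTNAME
kubectl cluster-info --context kind-acluster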
You can run minikube in a docker-in-docker (dind) container. It will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is needed to run minikube with the docker driver as root, which we shouldn't do according to the minikube instructions.
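To sanity-check the dind cluster beyond the hello pod, the same wrapped kubectl can list the node (just the standard subcommand, nothing dind-specific):
/ # ./minikube kubectl -- get nodes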

Kubernetes Installation process guidance [closed]

During the installation of kubernetes, an error is reported when I initialize the master node. I am using an ARM platform server and the operating system is CentOS 7.6 aarch64. Does kubernetes support deploying master nodes on the ARM platform?
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Jun 30 22:53:04 master kubelet[54238]: W0630 22:53:04.188966 54238 pod_container_deletor.go:75] Container "51615bc1d926dcc56606bca9f452c178398bc08c78a2418a346209df28b95854" not found in pod's containers
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.189353 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.218672 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.236484 54238 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.112:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.238898 54238 certificate_manager.go:400] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://192.168.1.112:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.260520 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.289516 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.389666 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.436810 54238 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.1.112:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.489847 54238 kubelet.go:2248] node "master" not found
To start a kubernetes cluster, make sure you have the minimum requirements for the kubernetes platform. (If you want a kubernetes cluster on lower-spec machines, that is a separate discussion.)
You need:
Docker
Compute nodes with at least 4GB memory and 2 CPUs each.
The answer below is written with nodes like that in mind.
Docker
On each of your machines, install Docker. Version 19.03.11 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.
Use the following commands to install Docker on your system:
Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
Create /etc/docker
mkdir /etc/docker
Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
Restart Docker
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
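To verify that Docker picked up the systemd cgroup driver from daemon.json (a quick sanity check, not part of the original guide):
docker info | grep -i "cgroup driver"
It should print Cgroup Driver: systemd.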
Kubernetes
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
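To make the br_netfilter module survive reboots, one option is a modules-load.d entry (the path is the usual systemd convention; it is not part of the original steps):
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF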
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet
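One prerequisite not shown above: kubeadm's preflight checks fail when swap is enabled, so disable it first (the sed line commenting out the fstab entry is a best-effort sketch):
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab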
Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
Master
Init kubernetes cluster (Running this on master node)
kubeadm init --pod-network-cidr 192.168.0.0/16
Note: I will use Calico here, so the CIDR is 192.168.0.0/16.
Move kube config to user directory (assume root)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Worker Node
Join other nodes (Running below command from your worker node)
kubeadm join <IP_PUBLIC>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
Note: you get this command from the output when you successfully init the master.
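If you lose that output, or the token expires (the default TTL is 24 hours), you can generate a fresh join command on the master:
kubeadm token create --print-join-command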
Master Node
Applying calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify cluster
kubectl get nodes
Reference : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Kubernetes: Nodes/Pods not showing with kubectl after building cluster with kubeadm

I am trying to create a Kubernetes cluster using the kubeadm tool. For this I installed the supported docker version as specified here.
I could also install kubeadm successfully. I initiated the cluster with the command below:
sudo kubeadm init --pod-network-cidr=10.244.0.0/14 --apiserver-advertise-address=172.16.0.11
and I got the instructions to use kubeadm join to join nodes to the cluster, as shown below:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.0.11:6443 --token ptdibx.r2uu0s772n6fqubc \
--discovery-token-ca-cert-hash sha256:f3e36a8e82fb8166e0faf407235f12e256daf87d0a6d0193394f4ce31b50255c
I used flannel for networking:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
After that, when I run kubectl to check the pods/nodes status, it fails:
$ sudo kubectl get nodes
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ sudo kubectl get pods --all-namespaces
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ echo $KUBECONFIG
/etc/kubernetes/admin.conf:/home/ltestzaman/.kube/config
Docker and kubernetes versions are as follows:
$ sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2",
GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean",
BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
$ sudo docker version
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 08:10:07 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.0-ce
How to make the cluster work?
Output of admin.conf is as follows:
$ sudo cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpN
QjRYRFRJd01EUXhPREV6TlRnek1sb1hEVE13TURReE5qRXpOVGd6TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjdvCnhERGVP
UWd4TGsyWE05K0pxc0hYRkRPWHB2YXYxdXNyeXNYMldOZzlOOVFaeDVucFA2UTRpYkQrZDFjWFcyL0oKY0ZXUjhBSDliaG5hZlhLRUhiUVZWT0R2UTcwbFRmNXVtVlQ4Qk5ZUjRQTmlCWjFxVDNMNnduWlYrekl6a0ZKMwp0
eVVNK0prUy80K2dMazI3b01HMFZ4Rnpjd1ozREMxWEFqRXVxb3FrYVF5UGUzMk9XZmZ2N082TjhxeWNCNkdnClNxbWxNWldBMk1DL0J1cFpZWXRYNkUyYUtNUloxZjkzRlpCaFdYNG9DYjVQSGdSUEdoNTFiNERKZExoYlk4
aWMKdVRSa0EyTi95UDVrTlRIMW5pSTU3bTlUY2phcDZpV0p3dFhsdlpOTUpCYmhQS1VjTEFhZG1tTHFtWTNMTmhiaApGZ2orK0s4T3hXVk5KYWVuQnI4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0Ex
VWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHS2VwQURtZTEva0orZWpob3p4RXVIdXFwQTYKT3dkK3VNQlNPMWYzTTBmSzkxQmhWYkxWakZZeEUwSjVqc1BLNzNJM3cxRU5rb2p2UGdnc0pV
NHBjNnoyeGdsVgpCQ0tESWhWSEVPOVlzRVNpdERnd2g4QUNyQitpeEc4YjBlbnFXTzhBVjZ6dGNESGtJUXlLdDAwNmgxNUV1bi9YCmg0ZUdBMDQrRmNTZVltZndSWHpMVmFFS3F2UHZZWVdkTHBJTktWRFNHZ3J3U3cvbnU5
K2g1U09Ddms1YncwbEYKODhZNnlTaHk3U1B6amRNUHdRcks5cmhWY1ZXK1VvS3d6SE80aUZHdWpmNDR0WHRydTY4L1NGVm5uVnRHWkYyKwo2WmJYeE81Z3I2c1NBQU9NK0E1RmtkQlNmcXZDdmwvUzZFQk04V2czaGNjOUZL
cEFCV0tadHNoRlMxOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.0.11:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJS2RLRGs4MUpNKzh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp
1WlhSbGN6QWVGdzB5TURBME1UZ3hNelU0TXpKYUZ3MHlNVEEwTVRneE16VTRNelphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUl
CSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW44elI0RlVVM2F1QjluVGkKeWxJV1ZpZ3JkV1dHSXY0bmhsRnZuQU1mWVJyVklrMGN6eTZPTmZBQzNrb01tZ3ZPQnA5MmpsWmlvYXpJUGg1aAovaUR
xalE3dzN4cFhUN1QxT1kySy9mVyt1S1NRUVI3VUx1bjM4MTBoY1ZRSm5NZmV4UGJsczY2R3RPeE9WL2RQCm1tcEEyUFlzL0lwWWtLUEhqNnNvb0NXU1JEMUZIeG1SdWFhYXhpL0hYQXdJODZEN01uWS90KzZJQVIyKzZRM0s
KY2pPRFdEWlRpbHYyMXBCWFBadW9FTndoZ0s2bWhKUU5VRmc5VmVFNEN4NEpEK2FYbmFSUW0zMndid29oYXk1OAo3L0FnUjRoMzNXTjNIQ1hPVGUrK2Z4elhDRnhwT1NUYm13Nkw1c1RucFkzS2JFRXE5ZXJyNnorT2x5ZVl
GMGMyCkZCV3J4UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCVDVKckdCS1NxS1VkclNYNUEyQVZHNmFZSHl
1TkNkeWp1cQpIVzQrMU5HdWZSczJWZW1kaGZZN2VORDJrSnJXNldHUnYyeWlBbDRQUGYzYURVbGpFYm9jVmx0RjZHTXhRRVU2CkJtaEJUWGMrVndaSXRzVnRPNTUzMHBDSmZwamJrRFZDZWFFUlBYK2RDd3hiTnh0ZWpacGZ
XT28zNGxTSGQ3TFYKRDc5UHAzYW1qQXpyamFqZE50QkdJeHFDc3lZWE9Rd1BVL2Nva1RJVHRHVWxLTVV5aUdOQk5rQ3NPc3RiZHI2RApnQVRuREg5YWdNck9CR2xVaUlJN0Qvck9rU3IzU2QvWnprSGdMM1c4a3V5dXFUWWp
wazNyNEFtU3NXS1M4UUdNCjZ6bHUwRk4rYnVJbGRvMnBLNmhBZlBnSTBjcDZWQWk1WW5lQVVGQ2EyL2pJeXI3cXRiND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBbjh6UjRGVVUzYXVCOW5UaXlsSVdWaWdyZFdXR0l2NG5obEZ2bkFNZllSclZJazBjCnp5Nk9OZkFDM2tvTW1
ndk9CcDkyamxaaW9heklQaDVoL2lEcWpRN3czeHBYVDdUMU9ZMksvZlcrdUtTUVFSN1UKTHVuMzgxMGhjVlFKbk1mZXhQYmxzNjZHdE94T1YvZFBtbXBBMlBZcy9JcFlrS1BIajZzb29DV1NSRDFGSHhtUgp1YWFheGkvSFh
Bd0k4NkQ3TW5ZL3QrNklBUjIrNlEzS2NqT0RXRFpUaWx2MjFwQlhQWnVvRU53aGdLNm1oSlFOClVGZzlWZUU0Q3g0SkQrYVhuYVJRbTMyd2J3b2hheTU4Ny9BZ1I0aDMzV04zSENYT1RlKytmeHpYQ0Z4cE9TVGIKbXc2TDV
zVG5wWTNLYkVFcTllcnI2eitPbHllWUYwYzJGQldyeFFJREFRQUJBb0lCQVFDQXhjRHJFaVQ2Yk5jUwpFQ2NoK3Z4YytZbnIxS0EvV3FmbktZRFRMQUVCYzJvRmRqYWREbHN6US9KTHgwaFlhdUxmbTJraVVxS3d2bGV2CkZ
6VElZU1loL2NSRlJTak81bmdtcE5VNHlldWpSNW1ub0h4RVFlNjVnbmNNcURnR3kxbk5SMWpiYnV6R3B4YUsKOUpTRlR0SnJCQlpFZkFmYXB1Q04rZE9IR2ovQUZJbWt2ZXhSckwyTXdIem0zelJkMG5UdkRyOUh1dy9IMjE
1RAprNXBHZjluV1ZsNnZxSGZFYVF0S0NNemY2WE5MdEFjcEJMcmNwSExwWEFObVNMWTAvcFFnV0s5eVpkbVA5b0xCCjhvU1J0eFRsZlU0V1RLdERpNlozK0tTSytidnF4MDRGZTJYb2RlVUM3eDN1d3lwamszOXZjSG55UkM
2Tmhlem4KTExJcnVEbVJBb0dCQU1VbG8zRkpVTUJsczYrdHZoOEpuUjJqN2V6bU9SNllhRmhqUHVuUFhkeTZURFVxOFM2aQprSTZDcG9FZEFkeUE4ejhrdU01ZlVFOENyOStFZ05DT3lHdGVEOFBaV2FCYzUxMit6OXpuMXF
3SVg3QjY1M01lCk5hS2Y1Z3FYbllnMmdna2plek1lbkhQTHFRLzZDVjZSYm93Q3lFSHlrV0FXS3I4cndwYXNNRXQ3QW9HQkFNK0IKRGZZRU50Vmk5T3VJdFNDK0pHMHY1TXpkUU9JT1VaZWZRZTBOK0lmdWwrYnRuOEhNNGJ
aZmRUNmtvcFl0WmMzMQptakhPNDZ5NHJzcmEwb1VwalFscEc5VGtVWDRkOW9zRHoydlZlWjBQRlB2em53R1JOUGxzaTF1cUZHRkdyY0dTClJibzZiTjhKMmZqV0hGb2ppekhVb3Rkb1BNbW1qL0duM0RmVEw2Ry9Bb0dBQk8
rNVZQZlovc2ROSllQN003bkEKNW1JWmJnb2h1Z05rOFhtaXRLWU5tcDVMbERVOERzZmhTTUE2dlJibDJnaWNqcU16d1c4ZmlxcnRqbkk1NjM3Mwp3OEI2TXBRNXEwdElPOCt3VXI2M1lGMWhVQUR6MUswWCtMZDZRaCtqd1N
wa1BTaFhTR05tMVh0dkEwaG1mYWkwCmxPcm82c1hSSUEvT0NEVm5UUENJMFFzQ2dZQWZ3M0dQcHpWOWxKaEpOYlFFUHhiMFg5QjJTNmdTOG40cTU0WC8KODVPSHUwNGxXMXFKSUFPdEZ3K3JkeWdzTk9iUWtEZjZSK0V5SDF
NaVdqeS9oWXpCVkFXZW9SU1lhWjNEeWVHRwpjRGNkZzZHQ3I5ZzNOVE1XdXpiWjRUOGRaT1JVTFQvZk1mSlljZm1iemFxcFlhZDlDVCtrR2FDMGZYcXJVemF5CmxQRkZvUUtCZ0E4ck5IeG1DaGhyT1ExZ1hET2FLcFJUUWN
xdzd2Nk1PVEcvQ0lOSkRiWmdEU3g3dWVnVHFJVmsKd3lOQ0o2Nk9kQ3VOaGpHWlJld2l2anJwNDhsL2plM1lsak03S2k1RGF6NnFMVllqZkFDSm41L0FqdENSZFRxTApYQ3B1REFuTU5HMlFib0JuSU1UaXNBaXJlTXduS1Z
BZjZUMzM4Tjg5ZEo2Wi93QUlOdWNYCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
Not sure why most of the entries are null as shown below
$ sudo kubectl config view
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Most probably the kubeconfig file is not set up properly.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
This should also work:
sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
Changing the environment variable KUBECONFIG to /home/ltestzaman/.kube/config makes it work:
$ echo $KUBECONFIG
/home/ltestzaman/.kube/config
$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
kubemaster-001 Ready master 117m v1.18.2
Or mention --kubeconfig explicitly, as identified by @Arghya Sadhu:
$ sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
NAME STATUS ROLES AGE VERSION
kubemaster-001 Ready master 120m v1.18.2
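To avoid exporting KUBECONFIG in every new shell, the setting can be persisted (assuming bash):
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc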
You have to start the Kubernetes cluster: either with minikube start or, if you are connecting to a cloud service, make sure the cluster is started there.

Kubernetes not showing nodes

I initialized the master node and joined worker nodes to the cluster with kubeadm. According to the logs, the worker nodes successfully joined the cluster.
However, when I list the nodes on the master using kubectl get nodes, the worker nodes are absent. What is wrong?
[vagrant@localhost ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready master 12m v1.13.1
Here are the kubeadm logs:
PLAY [Initialize kubernetes masters] *******************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]
TASK [kubeadm reset] ***********************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:01.078073",
"end":"2019-01-05 07:06:59.079748",
"rc":0,
"start":"2019-01-05 07:06:58.001675",
"stderr":"",
"stderr_lines":[
],
...
}
TASK [kubeadm init] ************************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
"delta":"0:01:05.163377",
"end":"2019-01-05 07:08:06.229286",
"rc":0,
"start":"2019-01-05 07:07:01.065909",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"stdout_lines":[
"[init] Using Kubernetes version: v1.13.1",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Activating the kubelet service",
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
"[apiclient] All control plane components are healthy after 19.504023 seconds",
"[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
"[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
"[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6",
"[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
"[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
"[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
"[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
"[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
"[addons] Applied essential addon: CoreDNS",
"[addons] Applied essential addon: kube-proxy",
"",
"Your Kubernetes master has initialized successfully!",
"",
"To start using your cluster, you need to run the following as a regular user:",
"",
" mkdir -p $HOME/.kube",
" sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
" sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"",
"You should now deploy a pod network to the cluster.",
"Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
" https://kubernetes.io/docs/concepts/cluster-administration/addons/",
"",
"You can now join any number of machines by running the following on each node",
"as root:",
"",
" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
]
}
TASK [set_fact] ****************************************************************
ok: [k8s-n1] => {
"ansible_facts":{
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
},
"changed":false
}
TASK [debug] *******************************************************************
ok: [k8s-n1] => {
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
}
TASK [Set environment variables] ***********************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"cp /etc/kubernetes/admin.conf /home/vagrant/ && chown vagrant:vagrant /home/vagrant/admin.conf && export KUBECONFIG=/home/vagrant/admin.conf && echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc",
"delta":"0:00:00.008628",
"end":"2019-01-05 07:08:08.663360",
"rc":0,
"start":"2019-01-05 07:08:08.654732",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}
PLAY [Configure CNI network] ***************************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]
TASK [sysctl] ******************************************************************
ok: [k8s-n1] => {
"changed":false
}
TASK [sysctl] ******************************************************************
ok: [k8s-n1] => {
"changed":false
}
TASK [Install Flannel plugin] **************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"export KUBECONFIG=/home/vagrant/admin.conf ; kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml",
"delta":"0:00:00.517346",
"end":"2019-01-05 07:08:17.731759",
"rc":0,
"start":"2019-01-05 07:08:17.214413",
"stderr":"",
"stderr_lines":[
],
"stdout":"clusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\ndaemonset.extensions/kube-flannel-ds-arm64 created\ndaemonset.extensions/kube-flannel-ds-arm created\ndaemonset.extensions/kube-flannel-ds-ppc64le created\ndaemonset.extensions/kube-flannel-ds-s390x created",
"stdout_lines":[
"clusterrole.rbac.authorization.k8s.io/flannel created",
"clusterrolebinding.rbac.authorization.k8s.io/flannel created",
"serviceaccount/flannel created",
"configmap/kube-flannel-cfg created",
"daemonset.extensions/kube-flannel-ds-amd64 created",
"daemonset.extensions/kube-flannel-ds-arm64 created",
"daemonset.extensions/kube-flannel-ds-arm created",
"daemonset.extensions/kube-flannel-ds-ppc64le created",
"daemonset.extensions/kube-flannel-ds-s390x created"
]
}
TASK [shell] *******************************************************************
changed: [k8s-n1] => {
"changed":true,
"cmd":"sleep 10",
"delta":"0:00:10.004446",
"end":"2019-01-05 07:08:29.833488",
"rc":0,
"start":"2019-01-05 07:08:19.829042",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}
PLAY [Initialize kubernetes workers] *******************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-n3]
ok: [k8s-n2]
TASK [kubeadm reset] ***********************************************************
changed: [k8s-n3] => {
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.085388",
"end":"2019-01-05 07:08:34.547407",
"rc":0,
"start":"2019-01-05 07:08:34.462019",
"stderr":"",
"stderr_lines":[
],
...
}
changed: [k8s-n2] => {
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.086224",
"end":"2019-01-05 07:08:34.600794",
"rc":0,
"start":"2019-01-05 07:08:34.514570",
"stderr":"",
"stderr_lines":[
],
"stdout":"[preflight] running pre-flight checks\n[reset] no etcd config found. Assuming external etcd\n[reset] please manually reset etcd to prevent further issues\n[reset] stopping the kubelet service\n[reset] unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]\n[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually.\nFor example: \niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.",
"stdout_lines":[
"[preflight] running pre-flight checks",
"[reset] no etcd config found. Assuming external etcd",
"[reset] please manually reset etcd to prevent further issues",
"[reset] stopping the kubelet service",
"[reset] unmounting mounted directories in \"/var/lib/kubelet\"",
"[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]",
"[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
"[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
"",
"The reset process does not reset or clean up iptables rules or IPVS tables.",
"If you wish to reset iptables, you must do so manually.",
"For example: ",
"iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X",
"",
"If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
"to reset your system's IPVS tables."
]
}
TASK [kubeadm join] ************************************************************
changed: [k8s-n3] => {
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:01.988676",
"end":"2019-01-05 07:08:38.771956",
"rc":0,
"start":"2019-01-05 07:08:36.783280",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}
changed: [k8s-n2] => {
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:02.000874",
"end":"2019-01-05 07:08:38.979256",
"rc":0,
"start":"2019-01-05 07:08:36.978382",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}
PLAY RECAP *********************************************************************
k8s-n1 : ok=24 changed=16 unreachable=0 failed=0
k8s-n2 : ok=16 changed=13 unreachable=0 failed=0
k8s-n3 : ok=16 changed=13 unreachable=0 failed=0
[vagrant@localhost ~]$ kubectl get events -a
Flag --show-all has been deprecated, will be removed in an upcoming release
LAST SEEN TYPE REASON KIND MESSAGE
3m15s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 72f6776d-c267-4e31-8e6d-a4d36da1d510
3m16s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 2d68a2c8-e27a-45ff-b7d7-5ce33c9e1cc4
4m2s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 0213bbdf-f4cd-4e19-968e-8162d95de9a6
By default the nodes (kubelet) identify themselves by their hostname. It seems that your VMs' hostnames are not set: they all report as localhost.localdomain, so they overwrite each other's node object.
In the Vagrantfile, set the hostname value to a different name for each VM:
https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname
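A minimal sketch of what that looks like in the Vagrantfile (the machine names here are taken from the play recap above; adjust to your setup):
config.vm.define "k8s-n1" do |node|
  node.vm.hostname = "k8s-n1"
end
Repeat the block with k8s-n2 and k8s-n3 so each node registers under its own name.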

Flannel fails in kubernetes cluster due to failure of subnet manager

I am running etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on a master node as well as kubelet and kube-proxy on a minion node as follows (all kube binaries are from kubernetes 1.7.4):
# [master node]
./etcd
./kube-apiserver --logtostderr=true --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.10.10.0/24 --insecure-port 8080 --secure-port=0 --allow-privileged=true --insecure-bind-address 0.0.0.0
./kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080
./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080
# [minion node]
./kubelet --logtostderr=true --address=0.0.0.0 --api_servers=http://$MASTER_IP:8080 --allow-privileged=true
./kube-proxy --master=http://$MASTER_IP:8080
After this, if I execute kubectl get all --all-namespaces and kubectl get nodes, I get
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes 10.10.10.1 <none> 443/TCP 27m
NAME STATUS AGE VERSION
minion-1 Ready 27m v1.7.4+793658f2d7ca7
Then, I apply flannel as follows:
kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml
Now, I see a pod is created, but with error:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-p8tcb 1/2 CrashLoopBackOff 4 2m
When I check the logs inside the failed container in the minion node, I see the following error:
Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
My question is: how do I resolve this? Is this an SSL issue? What step am I missing in setting up my cluster?
Maybe your flannel YAML file has something wrong in it. You can try the following to install flannel.
Check for an old flannel link:
ip link
If it shows a flannel interface, delete it:
ip link delete flannel.1
Then install flannel; its default pod network CIDR is 10.244.0.0/16:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
You could try passing --etcd-prefix=/your/prefix and --etcd-endpoints=address to flanneld instead of --kube-subnet-mgr, so flannel gets its net-conf from the etcd server and not from the API server.
Keep in mind that you must push the net-conf to the etcd server first.
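A sketch of pushing that net-conf with the etcd v2 API (the prefix is a placeholder; flanneld reads <prefix>/config):
etcdctl set /your/prefix/config '{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'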
UPDATE
The problem (/var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory) can appear when the apiserver is executed without --admission-control=...,ServiceAccount,..., or when the kubelet runs inside a container (e.g. hypercube); the latter was my case. If you want to execute k8s components inside a container, you need to pass the 'shared' option to the kubelet volume:
/var/lib/kubelet/:/var/lib/kubelet:rw,shared
Furthermore, enable the same option for docker in docker.service:
MountFlags=shared
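One way to set that without editing the unit file directly is a systemd drop-in (the file name is arbitrary; the directory is the systemd convention):
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/mount-flags.conf
[Service]
MountFlags=shared
EOF
systemctl daemon-reload
systemctl restart docker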
Now the question is: is there a security hole with shared mount?

Resources