Failed create pod sandbox: rpc error: code = Unknown desc #198 - docker

I deployed a Kubernetes cluster and got this error: the coredns pod doesn't start.
One pod has status ContainerCreating.
The other one has status Running.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-6d57b44787-xlj89   1/1     Running             15         10d
kube-system   calico-node-dwm47                          1/1     Running             310        5d3h
kube-system   calico-node-hhgzk                          1/1     Running             13         10d
kube-system   calico-node-tk4mp                          1/1     Running             309        5d3h
kube-system   calico-node-w7zvs                          1/1     Running             311        5d3h
kube-system   coredns-74c9d4d795-psf2v                   1/1     Running             0          4d4h
**kube-system coredns-74c9d4d795-xpbsd                   0/1     ContainerCreating   0          5d3h**
kube-system   dns-autoscaler-7d95989447-7kqsn            1/1     Running             8          10d
kube-system   kube-apiserver-master                      1/1     Running             1          10d
kube-system   kube-controller-manager-master             1/1     Running             1          10d
kube-system   kube-proxy-9bt8m                           1/1     Running             1          5d3h
kube-system   kube-proxy-cbrcl                           1/1     Running             2          5d3h
kube-system   kube-proxy-stj5g                           1/1     Running             0          5d3h
kube-system   kube-proxy-zql86                           1/1     Running             0          5d3h
kube-system   kube-scheduler-master                      1/1     Running             1          10d
kube-system   kubernetes-dashboard-7c547b4c64-6skc7      1/1     Running             589        10d
kube-system   nginx-proxy-worker1                        1/1     Running             1          5d3h
kube-system   nginx-proxy-worker2                        1/1     Running             0          5d3h
kube-system   nginx-proxy-worker3                        1/1     Running             0          5d3h
kube-system   nodelocaldns-6t92x                         1/1     Running             1          5d3h
kube-system   nodelocaldns-kgm4t                         1/1     Running             0          5d3h
kube-system   nodelocaldns-xl8zg                         1/1     Running             0          5d3h
kube-system   nodelocaldns-xwlwk                         1/1     Running             10         10d
OS:
cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7
My inventory:
master ansible_host=ip ansible_user=root
worker1 ansible_host=ip ansible_user=root
worker2 ansible_host=ip ansible_user=root
worker3 ansible_host=ip ansible_user=root
#[all:vars]
#ansible_python_interpreter=/usr/bin/python3
[kube-master]
master
[kube-node]
worker1
worker2
worker3
[etcd]
master
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
The log from my problem pod:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74c9d4d795-xpbsd": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
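A hedged first-pass triage for this error (pod name taken from the listing above; the node-side checks assume Docker as the runtime, per the error message):

kubectl -n kube-system get pod coredns-74c9d4d795-xpbsd -o wide   # which node is the sandbox failing on?
kubectl -n kube-system describe pod coredns-74c9d4d795-xpbsd      # full event history for the stuck pod
# then, on that node:
docker version   # runtime version the sandbox failed under
uname -r         # 3.10-era CentOS 7 kernels are a commonly reported culprit for "read init-p" failures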

Related

Kubernetes calico-node issue - running 0/1

Hi, I have two virtual machines on a local server with Ubuntu 20.04 and I want to build a small cluster for my microservices. I ran the following steps to set up my cluster, but I have an issue with the calico-node pods: they are running with 0/1 ready.
master.domain.com
ubuntu 20.04
docker --version = Docker version 20.10.7, build f0df350
kubectl version = Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
worker.domain.com
ubuntu 20.04
docker --version = Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
kubectl version = Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
STEP-1
In the master.domain.com virtual machine, I ran the following commands:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7f4f5bf95d-gnll8 1/1 Running 0 38s 192.168.29.195 master <none> <none>
kube-system calico-node-7zmtm 1/1 Running 0 38s 195.251.3.255 master <none> <none>
kube-system coredns-74ff55c5b-ltn9g 1/1 Running 0 3m49s 192.168.29.193 master <none> <none>
kube-system coredns-74ff55c5b-nkhzf 1/1 Running 0 3m49s 192.168.29.194 master <none> <none>
kube-system etcd-kubem 1/1 Running 0 4m6s 195.251.3.255 master <none> <none>
kube-system kube-apiserver-kubem 1/1 Running 0 4m6s 195.251.3.255 master <none> <none>
kube-system kube-controller-manager-kubem 1/1 Running 0 4m6s 195.251.3.255 master <none> <none>
kube-system kube-proxy-2cr2x 1/1 Running 0 3m49s 195.251.3.255 master <none> <none>
kube-system kube-scheduler-kubem 1/1 Running 0 4m6s 195.251.3.255 master <none> <none>
STEP-2
In the worker.domain.com virtual machine, I ran the following command:
sudo kubeadm join 195.251.3.255:6443 --token azuist.xxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
STEP-3
In the master.domain.com virtual machine, I ran the following command:
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7f4f5bf95d-gnll8 1/1 Running 0 6m37s 192.168.29.195 master <none> <none>
kube-system calico-node-7zmtm 0/1 Running 0 6m37s 195.251.3.255 master <none> <none>
kube-system calico-node-wccnb 0/1 Running 0 2m19s 195.251.3.230 worker <none> <none>
kube-system coredns-74ff55c5b-ltn9g 1/1 Running 0 9m48s 192.168.29.193 master <none> <none>
kube-system coredns-74ff55c5b-nkhzf 1/1 Running 0 9m48s 192.168.29.194 master <none> <none>
kube-system etcd-kubem 1/1 Running 0 10m 195.251.3.255 master <none> <none>
kube-system kube-apiserver-kubem 1/1 Running 0 10m 195.251.3.255 master <none> <none>
kube-system kube-controller-manager-kubem 1/1 Running 0 10m 195.251.3.255 master <none> <none>
kube-system kube-proxy-2cr2x 1/1 Running 0 9m48s 195.251.3.255 master <none> <none>
kube-system kube-proxy-kxw4m 1/1 Running 0 2m19s 195.251.3.230 worker <none> <none>
kube-system kube-scheduler-kubem 1/1 Running 0 10m 195.251.3.255 master <none> <none>
kubectl logs -n kube-system calico-node-7zmtm
...
...
2021-06-20 17:10:25.064 [INFO][56] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface eth0: 195.251.3.255/24
2021-06-20 17:10:34.862 [INFO][53] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.5s: avg=4ms longest=13ms ()
kubectl logs -n kube-system calico-node-wccnb
...
...
2021-06-20 17:10:59.818 [INFO][55] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=3ms longest=13ms (resync-filter-v4,resync-nat-v4,resync-raw-v4)
2021-06-20 17:11:05.994 [INFO][51] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface br-9a88318dda68: 172.21.0.1/16
As you can see, both calico-node pods show 0/1 ready. Why?
Any idea how to solve this problem?
Thank you
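One thing worth checking, given the log line above where the worker autodetected the Docker bridge br-9a88318dda68: Calico may have bound to the wrong interface. A hedged sketch that pins autodetection to the real uplink (the eth.* pattern is an assumption; adjust it to your NIC names):

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth.*
kubectl rollout status daemonset/calico-node -n kube-system   # wait for the pods to restart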
I got exactly the same issue.
CentOS 8
kubectl, kubeadm, kubelet v1.22.3
docker-ce version 20.10.9
The only difference worth mentioning is that I had to comment out the line
- --port=0
in /etc/kubernetes/manifests/kube-scheduler.yaml, otherwise the scheduler is reported as unhealthy in
kubectl get componentstatuses
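For reference, a minimal sketch of that workaround (the sed one-liner is illustrative; editing the manifest by hand works just as well, and the kubelet restarts the static pod on its own):

sudo sed -i 's/- --port=0/# &/' /etc/kubernetes/manifests/kube-scheduler.yaml   # comment out the flag
kubectl get componentstatuses   # the scheduler should report Healthy after the restart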
The Kubernetes API is advertised on a public IP address.
The public IP address of the control plane node is substituted with 42.42.42.42 in the kubectl print-outs;
the public IP address of the worker node is substituted with 21.21.21.21;
the public domain name (which is also the hostname of the control plane node) is substituted with public-domain.work.
>kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-5d995d45d6-rk9cq 1/1 Running 0 76m 192.168.231.193 public-domain.work <none> <none>
calico-node-qstxm 0/1 Running 0 76m 42.42.42.42 public-domain.work <none> <none>
calico-node-zmz5s 0/1 Running 0 75m 21.21.21.21 node1.public-domain.work <none> <none>
coredns-78fcd69978-5xsb2 1/1 Running 0 81m 192.168.231.194 public-domain.work <none> <none>
coredns-78fcd69978-q29fn 1/1 Running 0 81m 192.168.231.195 public-domain.work <none> <none>
etcd-public-domain.work 1/1 Running 3 82m 42.42.42.42 public-domain.work <none> <none>
kube-apiserver-public-domain.work 1/1 Running 3 82m 42.42.42.42 public-domain.work <none> <none>
kube-controller-manager-public-domain.work 1/1 Running 2 82m 42.42.42.42 public-domain.work <none> <none>
kube-proxy-5kkks 1/1 Running 0 81m 42.42.42.42 public-domain.work <none> <none>
kube-proxy-xsc66 1/1 Running 0 75m 21.21.21.21 node1.public-domain.work <none> <none>
kube-scheduler-public-domain.work 1/1 Running 1 (78m ago) 78m 42.42.42.42 public-domain.work <none> <none>
>kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
public-domain.work Ready control-plane,master 4h56m v1.22.3 42.42.42.42 <none> CentOS Stream 8 4.18.0-348.el8.x86_64 docker://20.10.9
node1.public-domain.work Ready <none> 4h50m v1.22.3 21.21.21.21 <none> CentOS Stream 8 4.18.0-348.el8.x86_64 docker://20.10.10
>kubectl logs -n kube-system calico-node-qstxm
2021-11-09 15:27:38.996 [INFO][86] felix/int_dataplane.go 1539: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:27:38.996 [INFO][86] felix/hostip_mgr.go 85: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:27:38.997 [INFO][86] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-11-09 15:27:38.998 [INFO][86] felix/ipsets.go 785: Doing full IP set rewrite family="inet" numMembersInPendingReplace=7 setID="this-host"
2021-11-09 15:27:40.198 [INFO][86] felix/iface_monitor.go 201: Netlink address update. addr="here:is:some:ipv6:address:that:has:nothing:to:do:with:my:control:panel:server:public:ipv6" exists=true ifIndex=3
2021-11-09 15:27:40.198 [INFO][86] felix/int_dataplane.go 1071: Linux interface addrs changed. addrs=set.mapSet{"fe80::9132:a0df:82d8:e26c":set.empty{}} ifaceName="eth1"
2021-11-09 15:27:40.198 [INFO][86] felix/int_dataplane.go 1539: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{"here:is:some:ipv6:address:that:has:nothing:to:do:with:my:control:panel:server:public:ipv6":set.empty{}}}
2021-11-09 15:27:40.199 [INFO][86] felix/hostip_mgr.go 85: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{"here:is:some:ipv6:address:that:has:nothing:to:do:with:my:control:panel:server:public:ipv6":set.empty{}}}
2021-11-09 15:27:40.199 [INFO][86] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-11-09 15:27:40.200 [INFO][86] felix/ipsets.go 785: Doing full IP set rewrite family="inet" numMembersInPendingReplace=7 setID="this-host"
2021-11-09 15:27:48.010 [INFO][81] monitor-addresses/startup.go 713: Using autodetected IPv4 address on interface eth0: 42.42.42.42/24
>kubectl logs -n kube-system calico-node-zmz5s
2021-11-09 15:25:56.669 [INFO][64] felix/int_dataplane.go 1071: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1"
2021-11-09 15:25:56.669 [INFO][64] felix/int_dataplane.go 1539: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:25:56.669 [INFO][64] felix/hostip_mgr.go 85: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:25:56.669 [INFO][64] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-11-09 15:25:56.670 [INFO][64] felix/ipsets.go 785: Doing full IP set rewrite family="inet" numMembersInPendingReplace=7 setID="this-host"
2021-11-09 15:25:56.769 [INFO][64] felix/iface_monitor.go 201: Netlink address update. addr="here:is:some:ipv6:address:that:has:nothing:to:do:with:my:worknode:server:public:ipv6" exists=false ifIndex=3
2021-11-09 15:26:07.050 [INFO][64] felix/summary.go 100: Summarising 14 dataplane reconciliation loops over 1m1.7s: avg=5ms longest=11ms ()
2021-11-09 15:26:33.880 [INFO][59] monitor-addresses/startup.go 713: Using autodetected IPv4 address on interface eth0: 21.21.21.21/24
It turned out the issue was the BGP port being blocked by a firewall.
These commands on the master node solved it for me:
>firewall-cmd --add-port 179/tcp --zone=public --permanent
>firewall-cmd --reload
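To verify the fix took effect, a hedged sketch (assumes calicoctl is installed on the master):

firewall-cmd --list-ports                                # 179/tcp should now be listed
calicoctl node status                                    # BGP peers should show "Established"
kubectl get pods -n kube-system -l k8s-app=calico-node   # the pods should go 1/1 Ready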

Two coredns Pods in k8s cluster are in pending state

kube-system coredns-f68dcb75-f6smn 0/1 Pending 0 34m
kube-system coredns-f68dcb75-npc48 0/1 Pending 0 34m
kube-system etcd-master 1/1 Running 0 33m
kube-system kube-apiserver-master 1/1 Running 0 34m
kube-system kube-controller-manager-master 1/1 Running 0 33m
kube-system kube-flannel-ds-amd64-lngrx 1/1 Running 1 32m
kube-system kube-flannel-ds-amd64-qz2gn 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-w5lpc 1/1 Running 0 34m
kube-system kube-proxy-9l9nv 1/1 Running 0 32m
kube-system kube-proxy-hvd5g 1/1 Running 0 32m
kube-system kube-proxy-vdgth 1/1 Running 0 34m
kube-system kube-scheduler-master 1/1 Running 0 33m
I am using the latest k8s version: 1.16.0.
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=<some-repo> --token=TOKEN --apiserver-advertise-address=<IP> --kubernetes-version=1.16.0
This is the command I used to initialize the cluster.
The current state of the cluster:
master NotReady master 42m v1.16.0
slave1 NotReady <none> 39m v1.16.0
slave2 NotReady <none> 39m v1.16.0
Please comment if you need any other info.
I think you need to wait for k8s v1.17.0 or update your current installation; this issue is fixed here.
Original issue
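Before upgrading, it may be worth confirming why the pods are Pending; a hedged sketch using the pod and node names from the output above:

kubectl -n kube-system describe pod coredns-f68dcb75-f6smn   # the Events section shows the scheduling reason
kubectl describe node master                                 # a NotReady condition usually points at the CNI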

Kubernetes dial tcp myIP:10250: connect: no route to host

I have a Kubernetes cluster with 1 master and 3 worker nodes.
calico v3.7.3, kubernetes v1.16.0, installed via kubespray: https://github.com/kubernetes-sigs/kubespray
Before this, I deployed all my pods without any problems.
Now I can't start a few pods (Ceph):
kubectl get all --namespace=ceph
NAME READY STATUS RESTARTS AGE
pod/ceph-cephfs-test 0/1 Pending 0 162m
pod/ceph-mds-665d849f4f-fzzwb 0/1 Pending 0 162m
pod/ceph-mon-744f6dc9d6-jtbgk 0/1 CrashLoopBackOff 24 162m
pod/ceph-mon-744f6dc9d6-mqwgb 0/1 CrashLoopBackOff 24 162m
pod/ceph-mon-744f6dc9d6-zthpv 0/1 CrashLoopBackOff 24 162m
pod/ceph-mon-check-6f474c97f-gjr9f 1/1 Running 0 162m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ceph-mon ClusterIP None <none> 6789/TCP 162m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/ceph-osd 0 0 0 0 0 node-type=storage 162m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ceph-mds 0/1 1 0 162m
deployment.apps/ceph-mon 0/3 3 0 162m
deployment.apps/ceph-mon-check 1/1 1 1 162m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ceph-mds-665d849f4f 1 1 0 162m
replicaset.apps/ceph-mon-744f6dc9d6 3 3 0 162m
replicaset.apps/ceph-mon-check-6f474c97f 1 1 1 162m
But the other namespace is fine:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6d57b44787-xlj89 1/1 Running 19 24d
calico-node-dwm47 1/1 Running 310 19d
calico-node-hhgzk 1/1 Running 15 24d
calico-node-tk4mp 1/1 Running 309 19d
calico-node-w7zvs 1/1 Running 312 19d
coredns-74c9d4d795-jrxjn 1/1 Running 0 2d23h
coredns-74c9d4d795-psf2v 1/1 Running 2 18d
dns-autoscaler-7d95989447-7kqsn 1/1 Running 10 24d
kube-apiserver-master 1/1 Running 4 24d
kube-controller-manager-master 1/1 Running 3 24d
kube-proxy-9bt8m 1/1 Running 2 19d
kube-proxy-cbrcl 1/1 Running 4 19d
kube-proxy-stj5g 1/1 Running 0 19d
kube-proxy-zql86 1/1 Running 0 19d
kube-scheduler-master 1/1 Running 3 24d
kubernetes-dashboard-7c547b4c64-6skc7 1/1 Running 591 24d
nginx-proxy-worker1 1/1 Running 2 19d
nginx-proxy-worker2 1/1 Running 0 19d
nginx-proxy-worker3 1/1 Running 0 19d
nodelocaldns-6t92x 1/1 Running 2 19d
nodelocaldns-kgm4t 1/1 Running 0 19d
nodelocaldns-xl8zg 1/1 Running 0 19d
nodelocaldns-xwlwk 1/1 Running 12 24d
tiller-deploy-8557598fbc-7f2w6 1/1 Running 0 131m
I use CentOS 7:
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
The error log:
Get https://10.2.67.203:10250/containerLogs/ceph/ceph-mon-744f6dc9d6-mqwgb/ceph-mon?tailLines=5000&timestamps=true: dial tcp 10.2.67.203:10250: connect: no route to host
Has anyone come across this and can help me? I will provide any additional information.
logs from pending pods:
Warning FailedScheduling 98s (x125 over 3h1m) default-scheduler 0/4 nodes are available: 4 node(s) didn't match node selector.
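For the FailedScheduling event itself: the Ceph manifests in the listing above use a node-type=storage selector (visible on the ceph-osd DaemonSet), and apparently no node carries that label. If that is the cause, a hedged sketch (worker1 is a placeholder for whichever nodes hold the storage):

kubectl label node worker1 node-type=storage   # repeat for each storage node
kubectl get all --namespace=ceph               # the pending pods should now schedule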
It seems that a firewall is blocking ingress traffic to port 10250 on the node 10.2.67.203.
You can open it by running the commands below (I'm assuming firewalld is installed; otherwise run the equivalent commands for your firewall):
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --list-all # you should see that port `10250` is updated
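A hedged follow-up to confirm the change (pod name taken from the error message above):

sudo firewall-cmd --query-port=10250/tcp         # should print "yes" after the reload
kubectl logs -n ceph ceph-mon-744f6dc9d6-mqwgb   # the log request should now go through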
tl;dr: It looks like your cluster itself is fairly broken and should be repaired before looking at Ceph specifically.
Get https://10.2.67.203:10250/containerLogs/ceph/ceph-mon-744f6dc9d6-mqwgb/ceph-mon?tailLines=5000&timestamps=true: dial tcp 10.2.67.203:10250: connect: no route to host
10250 is the port that the Kubernetes API server uses to connect to a node's Kubelet to retrieve the logs.
This error indicates that the Kubernetes API server is unable to reach the node. It has nothing to do with your containers, pods, or even your CNI network. "no route to host" indicates one of the following:
The host is unavailable
A network segmentation has occurred
The Kubelet is unable to answer the API server
Before addressing issues with the Ceph pods I would investigate why the Kubelet isn't reachable from the API server.
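A hedged triage sketch, run from the control-plane node (the IP comes from the error above):

ping -c 3 10.2.67.203                                 # is the host reachable at all?
nc -vz 10.2.67.203 10250                              # does anything answer on the kubelet port?
ssh 10.2.67.203 systemctl status kubelet --no-pager   # is the kubelet itself running?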
After you have solved the underlying network connectivity issues, I would address the crash-looping Calico pods (you can see the logs of previously executed containers by running kubectl logs -n kube-system calico-node-dwm47 -p).
Once you have both the underlying network and the pod network sorted, I would address the crash-looping Kubernetes Dashboard and, finally, start to investigate why you are having issues deploying Ceph.

IBM cloud private port 8443 not listening after reboot

root@pact1:~# kubectl -s http://localhost:8888 get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system auth-apikeys-9bx2r 1/1 Running 3 4d
kube-system auth-idp-kkt6m 3/3 Running 9 4d
kube-system auth-pap-6brhh 1/1 Running 3 4d
kube-system auth-pdp-6ztzn 1/1 Running 3 4d
kube-system calico-kube-controllers-68786dd655-xrcs4 1/1 Running 4 5d
kube-system calico-node-amd64-48t8h 2/2 Running 8 5d
kube-system catalog-catalog-apiserver-hj6pt 1/1 Running 6 4d
kube-system catalog-catalog-controller-manager-59d9b88c9d-qgd9z 1/1 Running 8 4d
kube-system catalog-ui-42vlk 1/1 Running 4 4d
kube-system default-http-backend-6858c684cd-n222g 1/1 Running 5 4d
kube-system elasticsearch-client-56cf688d8f-l74tf 2/2 Running 8 4d
kube-system elasticsearch-data-0 1/1 Running 4 4d
kube-system elasticsearch-master-86fddbdcb-8bl7p 1/1 Running 4 4d
kube-system filebeat-ds-amd64-hbvrw 1/1 Running 4 4d
kube-system heapster-96c84478b-27spb 2/2 Running 9 4d
kube-system helm-api-7767fbc785-kng26 2/2 Running 8 4d
kube-system helmrepo-86c469554-4nlkf 1/1 Running 4 4d
kube-system icp-ds-0 1/1 Running 4 4d
kube-system icp-management-ingress-lkjlm 1/1 Running 4 4d
kube-system image-manager-0 2/2 Running 8 4d
kube-system k8s-etcd-9.199.144.168 1/1 Running 4 5d
kube-system k8s-mariadb-9.199.144.168 1/1 Running 5 4d
kube-system k8s-master-9.199.144.168 3/3 Running 12 5d
kube-system k8s-proxy-9.199.144.168 1/1 Running 4 5d
kube-system kube-dns-amd64-x9vb6 3/3 Running 12 4d
kube-system logstash-5c8c4954d9-bcfkz 1/1 Running 4 4d
kube-system metering-dm-5c4f8bf7c7-jp6xz 1/1 Running 4 4d
kube-system metering-reader-amd64-75rv2 1/1 Running 4 4d
kube-system metering-server-55c4d77f4c-86l4x 1/1 Running 4 4d
kube-system metering-ui-59c65d97d6-mbw45 1/1 Running 4 4d
kube-system monitoring-exporter-d8568ffff-sxvtq 1/1 Running 4 4d
kube-system monitoring-grafana-78dd9bd7c9-d9hrr 2/2 Running 8 4d
kube-system monitoring-prometheus-7994986858-z2lsm 3/3 Running 12 4d
kube-system monitoring-prometheus-alertmanager-7dc884c44d-4wbf8 3/3 Running 12 4d
kube-system monitoring-prometheus-kubestatemetrics-798dd85965-pwxth 1/1 Running 4 4d
kube-system monitoring-prometheus-nodeexporter-amd64-tzndd 1/1 Running 4 4d
kube-system nginx-ingress-lb-amd64-546w7 1/1 Running 7 4d
kube-system platform-api-gn4gf 1/1 Running 4 4d
kube-system platform-deploy-f6dhv 1/1 Running 4 4d
kube-system platform-ui-cr9n7 1/1 Running 4 4d
kube-system rescheduler-vlfd2 1/1 Running 4 4d
kube-system tiller-deploy-69f658499-6vwpl 1/1 Running 4 4d
kube-system unified-router-4qz2r 1/1 Running 4 4d
We face this issue with cluster startup when we reboot ICP: the console throws a 500 Internal Server Error. Every time I face this issue, I try the solutions below.
1. Follow this link: https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.2/getting_started/known_issues.html#alert_500
2. If the problem is not resolved, verify the services:
systemctl status kubelet docker
3. After applying both of the above, wait a few minutes, clear the browser cache, and then open the console. Often, even after the underlying problem has been fixed, the console does not open because of a caching issue.
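A hedged sketch of the checks in steps 2 and 3, run on the console host (the 8443 port comes from the question title):

systemctl status kubelet docker --no-pager   # both services should be active (running)
ss -tlnp | grep 8443                         # confirm the console port is listening again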
Have you checked whether a firewall rule disappeared when you performed the reboot?
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/troubleshoot/restart_master_console.html

Pod not responding properly

I have a local cluster (no cloud provider) made up of 3 VMs: the master and the nodes. I created a volume backed by NFS so it can be reused if a pod dies and is rescheduled on another node, but I think some component is not working well. To create the cluster I followed only this guide: kubernetes guide. After creating the cluster, this is the actual state:
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get pod --all-namespaces
[sudo] password for master:
NAMESPACE NAME READY STATUS RESTARTS AGE
default mysqlnfs3 1/1 Running 0 27m
kube-system etcd-master-virtualbox 1/1 Running 0 46m
kube-system kube-apiserver-master-virtualbox 1/1 Running 0 46m
kube-system kube-controller-manager-master-virtualbox 1/1 Running 0 46m
kube-system kube-dns-86f4d74b45-f6hpf 3/3 Running 0 47m
kube-system kube-flannel-ds-nffv6 1/1 Running 0 38m
kube-system kube-flannel-ds-rqw9v 1/1 Running 0 39m
kube-system kube-flannel-ds-s5wzn 1/1 Running 0 44m
kube-system kube-proxy-6j7p8 1/1 Running 0 38m
kube-system kube-proxy-7pj8d 1/1 Running 0 39m
kube-system kube-proxy-jqshs 1/1 Running 0 47m
kube-system kube-scheduler-master-virtualbox 1/1 Running 0 46m
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get node
NAME STATUS ROLES AGE VERSION
host1-virtualbox Ready <none> 39m v1.10.2
host2-virtualbox Ready <none> 40m v1.10.2
master-virtualbox Ready master 48m v1.10.2
and this is the pod:
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get pod
NAME READY STATUS RESTARTS AGE
mysqlnfs3 1/1 Running 0 29m
It is scheduled on host2. If I open a shell on host2 and use docker exec, the container works fine: the data is stored and retrieved. But when I try to use kubectl exec, it does not work:
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl exec -it -n default mysqlnfs3 -- /bin/bash
error: unable to upgrade connection: pod does not exist
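A hedged first check: with VirtualBox clusters, kubectl exec errors like this often trace back to the kubelet advertising an unreachable (e.g. NAT) node IP:

sudo kubectl get nodes -o wide   # compare each node's INTERNAL-IP with an address the master can actually reach

If host2-virtualbox reports the NAT address there, pointing the kubelet at the host-only network via its --node-ip flag is the usual fix (an assumption about this setup, not something the post confirms).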
