IBM Kubernetes Service - Docker command not working

I am working on a POC to set up IBM API Connect v2018 on IBM Kubernetes Service.
I have set up my IBM Kubernetes cluster environment, and kubectl commands work fine.
The commands below all produce the expected output:
ibmcloud login -a cloud.ibm.com -r eu-de -g Default
kubectl config current-context
kubectl get services --namespace=kube-system
ibmcloud cs clusters
ibmcloud cs cluster config --cluster mycluster
kubectl get nodes
kubectl get pods -n kube-system
The problem occurs when I try to log in to the IBM Cloud Container Registry.
The command below throws the following error:
ibmcloud cr login
FAILED
Could not locate 'docker'. Check your 'docker' installation and path.
When I type docker in IBM Cloud Shell:
-bash: docker: command not found
Can anyone please suggest how to make Docker available when working with IBM Kubernetes Service?

Related

how to check docker containers and images inside aks nodes

How do I check the running Docker containers and images for the deployed pods inside AKS nodes?
I have deployed our company's pods and services inside an Azure AKS cluster.
I need to log in as the root user inside containers running on the nodes of the managed AKS cluster. Those containers belong to RabbitMQ pods deployed with the Bitnami Helm chart.
I was able to log in to the worker nodes by following this link https://learn.microsoft.com/en-us/azure/aks/node-access, but couldn't find the docker package installed or running inside them.
They do have containerd://1.4.9+azure as the CONTAINER-RUNTIME.
I tried the containerd commands below inside those nodes, but both returned empty output: no running containers and no downloaded images.
ctr container ls
ctr images ls
So how do I check the running Docker containers and images for the deployed pods inside AKS nodes?
You can use crictl to check the running containers inside the worker nodes of managed Kubernetes clusters, at least in the case of AKS.
Here's the reference link: https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/
Use the following commands:
sudo crictl --help
sudo crictl ps
sudo crictl images
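To narrow this down to a specific workload (for example the RabbitMQ pods mentioned above), crictl can also filter by pod. A sketch, run inside the node session; the `rabbitmq` name filter is an assumption for illustration:

```shell
# Find the pod sandbox ID by (assumed) name, then list its containers
POD_ID=$(sudo crictl pods --name rabbitmq -q | head -n 1)
sudo crictl ps --pod "$POD_ID"   # containers belonging to that pod
sudo crictl images               # images pulled onto this node
```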
I use nerdctl instead. Here are the steps that work for me.
Create a debug pod to access the AKS node:
kubectl debug node/<nodeName> -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
You can interact with the node session by running:
chroot /host
Install the nerdctl tool:
wget https://github.com/containerd/nerdctl/releases/download/v0.20.0/nerdctl-0.20.0-linux-amd64.tar.gz
tar zxvf nerdctl-0.20.0-linux-amd64.tar.gz -C /usr/local/bin/
alias n=nerdctl
Check the containers in the AKS node
nerdctl --namespace k8s.io ps -a
Reference:
https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/
https://learn.microsoft.com/en-us/azure/aks/node-access
https://github.com/containerd/nerdctl#whale-nerdctl-load

Docker Desktop on macOS (High Sierra) shows Kubernetes starting forever; kubectl shows Unable to connect to the server: EOF

I upgraded to Docker stable version 2.0.0.3 running locally. Earlier I had Docker CE 17.06, which only had Docker; the Kubernetes option was missing.
When I ran kubectl cluster-info dump I got output in the terminal.
How can I fix this? I have 3 Docker containers which I now want to run in Kubernetes instead of relying on Docker networks alone. When running kubectl cluster-info this is the output:
Kubernetes master is running at https://localhost:6443

How to enter docker containers deployed on Google cluster?

I have deployed my application via Docker on a Google Kubernetes cluster. I am facing an application error, and I want to get inside the container to check a few things. Locally, all I had to do was run sudo docker ps and then docker exec -it into the container.
How can I do that when the containers are deployed on a Kubernetes cluster?
You need to use kubectl:
kubectl get pods
kubectl exec -it pod-container-name -- /bin/bash
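If the pod runs more than one container, name the target container with `-c`. The pod and container names below are placeholders, not names from the question:

```shell
# List pods, then open a shell in a specific container of a pod
kubectl get pods
kubectl exec -it my-pod -c my-container -- /bin/bash

# Minimal images often lack bash; fall back to sh
kubectl exec -it my-pod -- /bin/sh
```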

Running docker inside Kubernetes with containerd

Since K8s v1.11 the runtime was changed from dockerd to containerd.
I'm using Jenkins on Kubernetes to build Docker images using Docker outside of Docker (DooD).
When I tried to switch to the socket file from containerd (containerd/containerd.sock was mapped as /var/run/docker.sock) with the regular Docker client, I got the following error: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x00\x00\x00\x04\x00\x00\x00\x00\x00".
Can the Docker client be used with containerd?
Disclaimer: as of this writing containerd has not replaced Docker; you can install containerd separately from Docker, and you can point the Kubernetes CRI directly at the containerd socket.
When you install Docker, containerd is installed along with it, and the Docker daemon talks to it. You'll see a process like this:
docker-containerd --config /var/run/docker/containerd/containerd.toml
However, the Docker client still talks to the Docker daemon. That's why, when you run the Docker client in your container, you still need to talk directly to the Docker daemon (/var/run/docker.sock). Switch back to /var/run/docker.sock and I believe it should work.
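To illustrate the point, a minimal Docker-outside-of-Docker sketch: mount the Docker daemon's own socket, since the docker CLI speaks the Docker Engine API, which only dockerd serves (containerd's socket speaks gRPC, hence the malformed-HTTP error above). Treat this as a sketch, not the exact Jenkins setup:

```shell
# Give a containerized docker CLI access to the host's Docker daemon
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

# Or point an existing client at the daemon socket explicitly
export DOCKER_HOST=unix:///var/run/docker.sock
docker version
```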
At least with MicroK8s 1.18 on Ubuntu 20.04, I found that a fix for this was to explicitly install Docker alongside Kubernetes.
Similar steps should apply to other Kubernetes distributions that don't include Docker.
After installing microk8s, you can do the following to install Docker:
# Shut down microk8s
sudo snap disable microk8s
# Assuming no Docker installed yet - this fixes the case
# where Kubernetes results in this path being a directory
sudo rm -rf /var/run/docker.sock
sudo apt-get install docker.io
ls -l /var/run/docker.sock
# Output should show socket not directory:
# srw-rw---- 1 root docker 0 Aug 6 11:50 /var/run/docker.sock
# (See https://docs.docker.com/engine/install/linux-postinstall/ for usermod + newgrp commands at this point)
# Restart microk8s
sudo snap enable microk8s
Other Kubernetes distributions may have a different way to shut down processes more selectively.
journalctl -xe is useful to see any errors from Docker or Kubernetes here.
In Kubernetes manifests, be sure to use /var/run/docker.sock as the host path when mounting docker.sock.
Related issues:
hosting docker daemon alongside microk8s
cannot create socket because it's a directory
Post-install steps for Docker on Linux

Unable to start container using kubectl

I am learning Kubernetes and using Minikube to create a single-node cluster on my Ubuntu machine, which also has Oracle VirtualBox installed. When I run:
$ minikube start
Starting local Kubernetes v1.6.4 cluster...
...
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
...
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8000
error: failed to discover supported resources: Get https://192.168.99.100:8443/api: Service Unavailable
I don't understand what is causing this error. Is there some place I can check for logs? I cannot use kubectl logs, since it requires a container name, and the container is not being created at all. Please suggest possible solutions to the problem.
You can debug using these steps:
kubectl talks to the kube-apiserver at port 8443. Try curl -k https://192.168.99.100:8443 and see if there's a positive response. If this fails, it means the kube-apiserver isn't running at all. You can try restarting the VM or rebuilding Minikube to see if it comes up properly the second time around.
You can also debug the VM directly if you feel brave. In this case, get a shell on the VM spun up by Minikube. Run docker ps | grep apiserver to check if the kube-apiserver pod is running. Also try ps aux | grep apiserver to check if it's running natively. If neither turns up results, check the logs using journalctl -xef.
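The steps above can be sketched as a quick checklist (the IP matches the kube config shown in the question):

```shell
# 1. Is kube-apiserver reachable at all? Any HTTP response (even 401/403) means it's up.
curl -k https://192.168.99.100:8443/

# 2. Shell into the minikube VM
minikube ssh

# 3. Inside the VM: is the apiserver running as a container, or natively?
docker ps | grep apiserver
ps aux | grep apiserver

# 4. If neither turns up anything, inspect the system logs
journalctl -xef
```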