how to obtain AKS logs - azure-aks

A few days ago I had some pods crash and in their logs I don't see anything unusual.
I was using the following command:
kubectl logs mypod -n namespace
How do I see the AKS logs to check whether the problem shows up there?

If you're creating your pods with a Kubernetes Deployment, a crashed container is restarted automatically, and kubectl logs then shows the new container instance rather than the one that crashed.
To see the logs of the previously terminated container, add the "-p" argument:
kubectl logs -n <namespace> <pod> -p
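For example, with a hypothetical pod named mypod in a namespace called myns whose container has crashed and restarted:
kubectl get pod mypod -n myns        # the RESTARTS column shows how often the container crashed
kubectl logs mypod -n myns -p        # logs of the previous (crashed) container instance
kubectl describe pod mypod -n myns   # Last State / Reason show why it terminated (e.g. OOMKilled)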

Related

Can't create pods in kubernetes

I am following a tutorial, where a pod is created using the below command:
kubectl run firstPod --image={image from dockerhub repository}
But I am getting the following error:
Error from server (Forbidden): pods "firstPod" is forbidden: error looking up service account default/default: serviceaccount "default" not found
The goal of the command is to pull a Docker image from my own repository and use it to create a pod. I already saw some solutions that use a .yaml file (but I didn't like those answers). All I want is to run this command. I am using Windows 10 and Docker Desktop for a Kubernetes cluster (minikube etc.).
You can test it with network-multitool. It keeps a webserver running and ships with a lot of useful tools.
kubectl run multitool --image=praqma/network-multitool --replicas=1
If that works, find the podname
kubectl get pods
Then you can exec into it with the name you found above
kubectl exec -it multitool-3822887632-pwlr1 bash
From inside the container/pod you can check that the webserver is running with
curl localhost
If the first command doesn't work, then something is wrong.
Check if the service account exists
kubectl get sa
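On a healthy cluster this should list at least the default service account, with output roughly like the following (ages and secret counts vary by version):
NAME      SECRETS   AGE
default   1         5d
If default is missing, the cluster (or the namespace) isn't fully initialized yet.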
Thanks for the answers. Now I realize that I forgot to start my local minikube cluster.
minikube start
Now creating a pod works fine.
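A quick way to double-check that the local cluster is actually up before creating pods (generic sanity checks, not specific to this error):
minikube status
kubectl cluster-info
kubectl get nodes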

Kubernetes :: Restart terminated pod

I'm using Kubernetes to run Jobs with a RestartPolicy set to Never.
Sometimes I would like to be able to debug a failed/terminated pod. Essentially, I'm trying to find out how to restart it with a sleep XXX command so I can connect (exec) to the container and inspect the same state.
In Docker this is doable using docker ps --all and then docker start X, but I didn't find anything similar with kubectl or client-go.
Thanks!
Not sure about client-go, as I have no experience there. But if I understood the question correctly, you can check the reason for the failure:
kubectl get pods (if you do not see your pod here add --all-namespaces)
NAME            READY   STATUS      RESTARTS   AGE
pi-c2x4r        0/1     Completed   0          19m
pi-test-c5hln   0/1     Error       0          16m
And then run:
kubectl describe pod pi-test-c5hln (name of your pod).
kubectl logs pi-test-c5hln
You can also find more information when you run:
kubectl describe job <job-name>
You can find useful information about Jobs and how to work with them (including cleanup, termination and patterns) here.
Not sure if it needs to be added, but Terminating is an ongoing process, so you can only work with the pod after it moves from Terminating to another status (Error, Completed).
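If the goal is to get a shell in the same environment as the failed Job pod, one possible approach (just a sketch, with a hypothetical pod name and an image placeholder you need to fill in) is to start a throwaway copy of the pod with its command overridden to sleep, exec into it, and delete it afterwards:
kubectl run debug-copy --image=<same-image-as-the-job> --restart=Never --command -- sleep 3600
kubectl exec -it debug-copy -- bash   # or sh, depending on the image
kubectl delete pod debug-copy
This won't recover the exact filesystem state of the terminated container, but it gives you a shell in the same image so you can reproduce the failure by hand.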

Kubernetes DNS Disk Utilization is High. Is there a way to tailor the logging to assist?

Can someone suggest what level of logging should be enabled for kube-dns and what parameters to use? My kube-dns pod is using 23GB of disk space and I fear it's related to logging.
Has anyone else seen this behavior?
There are a few ways to resolve your issue:
You can change the log verbosity in the config of your deployment.
kubectl get -o yaml --export deployments kube-dns --namespace=kube-system > file
Edit the file, change --v=2 to --v=0 (this reduces kube-dns logging to a minimum) and deploy it.
kubectl apply -f ./file --namespace=kube-system
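For orientation, the flag lives in the container args of the exported deployment; the exact surrounding fields depend on your kube-dns version, so this is only a sketch:
spec:
  containers:
  - name: kubedns
    args:
    - --domain=cluster.local.
    - --dns-port=10053
    - --v=0   # was --v=2; 0 keeps logging to a minimum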
Then clear the logs on your pods:
kubectl get pods --namespace=kube-system
kubectl exec -it POD_NAME --namespace=kube-system -- /bin/sh
You can configure log rotation using any of the available tools, for example fluentd.

Kubernetes pods are running but docker ps does not give any output

I have been trying to run a Tomcat container on port 5000 on a cluster using Kubernetes. But when I use kubectl create -f tmocat_pod.yaml, it creates the pod but docker ps does not give any output. Why is that?
Ideally, when a pod is running, it means a container is running inside that pod, and that container is defined in the YAML file.
Why is it that docker ps does not show any containers running?
I am following the below URLs:
http://containertutorials.com/get_started_kubernetes/k8s_example.html
https://blog.jetstack.io/blog/k8s-getting-started-part2/
How can I get it running and see Tomcat in the browser on port 5000?
The Docker containers should be running on the virtual machine. Since I only have minikube installed on my local machine, I confirmed that the following will show what you want:
minikube ssh
...
docker ps
Just try the kubernetes equivalent of minikube ssh.
In Kubernetes, Docker containers run inside Pods, Pods run on Nodes, and Nodes run on your machine (minikube/GKE).
When you run kubectl create -f tmocat_pod.yaml you basically create a pod, and it runs the Docker container on that pod.
The node that holds this pod is basically a virtual instance; if you could SSH into that node, docker ps would work.
What you need is:
kubectl get pods <-- like docker ps, it shows you all the running pods (think of them as docker containers)
kubectl get nodes <-- view the host machines for your pods.
kubectl describe pods <pod-name> <-- view system logs for your pods.
kubectl logs <pod-name> <-- Will give you logs for the specific pod.
You can connect your terminal to the Docker daemon that is running inside your node/VM.
With this command in your terminal: eval $(minikube docker-env)
This only configures your current terminal window.
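For example (assuming minikube is up and running):
eval $(minikube docker-env)
docker ps   # now lists the containers running inside the minikube VM
To point the terminal back at your local Docker daemon, open a new terminal or, if your minikube version supports it, run eval $(minikube docker-env --unset).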
Maybe you are not using Docker as the container runtime.
I faced the same issue, and I forgot that I had switched to gVisor with runsc as the handler.
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock"
If so, you need to use the runsc command instead of docker.
I'm not sure where you are running the docker ps command, but if you are trying to do that from your host machine and the k8s cluster is located elsewhere, i.e. your machine is not a node in the cluster, docker ps will not return anything since the containers are not tied to your docker host.
Assuming your pod is running, kubectl get pods will display all of your running pods. To check further details, you can use kubectl describe pod <yourpodname> to check the status of each container (in great detail). To get the pod names, you should be able to use tab-complete with the kubernetes cli. Also, if your pod contains multiple containers, you will need to give the container name as well, which you can use tab-complete for after you've selected your pod.
The output will look similar to:
kubectl describe pod comparison-api-dply-reborn-6ffb88b46b-s2mtx
Name: comparison-api-dply-reborn-6ffb88b46b-s2mtx
Namespace: default
Node: aks-nodepool1-99697518-0/10.240.0.5
Start Time: Fri, 20 Apr 2018 14:08:21 -0400
Labels: app=comparison-pod-reborn
pod-template-hash=2996446026
...
Status: Running
IP: *.*.*.*
Controlled By: ReplicaSet/comparison-api-dply-reborn-6ffb88b46b
Containers:
  rabbit-mq:
    ...
    Port:   5672/TCP
    State:  Running
    ...
If your containers and pods are already running, then you shouldn't need to troubleshoot them too much. To make them accessible from the Public Internet, take a look at Services (https://kubernetes.io/docs/concepts/services-networking/service/) to make your API's IP address fixed and easily reachable.
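As a sketch, assuming the pod has a label such as app=tomcat and the container listens on port 5000 (adjust the names to your YAML):
kubectl expose pod <yourpodname> --port=5000 --target-port=5000 --type=NodePort
kubectl get svc   # note the assigned NodePort, then browse to http://<node-ip>:<nodeport>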
Have you tried a "docker ps -a" to see if the container is dead? If it is there you can see its logs with "docker logs " and maybe this gives you a hint.
If your pod is running successfully and you are looking for the container on the node where the pod is scheduled, the issue could be that Kubernetes is using a different container runtime.
Example
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl exec -it nginx-8586cf59-h92ct bash
root@nginx-8586cf59-h92ct:/# exit
exit
root@renjith-laptop:/home/renjith/raspbery-k8s# kubectl get po -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP          NODE
nginx-8586cf59-h92ct   1/1     Running   0          47s   10.20.0.3   renjith-laptop
root@renjith-laptop:/home/renjith/raspbery-k8s# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
root@renjith-laptop:/home/renjith/raspbery-k8s#
Here I am able to exec into the pod, and I am on the same node where the pod is scheduled, but docker ps doesn't show the container. In my case kubelet is using a different container runtime; one of the arguments to the kubelet service is --container-runtime-endpoint=unix:///var/run/cri-containerd.sock
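If that is your situation too, the containers can still be listed through the CRI socket instead of Docker, for example with crictl (a sketch; the socket path has to match the kubelet flag above):
crictl --runtime-endpoint unix:///var/run/cri-containerd.sock ps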
From the Kubernetes documentation, to list the container images running on your cluster:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
Then you get back something like:
2 registry.k8s.io/coredns/coredns:v1.9.3
1 registry.k8s.io/etcd:3.5.4-0
1 registry.k8s.io/kube-apiserver:v1.25.1
1 registry.k8s.io/kube-controller-manager:v1.25.1
3 registry.k8s.io/kube-proxy:v1.25.1
1 registry.k8s.io/kube-scheduler:v1.25.1

how to debug container images using openshift

Let's say I have a docker image created using a Dockerfile. At the time of writing the Dockerfile I had to test it repeatedly to realize what I did wrong. To debug a docker image I can simply run a test container and look at its stdout/stderr to see what's wrong with the image.
IMAGE_NAME=authoritative-dns-bind
IMAGE_OPTIONS="
-v $(pwd)/config.yaml:/config.yaml:ro
-p 127.0.0.1:53:53
-p 127.0.0.1:53:53/udp"
docker run -t -i $IMAGE_OPTIONS $IMAGE_NAME
Learning the above was good enough to iteratively create and debug a minimal working Docker container. Now I'm looking for a way to do the same for OpenShift.
I'm pretty much aware of the fact that the container is not ready for OpenShift. My plan is to run it and watch its stdout/stderr like I did with Docker. One of the people I asked for help came up with a command that looked exactly like what I need.
oc run -i -t --image $IMAGE_NAME --command test-pod -- bash
And the above command worked for me with the fedora:24 and fedora:latest images from the Docker registry, and I got a working shell. But the same wouldn't happen for my derived image with a containerized service. My explanation is that it probably does an entirely different thing: instead of starting the command interactively, it starts it non-interactively and then tries to run bash inside the failed container.
So what I'm looking for is a reasonable way to debug a container image in OpenShift. I expected that I would be able to at least capture and view stdin/stdout of OpenShift containers.
Any ideas?
Update
According to the comment by Graham, oc run should indeed work like docker run, but that doesn't seem to be the case. With the original Fedora images, bash always appears, at least after hitting Enter.
# oc run -i -t --image authoritative-dns-bind --command test-auth13 -- bash
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
...
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
^C
#
I wasn't able to try out the suggested oc debug yet, as it seems to require more configuration than just a simple image. There's another problem with oc run: that command keeps creating new containers that I don't really need. I hope there is a way to start debugging easily and have the container destroyed automatically afterwards.
There are three main commands to debug pods:
oc describe pod $pod-name -- detailed info about the pod
oc logs $pod-name -- stdout and stderr of the pod
oc exec -ti $pod-name -- bash -- get a shell in running pod
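Depending on the oc version, there is also oc debug, which starts a copy of a pod (or of a deployment config) with the command replaced by a shell, for example (a sketch with placeholder names):
oc debug dc/<name>   # or: oc debug pod/<pod-name>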
To your specific problem: the default pull policy of oc run is set to Always. This means that OpenShift will try to pull the image until it succeeds and will refuse to use the local one.
Once this Kubernetes patch lands in OpenShift Origin, the pull policy will be easily configurable.
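With a recent enough oc/kubectl client the pull policy can also be set directly on the command line, e.g. (assuming your client already has the flag):
oc run -i -t --image authoritative-dns-bind --image-pull-policy=Never --command test-pod -- bash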
Please do not consider this a final answer to the question; feel free to supersede it with your own better answers...
I'm now using a pod configuration file like the following...
apiVersion: v1
kind: Pod
metadata:
  name: "authoritative-dns-server" # pod name, your reference from command line
  namespace: "myproject" # default namespace in `oc cluster up`
spec:
  containers:
  - command:
    - "bash"
    image: "authoritative-dns-bind" # use your image!
    name: "authoritative-dns-bind-container" # required
    imagePullPolicy: "Never" # important! you want openshift to use your local image
    stdin: true
    tty: true
  restartPolicy: "Never"
Note that the command is explicitly set to bash. You can then create the pod, attach to the container, and run the image's original command yourself.
oc create -f pod.yaml
oc attach -t -i authoritative-dns-server
/files/run-bind.py
This is far from ideal and it doesn't really help you debug an ordinary OpenShift container with a standard pod configuration, but at least debugging is possible now. Looking forward to better answers.
