Can we interact with and troubleshoot containers inside Kubernetes without command-line access? - docker

Can we interact with and troubleshoot containers inside Kubernetes without command-line access? Or will reading logs be sufficient for debugging?
Is there any way to debug the containers without the command line (kubectl)?

Unfortunately, containers built FROM scratch are not simple to debug; the best you can do is add logging and telemetry to the container so that you don't have to debug it. The other option is to use minimal images like busybox.
The K8s team has a proposal for a kubectl debug target-pod command, but it is not something you can use yet.
In the worst-case scenario you can try Scratch-debugger: it creates a busybox pod on the same node as the pod being debugged and calls Docker to inject the debugging filesystem into the existing container.

You can set up access to the dashboard and make changes to the containers / read logs there.
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
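For reference, a typical dashboard setup looks roughly like this (the dashboard release in the URL is an example and will differ for your cluster; check the docs page above for the current manifest):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy
The UI is then reachable through the proxy at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and lets you browse pods, read their logs, and exec into containers from the browser.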

Related

Deployment of a web application in an AKS Windows node pool

I have deployed an OWIN-hosted web application in AKS (Windows node pool). The container is in the running state, but I am not able to hit the application. There might be runtime exceptions or errors, but I cannot figure out where to see such errors on the AKS Windows node. Please help me.
So, the ideal way here would be to use kubectl logs (which also flows into Azure Monitor, if you have that enabled). However, Windows containers don't pass their logs on to stdout by default; you have to use Log Monitor for that. Essentially, you have to enable Log Monitor in your container image to be able to get the logs out of the container just like you do with Linux containers. I blogged about it here: https://techcommunity.microsoft.com/t5/itops-talk-blog/troubleshooting-windows-containers-apps-on-azure-kubernetes/ba-p/3269767
The other thing you can try is to use kubectl exec to run a command inside the container and get its output.
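As a rough illustration (the pod name and namespace are placeholders):
kubectl logs <pod-name> -n <namespace>
kubectl exec -it <pod-name> -n <namespace> -- cmd
The first command only shows something useful once the image writes to stdout (for example via Log Monitor); the second drops you into a cmd shell inside the Windows container so you can inspect the application's own log or event files.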

Azure Kubernetes Service (AKS) POD File Explorer

I deployed a .NET application on AKS with a Windows node pool. I want to view the file structure inside the AKS pod. Is there any tool for that, or any other suggestion?
I don't have a tool for this, but I do have a suggestion: if you are using Kubernetes, don't log to files inside your Pods.
Your application should send logs to STDOUT and STDERR, and you can scrape those logs with a tool like Fluent Bit, Fluentd, or Promtail and send them to a central log solution such as Loki.
Another downside of this log-file approach is that if you don't have a persistent volume for your pod, it will use an emptyDir, i.e. an ephemeral volume. This also means that Kubernetes will evict your pod if the node reaches 85% of its storage capacity.
I found a simple way to view a pod's folder structure and the contents of its files using PowerShell.
Run the command below to exec into the pod and open a PowerShell session in which you can run commands inside the pod.
kubectl exec -it k8s-xm-cm-pod -n staging powershell
Reference: https://support.sitecore.com/kb?id=kb_article_view&sysparm_article=KB0137733
Please let me know if anyone knows of other tools that show the pod file structure.
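If you only need a quick look without extra tools, plain kubectl exec can list and print files non-interactively as well; a rough sketch, with the paths as placeholders:
kubectl exec k8s-xm-cm-pod -n staging -- cmd /c dir /s C:\app
kubectl exec k8s-xm-cm-pod -n staging -- cmd /c type C:\app\web.config
The first command prints the directory tree recursively; the second prints the contents of a single file.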

Not able to connect to a container (created via REST API) in Kubernetes

I am creating a Docker container (using docker run) in a Kubernetes environment by invoking a REST API.
I have mounted the host machine's docker.sock, and I am building an image and running that image from the REST API.
Now I need to connect to this container from another container, which is actually started by kubectl from a deployment.yml file.
But when I use kubectl describe pod (pod name), my container created via the REST API is not there. So where is this container running, and how can I connect to it from another container?
Are you running the container in the same namespace as the namespace used by deployment.yml? One option to check that would be to run:
kubectl get pods --all-namespaces
If you are not able to find the Docker container there, then I would suggest the steps below:
docker ps -a   (on the node, to verify the container's status)
Ensure that there are no permission errors when mounting docker.sock
If there are permission errors, escalate privileges to the appropriate level
To answer the second question, connecting two containers should be possible by referencing the cluster DNS name in the format below:
"<servicename>.<namespacename>.svc.cluster.local"
I would also ask you to detail the steps, code, and errors (if there are any) so I can better answer the question.
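As a quick check, assuming a Service named myservice in namespace mynamespace (both names are placeholders), you can test that this DNS name resolves from a throwaway pod:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup myservice.mynamespace.svc.cluster.local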
You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually docker run (or equivalent), and as you note, normal administrative calls like kubectl get pods won't see it; the CPU and memory used by the container won't be known to the node, which could cause the node to become over-utilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider, it'll be hard to make your container accessible at all, much less from a pod running on a different node.
A process running in a pod can access the Kubernetes API directly, though; the documentation on accessing the API from a Pod notes that all of the official client libraries are aware of the conventions this uses. This means that you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, servicename.namespacename.svc.cluster.local is a valid DNS name that reaches any Pod connected to the Service.)
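A minimal sketch of that pattern, with all names and the image as placeholders:
apiVersion: batch/v1
kind: Job
metadata:
  name: user-workload
spec:
  template:
    metadata:
      labels:
        app: user-workload
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/user-image:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: user-workload
spec:
  selector:
    app: user-workload
  ports:
  - port: 80
    targetPort: 8080
Submitting these through the API (or kubectl apply -f) gives you a container that Kubernetes schedules and monitors, reachable at user-workload.<namespace>.svc.cluster.local.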
You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (e.g., RabbitMQ) and then launch a pool of workers that connect to it. You can control the size of the worker pool using a Deployment. This is easier to develop, since it avoids a hard dependency on Kubernetes, and easier to manage, since it prevents a flood of dynamic jobs from overwhelming your cluster.

Why do Kubernetes' containers fail when run on runsc (gVisor) as a runtime in Docker?

I am running a single-master Kubernetes cluster with Docker. I wanted to try runsc (gVisor) on Kubernetes; I just wanted to start each container in a separate sandbox. So I set runsc as the default runtime and restarted the Docker service. To my surprise, all of the Kubernetes containers were failing (checked with docker ps). What is the exception that causes this? Is there any other way to use gVisor + Docker + Kubernetes?
I meet the requirements to run each of them.
PS: I am just a beginner.
Thanks for trying gVisor! Sorry it isn't working for you.
Running a Kubernetes Pod inside gVisor is still fairly experimental. It can be made to work, but is a bit difficult to configure right now. We are working to make this easier.
Can you run gVisor with Docker (not Kubernetes)? See the instructions here:
https://github.com/google/gvisor#configuring-docker
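In outline, that setup is a runtime entry in Docker's daemon configuration plus an explicit --runtime flag when testing (the runsc path is an assumption; use wherever you installed the binary):
/etc/docker/daemon.json:
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
sudo systemctl restart docker
docker run --rm --runtime=runsc hello-world
Testing one container with --runtime=runsc, rather than making runsc the default runtime, avoids breaking the Kubernetes system containers while you experiment.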
If that fails, please file a bug report:
https://github.com/google/gvisor/issues
If you can include debug logs, that will help us diagnose any failure.
https://github.com/google/gvisor#debugging

How to add containers to a Kubernetes pod at runtime

I have a number of Jobs running on k8s.
These jobs run a custom agent that copies some files and sets up the environment for a user-provided (trusted) container to run.
This agent runs alongside the user container, captures the logs, waits for the container to exit, and processes the generated results.
To achieve this, we mount Docker's socket /var/run/docker.sock and run as a privileged container, and from within the agent we use docker-py to interact with the user container (setup, run, capture logs, terminate).
This works almost fine, but I'd consider it a hack. Since the user container was created by calling Docker directly on a node, k8s is not aware of its existence. This has been causing trouble, since our monitoring tools interact with k8s and don't get visibility into these stand-alone user containers. It also makes pod scheduling harder to manage, since the limits (cpu/memory) for the user container are not accounted for in the pod's requests.
I'm aware of init containers, but these don't quite fit this use case, since we want to keep the agent running and monitoring the user container until it completes.
Is it possible for a container running in a pod to request that Kubernetes add additional containers to the same pod the agent is running in? And if so, can the agent also request that Kubernetes remove the user container at will (e.g. when a certain custom condition is met)?
From this GitHub issue, it seems that the answer is that adding containers to (or removing them from) a pod is not possible, since the container list in the pod spec is immutable.
In Kubernetes 1.16 there is an alpha feature that allows the creation of ephemeral containers, which can be "added" to running pods. Note that this requires a feature gate to be enabled on the relevant components, e.g. the kubelet. This may be hard to enable on the control plane for cloud-provider-managed services such as EKS.
API Reference 1.16
Simple tutorial
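For illustration, on clusters where the feature is enabled (and, in later releases, where kubectl debug is available), attaching an ephemeral container looks roughly like this, with the pod and container names as placeholders:
kubectl debug -it <pod-name> --image=busybox:1.28 --target=<container-name>
This starts a new ephemeral busybox container inside the running pod, targeting the named container's process namespace so you can inspect it.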
I don't think you can alter a running pod like that, but you can certainly define your own pod and run it programmatically using the API.
What I mean is, you should define a pod with the user container and whatever other containers you wish, and run it as a unit. It's possible you'll need to play around with liveness checks to have post-processing completed after your user container dies.
You can share data between multiple containers in a pod using shared volumes. This would let your agent container read from log files written by the user container, and drop config files into the shared volume for setup.
This way you could run the user container and the agent container as a Job, with both containers in the pod. When both containers exit, the Job will be completed.
You seem to indicate above that you are manually terminating the user container. That wouldn't be supported via a shared volume unless you did something like having the user container terminate when it sees a particular file appear on the shared volume.
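A minimal sketch of such a Job, with the image names as placeholders:
apiVersion: batch/v1
kind: Job
metadata:
  name: user-run
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
      - name: shared
        emptyDir: {}
      containers:
      - name: agent
        image: registry.example.com/agent:latest
        volumeMounts:
        - name: shared
          mountPath: /shared
      - name: user
        image: registry.example.com/user-container:latest
        volumeMounts:
        - name: shared
          mountPath: /shared
Both containers see /shared, so the agent can drop setup files there before the user workload reads them, and collect whatever logs or results the user container writes back.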
Is it possible for a container running in a pod to request that Kubernetes add additional containers to the same pod the agent is running in? And if so, can the agent also request that Kubernetes remove the user container at will (e.g. when a certain custom condition is met)?
I'm not aware of any way to add containers to existing Job pod definitions. There's no replicas option for Jobs so you couldn't hack it by changing replicas from 0->1 like you potentially could on a Deployment.
I'm not aware of any way to use kubectl to delete a container but not the whole pod. See kubectl delete.
If you want to kill the user container (rather than having it run to completion), you'll have to get onto the host and run docker kill <sha> on the user container. Make sure to set .spec.template.spec.restartPolicy = "Never" on the pod, or k8s will restart it.
I'd recommend:
Having a shared volume to transfer logs to the agent and so the agent can set up the user container
Making user containers expect to exit on their own and read configs from the shared volume
I don't know what workloads you are running or how users build their containers, so this may not be possible; if you're not able to dictate how users build their containers, the above may not work.
Another option is providing a binary that acts as a command API in the user container. This binary could accept commands like "setup", "run", "terminate", and "transfer logs" via RPC, and it would be the main process in the user's Docker container.
Then you could make the build process for users something like:
FROM your-container-with-binary:latest
# put whatever you want in this container and set ENV JOB_PATH=/path/to/executable/code
# (or put the code in a specific location)
Lots of moving parts to this whichever way you make it happen.
You can inject containers into pods dynamically via: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. The controllers consist of the list below, are compiled into the kube-apiserver binary, and may only be configured by the cluster administrator. In that list, there are two special controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. These execute the mutating and validating (respectively) admission control webhooks which are configured in the API.
Admission controllers may be “validating”, “mutating”, or both. Mutating controllers may modify the objects they admit; validating controllers may not.
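Concretely, the usual approach is a mutating webhook that patches Pod specs at creation time to add your sidecar. A rough sketch of the registration object (the service name, namespace, path, and CA bundle are placeholders for a webhook server you would have to write and deploy yourself):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector
webhooks:
- name: sidecar-injector.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: sidecar-injector
      namespace: default
      path: /mutate
    caBundle: <base64-encoded CA certificate>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
The webhook server behind that Service receives an AdmissionReview for each new Pod and returns a JSON patch that appends the extra container. Note this only applies when the Pod is created; it does not change Pods that are already running.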
And you can inject additional runtime requirements into pods via: https://kubernetes.io/docs/concepts/workloads/pods/podpreset/
A Pod Preset is an API resource for injecting additional runtime requirements into a Pod at creation time. You use label selectors to specify the Pods to which a given Pod Preset applies.
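A sketch of what a PodPreset looked like (this was an alpha resource, settings.k8s.io/v1alpha1, and has since been removed from Kubernetes, so treat it as illustrative only; the labels and values are placeholders):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-cache
spec:
  selector:
    matchLabels:
      role: user-job
  env:
  - name: CACHE_HOST
    value: "redis.default.svc.cluster.local"
  volumeMounts:
  - name: cache-volume
    mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}
Pods created with the label role: user-job in the preset's namespace would get the extra environment variable and volume merged in at admission time; like the webhook approach above, it does not modify Pods that already exist.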
