I deployed a .NET application on AKS with a Windows node pool. I want to view the file structure inside the AKS pod. Is there any tool for that, or any other suggestion?
I don't have a tool for this, but I do have a suggestion: if you are using Kubernetes, don't log to files inside your Pods.
Your application should send logs to STDOUT and STDERR, and you can scrape those logs with a tool like Fluent Bit, Fluentd, or Promtail and ship them to a central log solution such as Loki.
Another downside of your log-file approach is that if you don't have a persistent volume for your pod, it will use an emptyDir, i.e. an ephemeral volume. This also means that Kubernetes may evict your pod if the node reaches roughly 85% of its storage capacity (the default kubelet eviction thresholds).
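If you want to see how close a node is to that limit, you can check its DiskPressure condition directly. A minimal sketch, assuming a Unix-like shell on your workstation; the node name is a placeholder:
# List the nodes, then read the DiskPressure condition of one of them
kubectl get nodes
kubectl get node aks-win-nodepool-0 -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
# "True" means the kubelet has started evicting pods to reclaim disk space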
I found a simple way to view a pod's folder structure and file contents using PowerShell.
Run the command below; it will drop you into a PowerShell session inside the pod so you can execute commands there.
kubectl exec -it k8s-xm-cm-pod -n staging -- powershell
Reference: https://support.sitecore.com/kb?id=kb_article_view&sysparm_article=KB0137733
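If you only need to list a directory or read a single file, rather than open an interactive session, the same kubectl exec can run a one-off PowerShell command. A minimal sketch using the pod above; the paths are placeholders for your application's directories:
# List a directory tree inside the Windows container without an interactive session
kubectl exec k8s-xm-cm-pod -n staging -- powershell -Command "Get-ChildItem -Recurse C:\inetpub\wwwroot"
# Print the contents of a single file
kubectl exec k8s-xm-cm-pod -n staging -- powershell -Command "Get-Content C:\inetpub\wwwroot\web.config"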
Please let me know if anyone knows of other tools that show a pod's file structure.
I have installed Promtail/Loki using the Helm chart in my Kubernetes cluster by following the link below:
https://grafana.com/docs/loki/latest/installation/helm/
By default, it collects only container logs. I want to configure Promtail so that it can also collect application log files from inside the container.
Example:
I have an nginx pod with two log files, access.log and error.log, and I want to stream both of them to Loki.
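For what it's worth, the official nginx image avoids this problem by symlinking its log files to the container's stdout/stderr, so they land in the regular container log stream that Promtail already scrapes. A minimal sketch of the same trick, using the default nginx log paths (run it in the image build or entrypoint):
# Redirect nginx's file logs to the container's stdout/stderr
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log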
I need suggestions on how to capture container logs via stdout or stderr within a pod, for the following use case:
My pod contains three containers, and I want the third container to capture the logs using one of these logging options: Filebeat, Logstash, or Fluentd.
I don't want to save logs to files inside the containers.
Thanks in advance.
If you don't have to capture the logs from within the same pod as your containers, you can use ECK's Filebeat Kubernetes operator to automatically set up pods that ship container logs to Elasticsearch. Read the ECK documentation here: https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html
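For reference, a minimal sketch of installing the operator itself; the version number below is only an example, so check the ECK release notes for the current one:
# Install the ECK CRDs and the operator (2.9.0 is an example version)
kubectl create -f https://download.elastic.co/downloads/eck/2.9.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.9.0/operator.yaml
# Filebeat is then deployed as a Beat custom resource that the operator reconciles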
Can we interact with and troubleshoot containers inside Kubernetes without command-line access? Or is reading logs sufficient for debugging?
Is there any way to debug the containers without the command line (kubectl)?
Unfortunately, containers built FROM scratch are not simple to debug. The best you can do is add logging and telemetry to the container so that you don't have to debug it. The other option is to use minimal base images like busybox.
The K8s team has a proposal for a kubectl debug target-pod command, but it is not something you can use yet.
In the worst-case scenario you can try Scratch-debugger; it creates a busybox pod on the same node as the pod being debugged and calls Docker to inject the filesystem into the existing container.
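On newer Kubernetes versions, kubectl debug with ephemeral containers has since become available (GA in 1.25) and covers exactly this case. A minimal sketch, with placeholder pod and container names:
# Attach a busybox ephemeral container to a running scratch-based pod
kubectl debug -it my-scratch-pod --image=busybox:1.36 --target=my-scratch-container
# --target shares the target container's process namespace, so its filesystem
# is reachable under /proc/<pid>/root from inside the debug container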
You can set up access to the Kubernetes dashboard and make changes to the containers / read logs there.
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
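A minimal sketch of deploying the dashboard and reaching it from your workstation; the manifest version is an assumption, so check the dashboard releases for the current one:
# Deploy the Kubernetes dashboard (v2.7.0 is an example release)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Start a local proxy, then browse to
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy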
I am creating a Docker container (using docker run) in a Kubernetes environment by invoking a REST API.
I have mounted the host machine's docker.sock, and I am building an image and running it from the REST API.
Now I need to connect to this container from another container that was started by kubectl from a deployment.yml file.
But when I use kubectl describe pod <pod name>, the container created via the REST API is not there. So where is this container running, and how can I connect to it from another container?
Are you running the container in the same namespace as the one used by deployment.yml? One way to check is to run:
kubectl get pods --all-namespaces
If you cannot find the Docker container there, I would suggest the steps below:
docker ps -a (verify the status of the Docker container)
Ensure there are no permission errors when mounting docker.sock
If there are permission errors, escalate privileges to the appropriate level
To answer the second question: connecting two containers should be possible by referencing the cluster DNS name in the format below:
"<servicename>.<namespacename>.svc.cluster.local"
I would also ask you to share the detailed steps, code, and errors (if there are any) so I can better answer the question.
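You can sanity-check the DNS format above from any running pod whose image includes nslookup; the pod and service names below are placeholders:
# Resolve a Service's cluster DNS name from inside a running pod
kubectl exec -it my-client-pod -- nslookup myservice.staging.svc.cluster.local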
You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually docker run (or equivalent), and as you note, normal administrative calls like kubectl get pods won't see it; the CPU and memory used by the container won't be accounted for by the node, which could cause the node to become over-utilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider it will be hard to make your container accessible at all, much less from a pod running on a different node.
A process running in a pod can access the Kubernetes API directly, though, and all of the official client libraries are aware of the in-cluster configuration conventions this uses. This means you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, servicename.namespacename.svc.cluster.local is a valid DNS name that reaches the Pods connected to the Service.)
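As a concrete illustration of in-cluster API access: every pod gets a service account token and CA certificate mounted by default, so even plain curl works. A minimal sketch; the namespace is a placeholder, and the pod's service account needs RBAC permission to list Jobs:
# Call the Kubernetes API from inside a pod using the mounted service account credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/apis/batch/v1/namespaces/default/jobs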
You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (e.g., RabbitMQ) and then launch a pool of workers that connect to it. You can control the size of the worker pool using a Deployment. This is easier to develop, since it avoids a hard dependency on Kubernetes, and easier to manage, since it prevents a flood of dynamic jobs from overwhelming your cluster.
I can see the logs for a particular pod by running kubectl logs podName. I have also seen that there is a --log-dir flag, but it doesn't seem to work. Is there some configuration I can change so that logs are saved to a particular file on my host machine?
kubectl logs pod_name > app.log
For example, if you have a pod named app-6b8bdd458b-kskjh and you want to save its logs to a file named app.log, the command would be:
kubectl logs app-6b8bdd458b-kskjh > app.log
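A few standard variations of the same command are often handy; the container name below is a placeholder:
kubectl logs -f app-6b8bdd458b-kskjh > app.log              # follow the stream and keep appending
kubectl logs --previous app-6b8bdd458b-kskjh > crash.log    # logs from the previously crashed instance
kubectl logs app-6b8bdd458b-kskjh -c sidecar > sidecar.log  # a specific container in a multi-container pod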
Note: This may not be a direct answer to how to make the logs go to the host machine, but it does provide a way to get logs onto the machine. This may not be the best answer, but it's what I know, so I'm sharing it. Please answer if you have a more direct solution.
You can do this using Fluentd. Here's a tutorial about how to set it up. You can configure it to write to a file in a mounted host directory, or have it write to S3. It also allows you to aggregate all the logs from all your containers, which may or may not be useful. Combined with Elasticsearch and Kibana you can put together a pretty strong logging stack. It will depend on your use case, of course.
Grafana Loki lets you collect, store, and access pod logs in a developer-friendly way.