Logs of pods missing from /var/logs/ in Kubernetes - docker

I have a cluster that has numerous services running as pods from which I want to pull logs with fluentd. All services show logs when doing kubectl logs service. However, some logs don't show up in those folders:
/var/log
/var/log/containers
/var/log/pods
although the other containers are there. The containers that ARE there are created as a Cronjob, or as a Helm chart, like a MongoDB installation.
The containers that aren't logging are created by me with a Deployment file like so:
kind: Deployment
metadata:
  namespace: {{ .Values.global.namespace | quote }}
  name: {{ .Values.serviceName }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.serviceName }}
  template:
    metadata:
      labels:
        app: {{ .Values.serviceName }}
      annotations:
        releaseTime: {{ dateInZone "2006-01-02 15:04:05Z" (now) "UTC" | quote }}
    spec:
      containers:
      - name: {{ .Values.serviceName }}
        # local: use skaffold, dev: use passed tag, test: use released version
        image: {{ .Values.image }}
        {{- if (eq .Values.global.env "dev") }}:{{ .Values.imageConfig.tag }}{{ end }}
        imagePullPolicy: {{ .Values.global.imagePullPolicy }}
        envFrom:
        - configMapRef:
            name: {{ .Values.serviceName }}-config
        {{- if .Values.resources }}
        resources:
          {{- if .Values.resources.requests }}
          requests:
            memory: {{ .Values.resources.requests.memory }}
            cpu: {{ .Values.resources.requests.cpu }}
          {{- end }}
          {{- if .Values.resources.limits }}
          limits:
            memory: {{ .Values.resources.limits.memory }}
            cpu: {{ .Values.resources.limits.cpu }}
          {{- end }}
        {{- end }}
      imagePullSecrets:
      - name: {{ .Values.global.imagePullSecret }}
      restartPolicy: {{ .Values.global.restartPolicy }}
{{- end }}
and a Dockerfile CMD like so:
CMD ["node", "./bin/www"]
One assumption might be that the CMD doesn't pipe to STDOUT, but why would the logs show up in kubectl logs then?

This is how I would proceed to find out where a container is logging:
Identify the node on which the Pod is running with:
kubectl get pod pod-name -owide
SSH into that node; you can then check which logging driver the node is using with:
docker info | grep -i logging
If the output is json-file, then the logs are being written to file as expected. If it is something different, the behavior depends on what that driver does (there are many drivers; they could write to journald, for example, or to other destinations).
If the logging driver writes to file, you can check the current output of a specific Pod once you know the container ID of that Pod. To get it, run the following on a control-plane node:
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
(if there are more containers in the same pod, the index to use may vary, depending on which container you want to inspect)
The extracted ID will look something like docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c. You can then inspect the container with Docker on the node where it is running: remove the docker:// prefix from the ID, SSH again into the node you identified before, and run:
docker inspect container-id | grep -i logpath
Which should output where the container is actively writing its logs to file.
In my case, the particular container I tried this procedure on is currently logging to:
/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
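As a follow-up check on the same node: with the docker json-file driver, the kubelet normally creates a symlink chain from /var/log/containers through /var/log/pods down to that file, so you can verify whether the chain exists for the missing pod. A small sketch (pod-name and the log file name are placeholders):
ls -l /var/log/containers/ | grep pod-name
# each matching entry should be a symlink into /var/log/pods/..., which in
# turn points at the <container-id>-json.log file under /var/lib/docker/containers/
readlink -f /var/log/containers/<entry-from-above>.log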

Related

Build Kubernetes cluster with spark master and spark workers

I've built a custom-spark docker image with the following dependencies:
Python 3.6.9
Pip 1.18
Java OpenJDK 64-Bit Server VM, 1.8.0_212
Hadoop 3.2
Scala 2.13.0
Spark 3.0.3
which I pushed to Docker Hub: https://hub.docker.com/r/redaer7/custom-spark
The Dockerfile, spark-master and spark-worker files are stored under: https://github.com/redaER7/Custom-Spark
I verified that /spark-master and /spark-worker work well when creating a container from that image:
docker run -it -d --name spark_1 redaer7/custom-spark:1.0 bash
docker exec -it $CONTAINER_ID /bin/bash
My issue is when I try to build a K8s cluster from that image, with the following YAML file for the spark master pod:
kubectl create namespace sparkspace
kubectl -n sparkspace create -f ./spark-master-deployment.yaml
#yaml file
kind: Deployment
apiVersion: apps/v1
metadata:
  name: spark-master
spec:
  replicas: 1 # should always be one
  selector:
    matchLabels:
      component: spark-master
  template:
    metadata:
      labels:
        component: spark-master
    spec:
      containers:
      - name: spark-master
        image: redaer7/custom-spark:1.0
        imagePullPolicy: IfNotPresent
        command: ["/spark-master"]
        ports:
        - containerPort: 7077
        - containerPort: 8080
        resources:
          # limits:
          #   cpu: 1
          #   memory: 1G
          requests:
            cpu: 1 # 100m
            memory: 1G
I get CrashLoopBackOff when viewing the pod with kubectl -n sparkspace get pods.
When inspecting it with kubectl -n sparkspace describe pod $Pod_Name:
Any clue about that first warning? Thank you.
I solved it simply by forcing a re-pull of the image:
imagePullPolicy: Always
The reason was that I had edited the Docker image locally and then pushed it to Docker Hub for later deployment, but I had not changed the following in the config file:
imagePullPolicy: IfNotPresent
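In the Deployment from the question, that change would look roughly like this (only the relevant container fields shown):
containers:
- name: spark-master
  image: redaer7/custom-spark:1.0
  imagePullPolicy: Always # always re-pull instead of reusing a stale cached image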

How do I create multiple containers and run a different command in each using k8s

I have a Kubernetes Job, job.yaml :
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
I need to run the command with different parameters. I have about 30 parameters to run. I'm not sure what the best solution is here. I'm thinking of creating containers in a loop to run all the parameters. How can I do this? I want to run all the commands or containers simultaneously.
Some of the ways you could do this, beyond the solutions proposed in other answers, are the following:
With a templating tool like Helm, where you template the exact specification of your workload and then iterate over it with different values (see the example below)
Using the approaches from the official Kubernetes documentation on work queue topics:
Indexed Job for Parallel Processing with Static Work Assignment - alpha (a sketch of this approach follows right after this list)
Parallel Processing using Expansions
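A hedged sketch of the Indexed Job approach, reusing the image and script from the question (param-$JOB_COMPLETION_INDEX is an assumed mapping from the completion index to your 30 parameters, and Indexed completion mode must be available in your cluster version):
apiVersion: batch/v1
kind: Job
metadata:
  name: my-indexed-job
  namespace: my-namespace
spec:
  completions: 30      # one completion per parameter
  parallelism: 30      # run all of them at the same time
  completionMode: Indexed
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        # JOB_COMPLETION_INDEX (0..29) is injected in Indexed mode;
        # the script maps it to one of the 30 parameters
        command: ["sh", "-c", "run-vpn-script.sh /to/download/param-$JOB_COMPLETION_INDEX"]
      restartPolicy: Never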
Helm example:
Helm, in short, is a templating tool that allows you to template your manifests (YAML files). With it you can have multiple instances of Jobs, each with a different name and a different command.
Assuming that you've installed Helm by following this guide:
Helm.sh: Docs: Intro: Install
You can create an example Chart that you will modify to run your Jobs:
helm create chart-name
You will need to delete everything in chart-name/templates/ and clear the chart-name/values.yaml file.
After that you can create the values.yaml file that you will iterate over:
jobs:
  - name: job1
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(3)"']
    image: perl
  - name: job2
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(20)"']
    image: perl
templates/job.yaml
{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $jobs.name }}
  namespace: default # <-- FOR EXAMPLE PURPOSES ONLY!
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: {{ $jobs.image }}
        command: {{ $jobs.command }}
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
---
{{- end }}
Once you have the above files created, you can preview what will be applied to the cluster by running the following command (inside the chart-name folder):
$ helm template .
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job1
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(3)"]
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job2
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(20)"]
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
A side note #1!
This example will create X number of Jobs, each separate from the others. Please refer to the documentation on data persistence if the downloaded files need to be stored persistently (example: GKE).
A side note #2!
You can also add your namespace definition in the templates (templates/namespace.yaml) so it will be created before running your Jobs.
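For example, a minimal templates/namespace.yaml could look like this (my-namespace is just a placeholder name):
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace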
You can also install the above Chart by running (inside the chart-name folder):
$ helm install chart-name .
After that you should see 2 completed Jobs:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
job1-2dcw5 0/1 Completed 0 82s
job2-9cv9k 0/1 Completed 0 82s
And the output that they've created:
$ echo "one:"; kubectl logs job1-2dcw5; echo "two:"; kubectl logs job2-9cv9k
one:
3.14
two:
3.1415926535897932385
Additional resources:
Stackoverflow.com: Questions: Kubernetes creation of multiple deployment with one deployment file
In simpler terms, you want to run multiple commands. The following is a sample format for executing multiple commands in a pod:
command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
Applying this logic to your requirement of two different operations:
command: ["sh", "-c", "run-vpn-script.sh /to/download/this && run-vpn-script.sh /to/download/another"]
If you want to run the same command multiple times, you can deploy the same YAML multiple times by just changing the name.
You can use the sed command to replace the placeholder values in the YAML and apply the resulting YAML to the cluster to create the containers.
Example job.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        command: COMMAND # need to run this multiple times
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
Command to substitute the placeholder:
sed -i 's|COMMAND|["sh", "run-vpn-script.sh", "/to/download/this"]|' job.yaml
The above command replaces the COMMAND placeholder in job.yaml, and you can then apply the YAML to the cluster to create the container. You can do the same for other variables.
You can pass different parameters as needed into the command set in the YAML; a rough loop over several parameters is sketched below.
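A rough sketch of looping over parameters with sed (the /to/download/path-$i mapping is an assumption; it pipes the substituted YAML straight to kubectl and also renames my-job so each Job gets a unique name):
for i in 1 2 3; do
  sed -e "s|my-job|my-job-$i|" \
      -e "s|COMMAND|[\"sh\", \"run-vpn-script.sh\", \"/to/download/path-$i\"]|" job.yaml \
    | kubectl apply -f -
done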
You can also create multiple Jobs from the command line, for example:
kubectl create job test-job --from=cronjob/a-cronjob
https://www.mankier.com/1/kubectl-create-job
Pass other parameters into the command as needed.
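kubectl create job also accepts an image and a custom command directly, so a hedged sketch of creating one Job per parameter without any YAML (reusing the image and script from the question) could look like:
kubectl create job my-job-1 -n my-namespace \
  --image=gcr.io/project-id/my-image:latest \
  -- sh run-vpn-script.sh /to/download/this
Note that fields such as securityContext from the original YAML cannot be set this way.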
If you just want to run a Pod rather than a Job, you can also try:
kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/

Unable to pull public docker image packages from GitHub through Kubernetes

I created a sample Node.js project in GitHub and created a docker image for it. I uploaded the docker image as a package in the same repository. This is a public repo. I created a Kubernetes config YAML file with this image as the pod's image. Following is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  selector:
    matchLabels:
      component: node-server
  template:
    metadata:
      labels:
        component: node-server
    spec:
      containers:
      - name: node-server
        image: docker.pkg.github.com/lethalbrains/intense_omega/io_service:latest
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: dockerconfigjson-github-com
---
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  selector:
    component: node-server
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/inress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /api/
        backend:
          serviceName: server-cluster-ip-service
          servicePort: 3000
After I apply this file using kubectl and check the pod details, I get an ImagePullBackOff error.
I even tried the option of using a dockerconfigjson secret with a GitHub Personal Access Token, but still the same result.
Edit:
Added error message from pods describe
This seems to be an issue with the GitHub registry, which is being discussed here.
What I can recommend is to push the image to Docker Hub or to create a private repo, which you can read about at Using a private Docker Registry with Kubernetes.
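For reference, the pull secret referenced in the question's manifest (dockerconfigjson-github-com) is typically created along these lines; the username and token values here are placeholders:
kubectl create secret docker-registry dockerconfigjson-github-com \
  --docker-server=docker.pkg.github.com \
  --docker-username=<GITHUB_USERNAME> \
  --docker-password=<GITHUB_PERSONAL_ACCESS_TOKEN>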
There seems to be a workaround, but I have not tested it.
It's published by @sudomaxime and available here:
Here's a nasty little workaround for those who:
Don't mind losing blue/green deploys until this is resolved
Don't mind 10-15 sec app start-up time
Use docker swarm / docker stack deploys
Use CI scripts for deployment
In your CI scripts call:
$ docker stack rm {{ your_stack_name }}
$ until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done
$ docker stack deploy --with-registry-auth -c docker-compose.yml {{ your_stack_name }}
Basically you ask the Docker scheduler to stop all the services under the {{ your_stack_name }} orchestrator. A quirk of docker swarm is that docker stack rm returns immediately even if some services are not properly closed, which may cause networking errors when you try to deploy again. That's why we use a small inline script, until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done, to wait for the proper teardown.
Hope it saves a few folks some headaches. I guess a similar temporary fix will help you out.
This is quite a frustrating issue; for our apps that MUST use blue/green deploys we bought a private repo to fix the problem.

How to configure kubernetes (microk8s) to use local docker images?

I've built a docker image locally:
docker build -t backend -f backend.docker
Now I want to create a deployment with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  selector:
    matchLabels:
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: backend
        image: backend
        imagePullPolicy: IfNotPresent # This should be by default so
        ports:
        - containerPort: 80
kubectl apply -f file_provided_above.yaml works, but then I get the following pod statuses:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-deployment-66cff7d4c6-gwbzf 0/1 ImagePullBackOff 0 18s
Before that it was ErrImagePull. So, my question is: how do I tell it to use local docker images? Somewhere on the internet I read that I need to build images using microk8s.docker, but it seems to have been removed.
Found docs on how to use a private registry: https://microk8s.io/docs/working
First it needs to be enabled:
microk8s.enable registry
Then images are tagged and pushed to the registry:
docker tag backend localhost:32000/backend
docker push localhost:32000/backend
And then, in the above config, image: backend needs to be replaced with image: localhost:32000/backend.
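A minimal sketch of the whole flow (the backend.docker filename comes from the question; the trailing . build context and the final apply step are assumptions):
microk8s.enable registry
docker build -t localhost:32000/backend -f backend.docker .
docker push localhost:32000/backend
# update the Deployment to use image: localhost:32000/backend, then:
kubectl apply -f file_provided_above.yaml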

Why is pulling a private image in a Pod not working with the Kubernetes Registry addon?

I am very new to Kubernetes and I set up the Kubernetes Registry addon by just copying and pasting the YAML from Kubernetes Registry Addon, with only a small change in the ReplicationController to use emptyDir:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry-upstream
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry-upstream
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry-upstream
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        emptyDir: {}
Then I forward port 5000 as follows:
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=kube-registry-upstream \
    -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
    | grep Running | head -1 | cut -f1 -d' ')
$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
I can push my images fine as follows:
$ docker tag alpine localhost:5000/nurrony/alpine
$ docker push localhost:5000/nurrony/alpine
Then I write a Pod to test it, like below:
apiVersion: v1
kind: Pod
metadata:
  name: registry-demo
  labels:
    purpose: registry-demo
spec:
  containers:
  - name: registry-demo-container
    image: localhost:5000/nurrony/alpine
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
It throws an error:
Failed to pull image "localhost:5000/nurrony/alpine": image pull failed for localhost:5000/nurrony/alpine:latest, this may be because there are no credentials on this request. details: (net/http: request canceled)
Any idea why this is happening? Thanks in advance.
Most likely your proxy is not working.
The Docker Registry K8S addon comes with a DaemonSet that defines a registry proxy for every node that runs your kubelets. What I would suggest is to inspect those proxies, since they map the Docker Registry (K8S) Service to localhost:5000 on every node.
Please note that even if you have a green check mark on your registry proxies, that does not mean they work correctly. Open their logs and make sure that everything is working.
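Checking the proxy logs could look roughly like this; the k8s-app=kube-registry-proxy label is an assumption based on the upstream addon manifests, so adjust it to whatever labels your DaemonSet actually carries:
kubectl get pods -n kube-system -l k8s-app=kube-registry-proxy -o wide
kubectl logs -n kube-system <one-of-the-proxy-pod-names>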
If your proxy is configured and you are still getting this error, then most likely the environment variable REGISTRY_HOST inside kube-registry-proxy is wrong. Are you using DNS here like in the example? Is your DNS configured correctly? Does it work if you set this variable to the ClusterIP of your Service?
Also, please be aware that your RC labels need to match the SVC selectors, otherwise the Service cannot discover your Pods.
Hope it helps.
