Deleting Deployment does not delete its ReplicaSets/Pods in Kubernetes AKS - docker

I am deploying a new deployment after making changes to my Kubernetes service, but I am facing a strange issue. When I delete the deployment, it is deleted fine, but its replica sets and pods are not deleted. Therefore, after applying the deployment again, new replica sets and pods are created. The newly created pods then throw a "FailedScheduling" error with the message "0/1 nodes are available: 1 Too many pods.", and that is why my new changes are not reflected.
These are the commands I am using:
kubectl delete -f render.yaml
kubectl apply -f render.yaml
My YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: renderdev-deployment
  labels:
    app: renderdev-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: renderdev-deployment
  template:
    metadata:
      labels:
        app: renderdev-deployment
    spec:
      containers:
        - name: renderdev-deployment
          image: renderdev.azurecr.io/renderdev:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: azuresquarevfiles
              mountPath: /mnt/azuresquarevfiles
      volumes:
        - name: azuresquarevfiles
          azureFile:
            secretName: azure-secret
            shareName: videos
            readOnly: false
When I delete the deployment, it should delete its replica sets and pods as well, but it does not. What could be the issue? Do I have to delete the replica sets and pods manually?
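For reference, a minimal way to check whether the old replica sets and pods really are left behind, and to force a foreground cascading delete, could look like this (the label selector is taken from the YAML above; --cascade=foreground assumes kubectl v1.20 or newer, older clients use --cascade=true):

# List ReplicaSets and Pods that carry the deployment's label
kubectl get rs,pods -l app=renderdev-deployment

# Delete and wait for dependents to be removed before returning
kubectl delete -f render.yaml --cascade=foreground

# Confirm nothing is left before re-applying
kubectl get rs,pods -l app=renderdev-deployment
kubectl apply -f render.yaml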

Related

kubernetes elastic all-in-one fails on arm64 machines?

I was working on setting up ELK on a Kubernetes cluster. It works on my MacBook Pro for tests, but when I tried to do it on my Ubuntu arm64 machines clustered together, it fails.
When I noticed it was giving exec errors, I immediately knew it was failing to run an arm64 variant, as I had a similar issue with some containers I was using for different projects and just needed to use buildx to add arm64 support.
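For reference, checking whether an image even ships an arm64 variant, and the generic buildx pattern for publishing one, looks roughly like this (the operator image tag is assumed from the 1.3.0 all-in-one; the build example uses placeholder names, not Elastic's actual build setup):

# Inspect the manifest to see which architectures the image provides
docker manifest inspect docker.elastic.co/eck/eck-operator:1.3.0 | grep architecture

# Generic multi-arch build and push with buildx (placeholder image name)
docker buildx build --platform linux/amd64,linux/arm64 -t myregistry/myimage:tag --push .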
Anyway, this is my current flow. Join me on an adventure.
Given a fresh Ubuntu install on an arm64 Raspberry Pi 4 (4 GB).
Update, upgrade, and install kubeadm, kubectl, etc. I set up a second machine, so now I have a cluster of size 2. (Sweet! I'm proud so far!)
I go to the k8s website and grab the all-in-one.
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml
Now that all the Kubernetes is set up, I should be able to launch my Elasticsearch pod, and I do. I do a kubectl get elasticsearch to see my new pod. It shows the name but no state.
Time to see what's up: kubectl get pods --all-namespaces
BUT WAIT. What is this, elastic-operator-0? Interesting, never used THIS before. BUT it exists on my machine and on my Pro, so it must have some value. Wait. It's in an indefinite CrashLoopBackOff. Interesting. Attempts to describe it or get logs failed. I realized it is giving exec errors, which I know is from an architecture mismatch.
So this leads me to now.
Desired end state:
I am trying to install Elasticsearch, Kibana, and Logstash.
Elasticsearch AND Kibana aren't built off of images, but instead off the types in the all-in-one, elasticsearch.k8s.elastic.co and kibana.k8s.elastic.co respectively. Logstash, though, is built off of a Docker container: docker.elastic.co/logstash/logstash:7.9.2
So here is my conundrum. How do I get this back up and functional? It seems that Kibana and Elasticsearch are not developing state (red, green, or otherwise) until this elastic-operator-0 is up and running.
I am trying to trim and clean this all up so that it works again. I have no problem with removing everything installed with the all-in-one and then just doing tweaks, but I'm not sure how much additional work that would be.
Below is my sample YAML file.
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: elasticsearch
spec:
  version: 7.9.2
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: kibana
spec:
  version: 7.9.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.conf: |
    input { }
    filter { }
    output {
      elasticsearch {
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
        cacert => '/etc/logstash/certificates/ca.crt'
        index => "sample-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: eck-logstash
      app.kubernetes.io/component: logstash
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eck-logstash
        app.kubernetes.io/component: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.9.2
          env:
            - name: ES_HOSTS
              value: "https://elasticsearch-es-http.default.svc:9200"
            - name: ES_USER
              value: "elastic"
            - name: CUSTOM_ENV_TEST
              value: "Helloworld"
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-es-elastic-user
                  key: elastic
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: pipeline-volume
              mountPath: /usr/share/logstash/pipeline
            - name: ca-certs
              mountPath: /etc/logstash/certificates
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: logstash-config
        - name: pipeline-volume
          configMap:
            name: logstash-pipeline
        - name: ca-certs
          secret:
            secretName: elasticsearch-es-http-certs-public

kubernetes deploy plugin in Jenkins doesn't update pod

I use the kubernetes-cd plugin in Jenkins (https://plugins.jenkins.io/kubernetes-cd/) to deploy my application successfully.
But I have a problem: when I re-run the job, Jenkins doesn't update my pod (it doesn't delete and re-create the pod), so my code changes aren't applied. Only after I delete the pod manually using kubectl commands in the Kubernetes cluster and re-run the job do the changes take effect.
Below is my YAML file. Do you know how to fix this?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tds-upload
  name: tds-upload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tds-upload
  template:
    metadata:
      labels:
        app: tds-upload
    spec:
      containers:
        - image: dev-master:5000/tds-upload:1.0.0
          imagePullPolicy: Always
          name: tds-upload
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tds-upload
  name: tds-upload
spec:
  ports:
    - nodePort: 31313
      port: 8889
      protocol: TCP
      targetPort: 8889
  selector:
    app: tds-upload
  type: NodePort
There are different ways to make Kubernetes deploy new changes.
kubectl rollout restart deployment myapp
This is the current way to trigger a rolling update; it leaves the old replica sets in place for other operations provided by kubectl rollout, such as rollbacks.
kubectl patch deployment my-deployment -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"
Here you can use any name and any value for the label, as long as the value changes with each build.
You can use the kubectl CLI plugin of Jenkins to execute the commands above.
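For example, a deploy step in the Jenkins job could apply the manifest and then force a rollout, roughly like this (the manifest file name is an assumption; the deployment name comes from the YAML above, and rollout restart needs kubectl v1.15 or newer):

# Apply the (possibly unchanged) manifest, then force new pods to be created
kubectl apply -f tds-upload.yaml
kubectl rollout restart deployment tds-upload
kubectl rollout status deployment tds-upload --timeout=120s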

Postgres running on Kubernetes loses data upon Pod recreation or cluster reboot

I have the postgres container running in a Pod on GKE and a PersistentVolume set up to store the data. However, all of the data in the database is lost if the cluster reboots or if the Pod is deleted.
If I run kubectl delete pod <postgres_pod> to delete the existing Pod, the newly created replacement Pod (created by Kubernetes) does not have the data that the database had before the Pod was deleted.
Here are the YAML files I used to deploy Postgres.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-storage
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-volume-claim
spec:
  storageClassName: custom-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11.5
          resources: {}
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "dbname"
            - name: POSTGRES_USER
              value: "user"
            - name: POSTGRES_PASSWORD
              value: "password"
          volumeMounts:
            - mountPath: /var/lib/postgresql/
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-volume-claim
I double checked that the persistentVolumeReclaimPolicy has value Retain.
What am I missing?
Is the cluster creating a new volume each time you delete a pod? Check with kubectl get pv.
Is this a multi-zone cluster? Your storage class is not provisioning regional disks, so you might be getting a new disk when the pod moves from one zone to another.
Possibly related to your problem, the postgres container reference recommends mounting at /var/lib/postgresql/data/pgdata and setting the PGDATA env variable:
https://hub.docker.com/_/postgres#pgdata
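A minimal sketch of that change, keeping the volume and claim names from the deployment above, would only touch the container spec:

# Additions/changes inside the postgres container spec; everything else stays as above
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgresdb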

Volume mounting in Jenkins on Kubernetes

I'm trying to set up Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.
Here's my deployment.yml file. The image is based off jenkins/jenkins.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
However, if I then push a new container to my image repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password; none of my Jenkins jobs are there, no plugins, etc.).
kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
Am I misunderstanding how this volume mount is meant to work?
As an aside, I also have backup and restore scripts which backup the Jenkins home directory to s3, and download it again, but that's somewhat outside the scope of this issue.
You should use PersistentVolumes along with a StatefulSet instead of a Deployment resource if you wish your data to survive re-deployments/restarts of your pod.
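A minimal sketch of that approach, reusing the image and mount path from the deployment above (the storage size and the assumption of a headless Service named jenkins are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins            # assumes a headless Service named "jenkins" exists
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 30Gi        # placeholder size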
You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod. Every time you restart your deployment, the pod can move between kube hosts where that empty dir isn't present, so your data doesn't persist across restarts.
I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.
You'll need to configure a StorageClass for AWS. If you've provisioned k8s using something like kops, this will already be configured. You can confirm this by doing kubectl get storageclass - the provisioner should be configured as EBS:
NAME            PROVISIONER
gp2 (default)   kubernetes.io/aws-ebs
Then you need to specify a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
You can now use the PV claim in your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-data # must match the claim name from above

Cronjob in Kubernetes to restart (delete) the pod in a deployment

I am using Kubernetes to run a Docker service. This is a defective service that requires a restart every day. For multiple reasons we can't programmatically solve the problem, and just restarting the Docker container every day will do.
When I migrated to Kubernetes I noticed I can't do "docker restart [mydocker]", but as the container runs in a Deployment with the Recreate strategy, I just need to delete the pod to have Kubernetes create a new one.
Can I automate this task of deleting the Pod (or an alternative way to restart it) using a cron task in Kubernetes?
Thanks for any directions/examples.
Edit: My current deployment yml:
apiVersion: v1
kind: Service
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  ports:
    - port: 9080
      targetPort: 9080
      nodePort: 30181
  selector:
    app: et-rest
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: et-rest
        tier: frontend
    spec:
      containers:
        - image: et-rest-image:1.0.21
          name: et-rest
          ports:
            - containerPort: 9080
              name: et-rest
          volumeMounts:
            - name: tz-config
              mountPath: /etc/localtime
      volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Europe/Madrid
You can use a scheduled job pod:
A scheduled job pod has built-in cron behavior, making it possible to restart jobs; combined with the time-out behavior, this gives you the required behavior of restarting your app every X hours.
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * ?"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400   # 3600*24, i.e. one day
      template:
        spec:
          restartPolicy: OnFailure   # required for Job pods
          containers:
            - name: yourapp
              image: yourimage
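Note that batch/v2alpha1 ScheduledJob was later renamed CronJob. On a current cluster, an equivalent hedged sketch is a CronJob that restarts the Deployment from the question via kubectl (the schedule, the service-account name, and the choice of a kubectl image are assumptions; the service account needs RBAC permission to patch deployments):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-et-rest
spec:
  schedule: "0 4 * * *"              # once a day at 04:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-restarter   # placeholder; must be allowed to patch deployments
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest          # any image that ships kubectl works
              command:
                - kubectl
                - rollout
                - restart
                - deployment/et-rest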
