Hi, I'm getting CrashLoopBackOff in my container.
The Docker image runs fine on my laptop, but I can't run it in Kubernetes.
This is my deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
  labels:
    app: jobstreet
spec:
  selector:
    matchLabels:
      app: jobstreet
      role: master
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: jobstreet
        role: master
        tier: frontend
    spec:
      containers:
      - name: master
        image: parthi922/reactapp:v2
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: react-app
  labels:
    app: jobstreet
    role: master
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 3000
  selector:
    app: jobstreet
    tier: frontend
When I run kubectl logs on the pod, this is what I get:
standard_init_linux.go:219: exec user process caused: exec format error
The error message you get most likely means that the image was built for a different architecture than your nodes.
You can check it using the following command:
$ docker image inspect parthi922/reactapp:v2 | grep "Architecture"
"Architecture": "arm64",
Make sure your k8s nodes are arm64, or rebuild your image for the architecture your nodes actually use.
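If your nodes are amd64, one option (a hedged sketch, assuming Docker Buildx is set up and you can push to the parthi922/reactapp repository) is to rebuild and push a multi-architecture image and then redeploy:
$ docker buildx build --platform linux/amd64,linux/arm64 -t parthi922/reactapp:v2 --push .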
Related
I'm trying to run a Redis deployment file, but I'm getting an issue with the Redis health check.
Here is deployment.yaml
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
I saw another question where someone mentioned adding management.health.redis.enabled=false, but I'm not sure where to add this setting. Can someone please point me in the right direction? Help is appreciated. Thanks.
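For what it's worth, management.health.redis.enabled=false is a Spring Boot Actuator property, not a kubectl command, so it belongs in the configuration of the application that performs the Redis health check rather than in the Redis manifest. A hedged sketch (the container name and image are placeholders for that Spring Boot app) passes it as an environment variable via Spring's relaxed binding:
spec:
  containers:
  - name: my-spring-app            # placeholder: the app whose health check touches Redis
    image: my-spring-app:latest    # placeholder image
    env:
    - name: MANAGEMENT_HEALTH_REDIS_ENABLED   # relaxed binding of management.health.redis.enabled
      value: "false"
Alternatively, the same line can go into that app's application.properties.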
I have a deployment.yml file which looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
        imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), because I am not sure how to initialize them and assign values to them here.
If I specify something like below
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
it works with the exact, static names. But I don't want to hard-code the registry and repository name.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
      - image: TEST_IMAGE_NAME
        name: test-image
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 443
          name: https
Then, in a CI step, run a sed command in an ubuntu image (the example below uses Google Cloud Build syntax) like:
steps:
- id: 'set test core image in yamls'
  name: 'ubuntu'
  args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
The above will resolve your issue: the sed command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
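Since the question uses Azure DevOps rather than Cloud Build, a hedged equivalent (assuming RegistryName and RepositoryName are defined as pipeline variables, and reusing the manifest path from the question's task) would be a bash step placed before the KubernetesManifest task:
- script: |
    sed -i "s|TEST_IMAGE_NAME|$(RegistryName)/$(RepositoryName):$(Build.BuildNumber)|g" $(Pipeline.Workspace)/manifests/deployment.yml
  displayName: Replace image placeholder in manifest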
Option 2: Kustomize
If you want to do it with Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
- name: myapp
  newName: registry.gitlab.com/jkpl/kustomize-demo
  newTag: IMAGE_TAG
And the shell script:
#!/usr/bin/env bash
set -euo pipefail
# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi
sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repository: https://gitlab.com/jkpl/kustomize-demo
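As an alternative to templating the kustomization with sed, the kustomize CLI can rewrite the image reference itself; a hedged sketch (the image name myapp matches the kustomization above, the directory is a placeholder):
cd location
kustomize edit set image myapp=registry.gitlab.com/jkpl/kustomize-demo:${IMAGE_TAG}
kubectl apply -k .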
My goal is to have a pod with a working Kubectl binary inside.
Unfortunately, every kubectl image from Docker Hub that I booted using a basic YAML resulted in CrashLoopBackOff or similar.
Has anyone got some YAML (deployment, pod, etc.) that would get me my kubectl?
I tried a bunch of images with this basic YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl-demo
  labels:
    app: deploy
    role: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy
      role: backend
  template:
    metadata:
      labels:
        app: deploy
        role: backend
    spec:
      containers:
      - name: kubectl-demo
        image: <SOME_IMAGE>
        ports:
        - containerPort: 80
Thx
Or, you can do this. It works in my context, with Kubernetes on VMs, where I know where the kubeconfig file is. You would need to make the necessary changes to make it work in your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      role: kubectl
  template:
    metadata:
      labels:
        role: kubectl
    spec:
      containers:
      - image: viejo/kubectl
        name: kubelet
        tty: true
        securityContext:
          privileged: true
        volumeMounts:
        - name: kube-config
          mountPath: /root/.kube/
      volumes:
      - name: kube-config
        hostPath:
          path: /home/$USER/.kube/
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
This is the result:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
kubectl-cb8bfc6dd-nv6ht 1/1 Running 0 70s
$ kubectl exec kubectl-cb8bfc6dd-nv6ht -- kubectl get no
NAME STATUS ROLES AGE VERSION
kubernetes-1-17-master Ready master 16h v1.17.3
kubernetes-1-17-worker Ready <none> 16h v1.17.3
As Suren already explained in the comments, kubectl is not a daemon, so it will run, exit, and cause the container to restart.
There are a couple of workarounds for this. One of them is to use the sleep command with the infinity argument. This keeps the Pod alive, prevents it from restarting, and allows you to exec into it.
Here's an example of how to do that:
spec:
  containers:
  - image: bitnami/kubectl
    command:
    - sleep
    - "infinity"
    name: kctl
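Once the Pod is running, a quick usage check looks like the following sketch (the pod name is a placeholder, and any in-cluster calls still need appropriate RBAC for the Pod's ServiceAccount):
$ kubectl exec -it <kubectl-pod-name> -- kubectl get pods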
Let me know if this helps.
I am using Kubernetes to run a Docker service. This is a defective service that requires a restart every day. For multiple reasons we can't fix the problem programmatically, and simply restarting the container every day will do.
When I migrated to Kubernetes, I noticed I can't do "docker restart [mydocker]", but since the container runs in a Deployment with the Recreate strategy, I just need to delete the Pod to have Kubernetes create a new one.
Can I automate this task of deleting the Pod (or an alternative way of restarting it) using a CronJob in Kubernetes?
Thanks for any directions/examples.
Edit: My current deployment yml:
apiVersion: v1
kind: Service
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  ports:
  - port: 9080
    targetPort: 9080
    nodePort: 30181
  selector:
    app: et-rest
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: et-rest
        tier: frontend
    spec:
      containers:
      - image: et-rest-image:1.0.21
        name: et-rest
        ports:
        - containerPort: 9080
          name: et-rest
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
You can use a scheduled job pod:
A scheduled job pod has built-in cron behavior, making it possible to restart jobs; combined with the time-out behavior, this gives you the required behavior of restarting your app every X hours.
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400   # 3600*24; YAML does not evaluate arithmetic
      template:
        spec:
          containers:
          - name: yourapp
            image: yourimage
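Note that the batch/v2alpha1 ScheduledJob API was later renamed to CronJob. On a current cluster, a hedged sketch of the same idea (the ServiceAccount name is a placeholder and must be granted RBAC permission to delete Pods) would delete the defective Pod once a day and let the Recreate Deployment bring it back up:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-et-rest
spec:
  schedule: "0 4 * * *"                      # every day at 04:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-restarter  # placeholder: needs RBAC to delete pods
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - /bin/sh
            - -c
            - kubectl delete pod -l app=et-rest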
I have my controller.yaml that looks like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hmrcaction
  labels:
    name: hmrcaction
spec:
  replicas: 1
  selector:
    name: hmrcaction
  template:
    metadata:
      labels:
        name: hmrcaction
        version: 0.1.4
    spec:
      containers:
      - name: hmrcaction
        image: ccc-docker-docker-release.someartifactory.com/hmrcaction:0.1.4
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: fff-artifactory
and service yaml that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: hmrcaction
  labels:
    name: hmrcaction
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 9000
  selector:
    name: hmrcaction
and I have a Kubernetes cluster, so I wanted to use this RC to deploy my Docker image to the cluster. I did it like this:
kubectl create -f controller.yaml
but I get some weird status; when I run the command kubectl get pods I get:
NAME READY STATUS RESTARTS AGE
hmrcaction-k9bb6 0/1 ImagePullBackOff 0 40s
What is this? Before, the status was ErrImagePull...
Please help :)
Thanks!
kubectl describe pods -l name=hmrcaction should give you more useful information.
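If the describe output points at an authentication failure against the private Artifactory registry, a common cause of ImagePullBackOff is a missing or stale pull secret; a hedged sketch for (re)creating the secret referenced by imagePullSecrets (credentials are placeholders):
kubectl create secret docker-registry fff-artifactory \
  --docker-server=ccc-docker-docker-release.someartifactory.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>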