I'm trying to run a Redis deployment file, but I'm getting an issue with the Redis health check.
Here is deployment.yaml:
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
I saw another question where someone mentioned adding management.health.redis.enabled=false, but I'm not sure where to add that setting. Can someone please point me in the right direction? Help is appreciated. Thanks.
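For context, management.health.redis.enabled=false is a Spring Boot Actuator property, so it belongs to the application that connects to Redis, not to the Redis deployment above. As a rough sketch, it could be set through Spring's relaxed binding as an environment variable on that application's container; the container name and image below are placeholders, not something from the question:

    spec:
      containers:
      - name: spring-app                          # assumed application container, not the Redis container above
        image: my-spring-app:latest               # placeholder image
        env:
        - name: MANAGEMENT_HEALTH_REDIS_ENABLED   # relaxed-binding form of management.health.redis.enabled
          value: "false"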
So I followed this tutorial that explains how to build containerized microservices in Golang, Dockerize them, and deploy them to Kubernetes:
https://www.youtube.com/watch?v=H6pF2Swqrko
I got to the point where I can access my app via the minikube IP (mine is 192.168.59.100).
I set up Kubernetes and currently have 3 working pods, but I cannot open my Golang app through Kubernetes with the URL that kubectl shows me: "192.168.59.100:31705...".
I have a lead: when I browse to "https://192.168.59.100:8443/", a 403 error comes up.
Here is my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
  - port: 80
    targetPort: 80
Your Service's selector tries to match pods with the label app.kubernetes.io/name: web, but your pods carry the label app: web, so they do not match. The selector on the Service must match the labels on the pods; since you use a Deployment, that means the same labels as in spec.template.metadata.labels.
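For example, keeping everything else the same, the Service finds the pods once its selector uses the same label as the pod template:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web        # matches spec.template.metadata.labels in the Deployment
  ports:
  - port: 80
    targetPort: 80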
@Szczad has correctly described the problem. I wanted to suggest a way of avoiding that problem in the future. Kustomize is a tool for building Kubernetes manifests, and it is built into the kubectl command. One of its features is the ability to apply a set of common labels to your resources, including correctly filling in the selectors in Services and Deployments.
If we simplify your Deployment to this (in deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
And your Service to this (in service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
And we place the following kustomization.yaml in the same directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: web
resources:
- deployment.yaml
- service.yaml
Then we can deploy this application by running:
kubectl apply -k .
And this will result in the following manifests:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        name: go-web-app
        ports:
        - containerPort: 80
As you can see here, the app: web label has been applied to the deployment, to the deployment selector, to the pod template, and to the service selector.
Applying the labels through Kustomize like this means that you only need to change the label in one place. It makes it easier to avoid problems caused by label mismatches.
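If you want to inspect the rendered manifests without applying them, kubectl can also print the Kustomize output for the current directory:

kubectl kustomize .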
Hi, I'm getting CrashLoopBackOff in my container.
The Docker image runs fine on my laptop, but I can't run it in Kubernetes.
This is my deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
  labels:
    app: jobstreet
spec:
  selector:
    matchLabels:
      app: jobstreet
      role: master
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: jobstreet
        role: master
        tier: frontend
    spec:
      containers:
      - name: master
        image: parthi922/reactapp:v2
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: react-app
  labels:
    app: jobstreet
    role: master
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 3000
  selector:
    app: jobstreet
    tier: frontend
When I check the pod logs with kubectl logs, this is what I get:
standard_init_linux.go:219: exec user process caused: exec format error
The error message you get probably means that the image was built for a different architecture than the one your nodes run on.
You can check it using the following command:
$ docker image inspect parthi922/reactapp:v2 | grep "Architecture"
"Architecture": "arm64",
Make sure your Kubernetes nodes are arm64, or build your image for the nodes' architecture.
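If the nodes are, say, amd64, one way to produce an image for that architecture (or a multi-architecture image) is docker buildx. This is only a sketch and assumes a buildx builder is configured and that you can push the tag from the question:

# Build and push the image for both amd64 and arm64
docker buildx build --platform linux/amd64,linux/arm64 -t parthi922/reactapp:v2 --push .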
I have installed nfs-provisioner in my Rancher cluster and created a persistent volume for my MongoDB. When I restart the server or upgrade the MongoDB container, all my data is lost. How can I fix this?
Here is my MongoDB configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-db
spec:
  selector:
    matchLabels:
      app: mongo-db
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo-db
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data          # references the volumeClaimTemplate below
          mountPath: /data/db
  # This is a key difference with StatefulSets:
  # a unique volume will be attached to each pod.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      # If no storageClassName is provided, the default storage class will be used
      # storageClassName: "standard"
      resources:
        requests:
          storage: 2Gi
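As a generic first check (assuming kubectl points at the Rancher cluster), it is worth confirming that the claims created from the volumeClaimTemplates are actually bound to NFS-backed volumes rather than to an ephemeral default class:

# Diagnostic commands only; resource names depend on the cluster.
kubectl get pvc            # expect Bound claims named data-mongo-db-0, data-mongo-db-1, ...
kubectl get pv             # the backing volumes and their storage class
kubectl get storageclass   # shows which class is the default (it should be the NFS provisioner)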
I am using Kubernetes to run a Docker service. This is a defective service that requires a restart every day. For multiple reasons we can't fix the problem programmatically, and simply restarting the container every day will do.
When I migrated to Kubernetes I noticed I can't do "docker restart [mydocker]", but since the Deployment uses the Recreate strategy, I just need to delete the pod to have Kubernetes create a new one.
Can I automate this task of deleting the pod, or an alternative way of restarting it, using a cron task in Kubernetes?
Thanks for any directions/examples.
Edit: my current deployment yml:
apiVersion: v1
kind: Service
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  ports:
  - port: 9080
    targetPort: 9080
    nodePort: 30181
  selector:
    app: et-rest
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: et-rest
        tier: frontend
    spec:
      containers:
      - image: et-rest-image:1.0.21
        name: et-rest
        ports:
        - containerPort: 9080
          name: et-rest
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
You can use a scheduled job pod: a scheduled job pod has built-in cron behavior, which makes it possible to re-run jobs. Combined with a timeout, this gives you the behavior you need: restarting your app every X hours.
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"            # every hour
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400 # 3600 * 24, i.e. one day
      template:
        spec:
          containers:
          - name: yourapp
            image: yourimage
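On newer clusters the same object exists as CronJob (batch/v1) rather than ScheduledJob. If you prefer to keep your existing Deployment and only automate the "delete/restart the pod" step from the question, a possible alternative is a CronJob that runs kubectl inside the cluster; the service account, its RBAC permissions, and the kubectl image below are assumptions that would have to be set up separately:

# Hypothetical CronJob that restarts the et-rest deployment once a day at 04:00.
# Assumes a "pod-restarter" ServiceAccount with permission to restart/patch the deployment,
# and uses bitnami/kubectl only as a placeholder image that ships kubectl.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-et-rest
spec:
  schedule: "0 4 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-restarter
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command: ["kubectl", "rollout", "restart", "deployment/et-rest"]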
I have a Rails project that uses a Postgres database. I want to run the database server on Kubernetes, and the Rails server will connect to this database.
For example, here is my postgres.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: hades_dev
        - name: POSTGRES_PASSWORD
          value: "1234"
        name: postgres
        image: postgres:latest
        ports:
        - containerPort: 5432
        resources: {}
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/postgresql/data/
          name: database-hades-volume
      restartPolicy: Always
      volumes:
      - name: database-hades-volume
        persistentVolumeClaim:
          claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-hades-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
I run this with the following command: kubectl run -f postgres.yml.
But when I try to run the Rails server, I always get the following exception:
PG::Error
invalid encoding name: utf8
I tried port-forwarding, and the Rails server successfully connects to the database server:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-3681891707-8ch4l 1/1 Running 0 1m
Then I run the following command:
kubectl port-forward postgres-3681891707-8ch4l 5432:5432
I don't think this solution is good. How can I define this in my postgres.yml so that I don't need to port-forward manually as above?
Thanks
You can try exposing your Service as a NodePort and then accessing the service on that port.
See https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
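For example, the existing postgres Service could be turned into a NodePort Service like this; the nodePort value is just an illustrative choice from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
    nodePort: 30432   # example value in the default NodePort range

The database is then reachable at <node-ip>:30432 from outside the cluster; from inside the cluster the Rails app can keep using the service name postgres on port 5432.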