I am using Kubernetes to run a Docker-based service. This is a defective service that requires a restart every day. For several reasons we can't fix the problem programmatically, and simply restarting the container every day will do.
When I migrated to Kubernetes I noticed I can't do "docker restart [mycontainer]", but since the container runs in a Deployment with the Recreate strategy, I just need to delete the Pod to have Kubernetes create a new one.
Can I automate this task of deleting the Pod (or some alternative way of restarting it) using a CronJob in Kubernetes?
Thanks for any directions/examples.
Edit: My current deployment yml:
apiVersion: v1
kind: Service
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  ports:
  - port: 9080
    targetPort: 9080
    nodePort: 30181
  selector:
    app: et-rest
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: et-rest
        tier: frontend
    spec:
      containers:
      - image: et-rest-image:1.0.21
        name: et-rest
        ports:
        - containerPort: 9080
          name: et-rest
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
You can use a scheduled job pod:
A scheduled job pod has built-in cron behavior, making it possible to restart jobs. Combined with the time-out behavior, this leads to your required behavior of restarting your app every X hours.
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400  # YAML does not evaluate arithmetic like 3600*24; use the literal value
      template:
        spec:
          containers:
          - name: yourapp
            image: yourimage
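As a side note: ScheduledJob and the batch/v2alpha1 API were alpha-era names that later became CronJob in batch/v1, so on a current cluster the same idea would look roughly like the sketch below. It assumes the Deployment is named et-rest in the default namespace; restart-sa is a hypothetical ServiceAccount that must have RBAC permission to patch deployments, and bitnami/kubectl is just one convenient image that ships kubectl.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-et-rest
spec:
  schedule: "0 4 * * *"            # every day at 04:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: restart-sa  # hypothetical; needs RBAC to patch deployments
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - kubectl
            - rollout
            - restart
            - deployment/et-rest
```

For a Recreate-strategy Deployment, `kubectl rollout restart` has the same effect as deleting the Pod, but it goes through the Deployment object, so it keeps working if you later switch to RollingUpdate.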
So I followed this tutorial that explains how to build containerized microservices in Golang, Dockerize them, and deploy them to Kubernetes.
https://www.youtube.com/watch?v=H6pF2Swqrko
I got to the point that I can access my app via the minikube ip (mine is 192.168.59.100).
I set up Kubernetes and currently have 3 running pods, but I cannot open my Go app through the URL that kubectl shows me: "192.168.59.100:31705".
I have a lead: when I browse to "https://192.168.59.100:8443/", a 403 error comes up.
Here is my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
  - port: 80
    targetPort: 80
Your Service's selector tries to match pods with the label app.kubernetes.io/name: web, but the pods have the label app: web. They do not match. The selector on a Service must match the labels on the Pods; since you are using a Deployment, that means the same labels as in spec.template.metadata.labels.
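In other words, a minimal fix is to change the Service selector to the label the Pods actually carry, leaving everything else unchanged:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web        # must match spec.template.metadata.labels in the Deployment
  ports:
  - port: 80
    targetPort: 80
```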
@Szczad has correctly described the problem. I wanted to suggest a way of avoiding that problem in the future. Kustomize is a tool for building Kubernetes manifests, and it is built into the kubectl command. One of its features is the ability to apply a set of common labels to your resources, including correctly filling in the selectors in Services and Deployments.
If we simplify your Deployment to this (in deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
And your Service to this (in service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
And we place the following kustomization.yaml in the same directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: web
resources:
- deployment.yaml
- service.yaml
Then we can deploy this application by running:
kubectl apply -k .
And this will result in the following manifests:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        name: go-web-app
        ports:
        - containerPort: 80
As you can see here, the app: web label has been applied to the deployment, to the deployment selector, to the pod template, and to the service selector.
Applying the labels through Kustomize like this means that you only need to change the label in one place. It makes it easier to avoid problems caused by label mismatches.
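If you want to see what Kustomize will generate before touching the cluster, kubectl can render the manifests to stdout:

```shell
# Print the rendered manifests without applying them
kubectl kustomize .

# Equivalent dry run through apply
kubectl apply -k . --dry-run=client -o yaml
```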
Need some basic help with EKS; not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment manifest:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer Service to expose this externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods got created, and I can see they are running.
When I do kubectl get service java-microservice-service, I do see the load balancer is running.
But when I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
thanks in advance
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
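For context on why that worked: Spring Boot applications listen on port 8080 by default, so the original manifests were pointing the Service and containerPort at port 80, where nothing was listening. An alternative sketch that keeps the external port at 80 while targeting 8080 inside the container:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80          # external port on the load balancer
    targetPort: 8080  # port the Spring Boot container actually listens on
```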
I have two Jobs that will run only once. One is called Master and one is called Slave. As the names imply, the Master pod needs some info from the Slave and then queries some API online.
A simple scheme of how they communicate can be drawn like this:
Slave --- port 6666 ---> Master ---- port 8888 ---> internet:www.example.com
To achieve this I created 5 yaml file:
A job-master.yaml for creating a Master pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels:
    app: master-job
    role: master-job
spec:
  template:
    metadata:
      name: master
    spec:
      containers:
      - name: master
        image: registry.gitlab.com/example
        command: ["python", "run.py", "-wait"]
        ports:
        - containerPort: 6666
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
A service (ClusterIP) that allows the Slave to send info to the Master node on port 6666:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
A Service (NodePort) that will allow the master to fetch info online:
apiVersion: v1
kind: Service
metadata:
  name: master-np-service
spec:
  type: NodePort
  selector:
    app: master-job
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888
    nodePort: 31000
A job for the Slave pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: slave-job
  labels:
    app: slave-job
spec:
  template:
    metadata:
      name: slave
    spec:
      containers:
      - name: slave
        image: registry.gitlab.com/example2
        ports:
        - containerPort: 6666
        #command: ["python", "run.py", "master-service.default.svc.cluster.local"]
        #command: ["python", "run.py", "10.106.146.155"]
        command: ["python", "run.py", "master-service"]
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
And a service (ClusterIP) that allows the Slave pod to send the info to the Master pod:
apiVersion: v1
kind: Service
metadata:
  name: slave-service
spec:
  selector:
    app: slave-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
But no matter what I do (as can be seen in the commented-out lines of the job_slave.yaml file), the pods cannot communicate with each other, except when I put the IP of the Master pod in the command section of the Slave. Also, the Master pod cannot communicate with the outside world, even though I created a configMap with upstreamNameservers: | ["8.8.8.8"].
Everything is running in a minikube environment.
But I cannot pinpoint what my problem is. Any help is appreciated.
Your Job spec has two parts: a description of the Job itself, and a description of the Pods it creates. (Using a Job here is a little odd and I'd probably pick a Deployment instead, but the same logic applies.) The Service object has a selector: that must match the labels: of the Pods, not of the Job.
In the YAML files you show, the Jobs have correct labels but the generated Pods don't. You need to add (potentially duplicate) labels to the pod-spec part:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels: {...}
spec:
  template:
    metadata:
      # name: will get ignored here
      labels:
        app: master-job
        role: master-job
You should be able to verify with kubectl describe service master-service. At the end of its output will be a line that says Endpoints:. If the Service selector and the Pod labels don't match this will say <none>; if they do match you will see the Pod IP addresses.
(You don't need a NodePort service unless you need to accept requests from outside the cluster; it could be the same as the service you use to accept requests from within the cluster. You don't need to include objects' types in their names. Nothing you've shown has any obvious relevance to communication out of the cluster.)
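The endpoint check described above looks like this on the command line (the output shapes are indicative, not exact):

```shell
# If the selector matches no Pods, the output contains "Endpoints: <none>"
kubectl describe service master-service

# The same information, more compactly
kubectl get endpoints master-service
```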
Try with a headless service:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
and use command: ["python", "run.py", "master-service"] in your job_slave.yaml
Make sure your master job is listening on port 6666 inside your container.
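To rule out DNS as the culprit, you can try resolving the Service name from a throwaway pod inside the cluster; busybox here is just a small image that happens to include nslookup:

```shell
# Start a temporary pod, resolve the Service name, then clean up
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup master-service
```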
When I run my command to apply a modification, or just to create Pods, Services, or Deployments:
kubectl apply -f hello-kubernetes-oliver.yml
I don't get an error.
But when I do docker ps to see if the container was downloaded from my private registry, there is nothing :(
If I run docker pull docker-all.attanea.net/hello_world:latest, it downloads the container.
I don't understand why it doesn't download my container with the first command.
You will find my hello-kubernetes-oliver.yml below:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-oliver
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-oliver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
In order to download images from a private registry, you need to create a Secret, which is then referenced in the Deployment manifest.
kubectl create secret docker-registry regcred --docker-server= --docker-username="your-name" --docker-password="your-pword" --docker-email="your-email"
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
regcred is the name of the Secret resource.
Then you attach the regcred Secret in your deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
I have my controller.yaml that looks like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hmrcaction
  labels:
    name: hmrcaction
spec:
  replicas: 1
  selector:
    name: hmrcaction
  template:
    metadata:
      labels:
        name: hmrcaction
        version: 0.1.4
    spec:
      containers:
      - name: hmrcaction
        image: ccc-docker-docker-release.someartifactory.com/hmrcaction:0.1.4
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: fff-artifactory
and a service.yaml that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: hmrcaction
  labels:
    name: hmrcaction
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 9000
  selector:
    name: hmrcaction
and I have a Kubernetes cluster, so I wanted to use this ReplicationController to deploy my Docker image to the cluster, which I did like this:
kubectl create -f controller.yaml
but I get a weird status; when I run the command kubectl get pods I get:
NAME READY STATUS RESTARTS AGE
hmrcaction-k9bb6 0/1 ImagePullBackOff 0 40s
What is this? Before, the status was ErrImagePull...
Please help :)
Thanks!
kubectl describe pods -l name=hmrcaction should give you more useful information.
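For context, ImagePullBackOff means the kubelet failed to pull the image and is now backing off between retries; the usual causes are a wrong image name or tag, or missing/incorrect registry credentials. A couple of checks, assuming the manifests shown above:

```shell
# The Events section at the bottom of the output usually contains
# the exact pull error returned by the registry
kubectl describe pods -l name=hmrcaction

# Verify the pull secret referenced by imagePullSecrets actually exists
kubectl get secret fff-artifactory
```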