I'm running a web service whose specification I cannot change. I want to use a liveness probe with HTTP POST on Kubernetes, but I couldn't find anything available, and all of my efforts with busybox and netcat have failed.
Is there a solution? Is it possible to build a custom liveness probe from any Linux distribution?
Kubernetes probes only support HTTP GET, TCP socket, and command (exec) checks.
If you must check something over HTTP POST, you can use the command approach and simply run curl -XPOST ...
An example would be:
...
containers:
- name: k8-byexamples-spring-rest
image: gcr.io/matthewdavis-byexamples/k8-byexamples-spring-rest:1d4c1401c9485ef61322d9f2bb33157951eb351f
ports:
- containerPort: 8080
name: http
livenessProbe:
exec:
command:
- curl
- -X
- POST
- http://localhost/test123
initialDelaySeconds: 5
periodSeconds: 5
...
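Note that curl exits with status 0 even when the server returns a 4xx/5xx response, so the probe above only fails when the request itself cannot be made. A variant that also treats HTTP error statuses as probe failures (a sketch, assuming the app listens on the containerPort 8080 declared above and that /test123 accepts POST) could look like:
livenessProbe:
  exec:
    command:
    - curl
    - --fail
    - -X
    - POST
    - http://localhost:8080/test123
  initialDelaySeconds: 5
  periodSeconds: 5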
For more explanation see: https://matthewdavis.io/kubernetes-health-checks-demystified/.
Hope that helps!
I have built a Docker image user-service and tagged it as localhost:5001/user-service.
I have a local registry running at port 5001.
user-service is pushed to the local registry,
and I created a Pod using the deploy_new.yaml file:
apiVersion: v1
kind: Pod
metadata:
name: user-service
labels:
component: web
spec:
containers:
- name: web
image: localhost:5001/user-service
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
imagePullPolicy: Never
ports:
- name: http
containerPort: 4006
protocol: TCP
livenessProbe:
httpGet:
path: /health/health
port: 4006
initialDelaySeconds: 3
periodSeconds: 3
failureThreshold: 2
readinessProbe:
httpGet:
path: /health/health
port: 4006
initialDelaySeconds: 15
periodSeconds: 10
But when I describe the pod I see ErrImageNeverPull.
Questions:
What is ErrImageNeverPull and how do I fix it?
How do I test the liveness and readiness probes?
1. What is ErrImageNeverPull and how to fix it?
As imagePullPolicy is set to Never, the kubelet won't fetch images but will only look for what is present locally. The error means it could not find the image locally, and it will not try to fetch it.
If the cluster can reach your local Docker registry, make sure the image reference includes the registry and a tag, e.g. image: localhost:5001/user-service:latest, and use an imagePullPolicy that allows pulling (for example IfNotPresent).
If you are using minikube, check the README on how to reuse minikube's Docker daemon so you can use your image without uploading it to a registry:
Run eval $(minikube docker-env) in each shell session where you need it.
Build the image: docker build -t user-service .
Reference the image in your Pod manifest as image: user-service.
Make sure imagePullPolicy: Never is set for your container (which you already have); a sketch of the whole flow follows below.
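A minimal sketch of that flow (assuming the default minikube profile and a Dockerfile for user-service in the current directory):
# point this shell at minikube's Docker daemon
eval $(minikube docker-env)
# build the image directly inside minikube's daemon
docker build -t user-service .
# apply the manifest that references image: user-service with imagePullPolicy: Never
kubectl apply -f deploy_new.yaml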
2. How to test liveness and readiness probes?
I suggest you try the examples from the Kubernetes documentation; they explain really well the difference between the two and the different types of probes you can configure.
You first need to get your Pod running before you can check the liveness and readiness probes. In your case they will succeed as soon as the Pod starts; just describe the Pod and look at the events.
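For example, with the Pod name user-service from the manifest above:
kubectl describe pod user-service
kubectl get events --field-selector involvedObject.name=user-service
Probe failures show up there as Unhealthy warning events.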
One more thing to note: eval $(minikube docker-env) will fail silently if you are using a non-default minikube profile, leading to the behavior shown below:
$ eval $(minikube docker-env)
$ minikube docker-env
🤷 Profile "minikube" not found. Run "minikube profile list" to view all profiles.
👉 To start a cluster, run: "minikube start"
$
To address this, re-run the command specifying the profile you are using:
$ eval $(minikube -p my-profile docker-env)
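If you are not sure which profile name to use, minikube can list them:
$ minikube profile list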
When working with Helm charts (generated by helm create <name>) and specifying a Docker image in values.yaml, such as the image "kubernetesui/dashboard:v2.4.0" whose Dockerfile declares EXPOSE 8443 9090, I found it hard to know how to properly specify these ports in the actual chart files, and I was wondering if anyone could explain the topic a bit further.
By my understanding, EXPOSE 8443 9090 means that hostPort "8443" maps to containerPort "9090". In that case it seems clear that service.yaml should specify the ports in a manner similar to the following:
spec:
type: {{ .Values.service.type }}
ports:
- port: 8443
targetPort: 9090
The deployment.yaml file, however, only comes with a containerPort field and no port field for 8443 (as you can see below). Should I add some field in deployment.yaml to include port 8443?
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
ports:
- name: http
containerPort: 9090
protocol: TCP
As of now, when I try to install the Helm chart, it gives me the message "Container image "kubernetesui/dashboard:v2.4.0" already present on machine", and I've heard that this means the ports in service.yaml are not configured to match the Docker image's exposed ports. I have tested this with a simpler Docker image which only exposes one port; I just added that port everywhere and the message went away, so it seems to be true, but I am still confused about how to do it with two exposed ports.
I would really appreciate some help. Thank you in advance if you have any experience of this and are willing to share.
A Docker image never gets to specify any host resources it will use. If the Dockerfile has EXPOSE with two port numbers, then both ports are exposed (where "expose" means almost nothing in modern Docker). That is: this line says the container listens on both port 8443 and 9090 without requiring any specific external behavior.
In your Kubernetes Pod spec (usually nested inside a Deployment spec), you'd then generally list both ports as containerPorts:. Again, this doesn't really say anything about how a Service uses it.
# inside templates/deployment.yaml
ports:
- name: http
containerPort: 9090
protocol: TCP
- name: https
containerPort: 8443
protocol: TCP
Then in the corresponding Service, you'd republish either or both ports.
# inside templates/service.yaml
spec:
type: {{ .Values.service.type }}
ports:
- port: 80 # default HTTP port
targetPort: http # matching name: in pod, could also use 9090
- port: 443 # default HTTP/TLS port
targetPort: https # matching name: in pod, could also use 8443
I've chosen to publish the unencrypted and TLS-secured ports on their "normal" HTTP ports, and to bind the service to the pod using the port names.
None of this setup is Helm-specific; the only Helm template reference here is the Service type: (in case the operator needs to publish a NodePort or LoadBalancer Service).
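For completeness, a minimal sketch of the corresponding values.yaml entry (assuming the default layout generated by helm create):
# values.yaml
service:
  type: ClusterIP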
I am trying to set up a Nexus 3 Docker registry behind Traefik v2.3.1. The problem is that when I run
docker login <docker_url> -u <user> -p <password>
I receive this error:
Error response from daemon: Get https://docker_url/v1/users/: x509: certificate is valid for 6ddc59ad70b84f1659f8ffb82376935b.6f07c26f5a92b019cea10818bc6b7b7e.traefik.default, not docker_url
Traefik parameters
- "--entryPoints.web.address=:80/tcp"
- "--entryPoints.websecure.address=:443/tcp"
- "--entryPoints.traefik.address=:9000/tcp"
- "--api.dashboard=true"
- "--api.insecure"
- "--ping=true"
- "--providers.kubernetescrd"
- "--providers.kubernetesingress"
- "--log.level=DEBUG"
- "--serversTransport.insecureSkipVerify=true"
IngressRoute
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: nexus
spec:
routes:
- match: Host(`docker_url`)
kind: Rule
services:
- name: nexus-svc
port: 5000
In Nexus 3 I configured a Docker registry to listen on port 5000 using HTTP.
So my question is: do I really just need Traefik to stop serving its default self-signed certificate, or is there another problem that I don't see?
Thanks in advance for the help.
I'm using Jenkins and Kubernetes to perform these actions.
Since my load balancer needs a healthy pod, I had to add a livenessProbe to my pod.
My configuration for the pod:
apiVersion: v1
kind: Pod
metadata:
labels:
component: ci
spec:
# Use service account that can deploy to all namespaces
serviceAccountName: default
# Use the persistent volume
containers:
- name: gcloud
image: gcr.io/cloud-builders/gcloud
command:
- cat
tty: true
- name: kubectl
image: gcr.io/cloud-builders/kubectl
command:
- cat
tty: true
- name: liveness
image: k8s.gcr.io/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
The issue is that when I want to deploy the code (CD over Jenkins), it reaches the
touch /tmp/healthy;
command and times out.
The error response I get looks like this:
java.io.IOException: Failed to execute shell script inside container [kubectl] of pod [wobbl-mobile-label-qcd6x-13mtj]. Timed out waiting for the container to become ready!
When I type kubectl get events
I get the following response:
Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Any hints on how to solve this?
I have read this documentation for liveness probes and took the config from there.
As can be seen from the link you are referring to, the example is meant to help you understand how the liveness probe works. In the example below, taken from that link, they purposely remove the /tmp/healthy file after 30 seconds:
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: k8s.gcr.io/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
What this does is create the /tmp/healthy file when the container starts. After 5 seconds the liveness probe kicks in and checks for the /tmp/healthy file; at this moment the container does have /tmp/healthy present. After 30 seconds the container deletes the file, the liveness probe fails to find /tmp/healthy, and the container is restarted. This cycle then repeats, with the liveness probe failing the health check roughly every 30 seconds.
If you drop the rm -rf /tmp/healthy part, so the file is only created and never removed, the liveness probe should keep succeeding, as sketched below.
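A minimal sketch of that change, keeping the rest of the container spec from the example above (the while loop is only there to keep the container running):
args:
- /bin/sh
- -c
- touch /tmp/healthy; while true; do sleep 3600; done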
I have a frontend application built with React and backend on nodejs.
Both have a separate Docker image and therefore a separate deployment on k8s (gce).
Each deployment has a corresponding k8s service, let's say fe-service and be-service.
I am trying to set up an Ingress so that both services are exposed on a single domain in the following manner:
/api/* - are routed to be-service
everything else is routed to fe-service
Here is my yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: my-host
http:
paths:
- path: /*
backend:
serviceName: fe-service
servicePort: 80
- path: /api/*
backend:
serviceName: be-service
servicePort: 5000
Here is what I get with curl:
curl [ip] --header "Host: my-host" -> React app (as expected)
curl [ip]/foo --header "Host: my-host" -> nginx 404 (why?)
curl [ip]/api --header "Host: my-host" -> nginx 404 (why?)
curl [ip]/api/ --header "Host: my-host" -> nodejs app
curl [ip]/api/foo --header "Host: my-host" -> nodejs app
As far as I can see, the /api/ part works fine, but I can't figure out everything else. I tried different combinations with and without wildcards, but it still does not work the way I want it to.
What am I missing? Is this even possible?
Thanks in advance!
I can't explain why /foo is not working, but /api/* does not cover /api; it only covers paths after /api/.
I don't think the issue here is your Ingress, but rather your nginx setup (without having seen it!). Since React apps are single-page applications, you need to tell the nginx server to always fall back to index.html instead of going to e.g. /usr/share/nginx/html/foo, where there's probably nothing to find.
I think you'll find relevant information e.g. here. I wish you good luck @Andrey, and let me know if this was at all helpful!
I hope I am not late to respond. Please use the Kubernetes use-regex annotation to ensure that the paths map to your services' endpoints, and make sure that the React app path mapping comes last, after the rest of the rules. Use the following YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: my-host
http:
paths:
- path: /api/?(.*)
backend:
serviceName: be-service
servicePort: 5000
- path: /?(.*)
backend:
serviceName: fe-service
servicePort: 80
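With the catch-all rule last, you can re-run the checks from the question ([ip] is the placeholder used there); roughly, the expectation is:
curl [ip]/api/foo --header "Host: my-host"   # expected: nodejs app (be-service)
curl [ip]/foo --header "Host: my-host"       # expected: React app (fe-service)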