I am new to Kubernetes. I want to create a ServiceMonitor with the Prometheus Operator. I've installed the Prometheus Operator and Grafana, and the relevant pods are running.
The Prometheus Operator documentation provides the code below:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false
I know this is a YAML file, but I am confused about how to run it. In other words, where should I put this piece of code? I am learning to create Prometheus monitors. Can I get some help?
Thanks!
You can deploy it like any other manifest in a Kubernetes cluster, for example by running kubectl apply -f servicemonitor.yaml.
You can check if it was deployed by simply running kubectl get prometheus:
$ kubectl get prometheus
NAME         VERSION   REPLICAS   AGE
prometheus                        5s
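Note that the manifest in the question is a Prometheus resource; the ServiceMonitor objects it selects (via the team: frontend label) are separate manifests that you apply the same way. As a rough sketch of what one might look like (the name, app label, and port name here are assumptions, not taken from your setup):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app        # assumed name
  labels:
    team: frontend         # must match the serviceMonitorSelector above
spec:
  selector:
    matchLabels:
      app: example-app     # selects the Service that exposes your app's metrics
  endpoints:
    - port: web            # the named port on that Service
You can list them afterwards with kubectl get servicemonitors.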
If you have RBAC authorization enabled, there is another yaml that you should use:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  alerting:
    alertmanagers:
      - namespace: default
        name: alertmanager
        port: web
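Since both manifests reference serviceAccountName: prometheus, the corresponding ServiceAccount and RBAC objects must exist too. A minimal sketch, following the upstream Prometheus Operator getting-started example (the default namespace is an assumption):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  # read access to the objects Prometheus discovers and scrapes
  - apiGroups: [""]
    resources: ["nodes", "nodes/metrics", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: default   # assumption: adjust to the namespace you deploy into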
I have the following .yaml file to install RedisInsight in Kubernetes, with persistence support.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redisinsight-storage-class
provisioner: 'kubernetes.io/gce-pd'
parameters:
  type: 'pd-standard'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redisinsight-volume-claim
spec:
  storageClassName: redisinsight-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods is the deployment managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      initContainers:
        - name: change-data-dir-ownership
          image: alpine:3.6
          command:
            - chmod
            - -R
            - '777'
            - /db
          volumeMounts:
            - name: redisinsight
              mountPath: /db
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redislabs/redisinsight:1.6.1 #repo/image
          imagePullPolicy: Always #Always pull image
          volumeMounts:
            - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /db
          ports:
            - containerPort: 8001 #exposed container port and protocol
              protocol: TCP
      volumes:
        - name: redisinsight
          persistentVolumeClaim:
            claimName: redisinsight-volume-claim
---
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
    - port: 8001
      name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
However, it fails to launch and gives an error:
INFO 2020-07-03 06:30:08,117 redisinsight_startup Registered SIGTERM handler
ERROR 2020-07-03 06:30:08,131 redisinsight_startup Error in main()
Traceback (most recent call last):
File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
Traceback (most recent call last):
File "./startup.py", line 495, in <module>
File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
But the same docker image, when run locally via docker as:
docker run -v redisinsight:/db -p 8001:8001 redislabs/redisinsight
works fine. What am I doing wrong?
It feels like RedisInsight is trying to read the port as an int but somehow gets a string and is confused. But I cannot understand why this works fine with the local docker run.
UPDATE:
RedisInsight's kubernetes documentation has been updated recently. It clearly describes how to create a RedisInsight k8s deployment with and without a service.
It also explains what to do when there's already a service named "redisinsight":
Note - If the deployment will be exposed by a service whose name is ‘redisinsight’, set REDISINSIGHT_HOST and REDISINSIGHT_PORT environment variables to override the environment variables created by the service.
The problem is with the name of the service.
The documentation mentions that RedisInsight has an environment variable REDISINSIGHT_PORT, which configures the port on which RedisInsight runs.
When you create a service in Kubernetes, all the pods that match the service get an environment variable <SERVICE_NAME>_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>.
So when you create the above-mentioned service with the name redisinsight, Kubernetes injects the service environment variable REDISINSIGHT_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>. But the port environment variable (REDISINSIGHT_PORT) is documented to be a port number, not an endpoint, which makes the pod crash when the RedisInsight process tries to use the environment variable as the port number.
So change the name of the service to something other than redisinsight and it should work.
Here's a quick deployment and service file:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods is the deployment managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redislabs/redisinsight:1.6.3 #repo/image
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: db #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /db
          ports:
            - containerPort: 8001 #exposed container port and protocol
              protocol: TCP
      volumes:
        - name: db
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Service:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-http # name should not be redisinsight
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8001
  selector:
    app: redisinsight
Please note the name of the service.
Logs of redisinsight pod:
INFO 2020-09-02 11:46:20,689 redisinsight_startup Registered SIGTERM handler
INFO 2020-09-02 11:46:20,689 redisinsight_startup Starting webserver...
INFO 2020-09-02 11:46:20,689 redisinsight_startup Visit http://0.0.0.0:8001 in your web browser. Press CTRL-C to exit.
Also the service end point (from minikube):
$ minikube service list
|----------------------|------------------------------------|--------------|-------------------------|
| NAMESPACE            | NAME                               | TARGET PORT  | URL                     |
|----------------------|------------------------------------|--------------|-------------------------|
| default              | kubernetes                         | No node port |                         |
| default              | redisinsight-http                  | 80           | http://172.17.0.2:30860 |
| kube-system          | ingress-nginx-controller-admission | No node port |                         |
| kube-system          | kube-dns                           | No node port |                         |
| kubernetes-dashboard | dashboard-metrics-scraper          | No node port |                         |
| kubernetes-dashboard | kubernetes-dashboard               | No node port |                         |
|----------------------|------------------------------------|--------------|-------------------------|
BTW, if you don't want to create a service at all (which is not related to the question), you can do port forwarding:
kubectl port-forward <redisinsight-pod-name> 8001:8001
The problem is related to the service, as it's interfering with the pod and causing it to crash.
As we can read in the Redis docs Installing RedisInsight on Kubernetes
Once the deployment has been successfully applied and the deployment complete, access RedisInsight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below:
kubectl port-forward deployment/redisinsight 8001
Open your browser and point to http://localhost:8001
Or with a service, which in your case, while using GCP, can look like this:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
    - protocol: TCP
      port: 8001
      targetPort: 8001
      name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
Once the service receives an External-IP, you can use it to access RedisInsight.
crou#cloudshell:~ $ kubectl get service
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
kubernetes     ClusterIP      10.8.0.1     <none>          443/TCP          9d
redisinsight   LoadBalancer   10.8.7.0     34.67.171.112   8001:31456/TCP   92s
via http://34.67.171.112:8001/ in my example.
It happened to me too. In case anyone misses the conversation in the comments, here is the solution.
Deploy the redisinsight pod first and wait until it runs successfully.
Then deploy the service.
I think this is a bug and not a real fix, because a pod can die at any time, which kind of defeats the purpose of using Kubernetes.
Someone has reported this issue here: https://forum.redislabs.com/t/redisinsight-fails-to-launch-in-kubernetes/652/2
There are several problems with redisinsight running in k8s as suggested by the current documentation. I will list them below (a consolidated sketch follows the list):
Suggestion is to use emptyDir
Issue: an emptyDir will most likely run out of space for larger Redis clusters
Solution: use a persistent volume
The redisinsight Docker container runs as a redisinsight user
Issue: the redisinsight user is not tied to a specific uid, so the persistent volume permissions cannot be set in a way that allows access to the PVC
Solution: use cryptexlabs/redisinsight:latest, which extends redislabs/redisinsight:latest but sets the uid for redisinsight to 777
Default permissions do not allow access for redisinsight
Issue: redisinsight will not be able to access the /db directory
Solution: use an init container to set the directory permissions so that user 777 owns the /db directory
Suggestion is to use a NodePort for the service
Issue: this is a security hole
Solution: use ClusterIP instead and then use kubectl port-forwarding, or some other secure access, to reach redisinsight
Accessing rdb files locally is impractical.
Problem: rdb files for large clusters must be downloaded and uploaded via kubectl
Solution: use the S3 solution. If you are using kube2iam in an EKS cluster, you'll need to create a special role that has access to the bucket. Before that, you must create a backup of your cluster and then export the backup following these instructions: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html
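Putting those points together, here is a rough consolidated sketch (the PVC name, storage size, alpine tag and the cryptexlabs image are taken from the notes above or assumed; adjust for your cluster):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redisinsight
  template:
    metadata:
      labels:
        app: redisinsight
    spec:
      initContainers:
        - name: chown-db
          image: alpine:3.12                         # any small image with chown will do
          command: ["chown", "-R", "777:777", "/db"]
          volumeMounts:
            - name: db
              mountPath: /db
      containers:
        - name: redisinsight
          image: cryptexlabs/redisinsight:latest     # variant with uid 777, as noted above
          ports:
            - containerPort: 8001
          volumeMounts:
            - name: db
              mountPath: /db
      volumes:
        - name: db
          persistentVolumeClaim:
            claimName: redisinsight-volume-claim     # assumed PVC name
---
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-http   # not "redisinsight", to avoid the env var clash
spec:
  type: ClusterIP           # access via kubectl port-forward instead of a NodePort
  ports:
    - port: 8001
      targetPort: 8001
  selector:
    app: redisinsight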
Summary
RedisInsight is a good tool, but currently running it inside a Kubernetes cluster is an absolute nightmare.
I have an on-prem kubernetes cluster and I want to deploy to it a docker registry from which the cluster nodes can download images. In my attempts to do this, I've tried several methods of identifying the service: a NodePort, a LoadBalancer provided by MetalLB in Layer2 mode, its Flannel network IP (referring to the IP that, by default, would be on the 10.244.0.0/16 network), and its cluster IP (referring to the IP that, by default, would be on the 10.96.0.0/16 network). In every case, connecting to the registry via docker failed.
I performed a cURL against the IP and realized that while the requests were resolving as expected, the tcp dial step was consistently taking 63.15 +/- 0.05 seconds, followed by the HTTP(s) request itself completing in an amount of time that is within margin of error for the tcp dial. This is consistent across deployments with firewall rules varying from a relatively strict set to nothing except the rules added directly by kubernetes. It is also consistent across network architectures ranging from a single physical server with VMs for all cluster nodes to distinct physical hardware for each node and a physical switch. As mentioned previously, it is also consistent across the means by which the service is exposed. It is also consistent regardless of whether I use an ingress-nginx service to expose it or expose the docker registry directly.
Further, when I deploy another pod to the cluster, I am able to reach the pod at its cluster IP without any delays, but I do encounter an identical delay when trying to reach it at its external LoadBalancer IP or at a NodePort. No delays besides expected network latency are encountered when trying to reach the registry from a machine that is not a node on the cluster, e.g., using the LoadBalancer or NodePort.
As a matter of practice, my main inquiry is: what is the "correct" way to do what I am attempting? Furthermore, as an academic matter, I would also like to know the source of the very long, very consistent delay that I've been seeing.
My deployment yaml file has been included below for reference. The ingress handler is ingress-nginx.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pv-claim
  namespace: docker-registry
  labels:
    app: registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  namespace: docker-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
        - name: docker-registry
          image: registry:2.7.1
          env:
            - name: REGISTRY_HTTP_ADDR
              value: ":5000"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: "/var/lib/registry"
          ports:
            - name: http
              containerPort: 5000
          volumeMounts:
            - name: image-store
              mountPath: "/var/lib/registry"
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: registry-pv-claim
---
kind: Service
apiVersion: v1
metadata:
  name: docker-registry
  namespace: docker-registry
  labels:
    app: docker-registry
spec:
  selector:
    app: docker-registry
  ports:
    - name: http
      port: 5000
      targetPort: 5000
---
apiVersion: v1
kind: List
items:
  - apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
        kubernetes.io/ingress.class: docker-registry
      name: docker-registry
      namespace: docker-registry
    spec:
      rules:
        - host: example-registry.com
          http:
            paths:
              - backend:
                  serviceName: docker-registry
                  servicePort: 5000
                path: /
      tls:
        - hosts:
            - example-registry.com
          secretName: tls-secret
For future visitors, it seems like your issue is related to Flannel.
The whole problem was described here:
https://github.com/kubernetes/kubernetes/issues/88986
https://github.com/coreos/flannel/issues/1268
including a workaround:
https://github.com/kubernetes/kubernetes/issues/86507#issuecomment-595227060
I created a YAML file to create a RabbitMQ Kubernetes cluster. I can see the pods, but when I run kubectl get deployment, I can't see them there. I also can't access the RabbitMQ UI page.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit
spec:
  serviceName: rabbit
  replicas: 3
  selector:
    matchLabels:
      app: rabbit
  template:
    metadata:
      labels:
        app: rabbit
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq
      nodeSelector:
        rabbitmq: "clustered"
@arghya-sadhu's answer is correct.
NB I'm unfamiliar with RabbitMQ, but you may need to use a different image (see 'Management Plugin') to include the UI.
See below for more details.
You should be able to hack your way to the UI on one (!) of the Pods via:
PORT=8888
kubectl port-forward pod/rabbit-0 --namespace=${NAMESPACE} ${PORT}:15672
And then browse localhost:${PORT} (if 8888 is unavailable, try another).
I suspect (!) this won't work unless you use the image with the management plugin.
Plus
The Service needs to select the StatefulSet's Pods
Within the Service spec you should add perhaps:
selector:
  app: rabbit
Presumably (!?) you are using a private repo (because you have imagePullSecrets).
If you don't and wish to use DockerHub, you may remove the imagePullSecrets section.
It's useful to document (!) container ports albeit not mandatory:
In the StatefulSet
ports:
- containerPort: 5672
- containerPort: 15672
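Putting those suggestions together, a sketch of the corrected Service and StatefulSet; the rabbitmq:3-management tag is an assumption (it bundles the management UI):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  type: NodePort
  selector:
    app: rabbit                        # selects the StatefulSet's Pods
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit
spec:
  serviceName: rabbit
  replicas: 3
  selector:
    matchLabels:
      app: rabbit
  template:
    metadata:
      labels:
        app: rabbit
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management   # assumed image with the management plugin/UI
          ports:
            - containerPort: 5672
            - containerPort: 15672
      nodeSelector:
        rabbitmq: "clustered"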
Debug
NAMESPACE="default" # Or ...
Ensure the StatefulSet is created:
kubectl get statefulset/rabbit --namespace=${NAMESPACE}
Check the Pods:
kubectl get pods --selector=app=rabbit --namespace=${NAMESPACE}
You can check that the Pods are bound to a (!) Service:
kubectl describe endpoints/rabbit --namespace=${NAMESPACE}
NB You should see 3 addresses (one per Pod)
Get the NodePort with either:
kubectl get service/rabbit --namespace=${NAMESPACE} --output=json
kubectl describe service/rabbit --namespace=${NAMESPACE}
You will need to use the NodePort to access both the MQTT endpoint and the UI.
StatefulSets and Deployments are different Kubernetes resources. You have created a StatefulSet; that's why you don't see a Deployment. If you run
kubectl get statefulset you should see it. Both StatefulSets and Deployments ultimately create pods, so you should be able to see the RabbitMQ pods if you run kubectl get pods.
Since you have created a NodePort service, you should be able to access it via http://nodeip:nodeport, where nodeip is the IP of any worker node in your Kubernetes cluster.
You can find out the NodePort (a number between 30000-32767) with
kubectl describe services rabbit
Here is the doc on accessing a NodePort service from outside the cluster.
I believe my question is pretty straightforward. I'm doing my prerequisites to install Kubernetes cluster on bare metal.
Let's say I have:
master - hostname for the Docker DB container which is fixed to the first node
slave - hostname for the Docker DB container which is fixed to the second node
Can I communicate with master from any container (app, etc.) in the cluster, regardless of whether it's running on the same node or not?
Is this the default behaviour?
Or should anything additional be done?
I assume that I need to set the hostname parameter in a YAML or JSON file so Kubernetes is aware of what the hostname is.
Probably this is not a factor, but I plan to use the Kubespray installation method, so it gets Calico networking for k8s.
Many thanks
Yes, you can access and communicate with it from any container in the namespace via the hostname.
Here is an example of a Kubernetes Service configuration:
---
apiVersion: v1
kind: Service
metadata:
  name: master
  labels:
    name: master
  namespace: smart-office
spec:
  ports:
    - port: 5672
      name: master
      targetPort: 5672
  selector:
    name: master
And the Deployment configuration:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: master
  labels:
    name: master
  namespace: smart-office
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: master
      annotations:
        prometheus.io/scrape: "false"
    spec:
      containers:
        - name: master
          image: rabbitmq:3.6.8-management
          ports:
            - containerPort: 5672
              name: master
      nodeSelector:
        beta.kubernetes.io/os: linux
And from other services, e.g. your slave, the .env will be:
AMQP_HOST=master <---- The hostname
AMQP_PORT=5672
AMQP_USERNAME=guest
AMQP_PASSWORD=guest
AMQP_HEARTBEAT=60
It will work inside the cluster even if you don't publish an External IP.
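For illustration, a minimal sketch of a client Pod in the same namespace that reaches the broker by the Service hostname (the image and command are placeholders, not from your setup):
apiVersion: v1
kind: Pod
metadata:
  name: slave
  namespace: smart-office
spec:
  containers:
    - name: slave
      image: alpine:3.12               # placeholder image
      command: ["sleep", "infinity"]
      env:
        - name: AMQP_HOST
          value: master                # resolves via cluster DNS; FQDN: master.smart-office.svc.cluster.local
        - name: AMQP_PORT
          value: "5672"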
Hope this can help you.
I have created a Kubernetes cluster and deployed Jenkins with the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: jenkins-ci
    spec:
      containers:
        - name: jenkins-ci
          image: jenkins:2.32.2
          ports:
            - containerPort: 8080
and the service with:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-cli-lb
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 8080
      nodePort: 30000
  # label keys and values that must match in order to receive traffic for this service
  selector:
    run: jenkins-ci
Now I can access the Jenkins UI in my browser without any problems. My issue: I've come into a situation in which I need to restart the Jenkins service manually. How can I do that?
Just run kubectl delete pods -l run=jenkins-ci - this will delete all pods with this label (your Jenkins containers).
Since they are under a Deployment, it will re-create the containers. Network routing will be adjusted automatically (again because of the label selector).
See https://kubernetes.io/docs/reference/kubectl/cheatsheet/
You can use the command below to enter the pod's container:
$ kubectl exec -it <jenkins-pod-name> -- /bin/bash
Then run the Jenkins service restart command.
For more details please refer to: how to restart a service inside a pod in a Kubernetes cluster.