How to add custom host entries to Kubernetes Pods?

My application communicates with some services via hostnames.
When running my application as a docker container, I used to add the hostnames to the /etc/hosts of the host machine and run the container using --net=host.
Now I'm running my containers in a Kubernetes cluster. I would like to know how I can add the /etc/hosts entries to the pod via YAML.
I'm using Kubernetes v1.5.3.

From k8s 1.7 you can add hostAliases. Example from the docs:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"

Host files are going to give you problems, but if you really need to, you could use a ConfigMap.
Add a ConfigMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-hosts-file-configmap
data:
  hosts: |-
    192.168.0.1 gateway
    127.0.0.1 localhost
Then mount that inside your pod, like so:
volumeMounts:
- name: my-app-hosts-file
  mountPath: /etc/hosts
  subPath: hosts # mount only the hosts key, so the rest of /etc is not masked
volumes:
- name: my-app-hosts-file
  configMap:
    name: my-app-hosts-file-configmap
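For context, here is a minimal sketch of where those two fragments sit in a complete Pod spec; the pod name and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: my-app # placeholder name
spec:
  containers:
  - name: my-app
    image: my-app:latest # placeholder image
    volumeMounts:
    - name: my-app-hosts-file
      mountPath: /etc/hosts
      subPath: hosts
  volumes:
  - name: my-app-hosts-file
    configMap:
      name: my-app-hosts-file-configmap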

This works and also looks simpler:
kind: Service
apiVersion: v1
metadata:
  name: {HOST_NAME}
spec:
  ports:
  - protocol: TCP
    port: {PORT}
    targetPort: {PORT}
  type: ExternalName
  externalName: {EXTERNAL_IP}
Now you can use the HOST_NAME from the pod directly to access the external machine.
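One caveat: externalName is defined to take a DNS name, so pointing it at a raw IP is not reliable. If all you have is an IP, a selector-less Service plus a manual Endpoints object is the usual alternative; a sketch with placeholder values:
kind: Service
apiVersion: v1
metadata:
  name: my-external-service # placeholder name
spec:
  ports:
  - protocol: TCP
    port: 8080 # placeholder port
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-external-service # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3 # placeholder external IP
  ports:
  - port: 8080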

Another approach could be to use a postStart hook in the pod lifecycle, as below:
lifecycle:
  postStart:
    exec:
      command:
      - /bin/sh
      - -c
      - >
        echo '192.168.1.10 weblogic-jms1.apizone.io' >> /etc/hosts;
        echo '192.168.1.20 weblogic-jms2.apizone.io' >> /etc/hosts;
        echo '192.168.1.30 weblogic-jms3.apizone.io' >> /etc/hosts;
        echo '192.168.1.40 weblogic-jms4.apizone.io' >> /etc/hosts

Related

How to reach spice server inside of kubernetes pod?

I have a docker container that runs an Ubuntu image, which in turn runs a Windows VM via qemu-system-x86_64.
I can use spice to access the Windows VM by sharing a port with the docker container and telling qemu-system-x86_64 to use that port for spice.
Running container:
docker run -p 5930:5930...
Inside of container:
qemu-system-x86_64 -spice port=5930,disable-ticketing...
This works from a remote machine on the same VPN by using this address:
spice://<server ip>:5930
I now have this container running in a kubernetes pod inside minikube, but I'm not sure what kind of service to use to access the spice server remotely.
Use microk8s. Put your container into a pod and create a service with type NodePort.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod # label needed so the Service selector below matches
spec:
  containers:
  - name: my-pod
    image: image here
    ports:
    - containerPort: 5930
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-pod
  ports:
  - port: 5930
    nodePort: 30000
Now connect to spice://<server_ip>:30000.
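If you are testing on minikube as in the question, a quick way to find the reachable address (hedged; spice.yaml is a hypothetical filename for the manifests above):
kubectl apply -f spice.yaml
minikube service my-service --url
minikube service prints an http:// URL; for spice, take the same host and port and use the spice:// scheme instead.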

RedisInsight with persistent volume in Kubernetes

I have the following .yaml file to install RedisInsight in Kubernetes, with persistence support.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redisinsight-storage-class
provisioner: 'kubernetes.io/gce-pd'
parameters:
  type: 'pd-standard'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redisinsight-volume-claim
spec:
  storageClassName: redisinsight-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment manages, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      initContainers:
      - name: change-data-dir-ownership
        image: alpine:3.6
        command:
        - chmod
        - -R
        - '777'
        - /db
        volumeMounts:
        - name: redisinsight
          mountPath: /db
      containers:
      - name: redisinsight # container name (DNS_LABEL, unique)
        image: redislabs/redisinsight:1.6.1 # repo/image
        imagePullPolicy: Always # always pull the image
        volumeMounts:
        - name: redisinsight # pod volumes to mount into the container's filesystem; cannot be updated
          mountPath: /db
        ports:
        - containerPort: 8001 # exposed container port and protocol
          protocol: TCP
      volumes:
      - name: redisinsight
        persistentVolumeClaim:
          claimName: redisinsight-volume-claim
---
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
  - port: 8001
    name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
However, it fails to launch and gives an error:
INFO 2020-07-03 06:30:08,117 redisinsight_startup Registered SIGTERM handler
ERROR 2020-07-03 06:30:08,131 redisinsight_startup Error in main()
Traceback (most recent call last):
  File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
Traceback (most recent call last):
  File "./startup.py", line 495, in <module>
  File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
But the same docker image, when run locally via docker as:
docker run -v redisinsight:/db -p 8001:8001 redislabs/redisinsight
works fine. What am I doing wrong?
It feels like RedisInsight is trying to read the port as an int but somehow gets a string and is confused. But I cannot understand why this works fine with the local docker run.
UPDATE:
RedisInsight's kubernetes documentation has been updated recently. It clearly describes how to create a RedisInsight k8s deployment with and without a service.
It also explains what to do when there's a service named "redisinsight" already:
Note - If the deployment will be exposed by a service whose name is ‘redisinsight’, set REDISINSIGHT_HOST and REDISINSIGHT_PORT environment variables to override the environment variables created by the service.
The problem is with the name of the service.
From the documentation, RedisInsight has an environment variable REDISINSIGHT_PORT which configures the port on which RedisInsight runs.
When you create a service in Kubernetes, all the pods that match the service get an environment variable <SERVICE_NAME>_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>.
So when you create the above-mentioned service with the name redisinsight, Kubernetes passes the service environment variable REDISINSIGHT_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>. But that environment variable is documented to be a port number, not an endpoint, which makes the pod crash when the RedisInsight process tries to use it as the port number.
So change the name of the service to something other than redisinsight and it should work.
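You can see the colliding variable from inside the pod; a quick check (the pod name is a placeholder), whose output should match the tcp://... endpoint from the traceback above:
kubectl exec <redisinsight-pod-name> -- env | grep ^REDISINSIGHT_PORT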
Here's a quick deployment and service file:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment manages, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      containers:
      - name: redisinsight # container name (DNS_LABEL, unique)
        image: redislabs/redisinsight:1.6.3 # repo/image
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: db # pod volumes to mount into the container's filesystem; cannot be updated
          mountPath: /db
        ports:
        - containerPort: 8001 # exposed container port and protocol
          protocol: TCP
      volumes:
      - name: db
        emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Service:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-http # name should not be redisinsight
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8001
  selector:
    app: redisinsight
Please note the name of the service.
Logs of redisinsight pod:
INFO 2020-09-02 11:46:20,689 redisinsight_startup Registered SIGTERM handler
INFO 2020-09-02 11:46:20,689 redisinsight_startup Starting webserver...
INFO 2020-09-02 11:46:20,689 redisinsight_startup Visit http://0.0.0.0:8001 in your web browser. Press CTRL-C to exit.
Also the service endpoint (from minikube):
$ minikube service list
|----------------------|------------------------------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|------------------------------------|--------------|-------------------------|
| default | kubernetes | No node port |
| default | redisinsight-http | 80 | http://172.17.0.2:30860 |
| kube-system | ingress-nginx-controller-admission | No node port |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|------------------------------------|--------------|-------------------------|
BTW, if you don't want to create a service at all (which is not related to the question), you can do port forwarding:
kubectl port-forward <redisinsight-pod-name> 8001:8001
The problem is related to the service, as it's interfering with the pod and causing it to crash.
As we can read in the Redis docs Installing RedisInsight on Kubernetes
Once the deployment has been successfully applied and the deployment complete, access RedisInsight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below:
kubectl port-forward deployment/redisinsight 8001
Open your browser and point to http://localhost:8001
Or a service, which in your case on GCP can look like this:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
  - protocol: TCP
    port: 8001
    targetPort: 8001
    name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
Once the service receives the External-IP, you can use it to access RedisInsight.
crou#cloudshell:~ $ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 9d
redisinsight LoadBalancer 10.8.7.0 34.67.171.112 8001:31456/TCP 92s
via http://34.67.171.112:8001/ in my example.
It happened to me too. In case anyone misses the conversation in the comments, here is the solution:
1. Deploy the redisinsight pod first and wait until it runs successfully.
2. Deploy the service.
I think this is a bug and not really a fix, because a pod can die at any time; that is kind of against the reason for using Kubernetes.
Someone has reported this issue here: https://forum.redislabs.com/t/redisinsight-fails-to-launch-in-kubernetes/652/2
There are several problems with running RedisInsight in k8s as suggested by the current documentation. I will list them below:
1. Suggestion is to use emptyDir.
Issue: emptyDir will most likely run out of space for larger redis clusters.
Solution: use a persistent volume.
2. The redisinsight docker container runs as a redisinsight user.
Issue: the redisinsight user is not tied to a specific uid, so persistent volume permissions cannot be set in a way that allows access to the PVC.
Solution: use cryptexlabs/redisinsight:latest, which extends redislabs/redisinsight:latest but sets the uid for redisinsight to 777.
3. Default permissions do not allow access for redisinsight.
Issue: redisinsight will not be able to access the /db directory.
Solution: use an init container to set the directory permissions so that user 777 owns the /db directory (see the sketch after this list).
4. Suggestion is to use a NodePort for the service.
Issue: this is a security hole.
Solution: use ClusterIP instead, then use kubectl port forwarding or another secure way to access redisinsight.
5. Accessing rdb files locally is impractical.
Issue: rdb files for large clusters must be downloaded and uploaded via kubectl.
Solution: use the S3 option. If you are using kube2iam in an EKS cluster, you'll need to create a role that has access to the bucket. Before that you must create a backup of your cluster and then export the backup following these instructions: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html
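A minimal sketch of the init-container fix from point 3, assuming the db volume name used earlier in this thread and an ownership-based approach:
initContainers:
- name: change-db-dir-ownership
  image: alpine:3.6
  command: ["chown", "-R", "777:777", "/db"] # uid 777 matches the redisinsight user in the cryptexlabs image
  volumeMounts:
  - name: db
    mountPath: /db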
Summary
RedisInsight is a good tool, but currently running it inside a Kubernetes cluster is an absolute nightmare.

How to add "-v /var/run/docker.sock:/var/run/docker.sock" when running container from kubernetes deployment yaml

I'm setting up a kubernetes deployment with an image that will execute docker commands (docker ps etc.).
My yaml looks like the following:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: discovery
  namespace: kube-system
  labels:
    discovery-app: kubernetes-discovery
spec:
  selector:
    matchLabels:
      discovery-app: kubernetes-discovery
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        discovery-app: kubernetes-discovery
    spec:
      containers:
      - image: docker:dind
        name: discover
        ports:
        - containerPort: 8080
          name: my-awesome-port
      imagePullSecrets:
      - name: regcred3
      volumes:
      - name: some-volume
        emptyDir: {}
      serviceAccountName: kubernetes-discovery
Normally I would run a docker container as follows:
docker run -v /var/run/docker.sock:/var/run/docker.sock docker:dind
Now, Kubernetes YAML supports command and args, but for some reason does not support options like -v.
What is the right thing to do?
Perhaps I should configure a volume, but then, is it volumeMount or just a volume?
I am new to Kubernetes, so it is important for me to do it the right way.
Thank you.
You want to add the volume to the container.
spec:
  containers:
  - name: discover
    image: docker:dind
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
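With the socket mounted, the docker CLI inside the container talks to the host's daemon. A quick sanity check once the pod is running (a hedged example; the deployment name and namespace come from the question, and exec-by-deployment-name works on recent kubectl versions):
kubectl exec -n kube-system -it deployment/discovery -- docker ps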
It seems like a bad idea to interact directly with containers on any node in Kubernetes. The whole point of Kubernetes is to orchestrate: if you add containers outside of the Pod construct, then Kubernetes will not be aware of the processes running on the nodes, which will affect resource allocation.
It also needs to be said that working with containers directly bypasses security.

How to configure multiple services/containers in Kubernetes?

I am new to Docker and Kubernetes.
Technologies used:
Dotnet Core 2.2
Asp.NET Core WebAPI 2.2
Docker for windows(Edge) with Kubernetes support enabled
Code
I have two services hosted in two docker containers, container1 and container2.
Below is my deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube
    spec:
      containers:
      - name: webapi-dockerkube
        image: "webapidocker:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/values
            port: 80
        readinessProbe:
          httpGet:
            path: /api/values
            port: 80
      - name: webapi-dockerkube2
        image: "webapidocker2:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/other/values
            port: 80
        readinessProbe:
          httpGet:
            path: /api/other/values
            port: 80
When I am running command:
kubectl create -f .\deploy.yaml
I am getting the status CrashLoopBackOff.
But the same is running fine when I have only one container configured.
When checking the logs, I am getting the following error:
Error from server (BadRequest): a container name must be specified for pod webapi-dockerkube-8658586998-9f8mk, choose one of: [webapi-dockerkube webapi-dockerkube2]
You are running two containers in the same pod which both bind to port 80. This is not possible within the same pod.
Think of a pod like a 'server': you can't have two processes bind to the same port.
Solution in your situation: use different ports inside the pod or use separate pods. From your deployment there seem to be no shared resources like a filesystem, so it would be easy to split the containers into separate pods.
Note that it will not suffice to change the pod definition if you want to have both containers running in the same pod with different ports. The application in the container must bind to a different port as well.
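As a hedged sketch of the different-ports variant (the app inside the second container would also need to listen on the new port; 8080 here is just an example):
containers:
- name: webapi-dockerkube
  image: "webapidocker:latest"
  ports:
  - containerPort: 80
  livenessProbe:
    httpGet:
      path: /api/values
      port: 80
- name: webapi-dockerkube2
  image: "webapidocker2:latest"
  ports:
  - containerPort: 8080 # changed; the ASP.NET app must bind to 8080 as well
  livenessProbe:
    httpGet:
      path: /api/other/values
      port: 8080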
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
The two-containers example above (from the Kubernetes docs) shows a multi-container pod sharing a volume; you can use it as a template.
You can also check the logs of a specific container to find the reason for the CrashLoopBackOff:
kubectl logs <pod-name> -c <container-name>

how to restart jenkins service inside pod in kubernetes cluster

I have created a Kubernetes cluster and deployed Jenkins with the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: jenkins-ci
    spec:
      containers:
      - name: jenkins-ci
        image: jenkins:2.32.2
        ports:
        - containerPort: 8080
and a service with:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-cli-lb
spec:
  type: NodePort
  ports:
  # the port that this service should serve on
  - port: 8080
    nodePort: 30000
  # label keys and values that must match in order to receive traffic for this service
  selector:
    run: jenkins-ci
Now I can access the Jenkins UI in my browser without any problems. My issue: I ran into a situation in which I need to restart the Jenkins service manually. How can I do that?
Just run kubectl delete pods -l run=jenkins-ci. This will delete all pods with this label (your Jenkins containers).
Since they are managed by a Deployment, the containers will be re-created. Network routing will be adjusted automatically (again because of the label selector).
See https://kubernetes.io/docs/reference/kubectl/cheatsheet/
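On newer kubectl versions (1.15+), a rolling restart of the Deployment achieves the same in one step (assuming the deployment name from the question):
kubectl rollout restart deployment/jenkins-ci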
You can use the command below to enter the pod's container:
$ kubectl exec -it <pod-name> -- /bin/bash
Then run the Jenkins service restart command inside it.
For more details please refer to: how to restart service inside pod in kubernetes cluster.
