I have the following .yaml file to install RedisInsight in Kubernetes, with persistence support.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redisinsight-storage-class
provisioner: 'kubernetes.io/gce-pd'
parameters:
  type: 'pd-standard'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redisinsight-volume-claim
spec:
  storageClassName: redisinsight-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods the deployment is managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      initContainers:
        - name: change-data-dir-ownership
          image: alpine:3.6
          command:
            - chmod
            - -R
            - '777'
            - /db
          volumeMounts:
            - name: redisinsight
              mountPath: /db
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redislabs/redisinsight:1.6.1 #repo/image
          imagePullPolicy: Always #Always pull image
          volumeMounts:
            - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /db
          ports:
            - containerPort: 8001 #exposed container port and protocol
              protocol: TCP
      volumes:
        - name: redisinsight
          persistentVolumeClaim:
            claimName: redisinsight-volume-claim
---
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
    - port: 8001
      name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
However, it fails to launch and gives an error:
INFO 2020-07-03 06:30:08,117 redisinsight_startup Registered SIGTERM handler
ERROR 2020-07-03 06:30:08,131 redisinsight_startup Error in main()
Traceback (most recent call last):
File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
Traceback (most recent call last):
File "./startup.py", line 495, in <module>
File "./startup.py", line 477, in main
ValueError: invalid literal for int() with base 10: 'tcp://10.69.9.111:8001'
But the same docker image, when run locally via docker as:
docker run -v redisinsight:/db -p 8001:8001 redislabs/redisinsight
works fine. What am I doing wrong?
It feels like RedisInsight is trying to read the port as an int but somehow gets a string and is confused. But I cannot understand why it works fine with the local docker run.
UPDATE:
RedisInsight's Kubernetes documentation has been updated recently. It clearly describes how to create a RedisInsight k8s deployment with and without a service.
It also explains what to do when there's a service named "redisinsight" already:
Note - If the deployment will be exposed by a service whose name is ‘redisinsight’, set REDISINSIGHT_HOST and REDISINSIGHT_PORT environment variables to override the environment variables created by the service.
The problem is with the name of the service.
The documentation mentions that RedisInsight has an environment variable REDISINSIGHT_PORT which configures the port on which RedisInsight runs.
When you create a service in Kubernetes, every pod that matches the service gets an environment variable <SERVICE_NAME>_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>.
So when you create the above-mentioned service with the name redisinsight, Kubernetes injects the service environment variable REDISINSIGHT_PORT=tcp://<SERVICE_IP>:<SERVICE_PORT>. But RedisInsight expects REDISINSIGHT_PORT to be a port number, not an endpoint, which makes the pod crash when RedisInsight tries to use that environment variable as the port number.
So change the name of the service to something other than redisinsight and it should work.
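Alternatively, if you really want to keep the service name redisinsight, the documentation note quoted above says you can set REDISINSIGHT_HOST and REDISINSIGHT_PORT explicitly on the container so they take precedence over the service-injected variables. A minimal sketch of that container spec (the 0.0.0.0/8001 values are assumptions matching the defaults used in this thread):

containers:
  - name: redisinsight
    image: redislabs/redisinsight:1.6.3
    env:
      - name: REDISINSIGHT_HOST   # explicit value overrides the service-injected variable
        value: "0.0.0.0"
      - name: REDISINSIGHT_PORT   # must be a bare port number, not an endpoint
        value: "8001"
    ports:
      - containerPort: 8001
        protocol: TCP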
Here's a quick deployment and service file:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods the deployment is managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redislabs/redisinsight:1.6.3 #repo/image
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: db #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /db
          ports:
            - containerPort: 8001 #exposed container port and protocol
              protocol: TCP
      volumes:
        - name: db
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Service:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-http # name should not be redisinsight
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8001
  selector:
    app: redisinsight
Please note the name of the service.
Logs of redisinsight pod:
INFO 2020-09-02 11:46:20,689 redisinsight_startup Registered SIGTERM handler
INFO 2020-09-02 11:46:20,689 redisinsight_startup Starting webserver...
INFO 2020-09-02 11:46:20,689 redisinsight_startup Visit http://0.0.0.0:8001 in your web browser. Press CTRL-C to exit.
Also the service end point (from minikube):
$ minikube service list
|----------------------|------------------------------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|------------------------------------|--------------|-------------------------|
| default | kubernetes | No node port |
| default | redisinsight-http | 80 | http://172.17.0.2:30860 |
| kube-system | ingress-nginx-controller-admission | No node port |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|------------------------------------|--------------|-------------------------|
BTW, if you don't want to create a service at all (which is not related to the question), you can do port forwarding:
kubectl port-forward <redisinsight-pod-name> 8001:8001
The problem is related to the service, as it's interfering with the pod and causing it to crash.
As we can read in the Redis docs Installing RedisInsight on Kubernetes
Once the deployment has been successfully applied and the deployment complete, access RedisInsight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below:
kubectl port-forward deployment/redisinsight 8001
Open your browser and point to http://localhost:8001
Or by a service, which in your case, while using GCP, can look like this:
apiVersion: v1
kind: Service
metadata:
  name: redisinsight
spec:
  ports:
    - protocol: TCP
      port: 8001
      targetPort: 8001
      name: redisinsight
  type: LoadBalancer
  selector:
    app: redisinsight
Once the service receives an External-IP, you can use it to access RedisInsight.
crou#cloudshell:~ $ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 9d
redisinsight LoadBalancer 10.8.7.0 34.67.171.112 8001:31456/TCP 92s
via http://34.67.171.112:8001/ in my example.
It happened to me too. In case anyone misses the conversation in the comments, here is the solution.
Deploy the redisinsight pod first and wait until it runs successfully.
Then deploy the service.
I think this is a bug and not really a working solution, because a pod can die at any time. It kind of defeats the purpose of using Kubernetes.
Someone has reported this issue here: https://forum.redislabs.com/t/redisinsight-fails-to-launch-in-kubernetes/652/2
There are several problems with redisinsight running in k8s as suggested by the current documentation. I will list them below:
Suggestion is to use emptyDir
Issue: emptyDir will most likely run out of space for larger Redis clusters
Solution: Use a persistent volume
The redisinsight docker container runs as a redisinsight user
Issue: the redisinsight user is not tied to a specific UID, so the persistent volume permissions cannot be set in a way that allows access to the PVC
Solution: use cryptexlabs/redisinsight:latest, which extends redislabs/redisinsight:latest but sets the UID for redisinsight to 777
Default permissions do not allow access for redisinsight
Issue: redisinsight will not be able to access the /db directory
Solution: Use an init container to set the directory permissions so that user 777 owns the /db directory (see the sketch after this list)
Suggestion is to use a NodePort for the service
Issue: this is a security hole
Solution: Use ClusterIP instead, and then use kubectl port forwarding or another secure method to access redisinsight
Accessing rdb files locally is impractical.
Issue: rdb files for large clusters must be downloaded and uploaded via kubectl
Solution: Use the S3 option. If you are using kube2iam in an EKS cluster, you'll need to create a special role that has access to the bucket. Before that, you must create a backup of your cluster and then export the backup following these instructions: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html
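Here is a minimal sketch of the init-container approach from the list above, adapted from the deployment earlier in this thread (the alpine tag is an assumption; the 777 UID comes from the cryptexlabs image mentioned above):

initContainers:
  - name: change-data-dir-ownership
    image: alpine:3.9                                  # any small image with chown works
    command: ["sh", "-c", "chown -R 777:777 /db"]      # make UID 777 the owner of the data directory
    volumeMounts:
      - name: redisinsight
        mountPath: /db                                 # the same PVC-backed volume the main container mounts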
Summary
RedisInsight is a good tool, but currently running it inside a Kubernetes cluster is an absolute nightmare.
Related
I've gone through a fair few stackoverflow posts, none are working... So here's my issue:
I've got a simple Node app listening on 0.0.0.0 at port 5000; it has a single endpoint at /.
I've got two k8s objects, here's my Deployment object:
### pf deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: pf-deployment
spec:
  # 3 Pods should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: public-facing
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: public-facing
    spec:
      containers:
        - name: public-facing
          # Run this image
          image: pf:ale8k
          ports:
            - containerPort: 5000
Next, here is my Service object:
### pf service
apiVersion: v1
kind: Service
metadata:
  name: pf-service
  labels:
    run: pf-service-label
spec:
  type: NodePort ### may be omitted as it is a default type
  selector:
    name: public-facing ### should match your labels defined for your angular pods
  ports:
    - protocol: TCP
      targetPort: 5000 ### port your app listens on
      port: 5000 ### port on which you want to expose it within your cluster
Finally, a very simple dockerfile:
### generic docker file
FROM node:12
WORKDIR /usr/src/app
COPY . .
RUN npm i
EXPOSE 5000
CMD ["npm", "run", "start"]
I have my image in minikube's local Docker registry, so that's not the issue...
When I try:
curl $(minikube service pf-service --url)
I get:
curl: (7) Failed to connect to 192.168.99.101 port 31753: Connection refused
When I try:
minikube service pf-service
I get a little further output:
Most likely you need to configure your SUID sandbox correctly
I have the hello-minikube image running and it works perfectly fine. So I presume it isn't my NACL?
I'm very new to kubernetes, so apologies in advance if it's very simple.
Thanks!
The Service has the selector name: public-facing but the pod has the label app: public-facing. They need to be the same for the Endpoints of the service to be populated with pod IPs.
If you execute the command below
kubectl describe svc pf-service
you will see that Endpoints has no IPs, which is the cause of the connection refused error.
Change the selector in the service as below to make it work.
### pf service
apiVersion: v1
kind: Service
metadata:
  name: pf-service
  labels:
    run: pf-service-label
spec:
  type: NodePort ### may be omitted as it is a default type
  selector:
    app: public-facing ### should match your labels defined for your angular pods
  ports:
    - protocol: TCP
      targetPort: 5000 ### port your app listens on
      port: 5000 ### port on which you want to expose it within your cluster
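After applying the change, a quick way to confirm the fix (names taken from the question) is to check that the Endpoints object is now populated and then retry the request:

kubectl describe svc pf-service    # Endpoints should now list a pod IP such as 172.17.x.x:5000
kubectl get endpoints pf-service   # the same information in a shorter form
curl $(minikube service pf-service --url)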
I created a YAML file to create a RabbitMQ Kubernetes cluster. I can see the pods, but when I run kubectl get deployment I can't see them there. I also can't access the RabbitMQ UI page.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit
spec:
  serviceName: rabbit
  replicas: 3
  selector:
    matchLabels:
      app: rabbit
  template:
    metadata:
      labels:
        app: rabbit
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq
      nodeSelector:
        rabbitmq: "clustered"
@arghya-sadhu's answer is correct.
NB I'm unfamiliar with RabbitMQ, but you may need to use a different image (see 'Management Plugin') to include the UI.
See below for more details.
You should be able to hack your way to the UI on one (!) of the Pods via:
PORT=8888
kubectl port-forward pod/rabbit-0 --namespace=${NAMESPACE} ${PORT}:15672
And then browse localhost:${PORT} (if 8888 is unavailable, try another).
I suspect (!) this won't work unless you use the image with the management plugin.
Plus
The Service needs to select the StatefulSet's Pods
Within the Service spec you should add perhaps:
selector:
app: rabbit
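Putting that together, a sketch of the corrected Service (keeping the NodePort type and the ports from the question; the selector is the key addition) might look like:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  type: NodePort
  selector:
    app: rabbit          # must match the StatefulSet's pod labels
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui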
Presumably (!?) you are using a private repo (because you have imagePullSecrets).
If you don't and wish to use DockerHub, you may remove the imagePullSecrets section.
It's useful to document (!) container ports albeit not mandatory:
In the StatefulSet
ports:
- containerPort: 5672
- containerPort: 15672
Debug
NAMESPACE="default" # Or ...
Ensure the StatefulSet is created:
kubectl get statefulset/rabbit --namespace=${NAMESPACE}
Check the Pods:
kubectl get pods --selector=app=rabbit --namespace=${NAMESPACE}
You can check that the Pods are bound to a (!) Service:
kubectl describe endpoints/rabbit --namespace=${NAMESPACE}
NB You should see 3 addresses (one per Pod)
Get the NodePort with either:
kubectl get service/rabbit --namespace=${NAMESPACE} --output=json
kubectl describe service/rabbit --namespace=${NAMESPACE}
You will need to use the NodePort to access both the MQTT endpoint and the UI.
StatefulSets and Deployments are different Kubernetes resources. You have created a StatefulSet; that's why you don't see a Deployment. If you run
kubectl get statefulset you should see it. Both a StatefulSet and a Deployment ultimately create pods, so you should be able to see the rabbitmq pods if you run kubectl get pods
Since you have created a NodePort service, you should be able to access it via http://nodeip:nodeport, where nodeip is the IP of any worker node in your Kubernetes cluster.
You can find out the NodePort (a number between 30000-32767) with
kubectl describe services rabbit
Here is the doc on accessing a NodePort service from outside the cluster.
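For example, one way to assemble the URL from the cluster itself (standard kubectl jsonpath queries; the ui port name comes from the Service in the question) would be:

NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
UI_PORT=$(kubectl get service rabbit -o jsonpath='{.spec.ports[?(@.name=="ui")].nodePort}')
echo "RabbitMQ UI: http://${NODE_IP}:${UI_PORT}"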
I have a configuration where my service is built on port 8080.
My Docker image also uses 8080.
I set up my ReplicaSet with a configuration like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-backend-rs
spec:
  containers:
    - name: my-app-backend
      image: go-my-app-backend
      ports:
        - containerPort: 8080
      imagePullPolicy: Never
And finally I create a Service of type NodePort, also on port 8080, with the configuration below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-backend-rs
  name: my-app-backend-svc-nodeport
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-app-backend
And after I run describe on the NodePort service, I see that I should hit my app (e.g. curl http://127.0.0.1:31859/) at the address http://127.0.0.1:31859, but I get no response.
Type: NodePort
IP: 10.110.250.176
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31859/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I not understanding and what am I doing wrong? Can anyone explain me that?
From your output, I see the endpoint below has been created. So it seems one pod is ready to serve this NodePort service, and the label is not an issue here.
Endpoints: 172.17.0.6:8080
First, ensure you are able to access the app by running the curl http://podhostname:8080 command once you are logged into the pod using kubectl exec -it podname sh (if curl is installed on the image running in that pod's container). If not, run a curl ambassador container as a sidecar in the pod and from that pod try to access http://<>:8080 and ensure it is working.
Remember you can't access the NodePort service as localhost, since that will be pointing to your master node if you are running this command from the master node.
You have to access this service by one of the methods below.
<CLUSTER_IP>:<PORT> - in your case: 10.110.250.176:8080
<1st node's IP>:31859
<2nd node's IP>:31859
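If you would rather not log into an existing pod, a throwaway curl pod can test the ClusterIP path from inside the cluster (curlimages/curl is an assumed helper image; the IP and port come from the describe output above):

kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- \
  curl -s http://10.110.250.176:8080/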
I tried to use curl after kubectl exec -it podname sh
In this very example, the double dash is missing in front of the sh command.
Please note that the correct syntax can be checked anytime with kubectl exec -h and looks like:
kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
If you have only one container per Pod, it can be simplified to:
kubectl exec -it PODNAME -- COMMAND
The caveat of not specifying the container is that, in the case of multiple containers on that Pod, you'll be connected to the first one :)
Example: kubectl exec -it pod/frontend-57gv5 -- curl localhost:80
I also tried to hit 10.110.250.176:80:31859 but this is incorrect I think. Sorry, but I'm a beginner at network stuff.
Yes, that is not correct, as the value for :port occurs twice. In that example you need to hit 10.110.250.176:8080 (as 10.110.250.176 is the ClusterIP).
And after I put describe on NodePort I see that I should hit (e.g. curl http://127.0.0.1:31859/) to my app on address http://127.0.0.1:31859, but I have no response.
It depends on where you are going to run that command.
In this very case it is not clear what exactly you have put into the ReplicaSet config (whether the Service's selector matches the ReplicaSet's labels), so let me explain how this is supposed to work.
Assuming we have the following ReplicaSet (the below example is a slightly modified version of the official documentation on the topic):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: guestbook
    tier: frontend-meta
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend-label
  template:
    metadata:
      labels:
        tier: frontend-label ## shall match spec.selector.matchLabels.tier
    spec:
      containers:
        - name: php-redis
          image: gcr.io/google_samples/gb-frontend:v3
And the following service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-svc-tier-nodeport
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    tier: frontend-label ## shall match labels from ReplicaSet spec
We can create the ReplicaSet (RS) and Service. As a result, we shall be able to see the RS, Pods, Service and Endpoints:
kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
frontend-rs 2 2 2 10m php-redis gcr.io/google_samples/gb-frontend:v3 tier=frontend-label
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
frontend-rs-76sgd 1/1 Running 0 11m 10.12.0.31 gke-6v3n
frontend-rs-fxxq8 1/1 Running 0 11m 10.12.1.33 gke-m7z8
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
frontend-svc-tier-nodeport NodePort 10.0.5.10 <none> 80:32113/TCP 9m41s tier=frontend-label
kubectl get ep -o wide
NAME ENDPOINTS AGE
frontend-svc-tier-nodeport 10.12.0.31:80,10.12.1.33:80 10m
kubectl describe svc/frontend-svc-tier-nodeport
Selector: tier=frontend-label
Type: NodePort
IP: 10.0.5.10
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32113/TCP
Endpoints: 10.12.0.31:80,10.12.1.33:80
The important thing to see from my example is that the port was set to 80:32113/TCP for the service we created.
That allows us to access the "gb-frontend:v3" app in a few different ways:
from inside cluster: curl 10.0.5.10:80
(CLUSTER-IP:PORT) or curl frontend-svc-tier-nodeport:80
from external network (internet): curl PUBLIC_IP:32113
here PUBLIC_IP is the IP at which you can reach a Node in your cluster. All the nodes in the cluster listen on the NodePort and forward requests according to the Service's selector.
from the Node : curl localhost:32113
Hope that helps.
I have two Java microservices (caller.jar, which calls called.jar).
We can set the caller service's HTTP port through an env var CALLERPORT and the address of the called service through an env var CALLEDADDRESS.
So the caller uses two env vars.
We must also set the called service's env var CALLEDPORT in order to set the specific HTTP port on which the called service listens for HTTP requests.
I don't know exactly how to simply expose these variables from a Dockerfile, in order to set them using Kubernetes.
Here is how I made the two Dockerfiles:
Dockerfile of caller
FROM openjdk:8-jdk-alpine
# ENV CALLERPORT (its own port)
# ENV CALLEDADDRESS (the other service address)
ADD caller.jar /
CMD ["java", "-jar", "caller.jar"]
Dockerfile of called
FROM openjdk:8-jdk-alpine
# ENV CALLEDPORT (its own port)
ADD called.jar /
CMD ["java", "-jar", "called.jar"]
With these I've made two Docker images:
myaccount/caller
myaccount/called
Then I've made the two deployment YAMLs in order to let K8s deploy (on minikube) the two microservices using replicas and load balancers.
deployment-caller.yaml
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: caller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: caller
  labels:
    app: caller
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: caller
      tier: caller
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: caller
        tier: caller
    spec:
      containers:
        - image: myaccount/caller
          name: caller
          env:
            - name: CALLERPORT
              value: "8080"
            - name: CALLEDADDRESS
              value: called-loadbalancer # WHAT TO PUT HERE?!
          ports:
            - containerPort: 8080
              name: caller
And deployment-called.yaml
apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
  selector:
    app: called
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: called
  labels:
    app: called
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: called
      tier: called
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: called
        tier: called
    spec:
      containers:
        - image: myaccount/called
          name: called
          env:
            - name: CALLEDPORT
              value: "8081"
          ports:
            - containerPort: 8081
              name: called
IMPORTANT:
The individual services work well when called on their own (such as calling a healthcheck endpoint), but when calling the endpoint that involves communication between the two services, there is this error:
java.net.UnknownHostException: called
The pods are correctly running and active, but I guess the problem is the part of the deployment.yaml in which I must define how to find the target service, i.e. here:
spec:
  containers:
    - image: myaccount/caller
      name: caller
      env:
        - name: CALLERPORT
          value: "8080"
        - name: CALLEDADDRESS
          value: called-loadbalancer # WHAT TO PUT HERE?!
      ports:
        - containerPort: 8080
          name: caller
Neither
called
nor
called-loadbalancer
nor
http://caller
kubectl get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/called-855cc4d89b-4gf97 1/1 Running 0 3m23s 172.17.0.4 minikube <none> <none>
pod/called-855cc4d89b-6268l 1/1 Running 0 3m23s 172.17.0.5 minikube <none> <none>
pod/caller-696956867b-9n7zc 1/1 Running 0 106s 172.17.0.6 minikube <none> <none>
pod/caller-696956867b-djwsn 1/1 Running 0 106s 172.17.0.7 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/called-loadbalancer LoadBalancer 10.99.14.91 <pending> 8081:30161/TCP 171m app=called
service/caller-loadbalancer LoadBalancer 10.107.9.108 <pending> 8080:30078/TCP 65m app=caller
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 177m <none>
works when put in that line of the deployment.yaml.
So what should I put in this line?
The short answer is you don't need to expose them in the Dockerfile. You can set any environment variables you want when you start a container and they don't have to be specified upfront in the Dockerfile.
You can verify this by starting a container using 'docker run' with '-e' to set env vars and '-it' to get an interactive session. Then echo the value of your env var and you'll see it is set.
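For example, a quick check with the caller image from the question (running sh instead of the jar just to inspect the environment; sh is available in the alpine-based openjdk image):

docker run -e CALLERPORT=8080 -e CALLEDADDRESS=called-loadbalancer -it myaccount/caller sh
# inside the container:
echo $CALLERPORT $CALLEDADDRESS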
You can also get a terminal session with one of the containers in your running Kubernetes Pod with 'kubectl exec' (https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/). From there you can echo environment variables to see that they are set. You can see them more quickly with 'kubectl describe pod ' after getting the pod name with 'kubectl get pods'.
Since you are having problems, you also want to check whether your services are working correctly. Since you are using minikube you can do 'minikube service ' to check they can be accessed externally. You'll also want to check the internal access - see Accessing spring boot controller endpoint in kubernetes pod
Your approach of using service names and ports is valid. With a bit of debugging you should be able to get it working. Your setup is similar to an illustration I did in https://dzone.com/articles/kubernetes-namespaces-explained so referring to that might help (except you are using env vars directly instead of through a configmap but it amounts to the same).
I think that in the caller you are injecting the wrong port in the env var - you are putting the caller's own port and not the port of what it is trying to call.
To access services inside Kubernetes you should use this DNS:
http://caller-loadbalancer.default.svc.cluster.local:8080
http://called-loadbalancer.default.svc.cluster.local:8081
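To verify that the name resolves and the called service answers, you could exec into one of the caller pods from the question and hit the called service directly (wget ships with the alpine base image; the /health path is an assumption, adjust it to whatever endpoint your app exposes):

kubectl exec -it caller-696956867b-9n7zc -- \
  wget -qO- http://called-loadbalancer.default.svc.cluster.local:8081/health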
First of all, it's completely impossible to understand what you want. Your post starts with:
We can set...
We must set...
Nobody here knows what you want to do, and it would be much more useful to see some definition of done you are expecting.
This having been said now, I have to turn to your substantive question...
env:
  - name: CALLERPORT
    value: "8080"
  - name: CALLEDADDRESS
    value: called-loadbalancer # WHAT TO PUT HERE?!
ports:
  - containerPort: 8080
    name: caller
These things will be exported by k8s automatically. For example, I have a service kibana with port: 80 in the service definition:
svc/kibana ClusterIP 10.222.81.249 <none> 80/TCP 1y app=kibana
This is how I can get it from within a different pod in the same namespace:
root@some-pod:/app# env | grep -i kibana
KIBANA_SERVICE_PORT=80
KIBANA_SERVICE_HOST=10.222.81.249
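In your case the equivalent check (using one of the caller pod names from your kubectl get pods output; the variable names are derived automatically from the service name called-loadbalancer) would look something like:

kubectl exec caller-696956867b-9n7zc -- env | grep -i called
# expected to show, among others:
# CALLED_LOADBALANCER_SERVICE_HOST=10.99.14.91
# CALLED_LOADBALANCER_SERVICE_PORT=8081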
Moving forward, why do you use LoadBalancer? Without a cloud provider it will behave similarly to NodePort, but it seems like ClusterIP is all you need.
Next, the service ports can be the same and there won't be any port collisions, because the ClusterIP is unique every time and therefore the socket will be unique for each service. Your services could be described like this:
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 80 <--------------------
      targetPort: 8080
  selector:
    app: caller

apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 80 <------------------
      targetPort: 8081
  selector:
    app: called
That would let you use the service names alone, without specifying ports:
http://caller-loadbalancer.default.svc.cluster.local
http://called-loadbalancer.default.svc.cluster.local
or
http://caller-loadbalancer.default
http://called-loadbalancer.default
or (within the similar namespace):
http://caller-loadbalancer
http://called-loadbalancer
or (depending on the lib)
caller-loadbalancer
called-loadbalancer
The same goes for containerPort/targetPort! Why do you use 8081 and 8080? Who cares about internal container ports? I agree that different cases happen, but in this case you have a single process inside and you are definitely not going to run more processes there, are you? So they could also be the same.
I'd like to advise you to use Stack Overflow a different way: don't ask how to do something your way; it's much better to ask how to do something the best way.
I have created a Kubernetes cluster and deployed Jenkins with the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: jenkins-ci
    spec:
      containers:
        - name: jenkins-ci
          image: jenkins:2.32.2
          ports:
            - containerPort: 8080
and service by
apiVersion: v1
kind: Service
metadata:
  name: jenkins-cli-lb
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 8080
      nodePort: 30000
  # label keys and values that must match in order to receive traffic for this service
  selector:
    run: jenkins-ci
Now I can access the Jenkins UI in my browser without any problems. My issue: I've come into a situation in which I need to restart the Jenkins service manually. How do I do that?
Just run kubectl delete pods -l run=jenkins-ci. This will delete all pods with this label (your Jenkins containers).
Since they are managed by a Deployment, it will re-create the containers. Network routing will be adjusted automatically (again because of the label selector).
See https://kubernetes.io/docs/reference/kubectl/cheatsheet/
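On newer kubectl versions (1.15+), a rolling restart of the Deployment achieves the same effect without deleting pods by hand; the Deployment in this thread is named jenkins-ci:

kubectl rollout restart deployment/jenkins-ci
kubectl rollout status deployment/jenkins-ci   # wait for the new pod to become ready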
You can use the command below to enter the pod's container.
$ kubectl exec -it <kubernetes-pod> -- /bin/bash
After that, run the Jenkins service restart command.
For more details, please refer to: how to restart a service inside a pod in a Kubernetes cluster.