Kubernetes + Istio Ingress Gateway port - docker

I have an application running in a Kubernetes pod (on my local Docker Desktop, with Kubernetes enabled), listening on port 8080. I then have the following Kubernetes configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myrelease-foobar-app-gw
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: default-foobar-local-credential
    hosts:
    - test.foobar.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myrelease-foobar-app-vs
  namespace: default
spec:
  hosts:
  - test.foobar.local
  gateways:
  - myrelease-foobar-app-gw
  http:
  - match:
    - port: 443
    route:
    - destination:
        host: myrelease-foobar-app.default.svc.cluster.local
        subset: foobarAppDestination
        port:
          number: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myrelease-foobar-app-destrule
  namespace: default
spec:
  host: myrelease-foobar-app.default.svc.cluster.local
  subsets:
  - name: foobarAppDestination
    labels:
      app.kubernetes.io/instance: myrelease
      app.kubernetes.io/name: foobar-app
---
apiVersion: v1
kind: Service
metadata:
  name: myrelease-foobar-app
  namespace: default
  labels:
    helm.sh/chart: foobar-app-0.1.0
    app.kubernetes.io/name: foobar-app
    app.kubernetes.io/instance: myrelease
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: foobar-app
    app.kubernetes.io/instance: myrelease
This works fine. But I'd like to change that port 443 to something else, say 8443 (because I will have multiple Gateways). When I do that, I can't access the application anymore. Is there some configuration I'm missing? I'm guessing I need to configure Istio to accept port 8443 too? I installed Istio using the following command:
istioctl install --set profile=default -y
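For what it's worth, with that install method the usual way to add a custom gateway port is an IstioOperator overlay rather than hand-editing the Service. This is only a sketch under that assumption; the foobarhttps name and the 8443/8443 numbers are illustrative:

```yaml
# Sketch (assumption): extend the default ingress gateway Service with an
# extra HTTPS front-end port via an IstioOperator overlay, then install
# with `istioctl install -f` pointing at this file.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # keep the stock ports (80/443) and add the custom one
          - name: foobarhttps        # illustrative name
            port: 8443               # port clients connect to
            targetPort: 8443         # the gateway pod's HTTPS listener
```

An overlay like this also survives future istioctl upgrades, which a hand-edited Service does not.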
Edit:
I've done a bit more reading (https://www.dangtrinh.com/2019/09/how-to-open-custom-port-on-istio.html), and I've done the following:
kubectl -n istio-system get service istio-ingressgateway -o yaml > istio_ingressgateway.yaml
edit istio_ingressgateway.yaml, and add the following:
- name: foobarhttps
  nodePort: 32700
  port: 445
  protocol: TCP
  targetPort: 8445
kubectl apply -f istio_ingressgateway.yaml
Change within my Gateway above:
- port:
    number: 445
    name: foobarhttps
    protocol: HTTPS
Change within my VirtualService above:
http:
- match:
  - port: 445
But I still can't access it from my browser (https://foobar.test.local:445).

I suppose that port has to be mapped on the Istio ingress gateway Service, so if you want to use a custom port, you have to customize that Service.
But usually multiple Gateways can use the same port without causing a clash, so for that use case it should not be necessary to do this at all.
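To illustrate that last point, here is a sketch (all names made up) of two Gateways sharing port 443 on the same ingress gateway, distinguished only by their hosts:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-one-gw         # illustrative
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-app-one  # server names must be unique on the gateway workload
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: app-one-cert
    hosts:
    - one.example.local
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-two-gw         # illustrative
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-app-two
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: app-two-cert
    hosts:
    - two.example.local
```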

Fixed it. What I'd done wrong in my edit above is this:
- name: foobarhttps
  nodePort: 32700
  port: 445
  protocol: TCP
  targetPort: 8443
(notice that targetPort stays 8443). I'm guessing there is an Istio component listening on port 8443 which handles all this HTTPS stuff. Thanks user140547 for the help!
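Reading the thread back, the port numbers that have to line up are (my annotation of the configs above):

```yaml
# istio-ingressgateway Service entry: browser port -> gateway pod listener
- name: foobarhttps
  port: 445          # what the browser connects to
  targetPort: 8443   # Envoy's existing HTTPS listener inside the gateway pod
---
# Gateway server: the same front-end port number
- port:
    number: 445
    name: foobarhttps
    protocol: HTTPS
---
# VirtualService match: the same port again, then route on to the app Service
- match:
  - port: 445
```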

Related

Cannot access ingress on local minikube

I am trying to set up a Jenkins server inside a Kubernetes container with minikube. Following the documentation for this, I have the following Kubernetes configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  selector:
    app: jenkins
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
    kubernetes.io/ingress.class: nginx
    ## tells ingress to check for regex in the config file
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - http:
      paths:
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: jenkins-service
            port:
              number: 8080
I am trying to create a deployment (pod), a service for it, and an ingress to expose it.
I got the IP address of the ingress with kubectl get ingress; it is the following:
NAME              CLASS    HOSTS   ADDRESS        PORTS   AGE
jenkins-ingress   <none>   *                      80      5s
jenkins-ingress   <none>   *       192.168.49.2   80      5s
When I try to open 192.168.49.2 in the browser, I get a timeout. The same happens when I try to ping this IP address from the terminal.
The ingress addon is enabled in minikube, and when I describe the ingress, I see the following:
Name:             jenkins-ingress
Namespace:        default
Address:          192.168.49.2
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path    Backends
  ----        ----    --------
  *
              /?(.*)  jenkins-service:8080 (172.17.0.4:8080)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
              nginx.ingress.kubernetes.io/use-regex: true
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    6m15s (x2 over 6m20s)  nginx-ingress-controller  Scheduled for sync
Can someone tell me what I am doing wrong and how I can access the Jenkins instance through the browser?

Why ingress always return 502 Error to specific service nodeport only?

I have an Alpine Docker image that runs my raw PHP website on an Apache server (PHP 7.4), with EXPOSE 80.
I want to run the image on Kubernetes (GKE) with an ingress controller.
I'm pushing the image to Google Container Registry with the gcloud command.
Both the deployment and the service were created successfully as NodePort, with no errors.
The ingress I deployed is from google tutorials(https://cloud.google.com/community/tutorials/nginx-ingress-gke)
In my ingress now there is:
34.68.78.46.xip.io/
34.68.78.46.xip.io/hello
34.68.78.46.xip.io/jb(/|$)(.*)
The /hello path uses the same configuration as the tutorial and works fine.
The /jb path uses the configuration shown below and always returns a 502 error.
The ingress details in the GCP console show no warnings or errors.
I have checked:
Kubernetes GKE Ingress : 502 Server Error
GKE Ingress: 502 error when downloading file
502 Server Error Google kubernetes
Here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jomlahbazar-deployment
spec:
  selector:
    matchLabels:
      greeting: jomlah
      department: bazar
  replicas: 1
  template:
    metadata:
      labels:
        greeting: jomlah
        department: bazar
    spec:
      containers:
      - name: jomlah
        image: "us.gcr.io/third-nature-273904/jb-img-1-0:v1"
        ports:
        - containerPort: 80
        env:
        - name: "PORT"
          value: "80"
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  name: jomlahbazar-service
spec:
  type: NodePort
  selector:
    greeting: jomlah
    department: bazar
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Here is the ingress file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: 34.68.78.46.xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: jomlahbazar-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: hello-app
          servicePort: 8080
      - path: /jb(/|$)(.*)
        backend:
          serviceName: jomlahbazar-service
          servicePort: 80
Here is the ingress description:
Name:             ingress-resource
Namespace:        default
Address:          34.68.78.46
Default backend:  default-http-backend:80 (10.20.1.6:8080)
Rules:
  Host                Path          Backends
  ----                ----          --------
  34.68.78.46.xip.io
                      /             jomlahbazar-service:80 (<none>)
                      /hello        hello-app:8080 (10.20.2.61:8080)
                      /jb(/|$)(.*)  jomlahbazar-service:80 (<none>)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/add-base-url: true
              nginx.ingress.kubernetes.io/rewrite-target: /$2
              nginx.ingress.kubernetes.io/ssl-redirect: false
              nginx.ingress.kubernetes.io/use-regex: true
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/add-base-url":"true","nginx.ingress.kubernetes.io/rewrite-target":"/$2","nginx.ingress.kubernetes.io/ssl-redirect":"false","nginx.ingress.kubernetes.io/use-regex":"true"},"name":"ingress-resource","namespace":"default"},"spec":{"rules":[{"host":"34.68.78.46.xip.io","http":{"paths":[{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/"},{"backend":{"serviceName":"hello-app","servicePort":8080},"path":"/hello"},{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/jb(/|$)(.*)"}]}}]}}
Events:
  Type    Reason          Age                 From                      Message
  ----    ------          ---                 ----                      -------
  Normal  AddedOrUpdated  36m (x6 over 132m)  nginx-ingress-controller  Configuration for default/ingress-resource was added or updated
The output of kubectl get ing ingress-resource -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/add-base-url":"true","nginx.ingress.kubernetes.io/rewrite-target":"/$2","nginx.ingress.kubernetes.io/ssl-redirect":"false","nginx.ingress.kubernetes.io/use-regex":"true"},"name":"ingress-resource","namespace":"default"},"spec":{"rules":[{"host":"34.68.78.46.xip.io","http":{"paths":[{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/"},{"backend":{"serviceName":"hello-app","servicePort":8080},"path":"/hello"},{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/jb(/|$)(.*)"}]}}]}}
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
  creationTimestamp: "2021-02-11T06:00:07Z"
  generation: 5
  name: ingress-resource
  namespace: default
  resourceVersion: "2195351"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/ingress-resource
  uid: 74dc822f-91cb-4902-991b-1ad298f44ae6
spec:
  rules:
  - host: 34.68.78.46.xip.io
    http:
      paths:
      - backend:
          serviceName: jomlahbazar-service
          servicePort: 80
        path: /
      - backend:
          serviceName: hello-app
          servicePort: 8080
        path: /hello
      - backend:
          serviceName: jomlahbazar-service
          servicePort: 80
        path: /jb(/|$)(.*)
status:
  loadBalancer:
    ingress:
    - ip: 34.68.78.46
I've run some tests on my GKE cluster. I've replicated your behavior using two hello-world applications, v1 and v2.
Scenario 1
HW 1
spec:
  containers:
  - name: hello1
    image: gcr.io/google-samples/hello-app:1.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
SVC HW1
spec:
  type: NodePort
  selector:
    key: app
  ports:
  - port: 80
    targetPort: 8080
HW 2
spec:
  containers:
  - name: hello2
    image: gcr.io/google-samples/hello-app:2.0
    env:
    - name: "PORT"
      value: "80"
SVC HW2
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: hello2
Ingress
spec:
  rules:
  - http:
      paths:
      - path: /hello2
        backend:
          serviceName: h2
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80
Outputs:
$ curl 34.117.70.75/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-7rmmd
$ curl 34.117.70.75/hello2
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
In this scenario, the deployment was configured to create a pod which listens on port 80 (via the PORT env variable), and containerPort was skipped in the deployment. You can verify what the pod actually listens on using the netstat command:
$ kubectl exec -ti h2-deploy-6dbf5b7899-g7rbj -- bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::80 :::* LISTEN 1/hello-app
/ #
In your service you set targetPort: 8080, so the service expects traffic to go through port 8080. As the pod is listening only on port 80 while traffic is sent to 8080, you get a 502 error.
Scenario 2
After changing the value from "80" to "8080" and applying the new configuration:
$ curl 34.117.70.75/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-7rmmd
$ curl 34.117.70.75/hello2
Hello, world!
Version: 2.0.0
Hostname: h2-deploy-5f5ccfbf9f-fjhrb
Netstat:
$ kubectl exec -ti f2-deploy-5f5ccfbf9f-fjhrb -- bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/hello-app
Solution
Solution 1
You should change your application deployment to:
env:
- name: "PORT"
  value: "8080"
and apply new configuration.
Solution 2
Use containerPort
spec:
  containers:
  - name: jb
    image: "us.gcr.io/third-nature-273904/jb-img-1-0:v3"
    ports:
    - containerPort: 8080
Note
Please note, if you created your own image and used EXPOSE in your Dockerfile, you should configure your Deployment to use that specific port.
Let me know if you still have issues.
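As an illustration of that note, assuming the Dockerfile really does EXPOSE 80 and the app binds port 80, the Deployment and Service would agree like this (names copied from the question; the consistent use of 80 everywhere is my assumption):

```yaml
# Container listens on 80 (Dockerfile: EXPOSE 80), so every hop uses 80.
spec:
  containers:
  - name: jomlah
    image: "us.gcr.io/third-nature-273904/jb-img-1-0:v1"
    ports:
    - containerPort: 80   # matches EXPOSE 80
    env:
    - name: "PORT"
      value: "80"         # the port the app actually binds
---
# The Service then targets that same port:
ports:
- protocol: TCP
  port: 80
  targetPort: 80          # = containerPort
```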

Exposing Neo4j Bolt using Kubernetes Ingress

I'm trying to build a Neo4j learning tool for some of our trainings. I want to use Kubernetes to spin up a Neo4j pod for each participant to use. Currently I'm struggling to expose the bolt endpoint using an Ingress, and I don't know why.
Here are my deployment configs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neo-manager
      type: database
  template:
    metadata:
      labels:
        app: neo-manager
        type: database
    spec:
      containers:
      - name: neo4j
        imagePullPolicy: IfNotPresent
        image: neo4j:3.5.6
        ports:
        - containerPort: 7474
        - containerPort: 7687
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: neo4j-service
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  selector:
    app: neo-manager
    type: database
  ports:
  - port: 7687
    targetPort: 7687
    name: bolt
    protocol: TCP
  - port: 7474
    targetPort: 7474
    name: client
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: learn.neo4j.com
    http:
      paths:
      - path: /
        backend:
          serviceName: neo4j-service
          servicePort: 7474
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: learn
data:
  7687: "learn/neo4j-service:7687"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: learn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.16
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=${POD_NAMESPACE}/tcp-services
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
The client gets exposed nicely and is reachable under learn.neo4j.com, but I don't know where to point the bolt connection to reach the DB. Whatever I try, it fails with ServiceUnavailable: Websocket Connection failure (WebSocket network error: The operation couldn't be completed. Connection refused in the console).
What am I missing?
The nginx-ingress-controller by default creates HTTP(S) proxies only.
In your case you're trying to use a different protocol (bolt), so you need to configure your ingress controller to create a TCP proxy.
To do so, create a ConfigMap (in the nginx-ingress-controller's namespace) similar to the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  7687: "<your neo4j namespace>/neo4j-service:7687"
Then, make sure your ingress controller has the following flag in its command:
--tcp-services-configmap tcp-services
This will make your nginx-ingress controller listen to port 7687 with a TCP proxy.
You can delete the neo4j-bolt-ingress Ingress, that's not going to be used.
Of course you have to ensure that the ingress controller correctly exposes the 7687 port the same way it does with ports 80 and 443, and possibly you'll have to adjust the settings of any firewall and load balancer you might have.
Source: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
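Concretely, "exposing 7687 the same way as 80 and 443" would mean adding a port entry to the Service in front of the controller. A sketch only; the Service name, namespace, and selector depend on how ingress-nginx was installed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # name/namespace depend on your install
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: bolt               # new: forwarded by the tcp-services entry
    port: 7687
    targetPort: 7687
```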
By default it tries to connect to port 7687; if you enter the connection URL http://learn.neo4j.bolt.com:80 (or https), it works.
I haven't used Kubernetes ingress in this context before, but I think that when you use HTTP or HTTPS to connect to Neo4j, you still need the bolt port (7687) to be reachable externally. Does your setup allow for that?
Try using multi-path mapping in your Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: learn.neo4j.com
    http:
      paths:
      - path: /browser
        backend:
          serviceName: neo4j-service
          servicePort: 7474
      - path: /
        backend:
          serviceName: neo4j-service
          servicePort: 7687
You should then be able to access the UI at learn.neo4j.com/browser. The bolt connect URL would then have to be specified as:
bolt+s://learn.neo4j.com:443/

Kubernetes Nginx Ingress Controller expose Nginx Webserver

I basically want to access the Nginx hello page externally by URL. I've made a (working) A record for a subdomain pointing to my v-server running Kubernetes and Nginx ingress: vps.my-domain.com
I installed Kubernetes via kubeadm on CoreOS as a single-node cluster using these tutorials: https://kubernetes.io/docs/setup/independent/install-kubeadm/, https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, and nginx-ingress using https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal.
I also added the following entry to the /etc/hosts file:
31.214.xxx.xxx vps.my-domain.com
(xxx was replaced with the last three digits of the server IP)
I used the following file to create the deployment, service, and ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    run: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "False"
spec:
  rules:
  - host: vps.my-domain.com
    http:
      paths:
      - backend:
          serviceName: my-nginx
          servicePort: 80
Output of describe ing:
core#vps ~/k8 $ kubectl describe ing
Name:             my-nginx
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host               Path  Backends
  ----               ----  --------
  vps.my-domain.com
                           my-nginx:80 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:
    {"apiVersion":"extensions/v1beta1",...}
  kubernetes.io/ingress.class: nginx
  ingress.kubernetes.io/ssl-redirect: False
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ---                ----                      -------
  Normal  UPDATE  49m (x2 over 56m)  nginx-ingress-controller  Ingress default/my-nginx
While I can curl the Nginx hello page using the node IP and port 80, it doesn't work from outside the VM: Failed to connect to vps.my-domain.com port 80: Connection refused.
Did I forget something, or is the configuration just wrong? Any help or tips would be appreciated!
Thank you
EDIT:
Visiting vps.my-domain.com:30519 gives me the nginx welcome page. But in the config I specified port 80.
I got the port from the output of get services:
core#vps ~/k8 $ kubectl get services --all-namespaces | grep "my-nginx"
default my-nginx ClusterIP 10.107.5.14 <none> 80/TCP 1h
I also got it to work on port 80 by adding
externalIPs:
- 31.214.xxx.xxx
to the my-nginx service. But this is not how it's supposed to work, right? In the tutorials and examples for Kubernetes and ingress-nginx, it always worked without externalIPs. Also, the ingress rules don't work now (e.g. if I set the path to /test).
So apparently I was missing one part: the load balancer. I'm not sure why this wasn't mentioned in those instructions as a requirement. But I followed this tutorial: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb and now everything works.
Since MetalLB requires a range of IP addresses, you have to list your single IP address with the /32 subnet: 31.214.xxx.xxx/32
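For reference, the MetalLB layer-2 address pool from that tutorial would then look something like this (legacy ConfigMap format from the linked guide; address redacted like above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 31.214.xxx.xxx/32   # a single IP expressed as a /32 range
```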

Kubernetes: Can not telnet to ClusterIP Service inside my Cluster

I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create two services on top of Jenkins.
One service of type NodePort for port 8080 (it gets mapped to a random port and I can access it outside the cluster; I can also access it inside the cluster using ClusterIP:8080). All fine.
My second service is there so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of my service:
I read about the 3 types of services:
ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
NodePort: not necessary, since port 50000 doesn't need to be exposed outside the cluster
LoadBalancer: I'm not working in the cloud
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet to the ClusterIP:port of the jenkins-discovery service and got a connection refused. I can telnet to the ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was the selector, a part that wasn't that clear to me. I was using different selectors, which caused this issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
