Jenkins on Kubernetes service YAML parsing error

I'm following this link to set up Jenkins on a Kubernetes cluster. When I try to apply jenkins-svc.yaml it fails with an error.
The jenkins-svc.yaml file content is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui-service
  namespace: jenkins
spec:
    type: ClusterIP # NodePort, LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      # nodePort: 30100
      name: ui
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp-service
  namespace: jenkins
spec:
    type: ClusterIP # NodePort, LoadBalancer
  ports:
    - port: 50000
      targetPort: 50000
  selector:
    app: jenkins
The error I get when I try to apply jenkins-svc.yaml:
# kubectl apply -f jenkins-svc.yaml -n jenkins
error: error parsing jenkins-svc.yaml: error converting YAML to JSON: yaml: line 7: did not find expected key
Please let me know how I can solve this issue.

Please check the indentation of lines 7 and 23: remove the 2 extra spaces before "type" in the spec section of both services.
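For reference, here is the first Service with the indentation corrected (fix the second one the same way); only the spacing in front of type changes, the values stay as posted:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui-service
  namespace: jenkins
spec:
  type: ClusterIP # NodePort, LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      # nodePort: 30100
      name: ui
  selector:
    app: jenkins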

Related

Pods cannot communicate with each other

I have two jobs that will run only once. One is called Master and one is called Slave. As the name implies, the Master pod needs some info from the Slave and then queries some API online.
A simple scheme of how they communicate looks like this:
Slave --- port 6666 ---> Master ---- port 8888 ---> internet:www.example.com
To achieve this I created 5 YAML files:
A job-master.yaml for creating a Master pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels:
    app: master-job
    role: master-job
spec:
  template:
    metadata:
      name: master
    spec:
      containers:
      - name: master
        image: registry.gitlab.com/example
        command: ["python", "run.py", "-wait"]
        ports:
        - containerPort: 6666
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
A service (ClusterIP) that allows the Slave to send info to the Master node on port 6666:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
A Service (NodePort) that will allow the Master to fetch info online:
apiVersion: v1
kind: Service
metadata:
  name: master-np-service
spec:
  type: NodePort
  selector:
    app: master-job
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888
    nodePort: 31000
A job for the Slave pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: slave-job
  labels:
    app: slave-job
spec:
  template:
    metadata:
      name: slave
    spec:
      containers:
      - name: slave
        image: registry.gitlab.com/example2
        ports:
        - containerPort: 6666
        #command: ["python", "run.py", "master-service.default.svc.cluster.local"]
        #command: ["python", "run.py", "10.106.146.155"]
        command: ["python", "run.py", "master-service"]
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
And a service (ClusterIP) that allows the Slave pod to send the info to the Master pod:
apiVersion: v1
kind: Service
metadata:
  name: slave-service
spec:
  selector:
    app: slave-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
But no matter what I do (as can be seen in the commented lines of the job_slave.yaml file), they cannot communicate with each other, except when I put the IP of the Master pod in the command section of the Slave. The Master pod also cannot communicate with the outside world, even though I created a ConfigMap with upstreamNameservers: | ["8.8.8.8"].
Everything is running in a minikube environment.
But I cannot pinpoint what my problem is. Any help is appreciated.
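For reference, the upstreamNameservers setting mentioned above goes into the kube-dns ConfigMap; a minimal sketch, assuming the cluster runs kube-dns (as minikube did at the time):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8"]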
Your Job spec has two parts: a description of the Job itself, and a description of the Pods it creates. (Using a Job here is a little odd and I'd probably pick a Deployment instead, but the same applies here.) The Service object has a selector: that must match the labels: of the Pods.
In the YAML files you show the Jobs have correct labels but the generated Pods don't. You need to add (potentially duplicate) labels to the pod spec part:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels: {...}
spec:
  template:
    metadata:
      # name: will get ignored here
      labels:
        app: master-job
        role: master-job
You should be able to verify with kubectl describe service master-service. At the end of its output will be a line that says Endpoints:. If the Service selector and the Pod labels don't match this will say <none>; if they do match you will see the Pod IP addresses.
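For example, with matching labels the Endpoints line looks something like this (the Pod IP here is hypothetical; the exact output depends on your cluster):
kubectl describe service master-service
...
Endpoints:  172.17.0.5:6666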
(You don't need a NodePort service unless you need to accept requests from outside the cluster; it could be the same as the service you use to accept requests from within the cluster. You don't need to include objects' types in their names. Nothing you've shown has any obvious relevance to communication out of the cluster.)
Try with a headless service:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
and use command: ["python", "run.py", "master-service"] in your job_slave.yaml
Make sure your master job is listening on port 6666 inside your container.

Kubernetes Nginx Ingress Controller expose Nginx Webserver

I basically want to access the Nginx hello page externally by URL. I've made a (working) A record for a subdomain pointing to my v-server running Kubernetes and the Nginx ingress: vps.my-domain.com
I installed Kubernetes via kubeadm on CoreOS as a single-node cluster using these tutorials: https://kubernetes.io/docs/setup/independent/install-kubeadm/, https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, and nginx-ingress using https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal.
I also added the following entry to the /etc/hosts file:
31.214.xxx.xxx vps.my-domain.com
(xxx was replaced with the last three digits of the server IP)
I used the following file to create the deployment, service, and ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    run: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "False"
spec:
  rules:
  - host: vps.my-domain.com
    http:
      paths:
      - backend:
          serviceName: my-nginx
          servicePort: 80
Output of describe ing:
core#vps ~/k8 $ kubectl describe ing
Name:             my-nginx
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host               Path  Backends
  ----               ----  --------
  vps.my-domain.com
                           my-nginx:80 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:
    {"apiVersion":"extensions/v1beta1",...}
  kubernetes.io/ingress.class:         nginx
  ingress.kubernetes.io/ssl-redirect:  False
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  UPDATE  49m (x2 over 56m)  nginx-ingress-controller  Ingress default/my-nginx
While I can curl the Nginx hello page using the node IP and port 80, it doesn't work from outside the VM: Failed to connect to vps.my-domain.com port 80: Connection refused
Did I forget something, or is the configuration just wrong? Any help or tips would be appreciated!
Thank you
EDIT:
Visiting "vps.my-domain.com:30519` gives me the nginx welcome page. But in the config I specified port :80.
I got the port from the output of get services:
core#vps ~/k8 $ kubectl get services --all-namespaces | grep "my-nginx"
default my-nginx ClusterIP 10.107.5.14 <none> 80/TCP 1h
I also got it to work on port 80 by adding
externalIPs:
- 31.214.xxx.xxx
to the my-nginx service. But this is not how it's supposed to work, right? In the tutorials and examples for Kubernetes and ingress-nginx, it always worked without externalIPs. Also, the ingress rules don't work now (e.g. if I set the path to /test).
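For context, a sketch of where that workaround sits in the Service (externalIPs is a standard Service spec field; the answer below covers the more usual bare-metal approach):
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: ClusterIP
  externalIPs:
  - 31.214.xxx.xxx    # the node's public IP; traffic hitting this IP on port 80 reaches the Service
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    run: my-nginx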
So apparently I was missing one part: the load balancer. I'm not sure why this wasn't mentioned as a requirement in those instructions, but I followed this tutorial: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb and now everything works.
Since MetalLB expects a range of IP addresses, you have to list your single IP address with a /32 subnet mask: 31.214.xxx.xxx/32
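A minimal sketch of what that looks like with MetalLB's ConfigMap-based layer 2 configuration (assuming MetalLB is installed in the metallb-system namespace, as in the linked tutorial):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 31.214.xxx.xxx/32   # a single-address "range" via the /32 mask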

Kubernetes: Can not telnet to ClusterIP Service inside my Cluster

I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create 2 services on top of Jenkins.
One service of type NodePort for port 8080 (it gets mapped to a random node port so I can access it from outside the cluster; I can also access it inside the cluster via ClusterIP:8080). All fine.
My second service is there so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of my service:
I read about the 3 types of services:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
NodePort: not necessary, since port 50000 does not need to be exposed outside the cluster.
LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet to the ClusterIP:port of the jenkins-discovery service and got a connection refused. I can telnet to the ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was in the selector, which was the part that wasn't that clear to me. I was using different selectors, which seemed to cause this issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves

Kubernetes Service External IP not being assigned

I have the following deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentication
  labels:
    name: authentication
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authentication-deployment
  namespace: authentication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: authentication
    spec:
      containers:
      - name: authentication
        image: blueapp/authentication:0.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: authentication-deployment
  type: LoadBalancer
  externalName: authentication
I'm pretty new to Kubernetes, but my understanding of what I'm trying to do is: create a namespace, create a deployment of 2 pods in that namespace, and then create a load balancer to distribute traffic to those pods.
When I run
$ kubectl create -f deployment.yaml
everything creates fine, but then the service never gets assigned an external IP
Is there anything obvious that may be causing this?
Your service is of type NodePort.
To get a load balancer assigned to your service you should be using a LoadBalancer service type:
type: LoadBalancer
See documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
External IPs get assigned only in supported cloud environments, provided that your cloud provider is configured correctly.
Observe the error messages in the kube-controller-manager logs when you create your service.
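A sketch of the Service with that change applied; note it also assumes the selector should match the Pod labels from the Deployment above (app: authentication) rather than the Deployment's name:
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: authentication    # matches the labels in the Deployment's Pod template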

External endpoint of Kubernetes dashboard

I was just wondering how to manually set the external endpoint used by the Kubernetes web dashboard.
After creating the namespace kube-system, I ran the following:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Is there a flag I can use to specify which TCP port to use for external access? As far as I can tell, it's just randomly assigning one. I've looked through the documentation but I'm having a hard time finding a solution. Any help would be appreciated.
You can specify the desired port as the nodePort in the yaml spec that you use to create the service. In this case, where the yaml file you linked to defines the service as:
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 9090
    selector:
      app: kubernetes-dashboard
You would want to define it as below, assuming your desired port number is 33333:
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 9090
      nodePort: 33333
    selector:
      app: kubernetes-dashboard
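Once applied, the dashboard should be reachable on that fixed port of any node, assuming the port is allowed by the cluster's configured NodePort range and any firewall in front of the nodes, e.g.:
curl http://<node-ip>:33333/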
