Kubernetes: Service applied but endpoints is <none> - Docker

When I apply a Service for my pod, its endpoints are always <none>. Does anyone know a possible root cause? I have already checked that the selector matches the labels defined in deployment.yaml. Below are the deployment and service files I used, along with the kubectl describe output for the service.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gethnode
      env: dev1
  template:
    metadata:
      labels:
        app: gethnode
        env: dev1
    spec:
      containers:
        - name: gethnode
          image: myserver.th/bc/gethnode:1.1
          ports:
            - containerPort: 8550
          env:
            - name: TZ
              value: Asia/Bangkok
          tty: true
          stdin: true
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 512Mi
      imagePullSecrets:
        - name: regcred-harbor
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  type: ClusterIP
  ports:
    - name: tcp
      port: 8550
      targetPort: 8550
      protocol: TCP
  selector:
    app: gethnode
    env: dev1
kubectl describe svc
Name: gethnode
Namespace: mynamespace
Labels: app=gethnode
env=dev1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gethnode","env":"dev1"},"name":"gethnode","namespace":"c...
Selector: app=gethnode,env=dev1
Type: ClusterIP
IP: 192.97.37.19
Port: tcp 8550/TCP
TargetPort: 8550/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
kubectl get pods -n mynamespace --show-labels
NAME READY STATUS RESTARTS AGE LABELS
console-bctest-6bff897bf4-xmch8 1/1 Running 0 6d3h app=bctest,env=dev1,pod-template-hash=6bff897bf4
console-dev1-595c47c678-s5mzz 1/1 Running 0 20d app=console,env=dev1,pod-template-hash=595c47c678
gethnode-7f9b7bbd77-pcbfc 1/1 Running 0 3s app=gethnode,env=dev1,pod-template-hash=7f9b7bbd77
gotty-dev1-59dcb68f45-4mwds 0/2 ImagePullBackOff 0 20d app=gotty,env=dev1,pod-template-hash=59dcb68f45
kubectl get svc gethnode -n mynamespace -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
gethnode ClusterIP 192.107.220.229 <none> 8550/TCP 64m app=gethnode,env=dev1

Remove env: dev1 from the selector of the service
apiVersion: v1
kind: Service
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  type: ClusterIP
  ports:
    - name: tcp
      port: 8550
      targetPort: 8550
      protocol: TCP
  selector:
    app: gethnode
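A quick way to verify the fix, assuming the file and resource names used in this question:
$ kubectl apply -f service.yaml
$ kubectl get endpoints gethnode -n mynamespace
Once the selector matches a ready pod, the ENDPOINTS column should list the pod IP and port (for example 10.x.x.x:8550) instead of <none>.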

I had the same issue. What I did was delete the Deployment, its associated Secrets, the Service, and the Ingress to start fresh, and then make sure the Deployment labels were consistent with the Service selector, specifically app.kubernetes.io/name: I used to have just name in my deployment but app.kubernetes.io/name in my service, which caused the discrepancy. With that fixed, the endpoints are now populated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook
  namespace: apps
  labels:
    app.kubernetes.io/name: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: webhook
  template:
    metadata:
      labels:
        app.kubernetes.io/name: webhook
    spec:
      containers:
        - name: webhook
          image: registry.min.dev/minio/webhook:latest
          ports:
            - name: http
              containerPort: 23411
          env:
            - name: GH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: webhooksecret
                  key: GH_TOKEN
      imagePullSecrets:
        - name: registry-creds
---
apiVersion: v1
kind: Service
metadata:
  name: webhook
  namespace: apps
  labels:
    app.kubernetes.io/name: webhook
spec:
  ports:
    - name: http
      port: 23411
  selector:
    app.kubernetes.io/name: webhook
And as a result:
$ k get ep webhook -n apps
NAME ENDPOINTS AGE
webhook 192.168.177.67:23411 4m15s
|
|___ Got populated!
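To double-check that the Service selector really matches the pod labels before deleting everything, something like the following should work (namespace and label taken from this answer):
$ kubectl get pods -n apps -l app.kubernetes.io/name=webhook --show-labels
$ kubectl get endpoints webhook -n apps
If the first command returns no pods, the Service selector and the pod labels disagree, which is exactly the discrepancy described above.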

Related

Failed to connect to all addresses - Spark Beam on Kubernetes

I am trying to run a Beam application on Spark on Kubernetes.
beam-deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-beam-jobserver
spec:
  serviceName: spark-headless
  selector:
    matchLabels:
      app: spark-beam-jobserver
  template:
    metadata:
      labels:
        app: spark-beam-jobserver
        app.kubernetes.io/instance: custom_spark
        app.kubernetes.io/name: spark
    spec:
      containers:
        - name: spark-beam-jobserver
          image: apache/beam_spark_job_server:2.33.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8099
              name: jobservice
            - containerPort: 8098
              name: artifact
            - containerPort: 8097
              name: expansion
          volumeMounts:
            - name: beam-artifact-staging
              mountPath: "/tmp/beam-artifact-staging"
          command: [
            "/bin/bash", "-c", "./spark-job-server.sh --job-port=8099 --spark-master-url=spark://spark-primary:7077"
          ]
      volumes:
        - name: beam-artifact-staging
          persistentVolumeClaim:
            claimName: spark-beam-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: spark-beam-jobserver
  labels:
    app: spark-beam-jobserver
spec:
  selector:
    app: spark-beam-jobserver
  type: NodePort
  ports:
    - port: 8099
      nodePort: 32090
      name: job-service
    - port: 8098
      nodePort: 32091
      name: artifacts
  # type: ClusterIP
  # ports:
  #   - port: 8099
  #     name: job-service
  #   - port: 8098
  #     name: artifacts
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-primary
spec:
  serviceName: spark-headless
  replicas: 1
  selector:
    matchLabels:
      app: spark
  template:
    metadata:
      labels:
        app: spark
        component: primary
        app.kubernetes.io/instance: custom_spark
        app.kubernetes.io/name: spark
    spec:
      containers:
        - name: primary
          image: docker.io/secondcomet/spark-custom-2.4.6
          env:
            - name: SPARK_MODE
              value: "master"
            - name: SPARK_RPC_AUTHENTICATION_ENABLED
              value: "no"
            - name: SPARK_RPC_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_SSL_ENABLED
              value: "no"
          ports:
            - containerPort: 7077
              name: masterendpoint
            - containerPort: 8080
              name: ui
            - containerPort: 7078
              name: driver-rpc-port
            - containerPort: 7079
              name: blockmanager
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            limits:
              cpu: 1.0
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 0.5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: spark-primary
  labels:
    app: spark
    component: primary
spec:
  type: ClusterIP
  ports:
    - name: masterendpoint
      port: 7077
      targetPort: 7077
    - name: rest
      port: 6066
      targetPort: 6066
    - name: ui
      port: 8080
      targetPort: 8080
    - name: driver-rpc-port
      protocol: TCP
      port: 7078
      targetPort: 7078
    - name: blockmanager
      protocol: TCP
      port: 7079
      targetPort: 7079
  selector:
    app: spark
    component: primary
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-children
  labels:
    app: spark
spec:
  serviceName: spark-headless
  replicas: 1
  selector:
    matchLabels:
      app: spark
  template:
    metadata:
      labels:
        app: spark
        component: children
        app.kubernetes.io/instance: custom_spark
        app.kubernetes.io/name: spark
    spec:
      containers:
        - name: docker
          image: docker:19.03.5-dind
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-storage
              mountPath: /var/lib/docker
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          resources:
            limits:
              cpu: 1.0
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 100Mi
        - name: children
          image: docker.io/secondcomet/spark-custom-2.4.6
          env:
            - name: DOCKER_HOST
              value: "tcp://localhost:2375"
            - name: SPARK_MODE
              value: "worker"
            - name: SPARK_MASTER_URL
              value: "spark://spark-primary:7077"
            - name: SPARK_WORKER_MEMORY
              value: "1G"
            - name: SPARK_WORKER_CORES
              value: "1"
            - name: SPARK_RPC_AUTHENTICATION_ENABLED
              value: "no"
            - name: SPARK_RPC_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_SSL_ENABLED
              value: "no"
          ports:
            - containerPort: 8081
              name: ui
          volumeMounts:
            - name: beam-artifact-staging
              mountPath: "/tmp/beam-artifact-staging"
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.5
              memory: 1Gi
      volumes:
        - name: dind-storage
          emptyDir: {}
        - name: beam-artifact-staging
          persistentVolumeClaim:
            claimName: spark-beam-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: spark-children
  labels:
    app: spark
    component: children
spec:
  type: ClusterIP
  ports:
    - name: ui
      port: 8081
      targetPort: 8081
  selector:
    app: spark
    component: children
---
apiVersion: v1
kind: Service
metadata:
  name: spark-headless
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/instance: custom_spark
    app.kubernetes.io/name: spark
  type: ClusterIP
$ kubectl get all --namespace spark-beam
NAME READY STATUS RESTARTS AGE
pod/spark-beam-jobserver-0 1/1 Running 0 58m
pod/spark-children-0 2/2 Running 0 58m
pod/spark-primary-0 1/1 Running 0 58m
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
service/spark-beam-jobserver   NodePort    10.97.173.68    <none>        8099:32090/TCP,8098:32091/TCP                  58m
service/spark-children         ClusterIP   10.105.209.30   <none>        8081/TCP                                       58m
service/spark-headless         ClusterIP   None            <none>        <none>                                         58m
service/spark-primary          ClusterIP   10.109.32.126   <none>        7077/TCP,6066/TCP,8080/TCP,7078/TCP,7079/TCP   58m
NAME READY AGE
statefulset.apps/spark-beam-jobserver 1/1 58m
statefulset.apps/spark-children 1/1 58m
statefulset.apps/spark-primary 1/1 58m
beam-application.py
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class ConvertToByteArray(beam.DoFn):
    def __init__(self):
        pass

    def setup(self):
        pass

    def process(self, row):
        try:
            yield bytearray(row + '\n', 'utf-8')
        except Exception as e:
            raise e


def run():
    options = PipelineOptions([
        "--runner=PortableRunner",
        "--job_endpoint=localhost:32090",
        "--save_main_session",
        "--environment_type=DOCKER",
        "--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0"
    ])
    with beam.Pipeline(options=options) as p:
        lines = (p
                 | 'Create words' >> beam.Create(['this is working'])
                 | 'Split words' >> beam.FlatMap(lambda words: words.split(' '))
                 | 'Build byte array' >> beam.ParDo(ConvertToByteArray())
                 | 'Group' >> beam.GroupBy()  # Do future batching here
                 | 'print output' >> beam.Map(print)
                 )


if __name__ == "__main__":
    run()
When I am trying to run the python application in my conda environment:
python beam-application.py
I am getting the below error:
File "beam.py", line 39, in <module>
run()
File "beam.py", line 35, in run
| 'print output' >> beam.Map(print)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 586, in __exit__
self.result = self.run()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 565, in run
return self.runner.run_pipeline(self, self._options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 440, in run_pipeline
job_service_handle.submit(proto_pipeline)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 114, in submit
prepare_response.staging_session_token)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 218, in stage
staging_session_token)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\artifact_service.py", line 237, in offer_artifacts
for request in requests:
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_channel.py", line 426, in __next__
return self._next()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_channel.py", line 826, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNAVAILABLE: WSA Error"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2022-10-10T14:38:39.520460502+00:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: WSA Error {grpc_status:14, created_time:"2022-10-10T14:38:39.520457024+00:00"}]}"
>
I am not sure where exactly the problem is.
What should I pass as job_endpoint and artifact_endpoint?
I also tried port-forwarding:
kubectl port-forward service/spark-beam-jobserver 32090:8099 --namespace spark-beam
kubectl port-forward service/spark-primary 8080:8080 --namespace spark-beam
kubectl port-forward service/spark-children 8081:8081 --namespace spark-beam
I suppose this is based on https://github.com/cometta/python-apache-beam-spark?
spark-beam-jobserver is using service type NodePort. So, if running in a local (minikube) cluster, you won't need any port forwarding to reach the job server.
You should be able to submit a Python job from your local shell using the following pipeline options:
--job_endpoint=localhost:32090
--artifact_endpoint=localhost:32091
Note that your Python code above is missing the artifact_endpoint; you have to provide both endpoints.
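For illustration, a minimal sketch of the adjusted pipeline options from the question, with the missing artifact endpoint added (the ports assume the NodePort mappings 32090/32091 from the Service above):
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:32090",
    "--artifact_endpoint=localhost:32091",   # was missing in the original code
    "--save_main_session",
    "--environment_type=DOCKER",
    "--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0",
])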

How can I publish an external turnserver using NodePort in Kubernetes?

I don't have any prior knowledge of turnserver and am trying to run one on Kubernetes from the "zolochevska/turn-server" image. (My Linux is WSL2 - Windows Subsystem for Linux.)
Deployment yaml: (turnserverdeployment.yml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: turn-server
  namespace: xxx-prod
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: turn-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: turn-server
    spec:
      containers:
        - args:
            - yusuf
            - yusuf123
            - 176.55.12.108
          image: zolochevska/turn-server
          imagePullPolicy: Always
          name: turn-server
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
and the service yaml (turnserverservice.yml):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: turn-server
  name: turn-server
  namespace: xxx-prod
spec:
  ports:
    - name: tcp
      nodePort: 30100
      port: 3478
      protocol: TCP
      targetPort: 3478
    - name: udp
      nodePort: 30100
      port: 3478
      protocol: UDP
      targetPort: 3478
  selector:
    app: turn-server
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Localhost is working, but access via the WSL IP is not. This is my setup.
I tried to set the external IP manually, but the curl call still doesn't work:
kubectl patch svc turn-server -p '{"spec":{"externalIPs":["176.55.yyy.xxx"]}}'

Why does my canary deployment not work with Istio?

I am trying to learn the basics of Istio, so I have gone through the official documentation in order to create an 80/20 canary deployment. I have also followed this DigitalOcean guide, which explains it very clearly for a simple deployment: https://www.digitalocean.com/community/tutorials/how-to-do-canary-deployments-with-istio-and-kubernetes.
I have created a simple app with 2 different messages on the homepage, and then created the VirtualService, Gateway and DestinationRule. As mentioned in the guide, I get the external IP with kubectl -n istio-system get svc and then try to navigate to that address, but I get a 503 error. It seems very simple, but I must be missing something. These are my 3 files (as far as I understand no more files are necessary):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - "*"
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-app
            subset: v1
          weight: 80
        - destination:
            host: flask-app
            subset: v2
          weight: 20
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flask-app
  namespace: istio
spec:
  host: flask-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Here are the yaml files with the deployments and services for v1 and v2:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v1
  name: flask-deployment-v1
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        version: v1
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:1.3
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v2
  name: flask-deployment-v2
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
        version: v2
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:2.0
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service2
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
I have added the labels version: v1 and version: v2 to my deployments, and I have also run the kubectl label ns istio istio-injection=enabled command, but it is still not working.
You named the service flask-service and set the host in your VirtualService to flask-app.
The host field is not a selector but the FQDN of the service you want to route the traffic to. So it should be called flask-service or better flask-service.istio.svc.cluster.local:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20
Alternatively you could just call the service flask-app like the Deployment. But using the full FQDN <service-name>.<namespace-name>.svc.cluster.local is recommended in any case. From docs:
Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> hosts
By the way, you don't need 2 services, just one: your service has a selector for app: flask-app, so it can route traffic to both v1 and v2. How the traffic is split is defined by the VirtualService and DestinationRule, so I would recommend removing the service flask-service2. If you need to route traffic inside the mesh, add mesh to the gateways of the VirtualService, or create a new VirtualService for mesh-internal traffic (see the sketch below), to reach both versions. More on that topic:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> gateways
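As a hedged sketch of the second option above (a separate VirtualService for mesh-internal traffic), reusing the host and subsets from this answer; the name flask-app-internal is made up for illustration, and mesh is Istio's reserved gateway name for sidecar traffic:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app-internal   # hypothetical name, only for illustration
  namespace: istio
spec:
  hosts:
    - flask-service.istio.svc.cluster.local   # mesh traffic matches the service host, not "*"
  gateways:
    - mesh                                    # reserved keyword: all sidecars in the mesh
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1          # subsets come from the DestinationRule
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20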

Kubernetes: Modeling Jobs/Cron tasks for Postgres + Tomcat application

I work on an open source system that is composed of a Postgres database and a Tomcat server. I have Docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with kubernetes.
Here is my first attempt.
apiVersion: v1
kind: Pod
metadata:
  name: dspace-pod
spec:
  volumes:
    - name: "pgdata-vol"
      emptyDir: {}
    - name: "assetstore"
      emptyDir: {}
    - name: my-local-config-map
      configMap:
        name: local-config-map
  containers:
    - image: dspace/dspace:dspace-6_x
      name: dspace
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
    #
    - image: dspace/dspace-postgres-pgcrypto
      name: dspacedb
      ports:
        - containerPort: 5432
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/pgdata"
          name: "pgdata-vol"
      env:
        - name: PGDATA
          value: /pgdata
I have a configMap that is setting the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspace-pod:5432/dspace
    dspace.hostname = dspace-pod
    dspace.baseUrl = http://dspace-pod:8080
    solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a 3rd Docker image that contains the jars that are needed on the command line.
I am interested in modeling these command line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
# https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test#test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
The following configuration has allowed me to start my services (tomcat and postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  # example of a simple property defined using --from-literal
  # example.property.1: hello
  # example.property.2: world
  # example of a complex property defined using --from-file
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspacedb-service:5432/dspace
    dspace.hostname = dspace-service
    dspace.baseUrl = http://dspace-service:8080
    solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
  name: dspacedb-service
  labels:
    app: dspacedb-app
spec:
  type: NodePort
  selector:
    app: dspacedb-app
  ports:
    - protocol: TCP
      port: 5432
      # targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspacedb-deploy
  labels:
    app: dspacedb-app
spec:
  selector:
    matchLabels:
      app: dspacedb-app
  template:
    metadata:
      labels:
        app: dspacedb-app
    spec:
      volumes:
        - name: "pgdata-vol"
          emptyDir: {}
      containers:
        - image: dspace/dspace-postgres-pgcrypto
          name: dspacedb
          ports:
            - containerPort: 5432
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/pgdata"
              name: "pgdata-vol"
          env:
            - name: PGDATA
              value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: dspace-service
  labels:
    app: dspace-app
spec:
  type: NodePort
  selector:
    app: dspace-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspace-deploy
  labels:
    app: dspace-app
spec:
  selector:
    matchLabels:
      app: dspace-app
  template:
    metadata:
      labels:
        app: dspace-app
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - image: dspace/dspace:dspace-6_x-jdk8-test
          name: dspace
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test#test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do persisting the volumes.
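Since the title also asks about cron tasks, here is a minimal, untested sketch of running the same CLI image on a schedule with a CronJob; the name, schedule and dspace command are made up for illustration, while the volumes and ConfigMap reuse the ones defined above (use apiVersion batch/v1beta1 on clusters older than 1.21):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dspace-nightly-cleanup        # hypothetical name
spec:
  schedule: "0 2 * * *"               # hypothetical schedule: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
            - name: "assetstore"
              emptyDir: {}
            - name: my-local-config-map
              configMap:
                name: local-config-map
          containers:
            - name: dspace-cli
              image: dspace/dspace-cli:dspace-6_x
              command: ["/dspace/bin/dspace", "cleanup"]   # hypothetical CLI task
              volumeMounts:
                - mountPath: "/dspace/assetstore"
                  name: "assetstore"
                - mountPath: "/dspace/config/local.cfg"
                  name: "my-local-config-map"
                  subPath: local.cfg
          restartPolicy: Never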

What is the difference between the template section in replicator.yml and pod.yml in Kubernetes?

I'm trying to understand the difference between the manifest files used for bringing up the Kubernetes cluster.
Suppose I have a file called pod.yml that defines my pod, that is, the containers running in it:
Pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: webserver
      image: httpd
      ports:
        - containerPort: 80
          hostPort: 80
And I have a replicator.yml file to launch 3 of these pods:
Replicator.yml
kind: "ReplicationController"
apiVersion: "v1"
metadata:
name: "webserver-controller"
spec:
replicas: 3
selector:
app: "webserver"
template:
spec:
containers:
- name: webserver
image: httpd
ports:
- containerPort: 80
hostport: 80`
Can I avoid the template section in replicator.yml if I'm already using pod.yml to define the images used for the containers in the pod?
Do you need all three manifest files (pod.yml, service.yml and replicator.yml), or can you just use service.yml and replicator.yml to create the cluster?
If you are using a ReplicationController, Deployment, DaemonSet or a PetSet (now StatefulSet), you don't need a separate pod definition. However, a Service should be defined if you want to expose the pods, and it can be declared in the same file.
Example:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: default
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
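With the Service and the ReplicationController separated by --- in one manifest, a single command creates both objects. Assuming the file above is saved as default-http-backend.yaml:
$ kubectl apply -f default-http-backend.yaml
kubectl processes each document in the file in order, so the Service and the controller are created together.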