PodDefault not working for Deployment object... working fine for Pod - kubeflow

I am getting the following in the admission webhook logs:
I1004 10:45:33.498781 1 main.go:444] Entering mutatePods in mutating webhook
I1004 10:45:33.508313 1 main.go:465] Looking at pod annotations, found: map[kubernetes.io/psp:eks.privileged]
I1004 10:45:33.799087 1 main.go:485] fetched 23 poddefault(s) in namespace
I1004 10:45:33.799196 1 main.go:85] PodDefault 'enable-feast' is not in the namespcae of pod ''
I1004 10:45:33.799256 1 main.go:85] PodDefault 'enable-feast' is not in the namespcae of pod ''
I1004 10:45:33.799314 1 main.go:85] PodDefault 'enable-feast' is not in the namespcae of pod ''
The Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: feast
  namespace: test
  labels:
    app: feast
    feast-enabled: "true"
spec:
  selector:
    matchLabels:
      app: feast
      feast-enabled: "true"
  template:
    metadata:
      labels:
        app: feast
        feast-enabled: "true"
    spec:
      containers:
        - name: feast
          image: imagetest:v1
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo hello; sleep 10000;done"]
          resources:
            limits:
              cpu: "0.4"
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 100Mi
MutatingWebhookConfiguration:
  service:
    name: admission-webhook-service
    namespace: kubeflow
    path: /apply-poddefault
    port: 443
  failurePolicy: Ignore
  matchPolicy: Equivalent
  name: admission-webhook-deployment.kubeflow.org
  namespaceSelector:
    matchLabels:
      app.kubernetes.io/part-of: kubeflow-profile
  objectSelector: {}
  reinvocationPolicy: Never
  rules:
    - apiGroups:
        - ""
      apiVersions:
        - v1
      operations:
        - CREATE
      resources:
        - pods
        - deployments
      scope: '*'
  sideEffects: Unknown
  timeoutSeconds: 30
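For reference, the webhook applies a PodDefault to Pods whose labels match that PodDefault's selector in the same namespace, and only in namespaces matched by the namespaceSelector above. A minimal sketch of what the enable-feast PodDefault would need to look like to match the Deployment's pod template labels (the desc and injected env var are placeholders, not taken from the real resource):

apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: enable-feast
  namespace: test              # must live in the same namespace as the pods it mutates
spec:
  desc: Enable Feast           # placeholder description
  selector:
    matchLabels:
      feast-enabled: "true"    # matched against pod labels, i.e. the Deployment's template labels
  env:
    - name: FEAST_ENABLED      # hypothetical injected setting
      value: "true"

It is also worth checking that the test namespace carries the label required by the namespaceSelector above, for example:

kubectl get namespace test --show-labels
kubectl label namespace test app.kubernetes.io/part-of=kubeflow-profile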

Related

Failed to connect to all addresses - Spark Beam on Kubernetes

I am trying to run a Beam application on Spark on Kubernetes.
beam-deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-beam-jobserver
spec:
  serviceName: spark-headless
  selector:
    matchLabels:
      app: spark-beam-jobserver
  template:
    metadata:
      labels:
        app: spark-beam-jobserver
        app.kubernetes.io/instance: custom_spark
        app.kubernetes.io/name: spark
    spec:
      containers:
        - name: spark-beam-jobserver
          image: apache/beam_spark_job_server:2.33.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8099
              name: jobservice
            - containerPort: 8098
              name: artifact
            - containerPort: 8097
              name: expansion
          volumeMounts:
            - name: beam-artifact-staging
              mountPath: "/tmp/beam-artifact-staging"
          command: [
            "/bin/bash", "-c", "./spark-job-server.sh --job-port=8099 --spark-master-url=spark://spark-primary:7077"
          ]
      volumes:
        - name: beam-artifact-staging
          persistentVolumeClaim:
            claimName: spark-beam-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: spark-beam-jobserver
  labels:
    app: spark-beam-jobserver
spec:
  selector:
    app: spark-beam-jobserver
  type: NodePort
  ports:
    - port: 8099
      nodePort: 32090
      name: job-service
    - port: 8098
      nodePort: 32091
      name: artifacts
  # type: ClusterIP
  # ports:
  #   - port: 8099
  #     name: job-service
  #   - port: 8098
  #     name: artifacts
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-primary
spec:
  serviceName: spark-headless
  replicas: 1
  selector:
    matchLabels:
      app: spark
  template:
    metadata:
      labels:
        app: spark
        component: primary
        app.kubernetes.io/instance: custom_spark
        app.kubernetes.io/name: spark
    spec:
      containers:
        - name: primary
          image: docker.io/secondcomet/spark-custom-2.4.6
          env:
            - name: SPARK_MODE
              value: "master"
            - name: SPARK_RPC_AUTHENTICATION_ENABLED
              value: "no"
            - name: SPARK_RPC_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_SSL_ENABLED
              value: "no"
          ports:
            - containerPort: 7077
              name: masterendpoint
            - containerPort: 8080
              name: ui
            - containerPort: 7078
              name: driver-rpc-port
            - containerPort: 7079
              name: blockmanager
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            limits:
              cpu: 1.0
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 0.5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: spark-primary
  labels:
    app: spark
    component: primary
spec:
  type: ClusterIP
  ports:
    - name: masterendpoint
      port: 7077
      targetPort: 7077
    - name: rest
      port: 6066
      targetPort: 6066
    - name: ui
      port: 8080
      targetPort: 8080
    - name: driver-rpc-port
      protocol: TCP
      port: 7078
      targetPort: 7078
    - name: blockmanager
      protocol: TCP
      port: 7079
      targetPort: 7079
  selector:
    app: spark
    component: primary
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-children
  labels:
    app: spark
spec:
  serviceName: spark-headless
  replicas: 1
  selector:
    matchLabels:
      app: spark
  template:
    metadata:
      labels:
        app: spark
        component: children
        app.kubernetes.io/instance: custom_spark
        app.kubernetes.io/name: spark
    spec:
      containers:
        - name: docker
          image: docker:19.03.5-dind
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-storage
              mountPath: /var/lib/docker
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          resources:
            limits:
              cpu: 1.0
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 100Mi
        - name: children
          image: docker.io/secondcomet/spark-custom-2.4.6
          env:
            - name: DOCKER_HOST
              value: "tcp://localhost:2375"
            - name: SPARK_MODE
              value: "worker"
            - name: SPARK_MASTER_URL
              value: "spark://spark-primary:7077"
            - name: SPARK_WORKER_MEMORY
              value: "1G"
            - name: SPARK_WORKER_CORES
              value: "1"
            - name: SPARK_RPC_AUTHENTICATION_ENABLED
              value: "no"
            - name: SPARK_RPC_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED
              value: "no"
            - name: SPARK_SSL_ENABLED
              value: "no"
          ports:
            - containerPort: 8081
              name: ui
          volumeMounts:
            - name: beam-artifact-staging
              mountPath: "/tmp/beam-artifact-staging"
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.5
              memory: 1Gi
      volumes:
        - name: dind-storage
          emptyDir:
        - name: beam-artifact-staging
          persistentVolumeClaim:
            claimName: spark-beam-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: spark-children
  labels:
    app: spark
    component: children
spec:
  type: ClusterIP
  ports:
    - name: ui
      port: 8081
      targetPort: 8081
  selector:
    app: spark
    component: children
---
apiVersion: v1
kind: Service
metadata:
  name: spark-headless
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/instance: custom_spark
    app.kubernetes.io/name: spark
  type: ClusterIP
$ kubectl get all --namespace spark-beam
NAME                         READY   STATUS    RESTARTS   AGE
pod/spark-beam-jobserver-0   1/1     Running   0          58m
pod/spark-children-0         2/2     Running   0          58m
pod/spark-primary-0          1/1     Running   0          58m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
service/spark-beam-jobserver   NodePort    10.97.173.68    <none>        8099:32090/TCP,8098:32091/TCP                  58m
service/spark-children         ClusterIP   10.105.209.30   <none>        8081/TCP                                       58m
service/spark-headless         ClusterIP   None            <none>        <none>                                         58m
service/spark-primary          ClusterIP   10.109.32.126   <none>        7077/TCP,6066/TCP,8080/TCP,7078/TCP,7079/TCP   58m

NAME                                    READY   AGE
statefulset.apps/spark-beam-jobserver   1/1     58m
statefulset.apps/spark-children         1/1     58m
statefulset.apps/spark-primary          1/1     58m
beam-application.py
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class ConvertToByteArray(beam.DoFn):
    def __init__(self):
        pass

    def setup(self):
        pass

    def process(self, row):
        try:
            yield bytearray(row + '\n', 'utf-8')
        except Exception as e:
            raise e


def run():
    options = PipelineOptions([
        "--runner=PortableRunner",
        "--job_endpoint=localhost:32090",
        "--save_main_session",
        "--environment_type=DOCKER",
        "--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0"
    ])
    with beam.Pipeline(options=options) as p:
        lines = (p
                 | 'Create words' >> beam.Create(['this is working'])
                 | 'Split words' >> beam.FlatMap(lambda words: words.split(' '))
                 | 'Build byte array' >> beam.ParDo(ConvertToByteArray())
                 | 'Group' >> beam.GroupBy()  # Do future batching here
                 | 'print output' >> beam.Map(print)
                 )


if __name__ == "__main__":
    run()
When I try to run the Python application in my conda environment:
python beam-application.py
I am getting the below error:
File "beam.py", line 39, in <module>
run()
File "beam.py", line 35, in run
| 'print output' >> beam.Map(print)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 586, in __exit__
self.result = self.run()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 565, in run
return self.runner.run_pipeline(self, self._options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 440, in run_pipeline
job_service_handle.submit(proto_pipeline)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 114, in submit
prepare_response.staging_session_token)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 218, in stage
staging_session_token)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\artifact_service.py", line 237, in offer_artifacts
for request in requests:
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_channel.py", line 426, in __next__
return self._next()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_channel.py", line 826, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNAVAILABLE: WSA Error"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2022-10-10T14:38:39.520460502+00:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: WSA Error {grpc_status:14, created_time:"2022-10-10T14:38:39.520457024+00:00"}]}"
>
I am not sure where exactly the problem is.
What should I pass for job_endpoint and artifact_endpoint?
I also tried port-forwarding:
kubectl port-forward service/spark-beam-jobserver 32090:8099 --namespace spark-beam
kubectl port-forward service/spark-primary 8080:8080 --namespace spark-beam
kubectl port-forward service/spark-children 8081:8081 --namespace spark-beam
I suppose this is based on https://github.com/cometta/python-apache-beam-spark?
spark-beam-jobserver is using service type NodePort. So, if running in a local (minikube) cluster, you won't need any port forwarding to reach the job server.
You should be able to submit a Python job from your local shell using the following pipeline options:
--job_endpoint=localhost:32090
--artifact_endpoint=localhost:32091
Note that your Python code above is missing the artifact_endpoint; you have to provide both endpoints.
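For example, the options block from the question could be adjusted like this (a sketch assuming the NodePort values 32090/32091 exposed by the spark-beam-jobserver Service above):

from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:32090",       # NodePort mapped to the job service (8099)
    "--artifact_endpoint=localhost:32091",  # NodePort mapped to the artifact service (8098)
    "--save_main_session",
    "--environment_type=DOCKER",
    "--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0",
])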

Kubernetes apply service but endpoints is none

When I try to apply a service to a pod, the endpoints are always none. Does anyone know a possible root cause? I also checked that the selector matches what is defined in the deployment.yaml. Below are the deployment and service files that I used, along with the output of kubectl describe svc.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gethnode
      env: dev1
  template:
    metadata:
      labels:
        app: gethnode
        env: dev1
    spec:
      containers:
        - name: gethnode
          image: myserver.th/bc/gethnode:1.1
          ports:
            - containerPort: 8550
          env:
            - name: TZ
              value: Asis/Bangkok
          tty: true
          stdin: true
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 512Mi
      imagePullSecrets:
        - name: regcred-harbor
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  type: ClusterIP
  ports:
    - name: tcp
      port: 8550
      targetPort: 8550
      protocol: TCP
  selector:
    app: gethnode
    env: dev1
kubectl describe svc
Name: gethnode
Namespace: mynamespace
Labels: app=gethnode
env=dev1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gethnode","env":"dev1"},"name":"gethnode","namespace":"c...
Selector: app=gethnode,env=dev1
Type: ClusterIP
IP: 192.97.37.19
Port: tcp 8550/TCP
TargetPort: 8550/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
kubectl get pods -n mynamespace --show-labels
NAME                              READY   STATUS             RESTARTS   AGE    LABELS
console-bctest-6bff897bf4-xmch8   1/1     Running            0          6d3h   app=bctest,env=dev1,pod-template-hash=6bff897bf4
console-dev1-595c47c678-s5mzz     1/1     Running            0          20d    app=console,env=dev1,pod-template-hash=595c47c678
gethnode-7f9b7bbd77-pcbfc         1/1     Running            0          3s     app=gethnode,env=dev1,pod-template-hash=7f9b7bbd77
gotty-dev1-59dcb68f45-4mwds       0/2     ImagePullBackOff   0          20d    app=gotty,env=dev1,pod-template-hash=59dcb68f45

kubectl get svc gethnode -n mynamespace -o wide
NAME       TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE   SELECTOR
gethnode   ClusterIP   192.107.220.229   <none>        8550/TCP   64m   app=gethnode,env=dev1
Remove env: dev1 from the selector of the service
apiVersion: v1
kind: Service
metadata:
  name: gethnode
  namespace: mynamespace
  labels:
    app: gethnode
    env: dev1
spec:
  type: ClusterIP
  ports:
    - name: tcp
      port: 8550
      targetPort: 8550
      protocol: TCP
  selector:
    app: gethnode
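To double-check whether a selector actually matches any running pods, one option (using the names from this question) is to list the pods with exactly the labels the Service selects and then look at the endpoints again:

kubectl get pods -n mynamespace -l app=gethnode --show-labels
kubectl get endpoints gethnode -n mynamespace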
I had the same issue. What I did was delete the Deployment, the associated Secrets, the Service, and the Ingress to start fresh, then make sure that my Deployment was consistent with my Service in naming, specifically app.kubernetes.io/name: I used to have just name in my Deployment and app.kubernetes.io/name in my Service, which caused the discrepancy. In any case, I now have endpoints populated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook
  namespace: apps
  labels:
    app.kubernetes.io/name: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: webhook
  template:
    metadata:
      labels:
        app.kubernetes.io/name: webhook
    spec:
      containers:
        - name: webhook
          image: registry.min.dev/minio/webhook:latest
          ports:
            - name: http
              containerPort: 23411
          env:
            - name: GH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: webhooksecret
                  key: GH_TOKEN
      imagePullSecrets:
        - name: registry-creds

apiVersion: v1
kind: Service
metadata:
  name: webhook
  namespace: apps
  labels:
    app.kubernetes.io/name: webhook
spec:
  ports:
    - name: http
      port: 23411
  selector:
    app.kubernetes.io/name: webhook
And as a result:
$ k get ep webhook -n apps
NAME      ENDPOINTS              AGE
webhook   192.168.177.67:23411   4m15s
          |
          |___ Got populated!

Configure prometheus to collect custom metrics from dockerized nodejs pod

I have set up prom-client (an unofficial client library for Prometheus) to collect the custom metrics that I need.
I have a Prometheus server deployed from Helm following this EKS setup guide. Now I am trying to edit the default ConfigMap to collect my app metrics as well, but I am getting this error:
parsing YAML file /etc/config/prometheus.yml: yaml: unmarshal errors:\n line 22: field cluster_ip not found in type kubernetes.plain\n line 25: cannot unmarshal !!str `default` into []string
This is what I have done as per the docs.
prometheus.yaml configmap file
apiVersion: v1
data:
  alerting_rules.yml: |
    {}
  alerts: |
    {}
  prometheus.yml: |
    global:
      evaluation_interval: 1m
      scrape_interval: 1m
      scrape_timeout: 10s
    rule_files:
    - /etc/config/recording_rules.yml
    - /etc/config/alerting_rules.yml
    - /etc/config/rules
    - /etc/config/alerts
    scrape_configs:
    ...DEFAULT CONFIGS...
    - job_name: my_metrics
      scrape_interval: 5m
      scrape_timeout: 10s
      honor_labels: true
      metrics_path: /api/metrics
      kubernetes_sd_configs:
      - role: service
        cluster_ip: 10.100.200.92
        namespaces:
          names:
            default
  recording_rules.yml: |
    {}
  rules: |
    {}
kind: ConfigMap
metadata:
  creationTimestamp: "2020-06-08T09:26:38Z"
  labels:
    app: prometheus
    chart: prometheus-11.3.0
    component: server
    heritage: Helm
    release: prometheus
  name: prometheus-server
  namespace: prometheus
  uid: 8fadb17a-f5c5-4f9d-a931-fa1f77684847
Here the clusterIP is the IP assigned to my service that exposes the deployment.
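For what it's worth, the two unmarshal errors line up with cluster_ip, which is not a field of kubernetes_sd_configs, and with names, which must be a YAML list rather than a bare string. A sketch of that scrape job without those two issues (job name, metrics path, and namespace taken from the question; Prometheus then discovers the service itself instead of being given its ClusterIP):

- job_name: my_metrics
  scrape_interval: 5m
  scrape_timeout: 10s
  honor_labels: true
  metrics_path: /api/metrics
  kubernetes_sd_configs:
    - role: service        # no cluster_ip field here; targets are discovered via the API server
      namespaces:
        names:
          - default        # must be a list entry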
My deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
    spec:
      containers:
        - image: IMAGE_URL:BUILD_NUMBER
          name: myapp
          resources:
            limits:
              cpu: "1000m"
              memory: "2400Mi"
            requests:
              cpu: "500m"
              memory: "2000Mi"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              name: myapp
My service.yaml file, which exposes the deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    deploy: staging
    name: myapp
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
If there is a different or more efficient way to target my app for metrics collection, please let me know. Thanks.
This is what I am using to enable prometheus scraping inside the cluster.
In the scrape config, I have this snippet:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - action: labeldrop
      regex: '(kubernetes_pod|app_kubernetes_io_instance|app_kubernetes_io_name|instance)'
This is taken directly from the default values for the prometheus helm chart: https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml#L1452
What that does is instruct Prometheus to scrape every pod that has the annotation:
prometheus.io/scrape: "true"
set. With these annotations on the pod, you can then configure the port and path of the scrape:
prometheus.io/path: "/metrics"
prometheus.io/port: "9090"
So, you would need to modify your deployment.yaml to specify these annotations as well:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "<enter port of pod to scrape>"
        prometheus.io/path: "<enter path to scrape>"
    spec:
      containers:
        - image: IMAGE_URL:BUILD_NUMBER
          ...
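After redeploying with those annotations, one way to confirm that the pods are actually being scraped is to port-forward to the Prometheus server and check the targets page (the service name, namespace, and port below assume the Helm chart defaults used in the question's setup):

kubectl port-forward svc/prometheus-server 9090:80 -n prometheus
# then open http://localhost:9090/targets and look for the kubernetes-pods job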

Issue with Jenkins Deployment File: Unknown resource kind: Deployment

I'm struggling to figure out what the solution might be, so I thought to ask here. I'm trying to use the code below to deploy a Jenkins pod to Kubernetes, but it fails with an Unknown resource kind: Deployment error:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
The output of kubectl api-versions is:
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Does anyone know what the problem might be?
If this is an indentation issue, I'm failing to see it.
It seems the apiVersion is deprecated. You can simply convert it to the current apiVersion and apply it.
$ kubectl convert -f jenkins-dep.yml
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins-deployment
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: jenkins
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins/jenkins:lts-alpine
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: jenkins-home
status: {}
$ kubectl convert -f jenkins-dep.yml -oyaml > jenkins-dep-latest.yml
Change the apiVersion from extensions/v1beta1 to apps/v1, and use kubectl version to check that the kubectl client and the kube API server versions match and are not too old.
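As a rough sequence, assuming the converted file name from above:

kubectl version                                        # confirm client/server versions
kubectl apply -f jenkins-dep-latest.yml                # apply the apps/v1 manifest
kubectl rollout status deployment/jenkins-deployment   # wait for the rollout to finish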

How to read multiple files (PDFs) from a MapR volume, perform OCR, and dump the results to a MapR volume using Kubernetes Jobs

apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-job
  namespace:
spec:
  completions: 6              # number of times to run
  parallelism: 2              # number of pods that can run in parallel
  backoffLimit: 6             # number of retries before throwing error
  activeDeadlineSeconds: 10   # time to allow job to run
  template:
    metadata:
      labels:
        app: kubernetes-series
        tier: job
    spec:
      restartPolicy: OnFailure
      containers:
        - name: job-fibonacci-2
          image:
          args:
            - sleep
            - "1000000"
          resources:
            requests:
              memory: 500M
              cpu: 100
          volumeMounts:
            - name:
              mountPath:
              name: maprvolume
      volumes:
        - name: maprvolume
          persistentVolumeClaim:
            claimName: pvc-mapr
