I'm trying to set up a Kubernetes PetSet as described in the documentation. When I create the PetSet, I can't get the Persistent Volume Claim to bind to the persistent volume. Here is my YAML file defining the PetSet:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
name: 'ml-nodes'
spec:
serviceName: "ml-service"
replicas: 1
template:
metadata:
labels:
app: marklogic
tier: backend
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
containers:
- name: 'ml'
image: "192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1"
imagePullPolicy: Always
ports:
- containerPort: 8000
name: ml8000
protocol: TCP
- containerPort: 8001
name: ml8001
- containerPort: 7997
name: ml7997
- containerPort: 8002
name: ml8002
- containerPort: 8040
name: ml8040
- containerPort: 8041
name: ml8041
- containerPort: 8042
name: ml8042
volumeMounts:
- name: ml-data
mountPath: /data/vol-data
lifecycle:
preStop:
exec:
# SIGTERM triggers a quick exit; gracefully terminate instead
command: ["/etc/init.d/MarkLogic stop"]
volumes:
- name: ml-data
persistentVolumeClaim:
claimName: ml-data
terminationGracePeriodSeconds: 30
volumeClaimTemplates:
- metadata:
name: ml-data
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
If I do a 'describe' on my created PetSet, I see the following:
Name: ml-nodes
Namespace: default
Image(s): 192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1
Selector: app=marklogic,tier=backend
Labels: app=marklogic,tier=backend
Replicas: 1 current / 1 desired
Annotations: <none>
CreationTimestamp: Tue, 20 Sep 2016 13:23:14 -0400
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33m 33m 1 {petset } Warning FailedCreate pvc: ml-data-ml-nodes-0, error: persistentvolumeclaims "ml-data-ml-nodes-0" not found
33m 33m 1 {petset } Normal SuccessfulCreate pet: ml-nodes-0
I'm trying to run this in a minikube environment on my local machine. Not sure what I'm missing here?
There is an open issue on minikube for this. Persistent volume provisioning support appears to be unfinished in minikube at this time.
For it to work with local storage, the controller manager needs the following flag, which isn't currently enabled on minikube.
--enable-hostpath-provisioner[=false]: Enable HostPath PV
provisioning when running without a cloud provider. This allows
testing and development of provisioning features. HostPath
provisioning is not supported in any way, won't work in a multi-node
cluster, and should not be used for anything other than testing or
development.
Reference: http://kubernetes.io/docs/admin/kube-controller-manager/
For local development/testing, it would work if you used hack/local-up-cluster.sh to start a local cluster after setting an environment variable:
export ENABLE_HOSTPATH_PROVISIONER=true
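For example, from the root of a Kubernetes source checkout (a sketch; it assumes you have cloned kubernetes/kubernetes):
$ export ENABLE_HOSTPATH_PROVISIONER=true
$ hack/local-up-cluster.sh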
You should be able to use PetSets in the latest version of minikube, since it uses Kubernetes v1.4.1 as the default version.
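If you stay on minikube, recreating the local cluster should pick up the newer default version (a sketch; --kubernetes-version can pin a specific release if your minikube build supports that flag):
$ minikube delete
$ minikube start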
Related
I created a docker image of my app which is running an internal server exposed at 8080.
Then I tried to create a local kubernetes cluster for testing, using the following set of commands.
$ kubectl create deployment --image=test-image test-app
$ kubectl set env deployment/test-app DOMAIN=cluster
$ kubectl expose deployment test-app --port=8080 --name=test-service
I am using Docker Desktop on Windows to run Kubernetes. This exposes my cluster at the external IP localhost, but I cannot access my app. I checked the status of the pods and noticed this issue:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-66-ps2 0/1 ImagePullBackOff 0 8h
test-6f-6jh 0/1 InvalidImageName 0 7h42m
May I know what could be causing this issue, and how can I make it work locally?
Thanks, I look forward to your suggestions!
My YAML file for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "4"
creationTimestamp: "2021-10-13T18:00:15Z"
generation: 4
labels:
app: test-app
name: test-app
namespace: default
resourceVersion: "*****"
uid: ************
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: test-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: test-app
spec:
containers:
- env:
- name: DOMAIN
value: cluster
image: C:\Users\test-image
imagePullPolicy: Always
name: e20f23453f27
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2021-10-13T18:00:15Z"
lastUpdateTime: "2021-10-13T18:00:15Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-10-13T18:39:51Z"
lastUpdateTime: "2021-10-13T18:39:51Z"
message: ReplicaSet "test-66" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 4
replicas: 2
unavailableReplicas: 2
updatedReplicas: 1
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2021-10-13T18:01:49Z"
labels:
app: test-app
name: test-service
namespace: default
resourceVersion: "*****"
uid: *****************
spec:
clusterIP: 10.161.100.100
clusterIPs:
- 10.161.100.100
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 41945
port: 80
protocol: TCP
targetPort: 8080
selector:
app: test-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: localhost
You are facing the ImagePullBackOff and InvalidImageName issues because your app image does not exist on the Kubernetes cluster you deployed via Docker; it only exists on your local machine!
To resolve this for testing purposes, you can either mount the project workspace and build the image with Docker on the Kubernetes node itself, or upload your image to Docker Hub and configure your Deployment to pull the image from Docker Hub.
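As a rough sketch (the tag test-image:latest is an assumption; the container name e20f23453f27 is taken from the Deployment above), Docker Desktop's Kubernetes shares your local Docker daemon, so a locally built tag can be referenced directly:
$ docker build -t test-image:latest .
$ kubectl set image deployment/test-app e20f23453f27=test-image:latest
Note that imagePullPolicy: Always forces a registry pull even when the image exists locally, so switch it to IfNotPresent for a purely local image. Alternatively, push the image to Docker Hub and reference it by repository:
$ docker tag test-image:latest <your-dockerhub-user>/test-image:latest
$ docker push <your-dockerhub-user>/test-image:latest
$ kubectl set image deployment/test-app e20f23453f27=<your-dockerhub-user>/test-image:latest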
Implementation Goal
Expose Zookeeper instance, running on kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on Ubuntu 14.04, backed by Docker containers.
I'm running a bare-metal k8s cluster, and I'm trying to expose a zookeeper service to the internet. Since my cluster is not running on a cloud provider, I set up MetalLB to provide a network load-balancer implementation for my zookeeper service.
On startup everything looks good: an external IP is assigned and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-5c9894b5cd-9gh8m 1/1 Running 0 5h59m
speaker-j2z8q 1/1 Running 0 5h59m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.xxx.xxx.xxx <none> 443/TCP 6d19h
zk-cs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2181:30035/TCP 56m
zk-hs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2888:30664/TCP,3888:31113/TCP 6m15s
When I curl the above-mentioned external IPs, I get a valid response:
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
So far it all looks good: I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me. I'm finding it impossible to expose this LB to the internet. I tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node while minikube tunnel is running just sees the request time out:
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
selector:
app: zk
ports:
- port: 2888
targetPort: 2888
name: server
protocol: TCP
- port: 3888
targetPort: 3888
name: leader-election
protocol: TCP
clusterIP: ""
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
selector:
app: zk
ports:
- name: client
protocol: TCP
port: 2181
targetPort: 2181
type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 1
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
containers:
- name: zookeeper
imagePullPolicy: Always
image: "library/zookeeper:3.6"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
- name: zoo-config
mountPath: /conf
volumes:
- name: zoo-config
configMap:
name: zoo-config
securityContext:
fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: zoo-config
namespace: default
data:
zoo.cfg: |
tickTime=10000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=10
syncLimit=4
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.1.1.1-172.1.1.10
minikube: v1.13.1
docker: 18.06.3-ce
You can do it with minikube, but the idea of minikube is just to test things in your local environment. By default it does not have the correct iptables rules for this, and yes, you can adjust that, but if your goal is to run without any cloud provider, I highly recommend you use kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration, and you will be able to sort out your networking without headaches.
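A minimal bootstrap with kubeadm looks roughly like this (a sketch; the pod CIDR is an assumption and depends on the CNI plugin you install afterwards):
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
After that, install a CNI plugin and MetalLB as you already did; the LoadBalancer addresses will then be reachable from outside the node as long as the MetalLB address pool is routable on your network.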
I'm stuck on an annoying issue where my pod can't access the mounted persistent volume.
Kubeadm: v1.19.2
Docker: 19.03.13
Zookeeper image: library/zookeeper:3.6
Cluster info: Locally hosted, no cloud provider
K8s configuration:
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
selector:
app: zk
ports:
- port: 2888
targetPort: 2888
name: server
protocol: TCP
- port: 3888
targetPort: 3888
name: leader-election
protocol: TCP
clusterIP: ""
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
selector:
app: zk
ports:
- name: client
protocol: TCP
port: 2181
targetPort: 2181
type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 1
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
volumes:
- name: zoo-config
configMap:
name: zoo-config
- name: datadir
persistentVolumeClaim:
claimName: zoo-pvc
containers:
- name: zookeeper
imagePullPolicy: Always
image: "library/zookeeper:3.6"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper/data
- name: zoo-config
mountPath: /conf
securityContext:
fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: local-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: zoo-config
namespace: default
data:
zoo.cfg: |
tickTime=10000
dataDir=/var/lib/zookeeper/data
clientPort=2181
initLimit=10
syncLimit=4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: zoo-pv
labels:
type: local
spec:
storageClassName: local-storage
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <node-name>
Purely as a test, I've tried running the pod as root with the following security context, which I know is a terrible idea. This, however, caused a bunch of other issues.
securityContext:
fsGroup: 0
runAsUser: 0
Once the pod starts up, the logs contain the following:
Zookeeper JMX enabled by default
Using config: /conf/zoo.cfg
<log4j Warnings>
Unable to access datadir, exiting abnormally
Inspecting the pod provides me with the following information:
~$ kubectl describe pod/zk-0
Name: zk-0
Namespace: default
Priority: 0
Node: <node>
Start Time: Sat, 26 Sep 2020 15:48:00 +0200
Labels: app=zk
controller-revision-hash=zk-6c68989bd
statefulset.kubernetes.io/pod-name=zk-0
Annotations: <none>
Status: Running
IP: <IP>
IPs:
IP: <IP>
Controlled By: StatefulSet/zk
Containers:
zookeeper:
Container ID: docker://281e177d677394604785542c231d21b71f1666a22e74c1c10ef88491dad7a522
Image: library/zookeeper:3.6
Image ID: docker-pullable://zookeeper@sha256:6c051390cfae7958ff427834937c353fc6c34484f6a84b3e4bc8c512b53a16f6
Ports: 2181/TCP, 2888/TCP, 3888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 26 Sep 2020 16:04:26 +0200
Finished: Sat, 26 Sep 2020 16:04:27 +0200
Ready: False
Restart Count: 8
Requests:
cpu: 500m
memory: 1Gi
Environment: <none>
Mounts:
/conf from zoo-config (rw)
/var/lib/zookeeper/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-88x56 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-zk-0
ReadOnly: false
zoo-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zoo-config
Optional: false
default-token-88x56:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-88x56
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/zk-0 to <node>
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.932381527s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.960610662s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.959935633s
Normal Created 16m (x4 over 17m) kubelet Created container zookeeper
Normal Pulled 16m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.92551645s
Normal Started 16m (x4 over 17m) kubelet Started container zookeeper
Normal Pulling 15m (x5 over 17m) kubelet Pulling image "library/zookeeper:3.6"
Warning BackOff 2m35s (x71 over 17m) kubelet Back-off restarting failed container
To me, it seems like the pod has full rw access to the volume, so I'm unsure why it's still refusing to access the directory. Any help will be appreciated!
After quite some digging, I finally figured out why it wasn't working. The logs were actually telling me all I needed to know in the end: the mounted PersistentVolumeClaim simply did not have the correct file permissions to read from the mounted hostPath /mnt/data directory.
To fix this, in a somewhat hacky way, I gave read, write & execute permissions to all:
chmod 777 /mnt/data
Overview can be found here
This is definitely not the most secure way of fixing the issue, and I would strongly advise against using it in any production-like environment.
Probably a better approach would be the following:
sudo usermod -a -G 1000 1000
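Alternatively, a sketch under the assumption that UID 1000 and GID 2000 match the securityContext above: change ownership on the node, or let an init container do it, to avoid world-writable permissions.
$ sudo chown -R 1000:2000 /mnt/data   # run on the node that hosts /mnt/data
Or, as a hypothetical addition to the StatefulSet pod spec:
initContainers:
- name: fix-permissions
  image: busybox:1.35
  # chown the mounted data dir before ZooKeeper starts
  command: ["sh", "-c", "chown -R 1000:2000 /var/lib/zookeeper/data"]
  volumeMounts:
  - name: datadir
    mountPath: /var/lib/zookeeper/data
  securityContext:
    runAsUser: 0
    runAsNonRoot: false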
I'm trying to deploy a MicroServices system on my local machine using Skaffold.
ingress-srv.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.dot
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: ****MYDOCKERID****/auth
env:
- name: JWT_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
auth-mongo-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-mongo-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth-mongo
template:
metadata:
labels:
app: auth-mongo
spec:
containers:
- name: auth-mongo
image: mongo
---
apiVersion: v1
kind: Service
metadata:
name: auth-mongo-srv
spec:
selector:
app: auth-mongo
ports:
- name: db
protocol: TCP
port: 27017
targetPort: 27017
I've followed the guidelines in the manual:
https://kubernetes.github.io/ingress-nginx/deploy/
and hit:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.0/deploy/static/provider/cloud/deploy.yaml
However, Skaffold keeps terminating the deployment:
Listing files to watch...
- ****MYDOCKERID****/auth
Generating tags...
- ****MYDOCKERID****/auth -> ****MYDOCKERID****/auth:683e8db
Checking cache...
- ****MYDOCKERID****/auth: Found Locally
Tags used in deployment:
- ****MYDOCKERID****/auth -> ****MYDOCKERID****/auth:3c4bb66ff693320b5fac3fde91906768f8b54b968813b226822d057d1dd3a995
Starting deploy...
- deployment.apps/auth-depl created
- service/auth-srv created
- deployment.apps/auth-mongo-depl created
- service/auth-mongo-srv created
- ingress.extensions/ingress-service created
Waiting for deployments to stabilize...
- deployment/auth-depl:
- deployment/auth-mongo-depl:
- deployment/auth-depl: waiting for rollout to finish: 0 of 1 updated replicas are available...
- deployment/auth-mongo-depl: waiting for rollout to finish: 0 of 1 updated replicas are available...
- deployment/auth-mongo-depl is ready. [1/2 deployment(s) still pending]
- deployment/auth-depl failed. Error: could not stabilize within 2m0s: context deadline exceeded.
Cleaning up...
- deployment.apps "auth-depl" deleted
- service "auth-srv" deleted
- deployment.apps "auth-mongo-depl" deleted
- service "auth-mongo-srv" deleted
- ingress.extensions "ingress-service" deleted
exiting dev mode because first deploy failed: 1/2 deployment(s) failed
How can we fix this annoying issue?
EDIT 9:44 AM Israel time:
C:\Development-T410\Micro Services - JAN>kubectl get pods
NAME READY STATUS RESTARTS AGE
auth-depl-645bbf7b9d-llp2q 0/1 CreateContainerConfigError 0 115s
auth-depl-c6c765d7c-7wvcg 0/1 CreateContainerConfigError 0 28m
auth-mongo-depl-6b594c4847-4kzzt 1/1 Running 0 115s
client-depl-5888f95b59-vznh6 1/1 Running 0 114s
nats-depl-7dfccdf5-874vm 1/1 Running 0 114s
orders-depl-74f4d48559-cbwlp 0/1 CreateContainerConfigError 0 114s
orders-depl-78fc845b4-9tfml 0/1 CreateContainerConfigError 0 28m
orders-mongo-depl-688676d675-lrvhp 1/1 Running 0 113s
tickets-depl-7cc7ddbbff-z9pvc 0/1 CreateContainerConfigError 0 113s
tickets-depl-8574fc8f9b-tm6p4 0/1 CreateContainerConfigError 0 28m
tickets-mongo-depl-b95f45947-hf6wq 1/1 Running 0 113s
C:\Development-T410\Micro Services>kubectl logs auth-depl-c6c765d7c-7wvcg
Error from server (BadRequest): container "auth" in pod "auth-depl-c6c765d7c-7wvcg" is waiting to start: CreateContainerConfigError
Looks like your auth-depl deployment is failing. Possibly the container is crashing or erroring out. To debug, you can check the pod logs:
$ kubectl logs auth-depl-xxxxxxxxxx-xxxxx
Make sure you run skaffold with the --cleanup=false option so that you can debug. For example,
$ skaffold dev --cleanup=false
Update:
Based on the logs, it looks like an issue with your Kubernetes Secret and how it's defined, possibly its format or the YAML structure. This answer sheds some light on what the problem may be: Pod status as `CreateContainerConfigError` in Minikube cluster
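If the root cause turns out to be that the jwt-secret referenced by auth-depl is missing or malformed, recreating it with the expected key often clears CreateContainerConfigError (a sketch; the value is a placeholder):
$ kubectl create secret generic jwt-secret --from-literal=JWT_KEY=<your-secret-value>
$ kubectl get secret jwt-secret -o yaml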
You should add the environment variables for the mongo image in your deployment file:
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: root
- name: MONGO_INITDB_ROOT_PASSWORD
value: "rootuser"
Trying to set up PetSet using Kube-Solo
In my local dev environment, I have set up Kube-Solo with CoreOS. I'm trying to deploy a Kubernetes PetSet that includes a Persistent Volume Claim Template as part of the PetSet configuration. This configuration fails and none of the pods are ever started. Here is my PetSet definition:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
name: marklogic
spec:
serviceName: "ml-service"
replicas: 2
template:
metadata:
labels:
app: marklogic
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
terminationGracePeriodSeconds: 30
containers:
- name: 'marklogic'
image: {ip address of repo}:5000/dcgs-sof/ml8-docker-final:v1
imagePullPolicy: Always
command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
ports:
- containerPort: 7997
name: health-check
- containerPort: 8000
name: app-services
- containerPort: 8001
name: admin
- containerPort: 8002
name: manage
- containerPort: 8040
name: sof-sdl
- containerPort: 8041
name: sof-sdl-xcc
- containerPort: 8042
name: ml8042
- containerPort: 8050
name: sof-sdl-admin
- containerPort: 8051
name: sof-sdl-cache
- containerPort: 8060
name: sof-sdl-camel
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
lifecycle:
preStop:
exec:
command: ["/etc/init.d/MarkLogic stop"]
volumeMounts:
- name: ml-data
mountPath: /var/opt/MarkLogic
volumeClaimTemplates:
- metadata:
name: ml-data
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteMany" ]
resources:
requests:
storage: 1Gi
In the Kubernetes dashboard, I see the following error message:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "ml-data-marklogic-0", which is unexpected.
It seems that being unable to create the Persistent Volume Claim is also preventing the image from ever being pulled from my local repository. Additionally, the Kubernetes Dashboard shows the request for the Persistent Volume Claims, but the state is continuously "pending".
I have verified the issue is with the Persistent Volume Claim. If I remove that from the PetSet configuration the deployment succeeds.
I should note that I was using Minikube prior to this and would see the same message, but once the image was pulled and the pod(s) started, the claim would take hold and the message would go away.
I am using
Kubernetes version: 1.4.0
Docker version: 1.12.1 (on my mac) & 1.10.3 (inside the CoreOS vm)
Corectl version: 0.2.8
Kube-Solo version: 0.9.6
I am not familiar with kube-solo.
However, the issue here might be that you are attempting to use dynamic volume provisioning, a beta feature that does not have specific support for volumes in your environment.
The best way around this would be to manually create the persistent volumes it expects to find, so that the PersistentVolumeClaims can bind to them.
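A minimal sketch of such a pre-created volume (the name, hostPath, and capacity here are assumptions; depending on the Kubernetes version you may also need to match or drop the alpha storage-class annotation on the claim template):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-pv-0
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/ml-data-0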
The same error happened to me, and I found clues about the following config (combining volumeClaimTemplates and a StorageClass) in the Slack group and this pull request:
volumeClaimTemplates:
- metadata:
name: cassandra-data
annotations:
volume.beta.kubernetes.io/storage-class: standard
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
namespace: kube-system
name: standard
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
labels:
kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path