I've been hitting errors when trying to set up a dev platform in Kubernetes with minikube. The config creates a Service, PersistentVolume, PersistentVolumeClaim and Deployment.
The Deployment creates a single pod with a single container based on bitnami/mariadb:latest.
I am mounting a local volume into the minikube vm via:
minikube mount <source-path>:/data
This local volume is mounting correctly and can be inspected when I ssh into the minikube vm via: minikube ssh
I now run:
kubectl create -f mariadb-deployment.yaml
to fire up the platform. The YAML config:
kind: Service
apiVersion: v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  ports:
    - port: 3306
  selector:
    app: supertubes
    tier: mariadb
  type: LoadBalancer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-db-pv
  labels:
    type: local
    tier: mariadb
    app: supertubes
spec:
  storageClassName: slow
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/staging/sumatra/mariadb-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-db-pv-claim
  labels:
    app: supertubes
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: local
      tier: mariadb
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  selector:
    matchLabels:
      app: supertubes
      tier: mariadb
  template:
    metadata:
      labels:
        app: supertubes
        tier: mariadb
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - image: bitnami/mariadb:latest
          name: mariadb
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: <db-password>
            - name: MARIADB_DATABASE
              value: <db-name>
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - name: mariadb-persistent-storage
              mountPath: /bitnami
      volumes:
        - name: mariadb-persistent-storage
          persistentVolumeClaim:
            claimName: local-db-pv-claim
The above config then fails to boot the pod, and inspecting the pod's logs in the minikube dashboard shows the following:
nami INFO Initializing mariadb
mariadb INFO ==> Cleaning data dir...
mariadb INFO ==> Configuring permissions...
mariadb INFO ==> Validating inputs...
mariadb INFO ==> Initializing database...
mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb INFO ==> Creating database pw_tbs...
mariadb INFO ==> Enabling remote connections...
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mariadb'
Looking at the above, I believe the issue has to do with Bitnami launching their mariadb image as user 1001:
https://github.com/bitnami/bitnami-docker-mariadb/issues/134
Since reading that issue I've been playing with securityContext within the container spec. At present, you'll see I have it set to:
deployment.template.spec
  securityContext:
    fsGroup: 1001
but this isn't working. I've also tried:
securityContext:
  privileged: true
but didn't get anywhere with that either.
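For completeness, the fuller pod-level securityContext I've been experimenting with combines runAsUser and fsGroup (a sketch only; the 1001 values are an assumption based on the Bitnami issue above):

spec:
  securityContext:
    runAsUser: 1001   # assumed non-root UID the Bitnami image runs as
    fsGroup: 1001     # group ownership Kubernetes applies to the mounted volume
  containers:
    - image: bitnami/mariadb:latest
      name: mariadb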
One other check I made was to remove the volumeMount from deployment.template.spec.containers and see if things worked correctly without it, which they do :)
I then opened a shell into the pod to see what the permissions on /bitnami are:
Reading a bit more of the Bitnami issue posted above, it says user 1001 is a member of the root group, so I'd expect it to have the necessary permissions... At this stage I'm a little lost as to what is wrong.
If anyone could help me understand how to correctly set up this minikube vm volume within a container that would be amazing!
Edit 15/03/18
Following @Anton Kostenko's suggestions I added a busybox container as an initContainer which runs a chmod on the bitnami directory:
...
spec:
  initContainers:
    - name: install
      image: busybox
      imagePullPolicy: Always
      command: ["chmod", "-R", "777", "/bitnami"]
      volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /bitnami
  containers:
    - image: bitnami/mariadb:latest
      name: mariadb
...
However, even with global rwx permissions (777) set, the MariaDB container still fails to initialise the directory as user 1001:
nami INFO Initializing mariadb
Error executing 'postInstallation': EPERM: operation not permitted, utime '/bitnami/mariadb/.restored'
Another Edit 15/03/18
I've now tried setting the user:group on my local machine (MacBook) so that when passed to the minikube vm they should already be correct:
Now mariadb-data has rwx permissions for everyone and user: 1001, group: 1001.
I then removed the initContainer as I wasn't really sure what that would be adding.
SSHing onto the minikube vm I can see the permissions and user:group have been carried across:
The user & group are now shown as docker.
Firing up this container results in the same sort of error:
nami INFO Initializing mariadb
Error executing 'postInstallation': EIO: i/o error, utime '/bitnami/mariadb/.restored'
I've tried removing the securityContext, and also adding it as runAsUser: 1001, fsGroup: 1001, however neither made any difference.
Looks like that is an issue in Minikube.
You can try to use an init container which will fix the permissions before the main container is started, like this:
...........
spec:
  initContainers:
    - name: "fix-non-root-permissions"
      image: "busybox"
      imagePullPolicy: "Always"
      command: [ "chmod", "-R", "g+rwX", "/bitnami" ]
      volumeMounts:
        - name: datadir
          mountPath: /bitnami
  containers:
.........
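A variant of the same idea is to change ownership rather than the mode (a sketch; 1001:1001 is assumed from the Bitnami issue linked earlier, and it may still fail if the underlying minikube host mount doesn't support chown):

spec:
  initContainers:
    - name: "fix-owner"
      image: "busybox"
      command: ["chown", "-R", "1001:1001", "/bitnami"]   # assumed Bitnami UID:GID
      volumeMounts:
        - name: datadir
          mountPath: /bitnami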
Related
We have a docker image that is processing some files on a samba share.
For this we created a cifs share which is mounted to /mnt/dfs and files can be accessed in the container with:
docker run -v /mnt/dfs/project1:/workspace image
Now what I was asked to do is get the container into k8s, and to access a CIFS share from a pod, a CIFS volume driver using FlexVolume can be used. That's where some questions pop up.
I installed this repo as a daemonset
https://k8scifsvol.juliohm.com.br/
and it's up and running.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cifs-volumedriver-installer
spec:
  selector:
    matchLabels:
      app: cifs-volumedriver-installer
  template:
    metadata:
      name: cifs-volumedriver-installer
      labels:
        app: cifs-volumedriver-installer
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4
          name: flex-deploy
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /flexmnt
              name: flexvolume-mount
      volumes:
        - name: flexvolume-mount
          hostPath:
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
The next thing to do is add a PersistentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the SMB server? Why should there be a capacity for an already existing server?
Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? So how to access data from /mnt/dfs/project1 in the pod?
Do we even need a PV? Could the pod just read from the host's mounted share?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
    - ReadWriteMany
No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)
Managed to get it working with the fstab/cifs plugin.
Copy its cifs script to /usr/libexec/kubernetes/kubelet-plugins/volume/exec and give it execute permissions. Also restart kubelet on all nodes.
https://github.com/fstab/cifs
Then added
containers:
  - name: pablo
    image: "10.203.32.80:5000/pablo"
    volumeMounts:
      - name: dfs
        mountPath: /data
volumes:
  - name: dfs
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//dfs/dir"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
Now there is the /data mount inside the container pointing to //dfs/dir
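The cifs-secret referenced above holds the share credentials; a minimal sketch of it, assuming the base64-encoded username/password layout described in the fstab/cifs README (values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: dXNlcm5hbWU=    # base64 of the placeholder string "username"
  password: cGFzc3dvcmQ=    # base64 of the placeholder string "password"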
I am converting a docker-compose file to Kubernetes using kompose, running the following command:
$kompose convert -f docker-compose.yml -o kubernetes_image.yaml
After the command finishes, the output is the following:
WARN Volume mount on the host "/usr/docker/adpater/dbdata" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.crt" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.key" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/server.xml" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
To push the converted file to Kubernetes I run the following command:
$kubectl apply -f kubernetes_image.yaml
NAME READY STATUS RESTARTS AGE
mysql-557dd849c8-bsdq7 1/1 Running 1 17h
tomcat-7cd65d4556-spjbl 0/1 CrashLoopBackOff 76 18h
if I run:
$ kubectl describe pod tomcat-7cd65d4556-spjbl
I get the following message:
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/usr/docker/adapter/server.xml\\\" to rootfs \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged\\\" at \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged/usr/local/tomcat/conf/server.xml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Exit Code: 127
Started: Sun, 31 May 2020 13:35:00 +0100
Finished: Sun, 31 May 2020 13:35:00 +0100
Ready: False
Restart Count: 75
Environment: <none>
Mounts:
/run/secrets/rji_license.json from tomcat-hostpath0 (rw)
/usr/local/tomcat/conf/server.xml from tomcat-hostpath3 (rw)
/usr/local/tomcat/conf/ssl.crt from tomcat-hostpath1 (rw)
/usr/local/tomcat/conf/ssl.key from tomcat-hostpath2 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8dhnk (ro)
This is my docker-compose.yml file:
version: '3.6'
networks:
  integration:
services:
  mysql:
    environment:
      MYSQL_USER: 'integrationdb'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    image: db:poc
    networks:
      - integration
    ports:
      - '3306:3306'
    restart: always
    volumes:
      - ./dbdata:/var/lib/mysql
  tomcat:
    image: adapter:poc
    networks:
      - integration
    ports:
      - '8080:8080'
      - '8443:8443'
    restart: always
    volumes:
      - ./license.json:/run/secrets/rji_license.json
      - ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
      - ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
      - ./server.xml:/usr/local/tomcat/conf/server.xml
Versions of the tools:
kompose: 1.21.0 (992df58d8)
docker: 19.03.9
kubectl: Major: "1", Minor: "18"
I think my challenge here is with this type of volumes or files; I don't know how I can migrate or convert them to Kubernetes and get the tomcat pod running fine.
Could someone give me a hand?
volumes:
  - ./license.json:/run/secrets/rji_license.json
  - ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
  - ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
  - ./server.xml:/usr/local/tomcat/conf/server.xml
thanks in advance.
When Kompose warns you:
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
It means that it can't translate this fragment of the docker-compose.yml file into Kubernetes syntax:
volumes:
  - ./license.json:/run/secrets/rji_license.json
In native Kubernetes, you'd need to provide this content in ConfigMap or Secret objects, and then mount the file into the pod. You can't directly access content on the system from which you're launching the containers.
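A minimal sketch of that approach for server.xml (the object and volume names here are hypothetical; the certificate and key would normally go into a Secret instead):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-server-xml        # hypothetical name
data:
  server.xml: |
    <!-- paste the contents of ./server.xml here -->
---
# fragment of the tomcat Deployment's pod spec
spec:
  containers:
    - name: tomcat
      image: adapter:poc
      volumeMounts:
        - name: server-xml
          mountPath: /usr/local/tomcat/conf/server.xml
          subPath: server.xml    # mounts a single file instead of hiding the whole conf directory
  volumes:
    - name: server-xml
      configMap:
        name: tomcat-server-xml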
You can't really get around directly working with the Kubernetes YAML files here. You could run kompose convert to generate the skeleton files, but then you'll need to edit those to add the ConfigMaps, PersistentVolumeClaims (for the database storage), and relevant volume and mount declarations, and then run kubectl apply -f to actually run them. I'd check the Kubernetes YAML files into source control, and maintain them in parallel with your Docker Compose setup.
Move2Kube, which does support docker-compose translation, can handle this case and tries to convert the volumes by interacting with you.
? 6. [] What type of container registry login do you want to use?
Hints:
[Docker login from config mode, will use the default config from your local machine.]
No authentication
? 7. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/dbdata]?:
Hints:
[Use PVC for persistent storage wherever applicable]
Yes
? 8. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/license.json]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 9. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 10. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 11. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/server.xml]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 12. Which storage class to use for persistent volume claim [vol17655897939759777588] used by [mysql]
Hints:
[If you have a custom cluster, you can use collect to get storage classes from it.]
default
? 13. Provide the ingress host domain
Hints:
[Ingress host domain is part of service URL]
myproject.com
? 14. Provide the TLS secret for ingress
Hints:
[Enter TLS secret name]
If the above choices were made Move2Kube creates the following artifacts:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/network/integration: "true"
    move2kube.konveyor.io/service: tomcat
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      move2kube.konveyor.io/service: tomcat
  strategy: {}
  template:
    metadata:
      annotations:
        move2kube.konveyor.io/service.expose: "true"
      creationTimestamp: null
      labels:
        move2kube.konveyor.io/network/integration: "true"
        move2kube.konveyor.io/service: tomcat
      name: tomcat
    spec:
      containers:
        - image: adapter:poc
          imagePullPolicy: Always
          name: tomcat
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /run/secrets/rji_license.json
              name: vol16871681589659214643
            - mountPath: /usr/local/tomcat/conf/ssl.crt
              name: vol12635587774184387470
            - mountPath: /usr/local/tomcat/conf/ssl.key
              name: vol7446232639477381794
            - mountPath: /usr/local/tomcat/conf/server.xml
              name: vol4920239289720818926
      restartPolicy: Always
      volumes:
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/license.json
          name: vol16871681589659214643
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt
          name: vol12635587774184387470
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key
          name: vol7446232639477381794
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/server.xml
          name: vol4920239289720818926
status: {}
and
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/network/integration: "true"
    move2kube.konveyor.io/service: mysql
  name: mysql
spec:
  replicas: 2
  selector:
    matchLabels:
      move2kube.konveyor.io/service: mysql
  strategy: {}
  template:
    metadata:
      annotations:
        move2kube.konveyor.io/service.expose: "true"
      creationTimestamp: null
      labels:
        move2kube.konveyor.io/network/integration: "true"
        move2kube.konveyor.io/service: mysql
      name: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_USER
              value: integrationdb
            - name: MYSQL_PASSWORD
              value: password
            - name: MYSQL_ROOT_PASSWORD
              value: password
          image: db:poc
          imagePullPolicy: Always
          name: mysql
          ports:
            - containerPort: 3306
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: vol17655897939759777588
      restartPolicy: Always
      volumes:
        - name: vol17655897939759777588
          persistentVolumeClaim:
            claimName: vol17655897939759777588
status: {}
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: vol17655897939759777588
spec:
  resources:
    requests:
      storage: 100Mi
  storageClassName: default
  volumeName: vol17655897939759777588
status: {}
Essentially, depending on your choices, Move2Kube will create the appropriate artifacts for you.
You can check out how it works at https://konveyor.github.io/move2kube/tutorials/docker-compose/.
I'm trying to set up Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.
Here's my deployment.yml file. The image is based off jenkins/jenkins
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
However, if I then push a new container to my image repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password, none of my Jenkins jobs are there, no plugins, etc.).
kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
Am I misunderstanding how this volume mount is meant to work?
As an aside, I also have backup and restore scripts which backup the Jenkins home directory to s3, and download it again, but that's somewhat outside the scope of this issue.
You should use PersistentVolumes along with a StatefulSet instead of a Deployment resource if you wish your data to survive re-deployments/restarts of your pod.
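A minimal sketch of that direction, reusing the image and mount path from the question (it assumes a headless Service named jenkins exists and that a default StorageClass can provision the claim):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins          # assumed headless Service
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 30Gi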
You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod. Every time you restart your deployment, the pod can move between kube hosts and the empty dir isn't carried over, so your data isn't persisting across restarts.
I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.
You'll need to configure a StorageClass for AWS. If you've provisioned k8s using something like kops, this will already be configured. You can confirm this by doing kubectl get storageclass - the provisioner should be configured as EBS:
NAME PROVISIONER
gp2 (default) kubernetes.io/aws-ebs
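If no default StorageClass shows up, a minimal gp2 class can be created first (a sketch, assuming AWS EBS; the default-class annotation name is per current Kubernetes docs):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2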
Then, you need to specify a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
You can now use the PV claim in your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-data # must match the claim name from above
Hi, I am running a Kubernetes cluster where I run a mailhog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          ports:
            - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding under containers
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
        - name: secrets-volume
          secret:
            secretName: mailhog-login
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          resources:
            limits:
              cpu: 70m
              memory: 30Mi
            requests:
              cpu: 50m
              memory: 20Mi
          volumeMounts:
            - name: secrets-volume
              mountPath: /data/mailhog
              readOnly: true
          ports:
            - containerPort: 8025
            - containerPort: 1025
          args:
            - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
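In other words, keep the image's ENTRYPOINT and only supply arguments (a minimal sketch using the same image as above):

containers:
  - name: mailhog
    image: us.gcr.io/com/mailhog:1.0.0
    # Docker ENTRYPOINT -> Kubernetes command (left untouched here)
    # Docker CMD        -> Kubernetes args (overridden below)
    args: ["-auth-file=/data/mailhog/auth.file"]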
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed a similar thing (my aim was passing the application profile to the app) and what I did was the following:
Set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
  - name: PROFILE
    value: "dev"
Use this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
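As an alternative that leaves the Dockerfile untouched, Kubernetes itself expands $(VAR) references to container environment variables inside command and args; a sketch with a hypothetical image and jar path:

containers:
  - name: xyz-service                      # hypothetical name
    image: mycompany/xyz-service:latest    # hypothetical image
    env:
      - name: PROFILE
        value: "dev"
    command: ["java"]
    args: ["-Dspring.profiles.active=$(PROFILE)", "-jar", "/opt/app/xyz-service.jar"]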
Trying to set up PetSet using Kube-Solo
In my local dev environment, I have set up Kube-Solo with CoreOS. I'm trying to deploy a Kubernetes PetSet that includes a Persistent Volume Claim Template as part of the PetSet configuration. This configuration fails and none of the pods are ever started. Here is my PetSet definition:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: 'marklogic'
          image: {ip address of repo}:5000/dcgs-sof/ml8-docker-final:v1
          imagePullPolicy: Always
          command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
          ports:
            - containerPort: 7997
              name: health-check
            - containerPort: 8000
              name: app-services
            - containerPort: 8001
              name: admin
            - containerPort: 8002
              name: manage
            - containerPort: 8040
              name: sof-sdl
            - containerPort: 8041
              name: sof-sdl-xcc
            - containerPort: 8042
              name: ml8042
            - containerPort: 8050
              name: sof-sdl-admin
            - containerPort: 8051
              name: sof-sdl-cache
            - containerPort: 8060
              name: sof-sdl-camel
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          lifecycle:
            preStop:
              exec:
                command: ["/etc/init.d/MarkLogic stop"]
          volumeMounts:
            - name: ml-data
              mountPath: /var/opt/MarkLogic
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 1Gi
In the Kubernetes dashboard, I see the following error message:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "ml-data-marklogic-0", which is unexpected.
It seems that being unable to create the Persistent Volume Claim is also preventing the image from ever being pulled from my local repository. Additionally, the Kubernetes Dashboard shows the request for the Persistent Volume Claims, but the state is continuously "pending".
I have verified the issue is with the Persistent Volume Claim. If I remove that from the PetSet configuration the deployment succeeds.
I should note that I was using MiniKube prior to this and would see the same message, but once the image was pulled and the pod(s) started the claim would take hold and the message would go away.
I am using
Kubernetes version: 1.4.0
Docker version: 1.12.1 (on my mac) & 1.10.3 (inside the CoreOS vm)
Corectl version: 0.2.8
Kube-Solo version: 0.9.6
I am not familiar with kube-solo.
However, the issue here might be that you are attempting to use dynamic volume provisioning, a feature which is in beta and which does not have specific support for volumes in your environment.
The best way around this would be to create the persistent volumes that it expects to find manually, so that the PersistentVolumeClaim can find them.
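A sketch of such a manually created volume, matching the capacity and access mode of the volumeClaimTemplate in the question (the hostPath and the alpha storage-class annotation are assumptions for this environment):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: ml-data-pv-0
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything   # assumed to match the claim's annotation
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/ml-data-0    # hypothetical path on the node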
The same error happened to me, and I found clues about the following config (combining volumeClaimTemplates and a StorageClass) in the Slack group and this pull request:
volumeClaimTemplates:
  - metadata:
      name: cassandra-data
      annotations:
        volume.beta.kubernetes.io/storage-class: standard
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path