I’m curious about the Kubeflow GPU Resource. I’m running the job below.
The only place where I specify a GPU resource is on the first container, with just 1 GPU. However, the event message tells me 0/4 nodes are available: 4 Insufficient nvidia.com/gpu.
Why does this message say 4 when I requested only 1 GPU? Is my interpretation wrong? Thanks much in advance.
FYI, I have 3 worker nodes, each with 1 GPU.
apiVersion: batch/v1
kind: Job
metadata:
  name: saint-train-3
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    spec:
      initContainers:
      - name: dataloader
        image: <AWS CLI Image>
        command: ["/bin/sh", "-c", "aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data; cd /s3-data; tar -xvzf kubeflowdata.tar.gz; cd kubeflow_data; ls"]
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
      - name: trainer
        image: <Our Model Image>
        command: ["/bin/sh", "-c", "wandb login <ID>; python /opt/ml/src/main.py --base_path='/s3-data/kubeflow_data' --debug_mode='0' --project='kubeflow-test' --name='test2' --gpu=0 --num_epochs=1 --num_workers=4"]
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
        resources:
          limits:
            nvidia.com/gpu: "1"
      - name: gpu-watcher
        image: pytorch/pytorch:latest
        command: ["/bin/sh", "-c", "--"]
        args: ["while true; do sleep 30; done;"]
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
      volumes:
      - name: s3-data
        persistentVolumeClaim:
          claimName: test-claim
      restartPolicy: OnFailure
  backoffLimit: 6
0/4 nodes are available: 4 Insufficient nvidia.com/gpu
This means none of your nodes currently expose enough allocatable nvidia.com/gpu. The "4" is the number of nodes the scheduler evaluated, not the number of GPUs requested: the message reads "0 of 4 nodes are schedulable; all 4 were rejected for insufficient nvidia.com/gpu". Typically this happens when the NVIDIA device plugin is not running on the nodes, so they never advertise the nvidia.com/gpu resource, or when the GPUs are already claimed by other pods.
I am trying to run my private Docker image alongside a docker-dind container so that I can run docker commands from the private image in Kubernetes.
My only issue is that the docker run command does not pick up the Docker secrets, so it fails and asks me to run docker login. How can I pass the credentials to the docker run command?
Here is the relevant piece of my Kubernetes deployment:
containers:
- name: docker-private
  image: docker:20.10
  command: ['docker', 'run', '-p', '80:8000', 'private/image:latest']
  resources:
    requests:
      cpu: 10m
      memory: 256Mi
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375
  envFrom:
  - secretRef:
      name: docker-secret-keys
- name: dind-daemon
  image: docker:20.10-dind
  command: ["dockerd", "--host", "tcp://127.0.0.1:2375"]
  resources:
    requests:
      cpu: 20m
      memory: 512Mi
  securityContext:
    privileged: true
  volumeMounts:
  - name: docker-graph-storage
    mountPath: /var/lib/docker
EDIT
I do have my certificate as a Kubernetes secret, and I have tried to mount it into the running Docker container, but so far without any success :(
apiVersion: v1
data:
  .dockerconfigjson: eyJhXXXXXXdoihfc9w8fwpeojfOFwhfoiuwehfo8wfhoi2ehfioewNlcm5hbWUiOiJlbGRhcmVudGas4hti45ytg45hgiVsZGFXXXXXXyQGVudG9yLmlvIiwiYXV0aCI6IlpXeGtZWEpsYm5SdmNqb3dObVl4WmpjM1lTMDVPRFZrTFRRNU5HRXRZVEUzTXkwMk5UYzBObVF4T0RjeFpUWT0ifX19XXXXXXXXXXX
kind: Secret
metadata:
  name: staging-docker-keys
  namespace: staging
  resourceVersion: "6383"
  uid: a7yduyd-xxxx-xxxx-xxxx-ae2ede3e4ed
type: kubernetes.io/dockerconfigjson
The final goal is to get the "inner docker" (that runs private/image:latest) be able to run any docker command without a need to login before each command.
docker:dind creates the CA, server, and client certs in /certs.
Just create an emptyDir volume to share the certs:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  volumes:
  - name: docker-tls-certdir
    emptyDir: {}
  containers:
  - name: docker-private
    image: docker:20.10
    command: ['docker', 'run', '-p', '80:8000', 'nginx']
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
    volumeMounts:
    - name: docker-tls-certdir
      mountPath: /certs
  - name: dind-daemon
    image: docker:20.10-dind
    command: ["dockerd", "--host", "tcp://127.0.0.1:2375"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-tls-certdir
      mountPath: /certs
Assuming you are not using docker cert authentication but username and password, you may follow the path below: modify the docker client image's (docker:20.10) entrypoint using the command field. The command may look like this:

command: ["/bin/sh"]
args: ["-c", "docker login...;docker run..."]
Sample working pod using the idea:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  containers:
  - name: myapp
    image: docker:20.10
    command: ["/bin/sh"]
    args: ["-c", "docker version;docker info"]
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
Based on docs
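Putting the login idea together with the envFrom secret from the question, the client container could look like the sketch below. The DOCKER_USER and DOCKER_PASS key names are assumptions about what docker-secret-keys contains; adjust them to your actual keys:

```yaml
containers:
- name: docker-private
  image: docker:20.10
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375
  envFrom:
  - secretRef:
      name: docker-secret-keys   # assumed to expose DOCKER_USER / DOCKER_PASS
  command: ["/bin/sh"]
  # --password-stdin keeps the password out of the process list
  args: ["-c", "echo \"$DOCKER_PASS\" | docker login -u \"$DOCKER_USER\" --password-stdin && docker run -p 80:8000 private/image:latest"]
```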
EDIT:
If you do use docker cert authentication, you have several options:
- bake the certificates in by extending the docker client image and using that instead
- mount the certificates if you have them as Kubernetes secrets in the cluster
- ...
Ok, I finally created an access token on my docker repository and used it to perform the docker login command. It works just fine :)
I am running Cassandra on Kubernetes using the following StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
Now I need to add the line below to cassandra-env.sh, either via postStart or in the Cassandra YAML file:

JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/cassandra-exporter-agent-<version>.jar"

I was able to achieve this, but after this step Cassandra requires a restart, and since it is already running as a pod I don't know how to restart the process. So: is there any way to perform this step before the pod comes up, rather than after it is up?
I was given the suggestion below:

This won't work. Commands that run in postStart don't impact the running container; you need to change the startup command passed to Cassandra. The only way I know to do this is to create a new container image in your registry, based on the existing image, and pull from there.

But I don't know how to achieve this.
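For completeness, "baking" the agent into a derived image can be done with a small Dockerfile extending the existing one. This is only a sketch under assumptions: the exporter jar has been downloaded next to the Dockerfile, and the lib and cassandra-env.sh paths must be adjusted to wherever they actually live in your base image:

```dockerfile
# Hypothetical derived image; verify CASSANDRA_HOME and the real
# location of cassandra-env.sh inside your base image before using.
FROM gcr.io/google-samples/cassandra:v13
COPY cassandra-exporter-agent-<version>.jar $CASSANDRA_HOME/lib/
RUN echo 'JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/cassandra-exporter-agent-<version>.jar"' \
    >> $CASSANDRA_HOME/conf/cassandra-env.sh
```

Build and push this to your registry, then point the StatefulSet's image: field at the new tag; the agent is then active from the first start, so no in-place restart is needed.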
I am using pushgateway to expose metrics coming from short-lived batch jobs.
At the moment the pushgateway instance is launched on a bare-metal machine, where I have a Docker volume mounted so that the metrics survive a container restart (in conjunction with the --persistence.file parameter).
Here is an extract of the docker-compose.yml file used to run the container:
pushgateway:
  image: prom/pushgateway:v1.2.0
  restart: unless-stopped
  volumes:
    - pushgw-data:/data
  ports:
    - "${PUSHGW_PORT:-9091}:9091"
  command: --persistence.file="/data/metric.store"
I am moving to a (private) Kubernetes cluster without persistent volumes, but equipped with an s3-compatible object storage.
From this issue on GitHub it seems possible to target s3 for the checkpointing, but without further input I am not sure how to achieve this, and that's the best I could find searching the web.
Can anyone point me in the right direction?
So finally https://serverfault.com/questions/976764/kubernetes-run-aws-s3-sync-rsync-against-persistent-volume-on-demand pointed me in the right direction.
This is an extract of the deployment.yaml descriptor which works as expected:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{K8S_NAMESPACE}}
  name: {{K8S_DEPLOYMENT_NAME}}
spec:
  selector:
    matchLabels:
      name: {{K8S_DEPLOYMENT_NAME}}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: {{K8S_DEPLOYMENT_NAME}}
        version: v1
    spec:
      containers:
      - name: {{AWSCLI_NAME}}
        image: {{IMAGE_AWSCLI}}
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: {{SECRET_NAME}}
              key: accesskey
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: {{SECRET_NAME}}
              key: secretkey
        command: ["/bin/bash",
                  "-c",
                  "aws --endpoint-url {{ENDPOINT_URL}} s3 sync s3://{{BUCKET}} /data; while true; do aws --endpoint-url {{ENDPOINT_URL}} s3 sync /data s3://{{BUCKET}}; sleep 60; done"]
        volumeMounts:
        - name: pushgw-data
          mountPath: /data
      - name: {{PUSHGATEWAY_NAME}}
        image: {{IMAGE_PUSHGATEWAY}}
        command: ['/bin/sh', '-c']
        args: ['sleep 10; /bin/pushgateway --persistence.file=/data/metric.store']
        ports:
        - containerPort: 9091
        volumeMounts:
        - name: pushgw-data
          mountPath: /data
      volumes:
      - name: pushgw-data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: {{K8S_DEPLOYMENT_NAME}}
      imagePullSecrets:
      - name: harbor-bot
      restartPolicy: Always
Note the override of the entrypoint of the pushgateway image. In my case I put in a 10-second startup delay; you might need to tune the delay to suit your needs. The delay is necessary because the pushgateway container boots faster than the sidecar (also due to the network exchange with s3, I suppose).
If the pushgateway starts while no metrics store file is present yet, the file won't be used/considered. But it gets worse: the first time you push data to the pushgateway, it will overwrite the file, and at that point the "sync" from the sidecar container will also overwrite the original copy in s3. So pay attention, and make sure you have a backup of the metrics file before experimenting with this delay value.
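If the sleep-based ordering turns out to be fragile, one alternative (an untested sketch, reusing the same template placeholders and assuming the same AWS credential env vars are added to it) is to do the initial restore in an initContainer, so the pushgateway only starts once the file is in place:

```yaml
initContainers:
- name: restore-metrics
  image: {{IMAGE_AWSCLI}}
  # one-shot restore; the pushgateway container starts only after this exits
  command: ["/bin/bash", "-c",
            "aws --endpoint-url {{ENDPOINT_URL}} s3 sync s3://{{BUCKET}} /data"]
  volumeMounts:
  - name: pushgw-data
    mountPath: /data
```

The periodic upload sidecar is still needed, but the startup race between restore and first push disappears.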
I am trying to load the elasticsearch.yml file using a ConfigMap while installing Elasticsearch on Kubernetes:
kubectl create configmap elastic-config --from-file=./elasticsearch.yml
The elasticsearch.yml file is loaded in the container with root as its owner and read-only permissions (https://github.com/kubernetes/kubernetes/issues/62099). Since Elasticsearch will not start with that ownership, the pod crashes.
As a workaround, I tried to mount the ConfigMap to a different path and then copy it into the config directory using an initContainer. However, the file in the config directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?
ElasticSearch Kubernetes StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  labels:
    app: elasticservice
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: docker-elastic
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elastic-service"
        - name: discovery.zen.minimum_master_nodes
          value: "1"
        - name: node.master
          value: "true"
        - name: node.data
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xmx256m -Xms256m"
      volumes:
      - name: elastic-config-vol
        configMap:
          name: elastic-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
      - name: elastic-config-dir
        emptyDir: {}
      - name: elastic-storage
        emptyDir: {}
      initContainers:
      # elasticsearch will not run as root, fix data dir ownership
      - name: fix-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
      - name: fix-config-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-config-dir
          mountPath: /usr/share/elasticsearch/config
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
      # increase default vm.max_map_count to 262144
      - name: increase-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
I use:
...
volumeMounts:
- name: config
  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
  subPath: elasticsearch.yml
volumes:
- name: config
  configMap:
    name: es-configmap
without any permission problems; you can additionally set file permissions with defaultMode.
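For reference, defaultMode goes on the configMap volume source itself; a sketch (0644 is just an illustrative value):

```yaml
volumes:
- name: config
  configMap:
    name: es-configmap
    defaultMode: 0644   # mode applied to the projected elasticsearch.yml
```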
I am trying to play with init containers. I want to use an init container to create a file, and the default container to check that the file exists and sleep for a while.
my yaml:
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -e /workdir/test.txt ]; then sleep 99999; fi']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
When I try to debug the same commands from an alpine image, I run:
kubectl run alpine --rm -ti --image=alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # if [ -e /workdir/test.txt ]; then sleep 3; fi
/ # mkdir /workdir; echo>/workdir/test.txt
/ # if [ -e /workdir/test.txt ]; then sleep 3; fi
(here the shell sleeps for 3 seconds)
/ #
And it seems like commands working as expected.
But on my real k8s cluster I get only CrashLoopBackOff for the main container.
kubectl describe pod init-test-pod
shows me only this error:
Containers:
  myapp-container:
    Container ID:  docker://xxx
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:xxx
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      if [ -e /workdir/test.txt ]; then sleep 99999; fi
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
    Ready:          False
    Restart Count:  3
    Environment:    <none>
The problem here is that your main container cannot see the folder you create: each container has its own filesystem, so anything the init container writes outside a shared volume is gone when it exits. You need a volume to share the folder between the two containers, for example a PersistentVolumeClaim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: mypvc
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -f /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - name: mypvc
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
    volumeMounts:
    - name: mypvc
      mountPath: /workdir
You can also use an emptyDir, so you won't need the PVC:
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  volumes:
  - name: mydir
    emptyDir: {}
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -f /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - name: mydir
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
    volumeMounts:
    - name: mydir
      mountPath: /workdir
That's because your two containers have separate filesystems. You need to share the file using an emptyDir volume:
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -e /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - mountPath: /workdir
      name: workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
    volumeMounts:
    - mountPath: /workdir
      name: workdir
  volumes:
  - name: workdir
    emptyDir: {}