Unable to mount hostPath with Docker Desktop on Linux - docker

I need to mount an existing local directory into a pod with Docker Desktop Kubernetes. I use this YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: my-image:latest
          resources:
            limits:
              memory: "0"
              cpu: "0"
          ports:
            - containerPort: 80
          volumeMounts:
            - name: api-code
              mountPath: /opt/app/api
      volumes:
        - name: api-code
          hostPath:
            path: /home/my-home/repositories/my-project/app/api
The local folder /home/my-home/repositories/my-project/app/api exists and contains files. Inside the pod, /opt/app/api exists but is empty. The /home directory is correctly listed in the File sharing section of Docker Desktop.
UPDATE:
The same configuration works with microk8s.
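A detail worth keeping in mind: with Docker Desktop the Kubernetes node is a VM, so hostPath is resolved inside that VM rather than directly on the Linux host, and whether the shared path is visible there depends on the File sharing setup. One way to make the failure explicit (a sketch against the volume above, not a confirmed fix) is to declare the expected hostPath type, so the kubelet refuses to start the pod instead of silently creating an empty directory:
      volumes:
        - name: api-code
          hostPath:
            path: /home/my-home/repositories/my-project/app/api
            type: Directory   # pod fails to start if this path does not exist on the node (the Docker Desktop VM)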

Related

Accessing CIFS files from pods

We have a docker image that is processing some files on a samba share.
For this we created a cifs share which is mounted to /mnt/dfs and files can be accessed in the container with:
docker run -v /mnt/dfs/project1:/workspace image
Now what I was asked to do is get the container into k8s. To access a CIFS share from a pod, a CIFS volume driver using FlexVolume can be used. That's where some questions pop up.
I installed this repo as a daemonset
https://k8scifsvol.juliohm.com.br/
and it's up and running.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cifs-volumedriver-installer
spec:
  selector:
    matchLabels:
      app: cifs-volumedriver-installer
  template:
    metadata:
      name: cifs-volumedriver-installer
      labels:
        app: cifs-volumedriver-installer
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4
          name: flex-deploy
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /flexmnt
              name: flexvolume-mount
      volumes:
        - name: flexvolume-mount
          hostPath:
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
Next thing to do is add a PersistentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the SMB server? Why should there be a capacity for an already existing server?
Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? In other words, how do we access the data from /mnt/dfs/project1 in the pod?
Do we even need a PV? Could the pod just read from the host's mounted share?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
    - ReadWriteMany
No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)
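For reference, a sketch of a claim that could bind to the PV above (the claim name mycifspv-claim is made up); the storage request is only used for PV/PVC matching and must not exceed the PV's declared capacity, it is not enforced on the CIFS share itself:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mycifspv-claim   # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  volumeName: mycifspv   # bind explicitly to the PV defined in the question
  resources:
    requests:
      storage: 1Gi       # only compared against the PV capacity during binding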
Managed to get it working with the fstab/cifs plugin.
Copy its cifs script to /usr/libexec/kubernetes/kubelet-plugins/volume/exec and give it execute permissions. Also restart kubelet on all nodes.
https://github.com/fstab/cifs
Then I added:
containers:
  - name: pablo
    image: "10.203.32.80:5000/pablo"
    volumeMounts:
      - name: dfs
        mountPath: /data
volumes:
  - name: dfs
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//dfs/dir"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
Now there is the /data mount inside the container pointing to //dfs/dir
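To reach the project1 subdirectory asked about earlier, one option (a sketch, not verified against this driver, and assuming project1 sits directly under the networkPath) is to keep mounting the whole share and add subPath on the volumeMount so the container only sees that subdirectory under /data:
containers:
  - name: pablo
    image: "10.203.32.80:5000/pablo"
    volumeMounts:
      - name: dfs
        mountPath: /data
        subPath: project1   # exposes only the project1 subdirectory instead of the whole share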

rancher persistent data lost after reboot

I have installed nfs-provisioner in my Rancher cluster and created a persistent volume for my MongoDB. When I restart the server or upgrade the MongoDB container, all my data is lost. How can I fix this?
My MongoDB configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-db
spec:
  selector:
    matchLabels:
      app: mongo-db
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo-db
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data # reference the volumeClaimTemplate below
              mountPath: /data/db
  # this is a key difference with statefulsets
  # A unique volume will be attached to each pod
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        # If no storageClassName is provided the default storage class will be used
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
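One common cause of this symptom is that the dynamically provisioned PV is deleted together with its claim when the StatefulSet or PVC is removed, taking the data with it. A sketch of the usual mitigation, assuming the nfs-provisioner registered a StorageClass (the name nfs-client and the provisioner string below are assumptions; check kubectl get storageclass for the real values):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client             # assumed name, use whatever your nfs-provisioner registered
provisioner: example.com/nfs   # assumed provisioner string, for illustration only
reclaimPolicy: Retain          # keep the PV (and the data on the NFS export) when the claim is deleted
Then reference it explicitly in the claim template (storageClassName: nfs-client) instead of relying on the default class.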

Can't enable image deletion from a private docker registry

I'm deploying a private registry within a K8s cluster with the following YAML file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: registry
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/registry/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    app: registry
spec:
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30400
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
        - name: registryui
          image: hyper/docker-registry-web:latest
          ports:
            - containerPort: 8080
          env:
            - name: REGISTRY_URL
              value: http://localhost:5000/v2
            - name: REGISTRY_NAME
              value: cluster-registry
      volumes:
        - name: docker
          hostPath:
            path: /var/run/docker.sock
        - name: registry-persistent-storage
          persistentVolumeClaim:
            claimName: registry-claim
The problem is that there is no option to delete Docker images after pushing them to the local registry. I found how it is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list Docker images inside the local repository and see all tags via the command line, but I am unable to delete them. After reading the Docker registry documentation, I found that the registry needs to be run with the following environment variable: REGISTRY_STORAGE_DELETE_ENABLED=true.
I tried to add this variable to the YAML file:
.........
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
          env:
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: true
But applying this YAML file with kubectl apply -f manifests/registry.yaml returns the following error message:
Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...
Afterwards I found another suggestion:
The registry accepts configuration settings either via a file or via
environment variables. So the environment variable
REGISTRY_STORAGE_DELETE_ENABLED=true is equivalent to this in your
config file:
storage:
  delete:
    enabled: true
I've tried this option as well in my YAML file, but still no luck...
Any suggestions on how to enable Docker image deletion in my YAML file are highly appreciated.
The value true in YAML is parsed as a boolean data type, and the syntax calls for a string. You'll need to explicitly quote it:
value: "true"

Kubernetes Volume Mount with Replication Controllers

Found this example for Kubernetes EmptyDir volume
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /srv/www
          name: www-data
          readOnly: true
    - name: git-monitor
      image: kubernetes/git-monitor
      env:
        - name: GIT_REPO
          value: http://github.com/some/repo.git
      volumeMounts:
        - mountPath: /data
          name: www-data
  volumes:
    - name: www-data
      emptyDir: {}
I want to share a volume mount between 2 pods. I am creating these pods using 2 different replication controllers, which look like this:
Replication Controller 1:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-worker
  labels:
    name: node-worker
spec:
  replicas: 1
  selector:
    name: node-worker
  template:
    metadata:
      labels:
        name: node-worker
    spec:
      containers:
        - name: node-worker
          image: image/node-worker
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}
Replication Controller 2:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-manager
  labels:
    name: node-manager
spec:
  replicas: 1
  selector:
    name: node-manager
  template:
    metadata:
      labels:
        name: node-manager
    spec:
      containers:
        - name: node-manager
          image: image/node-manager
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}
Can Kubernetes emptyDir volume be used for this scenario?
EmptyDir volumes are inherently bound to the lifecycle of a single pod and can't be shared amongst pods in replication controllers or otherwise. If you want to share volumes amongst pods, the best choices right now are NFS or gluster, in a persistent volume. See an example here: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md
Why do you want to share the volume mount between pods? This will not work reliably because you aren't guaranteed to have a 1:1 mapping between where pods in replication controller 1 and replication controller 2 are scheduled in your cluster.
If you want to share local storage between containers, you should put both of the containers into the same pod, and have each container mount the emptyDir volume.
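Applied to the manifests in the question, that means one controller whose pod template runs both images and mounts the same emptyDir; a sketch (the name node-combined is made up, the images are the ones from the question):
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-combined          # hypothetical name
spec:
  replicas: 1
  selector:
    name: node-combined
  template:
    metadata:
      labels:
        name: node-combined
    spec:
      containers:
        - name: node-worker
          image: image/node-worker
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
        - name: node-manager
          image: image/node-manager
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}         # shared only between the containers of this pod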
You require three things to get this working. More info can be found here and some documentation here, but it's a little confusing at first.
This example mounts a NFS volume.
1. Create a PersistentVolume pointing to your NFS server
file : mynfssharename-pv.yaml
(update server to point to your server)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfssharename
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: yourservernotmine.yourcompany.com
    path: "/yournfspath"
kubectl create -f mynfssharename-pv.yaml
2. Create a PersistentVolumeClaim that points to the PersistentVolume mynfssharename
file : mynfssharename-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynfssharename
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
kubectl create -f mynfssharename-pvc.yaml
3. Add the claim to your ReplicationController or Deployment
spec:
  containers:
    - name: sample-pipeline
      image: yourimage
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          name: http
      volumeMounts:
        # name must match the volume name below
        - name: mynfssharename
          mountPath: "/mnt"
  volumes:
    - name: mynfssharename
      persistentVolumeClaim:
        claimName: mynfssharename

How to Run a script at the start of Container in Cloud Containers Engine with Kubernetes

I am trying to run a shell script at the start of a docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run the prod_start.sh script at the start of the container (I don't want to put it as part of the Dockerfile though). The current setup fails to start the container with: Command not found, file ./prod_start.sh does not exist. Any idea how to fix this?
app/
...
Dockerfile
prod_start.sh
web-controller.yaml
Gemfile
...
Dockerfile
FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
web-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
After a lot of experimentation, I believe adding the script to the Dockerfile:
ADD prod_start.sh /backend/prod_start.sh
And then calling the command like this in the yaml controller file:
command: ['/bin/sh', './prod_start.sh']
Fixed it.
You can add a ConfigMap to your YAML instead of adding the script to your Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
        - name: prod-start-config
          configMap:
            name: prod-start-config-script
            defaultMode: 0744
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
            - name: prod-start-config
              mountPath: /backend/
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
Then create another yaml file for your script:
script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-start-config-script
data:
  prod_start.sh: |
    apt-get update
When deployed, the script will be available in the mount directory (/backend/ in this example).
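One caveat with the ConfigMap approach: mounting the volume straight onto /backend/ hides whatever the image already placed in that directory (the Gemfile, installed bundle, and so on). A variant worth considering (a sketch, assuming the same volume and container) mounts only the script file via subPath so the rest of /backend stays intact:
          volumeMounts:
            - name: prod-start-config
              mountPath: /backend/prod_start.sh
              subPath: prod_start.sh   # mount just this key as a file instead of shadowing /backend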
