Minikube mounted host folders are not working - docker

I am using Ubuntu 18 with Minikube and VirtualBox, and I am trying to mount a host directory so my pod can read the input data it needs.
I found that Minikube has issues with mounting host directories, but that, depending on your OS and VM driver, some directories are mounted by default.
I can't find those on my pods. They are simply not there.
I tried to create a persistent volume; it works and I can see it on my dashboard, but I can't mount it to the pod. I used this YAML to create the volume:
{
    "kind": "PersistentVolume",
    "apiVersion": "v1",
    "metadata": {
        "name": "pv0003",
        "selfLink": "/api/v1/persistentvolumes/pv0001",
        "uid": "28038976-9ee4-414d-8478-b312a24a6b94",
        "resourceVersion": "2030",
        "creationTimestamp": "2019-08-08T10:48:23Z",
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"pv0001\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"hostPath\":{\"path\":\"/data/pv0001/\"}}}\n"
        },
        "finalizers": [
            "kubernetes.io/pv-protection"
        ]
    },
    "spec": {
        "capacity": {
            "storage": "6Gi"
        },
        "hostPath": {
            "path": "/user/data",
            "type": ""
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Retain",
        "volumeMode": "Filesystem"
    },
    "status": {
        "phase": "Available"
    }
}
And this YAML to create the job:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi31
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["sleep"]
        args: ["300"]
        volumeMounts:
        - mountPath: /data
          name: pv0003
      volumes:
      - name: pv0003
        hostPath:
          path: /user/data
      restartPolicy: Never
  backoffLimit: 1
I also tried to create the volumes according to the so-called default mount paths, but with no success.
I tried adding the volume claim to the job creation YAML; still nothing.
When I mount the drives and declare them in the job creation YAML files, the jobs are able to see the data that other jobs create, but it is invisible to the host, and the host's data is invisible to them.
I am running minikube from my main user, and I checked the logs in the dashboard; I am not getting any permission errors.
Is there any way to get data into this minikube setup without setting up NFS? I am trying to use it for an MVP, and the entire idea is for it to be simple...

It's not that simple: minikube runs inside a VM created by VirtualBox, which is why a hostPath volume shows you that VM's file system instead of your PC's.
I would really recommend using the minikube mount command - you can find its description there.
From docs:
minikube mount /path/to/dir/to/mount:/vm-mount-path is the recommended
way to mount directories into minikube so that they can be used in
your local Kubernetes cluster.
After that you can share your host's files inside the Kubernetes cluster running in minikube.
Edit:
Here is a step-by-step log of how to test it:
➜ ~ minikube start
* minikube v1.3.0 on Ubuntu 19.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.15.2 on Docker 18.09.6 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
➜ ~ mkdir -p /tmp/test-dir
➜ ~ echo "test-string" > /tmp/test-dir/test-file
➜ ~ minikube mount /tmp/test-dir:/test-dir
* Mounting host path /tmp/test-dir into VM as /test-dir ...
- Mount type: <no value>
- User ID: docker
- Group ID: docker
- Version: 9p2000.L
- Message Size: 262144
- Permissions: 755 (-rwxr-xr-x)
- Options: map[]
* Userspace file server: ufs starting
* Successfully mounted /tmp/test-dir to /test-dir
* NOTE: This process must stay alive for the mount to be accessible ...
Now open another console:
➜ ~ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cat /test-dir/test-file
test-string
Edit 2:
Example job.yml:
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
      - name: test
        image: ubuntu
        command: ["cat", "/testing/test-file"]
        volumeMounts:
        - name: test-volume
          mountPath: /testing
      volumes:
      - name: test-volume
        hostPath:
          path: /test-dir
      restartPolicy: Never
  backoffLimit: 4
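To try this end to end (a quick sketch, assuming the minikube mount process from the earlier console is still running), apply the job and read its output:
kubectl apply -f job.yml
# wait for the job to finish, then read the container's output
kubectl wait --for=condition=complete job/test --timeout=60s
kubectl logs job/test   # should print "test-string"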

Related

Openshift missing permissions to create a file

The Spring Boot application is deployed on OpenShift 4. This application needs to create a file on the NFS share.
The OpenShift container has a volume mount of type NFS configured.
The container on OpenShift creates a pod with a random user ID:
sh-4.2$ id
uid=1031290500(1031290500) gid=0(root) groups=0(root),1031290500
The mount point is /nfs/abc
sh-4.2$ ls -la /nfs/
ls: cannot access /nfs/abc: Permission denied
total 0
drwxr-xr-x. 1 root root 29 Nov 25 09:34 .
drwxr-xr-x. 1 root root 50 Nov 25 10:09 ..
d?????????? ? ? ? ? ? abc
On the Docker image I created a user "technical" with uid= gid=48760, as shown below.
FROM quay.repository
MAINTAINER developer
LABEL description="abc image" \
      name="abc" \
      version="1.0"
ARG APP_HOME=/opt/app
ARG PORT=8080
ENV JAR=app.jar \
    SPRING_PROFILES_ACTIVE=default \
    JAVA_OPTS=""
RUN mkdir $APP_HOME
ADD $JAR $APP_HOME/
WORKDIR $APP_HOME
EXPOSE $PORT
ENTRYPOINT java $JAVA_OPTS -Dspring.profiles.active=$SPRING_PROFILES_ACTIVE -jar $JAR
My deployment config file is as shown below:
spec:
  volumes:
    - name: bad-import-file
      persistentVolumeClaim:
        claimName: nfs-test-pvc
  containers:
    - resources:
        limits:
          cpu: '1'
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 512Mi
      terminationMessagePath: /dev/termination-log
      name: abc
      env:
        - name: SPRING_PROFILES_ACTIVE
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: spring.profiles.active
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: db.url
        - name: DB_USERNAME
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: db.username
        - name: BAD_IMPORT_PATH
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: bad.import.path
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: abc-secret
              key: db.password
      ports:
        - containerPort: 8080
          protocol: TCP
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: bad-import-file
          mountPath: /nfs/abc
  dnsPolicy: ClusterFirst
  securityContext:
    runAsGroup: 44337
    runAsNonRoot: true
    supplementalGroups:
      - 44337
The PV definition is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: abc-tuc-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: classic-nfs
  mountOptions:
    - hard
    - nfsvers=3
  nfs:
    path: /tm03v06_vol3014
    server: tm03v06cl02.jit.abc.com
    readOnly: false
Now the OpenShift user has the ID:
sh-4.2$ id
uid=1031290500(1031290500) gid=44337(technical) groups=44337(technical),1031290500
RECENT UPDATE
Just to be clear about the problem, below are two commands from the same pod terminal.
sh-4.2$ cd /nfs/
sh-4.2$ ls -la (The first command I tried immediately after pod creation.)
total 8
drwxr-xr-x. 1 root root 29 Nov 29 08:20 .
drwxr-xr-x. 1 root root 50 Nov 30 08:19 ..
drwxrwx---. 14 technical technical 8192 Nov 28 19:06 abc
sh-4.2$ ls -la (a few seconds later on the same pod terminal)
ls: cannot access abc: Permission denied
total 0
drwxr-xr-x. 1 root root 29 Nov 29 08:20 .
drwxr-xr-x. 1 root root 50 Nov 30 08:19 ..
d?????????? ? ? ? ? ? abc
So the problem is that I see these question marks (???) on the mount point.
The mount itself works, but I cannot access the /nfs/abc directory and I see the ????? entries for some reason.
UPDATE
sh-4.2$ ls -la /nfs/abc/
ls: cannot open directory /nfs/abc/: Stale file handle
sh-4.2$ ls -la /nfs/abc/ (after a few seconds on the same pod terminal)
ls: cannot access /nfs/abc/: Permission denied
Could this STALE FILE HANDLE be the reason for this issue?
TL;DR
You can use the anyuid security context to run the pod to avoid having OpenShift assign an arbitrary UID, and set the permissions on the volume to the known UID of the user.
OpenShift will override the user ID the image itself may specify that it should run as:
The user ID isn't actually entirely random, but is an assigned user ID which is unique to your project. In fact, your project is assigned a range of user IDs that applications can be run as. The set of user IDs will not overlap with other projects. You can see what range is assigned to a project by running oc describe on the project.
The purpose of assigning each project a distinct range of user IDs is so that in a multitenant environment, applications from different projects never run as the same user ID. When using persistent storage, any files created by applications will also have different ownership in the file system.
... this is a blessing and a curse, for example when using shared persistent volume claims (e.g. PVCs mounted as ReadWriteMany with multiple pods that read/write data): files created by one pod won't be accessible by the other pod because of the incorrect file ownership and permissions.
One way to get around this issue is using the anyuid security context which "provides all features of the restricted SCC, but allows users to run with any UID and any GID".
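As a rough sketch of how the SCC could be granted (the service account name "default" and namespace "k8s" are only assumptions for illustration, matching the example namespace used further below):
# grant the anyuid SCC to the service account the pod runs under
oc adm policy add-scc-to-user anyuid -z default -n k8s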
When using the anyuid security context, we know the user and group IDs the pod(s) are going to run as, and we can set the permissions on the shared volume in advance. By default, all pods run with the restricted security context and get an arbitrary UID from the range allocated for the namespace; when running the pod with the anyuid security context, OpenShift doesn't assign such an arbitrary UID.
This is just an example, but an image built with a non-root user with a fixed UID and GID (e.g. 1000:1000) would run in OpenShift as that user, files would be created with that user's ownership (e.g. 1000:1000), and permissions on the PVC can be set to the known UID and GID of the user running the service. For example, we can create a new PVC:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: k8s
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: portworx-shared-sc
EOF
... then mount it in a pod:
kubectl run -i --rm --tty ansible --image=lazybit/ansible:v4.0.0 --restart=Never -n k8s --overrides='
{
  "apiVersion": "v1",
  "kind": "Pod",
  "spec": {
    "serviceAccountName": "default",
    "containers": [
      {
        "name": "nginx",
        "imagePullPolicy": "Always",
        "image": "lazybit/ansible:v4.0.0",
        "command": ["ash"],
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "env": [
          {
            "name": "POD_NAME",
            "valueFrom": {
              "fieldRef": {
                "apiVersion": "v1",
                "fieldPath": "metadata.name"
              }
            }
          }
        ],
        "volumeMounts": [
          {
            "mountPath": "/data",
            "name": "data"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "data",
        "persistentVolumeClaim": {
          "claimName": "data"
        }
      }
    ]
  }
}'
... and create files in the PVC as the USER set in the Dockerfile.
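A quick sanity check from inside that pod (just a sketch; the exact UID/GID depends on the image) could be:
# confirm the UID/GID and that files on the shared volume get that ownership
id                      # should report the fixed UID/GID baked into the image, e.g. 1000:1000
touch /data/test-file
ls -ln /data            # numeric owner/group of test-file should match the UID/GID above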

Minikube shared volume does not show files after some time

I have to share the local .ssh directory content with a pod. I searched for that and, from an answer in one of the posts, found that I should start Minikube with --mount-string.
$ minikube start --mount-string="$HOME/.ssh/:/ssh-directory" --mount
😄 minikube v1.9.2 on Darwin 10.14.6
✨ Using the docker driver based on existing profile
👍 Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
E0426 23:44:18.447396 80170 kubeadm.go:331] Overriding stale ClientConfig host https://127.0.0.1:32810 with https://127.0.0.1:32813
📁 Creating mount /Users/myhome/.ssh/:/ssh-directory ...
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
❗ /usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.18.0.
💡 You can also use 'minikube kubectl -- get pods' to invoke a matching version
When I check Docker for the given Minikube container, it returns:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad64f642b63 gcr.io/k8s-minikube/kicbase:v0.0.8 "/usr/local/bin/entr…" 3 weeks ago Up 45 seconds 127.0.0.1:32815->22/tcp, 127.0.0.1:32814->2376/tcp, 127.0.0.1:32813->8443/tcp minikube
And I check whether the .ssh directory contents are there or not:
$ docker exec -it 5ad64f642b63 ls /ssh-directory
id_rsa id_rsa.pub known_hosts
I have the deployment YAML as:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    stack: api
    app: api-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-web
  template:
    metadata:
      labels:
        app: api-web
    spec:
      containers:
        - name: api-web-pod
          image: tiangolo/uwsgi-nginx-flask
          ports:
            - name: api-web-port
              containerPort: 80
          envFrom:
            - secretRef:
                name: api-secrets
          volumeMounts:
            - name: ssh-directory
              mountPath: /app/.ssh
      volumes:
        - name: ssh-directory
          hostPath:
            path: /ssh-directory/
            type: Directory
When it runs, it gives an error for /ssh-directory:
$ kubectl describe pod/api-deployment-f65db9c6c-cwtvt
Name: api-deployment-f65db9c6c-cwtvt
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sat, 02 May 2020 23:07:51 -0500
Labels: app=api-web
pod-template-hash=f65db9c6c
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/api-deployment-f65db9c6c
Containers:
api-web-pod:
Container ID:
Image: tiangolo/uwsgi-nginx-flask
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
api-secrets Secret Optional: false
Environment: <none>
Mounts:
/app/.ssh from ssh-directory (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9shz5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ssh-directory:
Type: HostPath (bare host directory volume)
Path: /ssh-directory/
HostPathType: Directory
default-token-9shz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9shz5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/api-deployment-f65db9c6c-cwtvt to minikube
Warning FailedMount 11m kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[default-token-9shz5 ssh-directory]: timed out waiting for the condition
Warning FailedMount 2m13s (x4 over 9m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[ssh-directory default-token-9shz5]: timed out waiting for the condition
Warning FailedMount 62s (x14 over 13m) kubelet, minikube MountVolume.SetUp failed for volume "ssh-directory" : hostPath type check failed: /ssh-directory/ is not a directory
When I check the content of /ssh-directory in Docker, it gives an I/O error:
$ docker exec -it 5ad64f642b63 ls /ssh-directory
ls: cannot access '/ssh-directory': Input/output error
I know there are default mount points for Minikube. As mentioned in https://minikube.sigs.k8s.io/docs/handbook/mount/,
+---------------+---------+---------------+------------------+
| Driver        | OS      | HostFolder    | VM               |
+---------------+---------+---------------+------------------+
| VirtualBox    | Linux   | /home         | /hosthome        |
+---------------+---------+---------------+------------------+
| VirtualBox    | macOS   | /Users        | /Users           |
+---------------+---------+---------------+------------------+
| VirtualBox    | Windows | C://Users     | /c/Users         |
+---------------+---------+---------------+------------------+
| VMware Fusion | macOS   | /Users        | /Users           |
+---------------+---------+---------------+------------------+
| KVM           | Linux   | Unsupported.  |                  |
+---------------+---------+---------------+------------------+
| HyperKit      | Linux   | Unsupported   | (see NFS mounts) |
+---------------+---------+---------------+------------------+
But I installed Minikube with brew install minikube and its driver is set to docker.
$ cat ~/.minikube/config/config.json
{
    "driver": "docker"
}
There is no mapping for the docker driver in that mount point table.
Initially, this directory has the files, but somehow, when I try to create the pod, they get deleted or something goes wrong.
While reproducing this on Ubuntu I encountered the exact same issue.
The directory indeed looked mounted, but the files were missing, which led me to think that this is a general issue with mounting directories with the docker driver.
There is an open issue on GitHub about the same problem (mounted directory is empty) and an open feature request to mount host volumes into the docker driver.
Inspecting the minikube container shows no record of that mounted volume and confirms the information mentioned in the GitHub request: the only volume shared with the host as of now is the one that is mounted by default (that is, /var/lib/docker/volumes/minikube/_data mounted into minikube's /var directory).
$ docker inspect minikube
"Mounts": [
{
"Type": "volume",
"Name": "minikube",
"Source": "/var/lib/docker/volumes/minikube/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
As a workaround, you could copy your .ssh directory into the running minikube docker container with the following command:
docker cp $HOME/.ssh minikube:<DESIRED_DIRECTORY>
and then mount this desired directory into the pod.
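For example (a sketch only: the destination path /ssh-copied below is purely hypothetical, standing in for <DESIRED_DIRECTORY>), the copy plus a matching hostPath volume could look like:
docker cp $HOME/.ssh minikube:/ssh-copied

# ...and in the deployment, point the hostPath volume at the copied directory:
volumes:
  - name: ssh-directory
    hostPath:
      path: /ssh-copied   # hypothetical destination used with docker cp above
      type: Directory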

Setting requirepass with args while deploying Redis in k8s works with $() but not with ${}

I am trying to deploy Redis (by creating a Helm chart) as a StatefulSet in a Kubernetes cluster. I am not creating another Redis image on top of the official Redis Docker image; rather, I am just trying to use the defaults available in the official Redis Docker image and provide my redis.conf and requirepass at runtime.
To provide redis.conf, I am using a ConfigMap and mounting it at /config/redis.conf in the container.
Now, I want to pass --requirepass option as args in Kubernetes as below:
...
containers: [
  {
    name: redis,
    image: {
      repository: redis,
      tag: 5.0
    },
    imagePullPolicy: Always,
    workingDir: /data/,
    args: [ "/config/redis.conf", "--requirepass", "<password>" ],  # line of concern
    ports: [
      containerPort: 6379
    ],
    env: [
      {
        name: REDIS_AUTH,
        valueFrom: {
          secretKeyRef: {
            name: redis,
            key: password
          }
        }
      }
    ],
...
The following line fails:
args: [ "/config/redis.conf", "--requirepass", "${REDIS_AUTH}" ]
and on the contrary, this works:
args: [ "/config/redis.conf", "--requirepass", "$(REDIS_AUTH)" ]
Even though $() is normally the syntax for command substitution, and REDIS_AUTH is an environment variable rather than an executable, why does it work while ${REDIS_AUTH} does not?
This is a Kubernetes-specific feature: if you want to expand an environment variable in the command or args field, you have to use the $() syntax instead of the ${} syntax.
Check this link: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments
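As a minimal sketch in plain YAML (container and secret names taken from the question), the working combination is:
containers:
  - name: redis
    image: redis:5.0
    env:
      - name: REDIS_AUTH
        valueFrom:
          secretKeyRef:
            name: redis
            key: password
    # $(REDIS_AUTH) is expanded by Kubernetes before the container starts;
    # ${REDIS_AUTH} would be passed to redis-server literally, unexpanded.
    args: ["/config/redis.conf", "--requirepass", "$(REDIS_AUTH)"]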

How to start the cloudwatch agent in container?

On Docker Hub there is an image which is maintained by Amazon.
Does anyone know how to configure and start the container? I cannot find any documentation.
I got this working! I was having the same issue as you, seeing Reading json config file path: /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json ... Cannot access /etc/cwagentconfig: lstat /etc/cwagentconfig: no such file or directory. Valid Json input schema.
What you need to do is put your config file in /etc/cwagentconfig. A functioning Dockerfile:
FROM amazon/cloudwatch-agent:1.230621.0
COPY config.json /etc/cwagentconfig
Where config.json is some cloudwatch agent configuration, such as given by LinPy's answer.
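For reference, a minimal config.json sketch (modelled on the StatsD example from the AWS Container Insights docs, not necessarily what LinPy's answer contains) could be:
{
  "metrics": {
    "metrics_collected": {
      "statsd": {
        "service_address": ":8125"
      }
    }
  }
}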
You can ignore the warning about /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json, or you can also COPY the config.json file to that location in the dockerfile as well.
I will also share how I found this answer:
I needed this to run in ECS as a sidecar, and I could only find docs on how to run it in Kubernetes. Following this documentation: https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-StatsD.html I decided to download all the example k8s manifests, and that is when I saw this one:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: amazonlinux
spec:
  containers:
    - name: amazonlinux
      image: amazonlinux
      command: ["/bin/sh"]
      args: ["-c", "sleep 300"]
    - name: cloudwatch-agent
      image: amazon/cloudwatch-agent
      imagePullPolicy: Always
      resources:
        limits:
          cpu: 200m
          memory: 100Mi
        requests:
          cpu: 200m
          memory: 100Mi
      volumeMounts:
        - name: cwagentconfig
          mountPath: /etc/cwagentconfig
  volumes:
    - name: cwagentconfig
      configMap:
        name: cwagentstatsdconfig
  terminationGracePeriodSeconds: 60
So I saw that the cwagentconfig volume mounts to /etc/cwagentconfig, that it comes from the cwagentstatsdconfig ConfigMap, and that that ConfigMap is just the JSON config file.
You just have to run the container with log-opt, as the log agent is the main process of the container.
docker run --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=myLogGroup amazon/cloudwatch-agent
You can find more details here and here.
I do not know why you need an agent in a container, but the best practice is to send each container's logs directly to CloudWatch using the awslogs log driver.
By the way, this is the entrypoint of the container:
"Entrypoint": [
"/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"
],
All you need to do is call
/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent
Here is how I got it working in our Docker containers without systemctl or System V init.
This is from the official documentation:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:configuration-file-path -s
Here are the docs:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html#start-CloudWatch-Agent-EC2-commands-fleet
The installation path may be different, but that is how the agent is started as per the docs.

Is it correct to attach code through volume in kubernetes?

In order to ease development in Docker, the code is attached to the containers through volumes. That way, there is no need to rebuild the images each time the code is changed.
So, is it correct to think to use the same idea in Kubernetes?
PS: I know that the concepts PersistentVolume and PersistentVolumeClaim allow to attach volume, but they are intended for data.
Update
To ease development, I need to use the volume for both code and data. This saves me from rebuilding the images at each code change.
Below is what I am trying to do in minikube.
The deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
        - name: php-hostpath
          image: php:7.0-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: vol-php-hostpath
              mountPath: /var/www/html
      volumes:
        - name: vol-php-hostpath
          hostPath:
            path: '/home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/'
The service:
apiVersion: v1
kind: Service
metadata:
  name: php-hostpath
  namespace: default
  labels:
    app: php-hostpath
spec:
  selector:
    app: php-hostpath
  ports:
    - port: 80
      targetPort: 80
  type: "LoadBalancer"
The service and the deployment are created correctly in minikube:
$ kubectl get pods -l app=php-hostpath
NAME READY STATUS RESTARTS AGE
php-hostpath-3796606162-bt94w 1/1 Running 0 19m
$ kubectl get service -l app=php-hostpath
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
php-hostpath 10.0.0.110 <pending> 80:30135/TCP 27m
The folder src and the file src/index.php are also created correctly:
<?php
echo "This is my first docker project";
Now I want to check that everything is running:
$ kubectl exec -ti php-hostpath-3796606162-bt94w bash
root@php-hostpath-3796606162-bt94w:/var/www/html# ls
root@php-hostpath-3796606162-bt94w:/var/www/html# exit
exit
The folder src and the file index.php are not in /var/www/html!
Have I missed something?
PS: if I were in a production environment, I would not put my code in a volume.
Thanks,
Based on this doc, host folder sharing is not implemented in the KVM driver yet. This is the driver I am actually using.
To overcome this, there are 2 solutions:
Use the virtualbox driver so that you can mount your hostPath volume by changing the path on your localhost from /home/THE_USR/... to /hosthome/THE_USR/...
Mount your volume into the minikube VM with the command $ minikube mount /home/THE_USR/.... The command will return the path of your mounted volume on the minikube VM. An example is given below.
Example
(a) mounting a volume on the minikube VM
The minikube mount command returned the path /mount-9p:
$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
(b) Specification of the path on the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
        - name: php-hostpath
          image: php:7.0-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: vol-php-hostpath
              mountPath: /var/www/html
      volumes:
        - name: vol-php-hostpath
          hostPath:
            path: /mount-9p
(c) Checking if mounting the volume worked well
amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
PS: this kind of volume mounting is only for development environments. If I were in a production environment, the code would not be mounted: it would be in the image.
PS: I recommend the virtualbox driver instead of KVM.
Hope it helps others.
There is hostPath, which allows you to bind mount a directory on the node into a container.
In a multi-node cluster you will want to restrict your dev pod to a particular node with nodeSelector (use the built-in label kubernetes.io/hostname: mydevhost).
With minikube look at the Mounted Host Folders section.
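A sketch of that nodeSelector plus hostPath setup (the hostname value mydevhost and the source path are placeholders; the image is the one from the question):
spec:
  # pin the dev pod to the node whose filesystem holds the code
  nodeSelector:
    kubernetes.io/hostname: mydevhost
  containers:
    - name: dev
      image: php:7.0-apache
      volumeMounts:
        - name: code
          mountPath: /var/www/html
  volumes:
    - name: code
      hostPath:
        path: /home/dev/src   # placeholder path on that node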
In my honest opinion, you can do it, but you shouldn't. One of the features of using containers is that you get artifacts (containers) that always behave the same way. A new version of your code should generate a new container. This way you can be sure, when testing, that any new issue detected is directly related to the new code.
A hybrid approach (which I don't like either, but I think is better) is to create a Docker image that downloads your code (selecting the correct release with environment variables) and runs it.
Using hostPaths is not a bad idea, but it can become a mess if you have a not-so-small cluster.
Of course you can use a PV; after all, your code is data. You can use a distributed storage filesystem like NFS to do it.
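A minimal sketch of that idea (the NFS server, export path, and size below are placeholders, not values from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # placeholder NFS server
    path: /exports/code       # placeholder export holding the code
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind statically to the PV above
  resources:
    requests:
      storage: 1Gi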
