The closest answer I found is this.
What I want to know is: will the Dockerfile VOLUME instruction be totally ignored by Kubernetes, or will data be persisted in two places, one in a Docker volume (on the host where the pod is running) and another in Kubernetes's PV?
The reason for asking is that I deploy some containers from Docker Hub which contain a VOLUME instruction, and meanwhile I also attach a PVC to my pod. I am wondering whether a local volume (a Docker volume, not a K8s PV) will be created on the node, and if my pod is scheduled to another node, whether another new volume will be created there.
On top of this, thanks to @Rico for pointing out that the -v flag and Kubernetes mounts take precedence over the Dockerfile VOLUME instruction, but what about the scenario below:
The Dockerfile declares VOLUME /myvol
Kubernetes mounts a PVC at /anotherMyVol
In this case, will /myvol be mounted on my local node's hard disk and cause data to be persisted locally without my noticing?
It will not be ignored unless you override it on your Kubernetes pod spec. For example, if you follow this example from the Docker documentation:
$ docker run -it container bash
root@7efcf5ef12a2:/# mount | grep myvol
/dev/nvmeXnXpX on /myvol type ext4 (rw,relatime,discard,data=ordered)
root@7efcf5ef12a2:/#
You'll see that it's mounted on the root drive of the host where the container is running. Docker actually creates a volume on the host filesystem under /var/lib/docker/volumes (/var/lib/docker is your Docker graph directory):
$ pwd
/var/lib/docker/volumes
$ find . | grep greeting
./d0bc20d085243c39c4f386dce2f6cafcd8146128d6b0c8f9dcb27cfb61a7ecab/_data/greeting
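If you'd rather not dig through /var/lib/docker/volumes by hand, the standard docker CLI can also show you which anonymous volume a container got; a quick sketch (the container name is just a placeholder for whatever you ran):
$ docker inspect -f '{{ json .Mounts }}' <container-name>
$ docker volume ls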
You can override this with the -v option in Docker:
$ docker run -it -v /mnt:/myvol container bash
root@1c7211cf43d0:/# cd /myvol/
root@1c7211cf43d0:/myvol# touch hello
root@1c7211cf43d0:/myvol# exit
exit
$ pwd # <= on the host
/mnt
$ ls
hello
So on Kubernetes you can override it in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: container
    volumeMounts:
    - name: storage
      mountPath: /myvol
  volumes:
  - name: storage
    hostPath:
      path: /mnt
      type: Directory
You need to explicitly define a PersistentVolumeClaim and/or PersistentVolume. This is not done for you.
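If it helps, here is a minimal sketch of what that could look like; the claim name, size, and the absence of a storageClassName are placeholder choices of mine, not something from the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: container
    volumeMounts:
    - name: storage
      mountPath: /myvol
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: mypvc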
Related
I'm trying to deploy the Prometheus docker container with persistent data via an NFS volume using a Docker named volume. I'm deploying with Ansible, so I'll post Ansible config, but I've executed the same using Docker's CLI commands and the issue presents in that case as well.
When I deploy the container and review the container's Docker logs, I see that /etc/prometheus is shared appropriately and attached to the container. However, /prometheus, which is where the container stores the relevant DB and metrics, gives permission denied.
According to this answer, /prometheus is required to be chowned to nobody. This doesn't seem to happen within the container upon startup.
Here's the volume creation from my Ansible role:
- name: "Creates named docker volume"
docker_volume:
volume_name: prometheus_persist
state: present
driver_options:
type: nfs
o: "addr={{ nfs_server }},rw,nolock"
device: ":{{ prometheus_nfs_path }}"
Which is equivalent to this Docker CLI command:
docker volume create -d local -o type=nfs -o o=addr={{ nfs_server }},rw -o device=:{{ prometheus_nfs_path }} prometheus_persist
Here's my container deployment stanza:
- name: "Deploy prometheus container"
docker_container:
name: prometheus
# hostname: prometheus
image: prom/prometheus
restart_policy: always
state: started
ports: 9090:9090
user: ansible:docker
volumes:
- "{{ prometheus_config_path }}:/etc/prometheus"
mounts:
- source: prometheus_persist
target: /prometheus
read_only: no
type: volume
comparisons:
env: strict
Which is equivalent to this Docker CLI command:
docker run -v prometheus_persist:/prometheus -v "{{ prometheus_config_path }}:/etc/prometheus" -p 9090:9090 --name prometheus prom/prometheus
Again, the container logs upon deployment indicate permission denied on /prometheus. I've tested by mounting the prometheus_persist named volume on a generic Ubuntu container; it mounts fine, and I can touch files within it. Any advice on how to resolve this?
This turned out to be an issue with NFS squashing. The NFS exporter in my case is an old Synology, which doesn't allow no_root_squash to be set. However, mapping all users to admin on the NFS share resolved the issue.
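For what it's worth, on NFS servers that do expose the standard export options, disabling root squashing is normally done in /etc/exports; a rough sketch, where the export path and client network are placeholders of mine:
/volume1/prometheus 192.168.1.0/24(rw,sync,no_root_squash)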
I have a docker image A that contains a folder I need to share with another container B in the same K8s pod.
At first I decided to use a shared volume (emptyDir) and launched A as an init container to copy all the content of the folder into the shared volume. This works fine.
Then, looking at the k8s docs, I realised I could use mountPropagation between the containers.
So I changed the initContainer to a plain container (a sidecar) in the same pod and performed a mount of the container A folder I want to share with container B. This works fine, but I need to keep container A running with a wait loop. Or not...
Then I decided to come back to the initContainer pattern and do the same thing, i.e. mount the folder from A inside the shared volume, let the container finish (since it is an initContainer), and then use the newly mounted folder in container B. And it works!
So my question is: can someone explain to me whether this is expected on all Kubernetes clusters, and why the folder mounted from A, which is no longer running as a container, can still be seen by my other container?
Here is a simple manifest to demonstrate it.
apiVersion: v1
kind: Pod
metadata:
  name: testvol
spec:
  initContainers:
  - name: busybox-init
    image: busybox
    securityContext:
      privileged: true
    command: ["/bin/sh"]
    args: ["-c", "mkdir -p /opt/connectors; echo \"bar\" > /opt/connectors/foo.txt; mkdir -p /opt/connectors_new; mount --bind /opt/connectors /opt/connectors_new; echo connectors mount is ok"]
    volumeMounts:
    - name: connectors
      mountPath: /opt/connectors_new
      mountPropagation: Bidirectional
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "cat /opt/connectors/foo.txt; trap : TERM INT; (while true; do sleep 1000; done) & wait"]
    volumeMounts:
    - name: connectors
      mountPath: /opt/connectors
      mountPropagation: HostToContainer
  volumes:
  - name: connectors
    emptyDir: {}
This works because your containers run in a pod. The pod is where your volume is defined, not the container. So you are creating a volume in your pod that is an empty directory; then you are mounting it in your init container and making changes, which are changes to the volume at the pod level.
Then when your init container finishes, the files at the pod level don't go away; they are still there, so your second container picks up the files when it mounts the same volume from the pod.
This is expected behavior and doesn't need mountPropagation fields at all. The mountPropagation fields may have some effect on emptyDir volumes, but that is not related to preserving the files:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.
Note: A container crashing does not remove a Pod from a node. The data in an emptyDir volume is safe across container crashes.
The note doesn't explicitly state it, but this implies it is also safe across the initContainer-to-container transition. As long as your pod exists on the node, your data will be there in the volume.
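To illustrate that, here is a stripped-down sketch of the same idea with no mountPropagation and no bind mount at all (my own minimal example, not the original manifest): the init container writes a file into the emptyDir, exits, and the main container still sees it.
apiVersion: v1
kind: Pod
metadata:
  name: testvol-simple
spec:
  initContainers:
  - name: busybox-init
    image: busybox
    command: ["/bin/sh", "-c", "echo bar > /opt/connectors/foo.txt"]
    volumeMounts:
    - name: connectors
      mountPath: /opt/connectors
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "cat /opt/connectors/foo.txt; sleep 3600"]
    volumeMounts:
    - name: connectors
      mountPath: /opt/connectors
  volumes:
  - name: connectors
    emptyDir: {}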
I have a docker image that uses a volume to write files:
docker run --rm -v /home/dir:/out/ image:cli args
When I try to run this inside a pod, the container exits normally but no file is written.
I don't get it.
The container throws errors if it does not find the volume; for example, if I run it without the -v option it throws:
Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'.
But I don't get any error from the container.
It finishes as if it wrote the files, but the files do not exist.
I'm quite new to Kubernetes, but this is driving me crazy.
Does Kubernetes prevent files from being written, or am I missing something obvious?
The whole Kubernetes context is managed by GCP composer-airflow, if it helps...
docker -v: Docker version 17.03.2-ce, build f5ec1e2
If you want to have that behavior in Kubernetes you can use a hostPath volume.
Essentially, you specify it in your pod spec; the volume is mounted on the node where your pod runs (here at /home/dir, matching your docker run command), and the file should still be there on the node after the pod exits.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: image:cli
    name: test-container
    volumeMounts:
    - mountPath: /out
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /home/dir
      type: Directory
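Note that with type: Directory the directory must already exist on the node. If you want Kubernetes to create it for you, hostPath also supports type: DirectoryOrCreate (standard hostPath behavior, just a variation on the spec above):
  volumes:
  - name: test-volume
    hostPath:
      path: /home/dir
      type: DirectoryOrCreate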
When I try to run this inside a pod, the container exits normally but no file is written
First of all, there is no need to run the docker run command inside the pod :). A spec file (YAML) should be written for the pod, and Kubernetes will run the container in the pod using Docker for you. Ideally, you don't need to run docker commands when using Kubernetes (unless you are debugging Docker-related issues).
This link has useful kubectl commands for docker users.
If you are used to docker-compose, refer to Kompose to go from docker-compose to Kubernetes:
https://github.com/kubernetes/kompose
http://kompose.io
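For the most common cases, the mapping is roughly this (these are all standard kubectl commands; the pod and image names are placeholders):
docker ps                        ->  kubectl get pods
docker logs <container>          ->  kubectl logs <pod>
docker exec -it <container> sh   ->  kubectl exec -it <pod> -- sh
docker run --name foo image:cli  ->  kubectl run foo --image=image:cli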
Some options to mount a directory as a volume inside a container in Kubernetes (a configMap sketch follows this list):
hostPath
emptyDir
configMap
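For example, a configMap volume is handy when the directory content is configuration you keep in Kubernetes itself; a minimal sketch, where the ConfigMap name app-config and the mount path are placeholders of mine:
apiVersion: v1
kind: Pod
metadata:
  name: config-example
spec:
  containers:
  - name: app
    image: image:cli
    volumeMounts:
    - name: config
      mountPath: /etc/app
  volumes:
  - name: config
    configMap:
      name: app-config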
I run an Ansible playbook that gathers machine-specific information and stores the data in a file, one per host. So I end up with a bunch of files which should now be sent to my Docker-based application for further processing.
Actually, I need to store them in a specific folder and create a volume so the container is able to read the files.
This requires the existence/creation of /tmp/incoming ...
Now if the monitor app gets moved or a second instance is needed, you'll have to access the filesystem and create the directory.
So I'd like to create a more dynamic volume:
docker volume create --name monitor-incoming:/var/www/monitor/incoming
Now Docker containers will be able to access the volume. But can I use Ansible to "copy" the files to this remote volume? Sending them to monitor-incoming instead of /tmp/incoming?
You could use any of the following methods:
Ansible non-SSH Docker connection with standard Ansible file/copy modules. Example adapted from the Ansible documentation:
- name: create jenkins container
  docker_container:
    docker_host: myserver.net:4243
    name: my_jenkins
    image: jenkins

- name: add container to inventory
  add_host:
    name: my_jenkins
    ansible_connection: docker
    ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
    ansible_user: jenkins
  changed_when: false

- name: create directory for ssh keys
  delegate_to: my_jenkins
  file:
    path: "/var/jenkins_home/.ssh/jupiter"
    state: directory
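Once the container is in the inventory with the Docker connection, the standard copy module can push files into it; a sketch along the lines of the question (the source path is illustrative, not from the question):
- name: copy gathered files into the container
  delegate_to: my_jenkins
  copy:
    src: /tmp/incoming/
    dest: /var/www/monitor/incoming/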
Ansible command module using the Docker CLI. I have had more success with this method.
- name: remove temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    state: absent

- name: recreate data volume
  become: yes
  docker_volume:
    name: your_data_volume
    state: present

- name: create temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    image: tianon/true
    state: present
    volumes:
      - your_data_volume:/data

- name: copy folder contents to data volume via temporary data-only container
  become: yes
  command: docker cp /some_data_path_on_this_ansible_host/. your_temp_data_container:/data

- name: remove temporary data-only container
  become: yes
  docker_container:
    name: your_temp_data_container
    state: absent
I don't have a docker environment to test this out, but I think you could do it from the host with a docker run on a bash image that:
Binds the host path where the file you want to copy lives (I assumed we are copying /path/to/filename/on/host). I bound that path to /tmp/source inside the container.
Binds the monitor-incoming volume somewhere in the container. I chose to bind it to /tmp/destination/.
Runs a simple cp command (since the entrypoint of the bash image is bash itself, we just have to add the command to run).
Here is the command:
docker run \
--mount source=monitor-incoming,target=/tmp/destination \
--mount type=bind,source=/path/to/filename/on/host,target=/tmp/source \
bash:4.4 \
cp "/tmp/source" "/tmp/destination/path/to/destination/inside/volume/"
This is not tested, but I think something along those lines should work. Notice that if that script is used fairly frequently, you should probably have a container dedicated to that task rather than call docker run many times.
I'm not sure if there's a more direct way that would not involve running cp inside a container...
I wish we could do it with a built-in module, but unless we have one, you can add a file into a named volume in a single task:
- name: actual config file is in the volume
  command:
    cmd: docker run --rm -iv your-config-volume:/v busybox sh -c 'cat > /v/config.yml'
    stdin: "{{ lookup('file', 'config.yml') | regex_replace('\\r\\n', '\\n') }}"
Here we use command module to create a temporary container with the your-config-volume volume attached.
The sh -c 'cat > /v/config.yml' command saves all stdin into a file.
Finally, we set stdin to be the config.yml file from your role.
The regex_replace('\\r\\n', '\\n') part is required only if your config.yml has Windows (CRLF) line endings.
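To check that the file actually landed in the volume, the same trick works in reverse, just reading with cat instead of redirecting stdin:
docker run --rm -v your-config-volume:/v busybox cat /v/config.yml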
I have packed the software into a container. I need to deploy the container to a cluster with Azure Container Service. The software writes its output to a directory, /src/data/, and I want to access the content of the whole directory.
After searching, I have two solutions:
Use Blob Storage on Azure, but after searching I can't find a workable method.
Use a Persistent Volume, but all the official Azure documentation and pages I found are about the Persistent Volume itself, not about how to inspect its contents.
I need to access and manage my output directory on the Azure cluster. In other words, I need a savior.
As I've explained here and here, in general, if you can interact with the cluster using kubectl, you can create a pod/container, mount the PVC inside, and use the container's tools to, e.g., ls the contents. If you need more advanced editing tools, replace the container image busybox with a custom one.
Create the inspector pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - image: busybox
    name: pvc-inspector
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /pvc
      name: pvc-mount
  volumes:
  - name: pvc-mount
    persistentVolumeClaim:
      claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
Clean Up
kubectl delete pod pvc-inspector
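If you also want to pull the contents down to your machine while the inspector pod is still running, kubectl cp should do it (the ./pvc-contents destination is my own choice):
kubectl cp pvc-inspector:/pvc ./pvc-contents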