Pass "volume-driver" option to kubernetes pod - docker

Is it possible to pass the "--volume-driver" option in a Kubernetes YAML file?
For example, using Docker I can do the following:
docker run --volume-driver rbd -v image:/mountpoint ubuntu
Thanks

Kubernetes does support several volume types, including rbd, as you mention in your example. When you create a pod, you specify which volumes you want, and their types, in the YAML file. Documentation on volumes is here: http://kubernetes.io/v1.0/docs/user-guide/volumes.html#rbd
Kubernetes uses its own volume system that is different from Docker's: Kubernetes supports some types of volumes that Docker doesn't, and vice versa.
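For example, a minimal pod manifest using an rbd volume might look roughly like this; the monitor address, pool, image, user, and secret name below are placeholders you would replace with your own Ceph details:

apiVersion: v1
kind: Pod
metadata:
  name: rbd-example
spec:
  containers:
    - name: app
      image: ubuntu
      volumeMounts:
        - name: rbd-image          # must match the volume name below
          mountPath: /mountpoint   # same mount point as in the docker example
  volumes:
    - name: rbd-image
      rbd:
        monitors:
          - 10.16.154.78:6789      # placeholder Ceph monitor address
        pool: rbd
        image: image               # the RBD image from the docker example
        user: admin
        secretRef:
          name: ceph-secret        # placeholder Secret holding the Ceph key
        fsType: ext4
        readOnly: false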

Related

Support for `volume_mount` in Nomad Podman task driver?

I am doing some proof of concept work using Nomad to orchestrate several different containers running on RHEL 8 hosts using Podman. I am using the Nomad Podman driver to execute my containers using Podman. I have shared state in the form of an Elasticsearch data directory that I mount into /usr/share/elasticsearch/data.
I initially tried to get this working by defining a host volume in the Nomad client configuration, then adding a volume stanza that references my host volume and a volume mount stanza that references the volume in my Nomad job specification. That approach didn't work - no errors, but the mount never happened.
After some digging, I found that the Podman task driver's capabilities documentation says that volume mounts are not supported. Instead, I seem to have to use the more limited driver-specific volumes configuration.
So my question is this: Is the lack of support for volume mounts just a temporary shortcoming that will eventually be supported? It does appear that the Docker task driver supports volume mapping and only Podman does not, so perhaps the Podman driver is just not there yet? Or is there a specific reason why there is a difference between how Docker supports volumes and how Podman does it?
Yes, currently the Podman driver does not support host volumes defined in the Nomad client configuration.
This will work once this PR gets merged:
https://github.com/hashicorp/nomad-driver-podman/pull/152
You can build the binary using Go from this branch:
git clone https://github.com/ttys3/nomad-driver-podman
cd nomad-driver-podman
git checkout append-nomad-task-mounts
./build.sh
Then replace your existing nomad-driver-podman binary with the newly built one and restart Nomad.

Where do Docker images' new files get saved to in GCP?

I want to create some Docker images that generate text files. However, since images are pushed to Container Registry in GCP, I am not sure where the files will end up when I use kubectl run myImage. If I specify a path in the program, like '/usr/bin/myfiles', will they be written to the VM instance where I am typing "kubectl run myImage"? I think this is probably not the case. What is the solution?
Ideally, I would like all the files to be in one place.
Thank you
Container Registry and Kubernetes are mostly irrelevant to the issue of where a container will persist files it creates.
A process running within a container that generates files will persist them to the container instance's file system. Exceptions to this are stdout and stderr, which are both available without further ado.
When you run container images, you can mount volumes into the container instance, and this provides possible solutions to your needs. When running Docker Engine, it's common to mount part of the host's file system into the container to share files between the container and the host: docker run ... --volume=[host]:[container] yourimage ....
On Kubernetes, there are many types of volumes. A seemingly obvious solution is to use gcePersistentDisk, but this has the limitation that these disks may only be mounted for write by one pod at a time. A more powerful approach may be to use an NFS-based solution such as nfs or gluster. These should provide a means for you to consolidate files outside of the container instances.
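As a rough sketch (the NFS server address and export path are placeholders), an nfs volume in a pod would look something like this:

apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
    - name: app
      image: gcr.io/project_id/myimage   # the image from the question
      volumeMounts:
        - name: shared-files
          mountPath: /usr/bin/myfiles    # the path the program writes to
  volumes:
    - name: shared-files
      nfs:
        server: nfs-server.example.com   # placeholder NFS server
        path: /exports/myfiles           # placeholder export path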
A good solution, though I'm unsure whether it is available to you, would be to write your files as Google Cloud Storage objects.
A tenet of containers is that they should operate without making assumptions about their environment. Your containers should not make assumptions about running on Kubernetes and should not make assumptions about non-default volumes. By this I mean that your containers should simply write files to the container's file system. When you run the container, you apply the configuration that, for example, provides an NFS volume mount or a GCS bucket mount that actually persists the files beyond the container.
HTH!

What Is The Difference Between Bind Mounts And Volumes While Handling Persistent Data In Docker Containers?

I want to know why we have two different options to do the same thing. What are the differences between the two?
We basically have 3 types of volumes or mounts for persistent data:
Bind mounts
Named volumes
Volumes in dockerfiles
Bind mounts are basically just binding a certain directory or file from the host inside the container (docker run -v /hostdir:/containerdir IMAGE_NAME)
Named volumes are volumes which you create manually with docker volume create VOLUME_NAME. They are created in /var/lib/docker/volumes and can be referenced by name alone. Let's say you create a volume called "mysql_data"; you can then reference it like this: docker run -v mysql_data:/containerdir IMAGE_NAME.
And then there are volumes in Dockerfiles, which are created by the VOLUME instruction. These volumes are also created under /var/lib/docker/volumes but don't have a fixed name; their "name" is just a hash. The volume gets created when you run the container, and such volumes are handy for saving persistent data whether you start the container with -v or not. The developer gets to say where the important data is and what should be persistent.
What should I use?
What you want to use comes down mostly to preference or your management requirements. If you want to keep everything in the "docker area" (/var/lib/docker), you can use volumes. If you want to keep your own directory structure, you can use bind mounts.
Docker recommends the use of volumes over bind mounts, as volumes are created and managed by Docker, and bind mounts have a lot more potential for failure (also due to layer-8 problems).
If you use bind mounts and want to transfer your containers/applications to another host, you have to rebuild your directory structure, whereas volumes are more uniform on every host.
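To make the distinction concrete, here is a minimal Compose-file sketch (the service names and paths are made up) that uses a bind mount for static files and a named volume for database data:

version: "3.8"
services:
  db:
    image: mysql:8
    volumes:
      - mysql_data:/var/lib/mysql      # named volume, managed by Docker
  web:
    image: nginx
    volumes:
      - ./site:/usr/share/nginx/html   # bind mount: host directory into the container
volumes:
  mysql_data:                          # declares the named volume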
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes are often a better choice than persisting data in a container's writable layer, because a volume does not increase the size of the containers using it, and the volume's contents exist outside the lifecycle of a given container.
Differences between -v and --mount behavior
Because the -v and --volume flags have been a part of Docker for a long time, their behavior cannot be changed. This means that there is one behavior that is different between -v and --mount.
If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you. It is always created as a directory.
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error.
Docker for Windows shared folders limitation
Docker for Windows does make much of the VM transparent to the Windows host, but it is still a virtual machine. For instance, when using -v with a MongoDB container, MongoDB needs file-system behavior that the shared Windows folders do not provide. There is also a known issue about volume mounts being extremely slow.
Bind mounts are like a superset of volumes (named or unnamed).
Bind mounts are created by binding an existing folder on the host system (the host being a native Linux machine, or a VM on Windows or macOS) to a path in the container.
The volume option results in a new folder being created on the host under /var/lib/docker.
Volumes are recommended because they are managed by the Docker engine (prune, rm, etc.).
A good use case for a bind mount is linking development folders to a path in the container. Any change in the host folder will be reflected in the container.
Another use case for a bind mount is keeping application logs, which are not as crucial as, say, a database.
Command syntax is almost the same in all three cases:
bind mount:
note that the host path should start with '/'. Use $(pwd) for convenience.
docker container run -v /host-path:/container-path image-name
unnamed volume:
creates a folder in the host with an arbitrary name
docker container run -v /container-path image-name
named volume:
should not start with '/' as this is reserved for bind mount.
'volume-name' is not a full path here. The command will cause a folder to be created at "/var/lib/docker/volumes/volume-name" on the host.
docker container run -v volume-name:/container-path image-name
A named volume can also be created before a container is run (docker volume create), but this is rarely needed.
As developers, we always need to compare the options a tool or technology provides. For volumes and bind mounts, I would suggest first listing what kind of application you are trying to containerize.
The following are the factors I would consider before choosing volumes over bind mounts:
Docker provides various CLI commands to manage volumes easily from outside containers.
For backup and restore, volumes are far easier than bind mounts, since bind mounts depend upon the underlying host OS.
Volumes are platform-agnostic, so they work on Linux as well as on Windows containers.
With bind mounts, you have two things to take care of: your host machine's directory structure as well as Docker.
Migration of volumes is easier, not only on local machines but on cloud machines as well.
Volumes can be easily shared among multiple containers, as in the sketch below.
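For instance, a minimal (hypothetical) Compose sketch where two containers share one named volume:

version: "3.8"
services:
  producer:
    image: busybox
    command: sh -c "echo hello > /data/msg && sleep 3600"
    volumes:
      - shared_data:/data              # writes into the named volume
  consumer:
    image: busybox
    command: sh -c "sleep 5 && cat /data/msg && sleep 3600"
    volumes:
      - shared_data:/data              # reads from the same named volume
volumes:
  shared_data: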

Docker -v (volume mount) equivalent in kubernetes

I am looking for a Kubernetes equivalent of docker -v for mounting volumes when running on Google Cloud.
I am trying to run my container using Google Container Engine, which uses kubectl to manage clusters. In the kubectl run command I could not find any provision for mounting volumes.
kubectl run foo --image=gcr.io/project_id/myimage --port 8080
I checked their official docs but could not find any clue whatsoever.
At the moment, it's not possible to mount a persistent volume into a container using imperative commands or generator commands (run, expose). Therefore, you need to use the declarative way to get it done.
Kubernetes provides two abstractions for storage in a cluster: the PersistentVolumeClaim (PVC) and the PersistentVolume (PV). Moreover, you can use a StorageClass to provision PersistentVolumes dynamically.
persistent-volumes.
storage-classes
When you write a manifest file for a Deployment, you add a volume that references the PVC, and you write the PVC itself to claim a PV.
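A hedged sketch of what that could look like for the command in the question (the PVC name, requested size, and mount path are placeholders); you would apply it with kubectl apply -f instead of kubectl run:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                        # placeholder size
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: gcr.io/project_id/myimage   # the image from the question
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /data               # placeholder mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: foo-data              # references the PVC above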

How to mimic --device option in docker run in kubernetes

I am very new to Kubernetes and Docker. I am trying to find the config equivalent of the --device option in docker run. This option in Docker is used to add a device on the host to the container.
Is there an equivalent in Kubernetes which can be added to the YAML file?
Thanks
Currently we do not have a passthrough to this option in the API, though you may have some success using a hostPath volume to mount a device file in.
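A rough sketch of that workaround, assuming a hypothetical device path /dev/ttyUSB0 (many devices also require the container to run privileged, which is a much blunter instrument than --device):

apiVersion: v1
kind: Pod
metadata:
  name: device-example
spec:
  containers:
    - name: app
      image: ubuntu
      securityContext:
        privileged: true            # often required to actually access the device
      volumeMounts:
        - name: dev-ttyusb0
          mountPath: /dev/ttyUSB0   # where the device appears in the container
  volumes:
    - name: dev-ttyusb0
      hostPath:
        path: /dev/ttyUSB0          # the device file on the host (placeholder)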
