I am looking for a Kubernetes equivalent of docker -v for mounting volumes in gcloud.
I am trying to run my container on Google Container Engine, which uses kubectl to manage clusters. In the kubectl run command I could not find any provision for mounting volumes.
kubectl run foo --image=gcr.io/project_id/myimage --port 8080
I checked out their official docs but could not find any clue whatsoever.
At the moment, it's not possible to mount a persistent volume in a container using imperative commands or generators (run, expose). Therefore, you'll need to do it the declarative way.
Kubernetes provides two abstractions for storage in a cluster: the persistent volume claim (PVC) and the persistent volume (PV). You can also use a storage class to provision persistent volumes (PVs) dynamically.
persistent-volumes
storage-classes
When you write a Deployment manifest, you reference the PVC in a volume field, and you write a separate PVC manifest to claim a PV.
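For example, here is a minimal declarative sketch of that pattern (the claim name, mount path, and storage size are placeholders, and it assumes the cluster has a default storage class for dynamic provisioning):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: gcr.io/project_id/myimage   # image from the question
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /data               # placeholder mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: foo-data              # binds the pod to the PVC above

You would apply this with kubectl apply -f manifest.yaml instead of kubectl run.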
Related
I'm going to move what I previously ran on EC2 over to ECS.
On plain EC2, the -v /home/ubuntu:/data option allowed the volume to be set.
First, I added a volume through "Add volume" in the task definition and set up the mount as before.
However, this did not produce the expected result.
So I have some concerns.
On Ubuntu the host path is /home/ubuntu, but I'm not sure how the host path is configured on ECS Fargate.
Second, I am wondering whether adding :/data at the end of the container path is the right way.
(Screenshot: the defined volume)
(Screenshot: the volume configuration for the existing EC2 setup, written in JSON)
(Screenshot: the mount points in ECS)
With Fargate you would need to use an EFS volume for this. You don't have access to host volumes with Fargate.
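For reference, a rough sketch of the volume-related fields of a Fargate task definition wired to EFS; only these fields are shown, and fs-12345678 is a placeholder file system ID:

{
  "containerDefinitions": [
    {
      "name": "app",
      "mountPoints": [
        { "sourceVolume": "data", "containerPath": "/data" }
      ]
    }
  ],
  "volumes": [
    {
      "name": "data",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
    }
  ]
}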
I am doing some proof of concept work using Nomad to orchestrate several different containers running on RHEL 8 hosts using Podman. I am using the Nomad Podman driver to execute my containers using Podman. I have shared state in the form of an Elasticsearch data directory that I mount into /usr/share/elasticsearch/data.
I initially tried to get this working by defining a host volume in the Nomad client configuration, then adding a volume stanza that references my host volume and a volume mount stanza that references the volume in my Nomad job specification. That approach didn't work: no errors, but the mount never happened.
After some digging, I found that the Podman task driver's capabilities documentation says that volume mounts are not supported. Instead, I seem to have to use the more limited driver-specific volumes configuration.
So my question is this: Is the lack of support for volume mounts just a temporary shortcoming that will eventually be supported? It does appear that the Docker task driver supports volume mapping and only Podman does not, so perhaps the Podman driver is just not there yet? Or is there a specific reason why there is a difference between how Docker supports volumes and how Podman does it?
Yes, the Podman driver currently does not support host volumes defined in the Nomad client configuration.
This will work once this PR gets merged:
https://github.com/hashicorp/nomad-driver-podman/pull/152
You can build the binary using Go from this branch:
git clone https://github.com/ttys3/nomad-driver-podman
cd nomad-driver-podman
git checkout append-nomad-task-mounts
./build.sh
Replace the existing plugin binary with the newly built nomad-driver-podman and restart Nomad.
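Until then, the driver-specific volumes configuration mentioned in the question looks roughly like this; a sketch only, where the job name, image tag, and host path are placeholders:

job "elasticsearch" {
  group "es" {
    task "es" {
      driver = "podman"

      config {
        image = "docker.elastic.co/elasticsearch/elasticsearch:7.17.0"

        # driver-specific bind mount, "host-path:container-path"
        volumes = [
          "/srv/es-data:/usr/share/elasticsearch/data"
        ]
      }
    }
  }
}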
Is it possible to use VDO on Kubernetes (with Docker containers)?
As far as I know, block devices are mountable; the problem here (I think) would be loading the VDO modules into the Docker container. I assume it's not possible to do that within the Docker container, so the responsibility lies with the host.
Correct, it's not directly supported by Kubernetes, but you can always manage your VDO modules and volumes at the host level. For example, mount the volumes under /mnt/vdo0 and then use them in a container with the hostPath volume option.
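A minimal pod sketch of that hostPath approach (the pod and volume names are made up):

apiVersion: v1
kind: Pod
metadata:
  name: vdo-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: vdo-data
          mountPath: /data
  volumes:
    - name: vdo-data
      hostPath:
        path: /mnt/vdo0      # the VDO volume mounted on the host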
You can also, for example, use a VDO volume as the main graph directory for your Docker daemon with something like /usr/bin/dockerd -g /mnt/vdo0. That will store your images, as well as the storage of containers without external volumes, in that directory.
I want to know why we have two different options to do the same thing. What are the differences between the two?
We basically have 3 types of volumes or mounts for persistent data:
Bind mounts
Named volumes
Volumes in dockerfiles
Bind mounts basically just bind a certain directory or file from the host into the container (docker run -v /hostdir:/containerdir IMAGE_NAME).
Named volumes are volumes you create manually with docker volume create VOLUME_NAME. They are created in /var/lib/docker/volumes and can be referenced by name alone. Say you create a volume called "mysql_data"; you can then reference it like this: docker run -v mysql_data:/containerdir IMAGE_NAME.
And then there are volumes in Dockerfiles, which are created by the VOLUME instruction. These volumes are also created under /var/lib/docker/volumes but don't have a fixed name; their "name" is just a hash. The volume gets created when the container runs and is handy for saving persistent data whether or not you start the container with -v. The developer gets to say where the important data is and what should be persistent.
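For illustration, a minimal Dockerfile with such an instruction (the path is made up):

FROM ubuntu:22.04
# Docker creates an anonymous, hash-named volume under
# /var/lib/docker/volumes for this path at container start,
# even when the container is run without -v
VOLUME /appdata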
What should I use?
What you use comes down mostly to preference or your management style. If you want to keep everything in the "Docker area" (/var/lib/docker), you can use volumes. If you want to keep your own directory structure, you can use bind mounts.
Docker recommends volumes over bind mounts, as volumes are created and managed by Docker, while bind mounts have much more potential for failure (including layer 8 problems, i.e. user error).
If you use bind mounts and want to transfer your containers/applications to another host, you have to rebuild your directory structure, whereas volumes are more uniform on every host.
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.
Differences between -v and --mount behavior
Because the -v and --volume flags have been a part of Docker for a long time, their behavior cannot be changed. This means that there is one behavior that is different between -v and --mount.
If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you. It is always created as a directory.
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error.
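You can see the difference with a quick experiment (hypothetical paths; the second command fails as long as /tmp/missing2 does not exist):

# -v silently creates the missing host path as a directory
docker run --rm -v /tmp/missing:/data alpine ls /data

# --mount refuses and errors out instead
docker run --rm --mount type=bind,source=/tmp/missing2,target=/data alpine ls /data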
Docker for Windows shared folders limitation
Docker for Windows does make much of the VM transparent to the Windows host, but it is still a virtual machine. For instance, when using -v with a mongo container, MongoDB needs file-system features that the shared folders don't support. There is also a known issue with volume mounts being extremely slow.
Bind mounts are like a superset of Volumes (named or unnamed).
Bind mounts are created by binding an existing folder on the host system (the host is a native Linux machine, or a VM on Windows or macOS) to a path in the container.
The volume option results in a new folder being created on the host under /var/lib/docker.
Volumes are recommended because they are managed by the Docker engine (prune, rm, etc.).
A good use case for bind mounts is linking development folders to a path in the container. Any change in the host folder is reflected in the container.
Another use case for bind mounts is keeping application logs, which are not as crucial as, say, a database.
Command syntax is almost the same for all cases:
bind mount:
Note that the host path should start with '/'. Use $(pwd) for convenience.
docker container run -v /host-path:/container-path image-name
unnamed volume:
Creates a folder on the host with an arbitrary name.
docker container run -v /container-path image-name
named volume:
The name should not start with '/', as that is reserved for bind mounts.
'volume-name' is not a full path here; the command will create a folder at /var/lib/docker/volumes/volume-name on the host.
docker container run -v volume-name:/container-path image-name
A named volume can also be created before a container is run (docker volume create), but this is almost never needed.
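For completeness, the pre-created variant, using the same placeholders as above:

# create the named volume up front...
docker volume create volume-name

# ...then mount it; docker run would also create it implicitly
docker container run -v volume-name:/container-path image-name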
As developers, we always need to compare the options a tool or technology provides. For volumes and bind mounts, I would suggest first listing what kind of application you are trying to containerize.
Here are the factors I would consider before choosing volumes over bind mounts:
Docker provides various CLI commands to manage volumes easily from outside containers.
For backup and restore, volumes are far easier than bind mounts, which depend on the underlying host OS.
Volumes are platform-agnostic, so they work on Linux as well as Windows containers.
With bind mounts, you have two technologies to take care of: your host machine's directory structure as well as Docker.
Migrating volumes is easier, not only between local machines but between cloud machines as well.
Volumes can be easily shared among multiple containers.
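For example, two containers can mount the same named volume (the names here are illustrative):

# one container writes to the shared volume...
docker run -d --name writer -v shared_data:/data alpine sh -c 'echo hello > /data/msg && sleep 3600'

# ...and another reads the same file
docker run --rm -v shared_data:/data alpine cat /data/msg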
Is it possible to pass "--volume-driver" within a Kubernetes YAML file?
For example, using Docker I can do the following:
docker run --volume-driver rbd -v image:/mountpoint ubuntu
Thanks
Kubernetes does support several volume types, including rbd, as you mention in your example. When you create a pod, you can specify what volumes and their types you want in the yaml file. Documentation on volumes is here: http://kubernetes.io/v1.0/docs/user-guide/volumes.html#rbd
Kubernetes uses its own volume system that is different from Docker's: Kubernetes supports some types of volumes that Docker doesn't, and vice versa.
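As a rough sketch, the Docker example from the question might map to a pod spec like this (the monitor address, pool, and user are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: rbd-example
spec:
  containers:
    - name: app
      image: ubuntu
      command: ["sleep", "3600"]
      volumeMounts:
        - name: rbdvol
          mountPath: /mountpoint        # mount point from the docker example
  volumes:
    - name: rbdvol
      rbd:
        monitors:
          - 10.0.0.1:6789               # Ceph monitor address (placeholder)
        pool: rbd
        image: image                    # RBD image name from the docker example
        user: admin
        fsType: ext4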