I have defined a job in my Kubernetes cluster which is supposed to create a folder with some data. Now I would like to share this folder with all the other pods, because they need to use this data.
Currently the other pods do not run until the above-mentioned job has finished.
So I am thinking about volumes. Let's say the result of the job is a mounted folder, which is accessible from the other pods once the job is finished.
The other pods in the cluster only need an environment variable: the path to this mounted folder.
Could you please explain how I could define this?
P.S. I know this is not a very good use case, but I have a legacy monolithic application with lots of dependencies.
I'm assuming that the folder you are referring to is a single folder on some disk that can be mounted by multiple clients.
Check here, or in your volume plugin's documentation, whether the access mode you are requesting is supported.
Create the persistent volume claim that your pods will use; the matching volume does not need to exist yet. You can use label/expression matching to make sure that this PVC will only be satisfied by the persistent volume you will be creating in your job.
In your job, add a final task that creates the persistent volume that satisfies the PVC.
Create your pods, adding the PVC as a volume. I don't think pod presets are needed (plus they are alpha, not enabled by default, and not widely used), but depending on your case you might want to take a look at them.
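The label matching between PVC and PV might look like the sketch below. The names, the NFS backend, and the server address are all assumptions for illustration; any volume plugin that supports the access mode you need would work:

```yaml
# PVC created up front; pods can reference it before the PV exists
# and will stay Pending until the job creates the matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: job-output
spec:
  accessModes: ["ReadOnlyMany"]
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      produced-by: data-job   # only the PV your job creates carries this label
---
# PV created by the job's final task, once the data is ready.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: job-output-pv
  labels:
    produced-by: data-job
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadOnlyMany"]
  nfs:                         # assumed backend; substitute your plugin
    server: nfs.example.com
    path: /exports/job-output
```

The other pods then mount the `job-output` claim and can receive the mount path as a plain environment variable in their container spec.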
I'd like to be able to create a temporary container within an existing pod to handle processing arbitrary code from a user (security concerns); however, this container must also be in the same pod due to data locality / performance concerns.
My question is: what is the proper way to achieve this? Ephemeral containers are described as being "to inspect services rather than to build applications" and "Ephemeral containers may not have ports".
So I feel that this is not the proper way to go about it. My temporary container must be able to share mounted data with the original container in the same pod, and must be able to communicate via a port opened to the original container of the same pod.
You can achieve this either by creating a sidecar which will intercept the traffic to your original container, or by just creating a second container in your pod and some way of automatically triggering the process you want to run. One caveat is that the two containers share the same network namespace, so you cannot expose the same port for both of them.
The downside of both approaches is that you no longer have a temporary container; now both would be up and running.
If what you want to do is a one-time task when your container comes up, I highly recommend exposing an API in your original pod and calling it from a Job.
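A minimal sketch of the two-container approach, sharing a volume and the pod's network namespace (image names and ports are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: main-with-worker
spec:
  volumes:
    - name: shared-data
      emptyDir: {}              # shared scratch space for data locality
  containers:
    - name: main
      image: my-app:latest      # assumed image name
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: worker              # second container, same pod
      image: my-worker:latest   # assumed image name
      ports:
        - containerPort: 9090   # must differ from the main container's port
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers share the network namespace, they can reach each other on `localhost:8080` and `localhost:9090` respectively.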
I have a directory that is configured to be managed by an automounter (as described here). I need to use this directory (and all directories mounted inside it) in multiple pods as a Local Persistent Volume.
I am able to trigger the automounter from within the containers, but there are some use cases where this directory is not empty when the container starts up. This makes the sub-directories appear empty, and the automounter cannot be triggered (from within the container).
I did some investigation and discovered that when using Local PVs, a `mount -o bind` is performed between the source directory and an internal directory managed by the kubelet (this is the line in the source code).
What I actually need is rbind (recursive bind; here is a good explanation).
Using rbind also requires some changes to the part that unmounts the volume (recursive unmounting is needed).
I don't want to patch the kubelet and recompile it... yet.
So my question is: is there an official way to provide Kubernetes with a custom mounter/unmounter?
In the meantime, I did find a solution for this use case.
According to the Kubernetes docs, there is something called out-of-tree volume plugins:
Out-of-tree volume plugins include the Container Storage Interface (CSI) and FlexVolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
Even though CSI is the encouraged option, I chose FlexVolume to implement my custom driver. Here is the detailed documentation.
The driver is actually a Python script that supports three actions: init/mount/unmount (--rbind is used to mount the directory managed by the automounter, and it is unmounted recursively like this). It is deployed using a DaemonSet (docs here).
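A minimal sketch of such a driver, following the FlexVolume init/mount/unmount call-out convention. This is not the author's actual script; the automounter source path is a hypothetical placeholder:

```python
#!/usr/bin/env python3
# Sketch of a FlexVolume driver that bind-mounts an automounter-managed
# directory recursively. FlexVolume invokes the script with an action name
# and prints a JSON status object on stdout.
import json
import subprocess
import sys

AUTOMOUNT_SOURCE = "/autofs/data"  # hypothetical automounter-managed directory


def init():
    # Report capabilities; no separate attach/detach phase is needed here.
    return {"status": "Success", "capabilities": {"attach": False}}


def mount(target):
    # Recursive bind mount so submounts created by the automounter
    # propagate into the kubelet-managed target directory.
    subprocess.check_call(["mount", "--rbind", AUTOMOUNT_SOURCE, target])
    return {"status": "Success"}


def unmount(target):
    # Recursive unmount to undo the rbind, including any submounts.
    subprocess.check_call(["umount", "-R", target])
    return {"status": "Success"}


def main(argv):
    if len(argv) < 2:
        return {"status": "Not supported"}
    action = argv[1]
    if action == "init":
        return init()
    if action == "mount" and len(argv) >= 3:
        return mount(argv[2])
    if action == "unmount" and len(argv) >= 3:
        return unmount(argv[2])
    return {"status": "Not supported"}


if __name__ == "__main__":
    print(json.dumps(main(sys.argv)))
```

The kubelet picks the script up from its FlexVolume plugin directory, which is why a DaemonSet is a convenient way to install it on every node.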
And this is it!
We are running a pod in Kubernetes that needs to load a file during runtime. This file has the following properties:
It is known at build time
It should be mounted read-only by multiple pods (the same kind)
It might change (externally to the cluster) and needs to be updated
For various reasons (security being the main concern) the file cannot be inside the docker image
It is potentially quite large: theoretically up to 100 MB, but in practice between 200 kB and 10 MB.
We have considered various options:
Creating a persistent volume, mounting the volume in a temporary pod to write (update) the file, unmounting the volume, and then mounting it in the service with ROX (ReadOnlyMany) claims. This solution means we need downtime during upgrades, and it is hard to automate (due to timing).
Creating multiple secrets using the secrets management of Kubernetes, and then "assembling" the file before loading it, in an init container or something similar.
Both of these solutions feel a little hacky; is there a better solution out there that we could use to solve this?
You need to use a shared filesystem that can be mounted by multiple pods at once (the ReadWriteMany access mode).
Here is a link to the CSI drivers that can be used with Kubernetes and provide that access mode:
https://kubernetes-csi.github.io/docs/drivers.html
Ideally, you need a solution that is not an appliance and can run anywhere, meaning it can run in the cloud or on-prem.
Platforms that could work for you are Ceph, GlusterFS, and Quobyte (disclaimer: I work for Quobyte).
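With such a driver in place, the claim itself is simple. A sketch, assuming a storage class named `cephfs` backed by one of those platforms (the class name is an assumption, not a given):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config-file
spec:
  accessModes: ["ReadWriteMany"]   # requires a CSI driver that supports RWX
  storageClassName: cephfs         # assumed class backed by e.g. CephFS
  resources:
    requests:
      storage: 1Gi
```

The updater pod mounts this claim read-write while the service pods mount the same claim with `readOnly: true`, so updates need no downtime.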
I have an application WAR which reads an API implementation JAR file to load some data into memory. What I am currently doing is COPYing that JAR into my "application-war/lib" directory in the Dockerfile when building my application image.
The downside of this is that whenever the JAR needs to change, I need to rebuild my application's Docker image.
Is there a way to externalize this JAR file's location, so that I only need to restart my running pod rather than create a new image each time?
I mean, is there some way to provide an additional CLASSPATH that my pod's container can read on startup?
Thanks
You are already doing it the right way.
A Docker image should be created for each build so you have the history as well. It is not good practice to change something inside a running pod and restart it somehow.
If you still want to do it that way, you could mount an external volume into your pod and configure your application server to read the JAR from that location. In this case you will still need access to that volume some other way, one that allows you to place the file there.
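A sketch of that setup, with the claim name, image, and JAR path as assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: ext-libs
      persistentVolumeClaim:
        claimName: ext-libs-pvc     # assumed PVC where the JAR is placed
  containers:
    - name: app
      image: my-app-war:latest      # assumed application image
      env:
        - name: CLASSPATH           # picked up by the JVM at startup
          value: /ext-libs/api-impl.jar
      volumeMounts:
        - name: ext-libs
          mountPath: /ext-libs
          readOnly: true
```

Replacing the JAR on the volume and restarting the pod then picks up the new version without rebuilding the image.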
To provide a bit more context.
Everything said by @Hazim is correct, and I fully agree with him that your current build process is the right way, as it allows you to see the image history and quickly switch back if needed.
As for using external files inside your image:
You need to set up a PV (persistent volume), which will be utilized by a PVC (persistent volume claim).
A really detailed description with examples is available in Configure a Pod to Use a PersistentVolume for Storage.
It shows how to create a folder on your node and place a file in it, which will later be available inside a pod. In your case you won't be loading that file into the pod; instead, you will use that path in your Dockerfile to load the .jar file.
If your .jar file's configuration is composed of key=value entries, you could also use a ConfigMap instead of a PV. This is nicely explained with a Redis application, which you can see here: Docker / Kubernetes - Configure a Pod to Use a ConfigMap.
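For the key=value case, the ConfigMap could be as simple as the sketch below (the name, file name, and entries are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |        # hypothetical key=value content
    db.host=db.example.com
    cache.size=256
```

Mounted as a volume, this appears inside the container as a regular file at `<mountPath>/app.properties`, and editing the ConfigMap avoids rebuilding the image entirely.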
I hope this provides all needed information.
In Kubernetes I create a deployment with 3 replicas and it creates 3 pods.
After pod creation, I create a property file which has all the key/value pairs that my application requires (on all 3 pods).
If I reboot the machine, the property file inside the pods is missing, so I am recreating it manually every time the machine reboots.
Is there any way to persist the property file inside the pod?
What you do depends on what the file is for and where it needs to be located.
If it is a config file, you may want to use a ConfigMap and mount it into the container.
If you need long-lived storage for data, create a persistent volume claim and then mount the volume into the container.
Something like this is necessary because the container filesystem is otherwise ephemeral, and anything written to it will be lost when the container shuts down.
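For the config-file case, a sketch of mounting a ConfigMap into each replica (the ConfigMap name, image, and mount path are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: props
      configMap:
        name: app-properties     # assumed ConfigMap holding the property file
  containers:
    - name: app
      image: my-app:latest       # assumed image
      volumeMounts:
        - name: props
          mountPath: /etc/app    # file reappears here after every restart
```

Because the ConfigMap lives in the cluster's API server rather than in the container filesystem, all 3 replicas see the same file and it survives reboots without any manual step.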