Is there a way to give a classpath in a Kubernetes deployment/pod definition?

I have an application war which reads an API implementation jar file to load some data into memory. What I currently do is COPY that jar into my "application-war/lib" directory in the Dockerfile while building my application's image.
The downside of this is that whenever the jar needs to change, I need to rebuild my application's Docker image.
Is there a way that I can externalize this jar file's location, so that I only need to restart my running pod rather than create a new image each time?
I mean, some way to provide an additional CLASSPATH that my pod's container can read while starting up.
Thanks

You are already doing it the right way.
The Docker image should be created for each build so you have the history as well. It isn't good practice to change something inside a running pod and restart it somehow.
If you still want to do it that way, you could mount an external volume into your pod and configure your application server to read the war file from that location. In this case you will still need some other way of accessing that volume which allows you to place the file there.

To provide a bit more context.
Everything said by @Hazim is correct, and I fully agree with him that doing the build the way you currently do is correct, as it allows you to see the image history and quickly switch back if needed.
As for using external files inside your image:
You need to set up a PV (persistent volume), which will be utilized by a PVC (persistent volume claim).
A really detailed description with examples is available in Configure a Pod to Use a PersistentVolume for Storage.
It shows how to create a folder on your node and place a file in it that is later mounted into a pod. You won't be copying that file into the image; instead, your Dockerfile references the mounted path to load the .jar file.
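As a rough sketch, the pod definition could then mount the claim and pass the mount path to the JVM through an extra CLASSPATH environment variable (the image name, claim name, and paths below are placeholders, and this assumes your startup script honors CLASSPATH):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: my-app-war:latest           # placeholder for your application image
    env:
    - name: CLASSPATH                  # extra classpath entry read at startup
      value: /ext-lib/*
    volumeMounts:
    - name: ext-lib
      mountPath: /ext-lib              # the externalized jar lives here
  volumes:
  - name: ext-lib
    persistentVolumeClaim:
      claimName: task-pv-claim         # PVC bound to the PV holding the jar

Swapping the jar then only requires replacing the file on the volume and restarting the pod, not rebuilding the image.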
If your .jar file is composed of key=value entries, you could also use a ConfigMap instead of a PV. This is nicely demonstrated with a Redis application, which you can see here: DOCKER / KUBERNETES - CONFIGURE A POD TO USE A CONFIGMAP.
I hope this provides all needed information.

Related

How to make the Kubernetes pod aware of new file changes?

Is there a way to make Kubernetes pods aware of new file changes?
Let's say I have a Kubernetes (K8s) pod running with 4 replicas, and I also have a K8s PV created and attached to an external file system where we can modify the files. Let's say the K8s pod is running
a Tomcat server with an application named test_app, which is located in the following directory inside the container:
tomcat/webapps/test_app/
Inside the test_app directory, I have a few sub-directories, like below:
test_app/xml
test_app/properties
test_app/jsp
All these sub-directories are attached to a volume, and it is mounted to an external file system. Anyone who has access to the external file system can update the xml / properties / jsp files.
When these files are changed in the external file system, the changes are reflected inside the sub-directories test_app/xml, test_app/properties, and test_app/jsp, since we have a PV attached. But these changes will not be reflected in the web application unless we restart the Tomcat server, and to restart the Tomcat server, we need to restart the pod.
So whenever someone makes changes to the files in the external file system, how do I make K8s aware that there are new changes that require the pods to be restarted?
Is this even possible in Kubernetes right now?
If by file changes you mean changes to your application, the best practice is to bake a container image with your application code and push a new container image when you need to deploy new code. You can do this by modifying your Kubernetes deployment to point to the latest digest hash.
For instance, in a deployment YAML file:
image: myimage@sha256:digest0
becomes
image: myimage@sha256:digest1
and then running kubectl apply would be one way to roll out the change.
You can read more about using container images with Kubernetes here.
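For instance, a minimal deployment manifest pinned by digest might look like this (all names below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: myimage@sha256:digest1   # bump this digest on each release

After editing the digest, kubectl apply -f deployment.yaml triggers a rolling update; kubectl set image deployment/my-app app=myimage@sha256:digest1 is an equivalent one-liner.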

How to create and mount common data for all pods

I have defined a job in my Kubernetes cluster which is supposed to create a folder with some data. Now I would like to share this folder with all the other pods, because they need to use this data.
Currently, the other pods do not run until the above-mentioned job has finished.
So I am thinking about volumes. Let's say the result of the job is a mounted folder which is accessible from the other pods once the job is finished.
The other pods in the cluster need only an environment variable: the path to this mounted folder.
Could you please explain how I could define this?
PS. I know this is not a very good use case; however, I have a legacy monolith application with lots of dependencies.
I'm assuming that the folder you are referring to is a single folder on some disk that can be mounted by multiple clients.
Check here, or in your volume plugin's documentation, whether the access mode you are requesting is supported.
Create the persistent volume claim that your pods will use; there is no need for the matching volume to exist yet. You can use label/expression matching to make sure that this PVC will only be satisfied by the persistent volume you will be creating in your job.
In your job, add a final task that creates the persistent volume that satisfies the PVC.
Create your pods, adding the PVC as a volume, as sketched below. I don't think pod presets are needed; plus they are alpha, not enabled by default, and not widely used. But depending on your case you might want to take a look at them.
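A minimal sketch of the claim and a consuming pod (names, the label, and paths are placeholders; the job would create a PV labeled content: job-output that satisfies this claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadOnlyMany                 # must be supported by your volume plugin
  storageClassName: ""           # skip dynamic provisioning; bind only to a pre-created PV
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      content: job-output        # only a PV carrying this label can satisfy the claim
---
apiVersion: v1
kind: Pod
metadata:
  name: consumer
spec:
  containers:
  - name: app
    image: legacy-app:latest     # placeholder for your application image
    env:
    - name: DATA_DIR             # the path the application reads the shared data from
      value: /shared
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-data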

Where are you supposed to store your docker config files?

I'm new to Docker, so I have a very simple question: where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think these files fit on GitHub, since they're used for deployment, though it's not a bad place to store them.
I was just wondering if Docker has any support for storing such config files so you can add them as part of running an image.
Do you have to use Swarm?
Typically you'll store the configuration files on the Docker host and then use volumes to bind-mount your configuration files into the container. This allows you to manage the configuration file separately from the running containers. When you make a change to the configuration, you can just restart the container.
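For example, with Docker Compose (a sketch; the image tag and file paths are placeholders), a MongoDB config file kept on the host can be bind-mounted read-only into the container:

services:
  mongo:
    image: mongo:6
    volumes:
      - ./config/mongod.conf:/etc/mongod.conf:ro   # host file mounted into the container
    command: ["mongod", "--config", "/etc/mongod.conf"]

Editing ./config/mongod.conf on the host and restarting the container picks up the change without rebuilding any image.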
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) uses GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a Chef cookbook, or a Puppet manifest, etc.) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, and so on.

Kubernetes - File missing inside pods after reboot

In Kubernetes I create a deployment with 3 replicas, and it creates 3 pods.
After pod creation, I create a property file which has all the key/value pairs that are required for my application (on all 3 pods).
If I reboot the machine, the property file inside the pods is missing, so I am recreating it manually every time the machine reboots.
Is there any way to save the property file inside the pod?
What you do depends on what the file is for and where it needs to be located.
If it is a config file, you may want to use a ConfigMap and mount the ConfigMap into the container.
If it needs to be long-lived storage for data, create a persistent volume claim and then mount the volume into the container.
Something like this is necessary because the container file system is otherwise ephemeral, and anything written to it will be lost when the container is shut down.
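A minimal sketch of the ConfigMap approach (all names and keys below are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  app.properties: |              # the key becomes the file name inside the mount
    some.key=some-value
    another.key=another-value
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: my-app:latest         # placeholder for your application image
    volumeMounts:
    - name: props
      mountPath: /config         # the file appears as /config/app.properties
  volumes:
  - name: props
    configMap:
      name: app-properties

Because the file now comes from the ConfigMap rather than from the container's writable layer, it survives pod restarts and node reboots.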

InitContainer consuming config.template files from the container image in Kubernetes

Summary:
I want to be able to get the config.template files from the container's image into the init-container.
The existing state:
There are template configuration files, which rarely change and contain placeholders, stored inside the container's image.
When we create the container in Kubernetes, a script (also stored in the image) runs, replaces all the placeholders with the real values, and then starts the service.
The desired state:
Have an init-container, built from a generic image with generic code, that takes as arguments only the directories of the template files (as an array of directories). When it runs, it takes all the template files from the container's image (through a volume), replaces the placeholders with the real values, and creates the final configuration files in a volume shared with the container.
That way the init-container does the preparation, and when it is done the container can start immediately with the prepared configuration files.
Also, the same init-container image can be used in pods with other containers.
The Problem:
The init-container starts first, and the volume that maps to the container's image and is supposed to contain the config.template files is still empty while the init-container is running.
My Questions:
- Is there an easy and good way to get those config.template files from the container's image into the init-container before the container is running?
- Is there a better solution to this problem that achieves the same or a similar result?
I think there is no way for init-containers to access files from a container in the pod. The volume that is shared between your init-container and the pod's container is empty because the pod's container is not started until all init-containers have successfully exited (with exit code 0).
So what I suggest is to create a ConfigMap with the templates you want, and mount this ConfigMap inside the init-container. Inside the init-container, replace the placeholder values in the templates coming from the ConfigMap and dump the result into the volume shared between the pod's container and the init-container.
This lets you change the config as you want; all you need to do is update the ConfigMap resource. Likewise, the code that replaces the placeholder values can change, and all you need to do is rebuild the init-container image. This keeps the init-container image as generic as you want.
With this decoupling, your source code container remains independent of changes to the config.
This init-container and ConfigMap can also be reused in other pods as you like.
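A minimal sketch of that layout (the image and volume names are placeholders, and config-renderer stands in for your generic init-container image):

apiVersion: v1
kind: Pod
metadata:
  name: templated-app
spec:
  initContainers:
  - name: render-config
    image: config-renderer:latest      # generic placeholder image with the replacement code
    args: ["/templates"]               # directories holding the template files
    volumeMounts:
    - name: templates
      mountPath: /templates            # config.template files from the ConfigMap
    - name: rendered
      mountPath: /rendered             # final configuration files are written here
  containers:
  - name: app
    image: my-service:latest           # placeholder for your service image
    volumeMounts:
    - name: rendered
      mountPath: /etc/my-service       # the service starts with the rendered configs
  volumes:
  - name: templates
    configMap:
      name: config-templates           # holds the config.template files
  - name: rendered
    emptyDir: {}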
