How to make the Kubernetes pod aware of new file changes? - docker

Is there a way to make Kubernetes pods aware of new file changes?
Let's say I have a Kubernetes (K8s) pod running with 4 replicas, and a K8s PV created and attached to an external file system where we can modify files. Consider that the pod runs
a Tomcat server with an application named test_app, which is located in the following directory inside the container:
tomcat/webapps/test_app/
Inside the test_app directory, I have a few sub-directories:
test_app/xml
test_app/properties
test_app/jsp
All these sub-directories are attached to a volume that is mounted to an external file system. Anyone who has access to the external file system can update the xml / properties / jsp files.
When these files are changed on the external file system, the changes are reflected inside the sub-directories test_app/xml, test_app/properties, and test_app/jsp, since we have a PV attached. But they are not reflected in the web application unless we restart the Tomcat server, and to restart the Tomcat server we need to restart the pod.
So whenever someone changes the files on the external file system, how do I make Kubernetes aware that there are new changes which require the pods to be restarted?
Is this even possible in Kubernetes right now?
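To illustrate the setup, here is a rough sketch of how such a deployment might mount the volume (the PVC name, image tag and mount paths are placeholders, not my actual manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: tomcat
        image: tomcat:9                    # placeholder image tag
        volumeMounts:
        - name: app-content
          mountPath: /usr/local/tomcat/webapps/test_app/xml          # adjust to your Tomcat install path
          subPath: xml
        - name: app-content
          mountPath: /usr/local/tomcat/webapps/test_app/properties
          subPath: properties
        - name: app-content
          mountPath: /usr/local/tomcat/webapps/test_app/jsp
          subPath: jsp
      volumes:
      - name: app-content
        persistentVolumeClaim:
          claimName: test-app-pv-claim     # placeholder PVC backed by the external file system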

If by file changes you mean changes to your application, the best practice is to bake a container image with your application code and push a new container image when you need to deploy new code. You can do this by modifying your Kubernetes deployment to point to the latest digest hash.
For instance, in a deployment YAML file:
image: myimage@sha256:digest0
becomes
image: myimage@sha256:digest1
and then kubectl apply would be one way to do it.
You can read more about using container images with Kubernetes here.
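For example, assuming the Deployment is named my-app and defined in deployment.yaml (both names are placeholders), the rollout could look like this:
# deployment.yaml (excerpt): bump the digest the pod template points to
    spec:
      containers:
      - name: my-app
        image: myimage@sha256:digest1
# then apply the change; Kubernetes performs a rolling update of the replicas
kubectl apply -f deployment.yaml
# alternatively, patch the image directly without editing the file
kubectl set image deployment/my-app my-app=myimage@sha256:digest1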

Related

Is there a way to give classpath in Kubernetes deployment/pod definition?

I have an application war which reads an API implementation jar file to load some data into memory. What I am doing currently is COPYing that jar into my "application-war/lib" directory in the Dockerfile while building my application's image.
The downside of this is that whenever the jar needs to change, I have to rebuild my application war Docker image.
Is there a way to externalize this jar file's location, so that I only need to restart my running pod rather than create a new image each time?
In other words, is there some way to give an additional CLASSPATH that my pod's container can read while starting up?
Thanks
You are already doing it the right way.
The Docker image should be created for each build so that you have history as well. It is not good practice to change something inside a running pod and restart it somehow.
If you still want to do it that way, you could mount an external volume into your pod and configure your application server to read the war file from that location. In this case you will still need some other way to access that volume so you can place the file there.
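A rough sketch of that approach, assuming a PVC named api-jars-claim and an application server whose startup script honours an extra CLASSPATH entry (all names and paths here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-external-jar
spec:
  containers:
  - name: app
    image: my-app-war:latest              # placeholder application image
    env:
    - name: CLASSPATH                     # extra classpath entry read at startup
      value: /ext/lib/api-impl.jar
    volumeMounts:
    - name: ext-lib
      mountPath: /ext/lib                 # the jar is placed on this volume out of band
  volumes:
  - name: ext-lib
    persistentVolumeClaim:
      claimName: api-jars-claim           # placeholder PVC name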
To provide a bit more context.
Everything said by @Hazim is correct, and I fully agree with him that your current build process is the correct way, as it allows you to see the image history and quickly switch back if needed.
As for using external files inside your image.
You need to set up a PV (persistent volume), which will be utilized by a PVC (persistent volume claim).
A really detailed description with examples is available in Configure a Pod to Use a PersistentVolume for Storage.
It shows how to create a folder on your node and place a file in it, which is later loaded into a pod. You won't be loading that file into a pod directly, but using the path in your Dockerfile to load the .jar file.
If your .jar file is composed of key=value entries, you could also use a ConfigMap instead of a PV. This is nicely explained with a Redis application, which you can see in DOCKER / KUBERNETES - CONFIGURE A POD TO USE A CONFIGMAP.
I hope this provides all needed information.
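Along the lines of that guide, a minimal hostPath PV and matching PVC might look like this (the path, names and sizes are just examples):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jar-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/jars                  # example folder created on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-jars-claim                    # placeholder claim name, referenced from the pod's volumes section
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi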

Rolling update with shared folders/files Docker

I'm looking for a way to share files/folders between Docker containers; sharing a file in particular gives me issues. I want to use Docker in production with docker-compose and use a deployment technique that gives me zero downtime (blue/green or something else).
What I have so far is to deploy the new source code by checking out the git source first. I keep the old container running until the new one is up, then I stop the old one and remove it.
The problem I'm running into with shared files is that Docker doesn't lock files. So when two containers with the same application are up and writing to the same file shared_database.db, this causes data corruption.
Folder structure from root looks like this:
/packages (git source)
/www (git source)
/shared_database.db (file I want to share across different deployments)
/www/public/files (folder I want to share across different deployments)
I've tried:
symlinks; unfortunately Docker doesn't support symlinks
mounting the shared files/folders in the docker-compose file under the volumes section, but since Docker doesn't lock files this causes data corruption
If I need to make myself clearer or provide more info, I'd be happy to. Let me know.
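For reference, a sketch of the shared-volume setup described above in docker-compose terms (the service and image names are made up):
version: "3"
services:
  app_blue:
    image: my-app:latest                                    # placeholder image
    volumes:
      - ./shared_database.db:/app/shared_database.db        # shared file
      - ./www/public/files:/app/public/files                # shared folder
  app_green:                                                # second copy kept up during the switch-over
    image: my-app:latest
    volumes:
      - ./shared_database.db:/app/shared_database.db        # both containers write the same file, with no locking
      - ./www/public/files:/app/public/files
Both containers bind-mount the same file, which is exactly where the corruption described above comes from.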

Kubernetes - File missing inside pods after reboot

In Kubernetes I create a deployment with 3 replicas, and it creates 3 pods.
After pod creation, I create a property file which has all the key/value pairs that are required for my application (on all 3 pods).
If I reboot the machine, the property file inside the pods is missing, so I am creating it manually every time the machine reboots.
Is there any way to preserve the property file inside the pod?
What you do depends on what the file is for and where it needs to be located.
If it is a config file, you may want to use a ConfigMap and mount the ConfigMap into the container.
If it needs to be long-lived storage for data, create a persistent volume claim and then mount the volume into the container.
Something like this is necessary because the container file system is otherwise ephemeral, and anything written to it will be lost when the container is shut down.
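As a rough sketch of the ConfigMap route, assuming the file is called app.properties (the names and values below are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  app.properties: |                       # the key/value entries your application needs
    db.host=example-db
    db.port=5432
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:latest                  # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/app                 # the file shows up as /etc/app/app.properties
  volumes:
  - name: config
    configMap:
      name: app-properties
Because the ConfigMap is stored in the cluster rather than in the container's file system, the file reappears in all replicas after a reboot.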

Storing and updating config files for my applications with kubernetes

My application uses a config file. How do I push updates to it? How should it be stored for convenient updates? In volumes?
The pipeline for the app is Git -> CI -> deb repo -> docker registry, so updating the app is just telling Kubernetes to select a new image.
What should I do for the config file? Maybe the same chain and then just spin up a container with NFS on it? Also, the app has to be notified about parameter changes via a SIGHUP. How do I add that hook?
You can use Kubernetes ConfigMaps for configs. Don't bake the configs into the container image.
You can expose the configs as environment variables, or they can be mounted as volumes inside the pod.
A ConfigMap can be generated from a file as well.
In your case it seems like you are reading from a config file at a specific location, so you can use a ConfigMap and mount it at the same location your app reads from; that way you don't need to make any changes in your app.
When you need to update the config, just update the ConfigMap, and the new pods that come up will read the new config. I don't know how to update the config in a running pod; what I have tried is scaling up and then scaling down.
configMaps: https://kubernetes.io/docs/user-guide/configmap/
HTH
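For example, assuming the app reads its config from /etc/myapp/app.conf (the path and names here are assumptions), the ConfigMap can be generated from the existing file and mounted back at the same location:
# create the ConfigMap from the existing config file
kubectl create configmap app-config --from-file=app.conf
# pod spec fragment: mount it where the app already looks
    containers:
    - name: app
      image: my-app:latest                # placeholder image
      volumeMounts:
      - name: app-config
        mountPath: /etc/myapp             # app.conf appears here unchanged
    volumes:
    - name: app-config
      configMap:
        name: app-config
To push an update, recreate the ConfigMap from the new file and recycle the pods (for example by scaling down and up again, as described above).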

Docker: where should I put container configuration and war file

Recently I was trying to figure out what a Docker workflow looks like.
What I thought is that devs should build and push images, and in other environments servers should just pull that image directly and run it.
But I can see that a lot of public images allow people to put configuration outside the container.
For example, in the official elasticsearch image, there is a command as follows:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
So what is the point of putting configuration outside the container instead of running local containers quickly?
My argument is:
if I put the configuration inside a custom image, then in the testing environment or production, the server just needs to pull that same image, which is already built.
if I put the configuration outside the image, then in other environments there will be another process to get that configuration from somewhere. Sure, we could use git to version-control it, but isn't that a tedious and useless effort to manage? And installing third-party libraries is also required.
Further question:
Should I put the application file (for example, a war file) inside the web server container or outside it?
When you are doing development, configuration files may change often, so rather than keep rebuilding the containers, you may use a volume instead.
If you are in production and need dozens or hundreds of the same container, all with slightly different configuration files, it is easier to have one single image and keep the differing configuration files outside (e.g. in consul, etcd, zookeeper, ... or a VOLUME).
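To make the trade-off concrete, here is a sketch of both options for the elasticsearch example above (the config path comes from the official image; everything else is illustrative):
# Option 1: bake the config into a custom image; every config change means a rebuild
# Dockerfile
FROM elasticsearch
COPY config/ /usr/share/elasticsearch/config/
# Option 2: keep one generic image and mount the environment-specific config at run time
docker run -d -v "$PWD/config-prod":/usr/share/elasticsearch/config elasticsearch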
