In Kubernetes I create a deployment with 3 replicas, and it creates 3 pods.
After the pods are created, I create a property file containing all the key/value pairs required by my application (on all 3 pods).
If I reboot the machine, the property file inside the pods is missing, so I am recreating it manually every time the machine reboots.
Is there any way to save the property file inside the pod?
What you do depends on what the file is for and where it needs to be located.
If it's a config file, you may want to use a ConfigMap and mount it into the container.
If you need long-lived storage for data, create a persistent volume claim and then mount the volume into the container.
Something like this is necessary because the container file system is otherwise ephemeral, and anything written to it will be lost when the container is shut down.
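For illustration, here is a minimal sketch of the ConfigMap approach; the names app-config and app.properties, the image tag, and the sample keys are all hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                # hypothetical name
data:
  app.properties: |
    db.host=localhost
    db.port=5432
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:1.0             # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /etc/config      # file appears as /etc/config/app.properties
  volumes:
  - name: config
    configMap:
      name: app-config

Because the file now lives in the ConfigMap rather than in the container file system, it survives restarts and is recreated automatically in every replica.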
Is there a way to make Kubernetes Pods aware of new file changes?
Let's say I have a Kubernetes (K8s) pod running with 4 replicas, and I also have a K8s PV created and attached to an external file system where we can modify files. Let's consider that the pod is running
a Tomcat server with an application named test_app, which is located in the following directory inside the container:
tomcat/webapps/test_app/
Inside the test_app directory, I have a few sub-directories like the ones below:
test_app/xml
test_app/properties
test_app/jsp
All these sub-directories are attached to a volume, and it is mounted to an external file system. Anyone who has access to the external file system will be updating the xml/properties/jsp files.
When these files are changed in the external file system, the changes are reflected inside the sub-directories test_app/xml, test_app/properties, and test_app/jsp, since we have a PV attached. But these changes will not be reflected in the web application unless we restart the Tomcat server, and to restart the Tomcat server, we need to restart the pod.
So whenever someone makes changes to the files in the external file system, how do I make K8s aware that there are new changes that require the pods to be restarted?
Is this even possible in Kubernetes right now?
If you are referring to file changes meaning changes to your application, the best practice is to bake a container image with your application code, and push a new container image when you need to deploy new code. You can do this by modifying your Kubernetes deployment to point to the latest digest hash.
For instance, in a deployment YAML file:
image: myimage@sha256:digest0
becomes
image: myimage@sha256:digest1
and then kubectl apply would be one way to do it.
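Alternatively, you can patch the image in place; a sketch, assuming the deployment and its container are both named myapp:

kubectl set image deployment/myapp myapp=myimage@sha256:digest1

Either way, the deployment performs a rolling update, so the new code is picked up without manually restarting pods.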
You can read more about using container images with Kubernetes here.
I have an application war which reads an API implementation jar file to load some data in memory. What I am currently doing is COPYing that jar into my "application-war/lib" directory in the Dockerfile while building my application image.
The downside of this is that whenever the jar needs to change, I have to rebuild my application's Docker image.
Is there a way I can externalize this jar file's location, so that I only need to restart my running pod rather than create a new image each time?
I mean, is there some way I can supply an additional CLASSPATH entry that my pod's container can read while starting up?
Thanks
You are already doing it the right way.
A Docker image should be created for each build so that you also have a history. It isn't good practice to change something inside a running pod and restart it somehow.
If you still want to do it that way, you could mount an external volume into your pod and configure your application server to read the war file from that location. In this case you will still need access to that volume in some other way that allows you to place the file there.
To provide a bit more context.
Everything said by @Hazim is correct, and I fully agree with him that you're doing the current build the correct way, as it allows you to see image history and quickly switch if needed.
As for using external files inside your image:
You need to set up a PV (persistent volume), which will be utilized by a PVC (persistent volume claim).
A really detailed description with examples is available in Configure a Pod to Use a PersistentVolume for Storage.
It shows how to create a folder on your node and place a file in it, which is later loaded into a pod. You won't be loading that particular file into your pod; instead, you'll use the mounted path in your Dockerfile to load the .jar file.
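As a rough sketch of the pod side, under the assumption that your startup script honors an extra classpath entry; the volume name, claim name, EXTRA_CLASSPATH variable, and /ext-lib path are all hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app-war
spec:
  containers:
  - name: app
    image: my-app-war:1.0          # hypothetical image
    env:
    - name: EXTRA_CLASSPATH        # assumed to be read by your startup script
      value: /ext-lib/*
    volumeMounts:
    - name: ext-lib
      mountPath: /ext-lib          # the externalized .jar lives here
  volumes:
  - name: ext-lib
    persistentVolumeClaim:
      claimName: ext-lib-pvc       # hypothetical PVC bound to your PV

Replacing the .jar in the backing volume and restarting the pod would then pick up the new version without rebuilding the image.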
If your .jar file's contents are composed of key=value entries, you could also use a ConfigMap instead of a PV. This is nicely explained with a Redis application, which you can see here: DOCKER / KUBERNETES - CONFIGURE A POD TO USE A CONFIGMAP.
I hope this provides all needed information.
I have defined a job in my Kubernetes cluster which is supposed to create a folder with some data. Now I would like to share this folder with all the other pods, because they need to use this data.
Currently, the other pods do not run until the above-mentioned job has finished.
So I am thinking about volumes. Let's say the result of the job is a mounted folder, which is accessible from the other pods once the job is finished.
The other pods in the cluster need only an environment variable: the path to this mounted folder.
Could you please explain how I could define this?
PS: I know this is not a very good use case; however, I have a legacy monolith application with lots of dependencies.
I'm assuming that the folder you are referring to is a single folder on some disk that can be mounted by multiple clients.
Check here, or in your volume plugin's documentation, whether the access mode you are requesting is supported.
Create the persistent volume claim that your pods will use; there is no need for the matching volume to exist yet. You can use label/expression matching to make sure that this PVC will only be satisfied by the persistent volume you will be creating in your job.
In your job, add a final task that creates the persistent volume that satisfies the PVC.
Create your pods, adding the PVC as a volume. I don't think pod presets are needed; besides, they are alpha, not enabled by default, and not widely used. But depending on your case you might want to take a look at them.
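A rough sketch of the PV/PVC matching described above; the names, the produced-by label, and the NFS backend are assumptions, and any shared-storage plugin supporting the access mode would do:

# Created up front; stays Pending until a matching PV exists
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadOnlyMany"]    # verify your plugin supports this mode
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      produced-by: data-job        # hypothetical label
---
# Created by the job's final task once the data is ready
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
  labels:
    produced-by: data-job
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadOnlyMany"]
  nfs:                             # example backend only
    server: nfs.example.com
    path: /exports/job-output

Each consuming pod then mounts the claim at a fixed path, e.g. /shared-data, and exposes that path through the environment variable your application expects.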
I'm ramping up on Docker and k8s, and I am running into an issue with a 3rd-party application I'm containerizing, where the application is configured via flat text files, without environment-variable overrides.
What is the best way to dynamically configure this app? I'm immediately leaning towards a sidecar container that accepts environment variables, renders the text config file to a shared volume in the pod, and then the application container reads the config file. Is this correct?
What is the best practice here?
Create a ConfigMap with this configuration file. Then, mount the ConfigMap into the pod. This will create the configuration file in the mounted directory, and you can then use the configuration file as usual.
Here are related examples:
Create ConfigMap from file.
Mount ConfigMap as volume.
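For instance, assuming your flat config file is named app.conf (a hypothetical name), the ConfigMap can be created straight from the file:

kubectl create configmap app-config --from-file=app.conf

Mounted as a volume at, say, /etc/config, it shows up inside the container as /etc/config/app.conf, so no sidecar is needed for the static case.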
What are the best practices for updating a container in the following scenario?
I have images built from my web app project, and I publish new images based on updated source code about once a month.
But my web app generates files, or updates some files, over time while running in the container. For example, the app creates new xml files under the user folder for each web user. Another example is files uploaded by users.
I want to keep these files when running the new, updated image, without losing them.
/bin/
    /first.dll
    /second.dll
/other-sources/
    /some.cs
    /other.cs
/user/
    /user-1.xml
    /user-2.xml
/uploads/
    /images/
        /image-1.jpg
/web.config
Should I use the volume feature of Docker? Is there another strategy?
Short answer, yes, you do want a volume for these directories. More specifically, two volumes: /user and /uploads.
This gets into a fundamental practice of image and container design that is best done by dividing your application into three parts:
The application code, binaries, libraries, and other runtime dependencies.
The persistent data that the application access and creates.
The configuration that modifies how the application runs, particularly in different environments with the same code.
Each of these parts should go in a different place in docker.
The first part, the code and binaries, goes in your image. This is what you ship to run your container on different nodes in docker, and what you store in a registry for later reuse.
The second part, your persistent data, gets stored in a volume. There are two main types of volumes to pick from: a named volume and a host volume (aka bind mount).

A named volume has a particular feature that improves portability: it is initialized to the contents of your image at the volume location when the volume is created for the first time. This initialization includes directory and file permissions and ownership, and it can be used to seed your volume with an initial state.

A host volume (bind mount) is just a directory mounted from the docker host into the container; you get exactly what was on the host, including the uid/gid of the files and directories, with no initialization procedure. The host volume is very easy for developers to access, but it lacks portability if you move into a multi-node swarm cluster, and it suffers from uids/gids on the host mapping to different users inside the container, since usernames inside the container can be different for the same IDs.

Any files you write inside the container that are not written to a volume should be considered disposable, and they will be lost when you recreate the container to update to a new image. And any directory you define as a volume should be considered owned by that volume; it will not receive updates from the image when you replace the container.
The last piece, configuration, is often overlooked but equally important. This is anything injected into the application at startup to tell it where to connect for external data, config files that alter its behavior, and anything that needs to be separated out so the same image is reusable in different environments. This is how you get portability from development to production with the same image, and how you get reusability of publicly provided images.

Configuration is injected via environment variables, command-line parameters, bind mounts of a config file (when you run on a single node), and configs + secrets, which are essentially the same bind mount of a config file that is now stored in docker's swarm rather than locally on a single host.

In your situation, the /web.config looks suspiciously like a config file that you'll want to move out of the image and inject as a bind mount or a swarm config.
To put these all together, you will want a compose file that defines your image, the volumes to use, and any configs or environment variables to set.
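A minimal sketch of such a compose file; the image tag, mount paths, and environment variable are hypothetical:

version: "3.8"
services:
  web:
    image: mywebapp:1.0                  # part 1: code and binaries baked into the image
    environment:
      - APP_ENV=production               # part 3: injected configuration
    volumes:
      - user-data:/app/user              # part 2: persistent data in named volumes
      - uploads:/app/uploads
      - ./web.config:/app/web.config:ro  # config file bind-mounted rather than baked in
volumes:
  user-data:
  uploads:

With this layout, replacing the image tag updates the code while the user-data and uploads volumes, and the externally managed web.config, carry over untouched.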