wwwroot directory, subdirectories and files not found in Docker

I have developed an ASP.NET Core 6 application, created a Docker image for it, and deployed it to a Google Kubernetes Engine cluster.
I am facing an issue when trying to access files under wwwroot.
For example, I have some files in the structure below:
wwwroot/Content/Static/{.vm files}
I need to access these files from code. The process works fine locally,
but the Docker image running in the Google Kubernetes Engine cluster cannot find these files.
Do I need to add any additional syntax to copy the wwwroot folder structure along with its subdirectories and files?
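For reference, in a standard multi-stage build the wwwroot tree is usually carried along as part of the dotnet publish output, so a missing folder often points at a .dockerignore exclusion or a COPY step that skips it. A minimal sketch, assuming a typical .NET 6 Dockerfile; MyApp is a placeholder project/DLL name:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
# copy the whole source tree, including wwwroot (check that .dockerignore does not exclude it)
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
# the publish output should already contain wwwroot/Content/Static/...
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]

Running ls against the built image (for example docker run --rm --entrypoint ls <image> wwwroot/Content/Static) is a quick way to check whether the files made it into the image at all.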

Related

Include external files in ProcessMaker 4 PHP scripts

I am using ProcessMaker 4 Community. I need to run a PHP script that includes some external files with classes I defined. Since the PHP executor runs in a container, it does not have access to the file system on my computer, so how can I make the included files available in the executor container?
I understand that one option would be to create a new executor with a Dockerfile that copies the required PHP files into the executor's Docker image (a sketch of this is shown below), but that means rebuilding the image each time I change the included files.
It would be great if I could mount a directory from my local host into the executor's container, but I have not figured out whether I can do that in ProcessMaker.
Any suggestions or information on how to work around this?
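A minimal sketch of the custom-executor idea mentioned above; the base image and paths are assumptions, so substitute the PHP executor image your ProcessMaker instance actually uses and the real location of your class files:

# hypothetical base image; replace with your ProcessMaker PHP executor image
FROM php:8.1-cli
# bake the shared class files into the image
COPY classes/ /opt/custom-classes/
# add the directory to PHP's include_path so scripts can include/require the classes
RUN echo "include_path=.:/opt/custom-classes" > /usr/local/etc/php/conf.d/include-path.ini

The bind-mount alternative (docker run -v /path/on/host/classes:/opt/custom-classes ...) would avoid rebuilding on every change, but whether ProcessMaker lets you attach such a mount to its executor containers is exactly the open question here.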

How to make the Kubernetes pod aware of new file changes?

Is there a way to make Kubernetes pods aware of new file changes?
Let's say I have a Kubernetes (K8s) pod running with 4 replicas, and I also have a K8s PV created and attached to an external file system where we can modify files. Let's say the pod is running
a Tomcat server with an application named test_app, which is located in the following directory inside the container
tomcat/webapps/test_app/
Inside the test_app directory, I have a few subdirectories like these
test_app/xml
test_app/properties
test_app/jsp
All these subdirectories are attached to a volume that is mounted to an external file system. Anyone who has access to the external file system can update the xml / properties / jsp files.
When these files are changed on the external file system, the changes are reflected inside the subdirectories test_app/xml, test_app/properties and test_app/jsp, since we have a PV attached. But the changes are not reflected in the web application unless we restart the Tomcat server, and to restart the Tomcat server we need to restart the pod.
So whenever someone changes the files on the external file system, how do I make Kubernetes aware that there are new changes which require the pods to be restarted?
Is that even possible in Kubernetes right now?
If you are referring to file changes meaning changes to your application, the best practice is to bake a container image with your application code, and push a new container image when you need to deploy new code. You can do this by modifying your Kubernetes deployment to point to the latest digest hash.
For instance, in a deployment YAML file:
image: myimage@sha256:digest0
becomes
image: myimage@sha256:digest1
and then kubectl apply would be one way to do it.
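As a minimal sketch, with the names and labels here being placeholders, the relevant part of a Deployment manifest would look like this after the update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        # pinning to a digest instead of a mutable tag makes the change explicit
        image: myimage@sha256:digest1

Because the pod template changed, kubectl apply -f deployment.yaml triggers a rolling update, so the pods are recreated with the new image.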
You can read more about using container images with Kubernetes here.

Azure Batch working directory issue

I am a newbie to Azure Batch as well as Docker. The problem I am facing: I have created an image based on another custom image, in which some files and folders are created at the root level/directory of the container, and everything works fine. But when the same image runs in an Azure Batch task, I don't know where these files and folders are being created, because the wd (working directory) folder is empty. Any suggestions, please? Thank you. I know Azure Batch does something with the directory structure, but I am not clear about it.
As you're no doubt aware, Batch maps directories into the container (from the docs):
All directories recursively below the AZ_BATCH_NODE_ROOT_DIR (the root of Azure Batch directories on the node) are mapped into the container
so, for example, a resource file on the task ends up in the working directory inside the container. However, that does not mean files the container writes to those paths are written back to the same location on the host; they stay within the container. I would suggest that you take whatever results/output you have generated and upload them to Azure Blob Storage via a Shared Access Signature - this is the usual way to get results out of a Batch job, even without Docker.
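If you would rather not add upload logic to the container itself, the task's outputFiles setting is one declarative way to do that upload: Batch copies matching files to a blob container via a SAS URL when the task finishes. A minimal sketch of the relevant fragment of a task definition, where the storage account, container name, SAS token and file pattern are all placeholders:

"outputFiles": [
  {
    "filePattern": "output/**/*",
    "destination": {
      "container": {
        "containerUrl": "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>"
      }
    },
    "uploadOptions": {
      "uploadCondition": "taskcompletion"
    }
  }
]

The filePattern is evaluated against the task's directory on the node, so the exact pattern depends on where your results actually end up being written.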

Nexus repository configuration with dockerization

Is it possible to configure Nexus Repository Manager (3.9.0) in a way that is suitable for a Docker-based containerized environment?
We need a customized Docker image that contains the basic configuration for Nexus Repository Manager, like project-specific repositories and LDAP-based authentication for users. We found that most of the Nexus configuration lives in the database (OrientDB) used by Nexus. We also found that there is a REST interface offered by Nexus for handling configuration from third parties, but we found no configuration export/import capabilities besides backup (directory servers have LDIF, application servers have command-line scripts, etc.).
Right now we export the configuration as backup files, and during the customized Docker image build we copy those backup files back to the file system in the container:
FROM sonatype/nexus3:latest
[...]
# Copy backup files
COPY backup/* ${NEXUS_DATA}/backup/
When the container starts up, it picks up the backup files and Nexus is configured the way we need. However, it would be much better if there were a way to handle these configurations via a set of config files.
All that data is stored under /nexus-data, so you can create an initial Docker container with a Docker volume or a host directory to keep that data. After you have preconfigured that instance, you can distribute your customized Docker image together with the Docker volume containing the Nexus data. Or, if you used a host directory, you can simply copy all that data over in a similar fashion to what you do now, but use the /nexus-data directory instead.
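A minimal sketch of that first step with a named volume (the volume and container names are just examples):

docker volume create nexus-data
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data sonatype/nexus3

After you configure the repositories and LDAP settings in that instance, the nexus-data volume holds the resulting configuration (including the OrientDB data) and can be reused or copied when building or running your customized image.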
You can find more information at DockerHub under Persistent Data.

Make notebook-generated .txt or .json files within the static folder persistent in Bluemix

I've been following the Twitter tutorial for Bluemix and Watson, and I've read that notebooks are short-lived. But what about the .txt or .json files generated by running a notebook and saved under the 'static' folder?
Whenever I re-run my application, those .txt and .json files are no longer there. Is there any way to make them persistent?
Thanks.
You have to save the files to your git repository if you are using IBM DevOps as described in the tutorial. With this approach, a brand new container for your application is created every time you run Build and Deploy in your DevOps environment.
Alternatively, you can save a local copy to your project's root directory, and it will be copied to your application container when you use cf push to deploy your application.
Cloud Foundry applications are ephemeral, so you should avoid writing to the local disk. As mentioned in the first paragraph, a brand new container is created every time your application is redeployed.
You can find more details here.
