Reading files from subdirectories of SFTP using VFS transport in WSO2

We have a scenario in which we need to process all the files in the current folder and in all of its subfolders. The subfolder names are dynamic.
Is this possible to achieve using VFS in WSO2?
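For context, here is a minimal sketch of a VFS proxy service polling an SFTP folder (the host, credentials, paths, and proxy name are placeholders). Note that in many ESB versions the stock VFS listener polls only the single directory named in transport.vfs.FileURI, so processing dynamically named subfolders may require a scheduled task that iterates the folders, or a build with recursive-scan support:

<proxy xmlns="http://ws.apache.org/ns/synapse" name="SFTPFilePollProxy" transports="vfs" startOnLoad="true">
    <parameter name="transport.vfs.FileURI">vfs:sftp://user:pass@sftp.example.com/in</parameter>
    <parameter name="transport.vfs.ContentType">text/plain</parameter>
    <parameter name="transport.vfs.FileNamePattern">.*\.txt</parameter>
    <parameter name="transport.PollInterval">15</parameter>
    <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
    <parameter name="transport.vfs.MoveAfterProcess">vfs:sftp://user:pass@sftp.example.com/done</parameter>
    <target>
        <inSequence>
            <!-- log the picked-up file, then drop; replace with real mediation -->
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
</proxy>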

Related

wwwroot directory, subdirectories, and files not found in Docker

I have developed an ASP.NET Core 6 based application, created a Docker image, and published it to a Google Kubernetes cluster.
I am facing an issue while trying to access wwwroot files.
For example, I have some files in the structure below:
wwwroot/Content/Static/{.vm files}
I need to access these files in source. The process works fine locally,
but the Docker image running in the Google Kubernetes cluster cannot find these files.
Do I need to include any additional syntax to copy the wwwroot folder structure along with its subdirectories and files?
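A minimal multi-stage Dockerfile sketch for this kind of setup (the project name MyApp is a placeholder; for Microsoft.NET.Sdk.Web projects, dotnet publish copies wwwroot into the publish output by default, so if files are still missing, check that they are included as Content in the .csproj):

# build stage: publish the app; wwwroot is included in the publish output by default
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# runtime stage: copy only the published output
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]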

How to configure a writable folder inside an application published to Azure App Service using Docker for Windows

I'm working on an application that obtains some data from a web service, creates a text file in the local filesystem, sends a command to a command-line application, obtains the result, and then sends the results back via the web service.
I need to be able to write to the local file system, read from it, and then delete the temporary file. I was reading about bind mounts and volumes, but this folder can be deleted if a new version of the image is uploaded; it is just a staging area.
Any ideas how this can be done? Thanks.
When using containers in App Service, I believe you will have to link a storage account and mount file shares accordingly. Depending on the OS (Windows/Linux), the steps vary a bit.
If you are not using containers, then you should be able to access the temporary file locations for file-based requirements. Do note that the storage available this way is limited and not shared across site instances.
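A sketch of mounting an Azure Files share into an App Service container with the Azure CLI (all names and the key are placeholders; exact flags may differ across CLI versions):

az webapp config storage-account add \
    --resource-group myResourceGroup \
    --name myWebApp \
    --custom-id tempStaging \
    --storage-type AzureFiles \
    --account-name mystorageaccount \
    --share-name staging \
    --access-key "<storage-access-key>" \
    --mount-path /staging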

Azure Batch working directory issue

I am a newbie to Azure Batch as well as Docker. The problem I am facing: I have created an image based on another custom image, in which some files and folders are created at the root directory of the container, and everything works fine. But when the same image runs in an Azure Batch task, I don't know where these files and folders are being created, because the wd (working directory) folder is empty. Any suggestions please? Thank you. I know Azure Batch does something with the directory structure, but I am not clear about it.
As you're no doubt aware, Batch maps directories into the container (from the docs):
All directories recursively below the AZ_BATCH_NODE_ROOT_DIR (the root of Azure Batch directories on the node) are mapped into the container
so, for example, if you have a resource file on the task, it ends up in the working directory within the container. However, this doesn't mean the container is allowed to write back to the same location on the host (only within the container). I would suggest that you take whatever results/output you have generated and upload them into Azure Blob Storage via a Shared Access Signature; this is the usual way to get results from a Batch job even without using Docker.
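For instance, a task definition can declare output files that Batch uploads to a blob container via a SAS URL after the task finishes. A sketch of the relevant JSON fragment (the pattern, account, container, and SAS token are placeholders; check the enum casing against your API version):

"outputFiles": [
    {
        "filePattern": "results/**/*.txt",
        "destination": {
            "container": {
                "containerUrl": "https://myaccount.blob.core.windows.net/results?<SAS-token>"
            }
        },
        "uploadOptions": {
            "uploadCondition": "taskSuccess"
        }
    }
]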

Nexus repository configuration with dockerization

Is it possible to configure Nexus Repository Manager (3.9.0) in a way that is suitable for a Docker-based containerized environment?
We need a customized Docker image which contains basic configuration for the Nexus repository manager, like project-specific repositories and LDAP-based authentication for users. We found that most of the Nexus configuration lives in the database (OrientDB) used by Nexus. We also found that there is a REST interface offered by Nexus for handling configuration by third parties, but we found no configuration exporter/importer capabilities besides backup (directory servers have LDIF, application servers have command-line scripts, etc.).
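For reference, a sketch of driving that REST interface through the Nexus script API to provision a repository (the endpoint path and default credentials vary across 3.x versions, and the script API may need to be enabled; the repository name and script content are placeholders):

curl -u admin:admin123 -X POST "http://localhost:8081/service/rest/v1/script" \
    -H "Content-Type: application/json" \
    -d '{"name": "create-raw-repo", "type": "groovy", "content": "repository.createRawHosted(\"project-raw\")"}'
curl -u admin:admin123 -X POST "http://localhost:8081/service/rest/v1/script/create-raw-repo/run" \
    -H "Content-Type: text/plain"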
Right now we export the configuration as backup files, and during the customized Docker image build we copy those backup files back to the file system in the container:
FROM sonatype/nexus3:latest
[...]
# Copy backup files
COPY backup/* ${NEXUS_DATA}/backup/
When the container starts up, it will pick up the backup files and Nexus will be configured the way we need. However, it would be much better if there were a way that allowed us to handle these configurations via a set of config files.
All that data is stored under /nexus-data, so you can create an initial Docker container with a Docker volume or a host directory that keeps all that data. After you have preconfigured that instance, you can distribute your customized Docker image with that Docker volume containing the Nexus data. Or, if you used a host directory, you can simply copy over all that data in a similar fashion to what you do now, but use the /nexus-data directory instead.
You can find more information at DockerHub under Persistent Data.
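A minimal sketch of the volume-based approach (the volume and container names are arbitrary):

# create a named volume for the Nexus data and run the stock image against it
docker volume create nexus-data
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data sonatype/nexus3
# after configuring via the UI/REST, the volume holds the full configuration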

How to correctly configure server for Symfony (on shared hosting)?

I've decided to learn Symfony, and right now I am reading through the very start of the "Practical Symfony" book. After reading the "Web Server Configuration" part, I have a question.
The manual describes how to configure the server correctly: the browser should have access only to the web/ and sf/.../ directories. The manual has great instructions for this, and being a Linux user I had no problem following them and making everything work on my local machine. However, that involves editing VirtualHost entries, which is normally not easy to do on common shared hosting servers. So I wonder: what is the common technique Symfony developers use to get the same results in a shared hosting environment? I think I can do it by adding "deny from all" in the root and then overriding that rule in the allowed directories, but I am not sure whether that is the easiest way or the way that is normally used.
If you can add files outside the public_html directory, put all the directories there, and put the files from your web directory into public_html (plus your sf directory if your app needs it). In this case only the web files are publicly accessible. However, if you can only access the public_html directory and cannot add directories outside it, you can put all your files in a folder inside public_html and secure it (I think .htaccess can do the trick). The web files should also be in public_html, but you must change the require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php'); line of your index.php to point to the new location of the ProjectConfiguration file.
But since this is a shared hosting environment, it is still possible that others may have access to your files but this is mostly on how the hosting provider setup their servers.
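A sketch of the .htaccess approach under those assumptions (Apache 2.2 syntax; the folder name myproject is a placeholder):

# public_html/myproject/.htaccess -- block all direct web access to the app code
Order deny,allow
Deny from all

and the adjusted bootstrap line in public_html/index.php:

require_once(dirname(__FILE__).'/myproject/config/ProjectConfiguration.class.php');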
