Include external files in ProcessMaker 4 PHP scripts

I am using ProcessMaker 4 Community, and I need to run a PHP script that includes some external files containing classes I defined. Since the PHP executor runs in a container, it does not have access to the file system on my computer. How can I make the included files available in the executor container?
I understand that one option would be to create a new executor with a Dockerfile that copies the required PHP files into the executor's Docker image, but that means rebuilding the image each time I make a change to the included files.
It would be great if I could mount a directory on my local host into the executor's container, but I have not figured out whether I can do that in ProcessMaker.
Any suggestions or information on how to work around this?
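If the ProcessMaker instance is started with Docker Compose, one thing worth trying is a compose override that bind-mounts the directory of shared classes into the executor container. This is only a sketch; the service name (`php-executor`) and both paths are assumptions that would need to match the actual installation:

```yaml
# docker-compose.override.yml (sketch; service name and paths are assumptions)
services:
  php-executor:
    volumes:
      # Bind-mount the host directory containing your class files, read-only
      - /home/me/my-classes:/opt/my-classes:ro
```

The script could then `require_once '/opt/my-classes/MyClass.php';` and pick up edits without rebuilding the image.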

Related

Separate shell scripts from application in docker container?

I have an ftp.sh script that downloads files from an external FTP server to my host, and another (Java) application that imports the downloaded content into a database. Currently, both run on the host, triggered by a cronjob as follows:
importer.sh:
#!/bin/bash
source ftp.sh
java -jar app.jar
Now I'd like to move my project to Docker. From a design point of view: should the .sh script and the application reside in separate containers, or should both be bundled into one container?
I can think of the following approaches:
Run the ftp script on host, but java app in docker container.
Run the ftp script in its own docker container, and the java app in another docker container.
Bundle both the script and the Java app into one container, then call a wrapper script with: ENTRYPOINT ["wrapper.sh"]
So the underlying question is: should each Docker container serve only one purpose (either download files, or import them)?
Sharing files between containers is tricky and I would try to avoid doing that. It sounds like you are trying to set up a one-off container that does the download, then does the import, then exits, so "orchestrating" this with a shell script in a single container will be much easier than trying to set up multiple containers with shared storage. If the ftp.sh script sets some environment variables then it will be basically impossible to export them to a second container.
From your workflow it doesn't sound like building an image containing the file and the import tool is the right approach. If it were, I could envision a workflow where you ran ftp.sh on the host, or as the first part of a multi-stage build, and then COPYed the file into the image. But for a workflow that is "download a file, then import it into a database, where the file changes routinely", that isn't what you're after.
The setup you have now should work fine packaged into a container. I'd have some generic advice in a code-review context, but your last option, bundling everything into one image and running the wrapper script as the main container process, makes sense.
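The bundled option could look like the following sketch; the base image, file names, and Java version are assumptions:

```dockerfile
# Single-image approach: bundle the download script and the importer together
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY ftp.sh app.jar wrapper.sh ./
RUN chmod +x ftp.sh wrapper.sh
# wrapper.sh sources ftp.sh (so any environment variables it sets are visible),
# then runs the importer, e.g.: ". ./ftp.sh && exec java -jar app.jar"
ENTRYPOINT ["./wrapper.sh"]
```

The host cronjob then shrinks to a single `docker run` of this image.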

Docker for custom build process

I have an executable that performs a number of tasks such as:
Copy .NET source code to a directory
Run another executable that modifies the source code
Run MSBuild to build the code
Publish the code
Run add-migration to create database
Run another executable that populates the database from files
etc.
I've set this up on my laptop, and everything works correctly, and now I want to publish this to the cloud.
Is it possible to create a docker image that does all these kinds of things, and run it on Azure Container Instances? Or do I need to run this kind of system on a VM?
I'm new to Docker so I don't know what it's capable of, but if I can run this on ACI as needed, that would be great, so I'm not paying for a VM 24/7 when this process only happens a few times a day.
Docker is an open-source platform designed to create, deploy, and run applications. It uses OS-level virtualization: Docker runs applications in containers on the host. A container is similar to a virtual machine, but with the advantage that there is no preallocation of RAM as there is with a VM.
Is it possible to create a docker image that does all these kinds of things, and run it on Azure Container Instances? Or do I need to run this kind of system on a VM?
Yes, it is possible. You can define the image with a Dockerfile (or orchestrate several images with a Docker Compose YAML file), specify all the tasks you want to perform, build the image, push it to a container registry, and then create a container instance from it.
For reference, see these threads on copying source code into a Docker image using a Dockerfile, and on creating a database in a Docker container using a YAML file.
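For the build-and-publish steps, a multi-stage Dockerfile is the usual pattern. This is only a sketch assuming a typical .NET project layout; the project name is made up, and the source-modifying tool and database steps would run as additional RUN or entrypoint commands:

```dockerfile
# Sketch of a multi-stage build for a .NET project (project name is an assumption)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
# A tool that modifies the source code would run here, before the build
RUN dotnet publish MyApp.csproj -c Release -o /out

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

An image like this can be pushed to a registry and run on ACI on demand, so nothing is billed between runs.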

shared folder for a docker container

It is certainly a basic question but I don't know how to deal with this issue.
I am creating a simple Docker image that executes Python scripts and will be deployed on different users' Windows laptops. It needs a specific shared folder in order to write its outputs at the end of the process.
The users are not able to manage any technical tooling like Docker or even a simple terminal.
So they run it with a .bat file in which I put the docker command with the -v option.
But obviously the users' paths are different on each laptop. How can I create a standard image that avoids this machine-specific mount path?
thanks a lot
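One common way around machine-specific paths is to anchor the mount on an environment variable that exists on every Windows machine, such as %USERPROFILE%, so the same .bat file works everywhere. A sketch of such a launcher; the image name and container path are assumptions:

```bat
@echo off
rem Create the output folder under the user's profile if it does not exist
if not exist "%USERPROFILE%\app-output" mkdir "%USERPROFILE%\app-output"
rem Mount it into the container at the fixed path the Python scripts write to
docker run --rm -v "%USERPROFILE%\app-output:/app/output" myimage
```

The image itself stays standard: it always writes to /app/output, and only the .bat file decides where that lands on the host.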

Using Transfer for on-premises option to transfer files

I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker script on Linux and the GCP bucket is created. I now need to run a docker run command to copy files. My question is: how do I specify the source and target in the docker command? For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789
The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and target bucket+prefix. When you do this, the service will contact the agents and use them to push the data to Google Cloud in parallel, for faster transfers. See the following links for more details:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job

How to place files on shared volume from within Dockerfile?

I have a Dockerfile which builds an image that provides a complicated tool-chain environment for compiling a project on a volume mounted from the host machine's file system. (Another reason for this is that I don't have a lot of space on the image.)
The Dockerfile builds my tool-chain into the OS image, and then prepares the source by downloading packages to be placed on the host's shared volume. Normally from there I'd log into the container and execute commands to build. And this is the problem: I can download the source in the Dockerfile, but how would I then get it onto the shared volume?
Basically I have ...
ADD http://.../file mydir
VOLUME /home/me/mydir
But then, of course, I get the error "cannot mount volume over existing file ..."
Am I going about this wrong?
You're going about it wrong, but you already suspected that.
If you want the source files to reside on the host filesystem, get rid of the VOLUME directive in your Dockerfile, and don't try to download the source files at build time. This is something you want to do at run time. You probably want to provision your image with a pair of scripts:
One that downloads the files to a specific location, say /build.
Another that actually runs the build process.
With these in place, you could first download the source files to a location on the host filesystem, as in:
docker run -v /path/on/my/host:/build myimage fetch-sources
And then you can build them by running:
docker run -v /path/on/my/host:/build myimage build-sources
You're currently trying to muck about with volumes during the image build process, which is almost never what you want: data stored in a volume is explicitly excluded from the image, and the build process doesn't permit you to conveniently mount host directories inside the container.
With this model:
You are able to download the files into a persistent location on the host, where they will be available to you for editing, or re-building, or whatever.
You can run the build process multiple times without needing to re-download the source files every time.
I think this will do pretty much what you want, but if it doesn't meet your needs, or if something is unclear, let me know.
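Concretely, provisioning the image with the two scripts might look like this sketch; the base image, package list, and script names are assumptions:

```dockerfile
# Provision the tool-chain image with the two helper scripts.
# Note there is no VOLUME directive: /build is supplied at run time with -v.
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates build-essential \
    && rm -rf /var/lib/apt/lists/*
COPY fetch-sources build-sources /usr/local/bin/
RUN chmod +x /usr/local/bin/fetch-sources /usr/local/bin/build-sources
```

Here `fetch-sources` would download the source archive into /build, and `build-sources` would compile whatever it finds there, matching the two `docker run` invocations above.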
