I want to run the jobs below in a Docker container:
1. Install and start MarkLogic locally.
2. Create a range index for the system axes.
3. Create the temporal axes.
4. Create a uni-temporal collection using the temporal axes.
Job no. 1 is done.
For the remaining jobs, I want my Dockerfile to execute XQueries.
Is it possible to execute an XQuery file from inside a bash script?
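Yes. One common approach (a sketch, assuming MarkLogic's REST API is reachable on port 8000 and that the remaining jobs live in a hypothetical create-axes.xqy file) is to POST the file to the /v1/eval endpoint with curl from the bash script:
# run an XQuery file against MarkLogic's /v1/eval REST endpoint
# (assumes default admin credentials; adjust host, port and auth for your setup)
curl --digest -u admin:admin \
  -X POST "http://localhost:8000/v1/eval" \
  --data-urlencode "xquery@create-axes.xqy"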
I use custom Docker images (mostly based on phusion) for GitLab CI, and that works fine. But sometimes an image requires sourcing a shell file to work properly (to set PATH, LD_LIBRARY_PATH, etc.).
When running an interactive shell from the Docker image (e.g. docker run -it <image_name> /bin/bash), this can be fixed by simply adding the appropriate source command to /etc/profile or similar. But the scripts in GitLab CI are not run in an interactive shell, so the paths are not properly set up. I work around this by adding the source (or .) command to the GitLab CI script itself, but this is image-specific and should live in the image, not in the script.
Is there anything I can do that will effectively source the file directly in the image (or at least when GitLab CI runs the script on the image)? I could manually inspect what environment changes the sourced file introduces and put them in ENV instructions, but I'm looking for something less fragile when rebuilding the image from possibly updated sources.
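One option worth sketching: bash sources the file named in BASH_ENV when it starts non-interactively, which is how GitLab CI runs its scripts. This assumes your setup script lives at a hypothetical /opt/env.sh:
# Dockerfile sketch: make non-interactive bash shells source the setup file
# (use whichever phusion/baseimage tag your image already builds from)
FROM phusion/baseimage:master
COPY env.sh /opt/env.sh
# bash reads $BASH_ENV on non-interactive startup, so CI scripts get the PATH etc.
ENV BASH_ENV=/opt/env.sh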
I am building Docker images using gcloud:
gcloud builds submit --timeout 1000 --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile
The last 2 steps of the Dockerfile contain this:
COPY script.sh .
CMD bash script.sh
Many of the changes I want to test are in the script, so the Dockerfile stays intact. Building these images locally on Linux with docker-compose results in a very quick build, because Docker detects that nothing has changed. However, on gcloud I notice the complete image being rebuilt even though only a minor change was made to script.sh.
Any way to prevent this behavior?
Your local build is fast because you already have all remote resources cached locally.
It looks like using the kaniko cache would speed up your build a lot (see https://cloud.google.com/cloud-build/docs/kaniko-cache#kaniko-build).
To enable the cache on your project, run:
gcloud config set builds/use_kaniko True
The first time you build the container it will feed the cache (kept for 6 hours by default), and subsequent builds will be faster since the dependencies will be cached.
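If the default window is too short, the cache TTL can also be raised (a sketch; the builds/kaniko_cache_ttl property is described in the kaniko-cache docs linked above, so double-check it against your gcloud version):
# keep cached layers for 12 hours instead of the default 6
gcloud config set builds/kaniko_cache_ttl 12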
If you need to further speed up your build, I would use two containers and keep both in your GCP Container Registry:
The first one acts as a cache, with all remote dependencies (OS / language / framework / etc.).
The second one is the one you actually need, with just the COPY and CMD, using the cache container as its base (see the sketch below).
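For example, a sketch with hypothetical image names and packages:
# Dockerfile.base: heavy, rarely changing dependencies; build and push this once
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends curl bash \
 && rm -rf /var/lib/apt/lists/*

# Dockerfile: the image you rebuild on every change; only the cheap steps remain
FROM eu.gcr.io/my-project/base:latest
COPY script.sh .
CMD ["bash", "script.sh"]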
Actually, gcloud has a lot to do:
The gcloud builds submit command:
compresses your application code, Dockerfile, and any other assets in the current directory as indicated by .;
uploads the files to a storage bucket;
initiates a build using the uploaded files as input;
tags the image using the provided name;
pushes the built image to Container Registry.
Therefore the complete build process could be time-consuming.
There are recommended practices for speeding up builds such as:
building leaner containers;
using caching features;
using a custom high-CPU VM;
excluding unnecessary files from the upload (see the .gcloudignore sketch below).
Those could optimize the overall build process.
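For the last point, gcloud builds submit honors a .gcloudignore file with .gitignore-style syntax; a minimal sketch (the entries are just examples):
# .gcloudignore: keep large or irrelevant files out of the uploaded build context
.git
*.md
tests/
node_modules/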
I want my Cloud Build to push an image to a registry with an incremented tag. So, when the trigger arrives from GitHub, build the image, and if the latest tag was 1.10, tag the new one 1.11. Likewise, the 1.11 value will be used in multiple other steps of the build.
Reading the registry and incrementing the tag is easy (in a bash Cloud Build step), but Cloud Build has no way to pass parameters between steps. (Substitutions come from outside the Cloud Build process, for example from Git tags, and are not generated inside the process.)
This StackOverflow question and this article say that Cloud Build steps can communicate by writing files to the workspace directory.
That is clumsy. But worse, it requires using shell steps exclusively, not the native docker-building steps, nor the native images field.
How can I do this?
Sadly you can't. Each Cloud Build step runs in its own sandbox, and only the /workspace directory is mounted across steps. Likewise, environment variables, installed binaries, and so on don't persist from one container to the next.
You have to go through a shell script each time :( The easiest way is to keep a file in your /workspace directory (for example an env.var file):
# load the environment variable
source /workspace/env.var
# Add variable
echo "NEW=Variable" >> /workspace/env.var
For this, Cloud Build is annoying...
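Put together in a cloudbuild.yaml, the pattern looks something like this (a sketch: the builder images are the standard ones, the tag logic is a placeholder, and $$ escapes the shell variable from Cloud Build's own substitution pass):
steps:
# step 1: compute the next tag (placeholder logic) and persist it for later steps
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'echo "TAG=1.11" > /workspace/env.var']
# step 2: a fresh container reloads the variable and uses it
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'source /workspace/env.var && docker build -t "eu.gcr.io/$PROJECT_ID/app:$$TAG" .']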
I'd like to be able to control the source of a file (a Java archive) in a Dockerfile: it is either downloaded (with curl) or a local file on the same machine where I build the Docker image.
I'm aware of ways to control RUN statements, e.g. Conditional ENV in Dockerfile, but since I need access to the filesystem outside the Docker build image, a RUN statement won't do. I'd need a conditional COPY or ADD or a workaround.
I'm interested in built-in Docker functions/features that avoid the use of more than one Dockerfile, or of wrapping the Dockerfile in a script using templating software (those are just the workarounds popping into my head).
You can use a multi-stage build, which is a fairly recent Docker feature:
https://docs.docker.com/develop/develop-images/multistage-build/
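A sketch of how this can express a conditional COPY (assumes a reasonably recent Docker/BuildKit; the stage names, download URL, and JAR path are all hypothetical):
# choose the source at build time: docker build --build-arg SOURCE=local .
ARG SOURCE=download

FROM alpine AS jar-download
RUN apk add --no-cache curl \
 && curl -fSL -o /app.jar https://example.com/app.jar

FROM alpine AS jar-local
COPY app.jar /app.jar

# the build arg selects which stage the final COPY --from pulls the JAR from
FROM jar-${SOURCE} AS jar
FROM eclipse-temurin:17-jre
COPY --from=jar /app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]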
How can I run multiple JMX files in one Docker image, with all of the results shown in a single place?
Is this possible without Taurus?
or
Can I run all the JMX files one by one in Docker, picking the JMX file names up dynamically?
I am using Linux.
You can run several JMX files at the same time using e.g. the GNU Parallel program; the syntax would be something like:
parallel ::: "./jmeter -n -t test1.jmx -l result.jtl" "./jmeter -n -t test2.jmx -l result.jtl"
Another option is using a different result file for each .jmx script and, once the tests have finished, combining them with the Merge Results Tool. You can install the Merge Results Tool using the JMeter Plugins Manager.
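For the "one by one" variant with dynamic file names, a plain bash loop is enough (a sketch, assuming jmeter is on the PATH inside the container and the test plans sit in a hypothetical /tests directory):
# run every .jmx file sequentially, writing one result file per test plan
for t in /tests/*.jmx; do
  name=$(basename "$t" .jmx)
  jmeter -n -t "$t" -l "/results/${name}.jtl"
done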