To use the live reload functionality I have to use gradle quarkusDev as the entry point to build and run my app/service in a Docker container.
But since I am working with a monorepo, a lot of dependencies get downloaded as part of the build, along with many tasks that were already completed by my local Gradle.
Is there any way I can mount my local .gradle folder onto .gradle in the Docker container so that it can reuse the already processed tasks and downloaded dependencies?
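For illustration, the kind of mount being asked about could look roughly like this (the image name is hypothetical, and /root/.gradle assumes the container runs Gradle as root; adjust the target to that user's Gradle home):
# mount the host's Gradle cache and the project sources into the container, then start dev mode
docker run -it -v "$HOME/.gradle":/root/.gradle -v "$PWD":/project -w /project my-quarkus-build-image gradle quarkusDev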
I have a multi-container application, with nginx as web server and reverse-proxy, and a simple 'Hello World' Streamlit app.
It is available on my Gitlab.
I am totally new to DevOps, and would therefore like to leverage Gitlab's Auto DevOps so as to make it easy.
By default, GitLab's Auto DevOps expects only one Dockerfile, located at the root of the project (source).
Surprisingly, I only found one resource on my multi-container use case that aimed to answer this issue: https://forum.gitlab.com/t/auto-build-for-multiple-docker-containers/46949
I followed the advice and made only slight changes to the .gitlab-ci.yml for the paths to my Dockerfiles.
But then I have an issue with the Dockerfiles not finding the files in their folders:
The app's Dockerfile doesn't find requirements.txt:
And nginx's Dockerfile doesn't find project.conf.
It seems that the DOCKERFILE_PATH: src/nginx/Dockerfile variable only gives access to the Dockerfile itself, but doesn't treat this path as the location for the build.
How can I customize this .gitlab-ci.yml so that the build passes correctly?
Thank you very much!
The reason the files are not being found is how Docker's build context works. Since you're running docker build from the root, your context is the root rather than the path to your Dockerfile. That means your docker build command is trying to find /requirements.txt instead of src/app/requirements.txt. You can fix this relatively easily by executing a cd into your src/app directory before you run docker build, and removing the -f flag from docker build (since you no longer need to specify the Dockerfile's folder).
Since each job executes in an isolated container, you don't need to worry about changing back to your build root, because the job never runs any other non-Docker commands.
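Applied to the layout from the question, the job's build script would then look roughly like this (the image name uses GitLab's predefined CI_REGISTRY_IMAGE variable; treat the exact tag as an assumption):
# change into the folder that holds the Dockerfile and requirements.txt, so they are in the build context
cd src/app
docker build -t "$CI_REGISTRY_IMAGE/app" .
docker push "$CI_REGISTRY_IMAGE/app"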
I am building Docker containers using gcloud:
gcloud builds submit --timeout 1000 --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile
The last 2 steps of the Dockerfile contain this:
COPY script.sh .
CMD bash script.sh
Many of the changes I want to test are in the script, so the Dockerfile stays intact. Building these Dockerfiles on Linux with docker-compose results in a very quick build because it detects that nothing has changed. However, doing this on gcloud, I notice the complete image being rebuilt even though only a minor change was made to script.sh.
Any way to prevent this behavior?
Your local build is fast because you already have all remote resources cached locally.
It looks like using the Kaniko cache would speed up your build a lot (see https://cloud.google.com/cloud-build/docs/kaniko-cache#kaniko-build).
To enable the cache on your project run
gcloud config set builds/use_kaniko True
The first time you build the container it will populate the cache (kept for 6 hours by default), and subsequent builds will be faster since dependencies are cached.
If you need to further speed up your build, I would use two containers and keep both in my GCP Container Registry (see the sketch after this list):
The first one as a cache with all remote dependencies (OS / language / framework / etc.).
The second one as the one you actually need, with just the COPY and CMD, using the cache container as its base.
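A rough sketch of that split, with hypothetical image names and Dockerfile locations:
# base image with OS / language / framework dependencies; rebuilt only when dependencies change
gcloud builds submit --timeout 1000 --tag eu.gcr.io/$PROJECT_ID/dockername-base Dockerfiles/base
# app image: its Dockerfile starts with FROM eu.gcr.io/$PROJECT_ID/dockername-base
# and only adds the COPY script.sh / CMD steps, so this build stays fast
gcloud builds submit --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile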
Actually, gcloud has a lot to do:
The gcloud builds submit command:
compresses your application code, Dockerfile, and any other assets in the current directory as indicated by .;
uploads the files to a storage bucket;
initiates a build using the uploaded files as input;
tags the image using the provided name;
pushes the built image to Container Registry.
Therefore the complete build process can be time-consuming.
There are recommended practices for speeding up builds such as:
building leaner containers;
using caching features;
using a custom high-CPU VM;
excluding unnecessary files from upload (for example via a .gcloudignore file, sketched below).
Those could optimize the overall build process.
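For the last point, gcloud honors a .gcloudignore file at the project root; a minimal sketch (the entries are just examples):
# files and folders listed here are not uploaded to the storage bucket by gcloud builds submit
cat > .gcloudignore <<'EOF'
.git
node_modules/
*.md
EOF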
I am basically trying to create build and release pipelines for a React JS app. The tasks in the build pipeline include npm install, npm run build, and then building and pushing a Docker image using a Dockerfile (nginx serving the build folder). Then in the release pipeline I want to do a kubectl apply on the nginx YAML.
My problem is that the npm run build task is not creating the build folder in the Azure repo, which is where I pushed my code.
I tried removing the line "#production /build" from the .gitignore file in the Azure repo.
Dockerfile used for building image
FROM nginx
COPY /build /usr/share/nginx/html
Since the build folder was not created in the Azure repo, the build Docker image task in the build pipeline keeps failing. Please help.
Here is a contributor in a case with a similar issue giving a solution.
His solution is:
For "npm build" task, the custom command (In question above, tried
"build" and "npm run-script build") should be "run-script build". The
build has successfully created the dist folder.
For details, you can refer to this case.
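In other words, the command the Npm task ends up running should be equivalent to the following; whether the output folder is named build or dist depends on the project's build script:
npm install
npm run-script build   # for a create-react-app project this emits the build/ folder that the Dockerfile's COPY expects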
I have a custom Docker container in which I perform the build and tests of a project. It is somehow integrated with Travis CI. Now I want to run the Coverity scan analysis from within Travis CI as well, but the tricky part (if I understand the Coverity docs correctly) is that I need to run the build.
Now, according to cov-build --help:
The cov-build or cov-build-sbox command intercepts all calls to the
compiler invoked by the build system and captures source code from the
file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable considering Docker's philosophy) and emits no files.
What I do not want (at least while there is hope for a better solution):
to install locally all the necessary stuff to build in the container, only to be able to run the Coverity scan.
to run cov-build from within the container, since:
I believe this would increase the docker image size significantly
I use the Travis CI addon for the Coverity scan and this would complicate things a lot.
The Travis CI part is just FWIW; I tried all of that locally and it doesn't work either.
I would appreciate any suggestions on this problem. Thank you.
Okay, I sort of solved the issue.
I downloaded and slightly modified (just a few modifications to fit my environment) the script that Travis uses to download and run the Coverity scan.
Then I installed Coverity on the host machine (in my case the Travis CI machine).
I ran the docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the docker image size.
I executed the cov-build command and uploaded the analysis using another part of the script directly from the docker container.
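For reference, a condensed sketch of that flow (the install path, image and container names are hypothetical):
# Coverity installed on the Travis host under /opt/coverity, mounted read-only into the container
docker run -dit --name build_env -v /opt/coverity:/opt/coverity:ro -v "$PWD":/src -w /src my-build-image
# run the intercepted build inside the container; the analysis output lands in /src/cov
docker exec build_env /opt/coverity/bin/cov-build --dir=./cov sh -c "<custom build commands>"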
Hope this helps someone struggling with a similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile since your compiler is already running), and should result in a build capture.
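A hedged sketch of what that substitution could look like in a make-based C build (the compiler and variable are just examples):
# each compile goes through cov-translate, which captures the source and then
# invokes the real compiler via --run-compile
make CC='cov-translate --dir=./cov --run-compile gcc'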
Here is the solution I use:
In "script", "after_script" or another phase in Travis job's lifecycle you want
Download the Coverity tool archive using wget (the complete command to use can be found in your Coverity Scan account)
Untar the archive into a coverity_tool directory
Start your docker container as usual; there is no need to mount the coverity_tool directory as a volume if you created coverity_tool inside the directory from which the docker container is started
Build the project using the cov-build tool inside docker
Archive the generated cov-int directory
Send the result to Coverity using a curl command
Step 6 should be feasible inside the container, but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN needs to be encrypted and exported as an environment variable.
A concrete example is often more understandable than a long text; here is a commit that applies the above steps to build and send results to Coverity Scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c
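For quick reference, a condensed shell sketch of steps 2 to 7 above (the project name, email, and paths are placeholders; the exact download command comes from your Coverity Scan account):
wget https://scan.coverity.com/download/linux64 --post-data "token=$COVERITY_SCAN_TOKEN&project=my%2Fproject" -O coverity_tool.tgz
mkdir coverity_tool && tar xzf coverity_tool.tgz --strip-components=1 -C coverity_tool
# build through cov-build inside the container; coverity_tool sits in the mounted directory
docker run --rm -v "$PWD":/src -w /src my-build-image coverity_tool/bin/cov-build --dir=cov-int <custom build commands>
tar czf cov-int.tgz cov-int
curl --form token="$COVERITY_SCAN_TOKEN" --form email=me@example.com --form file=@cov-int.tgz --form version=1.0 "https://scan.coverity.com/builds?project=my%2Fproject"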
Is there a way to only download the dependencies but not compile the source?
I am asking because I am trying to build a Docker build environment for my bigger project.
The idea is that during docker build I clone the project, download all dependencies, and then delete the code.
Then use docker run -v to mount the frequently changing code into the docker container and start compiling the project.
Currently I just compile the code during build and then compile it again on run. The problem is that when a dependency changes I have to build from scratch, and that takes a long time.
Run sbt's update command. Dependencies will be resolved and retrieved.
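A minimal sketch of the build/run split described in the question (the image name and paths are hypothetical):
# at image build time the Dockerfile copies build.sbt and project/, then runs "sbt update",
# so the resolved dependencies are baked into the image
docker build -t project-deps .
# at run time, mount the frequently changing sources and compile against the cached dependencies
docker run --rm -v "$PWD":/workspace -w /workspace project-deps sbt compile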