We have our base images set up on Docker Hub, and they rebuild whenever the upstream repositories change.
In our Dockerfiles we install several packages with apt-get.
We'd like to have the most recent versions of these packages at all times. Since these packages have nothing to do with the upstream repos, we'd need to reinstall them into our base image regularly.
One seemingly simple solution would be a scheduled rebuild of our images, for example daily or hourly. Each run would pull in the latest package versions and bake them into the base image.
However, I can't find any way to do this. There's no option for it in the Docker Hub UI, and I can't find any reference to an API call or webhook that I could trigger from a cron job.
Has anyone come across a way to set up scheduled builds, or a reason why something this (seemingly) straightforward is unsupported?
There are Build Triggers ("Trigger your Automated Build by sending a POST to a specific endpoint") under Configure Automated Builds. Unfortunately, this feature was changed recently and I'm not able to find current documentation. There used to be an option to POST some data, e.g. 'docker_tag=dev', to trigger a specific build by Docker tag/branch/...
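If the trigger for your repository is still active, a scheduled rebuild could be as simple as POSTing to that endpoint from a cron job. A minimal sketch, assuming the trigger URL and token are copied from your repo's Build Triggers page (the exact URL format and payload may have changed since this feature was reworked):

# Sketch only: <user>, <repo> and <trigger-token> are placeholders from the Build Triggers page.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"docker_tag": "dev"}' \
  https://registry.hub.docker.com/u/<user>/<repo>/trigger/<trigger-token>/
# A crontab entry such as "0 3 * * * <the curl above>" would then give you a nightly rebuild.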
Related
In CI/CD, is it possible to create a pull request trigger that runs only on changes to specific files/folders?
I know that Google Cloud Build does not support this (some other popular tools don't either). If that's the case, what are my alternatives?
This is not a complete answer, just a few cents from me: you could put a check into a shell script that looks for a path/file in the workspace in Jenkins (or whatever build tool you are using); if that path/file is part of the git commit, then proceed with the build. A rough sketch is below.
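A minimal sketch of that check; the watched path and the build command are placeholders, and it assumes comparing against the previous commit is enough (for pull requests you would diff against the target branch instead):

# Skip the build unless the last commit touched the watched path.
WATCHED_PATH="services/api/"            # placeholder path
if git diff --name-only HEAD~1 HEAD | grep -q "^${WATCHED_PATH}"; then
  echo "Changes detected under ${WATCHED_PATH}, running build"
  ./run_build.sh                        # placeholder build step
else
  echo "No relevant changes, skipping build"
fi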
I've been pushing new versions of an app as Docker images to GCP Container Registry. Now I want to create a Jenkins job that, when you pick Build with Parameters, shows a list of, say, the last 15 tagged images and lets you choose which version of the app to deploy. The deployment logic itself I can handle, but I've been wondering how to display those versions live. Does anyone have any ideas? I know about the Active Choice Parameter and its Groovy Script section (I used it before to list Git branches), but I'm not sure how to authenticate to GCP and run
gcloud container images list-tags gcr.io/my/image
inside the Groovy Script or some other alternative
EDIT:
I see an Amazon version of a plugin that can do this, but no sign of a GCP one. So I guess good old scripts are the only option.
I think I know the answer and will just have to create a service account for it, but maybe someone will come up with a better solution.
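A rough sketch of the service-account route, assuming a key file is available on the Jenkins agent (the key path and image name are placeholders, and the exact gcloud flags should be double-checked). The Active Choice Groovy script could shell out to something like this and split the output into the parameter choices:

# Authenticate with a service account key stored on the Jenkins agent (placeholder path),
# then list the most recent tags for the image.
gcloud auth activate-service-account --key-file=/var/jenkins_home/gcp-key.json
gcloud container images list-tags gcr.io/my/image \
  --limit=15 --sort-by=~timestamp --format='value(tags)'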
GitLab CI's default mode is to use git clone in every job in a pipeline.
This is time-consuming, especially since after cloning we need to install/update all dependencies.
I'd like to flip the order of our pipelines, and start with git clone + docker build, then run all subsequent jobs using that image without cloning and rebuilding for each job.
Am I missing something?
Is there any reason that I'd want to clone the repo separately for each job if I already have an image with the current code?
You are not missing anything. If you know what you are doing, you don't need to clone your repo for each stage in your pipeline. If you set the GIT_STRATEGY variable to none, your test jobs, or whatever they are, will run faster and you can simply run your docker pull commands and the tests that you need. Just make sure that you use the correct docker images, even if you start many parallel jobs. You could for example use CI_COMMIT_REF_NAME as part of the name of the docker image.
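A rough sketch of what the job scripts might look like; the test command is a placeholder, GIT_STRATEGY: none would be set on the test job in .gitlab-ci.yml, and CI_REGISTRY_IMAGE / CI_COMMIT_REF_NAME are GitLab's predefined variables (CI_COMMIT_REF_SLUG may be safer as a tag, since branch names can contain characters that are not valid in image tags):

# build job: bake the current code into an image named after the branch
docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME"

# test job (GIT_STRATEGY set to none, so nothing is cloned): reuse that image
docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME"
docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME" ./run_tests.sh    # placeholder test command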
As to why GitLab defaults to using git clone, my guess is that this is the least surprising behavior. If you imagine someone new to GitLab and new to CI, it will be much easier for them to get up and running if each job simply clones the whole repo. You also have to remember that not everyone builds Docker images in their jobs. I would guess the most common setup is either a programming language that doesn't need to be compiled, for example Python, or a build job that produces binaries and a test job that runs them. They can then use artifacts to send the binaries from the build job to the test job.
This is easy and it works. When people then realize that a lot of the time of their test jobs is spent just cloning the repository, they might look into how to change the GIT_STRATEGY, and to do other things to optimize their particular build.
One of the reasons for using CI is to execute your repo from a fresh state. This cannot be done if you skip the git clone process in certain jobs. A job may modify the repo's state by deleting files or generating new ones; only the artifacts that are explicitly declared in the pipeline should be shared between jobs, nothing else.
I have a repository that I can create a release for. I have Jenkins set up, and since Jenkins is hosted behind a firewall that blocks any communication from outside the network, the GitHub webhook doesn't work. Getting a reverse proxy to work is also a bit of a challenge for me. I understand that the GitHub webhook sends a JSON payload and that I could qualify it based on the release, but as I mentioned, this won't work because Jenkins and GitHub cannot talk to each other.
Therefore, I tried this solution: filtering the branches or tags that Jenkins will build on. I tried the following, and none of it worked; every time I run a build, Jenkins just builds it.
I also tried the regex below:
:refs\/tags\/(\d+\.\d+\.\d+)
I also tried [0-9] instead of \d. It builds it every single time.
Am I missing something, or is that how Jenkins works? Even though we qualify the builds to run only on certain tags or releases, if we click Build Now, it just runs every single time?
My requirement is very simple: I want the Jenkins build to run only on the release I created, even if that release is 'n' commits behind master. How can I achieve this?
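This doesn't address the branch/tag filter configuration itself, but one hedged workaround sketch (not from the question, purely an assumption) would be an early build step that aborts unless HEAD sits exactly on a release tag matching the expected pattern:

# Fail fast unless HEAD is exactly on a semver release tag.
TAG=$(git describe --exact-match --tags 2>/dev/null || true)
if [[ ! "$TAG" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  echo "HEAD is not on a release tag, aborting build"
  exit 1
fi
echo "Building release $TAG"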
I run a Docker image for data processing on Windows Server 2016 (a single VM, on premises). My image is stored in an Azure Container Registry. The code does not change often. To get security updates, I'd like a rebuild and release after the microsoft/windowsservercore base image is updated.
Is there a best-practice way to do this?
I thought about 3 ways of solving this:
Run a scheduled build every 24h: pull microsoft/windowsservercore, pull my custom image, run PowerShell to get the build dates and compare them (or use some of the history IDs; a rough sketch of that comparison is below the list). If a rebuild is needed, build the new image and tag the build. Configure the release to run only on this tag.
Run a Job to check the update time of the docker image and trigger the build with a REST request.
Put a basic Dockerfile on GitHub. Set up an automated build with a trigger on microsoft/windowsservercore and configure the webhook to point at a web service, which starts the build via REST.
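For option 1, the date comparison might look roughly like this; the registry and image names are placeholders, and since the Created timestamps are ISO 8601, a plain string comparison is enough:

# Pull both images and compare their creation timestamps (ISO 8601 sorts lexicographically).
docker pull microsoft/windowsservercore
docker pull myregistry.azurecr.io/dataprocessing:latest     # placeholder image name
BASE_CREATED=$(docker inspect --format '{{.Created}}' microsoft/windowsservercore)
MINE_CREATED=$(docker inspect --format '{{.Created}}' myregistry.azurecr.io/dataprocessing:latest)
if [[ "$BASE_CREATED" > "$MINE_CREATED" ]]; then
  echo "Base image is newer than my image, triggering rebuild"
  # queue the build here, e.g. via a REST call to the build system
fi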
But I really don't like any of these ideas. Is there a better option?
You can use Azure Container Registry webhooks directly; the simple workflow:
Build a Web API project that queues a build for each incoming webhook request through the Queue a Build REST API (a sketch of that REST call is below)
Create an Azure Container Registry webhook to call the Web API (step 1)
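A rough sketch of the REST call such a Web API (or any webhook handler) could make, assuming a personal access token and a known build definition ID; the organization, project, definition ID and api-version below are placeholders:

# Queue a build via the Azure DevOps REST API (PAT, org, project and definition id are placeholders).
curl -s -u ":$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json" \
  -d '{"definition": {"id": 42}}' \
  "https://dev.azure.com/my-org/my-project/_apis/build/builds?api-version=5.1"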
I chose option three. I therefore set up a GitHub repository with a one-line Dockerfile:
FROM alpine
I used the alpine image and not windowsservercore, because automated builds currently do not support Windows images. I configured an automated build on Docker Hub and added microsoft/windowsservercore as a linked repository.
Then I set up an MS Flow with an HTTP request trigger to start the build, and added the Flow URL as a new webhook on the automated build.
For me these are too many moving parts that have to be configured and work together, but I know of no better way.