I'm currently using Jenkins with clustered slaves.
There is a multi-configuration job (triggered by a git hook) which does nothing; its only purpose is to pull a git repository on all slaves. This repository stores all the scripts needed by other Jenkins jobs. Thanks to this, we can update the jobs' build scripts through git and ease the maintenance of our Jenkins instance (I'm the only one familiar with Jenkins, and so I set up this mess for my team).
However, the scripts aren't pulled to the same place on each slave:
On two slaves, the repository is pulled into /builds/workspace/<multi-configuration job's name>/label/<slave's name> and into /builds/workspace/<multi-configuration job's name>.
On the last slave, the repository is only pulled into /builds/workspace/<multi-configuration job's name>/label/<slave's name>.
So, I have two questions:
Is it a good idea to use such a multi-configuration job to synchronize build scripts on all the slaves?
Why doesn't the multi-configuration job pull the repository to the same place on all slaves?
About the configuration:
The source code management and build triggers are configured to allow triggering from git hooks (Poll SCM is checked).
There is no build step.
Here is the configuration matrix:
(The master node is unchecked)
I'm currently trying to set up a build job connected to multiple (3) SVN repositories with a single pipeline, using the Poll SCM scheme. All the contents from the repositories build into a single artifact. The repositories are, say:
https://myrepo.com/mainSrc/main
https://myrepo.com/libs/library1
https://myrepo.com/libs/library2
When I set all these repos inside the Pipeline configuration, the build job successfully checks out all the paths into the workspace.
However, the point is that I need to track which repository triggered the build job, in order to write some conditional steps. Is there a solution for this? I googled for it; there are some ways to obtain the SVN revision for a path, but that's not what I'm looking for.
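(For reference, a multi-repository checkout like the one described above might look roughly like this in a scripted pipeline; the local directory names are assumptions, and the exact parameters depend on the Subversion plugin version.)

    // Sketch of checking out the three repositories into separate
    // sub-directories of the workspace (directory names are assumptions).
    node {
        checkout([$class: 'SubversionSCM',
            locations: [
                [remote: 'https://myrepo.com/mainSrc/main',  local: 'main'],
                [remote: 'https://myrepo.com/libs/library1', local: 'library1'],
                [remote: 'https://myrepo.com/libs/library2', local: 'library2']
            ]
        ])
    }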
I've only been working with Jenkins so far. We have configured a Multibranch Pipeline job to automatically build and test software. The tasks are written in Groovy and stored as a Jenkinsfile in the root directory of our git repository.
Recently, we have decided to add another mechanism to automatically generate documentation. The generation of documentation (but this could be any other task) has been realized using GitLab CI.
Both pipelines are practically independent - and both are triggered by a git commit/push. What I do not understand is: why and how is the Jenkins pipeline execution associated with the GitLab CI pipeline? In the following screenshot a new column "External" appears - representing the Jenkins pipeline job.
That's not really a big issue. But as both pipelines should be independent, the results of the runs should not influence each other. However, it seems that when the Jenkins job, i.e. "External", fails, the GitLab CI pipeline also fails:
Is there a way to better decouple those pipelines, i.e. let them fail or succeed individually?
This is because the GitLab Branch Source Plugin automatically notifies GitLab about the Jenkins pipeline status. This allows you to see the result of a build directly in GitLab. If you want only the result of the GitLab CI pipeline in GitLab, you can disable this feature:
Additional Traits:
These traits can be selected by selecting Add in the Behaviours section.
[...]
Skip pipeline status notifications - Disable notifying GitLab server about the pipeline status.
[...]
So in your GitLab group, just go to Configure > Projects > Gitlab Group > Add and select Skip pipeline status notifications.
why and how is the Jenkins pipeline execution associated with the GitLab CI pipeline? In the following screenshot a new column "External" appears - representing the Jenkins pipeline job.
In general, "External" statuses are created using the commit build status API -- Jenkins uses this API to report the Jenkins pipeline build status to GitLab CI.
This external status for Jenkins appears in your GitLab pipeline because you have configured your Jenkins server/project to report build statuses to GitLab, or you have set up a webhook integration with Jenkins in GitLab (note that these may be set at the group level or by an administrator, not necessarily at the project level).
To remove this from your pipeline, you should disable any existing integration configurations and set up your Jenkins project independently of any GitLab integration, e.g. using git polling to trigger Jenkins builds, and remove any updateGitlabCommitStatus calls in your Groovy scripts / build stages.
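For illustration, this is the kind of call to look for and remove in a Jenkinsfile (the stage name and build command here are hypothetical):

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make build'   // hypothetical build command
                    // Removing calls like the one below stops Jenkins from
                    // pushing an "External" commit status to GitLab:
                    // updateGitlabCommitStatus(name: 'build', state: 'success')
                }
            }
        }
    }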
I have successfully automated Jenkins master and slave deployment and redeployment.
I know how to manually create pipeline jobs and add GitHub repos to use their Jenkinsfiles for the steps.
My issue is how I can automate adding the pipeline jobs to Jenkins after it has been destroyed and redeployed, without having to manually create the pipeline jobs and point them to a Jenkinsfile each time.
I have seen this done before in a container environment with Chef and Docker: when redeployed or updated, it re-adds all the pipelines automatically again.
I don't want to use the UI at all, except to confirm job status/progress and verify settings.
I would recommend looking at the JobDSL Plugin to create jobs, using a seed job to create them on initial Jenkins startup. The Jenkins Configuration-as-Code plugin can be used to set up any other configuration outside the jobs.
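As a rough sketch, a seed job could run a Job DSL script like the following to recreate a pipeline job that reads its Jenkinsfile from a GitHub repository (the job name and repository URL are placeholders):

    // Job DSL script executed by a seed job: recreates a pipeline job
    // whose definition (Jenkinsfile) lives in a GitHub repository.
    pipelineJob('example-app-pipeline') {                  // placeholder job name
        definition {
            cpsScm {
                scm {
                    git {
                        remote {
                            url('https://github.com/example-org/example-app.git') // placeholder repo
                        }
                        branch('*/master')
                    }
                }
                scriptPath('Jenkinsfile')                  // path to the Jenkinsfile in the repo
            }
        }
    }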
I am currently using Cloudbees Jenkins Core as my Jenkins solution.
I am using Jenkins Pipelines to write our Jenkins job configuration. These pipelines are stored in GitHub repositories. Each Jenkins job, when created, is connected to a GitHub repository from which the source code is pulled; that is also where the Jenkinsfile is stored and where Jenkins reads it from.
Below are some high-level screenshots of how our Jenkins jobs are configured.
The advantage of the way these jobs are configured is that the Jenkinsfile is always read from the master branch. This means that if a rogue developer tries to remove stages from the Jenkinsfile within their own branch, it doesn't matter, because the Jenkinsfile is always read from the master branch (which is always protected).
However, the one massive drawback is: how do teams and developers who are DevOps engineers make changes to the Jenkinsfile? For example, let's say a developer creates a branch called feature-jenkins-search and edits the Jenkinsfile, adding a new stage to the pipeline. Whenever they push these changes to GitHub to test, they can't actually test, as the Jenkinsfile is always read from the master branch. Does that mean DevOps engineers have to work directly on the master branch? Surely this is not the best way to go and there is a better configuration to set up?
We do still want to provide the security that if a developer is rogue and tries to remove stages from the Jenkinsfile in their own branch, those changes do not take effect.
You should really look into the Jenkins multi-branch pipeline feature. A multi-branch pipeline allows you to create a single configuration item in Jenkins (a bit like a folder) that can detect all the branches and pull requests in a GitHub repository that contain a Jenkinsfile, and build them using automatically created jobs. Inside this multi-branch pipeline object, once it is configured in Jenkins, you will find a number of jobs that build the various branches and pull requests in the GitHub repository.
So your developers should maintain a Jenkinsfile in every branch they work on in GitHub to build that branch in your Jenkins server.
It is possible to make the Jenkinsfile do branch-specific handling if required, with conditional stages / when conditions in the Jenkinsfile pipelines in each branch.
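For example, a branch-specific stage in a declarative Jenkinsfile could look roughly like this (the build and deploy commands are placeholders):

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'        // placeholder build step, runs on every branch
                }
            }
            stage('Deploy') {
                when { branch 'master' }   // only runs for builds of the master branch
                steps {
                    sh './deploy.sh'       // placeholder deploy step
                }
            }
        }
    }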
You can lock down the master branch so that code and Jenkinsfile changes from other branches can only be merged with an approved PR (pull request). There is good integration between Jenkins and GitHub such that you can configure the master branch to only allow a PR to be merged if the PR is buildable in Jenkins. So if developers add new stages / processing to a Jenkinsfile on a branch being merged to master, it should be validated so that builds of your master branch are not broken.
There is a lot of configurability in the Jenkins multi-branch pipeline object for detection and handling of branches and it may be necessary to experiment to get it right for what you need with your team. If you cannot find this feature in Jenkins, it is probably because the correct Jenkins pipeline and GitHub related plugins are not installed.
You could also have a look at a similar Jenkins feature called the Jenkins GitHub Organization Folder, which allows you to detect and build all repos and branches at the GitHub Organization level. But when starting out, I would suggest looking into the multi-branch pipeline at the single-repo level first.
These features are discussed in the Jenkins pipeline documentation. We use these features with our internal GitHub and Jenkins server and it works very well.
I think you will find that the idea of using a single Jenkinsfile in the master branch to build all branches is unworkable, as you have seen!
We're running tests and producing build files on a Jenkins master and a Jenkins slave for extra parallelisation; our RCPTT tests take ages.
Our problem is that Jenkins -> -> Show workspace only shows the workspace on the master, so we have no way to get the builds except copying files manually over SSH.
We don't want duplication, since different patches run either on the master or the slave, and we want to be able to get the files from both the master and the slave nodes.
You can use "Copy To Slave Plugin" to copy any files from master to slave.
If you use the Jenkins pipeline plugin (https://jenkins.io/doc/book/pipeline/), then you can use stash / unstash:
https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-stash-code-stash-some-files-to-be-used-later-in-the-build
Here is an example: https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
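As a rough sketch (the node labels, file paths, and commands are placeholders), stash on one node lets a later stage on another node unstash the same files:

    pipeline {
        agent none
        stages {
            stage('Build and test on slave') {
                agent { label 'slave' }                 // placeholder node label
                steps {
                    sh './run-rcptt-tests.sh'           // placeholder build/test command
                    // Save the produced files so another node can retrieve them
                    stash name: 'build-output', includes: 'build/**'
                }
            }
            stage('Collect on master') {
                agent { label 'master' }
                steps {
                    // Restore the stashed files into this node's workspace
                    unstash 'build-output'
                    archiveArtifacts artifacts: 'build/**'
                }
            }
        }
    }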