I have the following build scenario:
Project C depends on Project B which depends on Project A
Project X depends on C, Project Y depends on B, Project Z depends on A.
There are many other modules with dependencies, but the most "complex" build relationship is with project X. To set this up in Jenkins, I've used the Parameterized Trigger Plugin, with the "block until triggered project has finished building" option. Each buildable module gets its own Jenkins job, and the plugin is used to block on the necessary dependent build job.
I've run into 2 problems with this setup.
1. Projects A and B get built many times over, since they are in the transitive dependency chain for projects X, Y, Z, etc.
2. Jenkins sometimes gets completely blocked: builds for jobs X, Y, Z, etc. (triggered by SCM changes) fill up all available executors, while the dependent projects sit in the queue waiting to be built.
I'm looking for advice on how to configure Jenkins for such a build environment. I'm new to Jenkins so I don't know exactly what options are available to resolve this issue.
EDIT:
All jobs are triggered by SCM changes. Dependent projects are also triggered via the Parameterized Trigger Plugin, using the "block until triggered projects finish their builds" option.
Jenkins has multiple ways of triggering builds. Most common is by watching for repository changes.
However, you can also automatically trigger a build after another job finishes building; this is built into Jenkins. Take a look under Build Triggers and select the checkbox Build after other projects are built, or in the Post-Build section, select the Build Other Projects post-build action. A job can also have multiple triggers, so a build could happen if a dependency from another project changes or if the source files change.
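In a pipeline job, the equivalent triggers can be declared directly in the Jenkinsfile. A minimal sketch, assuming an upstream job named project-C and a Maven build (both names are placeholders):

```groovy
pipeline {
    agent any
    triggers {
        // Poll the repository for changes
        pollSCM('H/5 * * * *')
        // Also rebuild whenever the upstream job finishes successfully
        upstream(upstreamProjects: 'project-C', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }
        }
    }
}
```

Both triggers coexist, so the job rebuilds on its own SCM changes as well as after its dependency.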
If you use Maven or Ivy, there are plugins that can be used to trigger a build if a Maven/Ivy jar dependency that the project uses changes.
One more useful plugin is the Copy Artifacts Plugin. This allows you to copy a build artifact from Project "X" to Project "Y" for Project "Y" to use for building.
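In pipeline jobs the plugin provides a copyArtifacts step. A sketch, where the job name, filter, and target directory are placeholders:

```groovy
// Copy the last successful build artifacts of a hypothetical upstream job
copyArtifacts(
    projectName: 'project-X',        // the job that archived the artifacts
    selector: lastSuccessful(),      // which build to copy from
    filter: 'target/*.jar',          // which archived files to copy
    target: 'deps/'                  // where to put them in this workspace
)
```

The upstream job must have archived the files (e.g. with archiveArtifacts) for the copy to find them.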
Related
New to Jenkins and a bit confused about which job type to pick for my project (freestyle, pipeline, multi-config, etc.).
What I need: I have two repos, one for the frontend (project A) and one for the backend (project B). I need Jenkins to build project A and, if that build is successful, build project B; if both succeed, ship the code to either the staging or live servers.
Which Jenkins job would be best suited for this kind of workflow?
Pipeline jobs, without a doubt: job configuration as code, easier to modify, and better maintainability with shared libraries.
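As a sketch, a single declarative Jenkinsfile covering that A-then-B-then-deploy flow might look like the following; the repo layout, build commands, and deploy script are all placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Build frontend (A)') {
            steps { sh 'make -C frontend build' }
        }
        stage('Build backend (B)') {
            // Only reached if the frontend stage succeeded
            steps { sh 'make -C backend build' }
        }
        stage('Deploy') {
            // e.g. only ship the main line; adjust to your branching model
            when { branch 'master' }
            steps { sh './deploy.sh staging' }
        }
    }
}
```

Stages run in order and the pipeline stops at the first failure, which gives the "build B only if A succeeded" behavior for free.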
I have a Project A. Project A is listed as a dependency of Project B. Whenever I kick off a build of Project A in Jenkins, once the build is successful, it kicks off Project B automatically. Is there any way I can stop this, or add a parameter for this step?
The company I work for is evaluating Jenkins 2.71, in particular the Pipeline and Blue Ocean plugins. We have also tested GoCD, and we need, as in GoCD, a way for a pipeline to automatically fetch the artifacts from 2 other pipelines (taking the last successful result of each of them). Here is our case:
We have these initial pipelines (build & run tests), which reflect 2 projects:
frontend, ~ 15 minutes
backend, ~10 minutes
I created a pipeline called configure (~1 minute) with, e.g., a parameter called customer-name, which takes the backend and frontend files and puts them together, then applies customer-specific configurations and customizations and produces deployable artifacts. Instead of a "customer-name" parameter, I could also parallelize this job to create the artifacts for every customer at once, separated into different directories.
The next pipeline would deploy them to different test servers, separated per customer. This could also be part of the same configure pipeline; we still have to see how to put things together in Jenkins.
Ideally, I need the configure pipeline to be triggered automatically (or also on demand) after each frontend or backend success, taking as input the last successful artifacts from these 2 pipelines. But not just the last successful build of each: the dependency also has to respect the Git branch name.
E.g. we have:
backend branches:
master
release/2017.2
frontend branches:
master
release/2017.2
In the pipeline editor, I found a Build Triggers option and set it as follows: Build after other projects are built > Projects to watch: frontend, backend > and checked Trigger only if build is stable (or rather, in my test environment full of failures, Trigger even if the build is unstable).
Searching further, I found the Copy Artifact Plugin.
But now the big question, how to fetch the last successful artifacts from these pipelines with the same git branch name?
Because we don't want to mix, e.g., a backend build of "release/2017.2" with a frontend "master", it has to find the last successful builds sharing the same relationship or parameter or whatever you want to call it; in our case the association is the Git branch name.
Is it possible to achieve this? If yes, how?
The copy artifact plugin seems to work in a freestyle project. Would it work in a pipeline? That's also a concern...
Thanks
Yes, the Copy Artifact plugin does work in both freestyle and pipeline projects; pipeline uses the copyArtifacts step that I referenced in my comment. Note that if you go to the Pipeline Syntax link, it's kind of hidden: you have to first select "step: General Build Step" from the drop-down, then it will give you the Copy Artifact pipeline command builder.
I'm going to assume that your frontend and backend projects are built as multi-branch pipelines, as that is probably easiest to maintain: you don't have to keep creating new projects for every release. You can reference these projects from other jobs as <project name>/<branch name> (sometimes I've had to replace the / with %2f instead, I think mostly in freestyle projects). You could then set up your configure project as a parameterized build (either pipeline or freestyle), say with a string parameter PROJECT_BRANCH_NAME. Then put the following in your frontend/backend project pipeline scripts to trigger a build of your configure project:
build job: 'configure', parameters: [[$class: 'StringParameterValue', name: 'PROJECT_BRANCH_NAME', value: env.BRANCH_NAME]]
Then you should just be able to make your configure project reference frontend/$PROJECT_BRANCH_NAME and backend/$PROJECT_BRANCH_NAME (or ${env.PROJECT_BRANCH_NAME} in a pipeline script) when copying the artifacts.
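On the configure side, the branch-matched copy could be sketched like this, assuming the multi-branch project names from above:

```groovy
// PROJECT_BRANCH_NAME is the string parameter passed in by the upstream build
def branch = params.PROJECT_BRANCH_NAME

// Copy the last successful artifacts from the matching branch of each project,
// so a backend "release/2017.2" is never mixed with a frontend "master"
copyArtifacts(projectName: "frontend/${branch}", selector: lastSuccessful())
copyArtifacts(projectName: "backend/${branch}", selector: lastSuccessful())
```

If the branch name contains a slash (release/2017.2), the encoded form (release%2F2017.2) may be needed in the project reference, as noted earlier.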
Also, is there a particular reason why you're evaluating specifically Jenkins 2.7? 2.7 is a year old now, and there have been a few new LTS releases since then. I'd recommend staying reasonably up-to-date unless you know there's a specific reason you want 2.7.
I have some reports that I would like to push up to Github after my main project target builds.
These reports should be pushed up whether the first project succeeds or not. Can Jenkins do either of the following:
Specify multiple tasks (like Bamboo).
Build another project after the first, even if the first fails.
Reading your comments on the other answers, I propose this solution for you, Jasper.
Keep your existing project that builds and generates your reports, and create a new project that you might call "report uploader", which only uploads your reports to your Git repository.
1) Main Project Build
this will build your system, run it, and test it (resulting in some reports; let's call them REPORT.o)
this project might or might not fail
REPORT.o should be archived as artifacts when the build is finished
the job should always trigger the "report uploader" job - use the Parameterized Trigger plugin
make sure this job has the wait for downstream jobs checkbox checked (else a new build might start and overwrite the report files)
2) Report uploader Build
this project will take arguments from the upstream job to find the artifacts
fetch them and upload them to whatever server you like
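The main-build half of this two-job flow can also be sketched as a pipeline; the job name, script, and report path are assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Build and test') {
            // This may fail, but we still want the reports it produced
            steps { sh './build-and-test.sh' }
        }
    }
    post {
        always {
            // Archive the reports and hand off to the uploader job
            // whether the build succeeded or not
            archiveArtifacts artifacts: 'reports/**', allowEmptyArchive: true
            build job: 'report-uploader',
                  parameters: [string(name: 'UPSTREAM_BUILD', value: "${env.BUILD_NUMBER}")],
                  wait: false
        }
    }
}
```

The post/always section is what guarantees the uploader runs even when the main build fails.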
The same concept is loosely described on the Jenkins wiki.
Hope this fits your needs.
Yes.
With Git publisher you can push to a branch and choose whether or not to only do so when the build succeeds.
There is also a post-build action where you can build other projects and an option to do so even if the build fails.
I have a project where part of the build process is to create a native library on a remote machine. This is currently a manual process outside of the CI builds made by Jenkins.
The setup in question: the Jenkins master server builds a Git-based Maven project, which has a dependency on a native library that can only be built on a specific machine. Jenkins can't compile this module, and because of this, it is currently a manual process.
I would like to install a Jenkins slave on the machine that creates the native library, and returns the compiled files to the Jenkins master, without handling any other parts of the build.
I am having trouble figuring out if this is even possible. Most articles I have found on the subject discuss Jenkins slaves as a means of distributing the build, but I want the slave to take responsibility for a small part of the build process and nothing else. The Jenkins master should just send the build request to the slave and wait for the result, instead of trying to compile the code itself.
I do exactly the same. My setup is very similar to what Mark O'Connor and gaige are advising, and I use the Copy Artifact plugin.
job A: produces a zip file on a Mac
job B runs on slave B (a Windows machine), takes the zip as input, and produces an MSI
Here's the important part in the config of job B:
restrict the job B on the proper slave using labels
make sure job B happens after job A
make sure artifacts from job A are sent to job B before your build
build your stuff
archive artifacts produced by job B
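Put together as a pipeline sketch for job B; the label, job names, and build script are assumptions:

```groovy
pipeline {
    // Restrict job B to the proper slave using a label
    agent { label 'windows' }
    triggers {
        // Make sure job B happens after job A succeeds
        upstream(upstreamProjects: 'job-A', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Fetch zip from job A') {
            // Pull job A's archived zip into this workspace before building
            steps { copyArtifacts(projectName: 'job-A', selector: lastSuccessful()) }
        }
        stage('Build MSI') {
            steps { bat 'build-msi.bat' }
        }
    }
    post {
        // Archive the artifacts produced by job B
        success { archiveArtifacts artifacts: '*.msi' }
    }
}
```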
Delegating part of a job to a slave is something that would have to be done external to Jenkins, for example, using ssh.
However, as @kan indicates, you most likely want to extract the native library build as a separate job and then have that job execute on a particular slave, or any slave that meets specific criteria.
To do this, my suggestion would be to use Labels in the node configurations to determine which slaves can be used for building that particular job.
In Jenkins > nodes > <slave node>, use the Labels property to set one-word labels that indicate your specific requirements, such as the OS or processor type.
Then, in the jobs that are node-specific, check Restrict where this project can be run and set the Label Expression to something that meets your criteria. If the criterion is simple, it will just be a single word; if you need a boolean expression, you can use those as well (such as OSX&&Lion in our case).
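For pipeline jobs, the same restriction can be expressed in the Jenkinsfile. A sketch reusing the labels above (the build command is a placeholder):

```groovy
pipeline {
    // Same boolean label expression as in the freestyle UI's
    // "Restrict where this project can be run" field
    agent { label 'OSX && Lion' }
    stages {
        stage('Build native library') {
            steps { sh 'make native' }
        }
    }
}
```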
I believe this is all in the standard version of Jenkins, without need for a special plugin. Leave me a comment if it isn't and I'll try and diagnose which plugin enables this functionality.
This problem is solved by using a binary repository manager to centralize your software artifacts. Personally I use Nexus, but it could be something as simple as a remote file system.
The idea is to publish the built artifact after each Jenkins job (if you don't like Nexus, you could use one of the Publish over plugins) and retrieve it as a build dependency in the next job.
This approach means it no longer matters where the build executes, and it has the added advantage of decoupling the builds of the individual module components.
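A sketch of the publish side as a pipeline stage, assuming a Maven build whose POM's distributionManagement already points at the repository manager:

```groovy
stage('Publish to Nexus') {
    steps {
        // Pushes this module's built artifacts to the repository manager;
        // downstream jobs then resolve them as ordinary Maven dependencies
        sh 'mvn -B deploy -DskipTests'
    }
}
```

The consuming job then needs no Jenkins-level artifact wiring at all: it simply declares the module as a normal dependency in its own POM, and Maven pulls the latest published version from the repository manager, regardless of which node the build runs on.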