Can I set up dependency builds in Jenkins similar to TeamCity? - jenkins

I have not found information about this. Before my main pipeline executes its build, I want it to trigger some other pipelines, wait for them, and have a file created by those pipelines passed to the main pipeline. Can I do this in Jenkins?

You can, yes, in multiple ways depending on your actual use case.
The simplest way is to create the job you want to call, add a step to your pipeline that calls it, copy the artifacts from that job, and then continue with the pipeline.
Using the Jenkins build step:
stage('Child job') {
    steps {
        build(job: 'foo', wait: true, propagate: true, parameters: [parameters_list])
    }
}
The wait parameter makes your pipeline wait for the child job to complete before continuing, and propagate means that the result of the child job is reflected in the parent pipeline as well. The parameters section is needed only if your child job requires parameters; see the parameter types in the build step documentation.
You can also use the Jenkins pipeline snippet generator to create the call properly in your own Jenkins instance.
To get build artifacts from the child job, the easiest way is to use the Copy Artifact plugin. You'll need to archiveArtifacts in the child job and then run
copyArtifacts fingerprintArtifacts: true, projectName: 'foo', selector: lastCompleted()
when the artifacts are needed.
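Putting those pieces together, a minimal sketch might look like the following. The job name foo comes from the example above; the file name output.txt and its contents are assumptions used purely for illustration.
// Child job 'foo' (scripted pipeline): produce a file and archive it
node {
    writeFile file: 'output.txt', text: 'data produced by foo'
    archiveArtifacts artifacts: 'output.txt', fingerprint: true
}

// Parent pipeline: trigger 'foo', wait for it, then copy its artifact
node {
    build(job: 'foo', wait: true, propagate: true)
    copyArtifacts(projectName: 'foo', filter: 'output.txt',
                  fingerprintArtifacts: true, selector: lastCompleted())
    // output.txt is now in this job's workspace
    echo readFile('output.txt')
}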

Related

Jenkins scripted pipeline - multiple execution with passing artifacts between stages

I have been searching and could not find proper information on how to resolve the issue I have with copying artifacts to jobs that are executed multiple times in parallel.
I have defined a scripted pipeline which executes predefined jobs in stages, some of which run in parallel, as follows:
The main pipeline job is located in this structure: /jenkins/workspace/<main_job>
These jobs also produce artifacts, which I later copy to other stages/jobs in the same pipeline using the build ID of the executed job.
node() {
    stage("Creating Build") {
        def stages = [:]
        failFast: true
        stages["Core"] = {
            copyArtifacts(projectName: <job to copy from>, flatten: true, target: '../' + coreBuildJob)
            buildCore = build job: coreBuildJob
        }
        stages["Content"] = {
            copyArtifacts(projectName: <job to copy from>, flatten: true, target: '../' + contentBuildJob)
            buildContent = build job: contentBuildJob
        }
        parallel(stages)
    }
}
I am using the Copy Artifact plugin to copy the artifacts that were created, but it appears that:
it copies the files to the main job's folder on the instance.
Because of the different workspace/project locations, I had to set the 'target' location so that the artifacts are copied to the job I execute later in the script.
e.g. for coreBuildJob in the Core stage:
`copyArtifacts(projectName: <job to copy from>, flatten: true, target: '../' + <job_for_execution>)`
This does help me resolve the issue of copying the required artifacts for these jobs, but I end up with another problem when I want this scripted pipeline to be executed multiple times with different parameters.
The issue is that when the pipeline runs a second time, and the job run in one of the stages therefore runs a second time as well, it creates the following path on the machine:
`/jenkins/workspace/test_jobs/<job_for_execution>#2`
That means that what I have in my script is not correct, because it copies the files to:
`/jenkins/workspace/test_jobs/<job_for_execution>`
so it does not copy the artifacts to the proper location and they are not accessible from the executed job.
I thought of having the copyArtifacts part executed as part of the 'build job' command (as you can configure in the Jenkins UI, passing BUILD_ID as a variable so the artifacts are copied that way), but I can't find any details on how to achieve the same behavior from the script.
How can this issue be resolved?
You can use stash/unstash.
After running your build, you can stash:
stash name:'data', includes: './*'
where data is the name (an identifier) and includes can be a directory, subdirectory or single file.
Then, in the stages you want to have the output of your build, use unstash:
unstash 'data'
After unstashing, the files will also be present in the respective folder and you can run your other steps.
Refer to https://www.jenkins.io/doc/pipeline/examples/ for more information.
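As a rough illustration of how stash/unstash avoids depending on fixed workspace paths in this scenario, here is a minimal sketch; the stage names, file paths, and stash name are placeholders, not the actual jobs from the question.
node() {
    stage('Build') {
        // produce the artifact in this build's workspace (placeholder step)
        writeFile file: 'build/output.bin', text: 'artifact contents'
        // stash it under an identifier instead of relying on a workspace path
        stash name: 'core-artifacts', includes: 'build/**'
    }
    stage('Use artifacts') {
        // possibly on a different node or in a different workspace
        unstash 'core-artifacts'
        sh 'ls build/'
    }
}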

Aggregating test results in Jenkins

I've got a Jenkinsfile that invokes multiple sub-jobs via calls to build job: .... Each of those sub-jobs runs a bunch of unit tests. Is there some way to collect all of those test results and make them part of the test results for the job that's invoking them? Essentially if I have 3 jobs, with 10 tests each, I'd like the result of this to have 30 test results. I thought perhaps propagate might do this, but it does not.
You can archive test results in the sub-jobs as build artifacts and copy them into your main job using the copyArtifacts build step provided by the Copy Artifact plugin.
No-brainer example of main job Jenkinsfile:
build 'sub-job'
node('master') {
    copyArtifacts projectName: 'sub-job', filter: 'results.xml', selector: lastCompleted()
    junit 'results.xml' // or whatever you use to publish test results
}
From the build step documentation:
propagate (optional)
If set, then if the downstream build is anything but successful (blue ball), this step fails. If disabled, then this step succeeds even if the downstream build is unstable, failed, etc.; use the result property of the return value as needed.
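For example, when propagate is disabled you can read the downstream result yourself; a short sketch (the job name is a placeholder):
def downstream = build job: 'sub-job', propagate: false
echo "sub-job finished with result: ${downstream.result}"
// decide how the downstream result should affect this build
if (downstream.result == 'FAILURE') {
    currentBuild.result = 'UNSTABLE'
}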
What could work, though, is the Copy Artifact plugin:
stages {
    stage('Copy Archive') {
        steps {
            script {
                step([$class: 'CopyArtifact',
                      projectName: 'Create_archive',
                      filter: "packages/infra*.zip",
                      target: 'Infra'])
            }
        }
    }
}
Just be sure to filter for reports.
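Putting this together for the original question (several sub-jobs, each publishing its own results), a sketch could look like the following; the job names and the results.xml convention are assumptions, and each sub-job is expected to archive its report with archiveArtifacts:
def jobs = ['sub-job-A', 'sub-job-B', 'sub-job-C']

node('master') {
    for (jobName in jobs) {
        def run = build(job: jobName, propagate: false)
        // copy that specific run's archived report into a per-job folder
        copyArtifacts(projectName: jobName, filter: 'results.xml',
                      target: jobName, selector: specific("${run.number}"))
    }
    // publish everything at once so the parent shows the aggregated results
    junit '**/results.xml'
}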

Pipeline within a pipeline?

I have three jobs on my Jenkins server. One of them triggers the other two, and all three of them run one after another in a sequence. All are freestyle jobs.
Now, I want to convert all three jobs into pipeline jobs. So in my case, all three jobs will have their own separate pipelines and there would be an outer pipeline that will show me the three jobs running one after another. Is it possible at all to have a situation where the first job completes building in pipeline, then triggers the second job which runs in its own pipeline stages and then the third job also completes all the stages in its pipeline?
From the outside there would be a larger open pipeline : Job1->Job2->Job3
and on the inside there'd be smaller pipelines with each stage of each job like Clone->Build->Report Generation->.....
Please help.
At the end of the Job1 pipeline, when it succeeds, put this code:
build job: 'JOB_NAME_2', propagate: false, wait: false
Then at the end of the Job2 pipeline put this code:
build job: 'JOB_NAME_3', propagate: false, wait: false
Set propagate and wait according to your needs.
To check the syntax, refer to the Pipeline Syntax generator in your Jenkins instance at
/pipeline-syntax/
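A rough sketch of what Job1's Jenkinsfile could look like, with its own inner stages and the downstream trigger at the end (the shell commands are placeholders):
// Job1 Jenkinsfile: run its own stages, then kick off Job2
node {
    stage('Clone') {
        checkout scm
    }
    stage('Build') {
        sh './build.sh'    // placeholder build command
    }
    stage('Report Generation') {
        sh './report.sh'   // placeholder reporting command
    }
    stage('Trigger Job2') {
        // fire-and-forget: Job2 starts and runs its own pipeline stages
        build job: 'JOB_NAME_2', propagate: false, wait: false
    }
}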

Jenkinsfile how to mimic 2 separate test and build jobs

In the old configuration we had 2 jobs, test and build.
The build job ran after the test job had run successfully, but we could trigger the build manually if we wanted to skip the tests.
After we switched to pipelines using a Jenkinsfile, we had to put those two jobs into the same file:
stage('Running tests'){
...
}
stage('Build'){
...
}
So now the build stage is only triggered after the tests run successfully, and we cannot trigger the build manually without commenting out the test steps and committing to the repository.
I am wondering if there is a better approach/practice for using the Jenkinsfile to overcome this limitation?
Using pipelines and a Jenkinsfile is becoming the standard and preferred way of running jobs on Jenkins nowadays, so using a Jenkinsfile is certainly the way to go.
One way to solve the problem is to make the job parameterized:
// Set the parameter properties; this is done on the first run so that we can trigger with parameters manually
properties([parameters([booleanParam(defaultValue: true, description: 'Testing will be done if this is checked', name: 'DO_TEST')])])
stage('Running tests') {
    // Putting the check inside of the stage step so that we don't confuse the stage view
    if (params['DO_TEST']) {
        ...
    }
}
stage('Build') {
    ...
}
The first time the job runs, it will add a parameter to the job. After that we can trigger manually and select whether tests should run. The default value will be used when it's triggered by SCM.
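If you use declarative pipeline rather than scripted, the same idea can be expressed with a parameters block and a when directive; a minimal sketch (the stage bodies are placeholders):
pipeline {
    agent any
    parameters {
        booleanParam(name: 'DO_TEST', defaultValue: true,
                     description: 'Testing will be done if this is checked')
    }
    stages {
        stage('Running tests') {
            when { expression { params.DO_TEST } }
            steps {
                echo 'running tests...'   // placeholder
            }
        }
        stage('Build') {
            steps {
                echo 'building...'        // placeholder
            }
        }
    }
}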

How to pass output from one pipeline to another in jenkins

I'm new to Jenkins and I've been tasked with a simple task of passing the output from one pipeline to the other.
Let's say the first pipeline has a script that says echo HelloWorld; how would I pass this output to another pipeline so it displays the same thing?
I've looked at parameterized triggers and a couple of other answers, but I was hoping someone could lay out the step-by-step procedure for me.
If you want to implement it purely with Jenkins pipeline code, what I do is have an orchestrator pipeline job that builds all the pipeline jobs in my process, waits for each to complete, and then gets its build number:
Orchestrator job
def result = build job: 'jobA'
def buildNumber = result.getNumber()
echo "jobA build number : ${buildNumber}"
In each job, say 'jobA', I arrange to write the output to a known file (a properties file, for example), which is then archived:
jobA
writeFile encoding: 'utf-8', file: 'results.properties', text: 'a=123\r\nb=foo'
archiveArtifacts 'results.properties'
Then, after building each job like jobA, use the build number with the Copy Artifact plugin to get the file back into your orchestrator job and process it however you want:
Orchestrator job
step([$class : 'CopyArtifact',
filter : 'results.properties',
flatten : true,
projectName: 'jobA',
selector : [$class : 'SpecificBuildSelector',
buildNumber: buildNumber.toString()]])
You will find these plugins useful to look at:
Copy Artifact Plugin
Pipeline Utility Steps Plugin
If you are chaining jobs instead of using an orchestrator - say jobA builds jobB builds jobC etc - then you can use a similar method. CopyArtifacts can copy from the upstream job or you can pass parameters with the build number and name of the upstream job. I chose to use an orchestrator job after changing from chained jobs because I need some jobs to be built in parallel.
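To actually consume the values in the orchestrator, the readProperties step from the Pipeline Utility Steps plugin can parse the copied file; a short sketch continuing the example above:
// Orchestrator job, after the CopyArtifact step above
def props = readProperties file: 'results.properties'
echo "a = ${props['a']}, b = ${props['b']}"   // the values written by jobA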
