I'm learning Jenkins. I want to save a Jenkins pipeline result and then retrieve that result through the Jenkins REST API.
Can this be achieved?
For example:
pipeline {
    agent any
    stages {
        stage("calculate 1+1") {
            steps {
                script {
                    def result = 0
                    result = 1 + 1
                }
            }
        }
    }
}
How should I save result and then retrieve it? (I can use the Jenkins Python package, e.g. Jenkins.get_build_info.)
The result of a pipeline is usually one or more files produced somewhere in $WORKSPACE.
You can make Jenkins archive such file(s) using archiveArtifacts and later retrieve them as described in "Downloading artifacts from Jenkins using wget or curl".
If you instead intend to store more metadata about the build (which would reside in build.xml), that is another story. Usually you need plugins that perform such work for you; I am not aware of generic pipeline steps that would do that.
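For the archiving approach, a minimal sketch could look like the following; the file name result.txt and the job name my-job are assumptions, not from your question:

pipeline {
    agent any
    stages {
        stage("calculate 1+1") {
            steps {
                script {
                    def result = 1 + 1
                    // Persist the value in a file so it survives the build
                    writeFile file: 'result.txt', text: "${result}"
                }
                // Keep the file with the build so it is served under the artifact URL
                archiveArtifacts artifacts: 'result.txt'
            }
        }
    }
}

The archived file then appears under the build's artifact URL, e.g. JENKINS_URL/job/my-job/lastSuccessfulBuild/artifact/result.txt, which you can fetch with wget or curl; the build JSON returned by Jenkins.get_build_info also lists the archived artifacts.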
Related
I want to use the Jenkins "PRQA" plugin, which does not seem to offer the option to use it from a pipeline. The plugin would run static code analysis and publish the results.
In my case, it requires some preparations that are already done in a pipeline job. Because of that, I want to include the job in that pipeline, on the same executor and with the data prepared by the pipeline, as a kind of inlined job step.
I have tried to create a job for the PRQA plugin step and to execute it with the build step from the pipeline. But this tries to start the job on a new executor (and stalls because I have only one executor).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Prepare'
            }
        }
        stage('SCA') {
            steps {
                // Run this without using a new executor, with the environment that exists now
                build 'PRQA_Job'
            }
        }
    }
}
What is the correct way to run the job on the same executor, with the current working directory?
With build 'PRQA_Job' it is not possible to run the second job on the same executor (1 job = 1 executor), since the main job just waits for the triggered job to finish. But you can run another job on the same agent, provided it has more than one executor, and reach the workspace of the main job from there.
To test this, pin both jobs to the same agent, e.g. agent { label 'agent_name_here' }, as in the sketch below.
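A minimal sketch of the main pipeline under that assumption; the label agent_name_here is a placeholder, the agent must have at least two executors, and SOURCE_WORKSPACE is a hypothetical parameter PRQA_Job would have to define:

pipeline {
    agent { label 'agent_name_here' }   // same label as PRQA_Job; needs >= 2 executors
    stages {
        stage('SCA') {
            steps {
                // Hand the prepared workspace location to the second job
                build job: 'PRQA_Job',
                      parameters: [string(name: 'SOURCE_WORKSPACE', value: env.WORKSPACE)]
            }
        }
    }
}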
If you want to use the functionality of a plugin which has no native pipeline support, you can try the "step: General Build step" feature for Jenkins pipelines. You can use the Pipeline Syntax wizard linked in the job configuration window to generate the needed pipeline code.
If the plugin does not show up in the "step: General Build step" part of Jenkins, you can use a separate job. To copy all the needed files/data into this second job, you will need to use the Archive Artifact/Copy Artifact functionality of Jenkins to save files from your pipeline build.
For more information on how to use Archive Artifact/Copy Artifact, see https://plugins.jenkins.io/copyartifact/ and https://www.jenkins.io/doc/pipeline/tour/tests-and-artifacts/
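As a rough sketch, assuming the Copy Artifact plugin is installed and PRQA_Prep_Job is the (hypothetical) upstream job that archived the prepared files:

// Run in the second job's pipeline to pull in what the first job archived
copyArtifacts projectName: 'PRQA_Prep_Job',
              selector: lastSuccessful(),
              target: 'prepared-data'   // copied into this subdirectory of the workspace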
In a nutshell:
How can I access the location of the produced artifacts within a shell script started in a build or post-build action?
The longer story:
I'm trying to set up a Jenkins job to automate the building and propagation of Debian packages.
So far, I was already successful in using the debian-pbuilder plugin to perform the build process, such that Jenkins presents the final artifacts after successfully finishing the job:
mypackage_1+020200224114528.NOREV.4_all.deb
mypackage_1+020200224114528.NOREV.4_amd64.buildinfo
mypackage_1+020200224114528.NOREV.4_amd64.changes
mypackage_1+020200224114528.NOREV.4.dsc
mypackage_1+020200224114528.NOREV.4.tar.xz
Now I would like to also automate the deployment process into the local reprepro repository, which would actually just require invoking a simple shell script I've put together.
My problem: I find no way to determine the artifact location for that deployment script to operate on. The "debian-pbuilder" plugin generates the artifacts in a temporary directory ($WORKSPACE/binaries.tmp15567690749093469649), which changes with every build.
Since the artifacts are listed properly in the finished job status view, I would expect the artifact details to be provided to the script (e.g. by environment variables). But that is obviously not the case.
I've already searched extensively for a solution, but didn't find anything helpful.
Or is it me (still somewhat of a rookie in Jenkins) following a wrong approach here?
You can use archiveArtifacts. The binaries.tmp* directory is in the workspace, so you can glob for it; but first clear the workspace using deleteDir() so stale directories from earlier builds are not picked up.
Pipeline example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                deleteDir()
                // ...
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'binaries*/**', fingerprint: true
        }
    }
}
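To let the deployment script find the artifacts despite the random suffix, one option is to glob for the directory in a shell step; deploy-to-reprepro.sh is a hypothetical name for your script:

stage('Deploy') {
    steps {
        sh '''
            # the pbuilder output directory name changes every build, so glob for it
            ARTIFACT_DIR=$(ls -d "$WORKSPACE"/binaries.tmp* | head -n 1)
            ./deploy-to-reprepro.sh "$ARTIFACT_DIR"
        '''
    }
}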
You can also check https://plugins.jenkins.io/copyartifact/
I am building a Jenkins build pipeline and I was wondering if it is possible to somehow tag/visualize the build branch in Jenkins, similar to what TeamCity does automatically.
I am using declarative pipeline defined in separate git repository and Jenkins 2.46.3.
From the picture it is not obvious that the last 2 builds were executed on a separate branch:
Thanks
You can modify the current build's display name and description using the following code:
currentBuild.displayName = env.BRANCH_NAME
currentBuild.description = 'Final Release'
This was recently highlighted in the BlueOcean 1.1 announcement, which shows both of them, in contrast to the regular interface, which only shows the displayName.
An example of a modified displayName from our public instance looks as follows:
You can find the code which generates this in our shared library here and here, essentially it is:
currentBuild.displayName = "#${currentBuild.getNumber()} - ${newVersion} (${increment})"
As you are mentioning Declarative Pipelines, let me add that you have to wrap this code in a script block, of course. So probably (untested):
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
                script {
                    currentBuild.displayName = env.BRANCH_NAME
                }
            }
        }
    }
}
Alternatively, you can extract it into a separate function.
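For instance, a minimal sketch of such a function in a shared library; the file name vars/setBuildName.groovy is an assumption:

// vars/setBuildName.groovy
def call() {
    currentBuild.displayName = "#${currentBuild.number} - ${env.BRANCH_NAME}"
}

The pipeline then just calls setBuildName() inside a script block.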
I have an open-source project, that resides in GitHub and is built using a build farm, controlled by Jenkins.
I want to build it branch-wise using a pipeline, but I don't want to store Jenkinsfile inside the code. Is there a way to accomplish this?
I have encountered the same issue as you. While the idea of having the build process as part of the code is good, a Jenkinsfile also contains information that is not intrinsic to the project build itself, but rather specific to the build environment instance, which may change.
The way I accomplished this is:
1. Encapsulate the core build process in a single script (build.py or build.sh). This may call specific build tools like Make, CMake, Ant, etc.
2. Tell Jenkins via the Jenkinsfile to call a function defined in a single global library.
3. Define the global Jenkins build function to call the build script (e.g. build.py) with appropriate environment settings, for example, using custom tools and setting up the PATH.
So for step 2, create a Jenkinsfile in your project containing just the line
build_PROJECTNAME()
where PROJECTNAME is based on the name of your project.
Then use the Pipeline Shared Groovy Libraries Plugin and create a Groovy script in the shared library repository called vars/build_PROJECTNAME.groovy containing the code that sets up the environment and calls the project build script (e.g. build.py):
def call() {
    node('linux') {
        stage("checkout") {
            checkout scm
        }
        stage("build") {
            // Put the configured tool installations on the PATH, then run the build script
            withEnv([
                "PATH+CMAKE=${tool 'CMake'}/bin",
                "PATH+PYTHON=${tool 'Python-3'}",
                "PATH+NINJA=${tool 'Ninja'}",
            ]) {
                sh 'python build.py'
            }
        }
    }
}
First of all, why do you not want a Jenkinsfile in your code? The pipeline is just as much part of the code as your build file is.
Other than that, you can load Groovy files to be evaluated as a pipeline script. You can do this from a different location with the "Pipeline script from SCM" option and then check out the actual code, but this will force you to take care of branch builds manually.
Another option would be to have a very basic Jenkinsfile that merely checks out an external pipeline.
You would get something like this:
node {
    deleteDir()
    git env.flowScm
    def flow = load 'pipeline.groovy'
    stash includes: '**', name: 'flowFiles'
    stage 'Checkout'
    checkout scm // shorthand for checking out the "from SCM" repository
    flow.runFlow()
}
where the pipeline.groovy file, which contains the actual pipeline, would look like this:
def runFlow() {
    // your pipeline code
}
// Has to end with 'return this;' in order to be usable as a library
return this;
I am creating a CI/CD pipeline. I am trying to create a groovy function in order to deploy a build to udeploy.
I know I will need to pass the parameters into the function, such as:
udeployServer,
component,
artifactDirectory,
version,
deployApplication,
environment and
deployProcess.
I was wondering has anyone tried to implement this or has anyone any idea how I should approach this?
Thanks
I don't know anything about udeploy servers, but I do know there is no pipeline plugin for udeploy, which means that you will not have a step such as:
udeploy: server=yourserver component=yourcomponent artifactDirectory=...
However, Jenkins allows you to use shell commands inside your Groovy pipeline, so you should be able to do pretty much everything you need. So I guess the real question is: how do you usually deploy a build to udeploy? Do you do it via a REST API, do you push a file via FTP, ...?
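If it turns out to be a REST API, a rough sketch of such a function could look like the following. The credentials ID, endpoint path, and JSON payload are all assumptions you would have to adapt to your server's actual interface; remaining parameters like artifactDirectory would be wired in the same way:

// Hypothetical helper; endpoint and payload shape are assumptions, not a known API
def deployToUdeploy(Map args) {
    withCredentials([string(credentialsId: 'udeploy-token', variable: 'UDEPLOY_TOKEN')]) {
        sh """
            curl -k -u "PasswordIsAuthToken:\$UDEPLOY_TOKEN" -X PUT \\
                '${args.udeployServer}/cli/applicationProcessRequest/request' \\
                -d '{ "application": "${args.deployApplication}",
                      "applicationProcess": "${args.deployProcess}",
                      "environment": "${args.environment}",
                      "versions": [ { "version": "${args.version}", "component": "${args.component}" } ] }'
        """
    }
}

You could then call it from a stage as deployToUdeploy udeployServer: '...', component: '...', version: '...', and so on, or move it into a shared library as shown in an earlier answer.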
The Jenkins build itself will be pretty straightforward; have a look at how to check out and build using a Jenkins pipeline.
An example pipeline could look like:
node {
    stage 'Build'
    def mvnHome = tool 'M3'
    sh "${mvnHome}/bin/mvn clean install"
    //... Some other stages as needed...
    stage 'Deploy'
    sh "execute sh deploy script here..."
}
... where your deploy stage could use other plugins to copy files to your server, run REST API requests, etc. While writing a pipeline, have a look at the Pipeline Syntax link for a Snippet Generator giving more detailed information about existing plugins.