Jenkins pipeline - Trigger a new build for a given job - docker

I have two Jenkins pipelines, named A and B. Both use the same docker container, named C1. The job A looks like this:
pipeline {
    agent {
        node {
            label 'c1'
        }
    }
    stages {
        ...
    }
    post {
        always {
            script {
                echo "always"
            }
        }
        success {
            script {
                echo "success"
            }
        }
        failure {
            script {
                echo "failure"
            }
        }
        unstable {
            script {
                echo "unstable"
            }
        }
    }
}
The only difference is in job B: its always post action triggers job A, like this:
build job: 'A/master', parameters: [ string(name: 'p1', value: params["p1"]) ]
The error occurs when job A is triggered, saying:
Error when executing success post condition:
hudson.AbortException: No item named A/master found
Also, listing the parent folders shows that there is no folder for job A.
How could I solve this problem?

The cause of this problem is the folder in which these jobs are located: the full name of the job is test/A/master, not A/master.
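So, assuming the parent folder is named test as above, the trigger in job B becomes:
build job: 'test/A/master', parameters: [ string(name: 'p1', value: params["p1"]) ]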

From your comments I read that you're using the Gitea plugin to fetch the repositories from your Gitea instance. Gitea repositories are organized per user/organization, as GitHub does.
Once you've added a Gitea project (user/organization) via the Jenkins plugin the full name of one of the jobs will be:
<user_or_orga>/<repository_name>/<branch>
For example, for the user "crash" who owns the repository "stackoverflow", the full name of the master branch job would be:
crash/stackoverflow/master
To get a list of all items including all full job names you can run this in the script console (under Manage Jenkins):
Jenkins.instance.getAllItems(AbstractItem.class).each {
    println(it.fullName)
};
P.S. The same rule applies to the Bitbucket plugin and others.

Related

Jenkins Pipeline Examples for Selecting Different Jenkins Node

Our Jenkins setup consists of master nodes and different/dedicated worker nodes for running jobs in dev, test and prod environments. How do I go about writing scripted pipeline code that allows users to select an environment (possibly from the master node) and, depending on the environment selected, executes the rest of the job on the selected node? Here is my initial thought:
stage('Select environment') {
    script {
        def userInput = input(id: 'userInput', message: 'Merge to?',
            parameters: [[$class: 'ChoiceParameterDefinition', defaultValue: 'strDef',
                description: 'describing choices', name: 'Env', choices: "dev\ntest\nprod"]
            ])
        println(userInput);
    }
    echo "Environment here ${params.Env}" // prints null here
    stage("Build") {
        node(${params.Env}) { // schedule job based upon the environment selected earlier
            echo "My test here"
        }
    }
}
Am I on the right path, or should I be looking at something else?
A follow-up question: the job that runs on the worker node also requires additional user input. Is there a way to combine the user input in one go, so that users are not prompted with multiple input screens?
If you pass the environment as a build parameter when kicking off the job, and you have appropriate labels on your nodes, you could do something like:
agent = params.WHAT_NODE
agentLabels = "deploy && ${agent}"
pipeline {
    agent { label agentLabels }
    ....
}
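For completeness, here is a minimal self-contained sketch of this approach, assuming the job defines a WHAT_NODE choice parameter and the workers carry matching labels (all names are illustrative; note that on the very first run the parameter may not exist yet):
pipeline {
    // The label expression combines the fixed 'deploy' label with the chosen environment
    agent { label "deploy && ${params.WHAT_NODE}" }
    parameters {
        choice(name: 'WHAT_NODE', choices: "dev\ntest\nprod", description: 'Target environment')
    }
    stages {
        stage('Build') {
            steps {
                echo "Running on a ${params.WHAT_NODE} node"
            }
        }
    }
}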
Ended up doing the following for a scripted pipeline. The code for selecting the environment can be run on any node (whether the master or a slave with an agent running). The parameter can be injected into an environment variable such as env.Env:
node {
    stage('Select Environment') {
        env.Env = input(id: 'userInput', message: 'Select Environment',
            parameters: [[$class: 'ChoiceParameterDefinition',
                defaultValue: 'strDef',
                description: 'describing choices',
                name: 'Env',
                choices: "jen-dev-worker\njen-test-worker\njen-prod-worker"]
            ])
        println(env.Env);
    }
    stage('Display Environment') {
        println(env.Env);
    }
}
The following code snippet ensures that the script is executed on the environment selected in the previous step. It requires Jenkins workers with the labels jen-dev-worker, jen-test-worker and jen-prod-worker to be available.
node (env.Env) {
    echo "Hello world, I am running on ${env.Env}"
}
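Regarding the follow-up question about multiple prompts: a single input step can collect several parameters in one dialog, in which case it returns a map. A sketch of that idea (the Version parameter is purely illustrative):
node {
    stage('Select Options') {
        // One dialog collecting two values; input returns a map
        // when more than one parameter is requested
        def answers = input(id: 'userInput', message: 'Build settings',
            parameters: [
                choice(name: 'Env', description: 'Target worker',
                    choices: "jen-dev-worker\njen-test-worker\njen-prod-worker"),
                string(name: 'Version', defaultValue: '1.0', description: 'Version to deploy')
            ])
        env.Env = answers['Env']
        env.Version = answers['Version']
    }
}
node(env.Env) {
    echo "Deploying ${env.Version} on ${env.Env}"
}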

Creating a Jenkins Pipeline build from a pipeline

I'm trying to automate the creation of a Jenkins Pipeline build from within a pipeline.
I have a pipeline which creates a Bitbucket repository and commits some code to it, including a Jenkinsfile.
I need to add another step to this pipeline to then create the Pipeline build for it, which would run the steps in the Jenkinsfile.
I think the Jobs DSL should be able to handle this but the documentation I've found for it has been very sparse, and I'm still not entirely sure if it's possible or how to do it.
Any help would be appreciated. I would imagine the generated Pipeline build just needs a link to the repository and to be told to run the Jenkinsfile there?
Yes, Job DSL is what you need for your use case.
See the documentation linked below to help you get started.
EDIT
pipeline {
    agent {
        label 'slave'
    }
    stages {
        stage('stage') {
            steps {
                // some other steps
                jobDsl scriptText: '''pipelineJob('new-job') {
                    def repo = 'https://xxxxx#bitbucket.org/xxxx/dummyrepo.git'
                    triggers {
                        scm('H/5 * * * *')
                    }
                    definition {
                        cpsScm {
                            scm {
                                git {
                                    remote {
                                        url(repo)
                                        credentials('bitbucket-jenkins-access')
                                    }
                                    branches('master')
                                    scriptPath('Jenkinsfile')
                                    extensions { }
                                }
                            }
                        }
                    }
                }'''
            }
        }
    }
}
Documentation - https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob-scm-git
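As an alternative to embedding the DSL as an escaped string, the jobDsl step can also read DSL scripts from files in the workspace via its targets parameter (a sketch; the path jobs/newJob.groovy is illustrative):
steps {
    // Assumes the DSL script has been checked out into the workspace
    jobDsl targets: 'jobs/newJob.groovy'
}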
By using the Python library jenkins-job-builder you can easily create your expected pipeline or freestyle job from another pipeline, or from any other remote location.
Example:
Step 1
python3 -m venv .venv
source .venv/bin/activate
pip install jenkins-job-builder
Step 2
Once you have done the above, create two files: one named config.ini and the other job.yml. Note that there are no strict rules about the file names; they are up to you.
The contents of the config.ini file can look like this:
[job_builder]
allow_duplicates = False
keep_descriptions = False
ignore_cache = True
recursive = False
update = all
[jenkins]
password = jenkins-password
query_plugins_info = False
url = http://jenkins-url.net
user = jenkins-username
If you are creating a pipeline job, then your job.yml file can look like this:
- job:
    name: pipeline01
    display-name: 'pipeline01'
    description: 'Do not edit this job through the web!'
    project-type: pipeline
    dsl: |
      node(){
        stage('hello') {
          sh 'echo "Hello World!"'
        }
      }
Step 3
After all the above, invoke the command below:
jenkins-jobs --conf config.ini update job.yml
Note: the jenkins-jobs command is only available if you have completed Step 1.
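To preview the job XML that would be generated without updating the Jenkins instance, jenkins-job-builder also provides a test subcommand:
jenkins-jobs --conf config.ini test job.yml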

Multiple Jenkinsfiles, One Agent Label

I have a project which has multiple build pipelines to allow for different types of builds against it (no, I don't have the ability to make one build out of it; that is outside my control).
Each of these pipelines is represented by a Jenkinsfile in the project repo, and each one must use the same build agent label (they need to share other pieces of configuration as well, but it's the build agent label which is the current problem). I'm trying to put the label into some sort of a configuration file in the project repo, so that all the Jenkinsfiles can read it.
I expected this to be simple, as you don't need this config data until you have already checked out a copy of the sources to read the Jenkinsfile. As far as I can tell, it is impossible.
It seems to me that a Jenkinsfile cannot read files from SCM until the project has done its SCM step. However, that's too late: the argument to agent{label} is read before any stages get run.
Here's a minimal case:
final def config

pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                checkout scm // we don't need all the submodules here
                echo "Reading configuration JSON"
                script { config = readJSON file: 'buildjobs/buildjob-config.json' }
                echo "Read configuration JSON"
            }
        }
        stage('Build and Deploy') {
            agent {
                label config.agent_label
            }
            steps {
                echo 'Got into Stage 2'
            }
        }
    }
}
When I run this, I get:
java.lang.NullPointerException: Cannot get property 'agent_label' on null object
I don't get either of the echoes from the 'Configure' stage.
If I change the label for the 'Build and Deploy' stage to 'master', the build succeeds and prints out all three echo statements.
Is there any way to read a file from the Git workspace before the agent labels need to be set?
Please see https://stackoverflow.com/a/52807254/7983309. I think you are running into this issue: label is unable to resolve config.agent_label to its updated value, so whatever is set on the first line is what gets sent to your second stage.
EDIT1:
env.agentName = ''

pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                script {
                    env.agentName = 'slave'
                    echo env.agentName
                }
            }
        }
        stage('Finish') {
            steps {
                node(agentName as String) { println env.agentName }
                script {
                    echo agentName
                }
            }
        }
    }
}
Source - In a declarative jenkins pipeline - can I set the agent label dynamically?

How to get the Jenkins master IP/hostname inside a pipeline stage executing on a slave?

I have a Jenkins declarative pipeline in which I build in one stage and test in another, on different machines. I also have a Selenium hub running on the same machine as the Jenkins master.
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { node { label 'builder' } }
            steps {
                sh 'build-the-app'
                stash(name: 'app', includes: 'outputs')
            }
        }
        stage('Test') {
            agent { node { label 'tester' } }
            steps {
                unstash 'app'
                sh 'test-the-app'
            }
        }
    }
}
I'd like the Selenium tests that run in the Test stage to connect back to the Selenium hub on the Jenkins master machine, which means I need to get the IP address or hostname of the Jenkins master machine from the slave.
Is there a way to do this? The Jenkins master URL / hostname isn't in the environment variables and I'm uncertain how else to get the Jenkins master's IP address.
Not sure if there are better ways to do this, but I am able to run
def masterIP = InetAddress.localHost.hostAddress
println "Master located at ${masterIP}"
in my Jenkinsfile. The first time I ran this code, the build failed with
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException:
Scripts not permitted to use method java.net.InetAddress getHostAddress
at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:178)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor$6.reject(SandboxInterceptor.java:243)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:363)
at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:28)
I had to approve the method signature in Jenkins by navigating to Manage Jenkins > In-process Script Approval.
Inspired by @kayvee's use of BUILD_URL, the following worked for me:
def getJenkinsMaster() {
    return env.BUILD_URL.split('/')[2].split(':')[0]
}
This returns the master's hostname or IP as it appears in the build URL. If you also require the port number, remove the second split().
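For example, the Test stage from the question could pass the resolved host to the tests (a sketch: the test-the-app flag is hypothetical, and 4444 is the default Selenium hub port):
stage('Test') {
    agent { node { label 'tester' } }
    steps {
        unstash 'app'
        // The flag name is hypothetical; adjust for your test runner
        sh "test-the-app --hub http://${getJenkinsMaster()}:4444/wd/hub"
    }
}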
To get the current slave's host:
Jenkins.getInstance().getComputer(env['NODE_NAME']).getHostName()
To get the master's host:
Jenkins.getInstance().getComputer('').getHostName()
You can simply do it like this:
stage("SomeStageName") {
agent { label 'exampleRunOnlyOnLinuxNode' }
steps {
script {
println "\n\n-- Running on machine: " + "hostname".execute().text
}
}
}
and "hostname -i".execute().text will print the IP
Try the shell command below:
def host= sh(returnStdout: true, script: 'echo ${BUILD_URL/https:\\/\\/} | cut -d "/" -f1').trim()
println("Hostname : ${host}")

Can I "import" the stages in a Jenkins Declarative pipeline

I have several pipeline jobs, which are configured very similarly.
They all have the same stages (of which there are about 10).
I am now I am thinking about moving to the declarative pipeline (https://jenkins.io/blog/2016/09/19/blueocean-beta-declarative-pipeline-pipeline-editor/).
But I do not want to define the ~10 stages in every pipeline. I want to define them at one place, and "import" them somehow.
Is this possible with declarative pipelines at all? I see that there are Libraries, but it does not seem like I could include the stage definition using them.
You will have to create a shared library to implement what I am about to suggest. For the shared-library implementation, you may check the following posts:
Using Building Blocks in Jenkins Declarative Pipeline
Upload file in Jenkins input step to workspace (Mainly for images so one can easily figure out things)
Now if you want to use a Jenkinsfile (kind of a template) which can be reused across multiple projects (jobs), then that is indeed possible.
Once you have created a shared-library repository with vars directory in it, then you just have to create a Groovy file (let's say, commonPipeline.groovy) inside vars directory.
Here's an example that works; I have used it in multiple jobs.
$ cat shared-lib/vars/commonPipeline.groovy

// You can create function(s) as shown below, if required
def someFunctionA() {
    // Your code
}

// This is where you will define all the stages that you want
// to run as a whole in multiple projects (jobs)
def call(Map config) {
    pipeline {
        agent {
            node { label 'slaveA || slaveB' }
        }
        environment {
            myvar_Y = 'apple'
            myvar_Z = 'orange'
        }
        stages {
            stage('Checkout') {
                steps {
                    deleteDir()
                    checkout scm
                }
            }
            stage('Build') {
                steps {
                    script {
                        check_something = someFunctionA()
                        if (check_something) {
                            echo "Build!"
                            // your_build_code
                        } else {
                            error "Something bad happened! Exiting..."
                        }
                    }
                }
            }
            stage('Test') {
                steps {
                    echo "Running tests..."
                    // your_test_code
                }
            }
            stage('Deploy') {
                steps {
                    script {
                        sh '''
                            # your_deploy_code
                        '''
                    }
                }
            }
        }
        post {
            failure {
                sh '''
                    # anything_you_need_to_perform_in_failure_step
                '''
            }
            success {
                sh '''
                    # anything_you_need_to_perform_in_success_step
                '''
            }
        }
    }
}
With the above Groovy file in place, all you have to do now is call it in your various Jenkins projects. Since you might already have an existing Jenkinsfile in your Jenkins project (if not, create one), you just have to replace its content with the following:
$ cat Jenkinsfile

// Assuming you have named your shared library `my-shared-lib` and set `Default version` to the `master` branch in the
// `Manage Jenkins` » `Configure System` » `Global Pipeline Libraries` section
@Library('my-shared-lib@master') _

def params = [:]
params = [
    jenkins_var: "${env.JOB_BASE_NAME}",
]
commonPipeline params
Note: As you can see above, I am calling the commonPipeline.groovy file, so all your bulky Jenkinsfiles get reduced to just five or six lines of code, and those few lines are common across all those projects. Also note that I have used jenkins_var above. It can be any name; it's not actually used here, but a Map argument is required for the pipeline to run. Some Groovy expert can clarify that part.
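For instance (a sketch, not part of the original answer), the Map passed to call can actually be used inside the shared stages:
// Inside vars/commonPipeline.groovy: entries of the config Map,
// such as config.jenkins_var, are available to every stage
def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Info') {
                steps {
                    echo "Invoked from job: ${config.jenkins_var}"
                }
            }
        }
    }
}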
Ref: https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
