I would like to run my Jenkins multibranch pipeline regularly during the week. To that end I tried to set the cron property for my pipeline directly in the Jenkinsfile, as follows:
@Library('pipelines@master') _
properties([pipelineTriggers([cron('*/5 * * * *')])])
runPipeline()
I can see in the output of the build that the properties step has been executed, but the pipeline does not start automatically.
Did I configure the cron trigger correctly? How can I check whether the trigger has been configured?
You can set the cron trigger declaratively, as shown below. Note that a trigger defined in the Jenkinsfile (whether via properties or the triggers directive) is only registered once the pipeline has actually run with that change, so the first run after adding it still has to be started manually.
pipeline {
    agent any
    triggers {
        cron('*/5 * * * *')
    }
    stages {
        stage('Test Stage1') {
            steps {
                script {
                    echo "Hello Test Stage1"
                }
            }
        }
        stage('Test Stage2') {
            steps {
                echo "Hello Test Stage2"
            }
        }
    }
}
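To check whether the trigger actually got registered, you can inspect the job from the Jenkins Script Console; a minimal sketch, assuming a multibranch job at my-multibranch/master (replace the path with your own):

// Script Console: list the triggers currently registered on a pipeline job
def job = Jenkins.instance.getItemByFullName('my-multibranch/master')
// Each entry maps a trigger descriptor to the configured trigger instance
job.triggers.each { descriptor, trigger ->
    println "${descriptor.displayName}: ${trigger.spec}"
}

If the cron expression shows up here, the trigger took effect.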
I am running a Jenkins pipeline script.
From this pipeline we run one script which keeps on producing output, so the process never ends.
Because of this my Jenkins job never completes, but since all the steps have finished I want to somehow mark the job complete after, say, 10 minutes.
Is there any way to complete a Jenkins job from the pipeline script after, say, 10 minutes?
Below is my pipeline script; runspbt.sh is the never-ending script.
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                bat "ssh rxx11pp@G0XXXX209 /home/rxx11pp/runspbt.sh"
            }
        }
    }
}
You can wait and then check for some success output (or maybe a marker file in the workspace):
steps {
    sh "sleep 600s"
    script {
        // testSuccessMethod is a placeholder for whatever success check fits your job
        if (testSuccessMethod) {
            currentBuild.result = "SUCCESS"
        } else {
            currentBuild.result = "FAILURE"
        }
    }
}
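A variant of the same idea is Jenkins' built-in waitUntil step, which polls a condition instead of sleeping for a fixed time; a minimal sketch, assuming the long-running script drops a marker file on success (the file name is made up):

steps {
    script {
        // waitUntil re-evaluates the closure until it returns true; give up after 10 minutes
        timeout(time: 10, unit: 'MINUTES') {
            waitUntil {
                fileExists('runspbt.done')
            }
        }
    }
}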
You may want to use Linux's timeout command:
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                bat "ssh rxx11pp@G0XXXX209 timeout 90 /home/rxx11pp/runspbt.sh"
            }
        }
    }
}
This will start the script and signal it to exit after 90 seconds.
Note this is not related to Jenkins.
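If you would rather keep the limit inside Jenkins itself, the built-in timeout step can wrap the call instead (host and script path taken from the question):

pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                // Abort the ssh call if it is still running after 10 minutes
                timeout(time: 10, unit: 'MINUTES') {
                    bat "ssh rxx11pp@G0XXXX209 /home/rxx11pp/runspbt.sh"
                }
            }
        }
    }
}

Hitting the timeout aborts the build, so wrap the block in catchError(buildResult: 'SUCCESS') { ... } if the job should still be marked successful.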
I have a CI seed job (Job DSL) in Jenkins which creates a Jenkins pipeline job. Is there any possibility to take parameters in the CI seed job and pass the input parameters given in the seed job to the Jenkins pipeline script?
In the example below, how can the schema name given as a parameter be applied in the Jenkins pipeline job when creating it?
CI Seed Job Setup:
1. Configured one string parameter `SchemaName`
2. DSL script for Jenkins pipeline job creation
pipelineJob('job-name') {
    definition {
        cps {
            script('''
                pipeline {
                    agent any
                    stages {
                        stage('Stage 1') {
                            steps {
                                echo $SchemaName
                            }
                        }
                        stage('Stage 2') {
                            steps {
                                echo 'logic'
                            }
                        }
                    }
                }
            '''.stripIndent())
            sandbox()
        }
    }
}
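One common approach (a sketch, relying on Job DSL making seed-job build parameters available as variables in the DSL script): use a double-quoted GString so the parameter value is interpolated when the seed job runs and baked into the generated pipeline. The single-quoted ''' block above deliberately prevents that interpolation.

pipelineJob('job-name') {
    definition {
        cps {
            // Double quotes let Groovy expand ${SchemaName} at seed-job run time
            script("""
                pipeline {
                    agent any
                    stages {
                        stage('Stage 1') {
                            steps {
                                echo '${SchemaName}'
                            }
                        }
                    }
                }
            """.stripIndent())
            sandbox()
        }
    }
}

Alternatively, give the generated job its own parameters { stringParam('SchemaName', 'default') } block and reference it as params.SchemaName inside the pipeline script.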
I have a pipeline with multiple stages, some of which run in parallel. Up until now I had a single agent block indicating where the job should run.
pipeline {
    triggers { pollSCM('0 0 * * 0') }
    agent {
        dockerfile {
            label 'jenkins-slave'
            filename 'Dockerfile'
        }
    }
    stages {
        stage('1') {
            steps { sh "blah" }
        } // stage
    } // stages
} // pipeline
What I need to do now is run a new stage on a different slave, NOT in Docker.
I tried adding an agent statement for that stage, but it seems to try to run that stage within a Docker container on the second slave.
stage('test new slave') {
    agent { node { label 'e2e-aws' } }
    steps {
        sh "ifconfig"
    } // steps
} // stage
I get the following error message
13:14:23 unknown flag: --workdir
13:14:23 See 'docker exec --help'.
I tried setting the agent to none for the pipeline and using an agent for every stage, and ran into 2 issues:
1. My post actions show an error
2. The stages that have parallel stages also had an error.
I can't find any examples that are similar to what I am doing.
You can use the node block to select a node to run a particular stage.
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                node('master') {
                    echo "Run inside a MASTER"
                }
            }
        }
    }
}
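In declarative syntax the equivalent is agent none at the pipeline level plus a per-stage agent, so the non-Docker stage never inherits the Dockerfile agent; a sketch using the labels from the question:

pipeline {
    agent none
    stages {
        stage('build in docker') {
            agent {
                dockerfile {
                    label 'jenkins-slave'
                    filename 'Dockerfile'
                }
            }
            steps { sh "blah" }
        }
        stage('test new slave') {
            // Plain node agent, so no docker exec wrapper on this slave
            agent { node { label 'e2e-aws' } }
            steps { sh "ifconfig" }
        }
    }
}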
I have a Jenkins pipeline job with parameters (name, group, taskNumber).
I need to write a pipeline script which will call a groovy script (this one?: https://github.com/peterjenkins1/jenkins-scripts/blob/master/add-job.groovy).
I want to create a new job (named name_group_taskNumber) every time I build the main pipeline job.
I don't understand:
Where do I need to put my groovy script?
What should the pipeline script look like? Something like this?:
node {
    stage('Build') {
        def pipeline = load "CreateJob.groovy"
        pipeline.run()
    }
}
You can use and configure a shared library like this one (a git repo): https://github.com/lvthillo/shared-library . You need to register it in your Jenkins global configuration.
It contains a vars/ folder. Here you can manage pipelines and groovy scripts, like my slackNotifier.groovy, which is just a groovy script that posts the build result to Slack.
In the Jenkins pipeline job we import our shared library:
@Library('name-of-shared-pipeline-library') _
mavenPipeline {
//define parameters
}
In the case above the pipeline itself also lives in the shared library, but this isn't necessary.
You can write your pipeline in the job itself and call only the function from the shared library, like this:
This is the script in the shared library:
// vars/sayHello.groovy
def call(String name = 'human') {
echo "Hello, ${name}."
}
And in your pipeline:
library('my-shared-library')
...
stage('stage name') {
    echo "output"
    // global vars from vars/ are callable directly once the library is loaded
    sayHello('Peter')
}
...
EDIT:
In new declarative pipelines you can use:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('test') {
            steps {
                sh 'echo "execute say hello script:"'
                sayHello("Peter")
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}

def sayHello(String name = 'human') {
    echo "Hello, ${name}."
}
output:
[test] Running shell script
+ echo 'execute say hello script:'
execute say hello script:
[Pipeline] echo
Hello, Peter.
[Pipeline] }
[Pipeline] // stage
We do it using the Jobcopy Builder plugin (https://wiki.jenkins.io/display/JENKINS/Jobcopy+Builder+plugin): add another build step in the pipeline script and pass the params that need to be carried over.
Since there's a limitation in Jenkins Pipeline that you cannot add a manual build step without hanging the build (see for example this stackoverflow question), I'm experimenting with a combination of Jenkins Pipeline and the Build Pipeline Plugin using the Job DSL plugin.
My plan was to create a Job DSL script that first executes the Jenkins Pipeline (defined in a Jenkinsfile) and then creates a downstream job that deploys to production (this is the manual step). I've created this Job DSL script as a test:
pipelineJob("${REPO_NAME} jobs") {
logRotator(-1, 10)
def repo = "https://path-to-repo/${REPO_NAME}.git"
triggers {
scm('* * * * *')
}
description("Pipeline for $repo")
definition {
cpsScm {
scm {
git {
remote { url(repo) }
branches('master')
scriptPath('Jenkinsfile')
extensions { } // required as otherwise it may try to tag the repo, which you may not want
}
}
}
}
publishers {
buildPipelineTrigger("${REPO_NAME} deploy to prod") {
parameters {
currentBuild()
}
}
}
}
freeStyleJob("${REPO_NAME} deploy to prod") {
}
buildPipelineView("$REPO_NAME Build Pipeline") {
selectedJob("${REPO_NAME} jobs")
}
where REPO_NAME is defined as an environment variable. The Jenkinsfile looks like this:
node {
    stage('build') {
        echo "building"
    }
    stage('run tests') {
        echo "running tests"
    }
    stage('package Docker') {
        echo "packaging"
    }
    stage('Deploy to Test') {
        echo "Deploying to Test"
    }
}
The problem is that the selectedJob points to "${REPO_NAME} jobs" which doesn't seem to be a valid option as "Initial Job" in the Build Pipeline Plugin view (you can't select it manually either).
Is there a workaround for this? I.e. how can I use a Jenkins Pipeline as the "Initial Job" for the Build Pipeline Plugin?
From the documentation at yourDomain.com/plugin/job-dsl/api-viewer/index.html#method/javaposse.jobdsl.dsl.views.NestedViewsContext.envDashboardView,
it shows that buildPipelineView can only be used within a View block, which itself sits inside a Folder block.
Folder {
    View {
        buildPipelineView {
        }
    }
}
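In concrete Job DSL, that nesting looks roughly like this (a sketch; the folder name is made up):

folder('ci') {
    views {
        buildPipelineView("$REPO_NAME Build Pipeline") {
            selectedJob("${REPO_NAME} jobs")
        }
    }
}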