node('Docker3') {
    try {
        cleanWs()
        wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
            git branch: "${params.branch}", url: "${params.scmUrl}"
            gitCommitHash = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
            load("jenkins_modules/moduleMain.groovy")
        }
    } catch (all) {
        print(all)
    }
}
I have this code, which I am using in a Jenkinsfile, but I want to schedule the Jenkins job from the script itself. Can anyone please tell me how I can combine the code above with the following code?
pipeline {
    agent any
    triggers {
        cron('H * * * *')
    }
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
I tried combining them in the following way:
pipeline {
    agent any
    triggers {
        cron('H */4 * * 1-5')
    }
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
node('Docker3') {
    try {
        cleanWs()
        wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
            git branch: "${params.branch}", url: "${params.scmUrl}"
            gitCommitHash = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
            load("jenkins_modules/moduleMain.groovy")
        }
    } catch (all) {
        print(all)
    }
}
It works, but I don't think this is the correct way.
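If you want to keep the pipeline fully scripted rather than mixing a declarative block with a node block, the trigger can instead be attached with the properties step. A minimal sketch, reusing the cron spec and node label from above:

// Sketch: attach the cron trigger to a scripted pipeline via the
// properties step; no declarative wrapper is needed.
properties([pipelineTriggers([cron('H */4 * * 1-5')])])

node('Docker3') {
    // ... body unchanged from the question ...
}

Note that properties overwrites the job's configured properties on each run, so keep all triggers and parameters in that one call. Alternatively, keep the declarative pipeline and move the node block into a stage's script step.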
I want to execute multiple jobs from a single pipeline using declarative syntax, in parallel. Is this possible? I know we can make a declarative parallel pipeline using the parallel directive.
pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('Test1') {
                    steps {
                        sh 'pip install -r requirements.txt'
                    }
                }
                stage('Test2') {
                    steps {
                        echo 'Stage 2'
                        sh 'behave -f allure_behave.formatter:AllureFormatter -o allure-results features/scenarios/**/*.feature'
                    }
                }
                stage('Test3') {
                    steps {
                        script {
                            allure([
                                includeProperties: false,
                                jdk: '',
                                properties: [],
                                reportBuildPolicy: 'ALWAYS',
                                results: [[path: 'allure-results']]
                            ])
                        }
                    }
                }
            }
        }
    }
}
The image below shows the proper flow that I want. Is there any approach for how to do it?
// Pipeline project: SO-69680107-1-parallel-downstream-jobs-matrix
pipeline {
    agent any
    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Job matrix') {
            matrix {
                axes {
                    axis {
                        name 'job'
                        values 'SO-69680107-2', 'SO-69680107-3', 'SO-69680107-k' // , ...
                    }
                }
                stages {
                    stage('Run job') {
                        steps {
                            build "$job"
                            copyFiles( "$WORKSPACE\\..\\$job", "$WORKSPACE" )
                        }
                    } // stage 'Run job'
                }
            } // matrix
        } // stage 'Job matrix'
        stage('List upstream workspace') {
            steps {
                bat "@dir /b \"$WORKSPACE\""
            }
        }
    } // stages
}

def copyFiles( downstreamWorkspace, upstreamWorkspace ) {
    dir("$downstreamWorkspace") {
        bat """
            @set prompt=\$g\$s
            @echo Begin: %time%
            dir /b
            xcopy /f *.* \"$upstreamWorkspace\\\"
            @echo End: %time%
        """
    }
}
Template for downstream projects SO-69680107-2, SO-69680107-3, SO-69680107-k:
// Pipeline project: SO-69680107-X
pipeline {
    agent any
    stages {
        stage('Stage X') {
            steps {
                sh 'set +x; echo "Step X" | tee SO-69680107-X.log; date; sleep 3; date'
            }
        }
    }
}
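If the set of downstream jobs is small and fixed, a plain parallel block also works; a minimal sketch, reusing the downstream job names from above:

pipeline {
    agent any
    stages {
        stage('Run downstream jobs') {
            parallel {
                stage('SO-69680107-2') {
                    steps {
                        build job: 'SO-69680107-2'
                    }
                }
                stage('SO-69680107-3') {
                    steps {
                        build job: 'SO-69680107-3'
                    }
                }
            }
        }
    }
}

The matrix approach scales better as the job list grows, since the axis values are the only thing that changes.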
I am working on a Groovy script for a Jenkins pipeline and am struggling to find how to pass a variable across stages when the variable is obtained from a remote SSH connection.
I found Example 1 and Example 2 on this site and I want to merge them together, as seen in "My attempt" below. Note that the output of the file on the remote server is 4. I'm trying to pass 4 to a_var.
Example 1: works fine. SSH connection. This reads the file and outputs the value to the Jenkins console.
def sshCredId = 'myid_cred'
def sshUser = 'myid'
def sshServer = 'myserver'

pipeline {
    agent { label 'docker-maven-slave' }
    stages {
        stage('one') {
            steps {
                script {
                    sshagent([sshCredId]) {
                        sh "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv"
                    }
                }
            }
        }
        stage('two') {
            steps {
                echo "something"
            }
        }
        stage('three') {
            steps {
                echo "do stuff"
            }
        }
    }
}
Example 2: works fine. This passes a parameter across stages
pipeline {
    agent {
        label 'docker-maven-slave'
    }
    parameters {
        string(name: 'a_var', defaultValue: '')
    }
    stages {
        stage("one") {
            steps {
                script {
                    tmp_param = sh(script: 'echo something', returnStdout: true).trim()
                    env.a_var = tmp_param
                }
            }
        }
        stage("two") {
            steps {
                echo "${env.a_var}"
            }
        }
    }
}
My attempt: stage two output is null. I'm expecting '4'.
def sshCredId = 'myid_cred'
def sshUser = 'myid'
def sshServer = 'myserver'

pipeline {
    agent { label 'docker-maven-slave' }
    parameters {
        string(name: 'a_var', defaultValue: 'nothing')
    }
    stages {
        stage('one') {
            steps {
                script {
                    tmp_param = sshagent([sshCredId]) {
                        sh "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv"
                    }
                    env.a_var = tmp_param
                }
            }
        }
        stage('two') {
            steps {
                echo "${env.a_var}"
            }
        }
        stage('three') {
            steps {
                echo "do stuff"
            }
        }
    }
}
Updated the answer based on comments and feedback from MayJoAnneBeth.
In the attempt, sh is called without returnStdout: true, so it returns nothing, and sshagent returns the value of its last statement, which is null; that null is what ends up in a_var. Capture the output inside the sshagent block instead. Try the snippet below:
sshagent([sshCredId]) {
    env.a_var = sh(script: "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv", returnStdout: true).trim()
}
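In context, stage('one') from the attempt would then look like this (a sketch of the same fix):

stage('one') {
    steps {
        script {
            sshagent([sshCredId]) {
                // returnStdout captures the remote file's contents; trim() strips the trailing newline
                env.a_var = sh(
                    script: "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv",
                    returnStdout: true
                ).trim()
            }
        }
    }
}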
Can someone help me convert the below Jenkins scripted pipeline to a declarative pipeline?
node('agent') {
    if ( ! "${GIT_BRANCH}".isEmpty()) {
        branch = "${GIT_BRANCH}"
    } else {
        echo 'The git branch is not provided, exiting..'
        sh 'exit 1'
    }
    version = extract_version("${GIT_BRANCH}")
    if ( "${GIT_BRANCH}".contains("feature")) {
        currentBuild.displayName = "${version}-SNAPSHOT-${env.BUILD_ID}"
    } else {
        currentBuild.displayName = "${version}-${env.BUILD_ID}"
    }
}
I am trying to check whether a git branch has been provided and to set the Jenkins build display name dynamically based on the git branch.
pipeline {
    agent {
        label 'agent'
    }
    stages {
        stage('stage1') {
            steps {
                script {
                    if ( ! "${GIT_BRANCH}".isEmpty()) {
                        branch = "${GIT_BRANCH}"
                    } else {
                        echo 'The git branch is not provided, exiting..'
                        sh 'exit 1'
                    }
                    version = extract_version("${GIT_BRANCH}")
                    if ( "${GIT_BRANCH}".contains("feature")) {
                        currentBuild.displayName = "${version}-SNAPSHOT-${env.BUILD_ID}"
                    } else {
                        currentBuild.displayName = "${version}-${env.BUILD_ID}"
                    }
                }
            }
        }
    }
}
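One refinement worth considering: the error step fails the build with an explicit message, which is more idiomatic in a declarative pipeline than sh 'exit 1'. A sketch of the script block with that change:

script {
    if ("${GIT_BRANCH}".isEmpty()) {
        // error aborts the build and sets the result to FAILURE
        error('The git branch is not provided, exiting..')
    }
    version = extract_version("${GIT_BRANCH}")
    if ("${GIT_BRANCH}".contains("feature")) {
        currentBuild.displayName = "${version}-SNAPSHOT-${env.BUILD_ID}"
    } else {
        currentBuild.displayName = "${version}-${env.BUILD_ID}"
    }
}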
I am trying to fail a build step in my Jenkinsfile so that the build result is FAILURE. Once the step fails, it should trigger my rollback job. I have tried many different things but had no luck. Any help would be greatly appreciated.
pipeline {
    agent any
    stages {
        stage('Git Checkout') {
            steps {
                script {
                    git 'somegit-repo'
                    sh '''
                        mvn package
                    '''
                    echo currentBuild.result
                    catchError {
                        build 'rollback'
                    }
                }
            }
        }
    }
}
One way is to use a shell step with an exit 1 statement,
e.g.
sh "exit 1"
Or you can use the error step:
error('Failing build because...')
See https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#error-error-signal
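catchError also accepts parameters, so the step can mark the build and stage as FAILURE instead of silently swallowing the error. A minimal sketch:

// Sketch: mark the build and stage as FAILURE if mvn fails,
// while allowing the pipeline to continue past this step.
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
    sh 'mvn package'
}

The rollback itself is better placed in a post { failure } block, as a later answer shows.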
Use a try/catch block:
node {
    stage("Run scripts") {
        try {
            <some command/script>
        } catch (error) {
            <rollback command/script>
        }
    }
}
Thank you so much. This seems to work!
stages {
    stage("some test") {
        steps {
            script {
                git 'mygitrepo.git'
                try {
                    sh ''' mvn test '''
                } catch (error) {
                    def job = build job: 'rollback-job'
                }
            }
        }
    }
}
If you check the cleaning and notifications page, you can use a post block, get rid of all the try/catch stuff, and end up with a cleaner Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('No-op') {
            steps {
                sh 'ls'
            }
        }
    }
    post {
        always {
            echo 'One way or another, I have finished'
            deleteDir() /* clean up our workspace */
        }
        success {
            echo 'I succeeded!'
        }
        unstable {
            echo 'I am unstable :/'
        }
        failure {
            echo 'I failed :('
        }
        changed {
            echo 'Things were different before...'
        }
    }
}
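Applied to the rollback scenario from the question, the same idea looks like this (a sketch, reusing the repo and job names from above):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                git 'somegit-repo'
                sh 'mvn package'
            }
        }
    }
    post {
        failure {
            // runs only when the build fails, replacing the try/catch
            build job: 'rollback-job'
        }
    }
}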
Currently we have a Jenkins pipeline with 4 stages: Setup, Build, Deploy, Teardown. Deploy and Teardown prompt for manual user input, and we don't want that manual input to take up an executor, so we want to use agent none. However, when resuming, there is no guarantee we get the same Jenkins workspace. The stash/unstash documentation says it uses a lot of resources, so it should not be used with large files. Is there a way to get the exact slave, so that on resume the pipeline runs back on that same slave?
I have something like this now. I also tried agent gcp at the top level, and putting agent none in the manual-input stage.
pipeline {
    agent none
    environment {
        userInput = false
    }
    stages {
        stage('Setup') {
            agent { node { label 'gcp' } }
            steps {
                deleteDir()
                dir('pipelines') {
                    checkout scm
                }
                dir('deployment_pipelines') {
                    git branch: __deployment_scripts_code_branch, credentialsId: 'jenkins', url: __deployment_scripts_code_repo
                }
                dir('gcp_template_core') {
                    git branch: __gcp_template_code_branch, credentialsId: 'jenkins', url: __gcp_template_code_repo
                }
                dir('control_repo') {
                    git branch: _control_repo_branch, credentialsId: 'jenkins', url: _control_repo
                }
                // Copy core templates to the project
                sh('bash deployment_pipelines/deployment/setup.sh gcp_template_core/gcp_foundation/ control_repo')
            }
        }
        stage('Build') {
            agent { node { label 'gcp' } }
            steps {
                sh('printenv') //TODO: Remove. Debug only
                sh('python deployment_pipelines/deployment/build.py control_repo --env ${_env_type_long}')
            }
        }
        stage('Deploy') {
            agent { node { label 'gcp' } }
            steps {
                sh('python deployment_pipelines/deployment/deploy.py control_repo --env ${_env_type_short}')
            }
        }
        stage('Release') {
            steps {
                agent none
                script {
                    sh('python deployment_pipelines/deployment/set_manual_approvers.py deployment_pipelines/config/production-release-approvers.yaml -o approver.txt')
                    def approvers = readFile('approver.txt')
                    try {
                        userInput = input(
                            message: 'Do you want to proceed with Release?',
                            submitter: approvers)
                    } catch (err) { // input false
                        //def user = err.getCauses()[0].getUser() //need script approval for getUser()
                        userInput = false
                        // echo "Aborted by [${user}]"
                    }
                    agent { node { label 'gcp' } }
                    if (userInput) {
                        sh("echo 'Do Release'")
                    }
                }
            }
        }
        stage('Teardown') {
            agent { node { label 'gcp' } }
            steps {
                script {
                    def approvers = readFile('approver.txt')
                    try {
                        userInput = input(
                            message: 'Do you want to proceed with Teardown?',
                            submitter: approvers)
                    } catch (err) { // input false
                        //def user = err.getCauses()[0].getUser() //need script approval for getUser()
                        userInput = false
                        // echo "Aborted by [${user}]"
                    }
                    if (userInput) {
                        sh("echo 'Do Teardown'")
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'DO TEARDOWN REGARDLESS'
        }
    }
}
agent none should be declared above the steps block in stage('Release'), not inside it. You can refer to https://jenkins.io/doc/book/pipeline/syntax/#agent for the syntax and flow.
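A minimal sketch of a fixed Release stage using the declarative input directive, which pauses the stage before its agent is allocated, so the approval prompt does not hold an executor. The submitter list in the directive must be static, so the approvers are hard-coded here for illustration (the approver.txt lookup from the question would have to stay a script step in an earlier stage):

stage('Release') {
    agent { node { label 'gcp' } }
    input {
        message 'Do you want to proceed with Release?'
        submitter 'alice,bob' // illustrative static list; the directive cannot read approver.txt
    }
    steps {
        sh "echo 'Do Release'"
    }
}

If the user rejects the input, the stage (and build) is aborted, which replaces the try/catch around the scripted input step.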