How to run Ansible playbook with user input within Jenkins? - jenkins

I have a playbook with a "pause" task that shows a prompt. If I create a job in Jenkins with the Pipeline plugin and run it, I get
[WARNING]: Not waiting for response to prompt as stdin is not interactive
and the job fails. The question is: how can I run the job in interactive mode, or how can I pause the playbook at an exact task and send the Ctrl+C, C key combination (because that is the only way the Ansible 'pause' module continues)? I have googled a few times and tried to do it with
def userInput = input(
    id: 'Password', message: 'input your input: ', ok: 'ok',
    parameters: [string(defaultValue: '', description: '.....', name: 'INPUT_TEST')])
But I can't send the key combination, and I don't understand how to pause the Jenkins job at a specific Ansible task within the playbook.
Pipeline example:
pipeline {
    agent { label 'master' }
    environment {
        WORKDIR = '/home/jenkins/'
    }
    stages {
        stage('Checkout') {
            agent { label 'master' }
            steps {
                sh '''cd $WORKDIR
                ansible-playbook -vvvv manual_playbooks/test.yml'''
            }
        }
        stage('Echo') {
            agent { label 'master' }
            steps {
                sh 'echo something'
            }
        }
    }
}
Playbook example:
---
- name: test
  hosts: localhost
  tasks:
    - name: Echo start
      shell: echo 'start playbook'
    - pause:
        prompt: "do you want to continue?"
        echo: yes
      register: prompt_status
    - name: Continue tasks
      shell: echo 'Continue full flow'
      register: reset_account_response
      when: prompt_status.user_input is defined and
            prompt_status.user_input == "yes"
    - fail:
        msg: "Unexpected user input while prompting approval"
      when: prompt_status.user_input is defined and
            prompt_status.user_input != "yes"
Many thanks.

Why not put a when clause on your pause task to check for the existence of some other variable, and then pass that in using Ansible's -e option when running the playbook through Jenkins?
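A minimal sketch of that idea against the pipeline above; prompt_answer is a hypothetical variable name, and it assumes the playbook's pause task is dropped or guarded (e.g. with when: prompt_answer is not defined) so the later tasks test prompt_answer instead of prompt_status.user_input:
stage('Checkout') {
    agent { label 'master' }
    steps {
        script {
            // Ask in the Jenkins UI instead of on stdin; with a single
            // parameter, input() returns the chosen value directly
            def answer = input(message: 'Do you want to continue?',
                parameters: [choice(name: 'CONTINUE', choices: ['yes', 'no'], description: '')])
            // Hand the answer to Ansible as an extra variable instead of
            // expecting the pause module to read it interactively
            sh "cd \$WORKDIR && ansible-playbook -vvvv manual_playbooks/test.yml -e prompt_answer=${answer}"
        }
    }
}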

Related

Avoid printing passwords in Jenkins console

We have a Jenkins stage which calls a "GetVMPassword" function from a library. The function returns a credential that is used to log in to a remote server. We don't want to print the "ssh command", the "calling a function command", or its response in the stage logs, because that could reveal the remote server credentials. So we used '#!/bin/sh -e \n' before every command. This worked as long as we didn't use a "parallel execution" block.
When we include the "ssh command" and the "calling a function command" inside a "parallel execution" block, passwords are printed in the stage logs.
How can we avoid printing the library command and its response in the stage logs when we use a "parallel execution" block?
This is a snippet of my stage and parallel execution block.
Jenkins Version: 2.235.3
@Library('MyLib_API') _
pipeline {
    agent {
        label 'master'
    }
    stages {
        stage('BuildAll') {
            steps {
                script {
                    def executions = APPSERVERS.split(',').collectEntries { APPS ->
                        ["Execution ${APPS}": {
                            stage(APPS) {
                                APP_USERNAME = "ubuntu"
                                response = getPassword("${APPS}", "${APP_USERNAME}")
                                sh '#!/bin/sh -e \n' + "sshpass -p '${response}' ssh -o StrictHostKeyChecking=no ${APP_USERNAME}@${APPS} 'ls'"
                                sleep 2
                            }
                        }]
                    }
                    parallel executions
                }
            }
        }
    }
}
"getPassword" is a library function used to fetch the VM password dynamically.
The "APPSERVERS" values come from the Active Choices parameter option; this is a list of server IPs.
Please help me hide those library commands and their responses from the stage logs.
We have tried the options below.
We used set +x, and it did not work for us.
A password-masking plugin will not work, since in our case the response from the command still gets printed.
We tried routing all command output to a file and fetching it from there; with this option the logs are also printed in the stage logs while parsing the file.
Try starting your script with set +x; if that doesn't work, use password-masking plugins,
as mentioned here - https://issues.jenkins.io/browse/JENKINS-36007
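For the parallel block above, that could look like the following sketch; VM_PASS and TARGET are hypothetical environment variable names, and passing the secret through withEnv avoids interpolating it into the shell script body:
stage(APPS) {
    APP_USERNAME = "ubuntu"
    response = getPassword("${APPS}", "${APP_USERNAME}")
    withEnv(["VM_PASS=${response}", "TARGET=${APP_USERNAME}@${APPS}"]) {
        // set +x turns off the shell trace that the default 'sh -xe'
        // wrapper enables, so the expanded command line is not echoed to the log
        sh '''
            set +x
            sshpass -p "$VM_PASS" ssh -o StrictHostKeyChecking=no "$TARGET" ls
        '''
    }
    sleep 2
}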
You can use input to pass the credential and mask it in the log.
Here is a detailed answer: stackoverflow credentials masking
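If the Mask Passwords plugin is installed, its build wrapper can also be applied around individual steps, which works inside parallel branches as well; a sketch, assuming the response variable from the question holds the secret:
// Requires the Mask Passwords plugin: every occurrence of the given
// password value is replaced with **** in the log for the wrapped steps
wrap([$class: 'MaskPasswordsBuildWrapper',
      varPasswordPairs: [[var: 'VM_PASS', password: response]]]) {
    sh "sshpass -p '${response}' ssh -o StrictHostKeyChecking=no ${APP_USERNAME}@${APPS} 'ls'"
}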
You can use this as well; it works for me.
node('Node Name') {
    println('Please enter the username')
    def userName = input(
        id: 'userName', message: 'VPN Username', parameters: [
            [$class: 'hudson.model.TextParameterDefinition', defaultValue: '', name: 'Username', description: 'Please enter your username']
        ])
    println('Please enter the password')
    def userPassword = input(
        id: 'userPassword', message: 'VPN Password', parameters: [
            [$class: 'hudson.model.PasswordParameterDefinition', defaultValue: '', name: 'Password', description: 'Please enter your password']
        ])
    connectToClient = bat(returnStdout: true, script: 'start Forticlient connect -h v3 -u ' + userName + ':' + userPassword)
    stage('Deploy (Test)') {
        withCredentials([usernamePassword(credentialsId: 'IH_IIS_JENKINS', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
            bat "msdeploy command"
        }
    }
}

Run jenkins job based on previous job status

I have a scenario where I need a job to run only if the previous build was a success.
If the previous build failed, I need the user to wait for admin approval.
If the previous build succeeded, the user can run the job.
Can anyone help me with how to go about it?
Below is a pipeline script which checks the previous build and asks for approval if it failed.
pipeline {
    agent any
    stages {
        stage('Previous Status check') {
            steps {
                echo "Checking"
                sleep 10
            }
        }
        stage('Deploy approval') {
            when {
                expression {
                    // True when the last build did not succeed; note that
                    // currentBuild.rawBuild needs script approval in a sandboxed pipeline
                    !hudson.model.Result.SUCCESS.equals(currentBuild.rawBuild.getPreviousBuild()?.getResult())
                }
            }
            steps {
                input(message: 'Last build failed, please check the target group and approve', ok: 'Release!', submitter: "ritesh.mahajan")
            }
        }
        stage('Building code on server') {
            steps {
                script {
                    // Get the input
                    def userInput = input(
                        id: 'userInput', message: 'Enter the branch to build',
                        parameters: [
                            string(defaultValue: 'Master',
                                   description: 'Enter the branch',
                                   name: 'Branch')
                        ])
                    def inputbranch = userInput
                    echo "${inputbranch}"
                    echo "Here we can execute a script on the remote machine; by default it will build master. We can also accept a parameter."
                }
            }
        }
    }
}

How to check if ansible playbook failed inside Jenkins pipeline script

Below is my Jenkins pipeline script. I wish to call the ex("ansible-failed") function whenever ansible-playbook test.yml fails.
def ex(param)
{
    echo "ABORT due to: " + param
}
pipeline
{
    agent any  // declarative pipelines require an agent declaration
    stages
    {
        stage('first')
        {
            steps
            {
                script
                {
                    def user = "user1"
                }
                echo "Calling ansible"
                ansiblePlaybook(
                    playbook: '/app/test.yml',
                    extraVars: [ app_ip: "10.0.0.12,10.0.0.13" ]
                )
            }
        }
        stage('second')
        {
            steps
            {
                script
                {
                    println "Second Play"
                }
            }
        }
    }
}
The above Jenkins pipeline script invokes ansible-playbook test.yml; however, I do not know how to detect whether the Ansible play succeeded or failed. If it failed, then I wish to call the ex() function.
In case the ansible-playbook run succeeds, I wish to simply continue and execute stage('second').
Can you please suggest how to check whether the Ansible run succeeded or failed inside the Jenkins pipeline script?
You can use try/catch for that.
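A minimal sketch of that approach applied to the first stage above; the ansiblePlaybook step throws an exception when the playbook fails, so the catch block is the place to call ex():
stage('first')
{
    steps
    {
        script
        {
            try {
                ansiblePlaybook(
                    playbook: '/app/test.yml',
                    extraVars: [ app_ip: "10.0.0.12,10.0.0.13" ]
                )
            } catch (err) {
                ex("ansible-failed")
                throw err  // re-throw to abort; remove this line if stage('second') should still run
            }
        }
    }
}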

How can I rename Jenkins' pull request builder's "status check" display name on GitHub

We have a project on GitHub which has two Jenkins Multibranch Pipeline jobs - one builds the project and the other runs tests. The only difference between these two pipelines is that they have different JenkinsFiles.
I have two problems that I suspect are related to one another:
In the GitHub status check section I only see one check with the following title:
continuous-integration/jenkins/pr-merge — This commit looks good,
which directs me to the test Jenkins pipeline. This means that our build pipeline is not being picked up by GitHub even though it is visible on Jenkins. I suspect this is because both the checks have the same name (i.e. continuous-integration/jenkins/pr-merge).
I have not been able to figure out how to rename the status check message for each Jenkins job (i.e. test and build). I've been through this similar question, but its solution wasn't applicable to us, as Build Triggers aren't available in Multibranch Pipelines.
If anyone knows how to change this message on a per-job basis for Jenkins Multibranch Pipelines that'd be super helpful. Thanks!
Edit (just some more info):
We've set up GitHub/Jenkins webhooks on the repository, and builds do get started for both our build and test jobs; it's just that the status check/message doesn't get displayed on GitHub for both (only for test, it seems).
Here is our Jenkinsfile for the build job:
#!/usr/bin/env groovy
properties([[$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''], buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')), [$class: 'ScannerJobProperty', doNotScan: false]])
node {
    stage('Initialize') {
        echo 'Initializing...'
        def node = tool name: 'node-lts', type: 'jenkins.plugins.nodejs.tools.NodeJSInstallation'
        env.PATH = "${node}/bin:${env.PATH}"
    }
    stage('Checkout') {
        echo 'Getting our source code...'
        checkout scm
    }
    stage('Install Dependencies') {
        echo 'Retrieving tooling versions...'
        sh 'node --version'
        sh 'npm --version'
        sh 'yarn --version'
        echo 'Installing node dependencies...'
        sh 'yarn install'
    }
    stage('Build') {
        echo 'Running build...'
        sh 'npm run build'
    }
    stage('Build Image and Deploy') {
        echo 'Building and deploying image across pods...'
        echo "This is the build number: ${env.BUILD_NUMBER}"
        // sh './build-openshift.sh'
    }
    stage('Upload to s3') {
        if (env.BRANCH_NAME == "master") {
            withAWS(region: 'eu-west-1', credentials: '****') {
                def identity = awsIdentity();
                s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*');
                cfInvalidate(distribution: 'EBAX8TMG6XHCK', paths: ['/*']);
            }
        };
        if (env.BRANCH_NAME == "PRODUCTION") {
            withAWS(region: 'eu-west-1', credentials: '****') {
                def identity = awsIdentity();
                s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*');
                cfInvalidate(distribution: 'E6JRLLPORMHNH', paths: ['/*']);
            }
        };
    }
}
Try to use GitHubCommitStatusSetter (see this answer for declarative pipeline syntax). You're using scripted pipeline syntax, so in your case it will be something like this (note: this is just a prototype, and it definitely must be changed to match your project specifics):
#!/usr/bin/env groovy
properties([[$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''], buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')), [$class: 'ScannerJobProperty', doNotScan: false]])
node {
    // ...
    // 'context' is the status-check name GitHub will display; pick one per job
    def context = "continuous-integration/jenkins/build"
    stage('Upload to s3') {
        try {
            setBuildStatus(context, "In progress...", "PENDING");
            if (env.BRANCH_NAME == "master") {
                withAWS(region: 'eu-west-1', credentials: '****') {
                    def identity = awsIdentity();
                    s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*');
                    cfInvalidate(distribution: 'EBAX8TMG6XHCK', paths: ['/*']);
                }
            };
            // ...
            setBuildStatus(context, "Success", "SUCCESS");
        } catch (Exception e) {
            setBuildStatus(context, "Failure", "FAILURE");
            throw e;  // re-throw so the build itself is still marked as failed
        }
    }
}
void setBuildStatus(context, message, state) {
    step([
        $class: "GitHubCommitStatusSetter",
        contextSource: [$class: "ManuallyEnteredCommitContextSource", context: context],
        reposSource: [$class: "ManuallyEnteredRepositorySource", url: "https://github.com/my-org/my-repo"],
        errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
        statusResultSource: [$class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: message, state: state]]]
    ]);
}
Please check this and this for more details.
You can use the Github Custom Notification Context SCM Behaviour plugin: https://plugins.jenkins.io/github-scm-trait-notification-context/
After installing it, go to the job configuration. Under "Branch sources" -> "GitHub" -> "Behaviors" click "Add" and select "Custom Github Notification Context" from the dropdown menu. Then you can type your custom context name into the "Label" field.
This answer is pretty much like @biruk1230's answer. But if you don't want to downgrade your GitHub plugin to work around the bug, you could call the API directly.
void setBuildStatus(String message, String state)
{
    env.COMMIT_JOB_NAME = "continuous-integration/jenkins/pr-merge/sanity-test"
    withCredentials([string(credentialsId: 'github-token', variable: 'TOKEN')])
    {
        // 'set -x' is for debugging. Don't worry, the access token won't actually be logged.
        // Also, the sh command actually executed is not logged verbatim; it is further escaped when written to the log.
        sh """
        set -x
        curl \"https://api.github.com/repos/thanhlelgg/brain-and-brawn/statuses/$GIT_COMMIT?access_token=$TOKEN\" \
            -H \"Content-Type: application/json\" \
            -X POST \
            -d \"{\\\"description\\\": \\\"$message\\\", \\\"state\\\": \\\"$state\\\", \
            \\\"context\\\": \\\"${env.COMMIT_JOB_NAME}\\\", \\\"target_url\\\": \\\"$BUILD_URL\\\"}\"
        """
    }
}
The problem with both methods is that continuous-integration/jenkins/pr-merge will be displayed no matter what.
This will be helpful together with @biruk1230's answer.
You can remove Jenkins' status check named continuous-integration/jenkins/something and add a custom status check with GitHubCommitStatusSetter. That has a similar effect to renaming the context of the status check.
Install the Disable GitHub Multibranch Status plugin on Jenkins.
It can be applied by setting the behavior option of the Multibranch Pipeline job on Jenkins.
Thanks for your question and other answers!

Jenkins Pipeline Conditional Stage based on Environment Variable

I want to create a Jenkins (v2.126) declarative-syntax pipeline which has stages with when() clauses checking the value of an environment variable. Specifically, I want to set a Jenkins job parameter (so 'Build with Parameters', not pipeline parameters) and have it determine whether a stage is executed.
I have stage code like this:
stage('plan') {
    when {
        environment name: ExecuteAction, value: 'plan'
    }
    steps {
        sh 'cd $dir && $tf plan'
    }
}
The parameter name is ExecuteAction. However, when ExecuteAction is set to plan via a job "Choice" parameter, this stage does not run. I can see that the appropriate value is coming in via the environment variable by adding this debug stage:
stage('debug') {
    steps {
        sh 'echo "ExecuteAction = $ExecuteAction"'
        sh 'env'
    }
}
And I get Console output like this:
[Pipeline] stage
[Pipeline] { (debug)
[Pipeline] sh
[workspace] Running shell script
+ echo 'ExecuteAction = plan'
ExecuteAction = plan
[Pipeline] sh
[workspace] Running shell script
+ env
...
ExecuteAction=plan
...
I am using the when declarative syntax from the Jenkins book's pipeline syntax page, at about mid-page, under the when section, built-in conditions.
Jenkins is running on Gnu/Linux.
Any ideas what I might be doing wrong?
Duh! You need to quote the environment variable's name in the when clause.
stage('plan') {
    when {
        environment name: 'ExecuteAction', value: 'plan'
    }
    steps {
        sh 'cd $dir && $tf plan'
    }
}
I believe you need to use params instead of environment. Try the following:
when {
    expression { params.ExecuteAction == 'plan' }
}
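Putting the two answers together, a minimal self-contained sketch; the choices list is an assumption, since only the plan value appears in the question:
pipeline {
    agent any
    parameters {
        // Shown under "Build with Parameters"; the value is also exposed
        // as an environment variable of the same name
        choice(name: 'ExecuteAction', choices: ['plan', 'apply'], description: 'Action to run')
    }
    stages {
        stage('plan') {
            when {
                expression { params.ExecuteAction == 'plan' }
            }
            steps {
                sh 'cd $dir && $tf plan'
            }
        }
    }
}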
