Terraform Jenkins Pipeline User Input Failure

I am facing a problem with a Jenkins Terraform pipeline: Terraform asks for user input, I have no way to provide it, and as a result the build fails. Here is my configuration:
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Terraform Image version') {
            agent {
                docker {
                    //args 'arg1' // optional
                    label 'jenkins-mytask'
                    reuseNode true
                    alwaysPull true
                    registryUrl 'https://docker-xyz-virtual.artifactory.corp'
                    image 'docker-xyz-virtual.artifactory.corp/jenkins/slaves/terraform:0.12.15'
                }
            }
            steps {
                sh 'terraform --version'
                sh 'ls -ltr'
                sh 'terraform init -no-color'
                sh 'terraform apply -no-color'
            }
        }
    }
}
Error output from Jenkins:
Do you want to migrate all workspaces to "local"?
Both the existing "s3" backend and the newly configured "local" backend
support workspaces. When migrating between backends, Terraform will copy
all workspaces (with the same names). THIS WILL OVERWRITE any conflicting
states in the destination.
Terraform initialization doesn't currently migrate only select workspaces.
If you want to migrate a select number of workspaces, you must manually
pull and push those states.
If you answer "yes", Terraform will migrate all states. If you answer
"no", Terraform will abort.
Enter a value:
Error: Migration aborted by user.
I need to understand whether it is possible to handle such a user input prompt in a Jenkins pipeline.

Assuming that you do want to migrate your Terraform state, you must update the flags in the sh step to provide non-interactive answers for these prompts:
sh 'terraform init -no-color -input=false -force-copy'
or if you do not want to migrate the state:
sh 'terraform init -no-color -input=false -reconfigure'
Heads up that the next sh step also needs to be modified similarly:
sh 'terraform apply -no-color -input=false -auto-approve'
You probably also want to set the usual environment variable in the environment directive (note the value must be a string):
environment { TF_IN_AUTOMATION = 'true' }
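Put together, the relevant parts of the original Jenkinsfile might look like this (a sketch only, assuming the force-copy migration path is wanted; the Docker agent block from the question is omitted for brevity):
pipeline {
    agent any
    environment {
        // signals Terraform that it runs unattended (adjusts its output)
        TF_IN_AUTOMATION = 'true'
    }
    stages {
        stage('Terraform Image version') {
            steps {
                sh 'terraform --version'
                // -input=false fails fast instead of prompting;
                // -force-copy answers "yes" to the backend migration question
                sh 'terraform init -no-color -input=false -force-copy'
                // -auto-approve skips the interactive apply confirmation
                sh 'terraform apply -no-color -input=false -auto-approve'
            }
        }
    }
}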

Related

Jenkins Pipeline: Run the step when you see a new file or when it changes

I have a Laravel application that requires the "yarn" command at initialization, and later only if certain files are changed.
Using the code below I manage to detect when that file has changed, but I need a suggestion for also running it at the first initialization (practically, that file together with all the others appears to be a new file from the perspective of the Jenkinsfile), as in the sketch after the snippet.
Thanks!
Current try:
stage("Install NodeJS dependencies") {
when {
changeset "package.json"
}
agent {
docker {
image 'node:14-alpine'
reuseNode true
}
}
steps {
sh 'yarn --silent'
sh 'yarn dev --silent'
}
}
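One possible approach, sketched here (untested, not from an accepted answer), is to widen the when condition with anyOf so the stage also runs on the very first build of the job:
stage('Install NodeJS dependencies') {
    when {
        anyOf {
            // run when package.json changed in this build's changeset...
            changeset 'package.json'
            // ...or when there is no previous build, i.e. the first run
            expression { currentBuild.previousBuild == null }
        }
    }
    agent {
        docker {
            image 'node:14-alpine'
            reuseNode true
        }
    }
    steps {
        sh 'yarn --silent'
        sh 'yarn dev --silent'
    }
}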

How to do user input in a Jenkinsfile to carry on with terraform apply?

I'm running a Jenkins pipeline job using Jenkinsfile. The primary purpose is to run terraform <plan|apply>, based on the choice parameter to select either plan or apply, like this:
stages {
    stage('tf_run') {
        steps {
            sh '''#!/usr/bin/env bash
            terragrunt ${Action} --terragrunt-source "/var/temp/tf_modules//${tfm}"
            '''
        }
    }
}
Here Action is the choice-parameter variable. It all works for plan but fails for apply, because apply asks for confirmation on whether to proceed, and the job fails instantly. What can I do so that users get to type yes/no (or select from a list), which can then be passed on to terraform apply?
I got stuck in the middle, and I'd appreciate it if anyone could point me in the right direction.
-S
To fit the use case, the Jenkins Pipeline will have three steps:
Generate the plan file
Query user input for plan approval
Apply the plan file if approved
Assumption: you say the pipeline succeeds for plan, which implies to me that Action and tfm are environment variables (i.e. env.Action), because otherwise the String argument to the sh step method would be invalid. Given that assumption (the answer was modified upon request to demonstrate tfm as a pipeline parameter rather than a member of the env object):
parameters {
    string(name: 'tfm', description: 'Terraform module to act upon.')
}
stages {
    stage('TF Plan') {
        steps {
            // execute plan and capture plan output
            sh(
                label: 'Terraform Plan',
                script: "terragrunt plan -out=plan.tfplan -no-color --terragrunt-source '/var/temp/tf_modules//${params.tfm}'"
            )
        }
    }
    stage('TF Apply') {
        // only execute stage if apply is desired
        when { expression { return env.Action == 'apply' } }
        steps {
            // query for user approval of plan
            input(message: 'Click "proceed" to approve the above Terraform Plan')
            // apply the plan if approved
            sh(
                label: 'Terraform Apply',
                script: 'terraform apply -auto-approve -input=false -no-color plan.tfplan'
            )
        }
    }
}
You may also want to add the equivalent of env.TF_IN_AUTOMATION = true to the environment directive. This can be helpful when executing Terraform in a pipeline.
If you also modify the pipeline agent to be e.g. the Terraform CLI image running as a container, then the plan output file will also need to be preserved between stages.
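stash/unstash is one way to preserve the plan file in that situation (a sketch only; the stash name is illustrative):
stage('TF Plan') {
    steps {
        sh 'terraform plan -out=plan.tfplan -no-color -input=false'
        // keep the plan file; a later stage may run in a fresh container/workspace
        stash(name: 'tfplan', includes: 'plan.tfplan')
    }
}
stage('TF Apply') {
    when { expression { return env.Action == 'apply' } }
    steps {
        input(message: 'Click "proceed" to approve the above Terraform Plan')
        // restore the plan file into this stage's workspace
        unstash('tfplan')
        sh 'terraform apply -no-color -input=false plan.tfplan'
    }
}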
You can use terraform apply -auto-approve within your Jenkins job.
See the docs.
Tip: you can add a condition in the Jenkins stage(): when the user chooses the plan parameter, no -auto-approve option is added; otherwise the command appends the -auto-approve option.
stage(plan&apply){
if ${USER_INPUT} == "plan"{
terraform plan
}
else{
terraform apply -auto-approve
}
}
Note: Above Jenkins code might not match to proper Ans but can be taken as example.

Jenkins using docker agent with environment declarative pipeline

I would like to install Maven and npm via a Docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error shown below. It might be due to using agent none, but how can I use node with a Docker agent via a declarative pipeline in Jenkins?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried to set agent any, but this time I received the error "Still waiting to schedule task.
Waiting for next available executor".
pipeline {
    agent none
    // environment {
    //     proxy = https://
    //     stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps {
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it can't be executed, and that is your issue here.
The following two stages have no agent, which means there is no Docker container / server or whatever on which they can be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agents from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    } ...
For further information, see the guide to setting up master and agent machines, distributed Jenkins builds, or the official documentation.
I think you meant to add agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same stage:
agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
although your stage executes only npm commands. I believe only one of the images will be downloaded.
To clarify a bit more on mkemmerz's answer, your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add an agent for the pipeline itself, because input steps block the executor context. See https://jenkins.io/blog/2018/04/09/whats-in-declarative/
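Combining both answers, the overall shape might look like this (a sketch with abbreviated stage bodies):
pipeline {
    // no global agent, so the input stage below does not hold an executor while waiting
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps { sh 'mvn --version' }
        }
        stage('Promotion') {
            // deliberately no agent: input can wait for days without blocking a node
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            agent { docker { image 'maven:3-alpine' } }
            steps { sh 'mvn -f wiservice_api_v1/pom.xml install -Ptest' }
        }
    }
}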

Terraform cannot pull modules as part of jenkins pipeline

I have a Jenkinsfile that was working and able to deploy some infrastructure automatically with Terraform. Unfortunately, after adding a Terraform module with a git source, it stopped working with the following error:
+ terraform init -input=false -upgrade
Upgrading modules...
- module.logstash
Updating source "git::https://bitbucket.org/*****"
Error downloading modules: Error loading modules: error downloading 'https://bitbucket.org/*****': /usr/bin/git exited with 128: Cloning into '.terraform/modules/34024e811e7ce0e58ceae615c545a1f8'...
fatal: could not read Username for 'https://bitbucket.org': No such device or address
script returned exit code 1
The URLs above were obfuscated after the fact. Below is the cut-down module syntax:
module "logstash" {
source = "git::https://bitbucket.org/******"
...
}
Below is the Jenkinsfile:
pipeline {
    agent {
        label 'linux'
    }
    triggers {
        pollSCM('*/5 * * * *')
    }
    stages {
        stage('init') {
            steps {
                sh 'terraform init -input=false -upgrade'
            }
        }
        stage('validate') {
            steps {
                sh 'terraform validate -var-file="production.tfvars"'
            }
        }
        stage('deploy') {
            when {
                branch 'master'
            }
            steps {
                sh 'terraform apply -auto-approve -input=false -var-file=production.tfvars'
            }
        }
    }
}
I believe this to be a problem with Terraform internally using git to check out the module, while Jenkins has not configured the git client within the pipeline job itself. Preferably I would be able to somehow pass the credentials used by the multibranch pipeline job into the job itself and configure git, but I am at a loss as to how to do that. Any help would be appreciated.
So I found a non-ideal solution that requires you to specify the credentials inside your Jenkinsfile rather than automatically using the credentials used by the job for checkout.
withCredentials([usernamePassword(credentialsId: 'bitbucketcreds', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    sh "git config --global credential.helper '!f() { sleep 1; echo \"username=${env.GIT_USER}\\npassword=${env.GIT_PASS}\"; }; f'"
    sh 'terraform init -input=false -upgrade'
    sh 'git config --global --remove-section credential'
}
The trick is to load the credentials into environment variables using the withCredentials block and then I used the answer from this question to set the credential helper for git to read in those creds. You can then run terraform init and it will pull down your modules. Finally it clears the modified git settings to hopefully avoid contaminating other builds. Note that the --global configuration here is probably not a good idea for most people but was required for me due to a quirk in our Jenkins agents.
If anyone has a smoother way of doing this I would be very interested in hearing it.
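One variant some teams use (a sketch, untested, not from the thread; it reuses the same bitbucketcreds ID) is git's insteadOf URL rewriting, letting the shell rather than Groovy expand the secrets:
withCredentials([usernamePassword(credentialsId: 'bitbucketcreds', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    // rewrite bitbucket.org URLs so git (and thus terraform) sends the credentials;
    // note: username/password may need URL-encoding if they contain special characters
    sh 'git config --global url."https://${GIT_USER}:${GIT_PASS}@bitbucket.org".insteadOf "https://bitbucket.org"'
    sh 'terraform init -input=false -upgrade'
    // remove the rewrite so other builds on this agent are unaffected
    sh 'git config --global --remove-section url."https://${GIT_USER}:${GIT_PASS}@bitbucket.org"'
}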

Pass variables between Jenkins stages

I want to pass a variable which I read in stage A to stage B somehow. I see in some examples that people write it to a file, but I guess that is not really a nice solution. I tried writing it to an environment variable, but I'm not really successful with that. How can I set it up properly?
To get it working I tried a lot of things, and read that I should use """ instead of ''' to start a shell, and escape variables as \${foo}, for example.
Below is what I have as a pipeline:
#!/usr/bin/env groovy
pipeline {
    agent { node { label 'php71' } }
    environment {
        packageName = 'my-package'
        packageVersion = ''
        groupId = 'vznl'
        nexus_endpoint = 'http://nexus.devtools.io'
        nexus_username = 'jenkins'
        nexus_password = 'J3nkins'
    }
    stages {
        // Package dependencies
        stage('Install dependencies') {
            steps {
                sh '''
                echo Skip composer installation
                #composer install --prefer-dist --optimize-autoloader --no-interaction
                '''
            }
        }
        // Unit tests
        stage('Unit Tests') {
            steps {
                sh '''
                echo Running PHP code coverage tests...
                #composer test
                '''
            }
        }
        // Create artifact
        stage('Package') {
            steps {
                echo 'Create package refs'
                sh """
                mkdir -p ./build/zpk
                VERSIONTAG=\$(grep 'version' composer.json)
                REGEX='"version": "([0-9]+.[0-9]+.[0-9]+)"'
                if [[ \${VERSIONTAG} =~ \${REGEX} ]]
                then
                    env.packageVersion=\${BASH_REMATCH[1]}
                    /usr/bin/zs-client packZpk --folder=. --destination=./build/zpk --name=${env.packageName}-${env.packageVersion}.zpk --version=${env.packageVersion}
                else
                    echo "No version found!"
                    exit 1
                fi
                """
            }
        }
        // Publish ZPK package to Nexus
        stage('Publish packages') {
            steps {
                echo "Publish ZPK Package"
                sh "curl -u ${env.nexus_username}:${env.nexus_password} --upload-file ./build/zpk/${env.packageName}-${env.packageVersion}.zpk ${env.nexus_endpoint}/repository/zpk-packages/${groupId}/${env.packageName}-${env.packageVersion}.zpk"
                archive includes: './build/**/*.{zpk,rpm,deb}'
            }
        }
    }
}
As you can see, the packageVersion which I read in the Package stage needs to be used in the Publish stage as well.
General tips about the pipeline are of course always welcome as well.
A problem in your code is that you are assigning the version to an environment variable within the sh step. That step executes in its own isolated process, inheriting the parent process's environment variables.
However, the only ways of passing data back to the parent are STDOUT/STDERR and the exit code. As you want a string value, it is best to echo the version from the sh step and assign it to a variable within the script context.
If you reuse the node, the script context persists and the variable will be available in the subsequent stage. A working example is below. Note that putting this inside a parallel block can fail, as the version variable can be written to by multiple processes.
#!/usr/bin/env groovy
pipeline {
    environment {
        AGENT_INFO = ''
    }
    agent {
        docker {
            image 'alpine'
            reuseNode true
        }
    }
    stages {
        stage('Collect agent info') {
            steps {
                echo "Current agent info: ${env.AGENT_INFO}"
                script {
                    def agentInfo = sh script: 'uname -a', returnStdout: true
                    println "Agent info within script: ${agentInfo}"
                    // strip the trailing newline from the captured output
                    AGENT_INFO = agentInfo.replace("\n", "")
                    env.AGENT_INFO = AGENT_INFO
                }
            }
        }
        stage('Print agent info') {
            steps {
                script {
                    echo "Collected agent info: ${AGENT_INFO}"
                    echo "Environment agent info: ${env.AGENT_INFO}"
                }
            }
        }
    }
}
Another option which doesn't involve using script, but is just declarative, is to stash things in a little temporary environment file.
You can then use this stash (like a temporary cache that only lives for the run) if the workload is sprayed out across parallel or distributed nodes as needed.
Something like:
pipeline {
    agent any
    stages {
        stage('first stage') {
            steps {
                // Write out any environment variables you like to a temporary file
                sh 'echo export FOO=baz > myenv'
                // Stash away for later use
                stash 'myenv'
            }
        }
        stage('later stage') {
            steps {
                // Unstash the temporary file and apply it
                unstash 'myenv'
                // use the unstashed vars
                sh 'source myenv && echo $FOO'
            }
        }
    }
}
