Terraform cannot pull modules as part of Jenkins pipeline

I have a Jenkinsfile that was working and able to deploy some infrastructure automatically with Terraform. Unfortunately, after adding a Terraform module with a Git source, it stopped working with the following error:
+ terraform init -input=false -upgrade
Upgrading modules...
- module.logstash
Updating source "git::https://bitbucket.org/*****"
Error downloading modules: Error loading modules: error downloading 'https://bitbucket.org/*****': /usr/bin/git exited with 128: Cloning into '.terraform/modules/34024e811e7ce0e58ceae615c545a1f8'...
fatal: could not read Username for 'https://bitbucket.org': No such device or address
script returned exit code 1
The URLs above were obfuscated after the fact. Below is the cut-down module syntax:
module "logstash" {
    source = "git::https://bitbucket.org/******"
    ...
}
Below is the Jenkinsfile:
pipeline {
    agent {
        label 'linux'
    }
    triggers {
        pollSCM('*/5 * * * *')
    }
    stages {
        stage('init') {
            steps {
                sh 'terraform init -input=false -upgrade'
            }
        }
        stage('validate') {
            steps {
                sh 'terraform validate -var-file="production.tfvars"'
            }
        }
        stage('deploy') {
            when {
                branch 'master'
            }
            steps {
                sh 'terraform apply -auto-approve -input=false -var-file=production.tfvars'
            }
        }
    }
}
I believe the problem is that Terraform internally uses git to check out the module, but Jenkins has not configured the git client within the pipeline job itself. Ideally I would somehow pass the credentials the multibranch pipeline job already uses for checkout into the job and configure git with them, but I am at a loss as to how to do that. Any help would be appreciated.

So I found a non-ideal solution that requires you to specify the credentials inside your Jenkinsfile rather than automatically reusing the credentials the job used for checkout.
withCredentials([usernamePassword(credentialsId: 'bitbucketcreds', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')]) {
    sh "git config --global credential.helper '!f() { sleep 1; echo \"username=${env.GIT_USER}\\npassword=${env.GIT_PASS}\"; }; f'"
    sh 'terraform init -input=false -upgrade'
    sh 'git config --global --remove-section credential'
}
The trick is to load the credentials into environment variables using the withCredentials block, then use the answer from this question to set git's credential helper to read in those credentials. You can then run terraform init and it will pull down your modules. Finally, the modified git settings are removed to avoid contaminating other builds. Note that the --global configuration here is probably not a good idea for most people, but was required for me due to a quirk in our Jenkins agents.
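One caveat with the helper above: whether echo honors the embedded \n depends on which shell provides echo (dash interprets it, bash's builtin does not by default), so a printf-based helper is more portable. A minimal sketch of the output the helper must produce, with placeholder credentials (in the real pipeline these come from withCredentials):

```shell
# Placeholder values; in the pipeline these are injected by withCredentials.
GIT_USER='ci-user'
GIT_PASS='example-token'

# printf reliably emits the two lines git's credential protocol expects,
# regardless of the shell's echo behavior.
printf 'username=%s\npassword=%s\n' "$GIT_USER" "$GIT_PASS"
```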
If anyone has a smoother way of doing this I would be very interested in hearing it.

Related

Terraform Jenkins Pipeline User input Failure

I am facing a problem with a Jenkins Terraform pipeline: terraform asks for user input, I have no way to provide it, and as a result the build fails. Here is my configuration:
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Terraform Image version') {
            agent {
                docker {
                    //args 'arg1' // optional
                    label 'jenkins-mytask'
                    reuseNode true
                    alwaysPull true
                    registryUrl 'https://docker-xyz-virtual.artifactory.corp'
                    image 'docker-xyz-virtual.artifactory.corp/jenkins/slaves/terraform:0.12.15'
                }
            }
            steps {
                sh 'terraform --version'
                sh 'ls -ltr'
                sh 'terraform init -no-color'
                sh 'terraform apply -no-color'
            }
        }
    }
}
Error output from Jenkins:
Do you want to migrate all workspaces to "local"?
Both the existing "s3" backend and the newly configured "local" backend
support workspaces. When migrating between backends, Terraform will copy
all workspaces (with the same names). THIS WILL OVERWRITE any conflicting
states in the destination.
Terraform initialization doesn't currently migrate only select workspaces.
If you want to migrate a select number of workspaces, you must manually
pull and push those states.
If you answer "yes", Terraform will migrate all states. If you answer
"no", Terraform will abort.
Enter a value:
Error: Migration aborted by user.
I need to understand if it is possible to handle such user input event in Jenkins pipeline.
Assuming that you do want to migrate your Terraform state, you must update the flags in the sh step to provide non-interactive input for these prompts:
sh 'terraform init -no-color -input=false -force-copy'
or if you do not want to migrate the state:
sh 'terraform init -no-color -input=false -reconfigure'
Heads up that the next sh step needs a similar modification:
sh 'terraform apply -no-color -input=false -auto-approve'
You also probably want to set the usual environment variable in the environment directive (declarative environment values must be strings):
environment { TF_IN_AUTOMATION = 'true' }
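Putting the pieces together, the stage from the question could be reduced to a sketch like this (choosing -reconfigure, i.e. not migrating state; adapt the flags to your situation):

```groovy
pipeline {
    agent any
    environment {
        // tells Terraform to adjust its output for non-interactive runs
        TF_IN_AUTOMATION = 'true'
    }
    stages {
        stage('Terraform') {
            steps {
                sh 'terraform init -no-color -input=false -reconfigure'
                sh 'terraform apply -no-color -input=false -auto-approve'
            }
        }
    }
}
```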

Copy key file to folder using Jenkinsfile

I am using a scripted Jenkinsfile. I have a .key file stored in Jenkins (where all our env files are kept), and I need to copy that file into the code folder, i.e. store device.key in src/auth/keys, before running the tests in the pipeline. I am unable to find any way to do this.
node {
    def GIT_COMMIT_HASH
    stage('Checkout Source Code and Logging Into Registry') {
        echo 'Logging Into the Private ECR Registry'
        checkout scm
        sh "git rev-parse --short HEAD > .git/commit-id"
        GIT_COMMIT_HASH = readFile('.git/commit-id').trim()
        // NEED TO COPY device.key to /src/auth/key
    }
    stage('TEST') {
        nodejs(nodeJSInstallationName: 'node') {
            sh 'npm install'
            sh 'npm test'
        }
    }
}
How I solved this:
1. I installed the Config File Provider Plugin.
2. I added the files as custom files for each environment.
3. In the Jenkinsfile I replace the configuration file in the project with the one coming from Jenkins:
stage('Add Config files') {
    steps {
        configFileProvider([configFile(fileId: 'ID-of-Jenkins-stored-file', targetLocation: 'relative-path-to-destination-file-in-the-project')]) {
            // some block, maybe a friendly echo for debugging
        }
    }
}
Please see the plugin docs, as it is also capable of replacing tokens in XML, JSON, and many other file types.
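Adapted to the scripted pipeline from the question (scripted pipelines take the step directly, without a steps block), it might look like this; the fileId and target path are placeholders for your own values:

```groovy
node {
    stage('Add Config files') {
        // 'device-key' is a hypothetical fileId of the key stored via the
        // Config File Provider Plugin; adjust the target path to your layout
        configFileProvider([configFile(fileId: 'device-key', targetLocation: 'src/auth/keys/device.key')]) {
            sh 'ls -l src/auth/keys'
        }
    }
}
```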

Jenkins Multibranch Pipeline: How to checkout only once?

I have created a very basic Multibranch Pipeline on my local Jenkins via the Blue Ocean UI. From the default config I removed almost all behaviors except the one for discovering branches. The config looks as follows:
Within my Jenkinsfile I'm trying to set up the following scenario:
Checkout branch
(optionally) Merge it to master branch
Build Back-end
Build Front-end
Snippet from my Jenkinsfile:
pipeline {
    agent none
    stages {
        stage('Setup') {
            agent {
                label "master"
            }
            steps {
                sh "git checkout -f ${env.BRANCH_NAME}"
            }
        }
        stage('Merge with master') {
            when {
                not {
                    branch 'master'
                }
            }
            agent {
                label "master"
            }
            steps {
                sh 'git checkout -f origin/master'
                sh "git merge --ff-only ${env.BRANCH_NAME}"
            }
        }
        stage('Build Back-end') {
            agent {
                docker {
                    image 'openjdk:8'
                }
            }
            steps {
                sh './gradlew build'
            }
        }
        stage('Build Front-end') {
            agent {
                docker {
                    image 'saddeveloper/node-chromium'
                }
            }
            steps {
                dir('./front-end') {
                    sh 'npm install'
                    sh 'npm run buildProd'
                    sh 'npm run testHeadless'
                }
            }
        }
    }
}
The pipeline itself and the build steps work fine, but the problem is that Jenkins adds a "Check out from version control" step before each stage. That step looks for new branches and fetches refs, but also checks out the current branch. Here is the relevant output from the full build log:
// stage Setup
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh git checkout -f my-branch
// stage Merge with master
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh git checkout -f origin/master
sh git merge --ff-only my-branch
// stage Build Back-end
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh ./gradlew build
// stage Build Front-end
> git checkout -f f067047bbdd3a5d5f9d1f2efae274bc175829595
sh npm install
sh npm run buildProd
sh npm run testHeadless
So, as you can see, it effectively resets the working directory to a particular commit before every stage (git checkout -f f067...595).
Is there any way to disable this default checkout behavior?
Or any viable option how to implement such optional merging to master branch?
Thanks!
By default, Jenkins executes the SCM checkout for each stage that allocates a fresh agent in a pipeline. You can disable it by doing:
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    ...
Also, I'd recommend taking a look at the other useful pipeline options: https://jenkins.io/doc/book/pipeline/syntax/#options
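A minimal sketch of how this combines with a single explicit checkout (the stage and label names are illustrative):

```groovy
pipeline {
    agent none
    options {
        // suppress the implicit 'Check out from version control' step
        skipDefaultCheckout true
    }
    stages {
        stage('Setup') {
            agent { label 'master' }
            steps {
                checkout scm   // the one explicit checkout
            }
        }
    }
}
```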

Deploy to Heroku staging, then production with Jenkins

I have a Rails application with a Jenkinsfile which I'd like to set up so that a build is first deployed to staging; then, if I am happy with the result, it can be deployed to production.
I've set up 2 Heroku instances, myapp-staging and myapp-production.
My Jenkinsfile has a node block that looks like:
node {
    currentBuild.result = "SUCCESS"
    setBuildStatus("Build started", "PENDING");
    try {
        stage('Checkout') {
            checkout scm
            gitCommit = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
            shortCommit = gitCommit.take(7)
        }
        stage('Build') {
            parallel 'build-image': {
                sh "docker build -t ${env.BUILD_TAG} ."
            }, 'run-test-environment': {
                sh "docker-compose --project-name myapp up -d"
            }
        }
        stage('Test') {
            ansiColor('xterm') {
                sh "docker run -t --rm --network=myapp_default -e DATABASE_HOST=postgres ${env.BUILD_TAG} ./ci/bin/run_tests.sh"
            }
        }
        stage('Deploy - Staging') {
            // TODO. Use env.BRANCH_NAME to make sure we only deploy from staging
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'Heroku Git Login', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
                sh('git push https://${GIT_USERNAME}:${GIT_PASSWORD}@git.heroku.com/myapp-staging.git staging')
            }
            setBuildStatus("Staging build complete", "SUCCESS");
        }
        stage('Sanity check') {
            steps {
                input "Does the staging environment look ok?"
            }
        }
        stage('Deploy - Production') {
            // TODO. Use env.BRANCH_NAME to make sure we only deploy from master
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'Heroku Git Login', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
                sh('git push https://${GIT_USERNAME}:${GIT_PASSWORD}@git.heroku.com/myapp-production.git HEAD:refs/heads/master')
            }
            setBuildStatus("Production build complete", "SUCCESS");
        }
    }
My questions are:
Is this the correct way to do this, or is there some other best practice? For example, do I need two Jenkins pipelines for this, or is one project pipeline enough?
How can I use Jenkins' BRANCH_NAME variable to change dynamically depending on the stage I'm at?
Thanks in advance!
For the first question: using one Jenkinsfile to describe the complete project pipeline is desirable. It keeps the description of the process in one place and shows you the process flow in one UI, so your Jenkinsfile seems great in that regard.
For the second question: you can wrap steps in if conditions based on the branch. So if you wanted to, say, skip the prod deployment and the step that asks the user whether staging looks ok (since you're not going to do the prod deployment) when the branch is not master, this would work:
node('docker') {
    try {
        stage('Sanity check') {
            if (env.BRANCH_NAME == 'master') {
                input "Does the staging environment look ok?"
            }
        }
        stage('Deploy - Production') {
            echo 'deploy check'
            if (env.BRANCH_NAME == 'master') {
                echo 'do prod deploy stuff'
            }
        }
    } catch (error) {
    }
}
I removed some stuff from your pipeline that wasn't necessary to demonstrate the idea, but I also fixed what looked to me like two issues: 1) you seemed to be mixing metaphors between scripted and declarative pipelines; I think you are trying to use a scripted pipeline, so I made it fully scripted, which means you cannot use steps. 2) Your try was missing a catch.
At the end of the day, the UI is a bit weird with this solution, since all stages will always show up in all cases, and they will show as green, as if they did what they said they would (it will look like it deployed to prod, even on non-master branches). There is no way around this with scripted pipelines, to my knowledge. With declarative pipelines, you can do the same conditional logic with when, and the UI (at least the Blue Ocean UI) actually understands your intent and shows it differently.
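For comparison, a declarative sketch of the same gating with when; stages skipped by the condition are then rendered as skipped:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy - Production') {
            when { branch 'master' }   // stage runs only on master
            steps {
                echo 'do prod deploy stuff'
            }
        }
    }
}
```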
Have fun!

Pipeline step having trouble resolving a file path

I am having trouble getting a shell command to complete in a stage I have defined:
stages {
    stage('E2E Tests') {
        steps {
            node('Protractor') {
                checkout scm
                sh '''
                npm install
                sh 'protractor test/protractor.conf.js --params.underTestUrl http://192.168.132.30:8091'
                '''
            }
        }
    }
}
The shell command issues a protractor call that takes a config-file argument, but protractor fails to find this file.
If I look in the workspace directory where the repo is checked out by the checkout scm step, I can see the test directory is present, with the config file the sh step references.
So I'm unsure why the file cannot be found.
I thought about trying to verify which files are visible around the time the protractor command is issued.
So something like:
stages {
    stage('E2E Tests') {
        steps {
            node('Protractor') {
                checkout scm
                def files = findFiles(glob: 'test/**/*.conf.js')
                sh '''
                npm install
                sh 'protractor test/protractor.conf.js --params.underTestUrl http://192.168.132.30:8091'
                '''
                echo """${files[0].name} ${files[0].path} ${files[0].directory} ${files[0].length} ${files[0].lastModified}"""
            }
        }
    }
}
But this doesn't work; I don't think findFiles can be used inside a step?
Can anyone offer any suggestions about what may be going on here?
Thanks
To do the debugging you were attempting (to see whether the file is actually there), you could wrap the findFiles in a script block (making sure your echo comes before the step that fails), or use a basic find in an sh step like this:
stages {
    stage('E2E Tests') {
        steps {
            node('Protractor') {
                checkout scm
                // you could use the unix find command instead of groovy's findFiles
                // (quote the glob so the shell doesn't expand it prematurely)
                sh "find test -name '*.conf.js'"
                // if you're using a non-dsl-step (like findFiles), you must wrap it in a script block
                script {
                    def files = findFiles(glob: 'test/**/*.conf.js')
                    echo """${files[0].name} ${files[0].path} ${files[0].directory} ${files[0].length} ${files[0].lastModified}"""
                    // note: inside the triple-quoted shell script, protractor is
                    // invoked directly, not wrapped in another sh '...'
                    sh '''
                    npm install
                    protractor test/protractor.conf.js --params.underTestUrl http://192.168.132.30:8091
                    '''
                }
            }
        }
    }
}
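The same debugging can be tried outside Jenkins: fabricate the expected layout, then run the exact find the sh step would run (the directory and file names below are assumptions for illustration):

```shell
# Recreate a plausible workspace layout.
mkdir -p test/e2e
touch test/e2e/protractor.conf.js

# Same command as the pipeline's sh step; the quoted glob is matched by
# find itself rather than expanded by the shell.
find test -name '*.conf.js'
# prints test/e2e/protractor.conf.js
```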
