Terraform ignores parallelism flag when running through Jenkins - Docker

I am running a Terraform job using a Jenkins pipeline. Terraform refresh takes too long (~10m); running locally with -parallelism=60, Terraform finishes much faster (~2.5m).
When running the same config through a Jenkins slave with parallelism set, I don't see any improvement in running time.
Jenkins ver. 2.154
Jenkins Docker plugin 1.1.6
SSH Agent plugin 1.19
Flow: Jenkins master creates the job -> Jenkins slave runs a Docker image of Terraform
def terraformRun(String envName, String terraformAction, String dirName = 'env') {
    sshagent(['xxxxxxx-xxx-xxx-xxx-xxxxxxxx']) {
        withEnv(["ENV_NAME=${envName}", "TERRAFORM_ACTION=${terraformAction}", "DIR_NAME=${dirName}"]) {
            sh '''#!/bin/bash
                set -e
                ssh-keyscan -H "bitbucket.org" >> ~/.ssh/known_hosts
                AUTO_APPROVE=""
                echo terraform "${TERRAFORM_ACTION}" on "${ENV_NAME}"
                cd "${DIR_NAME}"
                export TF_WORKSPACE="${ENV_NAME}"
                echo "terraform init"
                terraform init -input=false
                echo "terraform refresh"
                # Refresh works, but it seems to ignore -parallelism
                terraform apply -refresh-only -auto-approve -parallelism=60 -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars
                echo "terraform ${TERRAFORM_ACTION}"
                if [ "${TERRAFORM_ACTION}" = "apply" ]; then
                    AUTO_APPROVE="-auto-approve"
                fi
                terraform "${TERRAFORM_ACTION}" -refresh=false -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars ${AUTO_APPROVE}
                echo "terraform ${TERRAFORM_ACTION} on ${ENV_NAME} finished successfully."
            '''
        }
    }
}
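A hedged way to verify that the flag actually reaches Terraform inside the container is to time the refresh with trace logging enabled (TF_LOG and TF_LOG_PATH are standard Terraform environment variables) and compare against the local run. This is a diagnostic sketch only; the grep pattern is a guess, since the exact wording of concurrency-related log entries varies between Terraform versions.
// Diagnostic sketch, not part of the original job: run the same refresh with
// trace logging enabled and compare wall-clock time against a local run.
sh '''
    export TF_LOG=trace
    export TF_LOG_PATH=refresh-trace.log
    time terraform apply -refresh-only -auto-approve -parallelism=60 \
        -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars
    # The exact log wording varies by Terraform version; adjust the pattern as needed.
    grep -i parallel refresh-trace.log | head || true
'''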

Related

ssh-agent not working on jenkins pipeline

I am a newbie trying to implement CI/CD for my hello-world reactive Spring project. After releasing the image to the Docker repo, the next step is to connect to AWS EC2 and run the created image. I have already installed the SSH Agent plugin and successfully tested the SSH connection configured in Manage Jenkins -> Configure System -> SSH client.
Also, my system environment variables have path=C:\Windows\System32\OpenSSH\ssh-agent.exe
In the last step I am getting:
Could not find ssh-agent: IOException: Cannot run program "ssh-agent": CreateProcess error=2, The system cannot find the file specified
Check if ssh-agent is installed and in PATH
[ssh-agent] FATAL: Could not find a suitable ssh-agent provider
My pipeline code:
pipeline {
    agent any
    tools {
        maven 'maven'
        jdk 'jdk1.8'
    }
    environment {
        registry = "my-registry"
        registryCredential = credentials('docker-credentials')
    }
    stages {
        stage('SCM') {
            steps {
                git branch: 'master',
                    credentialsId: 'JenkinsGitlab',
                    url: 'https://www.gitlab.com/my-repo/panda-app'
            }
        }
        stage('Build') {
            steps {
                bat 'mvn clean package spring-boot:repackage'
            }
        }
        stage('Dockerize') {
            steps {
                bat "docker build -t ${registry}:${BUILD_NUMBER} ."
            }
        }
        stage('Docker Login') {
            steps {
                bat "docker login -u ${registryCredential_USR} -p ${registryCredential_PSW}"
            }
        }
        stage('Release to Docker hub') {
            steps {
                bat "docker push ${registry}:${BUILD_NUMBER}"
            }
        }
        stage('Deploy to AWS') {
            steps {
                sshagent(['panda-ec2']) {
                    bat "ssh -o StrictHostKeyChecking=no ubuntu@my-aws-host sudo docker run -p 8080:8080 ${registry}:${BUILD_NUMBER}"
                }
            }
        }
    }
}
The built-in SSH agent of Windows is incompatible with the Jenkins SSH Agent plugin.
I'm using the SSH agent from the Git installation. Make sure to insert the directory(!) path of Git's ssh-agent.exe before any other path, to prevent the Windows SSH agent from being used.
With a default Git for Windows installation, you can set the PATH environment variable like this:
path=c:\Program Files\Git\usr\bin;%path%
For me it didn't work to set the env var from within the Jenkins UI; I added it through the Windows Settings app. When doing so, make sure to insert it before "%SystemRoot%\system32\OpenSSH".
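If you'd rather keep the override close to the pipeline itself, prepending the Git directory to PATH in the Jenkinsfile is worth a try as well; a hedged sketch, assuming the default Git for Windows install location (as noted above, a machine-level setting may still be required on some setups):
// Hedged sketch: prepend Git's usr\bin directory (which contains ssh-agent.exe)
// to PATH for this pipeline only. The install path is the Git for Windows default.
pipeline {
    agent any
    environment {
        PATH = "C:\\Program Files\\Git\\usr\\bin;${env.PATH}"
    }
    stages {
        stage('Check ssh-agent') {
            steps {
                // should now resolve Git's ssh-agent before the Windows one
                bat 'where ssh-agent'
            }
        }
    }
}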

Jenkins Kubernetes Plugin - sshagent to git clone terraform module

I am attempting to use sshagent in Jenkins to pass my private key into the terraform container to allow terraform to source a module in a private repo.
stage('TF Plan') {
    steps {
        container('terraform') {
            sshagent(credentials: ['6c92998a-bbc4-4f27-b925-b50c861ef113']) {
                sh 'ssh-add -L'
                sh 'terraform init'
                sh 'terraform plan -out myplan'
            }
        }
    }
}
When running the job it fails with the following output:
[ssh-agent] Using credentials (id_rsa_jenkins)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent"
exit
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20; export SSH_AUTH_SOCK;
SSH_AGENT_PID=21; export SSH_AGENT_PID;
echo Agent pid 21;
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20
SSH_AGENT_PID=21
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/agent/workspace/demo@tmp/private_key_2729797926.key (user@workstation.local)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ ssh-add -L
ssh-rsa REDACTED user@workstation.local
[Pipeline] sh
+ terraform init
Initializing modules...
- module.demo_proj
  Getting source "git::ssh://git@bitbucket.org/company/terraform-module"
Error downloading modules: Error loading modules: error downloading 'ssh://git@bitbucket.org/company/deploy-kickstart-project': /usr/bin/git exited with 128: Cloning into '.terraform/modules/e11a22f40c64344133a98e564940d3e4'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[Pipeline] }
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent" "-k"
exit
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 21 killed;
[ssh-agent] Stopped.
I've triple-checked that I am using the correct key pair; I am able to git clone the repo locally from my Mac with no issues.
An important note is that this Jenkins deployment runs within Kubernetes. The master stays up and uses the Kubernetes plugin to spawn agents.
What does the "Host key verification failed." error mean? From my research it can be caused by known_hosts not being set properly. Is ssh-agent responsible for that?
Turns out it was an issue with known_hosts not being set. As a workaround, we added this to our Jenkinsfile:
environment {
    GIT_SSH_COMMAND = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
}
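Note that this disables host key verification entirely. A stricter alternative is to pin the host key with ssh-keyscan before terraform init runs, the same technique used in the other questions on this page; a hedged sketch, assuming the container's home directory is writable and ssh-keyscan is available there:
// Hedged sketch: trust bitbucket.org's host key explicitly instead of
// disabling StrictHostKeyChecking. Assumes ssh-keyscan exists in the container.
stage('Trust Bitbucket host key') {
    steps {
        container('terraform') {
            sh 'mkdir -p ~/.ssh && ssh-keyscan -t rsa bitbucket.org >> ~/.ssh/known_hosts'
        }
    }
}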

How to create a Jenkins input that's non-blocking and based on previous command output

I have two issues that are both part of the same problem. I am running Terraform inside a Jenkinsfile; this all happens in a Docker container that runs on a specific node. I have a few different environments with the EC2 plugin that are labeled 'environment_ec2'. It's done this way since we use Ansible, and I want to be able to execute Ansible locally in the VPC.
1) How do you create an input and stage that are only executed if a previous command returns a specific output?
2) How can I make this non-blocking?
node('cicd_ec2') {
    stage('Prepare Environment') {
        cleanWs()
        checkout scm
    }
    withAWSParameterStore(credentialsId: 'jenkin_cicd', naming: 'relative', path: '/secrets/cicd/', recursive: true, regionName: 'us-east-1') {
        docker.image('jseiser/jenkins_devops:0.7').inside {
            stage('Configure Git Access') {
                sh 'mkdir -p ~/.ssh'
                sh 'mv config ~/.ssh/config'
                sh 'chmod 600 ~/.ssh/config'
                sh "echo '$BITBUCKET_CLOUD' > ~/.ssh/bitbucket_rsa"
                sh 'chmod 600 ~/.ssh/bitbucket_rsa'
                sh "echo '$CICD_CODE_COMMIT_KEY' > ~/.ssh/codecommit_rsa"
                sh 'chmod 600 ~/.ssh/codecommit_rsa'
                sh "echo '$IDAUTO_CICD_MGMT_PEM' > ~/.ssh/idauto-cicd-mgmt.pem"
                sh 'chmod 600 ~/.ssh/idauto-cicd-mgmt.pem'
                sh 'ssh-keyscan -t rsa bitbucket.org >> ~/.ssh/known_hosts'
                sh 'ssh-keyscan -t rsa git-codecommit.us-east-1.amazonaws.com >> ~/.ssh/known_hosts'
            }
            stage('Terraform') {
                sh './init-ci.sh'
                sh 'terraform validate'
                sh 'terraform plan -detailed-exitcode -out=create.tfplan'
            }
            input 'Deploy stack?'
            stage('Terraform Apply') {
                sh 'terraform apply -no-color create.tfplan'
            }
            stage('Ansible') {
                sh 'ansible-galaxy -vvv install -r requirements.yml'
                sh 'ansible-playbook -i ~/ vpn.yml'
            }
        }
    }
}
I only want to run the input and terraform apply if the exit code of the command below is 2. (With -detailed-exitcode, terraform plan exits 0 when there are no changes, 1 on error, and 2 when the plan succeeded and contains changes.)
terraform plan -detailed-exitcode
Since this all has to run on an EC2 instance, and it all has to use this container, I am not sure how I can put the input outside of a node block as is usually recommended. If the input sits long enough, the instance may go down, the rest of the code would run on a new instance/workspace, and the information I need from the Git repos and the Terraform plan would no longer be present. The repo I check out contains the Terraform configuration, the Ansible configuration, and some SSH configuration so that Terraform and Ansible can pull in their modules/roles from private Git repos. The create.tfplan that I need when Terraform has changes would also have to be passed around.
I'm just confused about how to ask for input only when I really need to run terraform apply, and how to make that input non-blocking.
I had to adapt this from my work-in-progress, which is based on a declarative pipeline, but I hope it still mostly works:
def tfPlanExitCode
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Plan') {
        // returnStatus captures the exit code instead of failing the step on non-zero
        tfPlanExitCode = sh(script: 'terraform plan -out=create.tfplan -detailed-exitcode', returnStatus: true)
        stash 'workspace'
    }
}
// -detailed-exitcode returns 2 when the plan succeeded and contains changes
if (tfPlanExitCode == 2) {
    input('Deploy stack?')
    stage('Apply') {
        node {
            unstash 'workspace'
            sh 'terraform apply -no-color create.tfplan'
        }
    }
}
The building blocks are:
don't allocate an executor while the input is waiting (for hours...);
stash your workspace contents (you can optionally specify which files to copy) and unstash them later on the agent that continues the build.
The stage visualization might be a bit screwed up when some builds have the Apply stage and some don't. That's why I'm using declarative pipelines, which allow you to nicely/explicitly skip stages.
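For reference, a declarative version of the same idea might look like the sketch below; the stage names and the when/input arrangement are assumptions, not the answerer's actual work-in-progress. The stage-level input directive pauses before the stage's agent is allocated, and beforeInput makes the when condition run before the prompt is shown:
// Hedged declarative sketch: the Apply stage is skipped (and its prompt never
// shown) unless the plan reported changes, and no executor is held while waiting.
def tfPlanExitCode
pipeline {
    agent none
    stages {
        stage('Plan') {
            agent any
            steps {
                checkout scm
                script {
                    tfPlanExitCode = sh(script: 'terraform plan -out=create.tfplan -detailed-exitcode', returnStatus: true)
                }
                stash 'workspace'
            }
        }
        stage('Apply') {
            when {
                beforeInput true
                expression { tfPlanExitCode == 2 }
            }
            input { message 'Deploy stack?' }
            agent any
            steps {
                unstash 'workspace'
                sh 'terraform apply -no-color create.tfplan'
            }
        }
    }
}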

How can I add another git pull from one Jenkins pipeline stage

I would like to ask how I can add another step for pulling a second repo in the Jenkins pipeline I created. In the pipeline settings I already specified a repo for pulling the UI to build. After that build is made, I need to pull another repo for the API and build both as one Docker image. I already tried the documentation, but I'm having trouble combining the UI files with the API. Here is the pipeline script I used:
pipeline {
    agent { label 'slave-jenkins' }
    stages {
        blah blah
        stage('Workspace Cleanup') {
            steps {
                step([$class: 'WsCleanup'])
                checkout scm
            }
        }
        stage('Download and Build UI Files') {
            steps {
                sh '''#!/bin/bash
                    echo "###########################"
                    echo "PERFORMING NPM INSTALL"
                    echo "###########################"
                    npm install
                    echo "###########################"
                    echo "PERFORMING NPM RUN BUILD"
                    echo "###########################"
                    npm run build
                    echo "###########################"
                    echo "Downloading API Repo"
                    echo "###########################"
                    git branch: 'master',
                    credentialsId: 'XXXXXXXXXXXX',
                    url: 'ssh://git@XXXXXXe:7999/~lXXXXXX.git'
                    echo ""
                '''
            }
        }
    }
}
You shouldn't include Jenkins Pipeline DSL inside a shell script - it must be a separate step, for example:
steps {
    // first step to run shell script
    sh '''#!/bin/bash
        echo "###########################"
        echo "PERFORMING NPM INSTALL"
        echo "###########################"
        npm install
        echo "###########################"
        echo "PERFORMING NPM RUN BUILD"
        echo "###########################"
        npm run build
        echo "###########################"
        echo "Downloading API Repo"
        echo "###########################"
    '''
    // second step to checkout git repo
    git branch: 'master',
        credentialsId: 'XXXXXXXXXXXX',
        url: 'ssh://git@XXXXXXe:7999/~lXXXXXX.git'
}
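If the two working trees must not overwrite each other, the second checkout can also go into a subdirectory with the dir step, so the freshly built UI files aren't clobbered; a sketch, where the 'api' folder name is purely illustrative:
// Hedged sketch: isolate the API checkout in its own subdirectory.
// The 'api' directory name is illustrative, not from the original answer.
dir('api') {
    git branch: 'master',
        credentialsId: 'XXXXXXXXXXXX',
        url: 'ssh://git@XXXXXXe:7999/~lXXXXXX.git'
}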
But mixing two repos in one workspace (and two responsibilities in one job) probably isn't a good idea. It would be better to split your continuous delivery process into multiple jobs:
a job to build the UI;
a job to build the API;
a job to create the Docker image;
and then chain these jobs so that they are executed one by one and pass build artifacts to each other. That way each part of your CD process satisfies the single-responsibility principle.
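Chaining can be done with the built-in build step at the end of each upstream job; a minimal sketch, where the job name and parameter are assumed for illustration:
// Hedged sketch: trigger the downstream job once this one succeeds.
// 'build-api-image' and IMAGE_TAG are illustrative names, not from the answer.
stage('Trigger downstream job') {
    steps {
        build job: 'build-api-image',
              parameters: [string(name: 'IMAGE_TAG', value: "${BUILD_NUMBER}")]
    }
}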

pipeline script to execute jobs on different nodes (slaves) configured in Jenkins

I have a Jenkins server (version 2.34.2) installed in a Linux Ubuntu 16.04 VM. I would like to execute different jobs on two different slaves with a pipeline script. I have already added my slave by configuring nodes in Jenkins.
I tried this code:
node('MASTER') {
    stage('Initialisation Socket MASTER') {
        echo "INITIALISATION SOCKET MASTER....."
        sh 'javac /home/user/TCPEchoServer.java'
        sh 'cd /home/user && java TCPEchoServer'
    }
}
node('slave') {
    stage('Initialisation Socket SLAVE') {
        echo "INITIALISATION SOCKET SLAVE....."
        sh 'javac /home/user/TCPEchoClient.java'
        sh 'cd /home/user && java TCPEchoClient 160.98.34.85 2007'
    }
}
Here is the configuration of my node "slave":
Node - Slave - configuration
But I can't execute any job on this node. Can someone help me, please?
Thanks
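One thing worth checking is that the label passed to node() exactly matches the label configured on the node; labels are case-sensitive, and the built-in node is usually labeled lowercase 'master'. If the server must be listening before the client connects, scripted pipeline can also run the two node blocks concurrently; a hedged sketch, assuming the labels 'master' and 'slave':
// Hedged sketch: run server and client on their respective nodes at the same
// time, since the client needs the server to be up. Labels are assumptions.
parallel(
    server: {
        node('master') {
            sh 'javac /home/user/TCPEchoServer.java'
            sh 'cd /home/user && java TCPEchoServer'
        }
    },
    client: {
        node('slave') {
            sh 'javac /home/user/TCPEchoClient.java'
            sh 'cd /home/user && java TCPEchoClient 160.98.34.85 2007'
        }
    }
)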
