I need to launch a few instances on AWS using a Terraform script, and I am automating the whole process with Jenkins:
pipeline {
    agent any
    tools {
        terraform 'terraform'
    }
    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'gitlab id', url: 'https://gitlab.com/ndey1/kafka-infra'
            }
        }
        stage('Terraform init') {
            steps {
                sh 'cd terraform-aws-ec2-with-vpc'
                sh 'terraform init'
            }
        }
        stage('Terraform plan') {
            steps {
                sh 'terraform plan'
            }
        }
        stage('Terraform apply') {
            steps {
                sh 'terraform apply --auto-approve'
            }
        }
    }
}
But while running the Jenkins job (pipeline), it throws this error:
+ cd terraform-aws-ec2-with-vpc
[Pipeline] sh
+ terraform init
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Terraform plan)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ terraform plan
╷
│ Error: No configuration files
│
│ Plan requires configuration to be present. Planning without a configuration
│ would mark everything for destruction, which is normally not what is
│ desired. If you would like to destroy everything, run plan with the
│ -destroy option. Otherwise, create a Terraform configuration file (.tf
│ file) and try again.
╵
All the Terraform code is in the cloned repo named "kafka-infra", yet it still says there are no configuration files in the directory. terraform init appears to run successfully; the error only shows up in the "Terraform plan" stage.
The answer was edited as per @NoamHelmer's suggestions from the comments.
You can use the dir step and set it to the directory inside the cloned repo. By default Jenkins runs every sh step from the job's workspace directory, and each sh step starts its own shell, so a cd in one sh step does not carry over to the next one.
stage('Terraform init') {
    steps {
        dir("terraform-aws-ec2-with-vpc") { // this was added
            sh 'terraform init'
        }
    }
}
The same dir block should be added to all the stages that run Terraform; a full sketch follows below.
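Put together, the complete pipeline with dir applied to the init, plan, and apply stages would look roughly like this (repository URL and credential ID taken from the question):

pipeline {
    agent any
    tools {
        terraform 'terraform'
    }
    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'gitlab id', url: 'https://gitlab.com/ndey1/kafka-infra'
            }
        }
        stage('Terraform init') {
            steps {
                // Run terraform from the subdirectory that holds the .tf files
                dir('terraform-aws-ec2-with-vpc') {
                    sh 'terraform init'
                }
            }
        }
        stage('Terraform plan') {
            steps {
                dir('terraform-aws-ec2-with-vpc') {
                    sh 'terraform plan'
                }
            }
        }
        stage('Terraform apply') {
            steps {
                dir('terraform-aws-ec2-with-vpc') {
                    sh 'terraform apply --auto-approve'
                }
            }
        }
    }
}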
Alternatively, you could use a multiline shell script, so the cd and the terraform command run in the same shell:
steps {
    sh '''
        cd terraform-aws-ec2-with-vpc
        terraform init
    '''
}
As for the style of the configuration, there are probably multiple (better) ways of doing it. For example, you could use environment variables instead of hardcoding the directory in which the Terraform code is executed; a sketch follows after the links below.
[1] https://www.jenkins.io/doc/pipeline/tour/environment/
[2] https://www.jenkins.io/doc/pipeline/steps/workflow-basic-steps/#dir-change-current-directory
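As a minimal sketch of the environment-variable idea, the directory could be declared once in an environment block. TF_DIR is a hypothetical variable name, not something from the original post:

pipeline {
    agent any
    environment {
        // Hypothetical variable: set the Terraform directory once and reuse it.
        TF_DIR = 'terraform-aws-ec2-with-vpc'
    }
    stages {
        stage('Terraform init') {
            steps {
                dir("${env.TF_DIR}") {
                    sh 'terraform init'
                }
            }
        }
        stage('Terraform plan') {
            steps {
                dir("${env.TF_DIR}") {
                    sh 'terraform plan'
                }
            }
        }
    }
}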
Related
I am using Jenkins and I would like to deploy my application according to the git branch. Before adding the "when" statements it ran successfully, but as soon as I add them the build still finishes successfully while the deployment stages are skipped, with output like this:
Stage "Deploy to Dev" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy to Staging)
[Pipeline] input
Deploy staging deployment?
Proceed or Abort
Approved by admin
Stage "Deploy to Staging" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy to Production)
[Pipeline] input
Deploy production deployment?
Proceed or Abort
Approved by admin
Stage "Deploy to Production" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Also, the relevant part of my Jenkinsfile (starting with the dev environment) is shown below:
stage('Deploy to Dev') {
    when { branch 'dev' }
    environment {
        KUBECONFIG = credentials('kubeconfig')
    }
    steps {
        sh 'kubectl --kubeconfig=${KUBECONFIG} --namespace=${DEV_ENVIRONMENT} --record deployment/api set image deployment/api api=wizelinedevops/samuel:${BUILD_NUMBER}'
    }
}
stage('Deploy to Staging') {
    when { branch 'dev' }
    input { message "Deploy staging deployment?" }
    environment {
        KUBECONFIG = credentials('kubeconfig')
    }
    steps {
        sh 'kubectl --kubeconfig=${KUBECONFIG} --namespace=${STAGING_ENVIRONMENT} --record deployment/api set image deployment/api api=wizelinedevops/samuel:${BUILD_NUMBER}'
    }
}
stage('Deploy to Production') {
    when { branch 'master' }
    input { message "Deploy production deployment?" }
    environment {
        KUBECONFIG = credentials('kubeconfig')
    }
    steps {
        sh 'kubectl --kubeconfig=${KUBECONFIG} --namespace=${PROD_ENVIRONMENT} --record deployment/api set image deployment/api api=wizelinedevops/samuel:${BUILD_NUMBER}'
    }
}
You have to use a multibranch Pipeline; see the documentation:
branch: Execute the stage when the branch being built matches the branch pattern (ANT style path glob) given, for example: when { branch 'master' }. Note that this only works on a multibranch Pipeline.
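A minimal sketch of how the same when conditions behave once the job is recreated as a multibranch Pipeline (Jenkins then populates BRANCH_NAME for each branch and evaluates when { branch ... } against it):

pipeline {
    agent any
    stages {
        stage('Deploy to Dev') {
            // Runs only when the multibranch job is building the 'dev' branch.
            when { branch 'dev' }
            steps {
                echo "Deploying branch ${env.BRANCH_NAME}, build #${env.BUILD_NUMBER}, to dev"
            }
        }
        stage('Deploy to Production') {
            // Runs only when the multibranch job is building the 'master' branch.
            when { branch 'master' }
            steps {
                echo "Deploying branch ${env.BRANCH_NAME}, build #${env.BUILD_NUMBER}, to production"
            }
        }
    }
}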
I have a Jenkins pipeline that periodically pulls from GitLab and builds different repos, assembles a multi-component platform, then runs and tests it. Now I have installed a SonarQube server on the same machine (Ubuntu 18.04) and I want to connect Jenkins to SonarQube.
In Jenkins:
I set up the SonarQube scanner under Global Tool Configuration as below:
I generated a token in SonarQube, and in the Jenkins configuration I set up the server as below, BUT I couldn't find any place to insert the token (and I think this is the problem):
In the Jenkins pipeline, this is how I added a stage for SonarQube:
stage('SonarQube analysis') {
    steps {
        script {
            scannerHome = tool 'SonarQube';
        }
        withSonarQubeEnv('SonarQube') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
But this fails with the logs below and ERROR: script returned exit code 127:
[Pipeline] { (SonarQube analysis)
[Pipeline] script
[Pipeline] {
[Pipeline] tool
Invalid tool ID
[Pipeline] }
[Pipeline] // script
[Pipeline] withSonarQubeEnv
Injecting SonarQube environment variables using the configuration: SonarQube
[Pipeline] {
[Pipeline] sh
+ /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner
/var/lib/jenkins/workspace/wws-full-test#tmp/durable-2c68acd1/script.sh: 1: /var/lib/jenkins/workspace/wws-full-test#tmp/durable-2c68acd1/script.sh: /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner: not found
[Pipeline] }
WARN: Unable to locate 'report-task.txt' in the workspace. Did the SonarScanner succeeded?
[Pipeline] // withSonarQubeEnv
[Pipeline] }
[Pipeline] // stage
And when I check my Jenkins tools on the disk, the Sonar plugin is not there:
$ ls /var/lib/jenkins/tools/
jenkins.plugins.nodejs.tools.NodeJSInstallation
Can someone please let me know how I can connect Jenkins to SonarQube?
Create a token and add it to the SonarQube server configuration in Jenkins to be able to connect to SonarQube.
You also have to create a project in SonarQube and pass its key as a parameter:
sh """
${scannerHome}/bin/sonar-scanner \
-Dsonar.projectKey=your_project_key_created_in_sonarqube_as_project \
-Dsonar.sources=. \
"""
I am trying to run a pipeline job from the master on my other slave.
The pipeline is like this:
pipeline {
    agent {
        label "virtual"
    }
    stages {
        stage("test one") {
            steps {
                echo " test test test"
            }
        }
        stage("test two") {
            steps {
                echo " testttttttttt "
            }
        }
    }
}
The syntax does not produce any error, but it doesn't build on my slave server.
However, when I run a freestyle job with "Restrict where this project can be run" set to that label and an "Execute shell" step of echo "test test",
it is built on my slave server.
What is wrong with my pipeline? Am I missing something?
After the build:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on virtual in /home/virtual/jenkins/workspace/demoo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test one)
[Pipeline] echo
test test test
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (test two)
[Pipeline] echo
testttttttttt
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Add the path you want in the Remote root directory field of the agent (node) configuration.
The build already works the way you set it up; the steps are executed on the slave. If you add something like cloning a repository to your steps, the workspace directory will be created.
Pipeline and freestyle jobs behave differently here: a freestyle job creates the workspace directory as soon as it runs for the first time, whereas a pipeline job only creates the directory once it actually needs it.
I created a simple Pipeline:
pipeline {
    agent {
        label "linux"
    }
    stages {
        stage("test one") {
            steps {
                sh "echo 'test test test' > text.txt"
            }
        }
    }
}
I converted your echo to a sh command because my slave is a Linux slave. The sh step creates a text.txt file. As soon as I run this job, the directory is created:
[<user>#<server> test-pipeline]$ pwd
/var/lib/jenkins/workspace/test-pipeline
[<user>#<server> test-pipeline]$ ls -l
total 4
-rw-r----- 1 <user> <group> 15 Oct 7 16:49 text.txt
I have a Jenkins pipeline file where I need to call an sh script:
node {
    stage("Stage1") {
        checkout scm
        sh '''
            echo "Invoking the sh script"
            valueNeedstobepassed = "test"
        '''
    }
    stage('stage2') {
        // Need to refer to the "valueNeedstobepassed" variable in my
        // pipeline steps here
    }
}
I am not able to reference the variable "valueNeedstobepassed" in stage 2.
Any help, please?
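For context, one common approach (a minimal sketch, not from the original post) is to capture the shell output with returnStdout and keep it in a Groovy variable declared outside the stages:

node {
    // Declared outside the stages so both stages can see it.
    def valueNeedstobepassed

    stage('Stage1') {
        checkout scm
        // Capture whatever the shell prints and store it in the Groovy variable.
        valueNeedstobepassed = sh(script: 'echo "test"', returnStdout: true).trim()
    }

    stage('stage2') {
        // The Groovy variable is available here.
        echo "Value from Stage1: ${valueNeedstobepassed}"
    }
}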
I'm trying to set up a Jenkinsfile to run our CI pipeline. One of the steps will involve collecting files from across our directory tree and copying them into a single directory, for zipping up.
I'm attempting to do this using the Jenkins sh step and using glob patterns, but I can't seem to get this to work.
A simple example Jenkinsfile would be:
pipeline {
    agent any
    stages {
        stage('List with Glob') {
            steps {
                sh 'ls **/*.xml'
            }
        }
    }
}
I would expect that to list any .xml files in the workspace, but instead I receive:
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (List with Glob)
[Pipeline] sh
[jenkinsfile-pipeline] Running shell script
+ ls '**/*.xml'
ls: cannot access **/*.xml: No such file or directory
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
Finished: FAILURE
I think I'm missing something with Groovy string interpolation, but I need some help for this specific case (running in a Jenkins pipeline via a Jenkinsfile).
Any help much appreciated!
As far as I can tell, '**/*.xml' isn't a valid glob pattern (see this). What you have there is an Ant-style naming pattern, which, as far as I know, isn't supported by bash (or sh). What you want to do instead is use find:
pipeline {
    agent any
    stages {
        stage('List with find') {
            steps {
                sh "find . -type f -name '*.xml'"
            }
        }
    }
}
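As a side note, if the Pipeline Utility Steps plugin is available, an Ant-style pattern can also be evaluated in Groovy with findFiles instead of the shell (a sketch under that assumption):

pipeline {
    agent any
    stages {
        stage('List with findFiles') {
            steps {
                script {
                    // findFiles comes from the Pipeline Utility Steps plugin
                    // and understands Ant-style patterns like **/*.xml.
                    def xmlFiles = findFiles(glob: '**/*.xml')
                    for (f in xmlFiles) {
                        echo f.path
                    }
                }
            }
        }
    }
}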