I have tried many different renditions of this with no success unless the environment is set before the stages block. I am trying to define the AWS credentials environment depending on the branch I am in: on qa, use the qa credentials, and so on. But the environment block is not applied when it is inside the stage:
agent {
    docker {
        image '/terraform-npm:latest'
        registryCredentialsId 'dockerhubPW'
    }
}
stages {
    stage('Initialize Dev Environment') {
        when {
            branch 'dev'
        }
        environment {
            TF_VAR_aws_access_key = credentials('dev-aws-access-key-id')
            TF_VAR_aws_secret_key = credentials('dev-aws-secret-access-key')
            AWS_ACCESS_KEY_ID = credentials('dev-aws-access-key-id')
            AWS_SECRET_ACCESS_KEY = credentials('dev-aws-secret-access-key')
            AWS_REGION = "us-west-2"
        }
        steps {
            sh 'terraform init -backend-config="bucket=${GIT_BRANCH}-terraform-state" -backend-config="dynamodb_table=${GIT_BRANCH}-terraform-state-locking" -backend-config="region=$AWS_REGION" -backend-config="key=${GIT_BRANCH}-terraform-state/terraform.tfstate"'
        }
    }
}
If I set it before the stages block in the pipeline, of course it works:
agent {
    docker {
        image '/terraform-npm:latest'
        registryCredentialsId 'dockerhubPW'
    }
}
environment {
    TF_VAR_aws_access_key = credentials('dev-aws-access-key-id')
    TF_VAR_aws_secret_key = credentials('dev-aws-secret-access-key')
    AWS_ACCESS_KEY_ID = credentials('dev-aws-access-key-id')
    AWS_SECRET_ACCESS_KEY = credentials('dev-aws-secret-access-key')
    AWS_REGION = "us-west-2"
}
stages {
    stage('Initialize Dev Environment') {
        when {
            branch 'dev'
        }
        steps {
            sh 'terraform init -backend-config="bucket=${GIT_BRANCH}-terraform-state" -backend-config="dynamodb_table=${GIT_BRANCH}-terraform-state-locking" -backend-config="region=$AWS_REGION" -backend-config="key=${GIT_BRANCH}-terraform-state/terraform.tfstate"'
        }
    }
}
My question is: is there a way to set the environment variables before the stages block, but conditionally depending on the branch?
Well, yes, there is.
First option: you can run a combination of scripted and declarative pipeline (please note that I haven't checked that it works; this is just to send you down the right path):
// scripted pipeline
node('master') {
    stage("Init variables") {
        if (env.GIT_BRANCH == 'dev') {
            env.AWS_REGION = "us-west-2"
        }
        else {
            // ...
        }
    }
}
// declarative pipeline
pipeline {
    agent {
        docker {
            image '/terraform-npm:latest'
            registryCredentialsId 'dockerhubPW'
        }
    }
    stages {
        stage('Use variables') {
            steps {
                sh 'echo $AWS_REGION'
            }
        }
    }
}
Another option is to use the withEnv step inside steps:
stage('Initialize Dev Environment') {
    when {
        branch 'dev'
    }
    steps {
        withEnv(['AWS_REGION=us-west-2']) {
            sh 'echo $AWS_REGION'
        }
    }
}
Thank you MaratC for guiding me down the right path, it definitely helped. Here is what I used:
steps {
    withCredentials([string(credentialsId: 'qa-aws-access-key-id', variable: 'TF_VAR_aws_access_key'),
                     string(credentialsId: 'qa-aws-secret-access-key', variable: 'TF_VAR_aws_secret_key'),
                     string(credentialsId: 'qa-aws-access-key-id', variable: 'AWS_ACCESS_KEY_ID'),
                     string(credentialsId: 'qa-aws-secret-access-key', variable: 'AWS_SECRET_ACCESS_KEY')]) {
        sh 'terraform plan -var-file=${GIT_BRANCH}.tfvars -out=${GIT_BRANCH}-output.plan'
    }
}
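To tie this back to the original branch question: a full per-branch stage using this approach might look like the following sketch (untested; the stage name is made up, and it reuses the qa credential IDs from above):

```groovy
stage('Plan QA Environment') {
    when {
        branch 'qa'
    }
    steps {
        // withCredentials scopes the secrets to just this block
        withCredentials([string(credentialsId: 'qa-aws-access-key-id', variable: 'AWS_ACCESS_KEY_ID'),
                         string(credentialsId: 'qa-aws-secret-access-key', variable: 'AWS_SECRET_ACCESS_KEY')]) {
            sh 'terraform plan -var-file=${GIT_BRANCH}.tfvars -out=${GIT_BRANCH}-output.plan'
        }
    }
}
```

A matching stage guarded by `when { branch 'dev' }` with the dev credential IDs covers the other environment.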
Related
I have a Jenkins pipeline that includes a few stages. I wish to run the copy_file stage only if the deploy parameter == yes. I have tried to use when but it is not working:
servers = ['100.1.1.1', '100.1.1.2']
deploy = yes
pipeline {
    agent { label 'server-1' }
    stages {
        stage('Connect to git') {
            steps {
                git branch: 'xxxx', credentialsId: 'yyy', url: 'https://zzzz'
            }
        }
        stage ('Copy file') {
            when { deploy == 'yes' }
            steps {
                dir('folder_a') {
                    file_copy(servers)
                }
            }
        }
    }
}
def file_copy(list) {
    list.each { item ->
        sh "echo Copy file"
        sh "scp 11.txt user@${item}:/data/"
    }
}
First, declare an environment variable:
environment {
    DEPLOY = 'YES'
}
Now use it in the when condition like this:
when { environment name: 'DEPLOY', value: 'YES' }
There are two types of pipeline code in Jenkins:
Declarative pipeline
Scripted pipeline (Groovy script)
You are coding in the declarative manner, so you need to follow the declarative syntax.
NOTE: there are other ways to achieve what you are trying to do; that is, you may use different logic.
Another way is to use parameters:
parameters {
    choice choices: ['YES', 'NO'], description: 'Deploy?', name: 'DEPLOY'
}
stages {
    stage ('continue if DEPLOY set to YES') {
        when {
            expression { params.DEPLOY == 'YES' }
        }
        steps {
            ...
        }
    }
}
I have the below Jenkins pipeline and it is working fine:
pipeline {
    agent {
        node {
            label 'test'
        }
    }
    environment {
        ansible_pass = 'credentials('ans-pass')'
    }
    stages {
        stage('Load Vars') {
            steps {
                script {
                    configFileProvider([configFile(fileId: "${ENV_CONFIG_ID}", targetLocation: "${ENV_CONFIG_FILE}")]) {
                        load "${ENV_CONFIG_FILE}"
                    }
                }
            }
        }
        stage('svc install') {
            steps {
                sshagent(["${SSH_KEY_ID}"]) {
                    sh '''
                    ansible-playbook main.yaml -i hosts.yaml -b --vault-password-file $ansible_pass
                    '''
                }
            }
        }
    }
}
Now I want to pass the credentials ID from the managed file instead of hardcoding it here:
ansible_pass = 'credentials('ans-pass')'
That is, the ansible-pass1 ID should come from the managed files (Config File Provider).
I already have the below in the managed file:
env.ARTI_TOKEN_ID='art-token'
env.PLAYBOOK_REPO='dep.stg'
env.SSH_KEY_ID = 'test_key'
Now how do I add this credentials ID in that file? I tried like below:
env.ansible_pass = 'ansible-pass1'
and referred to it in the Jenkins pipeline as below:
environment {
    ansible_pass = 'credentials($ansible_pass)'
}
But it didn't work. Could you please advise?
As you are using secrets in a config file, it is better to use the 'secret file' credential type in Jenkins; the Jenkins documentation describes the different types of credentials.
Also, the correct way of setting credentials is:
environment {
    ansible_pass = credentials('credentials-id-here')
}
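If the credentials ID itself has to come from the managed file, one pattern worth trying (a sketch, untested; it assumes the managed file sets env.ANSIBLE_PASS_ID, a made-up name) is withCredentials, which unlike the environment block accepts a dynamic credentialsId:

```groovy
steps {
    // a 'secret file' credential binds a temp file path, which suits --vault-password-file
    withCredentials([file(credentialsId: env.ANSIBLE_PASS_ID, variable: 'ansible_pass')]) {
        sh 'ansible-playbook main.yaml -i hosts.yaml -b --vault-password-file $ansible_pass'
    }
}
```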
I would like to configure an environment variable for my Jenkins pipeline, but dynamically based on an input parameter to the build. I'm trying to configure my pipeline to set the KUBECONFIG environment variable for kubectl commands.
My pipeline is as follows (slightly changed):
pipeline {
    parameters {
        choice(name: 'CLUSTER_NAME', choices: 'cluster1/cluster2')
    }
    stages {
        // Parallel stages since only one environment variable should be set, based on input
        stage ('Set environment variable') {
            parallel {
                stage ('Set cluster1') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster1"
                        }
                    }
                    environment {
                        KUBECONFIG = "~/kubeconf/cluster1.conf"
                    }
                    steps {
                        echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                    }
                }
                stage ('Set cluster2') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster2"
                        }
                    }
                    environment {
                        KUBECONFIG = "~/kubeconf/cluster2.conf"
                    }
                    steps {
                        echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                    }
                }
            }
        }
        stage ('Test env') {
            steps {
                sh "cat ${env.KUBECONFIG}"
            }
        }
    }
}
However, while the stage where I set the environment variable can print it, once I move to another stage I only get null.
Is there some way of sharing env variables between stages? Since I'd like to use the default KUBECONFIG command (and not specify a file/context in my kubectl commands), it would be much easier to find a way to dynamically set the env variable.
I've seen the EnvInject plugin mentioned, but was unable to get it working for a pipeline, and was struggling with the documentation.
With environment {} you are setting the environment variable only for the stage where it is declared; it does not affect the environment of the pipeline itself. Set environment variables as below to affect the global context. This works for me.
pipeline {
    agent any
    parameters {
        choice(name: 'CLUSTER_NAME', choices: 'cluster1\ncluster2')
    }
    stages {
        // Parallel stages since only one environment variable should be set, based on input
        stage ('Set environment variable') {
            parallel {
                stage ('Set cluster1') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster1"
                        }
                    }
                    steps {
                        script {
                            env.KUBECONFIG = "~/kubeconf/cluster1.conf"
                            echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                        }
                    }
                }
                stage ('Set cluster2') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster2"
                        }
                    }
                    steps {
                        script {
                            env.KUBECONFIG = "~/kubeconf/cluster2.conf"
                            echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                        }
                    }
                }
            }
        }
        stage ('Test env') {
            steps {
                sh "cat ${env.KUBECONFIG}"
            }
        }
    }
}
I have the following pipeline:
pipeline {
    agent any
    environment {
        branch = 'master'
        scmUrl = 'ssh://git@myrepo.git'
        serverPort = '22'
    }
    stages {
        stage('Stage 1') {
            steps {
                sh '/var/jenkins_home/contarpalabras.sh'
            }
        }
    }
}
I want to change the pipeline to a "scripted pipeline" in order to use try / catch blocks and have better error management. However, I did not find the equivalent of the environment block in the official documentation.
You can use the withEnv block like:
node {
    withEnv(['DISABLE_AUTH=true',
             'DB_ENGINE=sqlite']) {
        stage('Build') {
            sh 'printenv'
        }
    }
}
This info is also in the official documentation: https://jenkins.io/doc/pipeline/tour/environment/#
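Since the reason for moving to a scripted pipeline was try / catch error handling, the two can be combined; a sketch (untested, reusing the variable names from the question):

```groovy
node {
    withEnv(['branch=master',
             'scmUrl=ssh://git@myrepo.git',
             'serverPort=22']) {
        stage('Stage 1') {
            try {
                sh '/var/jenkins_home/contarpalabras.sh'
            } catch (err) {
                // mark the build failed but keep the error visible in the log
                currentBuild.result = 'FAILURE'
                echo "Stage 1 failed: ${err}"
                throw err
            }
        }
    }
}
```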
For the below pipeline, if the selected environment is 'dev', then after completing the pipeline it should prompt and deploy to the other environments, i.e. 'qa' or 'staging', as well.
How do we achieve this in a Jenkinsfile? By simply putting in if conditions, or with another plugin such as 'build promotion'?
The purpose of this is to replicate any changes to all environments.
properties([
    parameters([choice(choices: "dev\nqa\nstg", description: 'Environment?', name: 'environment')])
])
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "Building..${params.environment}"
            }
        }
        stage('Test') {
            steps {
                echo "Testing..${params.environment}"
            }
        }
        stage('Deploy') {
            steps {
                echo "Deploying..${params.environment}"
            }
        }
    }
}