Updating CircleCI environment variable based on branch - circleci

I have a CircleCI workflow that runs different jobs, but there are two possible environment variable values that could determine what is built. The problem I have now is that I am unable to set this environment variable at the workflow level.
I am able to set the environment variable at the job level, but I need it to be set at the workflow level. My sample workflow:
workflows:
  build-test-deploy:
    jobs:
      - update:
          environment:
            TEST_ENV_FR: 'BUILD 1'
If the git branch is test, I want TEST_ENV_FR to be 'BUILD 2'.
This is the job:
jobs:
  update:
    macos:
      xcode: 13.2.0
    working_directory: /Users/distiller/project
    shell: /bin/bash --login -o pipefail
    steps:
      - checkout
      - run:
          name: update
          command: |
            echo $TEST_ENV_FR
            cd /Users/distiller/project
            source CircleCiHelper.sh
            update_source
Any help with this would be appreciated.
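For what it's worth, a common workaround (not something from the question itself) is to derive the value from the built-in CIRCLE_BRANCH variable in an early run step and persist it through $BASH_ENV, which CircleCI sources before each subsequent step of the same job. A minimal sketch, added as the first run step of the update job:
    steps:
      - checkout
      - run:
          name: Set TEST_ENV_FR based on branch
          command: |
            # CIRCLE_BRANCH and BASH_ENV are built-in CircleCI variables
            if [ "$CIRCLE_BRANCH" = "test" ]; then
              echo 'export TEST_ENV_FR="BUILD 2"' >> "$BASH_ENV"
            else
              echo 'export TEST_ENV_FR="BUILD 1"' >> "$BASH_ENV"
            fi
Later steps in the same job would then see the branch-dependent value via echo $TEST_ENV_FR.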

Related

Daily automatic Bitbucket deploy with manual step when pushed to branch

I'm trying to set up a pipeline in Bitbucket with a daily schedule for two branches.
develop: there is a scheduled daily deployment running, and when I push to this branch the pipeline runs again.
master: this is the tricky one. I want a daily deployment because the page needs to be rebuilt daily, but I would like a safeguard so that if anyone pushes to this branch by mistake, or the code is bad, the deployment only runs after a manual trigger.
So my question is: is it possible to set up a rule that detects whether there was a push and, in that case, lets the admin start the pipeline manually?
pipelines:
  branches:
    develop:
      - step:
          name: Deploy staging
          deployment: staging
          caches:
            - node
          script:
            - npm run staging:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN --project $FIREBASE_PROJECT_STAGING --only functions,hosting
          artifacts:
            - build/**
    master:
      - step:
          name: Deploy to production
          deployment: production
          caches:
            - node
          script:
            - npm run deploy:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN_STAGING --project $FIREBASE_PROJECT_PRODUCTION --only functions,hosting
          artifacts:
            - build/**
I'd suggest scheduling a separate custom pipeline rather than reusing the one that runs on pushes to the production branch. The same step definition can be reused with a YAML anchor, and you can override the trigger in one of them.
E.g.:
definitions:
  # write whatever is meaningful to you,
  # just avoid "caches" or "services" or
  # anything bitbucket-pipelines could expect
  yaml-anchors:
    - &deploy-pro-step
      name: Deploy production
      trigger: manual
      deployment: production
      script:
        - do your thing

pipelines:
  custom:
    deploy-pro-scheduled:
      - step:
          <<: *deploy-pro-step
          trigger: automatic
  branches:
    release/production:
      - step: *deploy-pro-step
Sorry if I made some YAML mistakes, but this should be the general idea. The branch that the scheduled custom pipeline runs on is configured in the web interface when the schedule is set up.
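For clarity, once the anchor and the <<: merge key are expanded, the configuration above is roughly equivalent to the following (just the expansion, nothing new added):
pipelines:
  custom:
    deploy-pro-scheduled:        # run from the schedule, no manual approval
      - step:
          name: Deploy production
          trigger: automatic
          deployment: production
          script:
            - do your thing
  branches:
    release/production:          # pushes wait for a manual trigger
      - step:
          name: Deploy production
          trigger: manual
          deployment: production
          script:
            - do your thing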

Similar to Jenkins Groovy file. Is there any file for Bamboo?

I'm totally new to this DevOps field. For Jenkins, a Groovy file is used to maintain preparation-build-deploy; which script is used similarly for Bamboo?
I got to know that a Bamboo plan is used, but how is the plan generated, through a script or a file?
I have a pipeline for Jenkins; how can the same be done for a Bamboo plan?
The Groovy file for Jenkins is:
node {
    stage('Preparation') { // for display purposes
        // Get EDM code from a GitHub repository
        cleanWs()
        checkout scm
        sh "python $WORKSPACE/common/deployment_scripts/abc.py --localFolder $WORKSPACE --env dev"
    }
    stage('Build') {
        // Run the maven build
        sh "mvn clean install -f $WORKSPACE/pom.xml -Dmaven.test.skip=true"
    }
    stage('Deploy') {
        // Run the deployment script
        sh "python $WORKSPACE/common/deployment_scripts/ase.py $WORKSPACE lm-edm-builds-ndev ${env.BUILD_NUMBER} dev"
        sh "python $WORKSPACE/common/deployment_scripts/qwert.py --JsonParameterFile $WORKSPACE/common/deployment_scripts/my_properties.json --BuildVersion ${env.BUILD_NUMBER} --WorkSpace $WORKSPACE --environment dev"
    }
}
For Bamboo, you can do this with Bamboo Specs. Bamboo Specs lets you define Bamboo configuration as code and have the corresponding plans/deployments created or updated automatically in Bamboo. Read more about Bamboo Specs here.
Bamboo Specs recognizes two ways of creating plans: Java or YAML. Select the one that matches your needs best. The syntax for both can be found in the official reference documentation.
A sample YAML Specs file defining a plan can look like the one below, as detailed in this page:
---
version: 2
plan:
  project-key: MARS
  key: ROCKET
  name: Build the rockets

# List of plan's stages and jobs
stages:
  - Build the rocket stage:
      - Build

# Job definition
Build:
  tasks:
    - script:
        - mkdir -p falcon/red
        - echo wings > falcon/red/wings
        - sleep 1
        - echo 'Built it'
    - test-parser:
        type: junit
        test-results: '**/junit/*.xml'
  # Job's requirements
  requirements:
    - isRocketFuel
  # Job's artifacts. Artifacts are shared by default.
  artifacts:
    - name: Red rocket built
      pattern: falcon/red/wings
You may start with this tutorial: Creating a simple plan with Bamboo Java Specs.

Travis.ci - Build and Deploy based on PR and Tags

I'm trying to create two different actions within travis.ci. The first action is to execute a script on every push on every branch. This is currently working as desired. The second is to trigger a different script only when I run git push origin --tags. In short:
Execute script1 always (currently working)
Execute script2 when tags are pushed
Here is what I'm trying:
language: python
python:
  - 3.7
matrix:
  include:
    - python: 3.7
      sudo: true
install:
  - pip install -r requirements.txt
script: # Always want this to happen
  - invoke package
branches:
  only:
    - master
    - /^x\/.*/
deploy: # Want this to occur on git push origin --tags
  provider: script
  script: invoke release
  on:
    tags: true
The deploy section is not being triggered, and I can find no evidence of the invoke release script being invoked.
Update:
It may be due to the way I'm pushing tags? I'm seeing this log in Travis now:
Skipping a deployment with the script provider because this is not a tagged commit
I solved it based on this GitHub issue and changed the deploy section to this:
deploy:
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true
I also had to remove the branches section. The deployment script was invoked nonetheless.
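Putting it together, the resulting .travis.yml would look roughly like this (a sketch based only on the snippets above, with the branches section removed as noted and the matrix section omitted for brevity):
language: python
python:
  - 3.7
install:
  - pip install -r requirements.txt
script:                  # runs on every push
  - invoke package
deploy:                  # runs only for pushed tags
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true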

Exporting environment variables from one stage to the next in GitLab CI

Is there a way to export environment variables from one stage to the next in GitLab CI? I'm looking for something similar to the job artifacts feature, only for environment variables instead of files.
Let's say I'm configuring the build in a configure stage and want to store the results as (secret, protected) environment variables for the next stages to use. I could save the configuration in files and store them as job artifacts, but I'm concerned about secrets being made available in files that can be downloaded by everyone.
Since GitLab 13 you can inherit environment variables like this:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  dependencies:
    - build
Note: for GitLab < 13.1 you should enable this first in the GitLab Rails console:
Feature.enable(:ci_dependency_variables)
Although not exactly what you wanted, since it uses artifacts:reports:dotenv artifacts, GitLab recommends doing the following in their guide 'Pass an environment variable to another job':
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo "$BUILD_VERSION" # Output is: 'hello'
  needs:
    - job: build
      artifacts: true
I believe using the needs keyword is preferable over the dependencies keyword (as used in hd-deman's top answer) since:
When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs: configuration.
Furthermore, you could minimise the risk by setting the build's artifacts:expire_in time to be very small.
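For example, a minimal sketch of the build job with a short artifact expiry (the exact duration is arbitrary):
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    expire_in: 10 minutes
    reports:
      dotenv: build.env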
No, this feature is not available yet, but there is already an issue for this topic.
My suggestion would be to save the variables in a file and cache it, as caches are not downloadable and are removed when the job finishes.
If you want to be 100% sure, you can delete it manually. See the clean_up stage.
e.g.
cache:
  paths:
    - save_file

stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file

job_name_2:
  stage: job_name_2
  script:
    - cat save_file | do_something_with_content

clean_up:
  stage: clean_up
  script:
    - rm save_file
  when: always
You want to use artifacts for this.
stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file
  artifacts:
    paths:
      - save_file
    # Hint: You can set an expiration for them too.

job_name_2:
  stage: job_name_2
  needs:
    - job: job_name_1
      artifacts: true
  script:
    - cat save_file | do_something_with_content

TravisCI: Run after_success on a specific branch

I would like to know how to run an after_success script only for a specific branch.
I am using a custom script to deploy the app after the build passes. I would like to run this only on the prod branch.
So far, I have tried the following:
#1
after_success:
  - # some deployment script
  on: prod

#2
branches:
  only:
    - prod
after_success:
  - # some deployment script

#3
after_success:
  branches:
    only:
      - prod
  - # some deployment script
Any suggestions?
I solved it by writing a simple script that uses the TRAVIS_BRANCH environment variable and executing that script in after_success.
.travis.yml
after_success:
  - ./deploy.sh

deploy.sh
#!/bin/bash
if [ "$TRAVIS_BRANCH" == "prod" ]; then
  echo "Deploying..."  # replace with the actual deploy commands
fi
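As a side note (not part of the original answer), after_success also runs for pull request builds, where TRAVIS_BRANCH is set to the target branch; if that is a concern, the check can be combined with the documented TRAVIS_PULL_REQUEST variable, which is "false" for regular push builds:
#!/bin/bash
# Deploy only for push builds on prod, not for pull requests targeting prod
if [ "$TRAVIS_BRANCH" == "prod" ] && [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  echo "Deploying..."  # replace with the actual deploy commands
fi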
You can also do this by using the script provider in the deploy phase of your build. This approach is a bit cleaner but only allows one command, unlike after_success.
deploy:
  provider: script
  script: # some deployment script
  on:
    branch: prod
