I would like to know how to run an after_success script only for a specific branch.
I am using a custom script to deploy the app after the build passes. I would like to run this only on the prod branch.
So far, I have tried the following:
#1
after_success:
  - # some deployment script
  on: prod
#2
branches:
  only:
    - prod
after_success:
  - # some deployment script
#3
after_success:
  branches:
    only:
      - prod
  - # some deployment script
Any suggestions?
I solved it by writing a simple script that uses the TRAVIS_BRANCH environment variable, and executing that script in after_success.
.travis.yml
after_success:
  - ./deploy.sh
deploy.sh
#!/bin/bash
if [ "$TRAVIS_BRANCH" == "prod" ]; then
  # do the deploy
fi
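For this to work, deploy.sh has to be committed to the repository and marked executable (for example with chmod +x deploy.sh); Travis sets the TRAVIS_BRANCH variable automatically for every build.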
You can also do this by using the script provider in the deploy phase of your build. This approach is a bit cleaner but only allows one command, unlike after_success.
deploy:
  provider: script
  script: # some deployment script
  on:
    branch: prod
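If the deployment needs more than one command, one option (a sketch of my own, not part of the original answer; deploy.sh is a placeholder name) is to point the script provider at an external script that bundles those commands:
deploy:
  provider: script
  script: bash deploy.sh   # deploy.sh holds the full deployment sequence
  on:
    branch: prod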
Related
I have a CircleCI workflow that runs different jobs, but there are two possible environment variable values that determine what is built. The problem I have now is that I am unable to set this environment variable at the workflow level.
I am able to set it at the job level, but I need it at the workflow level. Here is my sample workflow:
workflows:
  build-test-deploy:
    jobs:
      - update:
          environment:
            TEST_ENV_FR: 'BUILD 1'
If the git branch is test, I want TEST_ENV_FR to be 'BUILD 2'.
This is the job:
jobs:
  update:
    macos:
      xcode: 13.2.0
    working_directory: /Users/distiller/project
    shell: /bin/bash --login -o pipefail
    steps:
      - checkout
      - run:
          name: update
          command: |
            echo $TEST_ENV_FR
            cd /Users/distiller/project
            source CircleCiHelper.sh
            update_source
Any help with this would be appreciated.
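One possible approach (a sketch of my own, not from the original post) is to derive the variable inside the job from the built-in CIRCLE_BRANCH variable, exporting it through $BASH_ENV so that later steps in the same job see it:
jobs:
  update:
    macos:
      xcode: 13.2.0
    steps:
      - checkout
      - run:
          name: Pick environment per branch
          command: |
            # CIRCLE_BRANCH is set by CircleCI; $BASH_ENV is sourced by later steps
            if [ "$CIRCLE_BRANCH" = "test" ]; then
              echo 'export TEST_ENV_FR="BUILD 2"' >> "$BASH_ENV"
            else
              echo 'export TEST_ENV_FR="BUILD 1"' >> "$BASH_ENV"
            fi
      - run:
          name: update
          command: echo $TEST_ENV_FR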
I'm trying to set up a pipeline in Bitbucket with a daily schedule for two branches.
develop: there is a scheduled daily deployment, and when I push to this branch the pipeline runs again.
master: this is the tricky one. I want a daily deployment because the page needs to be rebuilt daily, but I would like a safeguard so that if anyone pushes to this branch by mistake, or the code is bad, the deployment only runs after a manual trigger.
So my question is: is it possible to set up a rule that detects a push and, in that case, lets the admin manually start the pipeline?
pipelines:
  branches:
    develop:
      - step:
          name: Deploy staging
          deployment: staging
          caches:
            - node
          script:
            - npm run staging:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN --project $FIREBASE_PROJECT_STAGING --only functions,hosting
          artifacts:
            - build/**
    master:
      - step:
          name: Deploy to production
          deployment: production
          caches:
            - node
          script:
            - npm run deploy:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN_STAGING --project $FIREBASE_PROJECT_PRODUCTION --only functions,hosting
          artifacts:
            - build/**
I'd suggest scheduling a separate custom pipeline rather than the one that runs on pushes to the production branch. The same step definition can be reused with a YAML anchor, and you can override the trigger in one of them.
E.g:
definitions:
  # write whatever is meaningful to you,
  # just avoid "caches" or "services" or
  # anything bitbucket-pipelines could expect
  yaml-anchors:
    - &deploy-pro-step
      name: Deploy production
      trigger: manual
      deployment: production
      script:
        - do your thing

pipelines:
  custom:
    deploy-pro-scheduled:
      - step:
          <<: *deploy-pro-step
          trigger: automatic
  branches:
    release/production:
      - step: *deploy-pro-step
Apologies for any YAML mistakes, but this should convey the general idea. The branch on which the scheduled custom pipeline runs is configured in the web interface when the schedule is set up.
I'm trying to create two different actions within Travis CI. The first action is to execute a script on every push on every branch; this is currently working as desired. The second is to trigger a different script only when tags are pushed (git push origin --tags). In short:
Execute script1 always (currently working)
Execute script2 when tags are pushed
Here is what I'm trying:
language: python
python:
  - 3.7
matrix:
  include:
    - python: 3.7
      sudo: true
install:
  - pip install -r requirements.txt
script: # Always want this to happen
  - invoke package
branches:
  only:
    - master
    - /^x\/.*/
deploy: # Want this to occur on git push origin --tags
  provider: script
  script: invoke release
  on:
    tags: true
The deploy section is not being triggered, and I can find no evidence of the invoke release script being invoked.
Update:
It may be due to the way I'm pushing tags? I'm now seeing this log in Travis:
Skipping a deployment with the script provider because this is not a tagged commit
Solved it via this GitHub issue. I changed the deploy section to this:
deploy:
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true
I also had to remove the branches section, but the deployment script was invoked nonetheless.
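For reference, this deploy only fires for tagged commits, so a release is typically cut with something like the following (v1.0.0 is just an example tag name):
git tag v1.0.0
git push origin v1.0.0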
Is there a way to export environment variables from one stage to the next in GitLab CI? I'm looking for something similar to the job artifacts feature, only for environment variables instead of files.
Let's say I'm configuring the build in a configure stage and want to store the results as (secret, protected) environment variables for the next stages to use. I could save the configuration in files and store them as job artifacts, but I'm concerned about secrets being made available in files that can be downloaded by everyone.
Since GitLab 13 you can inherit environment variables like this:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  dependencies:
    - build
Note: for GitLab < 13.1 you must first enable this in the GitLab Rails console:
Feature.enable(:ci_dependency_variables)
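On an Omnibus installation, that console can usually be opened with sudo gitlab-rails console before running the command above.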
Although it's not exactly what you wanted, since it also relies on artifacts:reports:dotenv artifacts, the following is what GitLab recommends in their guide 'Pass an environment variable to another job':
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo "$BUILD_VERSION" # Output is: 'hello'
  needs:
    - job: build
      artifacts: true
I believe using the needs keyword is preferable to the dependencies keyword (as used in hd-deman's top answer), since:
When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs: configuration.
Furthermore, you could minimise the risk by setting the build's artifacts:expire_in time to be very small.
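For example, a minimal sketch building on the snippet above (10 minutes is an arbitrary value):
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    expire_in: 10 minutes  # drop the dotenv artifact shortly after the pipeline runs
    reports:
      dotenv: build.env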
No, this feature is not available yet, but there is already an open issue for this topic.
My suggestion would be to save the variables in a file and cache it, as these are not downloadable and will be removed when the job finishes.
If you want to be 100% sure, you can delete it manually; see the clean_up stage below.
e.g.
cache:
  paths:
    - save_file

stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file

job_name_2:
  stage: job_name_2
  script:
    - cat save_file | do_something_with_content

clean_up:
  stage: clean_up
  script:
    - rm save_file
  when: always
You want to use artifacts for this.
stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file
  artifacts:
    paths:
      - save_file
    # Hint: you can set an expiration for them too.

job_name_2:
  stage: job_name_2
  needs:
    - job: job_name_1
      artifacts: true
  script:
    - cat save_file | do_something_with_content
I'm moving to GitLab and want to use all the tools that come with it.
I installed GitLab v8.0.4 on my CentOS 7 server with Tomcat. I created a project and pushed a Grails example to it.
Now, every time I push to the project, I'd like to trigger a deployment. In Jenkins I was able to pull the project, compile it with the Grails command-line tool, and deploy the WAR to Tomcat.
I'm trying to do the same here, but I feel lost. Has anybody tried this, and can you show me how to do it?
If the deployment script is in the same repository as the project itself, you can have a build stage and a deploy stage. If the build stage succeeds, it will start the deploy stage. The .gitlab-ci.yml could look like this:
stages:
  - build
  - deploy

build_grails:
  stage: build
  script:
    - build-script_of_grails_cmd

deploy_to_tomcat:
  stage: deploy
  script:
    - deploy_script_with_capistrano_or_whatever
If your deployment code is in another project, you can trigger that project to start the deployment when the build stage has finished. The deployment repo should have a trigger set up; this can be done in the continuous integration menu of the deployment project. After setting up a trigger, GitLab generates a triggering curl snippet you can paste into the YAML file. The Grails app's .gitlab-ci.yml will look like this:
stages:
  - build
  - deploy

build_grails:
  stage: build
  script:
    - build-script_of_grails_cmd

trigger:
  type: deploy
  script:
    - curl -X POST -F token=4579a6f10c51f0a4b7bdbd384f6e53 https://gitlab-comewhere.com/ci/api/v1/projects/5/refs/master/trigger
The .gitlab-ci.yml in the deployment project will look like this:
stages:
  - deploy

deploy_to_tomcat:
  stage: deploy
  script:
    - deploy_script_with_capistrano_or_whatever