I've got a step definition which currently has a manual trigger set, but in certain cases in the pipeline I'd like that to be changed to automatic. Is there a way I can change this trigger without having to create a whole new definition where only the trigger is different?
In my pipeline I was hoping something like the below would work, but it doesn't seem to be making any difference:
pipelines:
  specific-branch:
    - step: *Deploy-Development
      trigger: automatic
That YAML syntax is not correct. I do it like this:
definitions:
  yaml-anchors:
    - &deploy-step
      script:
        - bash deploy.sh $BITBUCKET_DEPLOYMENT_ENVIRONMENT

pipelines:
  branches:
    main:
      - step:
          <<: *deploy-step
          name: Deploy dev
          deployment: test
          trigger: automatic
      - step:
          <<: *deploy-step
          name: Deploy prod
          deployment: production
          trigger: manual
Is this what you are trying to achieve?
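A minimal sketch of how that might look applied to your original snippet, assuming your Deploy-Development step is defined as a yaml-anchor (the name, deployment and script below are placeholders, not your actual definition):

definitions:
  yaml-anchors:
    - &Deploy-Development
      name: Deploy development
      deployment: development
      trigger: manual          # the default trigger
      script:
        - bash deploy.sh       # placeholder for your deploy script

pipelines:
  branches:
    specific-branch:
      - step:
          <<: *Deploy-Development
          trigger: automatic   # keys listed after <<: override the anchored ones

The trick is that trigger must be overridden inside the step's mapping via the <<: merge key; putting it as a sibling of step: (as in your snippet) has no effect.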
To automate our CI process, I need to run the Bitbucket pipelines only when the PR title does not start with "Draft" or "WIP". Atlassian only offers glob patterns for this: https://support.atlassian.com/bitbucket-cloud/docs/use-glob-patterns-on-the-pipelines-yaml-file/.
I tried with the regex ^(?!Draft:|WIP:).+ like this:
pipelines:
  pull-requests:
    '^(?!Draft:|WIP:).+':
      - step:
          name: Tests
but the pipeline does not start under any circumstances (with or without Draft:/WIP:). Any suggestions?
Note that the PR pattern you define in the pipelines is matched against the source branch, not the PR title. For precisely this reason, I used to configure an empty pipeline for PRs from wip/* branches, e.g.:
pipelines:
  pull-requests:
    wip/*:
      - step:
          name: Pass
          script:
            - exit 0
    "**":
      - step:
          name: Tests
          # ...
But this workflow requires you to work on wip/* branches and to change their source branch later on. This is somewhat cumbersome, and developers just did not opt in.
This works, though.
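If you really need title-based filtering, one workaround is to run the pipeline on every PR and bail out early after checking the title via the Bitbucket API. This is only a sketch, not an officially supported mechanism: PIPELINES_USER and PIPELINES_APP_PASSWORD are assumed repository variables (a username plus an app password with pullrequest:read scope), and it assumes curl and jq are available in the build image:

pipelines:
  pull-requests:
    '**':
      - step:
          name: Tests
          script:
            # Look up the PR title; skip the rest of the build for drafts/WIP.
            - TITLE=$(curl -s -u "$PIPELINES_USER:$PIPELINES_APP_PASSWORD" "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/$BITBUCKET_PR_ID" | jq -r .title)
            - if echo "$TITLE" | grep -qE '^(Draft|WIP):'; then echo "Skipping Draft/WIP PR"; exit 0; fi
            - ./run-tests.sh   # placeholder for the actual test command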
I have a monorepo project that is deployed to 3 environments: testing, staging and production. Deploys to testing come from the next branch, while staging and production deploys come from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), while deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and a server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same; the only things changing are the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, thus basically letting me choose the set of env variables to inject into the fixed sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and haven't been able to find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but both of them have different steps. Or do I need to merge those into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod: # As you say the steps are the same, only variable values will define the destination.
      - variables: # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
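Note that, as far as I know, custom pipelines never run on a push: you trigger them from the Branches or Pipelines view in the Bitbucket UI via "Run pipeline" (which is where the variables prompt appears), or attach a schedule to them.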
UPD: If you need to use Deployments instead of providing each variable separately, you can utilise the manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh

pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
Key docs for understanding this pipeline are still this one, and this one for YAML anchors. Keep in mind that I introduced the 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual. It will act as a stopper for the deploy steps, which can only be manual due to your requirements.
I'm trying to set up a pipeline in Bitbucket with a daily schedule for two branches.
develop: There is a scheduled daily deployment running, plus the pipeline runs again when I push to this branch.
master: This is the tricky one. I want to have a daily deployment because the page needs to be rebuilt daily, but I would like a safeguard so that if anyone pushes to this branch by mistake, or the code is bad, the deployment only runs after a manual trigger.
So my question is: is it possible to set up a rule that tracks whether there was a push and, in that case, lets the admin manually start the pipeline?
pipelines:
  branches:
    develop:
      - step:
          name: Deploy staging
          deployment: staging
          caches:
            - node
          script:
            - npm run staging:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN --project $FIREBASE_PROJECT_STAGING --only functions,hosting
          artifacts:
            - build/**
    master:
      - step:
          name: Deploy to production
          deployment: production
          caches:
            - node
          script:
            - npm run deploy:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN_STAGING --project $FIREBASE_PROJECT_PRODUCTION --only functions,hosting
          artifacts:
            - build/**
I'd suggest scheduling a different custom pipeline rather than the one that runs on pushes to the production branch. The same step definition can be reused with a YAML anchor, and you can replace the trigger in one of them. E.g.:
definitions:
  # write whatever is meaningful to you,
  # just avoid "caches" or "services" or
  # anything bitbucket-pipelines could expect
  yaml-anchors:
    - &deploy-pro-step
      name: Deploy production
      trigger: manual
      deployment: production
      script:
        - do your thing

pipelines:
  custom:
    deploy-pro-scheduled:
      - step:
          <<: *deploy-pro-step
          trigger: automatic
  branches:
    release/production:
      - step: *deploy-pro-step
Sorry if I made some YAML mistakes, but this should be the general idea. The branch on which the scheduled custom pipeline will run is configured in the web interface when the schedule is set up.
I wanted to create a new variable in my Bitbucket pipeline whose value is a concatenation of two Bitbucket tokens, so I tried this, but the new value is not working:
image: node:16.13.0

SOME_VALUE: $(date +%Y-%m-%d)-${BITBUCKET_BUILD_NUMBER}

branches:
  develop:
    - step:
        name: Build
        size: 2x
        script:
          - echo ${SOME_VALUE}
Appreciate any help on this!
This does not work because your variable is not defined in a step. In Bitbucket Pipelines, each step has its own build environment. You are trying to create a dynamic variable using date, so define it in a step. If you want to use this variable in multiple steps, you can use artifacts.
image: node:16.13.0

pipelines:
  branches:
    develop:
      - step:
          name: Build
          size: 2x
          script:
            - SOME_VALUE=$(date +%Y-%m-%d)-${BITBUCKET_BUILD_NUMBER}
            - echo ${SOME_VALUE}
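To illustrate the artifacts approach mentioned above, here is a minimal sketch (the set_env.sh file name is just a convention I chose): the first step writes the value to a file and declares it as an artifact, and the next step sources that file to restore the variable.

image: node:16.13.0

pipelines:
  branches:
    develop:
      - step:
          name: Build
          script:
            # Persist the dynamic value to a file for later steps.
            - echo "export SOME_VALUE=$(date +%Y-%m-%d)-${BITBUCKET_BUILD_NUMBER}" >> set_env.sh
          artifacts:
            - set_env.sh
      - step:
          name: Use the variable
          script:
            # Restore the variable from the artifact of the previous step.
            - source set_env.sh
            - echo ${SOME_VALUE}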
I'm looking to declare environment variables in my Travis CI repository settings and use them in my .travis.yml file to deploy an application and post a build notification in Slack.
I've set the environment variables in my Travis CI repository settings (via the web UI).
My .travis.yml file appears as the following:
language: node_js
node_js:
  - '0.12'
cache:
  directories:
    - node_modules
deploy:
  edge: true
  provider: cloudfoundry
  api: $CF_API
  username: $CF_USERNAME
  password: $CF_PASSWORD
  organization: $CF_ORGANIZATION
  space: $CF_SPACE
notifications:
  slack: $NOTIFICATIONS_SLACK
When I add the values into the .travis.yml file as they are, everything works as planned.
However, when I try to refer to the environment variables set in the repository, I receive no Slack notification on a build status, and the deployment fails.
Am I following this process correctly, or is there something I'm overlooking?
I think it is probably too early in Travis CI's build sequence for your environment variables to be read.
What I would suggest instead is to encrypt them using the travis command-line tool.
E.g.
$ travis encrypt
Reading from stdin, press Ctrl+D when done
username
Please add the following to your .travis.yml file:
secure: "TD955qR6qvpVIz3fLkGeeUhV76deeVRaLVYjW9YjV6Ob7wd+vPtACZ..."
Pro Tip: You can add it automatically by running with --add.
Then I would copy/paste the secure: "TD955qR6qvpVIz3fLkGeeUhV76d..." result at the appropriate place in your .travis.yml file:
language: node_js
node_js:
  - '0.12'
cache:
  directories:
    - node_modules
deploy:
  edge: true
  provider: cloudfoundry
  api:
    secure: "bHU4+ZDFeZcHpuE/WRpgMBcxr8l..."
  username:
    secure: "TD955qR6qvpVIz3fLkGeeUhV76d..."
You can have more details about how to encrypt sensitive data on Travis CI here.
Hope this helps.