Bitbucket Pipelines - Run or skip step based on build output

We have a monorepo that contains about 5 projects. All these projects are built based on the git change history, so if project A is not changed, it is not built. This is all managed by Nx monorepo.
We are using Bitbucket Pipelines to build and deploy our projects. We want to split every deploy into its own step so that we have more control over each project's deployment.
In order to achieve this we need to change our build step so that it only executes if the dist folder contains the portal it is meant to deploy. I've read about the condition configuration, but I cannot find anything about checking build artifacts in a condition instead of the git commit that triggered the change. So is there a way to skip (or directly pass) a step if the portal is not in the build artifact?
Build step
- step: &build
    name: Build
    caches:
      - node
    script:
      - git fetch origin master:refs/remotes/master
      - npm run build
    artifacts:
      - dist/**
Example Dist artifact
.
├── dist/
│   ├── login-portal
│   ├── Portal-Y
Our deploy step
- step: &deployLoginPortal
    image: amazon/aws-cli:2.4.17
    deployment: test
    trigger: manual
    script:
      - aws s3 sync $LOGIN_OUTPUT_PATH s3://$LOGIN_S3_BUCKET/ --acl public-read # Sync the portal
      # $LOGIN_OUTPUT_PATH = 'dist/login-portal'
Example condition (does not work)
condition:
  changesets:
    includePaths:
      - $LOGIN_OUTPUT_PATH/** # only run if dist contains changes in $LOGIN_OUTPUT_PATH
Am I missing something, or is there another way of only executing the step if the build artifact (dist/) contains the portal the step is meant to deploy?

You can manually check for the presence of the artifact before running aws s3 sync and gracefully exit the step if the condition is not satisfied:
script:
  # Exit successfully when the portal was not built in this pipeline run
  - if [ ! -d "$LOGIN_OUTPUT_PATH" ]; then echo "Directory $LOGIN_OUTPUT_PATH is not present; gracefully exiting"; exit 0; fi
  - aws s3 sync $LOGIN_OUTPUT_PATH s3://$LOGIN_S3_BUCKET/ --acl public-read
Be advised that your step will still technically be triggered, so its startup time (usually about 40 seconds) will count against your build minutes.
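For completeness, a minimal sketch of the full deploy step with this guard in place, assuming the same $LOGIN_OUTPUT_PATH and $LOGIN_S3_BUCKET variables (the step name is made up):
- step: &deployLoginPortal
    name: Deploy login portal
    image: amazon/aws-cli:2.4.17
    deployment: test
    trigger: manual
    script:
      # Skip gracefully when the build did not produce this portal
      - if [ ! -d "$LOGIN_OUTPUT_PATH" ]; then echo "Directory $LOGIN_OUTPUT_PATH is not present; gracefully exiting"; exit 0; fi
      - aws s3 sync $LOGIN_OUTPUT_PATH s3://$LOGIN_S3_BUCKET/ --acl public-read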

Related

Daily automatic Bitbucket deploy with manual step when pushed to branch

I'm trying to set up a pipeline in Bitbucket with a daily schedule for two branches.
develop: There is a scheduled daily deployment running, and when I push to this branch the pipeline runs again.
master: This is the tricky one. I want to have a daily deployment because the page needs to be rebuilt daily, but I would like a safeguard so that if anyone pushes to this branch by mistake, or the code is bad, the deployment only runs after a manual trigger.
So my question is: is it possible to set up a rule that tracks whether there was a push and, in that case, lets the admin start the pipeline manually?
pipelines:
  branches:
    develop:
      - step:
          name: Deploy staging
          deployment: staging
          caches:
            - node
          script:
            - npm run staging:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN --project $FIREBASE_PROJECT_STAGING --only functions,hosting
          artifacts:
            - build/**
    master:
      - step:
          name: Deploy to production
          deployment: production
          caches:
            - node
          script:
            - npm run deploy:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN_STAGING --project $FIREBASE_PROJECT_PRODUCTION --only functions,hosting
          artifacts:
            - build/**
I'd suggest scheduling a different custom pipeline, separate from the one that runs on pushes to the production branch. The same step definition can be reused with a YAML anchor, and you can override the trigger in one of them.
E.g:
definitions:
  # write whatever is meaningful to you,
  # just avoid "caches" or "services" or
  # anything bitbucket-pipelines could expect
  yaml-anchors:
    - &deploy-pro-step
      name: Deploy production
      trigger: manual
      deployment: production
      script:
        - do your thing

pipelines:
  custom:
    deploy-pro-scheduled:
      - step:
          <<: *deploy-pro-step
          trigger: automatic
  branches:
    release/production:
      - step: *deploy-pro-step
Sorry if I made some YAML mistakes, but this should be the general idea. The branch where the scheduled custom pipeline will run is configured in the web interface when the schedule is set up.

Prevent the Bitbucket pipeline from triggering when bitbucket-pipelines.yml is updated

I am new to Bitbucket Pipelines. I have added bitbucket-pipelines.yml to my Node project; in the pipeline I have a step to build and push a container to ECR and another step to deploy.
Now each time I make a change to bitbucket-pipelines.yml it builds and pushes a new image to ECR and deploys.
I do not want the pipeline to trigger when I make changes to bitbucket-pipelines.yml. I only want the pipeline to trigger when I make changes to my application. Am I setting up the project wrong?
My project structure:
.
├── bitbucket-pipelines.yml
├── Dockerfile
├── index.js
├── node_modules
├── package.json
├── package-lock.json
└── README.md
There are a few possible options:
1. Add [skip ci] to your git commit message
Whenever you change the bitbucket-pipelines.yml on its own, add "[skip ci]" (without quotes) somewhere in your Git commit message. This will prevent the pipeline from running when you push to the Bitbucket remote.
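For example (the commit message itself is arbitrary):
git commit -m "Update bitbucket-pipelines.yml [skip ci]"
git push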
Advantages:
It's easy and simple.
Disadvantages:
You have to remember to manually write the "[skip ci]" text. It's easy to forget, or perhaps a new team member will not know about it.
2. Use a Git Hook to automatically modify your git commit message
Write a Git Hook script that will automatically insert the "[skip ci]" text into the Git commit message. The script will have to do something like this:
After a local commit, check the latest commit to see which files were changed. Use something like git diff --name-only HEAD~0 HEAD~1
If bitbucket-pipelines.yml was the only file changed, modify the commit to insert "[skip ci]" into the commit message.
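A hypothetical post-commit hook along these lines (the file-name check and the HEAD~1 comparison are assumptions, and it only handles commits that have a parent):
#!/bin/sh
# .git/hooks/post-commit -- illustrative sketch only
changed=$(git diff --name-only HEAD~1 HEAD)
msg=$(git log -1 --pretty=%B)

# Bail out if the message is already tagged, so the amend below
# does not re-trigger this hook forever.
case "$msg" in *"[skip ci]"*) exit 0 ;; esac

# If the pipeline file is the only changed file, tag the commit message.
if [ "$changed" = "bitbucket-pipelines.yml" ]; then
  git commit --amend -m "$msg [skip ci]"
fi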
More info about Git Hooks:
https://githooks.com/
https://www.atlassian.com/git/tutorials/git-hooks
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
Advantages:
It's fully automatic. No need to manually tag your commit messages.
Disadvantages:
Creating the script may not be easy.
Each cloned repo needs to configure the git hooks. See: Can Git hook scripts be managed along with the repository?
3. Make the bitbucket-pipelines.yml check for the file changes
Add a section in the yml build script to check which file was changed in the latest commit.
The script in the yml will have to do something like this:
Check the latest commit to see which files were changed. Use something like git diff --name-only HEAD~0 HEAD~1
If bitbucket-pipelines.yml was the only file changed, abort the CI build immediately, with an exit 0 statement.
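A minimal sketch of what that could look like (the npm command stands in for whatever the real build does):
pipelines:
  default:
    - step:
        name: Build and deploy
        script:
          # Abort early (and successfully) if the last commit only touched the pipeline file
          - if [ "$(git diff --name-only HEAD~1 HEAD)" = "bitbucket-pipelines.yml" ]; then echo "Only bitbucket-pipelines.yml changed; skipping build"; exit 0; fi
          - npm run build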
Advantages:
It's fully automatic. No need to manually tag your commit messages.
No need to write Git Hook scripts.
Disadvantages:
The Docker image of your CI build will take 1-5 minutes to load, and then abort itself. This is a bit inefficient and it will consume some of your build minutes.
Because the CI build will still run for a few minutes, it will pollute your CI build history with build runs that didn't do anything.
4. Use a Conditional Step with "changesets" and "includePaths"
Define a changesets condition with includePaths to execute a step only if one of the modified files matches an expression in includePaths.
pipelines:
  default:
    - step:
        name: build-frontend-artifact
        condition:
          changesets:
            includePaths:
              # only xml files directly under resources directory
              - "src/main/resources/*.xml"
              # any changes in frontend directory
              - "src/site/**"
        script:
          - echo "Building frontend artifact"
Source and more info here: https://bitbucket.org/blog/conditional-steps-and-improvements-to-logs-in-bitbucket-pipelines
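Applied to the project layout from the question, that could look roughly like this (the listed paths are assumptions about what counts as application code):
pipelines:
  default:
    - step:
        name: Build and push to ECR
        condition:
          changesets:
            includePaths:
              - "Dockerfile"
              - "index.js"
              - "package.json"
              - "package-lock.json"
        script:
          - echo "Build, push and deploy here"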

Travis CI - Build and Deploy based on PR and Tags

I'm trying to create two different actions within Travis CI. The first action is to execute a script on every push on every branch. This is currently working as desired. The second is to trigger a different script only when I run git push origin --tags. In short:
Execute script1 always (currently working)
Execute script2 when tags are pushed
Here is what I'm trying:
language: python
python:
  - 3.7
matrix:
  include:
    - python: 3.7
      sudo: true
install:
  - pip install -r requirements.txt
script: # Always want this to happen
  - invoke package
branches:
  only:
    - master
    - /^x\/.*/
deploy: # Want this to occur on git push origin --tags
  provider: script
  script: invoke release
  on:
    tags: true
The deploy section is not being triggered, and I can find no evidence of the invoke release script being invoked.
Update:
It may be due to the way I'm pushing tags? I'm seeing this log in Travis now:
Skipping a deployment with the script provider because this is not a tagged commit
Solved it thanks to this GitHub issue. I changed the deploy section to this:
deploy:
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true
but I had to remove the branches section. The deployment script was invoked, nonetheless.
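Putting it together, a minimal sketch of the resulting .travis.yml, assuming the same invoke tasks as above and with the branches section dropped as described:
language: python
python:
  - 3.7
install:
  - pip install -r requirements.txt
script:
  - invoke package   # runs on every build
deploy:
  provider: script
  script: invoke release   # runs only for tagged commits
  on:
    tags: true
    all_branches: true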

Azure DevOps Maven Docker build - can't locate target folder

I am new to Azure Devops and am having some difficulty building my first pipeline. So far I have three steps that work just fine:
1. Maven build from POM, successfully packages my war file
2. Copy files to $(system.defaultworkingdirectory), copying the files I want from the target folder
3. Successful publish of the artifact to a private Azure package repository
My 4th step runs a DevOps Docker Task to build a Docker image to be used to deploy the web app. This has been a challenge because my dockerfile COPY commands are failing. I can't locate the target folder, the one that step 3 just used to build the war file! In an effort to locate the target folder I added this command to my dockerfile:
RUN ls -R -la /
It appears to have dumped the entire file system and the target folder is nowhere to be found in the listing.
Any thoughts regarding where I can find my target files?
I am very close to making this work the way I want. If I comment out the COPY command it builds a fundamentally empty image which my 5th step successfully pushes to my private Docker repository. Of course the image is useless without the web app.
Any help you might offer will be greatly appreciated.
After a lot of trial and error I came up with the following azure-pipelines.yml file:
trigger:
- master

jobs:
- job: build
  pool:
    vmImage: 'Ubuntu-16.04'
  steps:
  - script: |
      echo Starting the build
      env
      java -version
      ./mvnw clean package -Dmaven.test.failure.ignore=true -e -U
      ls -la *
    displayName: 'Build with Maven'
  - task: Docker@0
    displayName: 'Build an image'
    inputs:
      azureSubscription: 'Visual Studio Enterprise (******)'
      azureContainerRegistry: '{"loginServer":"testingcontainerregistry******.azurecr.io", "id" : "/subscriptions/******/resourceGroups/******/providers/Microsoft.ContainerRegistry/registries/testingContainerRegistry******"}'
      action: 'Build an image'
  - task: Docker@0
    displayName: 'Push an image'
    inputs:
      azureSubscription: 'Visual Studio Enterprise (******)'
      azureContainerRegistry: '{"loginServer":"testingcontainerregistry******.azurecr.io", "id" : "/subscriptions/******/resourceGroups/******/providers/Microsoft.ContainerRegistry/registries/testingContainerRegistry******"}'
      action: 'Push an image'

- job: test
  dependsOn: build
  condition: succeeded()
  pool:
    vmImage: 'Ubuntu-16.04'
  steps:
  - script: |
      echo Performing tests
      env
      ls -la
    displayName: 'Running integration tests'
The testing job is not doing anything useful yet, but you can see that the Maven build and the Docker build&push are done by the same job.
Essentially I struggled with the same issue as you did. I created a GitHub ticket to make them aware of the fact that the basic concepts are not easy to understand from the current documentation: https://github.com/MicrosoftDocs/vsts-docs/issues/2851
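For reference, a hypothetical Dockerfile for this layout; the Tomcat base image and war path are assumptions, the point being that target/ is available to the Docker build because the Maven build ran in the same job:
# Illustrative sketch only -- base image and paths are assumptions
FROM tomcat:9-jre11
COPY target/*.war /usr/local/tomcat/webapps/ROOT.war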

GitLab CI cache is empty on retry

I've got a problem with gitlab-ci-multi-runner. I have several stages in my setup; let's say build and test. Build works fine, but when it comes to the test stage the job fails because of some infrastructure issue. Then I fix the cause of the failure and want to retry only the last stage, assuming the cache between stages is still alive. But it fails again because of an empty cache. Here is an example to demonstrate my layout:
stages:
  - build
  - test

build_step:
  stage: build
  tags:
    - docker
  cache:
    key: ${CI_PIPELINE_ID}
    untracked: true
    paths:
      - bld/
  script:
    - rm -rf bld
    - mkdir -p bld
    - cd bld
    - touch build_here

test:
  stage: test
  cache:
    key: ${CI_PIPELINE_ID}
    untracked: true
    paths:
      - bld/
  tags:
    - docker
  script:
    - cd bld
    - ls -all
Here is my gitlab-runner version:
# gitlab-ci-multi-runner --version
Version: 9.5.1
Git revision: 96b34cc
Git branch: 9-5-stable
GO version: go1.8.3
Built: Wed, 04 Oct 2017 16:26:27 +0000
OS/Arch: linux/amd64
Thanks for your help!
Cache is served on a best-effort basis; to pass data between jobs you need to use artifacts, as explained in the documentation:
cache - Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don't have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts - Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI.
You need to use dependencies along with artifacts to obtain what you want.
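For illustration, a minimal sketch of the same pipeline rewritten to pass bld/ between stages as an artifact instead of relying on the cache (the expire_in value is an arbitrary assumption):
stages:
  - build
  - test

build_step:
  stage: build
  tags:
    - docker
  script:
    - rm -rf bld
    - mkdir -p bld
    - cd bld
    - touch build_here
  artifacts:
    paths:
      - bld/
    expire_in: 1 week

test:
  stage: test
  tags:
    - docker
  dependencies:
    - build_step
  script:
    - cd bld
    - ls -all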
