Bitbucket pipeline - possibility to merge one branch to another - bitbucket

I have a repository with two branches: master and Dev. I want to configure the pipeline so that when I push code to the Dev branch and the build is successful, Dev is merged into master. Unfortunately I can't find any information about merging in the Bitbucket Pipelines docs.
That's my yml file:
pipelines:
  branches:
    Dev:
      - step:
          script:
            - ant deployCodeCheckOnly -Dsf.username=$SF_USERNAME -Dsf.password=$SF_PASSWORD
Could somebody help me with this? Is it possible?
--Edit
I tried to change the script as suggested:
pipelines:
  branches:
    Dev:
      - step:
          script:
            - ant deployCodeCheckOnly -Dsf.username=$SF_USERNAME -Dsf.password=$SF_PASSWORD
            - git remote -v
            - git fetch
            - git checkout master
            - git merge Dev
            - git push -v --tags origin master:master
Result:
git remote -v
+ git remote -v
origin git@bitbucket.org:repository/project.git (fetch)
origin git@bitbucket.org:repository/project.git (push)
git fetch origin
+ git fetch origin
Warning: Permanently added the RSA host key for IP address ..... to the list of known hosts.
And error:
+ git checkout master
error: pathspec 'master' did not match any file(s) known to git.
--Solution
Dev:
  - step:
      script:
        - ant deployCodeCheckOnly -Dsf.username=$SF_USERNAME -Dsf.password=$SF_PASSWORD
        - git fetch
        - git checkout -b master
        - git merge Dev
        - git push -v --tags origin master:master
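Why `checkout -b` is needed: the Pipelines clone only fetches the build branch (Dev), so there is no local `master` to switch to, and plain `git checkout master` fails with the pathspec error above. A self-contained sketch of the same flow against a throwaway repository (all paths and names are illustrative):

```shell
# Toy reproduction: a single-branch clone has no local "master", so plain
# `git checkout master` fails, while `git checkout -b master` creates one
# from the current (Dev) HEAD
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare origin.git
git clone -q "$tmp/origin.git" seed && cd seed
git config user.email demo@example.com && git config user.name demo
git checkout -qb master && echo base > f.txt && git add f.txt && git commit -qm init
git push -q origin master
git checkout -qb Dev && echo feature >> f.txt && git commit -qam "dev work"
git push -q origin Dev
cd "$tmp"
# Mimic the Pipelines clone: only the Dev branch is fetched
git clone -q --single-branch -b Dev "$tmp/origin.git" pipeline && cd pipeline
git checkout master 2>/dev/null || echo "checkout master fails, as in the pipeline log"
git checkout -qb master   # the workaround: create master from the Dev HEAD
git merge -q Dev          # a no-op here; kept to mirror the pipeline script
git push -q origin master:master
```

Note that this pushes the Dev HEAD as master, which is exactly what the accepted workaround does.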

I was facing the same issue but wanted to use pull requests instead of a plain git merge, so I ended up using the Bitbucket API for the job:
Create "App password"
--
Create "App password" so you don't have to push your own credentials to pipelines
(bitbucket settings -> app passwords)
Set environment variables for pipelines
--
BB_USER = your username
BB_PASSWORD = app password
Create bash script
--
I have a bash script that creates a pull request from $BITBUCKET_BRANCH and merges it immediately:
#!/usr/bin/env bash
# Exit immediately if any command exits with a non-zero status,
# e.g. the pull-request merge fails because of a conflict
set -e

# Set destination branch
DEST_BRANCH=$1

# Create new pull request and get its ID
echo "Creating PR: $BITBUCKET_BRANCH -> $DEST_BRANCH"
PR_ID=`curl -X POST https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/pullrequests \
  --fail --show-error --silent \
  --user $BB_USER:$BB_PASSWORD \
  -H 'content-type: application/json' \
  -d '{
    "title": "Automerger: '$BITBUCKET_BRANCH' -> '$DEST_BRANCH'",
    "description": "Automatic PR from pipelines",
    "state": "OPEN",
    "destination": {
      "branch": {
        "name": "'$DEST_BRANCH'"
      }
    },
    "source": {
      "branch": {
        "name": "'$BITBUCKET_BRANCH'"
      }
    }
  }' \
  | sed -E "s/.*\"id\": ([0-9]+).*/\1/g"`

# Merge PR
echo "Merging PR: $PR_ID"
curl -X POST https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/pullrequests/$PR_ID/merge \
  --fail --show-error --silent \
  --user $BB_USER:$BB_PASSWORD \
  -H 'content-type: application/json' \
  -d '{
    "close_source_branch": false,
    "merge_strategy": "merge_commit"
  }'
Usage: ./merge.sh DESTINATION_BRANCH
See the Pipelines environment variables documentation to better understand the variables used.
See the Bitbucket API docs for more info about the API calls used.
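The sed at the end of the first curl relies on "id": appearing in the JSON response. A self-contained illustration with a hypothetical (truncated) response body:

```shell
# Hypothetical, truncated API response -- for illustration only
RESPONSE='{"type": "pullrequest", "id": 42, "description": "Automatic PR from pipelines"}'

# Same extraction as in the script above
PR_ID=$(echo "$RESPONSE" | sed -E "s/.*\"id\": ([0-9]+).*/\1/g")
echo "$PR_ID"   # prints 42
```

A real pull request response contains many more fields; if another numeric "id" ever appeared earlier in the body (e.g. in a nested object), a JSON-aware tool would be more robust.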
Finally in pipelines
--
just call the script:
Dev:
  - step:
      script:
        - chmod +x ./merge.sh
        - ./merge.sh master
Benefits:
The pipeline will fail if there is a conflict (if you want it to fail)
Better control of what's happening

In the “script” section of the YAML configuration, you can do more or less anything you can do at the shell, so (although I’ve never tried it) I don’t see a reason why this shouldn’t be possible.
In other words, you’d have to:
Switch the branch to master
Merge dev (optionally, using the predefined BITBUCKET_COMMIT environment variable, which identifies your dev commit)
Commit to master (and probably also push)
As git is available in the script, you can use normal git commands and don't need anything specific to Bitbucket Pipelines, like so:
script:
  - git fetch
  - git checkout -b master
  - git merge Dev
  - git push -v --tags origin master:master
To make sure this is only done when your Ant job is successful, you should make sure that in case of an error you’ll get a non-zero exit status (which, I assume, is Ant’s default behaviour).
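This failure gating can be illustrated in plain shell (a minimal sketch; Pipelines runs each step's script with errexit-style semantics, stopping at the first failing command):

```shell
# The inner script mimics a pipeline step: a failing "build" command
# (here, `false`) prevents the later "merge" command from ever running
out=$(sh -c 'set -e; echo "build ok"; false; echo "merge step"' || true)
echo "$out"   # prints only "build ok"
```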

Related

Bitbucket Pipeline git fetch with public key fails

With the help of the article below I've set up SSH keys for Bitbucket so I can use them in pipelines:
https://support.atlassian.com/bitbucket-cloud/docs/set-up-an-ssh-key/
When tested in a terminal window by entering the following command, it works fine:
$ ssh -T git@bitbucket.org
but when I run my pipelines, it fails.
I added the public key under my Bitbucket profile.
My Pipeline:
image:
  name: abhisheksaxena7/salesforcedockerimg
pipelines:
  branches:
    feature/**:
      - step:
          script:
            - ant -buildfile build/build.xml deployEmptyCheckOnly -Dsfdc.username=$SFDC_USERNAME -Dsfdc.password=$SFDC_PASS$SFDC_TOKEN -Dsfdc.serverurl=https://$SFDC_SERVERURL
    # master:
    #   - step:
    #       script:
    #         - ant -buildfile build/build.xml deployCode -Dsfdc.username=$SFDC_USERNAME -Dsfdc.password=$SFDC_PASS$SFDC_TOKEN -Dsfdc.serverurl=https://$SFDC_SERVERURL
    Admin-Changes:
      - step:
          script:
            - echo my_known_hosts
            # Set up SSH key; follow instructions at https://confluence.atlassian.com/display/BITBUCKET/Set+up+SSH+for+Bitbucket+Pipelines
            - (mkdir -p ~/.ssh ; cat my_known_hosts >> ~/.ssh/known_hosts; umask 077 ; echo $SSH_KEY | base64 --decode -i > ~/.ssh/id_rsa)
            # Read update_to_trigger_pipelines.txt into the commitmsg variable
            - commitmsg="$(<update_to_trigger_pipelines.txt)"
            # Set up the repo and check out master
            - echo git@bitbucket.org:$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG.git
            - git remote set-url origin git@bitbucket.org:$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG.git
            - git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/*
            - git fetch
            - git checkout master
            # Get metadata from the server
            - ant -buildfile build/build.xml getCode -Dsfdc.username=$SFDC_USERNAME -Dsfdc.password=$SFDC_PASS$SFDC_TOKEN -Dsfdc.serverurl=https://$SFDC_SERVERURL
            # Commit any changes to master
            - git add force-app/main/default/*
            - git config user.name "$GIT_USERNAME"
            - git config user.email "$GIT_EMAIL"
            - if [[ -n $(git status -s) ]] ; then filelist=`git status -s` ; git commit -a -m "$commitmsg" -m "$filelist" ; git push origin master:master ; else echo "No changes detected"; fi
I was adding my local server's SSH key to my profile instead of the repository SSH key, so I had to get the repository's Pipelines SSH key and add it to my profile.

Dokku and bitbucket ci/cd

Is there a simple recipe for integrating Bitbucket Pipelines with dokku?
I want to continuously deploy to the production server after each commit to master.
The necessary steps can be boiled down to:
Enable pipelines.
Generate an SSH key for the pipelines script and add it to dokku.
Add the dokku host as a known host in pipelines.
If you're using private dependencies, also add bitbucket.org as a known host.
Define the environment variable DOKKU_REMOTE_URL.
Use a bitbucket-pipelines.yml file (see example below).
The easy way is to manage it directly from your app's root folder: create a bitbucket-pipelines.yml file containing something like the following:
image: node:8.9.4

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          # Add SSH keys for private dependencies
          - mkdir -p ~/.ssh
          - echo $SSH_KEY | base64 -d > ~/.ssh/id_rsa
          - chmod 600 ~/.ssh/id_rsa
          # Install and run checks
          - curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 1.3.2
          - export PATH=$HOME/.yarn/bin:$PATH
          - yarn install # Build is triggered from the postinstall hook
  branches:
    master:
      - step:
          script:
            # Add SSH keys for deployment
            - mkdir -p ~/.ssh
            - echo $SSH_KEY | base64 -d > ~/.ssh/id_rsa
            - chmod 600 ~/.ssh/id_rsa
            # Deploy to hosting
            - git remote add dokku $DOKKU_REMOTE_URL
            - git push dokku master
Remember that dokku takes care of npm install, so all we have to do is set up the Docker instance (running in Bitbucket) for deploying to dokku.
However, pay attention to image: node:8.9.4; it is generally a good idea to enforce an image that uses the exact version of node (or whichever language) that you use in your application.
Steps 2-4 are just fiddling around with the settings in Bitbucket's Repository Settings -> Pipelines -> SSH keys, where you will generate an SSH key and add it to your dokku installation.
For the known host you want to enter the IP address (or domain name) of the server hosting your dokku installation, then press fetch, followed by add host.
See this example application: https://github.com/amannn/dokku-node-hello-world#continuous-deployment-from-bitbucket.

CircleCI branch build failing but tag build succeeds

I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0-9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with Couldn't connect to Docker daemon at ... - is it running? when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
The problem was occurring in the Save Version Number step. Sometimes that version would be .${CIRCLE_BUILD_NUM}, since no tag was passed, and Docker dislikes tags starting with ., so I added a conditional check: if CIRCLE_TAG was empty, a default version (v0.1.0-build) is used instead.
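A sketch of that fallback (the variable names and default value are illustrative, not the exact code from the original config):

```shell
# Fall back to a default version when CIRCLE_TAG is unset or empty,
# so the resulting Docker tag never starts with "."
if [ -z "${CIRCLE_TAG}" ]; then
  VERSION_NUM="v0.1.0-build.${CIRCLE_BUILD_NUM}"
else
  VERSION_NUM="${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}"
fi
mkdir -p deployment/dev
echo "export VERSION_NUM=${VERSION_NUM}" > deployment/dev/.env
```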

GitLab CI: How to pull specific docker container for deployment

I'm doing the production deploy in GitLab manually, using Docker containers.
Clicking the 'Play' button in the pipeline list should do the deploy.
But how do I get the version of the selected container? This script always tries to pull the latest version, which should not happen; I want to pull the 'selected' container.
deploy_prod:
  stage: deploy
  script:
    - docker pull $CI_REGISTRY_IMAGE # here the selected version is missing
    # ...
  when: manual
  environment:
    name: productive
    url: https://example.com
  only:
    - master
As mentioned in the comments to your question, simply use the same script you used to push the image, to pull it in the deploy stage.
Here's an example pull.sh script:
#!/usr/bin/env bash
args=("$@")
CI_REGISTRY_IMAGE=${args[0]}
PACKAGE_VERSION=$(cat package.json \
  | grep version \
  | head -1 \
  | awk -F: '{ print $2 }' \
  | sed 's/[",]//g' \
  | tr -d '[[:space:]]')
CONTAINER_RELEASE_IMAGE=$CI_REGISTRY_IMAGE:$PACKAGE_VERSION
docker pull $CONTAINER_RELEASE_IMAGE
Notice the pull instead of the push in the last line.
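For reference, here is what that extraction pipeline produces for a typical package.json (the sample file and values below are hypothetical):

```shell
# Sample package.json, for illustration only
cat > package.json <<'EOF'
{
  "name": "demo-app",
  "version": "1.2.3"
}
EOF

# Same extraction as in pull.sh: take the first "version" line, keep the
# value after the colon, and strip quotes, commas and whitespace
PACKAGE_VERSION=$(cat package.json \
  | grep version \
  | head -1 \
  | awk -F: '{ print $2 }' \
  | sed 's/[",]//g' \
  | tr -d '[[:space:]]')
echo "$PACKAGE_VERSION"   # prints 1.2.3
```

If jq is available in the image, `jq -r .version package.json` is a more robust alternative to the grep/awk/sed chain.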
Then modify your deploy job like this:
deploy_prod:
  stage: deploy
  script:
    - ./pull.sh $CI_REGISTRY_IMAGE
    # ...
  when: manual
  environment:
    name: productive
    url: https://example.com
  only:
    - master

Jenkins pipeline multiline shell with escape character

I'm running into a weird issue with a pipeline script. I have a multi-line sh block like:
sh """
git tag -fa \\"${version}\\" -m \\"Release of ${version}\\"
"""
And this somehow runs as:
+ git tag -fa '"1.0-16-959069f'
error: Terminal is dumb, but EDITOR unset
Please supply the message using either -m or -F option.
So it's dropping the -m and the message. I've tried single escapes, double escapes; nothing seems to work.
I have no idea why this worked, but this did:
def tagGithub(String version) {
    def exec = """
        git tag -d ${version} || true
        git push origin :refs/tags/${version}
        # tag new version
        git tag -fa ${version} -m "Release of ${version}"
        git push origin --tags
    """
    sh exec
}
Something about the inline Jenkins Groovy interpolation seems busted; doing the interpolation in a separate variable and then executing that worked.
