I was able to set the initial value of $BUILD_NUMBER and increment it using the API, but it seems there is no way to clear it or set a value lower than the current build number. The API rejects the attempt with:
The request body contains invalid properties - your build number is lower than existing builds in Pipelines.
$BITBUCKET_BUILD_NUMBER is designed to increment upward; if you do nothing to it, it is the same as your pipeline number. If you want a build number you control, create a new variable and add a step that increments it as part of the pipeline. You will then be able to edit it at will, either in the console or with the API:
- step:
    name: Increment Build Number
    image: ellerbrock/alpine-bash-curl-ssl # just an image with curl and bash
    script:
      # $((VAR++)) evaluates to the old value, so add 1 explicitly
      - newBuildValue=$((MY_PERSONAL_BUILD_NUMBER + 1))
      - echo $newBuildValue
      # the braces around the variable UUID are escaped so curl does not treat them as a URL glob
      - curl -v -X PUT "https://api.bitbucket.org/2.0/repositories/$ORG_OR_WORKSPACE/$REPO/pipelines_config/variables/\{$UUID_VARIABLE\}" -H "Content-Type:application/json" -d "{\"key\":\"MY_PERSONAL_BUILD_NUMBER\", \"value\":\"$newBuildValue\" }" --user $PIPELINE_APP_PASSWORD
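If you don't know the variable's UUID, you can list the repository's pipeline variables first and look it up by key (same placeholders and credentials as above):

curl -s --user $PIPELINE_APP_PASSWORD "https://api.bitbucket.org/2.0/repositories/$ORG_OR_WORKSPACE/$REPO/pipelines_config/variables/"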
If you are using your own Docker Image, add curl and bash.
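For example, on an Alpine-based image you could install them at the top of the script step (this assumes apk; use your base image's package manager otherwise):

script:
  - apk add --no-cache bash curl # install the tools this step needs
  # ...then run the commands shown above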
Within .gitlab-ci.yml I've created a new variable under script: by using $CI_COMMIT_SHA and modifying it. When I echo the variable it returns the proper value. However, I'm not having any success passing it along to Docker. What am I not doing right?
Ultimately, I would like to access this custom variable inside my container.
build:
  script:
    # converts commit SHA to UNIX time
    - export COMMIT_TIME_UNIX=$(git show -s --format=%ct $CI_COMMIT_SHA)
    - echo $COMMIT_TIME_UNIX
You would need to check, when the same script is executed in a Docker/container environment, whether it is still in the right Git repository path.
You can add, before the first export:
pwd
git status
env|grep GIT
That way, you can check whether you are running Git commands where you should be, and whether any GIT_xxx environment variable might influence them.
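If the goal is to get the value into the container itself, one common approach is to forward it into the image build as a build argument. A minimal sketch, assuming your runner can run docker build and your Dockerfile declares a matching ARG COMMIT_TIME_UNIX (the image tag is illustrative):

build:
  script:
    - export COMMIT_TIME_UNIX=$(git show -s --format=%ct $CI_COMMIT_SHA)
    # forward the value into the image; requires "ARG COMMIT_TIME_UNIX" in the Dockerfile
    - docker build --build-arg COMMIT_TIME_UNIX=$COMMIT_TIME_UNIX -t myapp:latest .

Inside the Dockerfile you can then persist it for the running container, e.g. with ENV COMMIT_TIME_UNIX=$COMMIT_TIME_UNIX after the ARG line.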
I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
Selenium tests are in another repository in Bitbucket and they have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copy + pasting the first example. How to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '{
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
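If the downstream tests are defined as a custom pipeline rather than the default branch pipeline, the same endpoint accepts a selector in the target; a sketch (the pattern value, run-selenium-tests, is a placeholder for your custom pipeline's name):

-d '{
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master",
      "selector": {
        "type": "custom",
        "pattern": "run-selenium-tests"
      }
    }
  }'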
According to the official documentation there is no easy way to do that, because jobs are isolated to the scope of one repository. Still, you can achieve your task in the following way:
create a Docker image with the minimum setup required to execute your tests inside it
upload it to Docker Hub (or some other registry if you have one)
use that Docker image in the last step of your pipeline, after the deploy, to execute the tests (see the sketch below)
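A minimal sketch of such a step (the image name is a placeholder for your prebuilt test image, and npm test stands in for whatever runs your suite):

- step:
    name: Run Selenium tests
    image: your-dockerhub-user/selenium-tests:latest # hypothetical image with the test suite baked in
    script:
      - npm test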
Try out the official Bitbucket pipeline trigger component: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in a step after the deploy:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less as you say.
My step is built on top of a Docker image with headless Chrome, npm and git.
I followed the steps below:
I set a private key for the remote repo as a base64-encoded variable in the original repo (see the documentation). The public key is set on the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it into a file and change its permissions to 400.
I add this key inside the Docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: entry.sh is there because I am starting the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      # start an SSH agent, add the key, then clone over SSH
      - eval `ssh-agent -s`
      - ssh-add priv_key
      - git clone git@bitbucket.org:project.git
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
The top answers (this and this) are correct; they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were set to "WRITE" for repos and pipelines...).
Also, this works for executing pipelines in Bitbucket's cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)
I have a requirement where I need to trigger build jobs remotely using a curl command. I am unable to pass the branch/tag name as a parameter to trigger the build.
I used the below command:
& $CURLEXE -k -X POST $dst_job_url --user username:token --data-urlencode json='{"parameters": [{"name":"branch","branch":"branches"}]}'
If I run the above command, it triggers the build for the trunk (default).
You omitted the URL, so it's hard to be certain. Jenkins has two URLs for building: "build" and "buildWithParameters". If you're not using buildWithParameters, switching to it will probably help.
See:
How to trigger Jenkins builds remotely and to pass parameters
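A sketch of what that could look like (the job URL is a placeholder; buildWithParameters accepts the parameters as query or form fields, and your Jenkins may additionally require a CSRF crumb):

curl -k -X POST "https://jenkins.example.com/job/my-job/buildWithParameters?branch=branches" --user username:token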
I see that returning a non-zero integer from a shell script executed by Jenkins marks the result as a failure.
How do I change it to Aborted instead? Is there a plugin to do this? Can I avoid having to use a Groovy script?
Instead of returning a non-zero integer and having the build fail, you can trigger an abort of the build using its REST API from within the shell build step.
Example using curl:
curl -XPOST $BUILD_URL/stop
Example using wget:
wget --post-data="" $BUILD_URL/stop
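A hedged sketch of using this inside a build step (the check is illustrative, and depending on your Jenkins security settings the POST may need credentials):

# abort this build instead of failing it when a condition is not met
if ! ./healthcheck.sh; then
  curl -X POST "$BUILD_URL/stop" --user "$JENKINS_USER:$JENKINS_API_TOKEN"
  sleep 30 # give Jenkins time to abort before any further commands run
fi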
I've managed to create a CircleCI build that triggers a subsequent build using their API using curl. I've added this to my circle.yml:
test:
  override:
    - mvn test -s settings.xml
    - mvn deploy -Prun-its -s settings.xml
    - curl -v -X POST https://circleci.com/api/v1/project/alexec/docker-maven-plugin/tree/master?circle-token=$CIRCLE_TOKEN
How do I trigger only if all of the previous steps are green?
I think you should do this in the deployment section: Since this is - by definition - only run if everything is fine, this should do the trick.
See their documentation on deployment for details. There it says:
These commands are triggered only after a successful (green) build.
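A minimal sketch of what that could look like in circle.yml (the section name trigger_downstream is arbitrary; the curl command is the one from your test section):

deployment:
  trigger_downstream:
    branch: master
    commands:
      - curl -v -X POST https://circleci.com/api/v1/project/alexec/docker-maven-plugin/tree/master?circle-token=$CIRCLE_TOKEN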
You can add a requires key to the job that you want to run only if a previous job has succeeded: give requires the name of the job that must succeed before this one starts (see the sketch below).
See this example: https://circleci.com/docs/2.0/configuration-reference/
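A minimal sketch in CircleCI 2.0 workflow config (job names are placeholders):

workflows:
  version: 2
  build_then_trigger:
    jobs:
      - build
      - trigger_downstream:
          requires:
            - build # trigger_downstream runs only after build succeeds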