I want to use Travis CI to create nice GitHub Releases posts after each commit to master.
This is what I need:
set a custom title
set a custom description
publish only the build artifact, without the source code zip
I tried everything I could, but I could only add the binary to the release. Here is my config:
language: java
sudo: false
install: gradle wrapper --gradle-version 4.2
jdk:
  - oraclejdk8
script:
  - chmod +x gradlew
  - "./gradlew test"
  - "./gradlew build"
  - "./gradlew jar"
  - cd build # test
  - ls
  - cd libs
  - ls
  - pwd
  - echo $TRAVIS_BUILD_NUMBER
cache:
  directories:
    - "$HOME/.gradle"
    - ".gradle"
before_deploy:
  # Set up git user name and tag this commit
  - git config --local user.name "RareScrap"
  - git config --local user.email "RareScrap@users.noreply.github.com"
  - export GIT_TAG=$TRAVIS_BRANCH-v0.1.$TRAVIS_BUILD_NUMBER
  - git tag $GIT_TAG -a -m "Generated tag from TravisCI for build $TRAVIS_BUILD_NUMBER"
deploy:
  provider: releases
  api_key:
    secure: [redacted]
  file: "/home/travis/build/RareScrap/travis.test/build/libs/*"
  skip_cleanup: true
  file_glob: true
  on:
    repo: RareScrap/travis.test
    branches: # ← new!
      only: # ← new!
        - master # ← new!
The GitHub Releases API can accept:
[String] name - Name for the release
[String] body - Content for release notes
[Boolean] draft - Mark this release as a draft
[Boolean] prerelease - Mark this release as a pre-release
Source.
As for deploying only on master: use branch under on, rather than a branches/only block.
Therefore, simply change your deploy section from this:
deploy:
  provider: releases
  api_key:
    secure: [redacted]
  file: "/home/travis/build/RareScrap/travis.test/build/libs/*"
  skip_cleanup: true
  file_glob: true
  on:
    repo: RareScrap/travis.test
    branches: # ← new!
      only: # ← new!
        - master # ← new!
to this:
deploy:
  provider: releases
  api_key:
    secure: [redacted]
  file: "/home/travis/build/RareScrap/travis.test/build/libs/*"
  skip_cleanup: true
  file_glob: true
  name: $YOUR_RELEASE_NAME
  body: $YOUR_RELEASE_CONTENT
  on:
    repo: RareScrap/travis.test
    branch: master
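The draft and prerelease fields from the API list above can be set the same way, at the same level as name and body. A minimal sketch, with illustrative values:
deploy:
  provider: releases
  api_key:
    secure: [redacted]
  file: "/home/travis/build/RareScrap/travis.test/build/libs/*"
  skip_cleanup: true
  file_glob: true
  name: $YOUR_RELEASE_NAME
  body: $YOUR_RELEASE_CONTENT
  draft: false      # true would publish the release as a draft
  prerelease: false # true would mark the release as a pre-release
  on:
    repo: RareScrap/travis.test
    branch: master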
Feel free to reference my .travis.yml file where I did a very similar thing.
Hope this helps!
I have one common project containing both a frontend and a backend. Sometimes the backend gets new commits, sometimes the frontend, but my pipeline YAML runs for both of them and deploys both to the server even when they have no changes. In other words, if I add one line of code to the frontend, the pipeline deploys the backend too. Here is my bitbucket-pipelines.yml:
# This is an example Starter pipeline configuration
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
In this example the frontend is not deployed, but I will activate it later. What I need is to execute a step depending on which folder/project the commit touched, e.g. if there is a commit under mybackend, then deploy only the backend, and likewise for the frontend. Is it possible to execute a step only for a specific folder?
Yes, this is achievable by using the condition keyword:
This allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.
Your end result should look similar to this:
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
          condition:
            changesets:
              includePaths:
                - "myfrontend/**"
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
See here for more details.
Does anyone know why this script isn't working?
version: 2.1
orbs:
  android: circleci/android@1.0.3
  gcp-cli: circleci/gcp-cli@2.2.0
jobs:
  build:
    working_directory: ~/code
    docker:
      - image: cimg/android:2022.04
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD
    environment:
      JVM_OPTS: -Xmx3200m
    steps:
      - checkout
      - run:
          name: Chmod permissions
          command: sudo chmod +x ./gradlew
      - run:
          name: Download Dependencies
          command: ./gradlew androidDependencies
      - run:
          name: Run Tests
          command: ./gradlew lint test
      - store_artifacts:
          path: app/build/reports
          destination: reports
      - store_test_results:
          path: app/build/test-results
  nightly-android-test:
    parameters:
      system-image:
        type: string
        default: system-images;android-30;google_apis;x86
    executor:
      name: android/android-machine
      resource-class: xlarge
    steps:
      - checkout
      - android/start-emulator-and-run-tests:
          test-command: ./gradlew connectedDebugAndroidTest
          system-image: << parameters.system-image >>
      - run:
          name: Save test results
          command: |
            mkdir -p ~/test-results/junit/
            find . -type f -regex ".*/build/outputs/androidTest-results/.*xml" -exec cp {} ~/test-results/junit/ \;
          when: always
      - store_test_results:
          path: ~/test-results
      - store_artifacts:
          path: ~/test-results/junit
workflows:
  unit-test-workflow:
    jobs:
      - build
  nightly-test-workflow:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - develop
    jobs:
      - nightly-android-test:
          matrix:
            alias: nightly
            parameters:
              system-image:
                - system-images;android-30;google_apis;x86
                - system-images;android-29;google_apis;x86
                - system-images;android-28;google_apis;x86
                - system-images;android-27;google_apis;x86
          name: nightly-android-test-<<matrix.system-image>>
I keep getting the following build error:
Config does not conform to schema: {:workflows {:nightly-test-workflow {:jobs [{:nightly-android-test {:matrix disallowed-key, :name disallowed-key}}]}}}
The second workflow seems to fail because of the matrix and name parameters, but I can't see anything wrong in the script that would make them fail. I've run the config through a YAML parser and couldn't see any null values, and I've tried the CircleCI discussion forum without much luck.
I don't think that's the correct syntax. See the CircleCI documentation:
https://circleci.com/docs/2.0/configuration-reference/#matrix-requires-version-21
https://circleci.com/docs/2.0/using-matrix-jobs/
According to the above references, I believe it should be:
- nightly-android-test:
    matrix:
      alias: nightly
      parameters:
        system-image: ["system-images;android-30;google_apis;x86", "system-images;android-29;google_apis;x86", "system-images;android-28;google_apis;x86", "system-images;android-27;google_apis;x86"]
    name: nightly-android-test-<<matrix.system-image>>
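If it helps to reason about it: matrix expands nightly-android-test into one job per system-image value, and the <<matrix.system-image>> template in name is what keeps the four generated job names unique. A disallowed-key error on matrix and name usually means those keys ended up at the wrong nesting level under the job entry, rather than being unsupported outright.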
ALM used: Bitbucket Cloud
CI system used: Bitbucket Cloud
Languages of the repository: Angular (Other (for JS, TS, Go, Python, PHP, …))
Error observed
ERROR: Error during SonarScanner execution
ERROR: Not authorized. Please check the property sonar.login or SONAR_TOKEN env variable
Steps to reproduce
SONAR_TOKEN already generated and added to my ENV_VAR
Bitbucket.yaml
image: 'node:12.22'
clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly
definitions:
  caches:
    sonar: ~/.sonar/cache # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - sonar
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.1
            variables:
              EXTRA_ARGS: '-Dsonar.host.url=https://sonarcloud.io -Dsonar.login=${SONAR_TOKEN}'
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
pipelines:
  branches
Potential workaround
No idea.
If you have already installed the SonarCloud app in your workspace, there is no need to pass the Sonar URL again; the integration handles the URL part. Also, you should add your Sonar token variable to the workspace or repository environment variables. After that, log in to your SonarCloud organization account and bind your repository to SonarCloud so it can be analyzed. Here is my SonarCloud setup;
bitbucket-pipelines.yml file:
image:
  name: <base image>
clone:
  # SonarCloud scanner needs the full history to assign issues properly
  depth: full
definitions:
  caches:
    # Caching SonarCloud artifacts will speed up your build
    sonar: ~/.sonar/cache
pipelines:
  pull-requests:
    '**':
      - step:
          name: "Code Quality and Security on PR"
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.1
              variables:
                SONAR_TOKEN: '$SONAR_CLOUD_TOKEN'
                SONAR_SCANNER_OPTS: -Xmx512m
                DEBUG: "true"
  branches:
    master:
      - step:
          name: "Code Quality and Security on master"
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.1
              variables:
                SONAR_TOKEN: '$SONAR_CLOUD_TOKEN'
                SONAR_SCANNER_OPTS: -Xmx512m
                DEBUG: "true"
  tags:
    '*.*.*-beta*':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - docker
          clone:
            depth: 1
          script:
            - <build script>
      - step:
          name: "Deploy"
          deployment: beta
          clone:
            enabled: false
          script:
            - <deploy script>
    '*.*.*-prod':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - docker
          clone:
            depth: 1
          script:
            - <build script>
      - step:
          name: "Deploy"
          deployment: prod
          clone:
            enabled: false
          script:
            - <deploy script>
sonar-project.properties file:
sonar.organization=<sonar cloud organization name>
sonar.projectKey=<project key>
sonar.projectName=<project name>
sonar.sources=<sonar evaluation path>
sonar.language=<repo language>
sonar.sourceEncoding=UTF-8
I'm trying to add a hold job to a workflow in CircleCI's config.yml file, but I cannot make it work, and I'm pretty sure it's a really simple error on my part (I just can't see it!).
When validating it locally with the CircleCI CLI, running the following command:
circleci config validate
I get the following error:
Error: Job 'hold' requires 'build-and-test-service', which is the name of 0 other jobs in workflow 'build-deploy'
This is the config.yml (note it's for a Serverless Framework application, not that that should make any difference):
version: 2.1
jobs:
  build-and-test-service:
    docker:
      - image: timbru31/java-node
    parameters:
      service_path:
        type: string
    steps:
      - checkout
      - serverless/setup:
          app-name: serverless-framework-orb
          org-name: circleci
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "v2/shared/package-lock.json" }}-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install dependencies
          command: |
            npm install
            cd v2/shared
            npm install
            cd ../../<< parameters.service_path >>
            npm install
      - run:
          name: Test service
          command: |
            cd << parameters.service_path >>
            npm run test:ci
      - store_artifacts:
          path: << parameters.service_path >>/test-results/jest
          prefix: tests
      - store_artifacts:
          path: << parameters.service_path >>/coverage
          prefix: coverage
      - store_test_results:
          path: << parameters.service_path >>/test-results
  deploy:
    docker:
      - image: circleci/node:lts
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
      region:
        type: string
    steps:
      - run:
          name: Deploy application
          command: |
            cd << parameters.service_path >>
            serverless deploy --verbose --stage << parameters.stage_name >> --region << parameters.region >>
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
orbs:
  serverless: circleci/serverless-framework@1.0.1
workflows:
  version: 2
  build-deploy:
    jobs:
      # non-master branches deploy to a stage named after the branch
      - build-and-test-service:
          name: Build and test campaign
          service_path: v2/campaign
          filters:
            branches:
              only: develop
      - hold:
          name: hold
          type: approval
          requires:
            - build-and-test-service
      - deploy:
          service_path: v2/campaign
          stage_name: dev
          region: eu-west-2
          requires:
            - hold
It's obvious the error relates to the hold step (near the bottom of the config) not being able to find build-and-test-service just above it, but build-and-test-service does exist, so I'm stumped at this point.
For anyone reading, I figured out why it wasn't working.
Essentially I was using the incorrect property reference under the requires key:
workflows:
  version: 2
  build-deploy:
    jobs:
      # non-master branches deploy to a stage named after the branch
      - build-and-test-service:
          name: Build and test campaign
          service_path: v2/campaign
          filters:
            branches:
              only: develop
      - hold:
          name: hold
          type: approval
          requires:
            - build-and-test-service
The requires entries must reference the name of the previous job, which in this case was Build and test campaign, so I simply changed that name back to build-and-test-service.
I found the CircleCI docs were not very clear on this, perhaps because their examples around manual approvals show the requires property pointing at the root key of the job, such as build-and-test-service, which only works when no custom name is set.
I suppose I should have been more vigilant in reading the error too; it did mention name there as well.
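For clarity, here is a minimal sketch of the corrected workflow; keeping the custom name and pointing requires at Build and test campaign instead would work equally well:
workflows:
  version: 2
  build-deploy:
    jobs:
      - build-and-test-service:
          name: build-and-test-service # requires entries must match this name
          service_path: v2/campaign
          filters:
            branches:
              only: develop
      - hold:
          name: hold
          type: approval
          requires:
            - build-and-test-service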
Is it possible to share steps between branches and still run branch-specific steps? For example, the develop and release branches have the same build process but upload to separate S3 buckets.
pipelines:
  default:
    - step:
        script:
          - cd source
          - npm install
          - npm build
  develop:
    - step:
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://develop
  staging:
    - step:
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://staging
I saw this post (Bitbucket Pipelines - multiple branches with same steps) but it's for the same steps.
Use YAML anchors:
definitions:
  steps:
    - step: &Test-step
        name: Run tests
        script:
          - npm install
          - npm run test
    - step: &Deploy-step
        name: Deploy to staging
        deployment: staging
        script:
          - npm install
          - npm run build
          - fab deploy
pipelines:
  default:
    - step: *Test-step
    - step: *Deploy-step
  branches:
    master:
      - step: *Test-step
      - step:
          <<: *Deploy-step
          name: Deploy to production
          deployment: production
          trigger: manual
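Note the <<: *Deploy-step line: the YAML merge key copies the anchored mapping into the step, and the keys that follow it (name, deployment, trigger) override the anchored values.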
Docs: https://confluence.atlassian.com/bitbucket/yaml-anchors-960154027.html
Although it's not officially supported yet, you can pre-define steps now using YAML anchors. I got this tip from Bitbucket staff when I had an issue running the same steps across a subset of branches.
definitions:
  step: &Build
    name: Build
    script:
      - npm install
      - npm build
pipelines:
  default:
    - step: *Build
  branches:
    master:
      - step: *Build
      - step:
          name: deploy
          # do some deploy from master only
I think Bitbucket can't do it. You can use one pipeline and check the branch name:
pipelines:
  default:
    - step:
        script:
          - cd source
          - npm install
          - npm build
          - if [[ $BITBUCKET_BRANCH = develop ]]; then s3cmd put --config s3cmd.cfg ./build s3://develop; fi
          - if [[ $BITBUCKET_BRANCH = staging ]]; then s3cmd put --config s3cmd.cfg ./build s3://staging; fi
The last two lines will be executed only on the specified branches.
You can define and re-use steps with YAML anchors:
an anchor (&) to define a chunk of configuration
an alias (*) to refer to that chunk elsewhere
The source branch is available in a default variable called BITBUCKET_BRANCH.
You'd also need to pass the build results (in this case the build/ folder) from one step to the next, which is done with artifacts.
Combining all three gives you the following config:
definitions:
  steps:
    - step: &build
        name: Build
        script:
          - cd source
          - npm install
          - npm build
        artifacts: # defining the artifacts to be passed to each future step
          - ./build
    - step: &s3-transfer
        name: Transfer to S3
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://${BITBUCKET_BRANCH}
pipelines:
  default:
    - step: *build
  develop:
    - step: *build
    - step: *s3-transfer
  staging:
    - step: *build
    - step: *s3-transfer
You can now also use glob patterns, as mentioned in the referenced post, to define the steps for both the develop and staging branches in one go:
"{develop,staging}":
- step: *build
- step: *s3-transfer
Apparently it's in the works. Hopefully available soon.
https://bitbucket.org/site/master/issues/12750/allow-multiple-steps