Travis deploy based on matrix parameters - travis-ci

I have a travis job that runs on both Linux and OSX, which I would like to be able to use to deploy different build artefacts for each platform to github releases. My .travis.yml file currently looks something like this:
language: rust
cache: cargo
os:
  - linux
  - osx
rust:
  - stable
  - beta
  - nightly
script:
  - cargo build --release -vv
  - cargo test --release --all -vv
matrix:
  allow_failures:
    - rust: nightly
  fast_finish: true
deploy:
  - provider: releases
    skip_cleanup: true
    api_key:
      secure: <encrypted key here, removed for brevity>
    before_deploy:
      - cargo install cargo-deb
      - cargo deb --no-build --no-strip
      - ./scripts/package_linux.sh .
    file_glob: true
    file:
      - "target/debian/ellington_0.1.0_amd64.deb"
      - "releases/*_linux.zip"
    on:
      tags: true
      os: linux
      rust: stable
I assume I need to add a second deploy step (e.g. see below), but I can't find any documentation on how to do this, let alone whether it's even possible. There is extensive documentation on deploying to multiple providers, but not on deploying multiple times to the same provider on different platforms.
  - provider: releases
    skip_cleanup: true
    api_key:
      secure: <encrypted key here, removed for brevity>
    before_deploy:
      - ./scripts/package_osx.sh .
    file_glob: true
    file:
      - "releases/*_osx.zip"
    on:
      tags: true
      os: osx
      rust: stable

Check out the Travis CI deployment docs. The gist of it is: yes, you were on the right track, and you can define multiple deployments like so:
deploy:
  - provider: releases
    api_key: "<deploy key>"
    file:
      - "target/release.deb"
    skip_cleanup: true
    on:
      tags: true
  - provider: releases
    api_key: "<deploy key>"
    file:
      - "target/release.dmg"
    skip_cleanup: true
    on:
      tags: true
  - provider: releases
    # etc.
Relevant documentation for this feature can be found in the Travis CI deployment docs, approximately halfway through the section on conditional releases.
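Putting this together with the on: conditions from the question, a minimal sketch of per-platform deployment to GitHub releases might look like the following (the file paths and packaging scripts are the ones from the question; the api_key and before_deploy sections are elided):
deploy:
  - provider: releases
    skip_cleanup: true
    file_glob: true
    file:
      - "target/debian/ellington_0.1.0_amd64.deb"
      - "releases/*_linux.zip"
    on:
      tags: true
      os: linux    # run this deploy only from the Linux matrix jobs
      rust: stable
  - provider: releases
    skip_cleanup: true
    file_glob: true
    file:
      - "releases/*_osx.zip"
    on:
      tags: true
      os: osx      # run this deploy only from the OSX matrix jobs
      rust: stable
Each entry's on: block restricts it to a single cell of the build matrix, so each platform's artefacts are uploaded only once, from the matching stable job.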

Related

Bitbucket pipelines variables inline

It is possible to declare variables inside the pipeline file, as in this GitHub Actions example:
# ...
env:
  NODE_VERSION: 16.3.1
  FOLDER_PATH: Project
# ...
steps:
  - name: Move to project folder
    run: cd $FOLDER_PATH
# ...
Is it possible to do something similar in Bitbucket pipeline files? (And if so, how?)
Thanks for any help :)
No.
There is a feature request for that: https://jira.atlassian.com/browse/BCLOUD-17453. It is still "gathering interest", though.
The nearest approximation is to write a YAML anchor that exports those vars and use it in every step.
definitions:
  yaml-anchors:
    - &setenv-script >-
        export NODE_VERSION=16.3.1
        && export FOLDER_PATH=Project

pipelines:
  default:
    - step:
        script:
          - *setenv-script
          - ...
    - step:
        script:
          - *setenv-script
          - ...
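Because Bitbucket runs all the commands of a step's script in a single shell session, variables exported by the anchor remain visible to the commands after it. A hypothetical step using them might look like this (the echo and npm commands are placeholders):
pipelines:
  default:
    - step:
        name: Build
        script:
          - *setenv-script
          # variables exported above are visible here
          - echo "Building with Node $NODE_VERSION"
          - cd $FOLDER_PATH
          - npm install --quiet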

When condition on CircleCI 2.1 does not work

Recently I've made some configuration changes to my team's GitHub CircleCI setup. I needed to use a when statement to split the CI logic. I referenced this document (https://circleci.com/docs/2.0/configuration-reference/#logic-statements), but it seems the document is not correct.
Below is my step definition:
...
image_build_step:
  executor: golang_executor
  steps:
    - checkout
    - setup_remote_docker:
        version: 18.09.3
        docker_layer_caching: true
    - define_svc_name:
        jobname: ${CIRCLE_JOB} # On this step set $SVC variable
    - when:
        conditon:
          equal: ["${SVC}", "SVC_A"]
        - aws-ecr/build-and-push-image:
            repo: SVC_A_REPO
            dockerfile: ./Dockerfile
            tag: "latest,${CIRCLE_SHA1},build-${CIRCLE_BUILD_NUM}"
...
I also already tried this:
...
image_build_step:
  executor: golang_executor
  steps:
    - checkout
    - setup_remote_docker:
        version: 18.09.3
        docker_layer_caching: true
    - define_svc_name:
        jobname: ${CIRCLE_JOB} # On this step set $SVC variable
    - when:
        equal: ["${SVC}", "SVC_A"]
        - aws-ecr/build-and-push-image:
            repo: SVC_A_REPO
            dockerfile: ./Dockerfile
            tag: "latest,${CIRCLE_SHA1},build-${CIRCLE_BUILD_NUM}"
...
I cannot figure out my mistake in using the when statement on CircleCI. Additionally, the config already passed circleci config validate .circleci/config.yaml before I pushed this commit.
What is the correct usage of the when statement in CircleCI? Joining the CircleCI forum with a GitHub account is annoying, so I'm leaving my question on Stack Overflow.
It's not possible to use environment variables in logic statements. The reason is that logic statements are evaluated at configuration compilation time, whereas environment variables are only interpolated at run time.
The only workaround I know of is to use CircleCI's dynamic configuration functionality to set pipeline parameter values in the "setup" workflow, which you then pass to the "continuation" workflow.
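For illustration, here is a minimal sketch of a logic statement that does work, using a pipeline parameter (a hypothetical svc parameter, set by a setup workflow or at trigger time) instead of an environment variable:
version: 2.1

# Pipeline parameters are resolved when the configuration is compiled,
# so, unlike environment variables, they can be used in logic statements.
parameters:
  svc:
    type: string
    default: ""

workflows:
  build-and-push:
    when:
      equal: [ "SVC_A", << pipeline.parameters.svc >> ]
    jobs:
      - image_build_step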

SonarCloud quality check doesn't work after running scan

I am trying to run the sonarcloud-quality-gate check after performing sonarcloud-scan, because I want the Bitbucket build pipeline to fail if the quality gate check fails.
Doing this, I get an error like this:
Quality Gate failed: Could not get scanner report: [Errno 2] No such file or directory: '/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/sonarsource/sonarcloud-scan/sonarcloud-scan.log'
This is how my bitbucket-pipelines.yml looks:
image: node:10.15.3

clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly

definitions:
  caches:
    sonar: ~/.sonar/cache # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - node
          - sonar
        script:
          - npm install --quiet
          - npm run test:coverage
          - pipe: sonarsource/sonarcloud-scan:0.1.5
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: '-Dsonar.sources=src -Dsonar.tests=src -Dsonar.test.inclusions="**.test.jsx" -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.1
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}

pipelines:
  default:
    - step: *build-test-sonarcloud
The sonarcloud-scan pipe itself runs successfully, though.
The problem is that the sonarsource/sonarcloud-quality-gate pipe requires a newer version of the sonarsource/sonarcloud-scan pipe. (This has been the case ever since the first release of the sonarcloud-quality-gate pipe.)
Change your pipeline configuration like this:
- pipe: sonarsource/sonarcloud-scan:1.0.1
  variables:
    SONAR_TOKEN: ${SONAR_TOKEN}
    EXTRA_ARGS: '-Dsonar.sources=src -Dsonar.tests=src -Dsonar.test.inclusions="**.test.jsx" -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'
- pipe: sonarsource/sonarcloud-quality-gate:0.1.3
  variables:
    SONAR_TOKEN: ${SONAR_TOKEN}
An easy way to see the latest versions is the pipe browser in the Bitbucket pipeline editor: when you edit the bitbucket-pipelines.yml file, a sidebar opens listing the available pipes, and you can filter the list by entering "sonar". Click a pipe to see its details and note the current version.

How can I convince travis.ci to find Nashorn?

I have a JVM-based project that uses the Nashorn JavaScript engine. It builds and tests fine locally. On travis.ci, my unit tests explode with NullPointerExceptions because ScriptEngineManager.getEngineByName("nashorn") returns null.
Here's the .travis.yml I'm using:
language: scala
scala:
  - 2.11.8
notifications:
  email:
    recipients:
      - info@blocke.com
jdk:
  - oraclejdk8
script:
  - sbt clean coverage test coverageReport && sbt coverageAggregate
before_install:
  - export TZ=America/Chicago
  - date
after_success:
  - sbt coverageReport coveralls
addons:
  apt:
    packages:
      - oracle-java8-installer
Moving to oraclejdk11 solved the problem!
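In .travis.yml terms the fix is a one-line change (the oracle-java8-installer apt addon can then be dropped):
jdk:
  - oraclejdk11
Note that Nashorn was deprecated in JDK 11 and removed entirely in JDK 15, so projects relying on it will eventually need a replacement such as the standalone Nashorn dependency.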

Building wheels with travis' pypi deploy

I tried out the travis-ci pypi deployment, as can be seen here:
https://travis-ci.org/Simplistix/testfixtures/jobs/80429422
The pertinent .travis.yml bits are:
deploy:
  provider: pypi
  user: ...
  password:
    secure: ...
  on:
    tags: true
    repo: Simplistix/testfixtures
...but this has only created an sdist.
How can I configure it to also create and upload a wheel?
Just add the "distributions" parameter to the deploy section, for example:
deploy:
  provider: pypi
  distributions: "sdist bdist bdist_wheel"
  # etc.
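Merged into the configuration from the question, that might look like:
deploy:
  provider: pypi
  distributions: "sdist bdist_wheel"
  user: ...
  password:
    secure: ...
  on:
    tags: true
    repo: Simplistix/testfixtures
Under the hood the deploy step hands the distributions string to setup.py, so bdist_wheel requires the wheel package to be available in the build environment.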
