What is the equivalent of TRAVIS_TEST_RESULT in GitHub Actions?

I'm rewriting a CI/CD pipeline from Travis to GitHub Actions. I'd like to know what the equivalent of the Travis environment variable TRAVIS_TEST_RESULT is in GitHub Actions. Basically, I'd like to know whether an action/step failed or succeeded. As far as I can tell, there is no default environment variable in GitHub Actions that matches what I want to express.

I found a workaround. Instead of looking at GitHub's default environment variables (or the lack of them), I fetched the relevant state from the workflow's context, namely the ${{ job.status }} context.
Debugging the workflow's context:
name: CI
on:
  push:
    branches: [ master, actions ]
  workflow_dispatch:
    branches: [ master, actions ]
jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Debug
        env:
          JOB: ${{ toJson(job) }}
          CONTEXT: ${{ toJson(github) }}
        run: |
          echo $JOB
          echo $CONTEXT
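Travis sets TRAVIS_TEST_RESULT to 0 or 1 once the script phase has run, so the closest analogue is reading ${{ job.status }} (or the success()/failure() expression functions) in a final step that runs with if: always(). A minimal sketch of such a job; the "Tests" step and its make test command are placeholders:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Tests
        run: make test
      # Runs no matter how the earlier steps ended, similar to
      # Travis's after_script phase inspecting TRAVIS_TEST_RESULT.
      - name: Report result
        if: ${{ always() }}
        run: echo "Job status so far is ${{ job.status }}"
```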

Related

In Fortify, how to exclude all Java classes in a folder

I want to exclude all Java classes that are under the test folder from translations and scans. I have tried to use the -exclude parameter, but it doesn't seem to work.
# This is a basic workflow to help you get started with running Fortify scans using ScanCentral into Fortify SSC
name: Fortify Scan
on:
  workflow_dispatch:
jobs:
  Fortify:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      # Run ScanCentral scan on application
      - name: Run ScanCentral Scan
        run: |
          scancentral arguments -targs "-exclude ${{ github.workspace }}\**\test\**\*"
          scancentral -sscurl https://fortify.hello.com/ssc -ssctoken ${{ secrets.FORTIFY_SCANCENTRAL_TOKEN }} start -bt none -upload -uptoken ${{ secrets.FORTIFY_SCANCENTRAL_TOKEN }} -versionid 101
I tried multiple ways to exclude, but none of them worked. A few of the exclude arguments I used are below.
Using the full path:
scancentral arguments -targs "-exclude ${{ github.workspace }}\src\test\java\com\Testwalk.java"
Using the path with "/" instead of "\":
scancentral arguments -targs "-exclude ${{ github.workspace }}/**/test/**/*"
Using the file extension format:
scancentral arguments -targs "-exclude ${{ github.workspace }}\**\test\**\*.java"
None of the above worked. Please let me know where I am going wrong, or share any suggestions to resolve this issue.

GitHub workflow: requested access to the resource is denied

I am trying to use GitHub workflow to build an ASP.NET 6 project using Dockerfile then push the image to a private Azure Registry using docker.
Here is my .yml file:
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login To Azure Container Registry
        uses: Azure/docker-login@v1
        with:
          login-server: ${{ secrets.ACR_HOST }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWWORD }}
      - name: Build And Push Docker Images
        uses: docker/build-push-action@v3.1.1
        with:
          push: true
          file: ./Dockerfile
          tags: companyname/projectname:${{ github.run_number }}
In the above, the Dockerfile is located in the root of my project's code.
However, when the build runs I get the following error:
Error: buildx failed with: error: denied: requested access to the resource is denied
In the Secrets > Actions section of my repository settings, I added the ACR_HOST, ACR_USERNAME and ACR_PASSWORD secrets.
When viewing the logs, the issue seems to happen after this line:
pushing companyname/projectname:2 with docker:
How can I solve this issue?
UPDATED
I changed the .yml script to the following:
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login To Azure Container Registry
        uses: Azure/docker-login@v1
        with:
          login-server: mycontainer.azurecr.io
          username: "The admin username"
          password: "The admin password"
      - run: cat ${{ env.DOCKER_CONFIG }}/config.json
      - name: Build And Push Docker Images
        uses: docker/build-push-action@v3.1.1
        with:
          push: true
          file: ./Dockerfile
          tags: companyname/projectname:${{ github.run_number }}
The added step (i.e., cat ${{ env.DOCKER_CONFIG }}/config.json) displayed a JSON string that looks like this:
{"auths":{"mycontainer.azurecr.io":{"auth":"BASE64 string with the admin username:password as expected"}}}
The base64 string was formatted as username:password.
I am assuming that the Azure/docker-login@v1 step has no issue and stages the token for docker/build-push-action@v3.1.1 correctly.
If I set the push flag to false in the docker/build-push-action@v3.1.1 step, the workflow runs with no issue. So from what I can tell, the problem occurs when the docker/build-push-action@v3.1.1 step tries to push the created image to the Azure registry.
I used my local machine to log in with the same credentials and everything worked with no issue: docker login mycontainer.azurecr.io
Additionally, the login request from my local machine is logged in the Azure portal. However, I do not see a request when I run the workflow.
I think the main issue is that the docker/build-push-action@v3.1.1 step does not attempt to log in before it pushes the image.
I followed the instructions here and it worked.
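For reference, one common cause of this exact "requested access to the resource is denied" error is that the tags value does not include the registry host, so buildx pushes to Docker Hub instead of ACR and the ACR login is never consulted. A hedged sketch of the push step with a fully qualified tag, reusing the mycontainer.azurecr.io host from the question:

```yaml
      - name: Build And Push Docker Images
        uses: docker/build-push-action@v3.1.1
        with:
          push: true
          file: ./Dockerfile
          # The registry host has to be part of the tag; without it,
          # docker resolves the image name against Docker Hub.
          tags: mycontainer.azurecr.io/companyname/projectname:${{ github.run_number }}
```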

Grabbing PR number from a push event?

I have two GitHub Actions workflows set up right now: one to publish an image to a JFrog registry, and another to promote the image with a new tag in the JFrog Artifactory.
I am trying to use github.event.number in the push workflow, but for some reason you can't get the PR number if the event isn't a pull_request, hence I get this error:
Error response from daemon: manifest for (company jfrog artifactory url) - not found: manifest unknown: The named manifest is not known to the registry.
Does anyone know any workarounds for this?
I successfully got the PR number from a PUSH event by using this implementation:
name: Get PR Number on PUSH event
on: [push, pull_request]
jobs:
  push:
    runs-on: ubuntu-latest
    if: ${{ github.event_name == 'push' }}
    steps:
      - uses: actions/checkout@v2.3.4
        with:
          fetch-depth: 0
      - name: Get Pull Request Number
        id: pr
        run: echo "::set-output name=pull_request_number::$(gh pr view --json number -q .number || echo "")"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - run: echo ${{ steps.pr.outputs.pull_request_number }}
  pull-request:
    runs-on: ubuntu-latest
    if: ${{ github.event_name == 'pull_request' }}
    steps:
      - run: echo ${{ github.event.number }}
I also left the pull_request job in to show how to get the number from that event as well (if you want to compare, e.g., when you push to an already opened PR).
Part of the solution was shared here, but you also need to add actions/checkout to the job steps, otherwise the gh CLI doesn't recognise the repository.
You can check the 2 workflow runs here:
push event: https://github.com/GuillaumeFalourd/poc-github-actions/runs/4317001027?check_suite_focus=true
pull request event: https://github.com/GuillaumeFalourd/poc-github-actions/actions/runs/1501093093
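One caveat: the run step above uses the ::set-output workflow command, which GitHub has since deprecated in favour of writing to the $GITHUB_OUTPUT file. A sketch of the same step in the newer syntax (the step id and output name are unchanged, so steps.pr.outputs.pull_request_number still works):

```yaml
      - name: Get Pull Request Number
        id: pr
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: echo "pull_request_number=$(gh pr view --json number -q .number || echo "")" >> "$GITHUB_OUTPUT"
```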

goreleaser: separate flows for merging to master and for cutting a release

GoReleaser & GitHub Actions are currently configured as follows when a tag is pushed:
# github action
name: Release
on:
  push:
    tags:
      - '*'
env:
  REF: ${{ github.event.inputs.tag || github.ref }}
jobs:
  goreleaser:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v2
        with:
          version: latest
          args: release --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# .goreleaser.yaml snippet
dockers:
  - image_templates:
      - "foo/bar:latest"
      - "foo/bar:v{{ .Major }}"
      - "foo/bar:v{{ .Major }}.{{ .Minor }}"
      - "foo/bar:{{ .Tag }}"
The current setup has the disadvantage that I have to wait until we cut a release to play with latest. It also means that latest is out of sync with the master branch in GitHub. I would like to build and publish latest, potentially several times per day, whenever my automated tests are successful and I merge to the master branch.
I would like goreleaser to build and publish in two different scenarios:
whenever I merge a pull request to master, build and push latest
whenever I tag a release, build and push semver tags
The logical way to achieve this would be to have two GitHub Actions workflows operating on different .goreleaser.yml files. However, the problem is that I cannot find a way to override the goreleaser.yaml.
You can override the goreleaser config file with:
args: release --rm-dist -f path_to_goreleaser.yml
Note, though, that goreleaser will refuse to publish anything if the current commit is not tagged. Maybe what you'll need is some sort of automated tagging on each commit, running goreleaser against that tag and resolving the image templates and so on using Go templates only.
It will likely be very hacky, but I think it might work.
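Sketched out, that two-workflow idea could look like the following for the latest flow. The file name .goreleaser.latest.yml is hypothetical, and the combination of --snapshot with a manual docker push is an assumption: release --snapshot lets goreleaser build from an untagged commit but deliberately skips publishing, which is why the image is pushed by hand afterwards.

```yaml
name: Latest
on:
  push:
    branches: [ master ]
jobs:
  goreleaser-latest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - uses: goreleaser/goreleaser-action@v2
        with:
          version: latest
          # --snapshot builds without requiring a tag, but skips publishing
          args: release --rm-dist --snapshot -f .goreleaser.latest.yml
      # Snapshot mode does not publish, so push the image explicitly
      - run: docker push foo/bar:latest
```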

How to use a variable Docker image in GitHub Actions?

I am trying to write a custom GitHub Action that runs some commands in a Docker container but allows the user to select which Docker container they are run in (i.e., so I can run the same build instructions across different versions of the runtime environment).
My gut instinct was to have my .github/actions/main/action.yml file as:
name: 'Docker container command execution'
inputs:
  dockerfile:
    default: Dockerfile_r_latest
runs:
  using: 'docker'
  image: '${{ inputs.dockerfile }}'
  args:
    - /scripts/commands.sh
However, this errors with:
##[error](Line: 7, Col: 10): Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.dockerfile
Any help would be appreciated!
File references
My .github/workflow/build_and_test.yml file is:
name: Test Package
on: [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
        name: Checkout project
      - uses: ./.github/actions/main
        name: Build and test
        with:
          dockerfile: Dockerfile_r_latest
And my Dockerfile, .github/actions/main/Dockerfile_r_latest, is:
FROM rocker/verse:latest
ADD scripts /scripts
ENTRYPOINT [ "bash", "-c" ]
Interesting approach! I'm not sure it's possible to use expressions in the image field of the action metadata. I would guess that the only fields that can take expressions instead of hardcoded strings are the args for the image, so that the inputs can be passed.
For reference, this is the args section of the action.yml metadata:
https://help.github.com/en/articles/metadata-syntax-for-github-actions#args
I think there are other ways to achieve what you want to do. Have you tried the jobs.<job_id>.container syntax? It allows you to specify an image that the steps of a job will run in. It does require publishing the image to a public repository, though, so take care not to include any secrets.
For example, if you published your image to Docker Hub at gowerc/r-latest your workflow might look something like this:
name: Test Package
on: [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    container: gowerc/r-latest
    steps:
      - uses: actions/checkout@master
        name: Checkout project
      - name: Build and test
        run: ./scripts/commands.sh
ref: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
Alternatively, you can specify your image at the step level with uses. You can then pass a command via args to execute your script.
name: my workflow
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Check container
        uses: docker://alpine:3.8
        with:
          args: /bin/sh -c "cat /etc/alpine-release"
ref: https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-hub-action
In addition to @peterevans' answer, I would add that there is a third option: you can use a simple docker run command and pass any env variables you have defined.
That helped solve three things:
Reusing a custom Docker image built within the steps for testing actions. This does not seem possible with uses, as the runner tries to pull the image in a "Set up job" step that occurs before any steps of the job, when the image does not exist yet.
The image can also be stored in a private Docker registry.
Being able to use a variable for the Docker image.
My workflow looks like this:
name: Build-Test-Push
on:
  push:
    branches:
      - master
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
  ECR_REPOSITORY: myproject/myimage
  IMAGE_TAG: ${{ github.sha }}
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checking out
        uses: actions/checkout@v2
        with:
          ref: master
      - name: Login to AWS ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build
        run: |
          docker pull $ECR_REGISTRY/$ECR_REPOSITORY || true
          docker build . -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -t $ECR_REGISTRY/$ECR_REPOSITORY:latest
      - name: Test
        run: |
          docker run $ECR_REGISTRY/$ECR_REPOSITORY:latest /bin/bash -c "make test"
      - name: Push
        run: |
          docker push $ECR_REGISTRY/$ECR_REPOSITORY
Here is another approach: the Docker image to use is passed to a cibuild shell script that takes care of pulling the right image.
GitHub workflow file:
name: 'GH Actions CI'
on:
  push:
    branches: ['*master', '*0.[0-9]?.x']
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ['*master', '*0.[0-9]?.x']
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true
      matrix:
        include:
          - FROM: 'ubuntu:focal'
          - FROM: 'ubuntu:bionic'
          - FROM: 'ubuntu:xenial'
          - FROM: 'debian:buster'
          - FROM: 'debian:stretch'
          - FROM: 'opensuse/leap'
          - FROM: 'fedora:33'
          - FROM: 'fedora:32'
          - FROM: 'centos:8'
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          # We must fetch at least the immediate parents so that if this is
          # a pull request then we can checkout the head.
          fetch-depth: 2
      # If this run was triggered by a pull request event, then checkout
      # the head of the pull request instead of the merge commit.
      - run: git checkout HEAD^2
        if: ${{ github.event_name == 'pull_request' }}
      - name: Run CI
        env:
          FROM: ${{ matrix.FROM }}
        run: script/cibuild
Bash script script/cibuild:
#!/bin/bash
set -e
docker run --name my-docker-container $FROM script/custom-script.sh
docker cp my-docker-container:/usr/src/my-workdir/my-outputs .
docker rm my-docker-container
echo "cibuild Done!"
Put your custom commands in script/custom-script.sh.
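What belongs in script/custom-script.sh depends entirely on the project. As a hypothetical placeholder, assuming the image's working directory is /usr/src/my-workdir, a minimal script that produces the my-outputs directory the docker cp line above expects could look like this:

```shell
#!/bin/bash
set -e

# Hypothetical placeholder build: a real project would configure,
# compile and test here instead.
mkdir -p my-outputs
echo "built from ${FROM:-unknown-image}" > my-outputs/build.log

echo "custom-script Done!"
```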
