Pass environment variables to a reusable workflow without using 1-to-1 inputs

Right now, if I want to pass an environment variable to a reusable workflow, I have to do something like this:
name: Reusable workflow

on:
  workflow_call:
    inputs:
      my_env_var:
        required: false
        type: string

env:
  my_env_var: ${{ inputs.my_env_var }}
However, for this I first need to define each environment variable I want to pass as an input. This works, but having to hard-code the environment variables makes my reusable workflows less generic. Is there a way to pass env vars without defining them one by one? I was thinking of something like this:
name: Calling reusable workflow

on:
  workflow_dispatch:

jobs:
  push-image-dev:
    uses: ./.github/workflows/my-reusable-workflow.yml
    with:
      input1: ...
      input2: ...
    env:
      env1: ...
      env2: ...
However, I have been reading the documentation and I don't think that exists. Is there any other way of doing it, such as inheriting env variables or creating a single input that is a dictionary of variables, which is later parsed to set all the env vars in the reusable workflow?

I created this simple logic, which lets me pass as many environment variables as I want through a single input (a 1:N relationship between inputs and variables). The input expects a list of environment variables, formatted as "env=value", which is later converted into environment variables in a step inside the workflow.
Calling the workflow:
name: Calling reusable workflow

on:
  workflow_dispatch:

jobs:
  my-job:
    uses: ./.github/workflows/my-reusable-workflow.yml
    with:
      env_vars: |
        env1=value1
        env2=value2
        env3=value3
Workflow definition:
name: My reusable workflow

on:
  workflow_call:
    inputs:
      env_vars:
        description: List of environment variables to set up, given in env=value format.
        required: false
        type: string

jobs:
  my-job:
    runs-on: ubuntu-latest
    steps:
      - name: Set environment variables
        if: ${{ inputs.env_vars }}
        run: |
          # Append each env=value line to $GITHUB_ENV so it is visible to later steps.
          for i in "${{ inputs.env_vars }}"
          do
            printf "%s\n" $i >> "$GITHUB_ENV"
          done
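If you would rather pass a single dictionary instead of a line-per-variable list (the other idea floated in the question), a similar step can parse a JSON-formatted input. The following is only a sketch of that variant, not part of the solution above: the input name env_json is made up, and it assumes jq is available on the runner (it is preinstalled on GitHub-hosted Ubuntu runners).
name: My reusable workflow

on:
  workflow_call:
    inputs:
      env_json:
        description: JSON object of environment variables to set (hypothetical input).
        required: false
        type: string
        default: '{}'

jobs:
  my-job:
    runs-on: ubuntu-latest
    steps:
      - name: Set environment variables from JSON
        if: ${{ inputs.env_json != '{}' }}
        run: |
          # Convert {"env1":"value1", ...} into env=value lines and append them to $GITHUB_ENV.
          echo '${{ inputs.env_json }}' | jq -r 'to_entries[] | "\(.key)=\(.value)"' >> "$GITHUB_ENV"
The caller would then pass something like env_json: '{"env1": "value1", "env2": "value2"}' in the with: block.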

Related

How to read environment variables in env section of github action workflow

I'm trying to set an env variable based on another env variable in a GitHub workflow. I've tried a couple of syntax options, but none seem to work:
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  workflow_dispatch:

env:
  BASE_VERSION: 1.0.0
  FULL_VERSION: ${BASE_VERSION}-${{ github.run_number }}-${{ github.ref_name }}

jobs:
The example above just keeps ${BASE_VERSION} as a literal string.
$BASE_VERSION also just stays as the string $BASE_VERSION.
${{ env.BASE_VERSION }}-blabla just fails with a syntax error.
Is this doable?
The output I want is "1.0.0-1-master", for example.
Is this doable?
It does not seem to be a supported behaviour at the moment.
The docs on env mention that
"variables in the env map cannot be defined in terms of other variables in the map".
Do it like this:
- name: Set docker image env var
  run: |
    echo "DOCKER_IMAGE=${ARTIFACTORY_URL}/${IMAGE_NAME}:${GITHUB_REF##*/}.${{ github.run_number }}" >> $GITHUB_ENV
- run: |
    echo ${{ env.DOCKER_IMAGE }}
Outputs
artifactory-host/some-project/some-repo/image-name:branch.number
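Applied to the question's BASE_VERSION example, the same $GITHUB_ENV technique would look roughly like this (a sketch reusing the names from the question; workflow-level env values are exposed to the shell, so plain shell expansion works inside the step):
env:
  BASE_VERSION: 1.0.0

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Derive full version
        run: |
          # Compose the version in a step, then persist it for later steps.
          echo "FULL_VERSION=${BASE_VERSION}-${{ github.run_number }}-${{ github.ref_name }}" >> $GITHUB_ENV
      - name: Use it
        # Prints something like 1.0.0-1-master
        run: echo "${FULL_VERSION}"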

Conditionally setting parameter in yml file (Azure pipeline): VAR not updating

Problem
I want to set a parameter conditionally based on which branch triggered the pipeline. If the triggered branch was feature/automated-testing, I would like to set a parameter equal to "True". See the code below.
Parts of my pipeline.yml file look like this:
trigger:
  branches:
    include:
      - feature/automated-testing
...

# Global variables for the pipeline
variables:
  - name: "triggerRepoName"
    value: "$(Build.SourceBranchName)"

stages:
  # Common stage. Docker build, tag and push
  - stage: BuildDockerImage
    displayName: "Build docker image"
    variables:
      ...
    jobs:
      - template: /templates/pipelines/my-prject.yml#templates
        parameters:
          ${{ if eq(variables.triggerRepoName, 'feature/automated-testing') }}:
            runTests: "True"
          ${{ if ne(variables.triggerRepoName, 'feature/automated-testing') }}:
            runTests: "False"
Question
When I push from branch feature/automated-testing and echo the variable runTests in the Dockerfile, it is blank. Is there something wrong with my syntax in the conditional statement?
I believe the error is in the way the variable is set conditionally, so I have not included the Dockerfile or the other template .yml files here.
Please change variables.triggerRepoName to variables['triggerRepoName']. It should solve your issue.
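In context, the corrected conditional from the question's own snippet would look like this (a minimal sketch, unchanged except for the index syntax):
jobs:
  - template: /templates/pipelines/my-prject.yml#templates
    parameters:
      ${{ if eq(variables['triggerRepoName'], 'feature/automated-testing') }}:
        runTests: "True"
      ${{ if ne(variables['triggerRepoName'], 'feature/automated-testing') }}:
        runTests: "False"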

Azure pipelines UI to accept parameters (like Jenkins)

Jenkins has a UI concept with dropdown lists, etc. that allows users to specify variables at run time. This has proven essential in our builds for making decisions in the pipeline (i.e. which agent to run on, which code base to choose, etc.). By allowing parameters, we are able to have a single pipeline/definition handle the same task for many clients/releases/environments.
I have been watching many people ask for this over the past year, to cut down the number of almost identical build definitions. Is there a best practice for handling this? It would be nice to have a single build definition for a specific task that is smart enough to handle parameters.
Edit: an example of possible pseudo-code building on levi-lu#MSFT's suggestion.
parameters:
  - name: ClientName
    displayName: Pool Image
    default: Select client
    values: powershell
    valuesScript: [
      assemble curl request to http://myUrl.com/Clients/GetAll
    ]
  - name: TargetEnvironment
    displayName: Client Environment
    type: string
    values: powershell
    valuesScript: [
      assemble curl request using above parameter value to
      https://myUrl.com/Clients/$(ClientName)/GetEnvironments
    ]

trigger: none

jobs:
  - job: build
    displayName: Run pipeline job
    pool:
      vmImage: windows-latest
    parameters:
      ClientName: $(ClientName)
      TargetEnvironment: $(TargetEnvironment)
    steps:
      - script: echo building $(Build.BuildNumber)
Runtime parameters are available now. You can set them at the beginning of your pipeline YAML using the parameters keyword, as in the example below:
parameters:
  - name: image
    displayName: Pool Image
    default: ubuntu-latest
    values:
      - windows-latest
      - vs2017-win2016
      - ubuntu-latest
      - ubuntu-16.04
      - macOS-latest
      - macOS-10.14
  - name: test
    displayName: Run Tests?
    type: boolean
    default: false

trigger: none

jobs:
  - job: build
    displayName: Build and Test
    pool:
      vmImage: ${{ parameters.image }}
    steps:
      - script: echo building $(Build.BuildNumber)
      - ${{ if eq(parameters.test, true) }}:
          - script: echo "Running all the tests"
The example above is from the official Microsoft documentation on runtime parameters.
When you run this YAML pipeline, you can select the parameter values from drop-down lists in the run panel.
Update: to set variables dynamically at runtime, you can use the task.setvariable logging command in a script.
In the example below, $resultValue is the value returned from a REST API call, and it is assigned to the variable VariableName:
- powershell: |
    # Placeholder from the original example: fetch the value from your REST API here.
    $resultValue = <call to a REST API>
    echo "##vso[task.setvariable variable=VariableName]$resultValue"
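A variable set this way becomes available to the subsequent steps of the same job through the usual macro syntax, for example (VariableName being the name used above):
- script: echo "The value from the API call is $(VariableName)"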
See the Azure DevOps documentation on setting variables in scripts for more information.

Jenkins Job Builder: Project Level Variables

Within JJB, you can define project-level variables like this:
- defaults:
    name: global
    git_url: "git@....."

- project:
    name: some-test
    jobs:
      - test-{name}

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
My question: must I hard-code the value of git_url at the defaults level, or can I use some JJB mechanism to bring it in at job load/execution time?
The reason I ask is that the YAML that contains these JJB jobs can be used to define TEST, QA and PROD. It would be nice to just point at a properties file that contains the value for git_url and any other global variable values. I took a look at http://docs.openstack.org/infra/jenkins-job-builder/definition.html?highlight=default#defaults and did not see any such mechanism.
If I understand your question correctly, there are two other approaches available within the context of a single YAML file.
Approach 1: Set git_url at the project level
- project:
    name: some-test
    git_url: "git@dogs.net:woof/bark.git"
    jobs:
      - test-{name}

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Here git_url is set at the project level. This approach allows you to define a second project with a different value for git_url, for example:
- project:
    name: some-other-test
    git_url: "git@cats.net:meow/meow.git"
    jobs:
      - test-{name}
Approach 2: Set git_url at the job-template instance level
- project:
    name: some-test
    jobs:
      - test-{name}:
          git_url: "git@....."

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Here git_url is set on the actual instance of the job-template where it is specified. If your job-template had more than just {name} in its name, this would allow you to create multiple instances of it in the list of jobs at the project level, for example:
- project:
    name: some-test
    git_url: "git@....."
    jobs:
      - test-{name}-{type}:
          type: 'cat'
      - test-{name}-{type}:
          type: 'dog'

- job-template:
    name: test-{name}-{type}
    display-name: 'Test for {type} projects'
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Thoughts on TEST vs QA vs PROD
You also mentioned that you would like some kind of external properties file to differentiate between TEST, QA, and PROD environments. To address this let's consider four different files, project.yaml, defaults/TEST.yaml, defaults/QA.yaml, defaults/PROD.yaml whose contents are enumerated below.
project.yaml
- project:
    name: some-test
    jobs:
      - test-{name}
defaults/TEST.yaml
- defaults:
    name: global
    git_url: "git@dogs.net:woof/test.git"
defaults/QA.yaml
- defaults:
    name: global
    git_url: "git@dogs.net:woof/qa.git"
defaults/PROD.yaml
- defaults:
    name: global
    git_url: "git@dogs.net:woof/prod.git"
Okay so these aren't great examples because you probably wouldn't have a different git repository for each environment, but I don't want to complicate things by straying too far from your original example.
With JJB you can specify more than one YAML file on the command line (I don't want to complicate the example or its explanation, but you can also specify directories full of JJB yaml). To differentiate between TEST, QA, and PROD deployments of your Jenkins job you can then do something like:
jenkins-jobs test project.yaml:defaults/TEST.yaml
for your test environment,
jenkins-jobs test project.yaml:defaults/QA.yaml
for your QA environment, and
jenkins-jobs test project.yaml:defaults/PROD.yaml
for your prod environment.
Hope that helps.
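If you want to push the generated configuration to a Jenkins server rather than just render it locally, the same colon-separated file list should work with the update command; a sketch, where jenkins.ini stands in for whatever JJB configuration file you use:
jenkins-jobs --conf jenkins.ini update project.yaml:defaults/PROD.yaml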

Jenkins Job-Builder: How to correctly include job-templates from external file?

I am investigating using Jenkins Job-Builder (from OpenStack) as our means of managing jenkins job configurations. In doing so I am trying to figure out the right (best?) way to include a job-template from an external file using the !include custom tag.
In the current use case we will basically have one template that is going to be used by a LOT of jobs. Each job is going to need to exist in its own file, for reasons that are out of scope here.
So far I have gotten this to work:
job-template.yml
name: 'pre-build-{proj}-{repo}'
project-type: freestyle
... etc ...
job-1.yml
- job-template:
    !include job-template.yml

- project:
    name: job-1
    proj: my-proj
    repo: my-repo
    jobs:
      - 'build-config-{proj}-{repo}'
This seems wrong because the template definition gets split across both files and requires needless duplication of the - job-template: line in every job file. I would like to get the following to work instead:
job-template.yml
- job-template:
    name: 'pre-build-{proj}-{repo}'
    project-type: freestyle
    ... etc ...
job-1.yml
!include job-template.yml
- project:
    name: job-1
    proj: my-proj
    repo: my-repo
    jobs:
      - 'build-config-{proj}-{repo}'
The latter unfortunately results in a yaml parse error on the - project: line:
yaml.scanner.ScannerError: mapping values are not allowed here
in "job-1.yml", line 3, column 10
Is there a way to get the entire template definition into the template file? This will become particularly annoying if we ever need to pull in multiple templates from multiple files.
jenkins-jobs takes a path argument which can be a directory holding your files (job-template.yaml, job-1.yaml and job-2.yaml). It will assemble them as a single YAML document, so you do not need to use !include. So you can write:
job-template.yaml
- job-template:
    name: 'pre-build-{proj}-{repo}'
    builders:
      - shell: 'echo building {proj} for {repo}'
job1.yaml
- project:
    name: job-1
    proj: my-proj
    repo: my-repo
    jobs:
      - 'pre-build-{proj}-{repo}'
job2.yaml
- project:
    name: job-2
    proj: my-other-proj
    repo: my-other-repo
    jobs:
      - 'pre-build-{proj}-{repo}'
That generates two jobs with the following shell commands:
pre-build-my-other-proj-my-other-repo:
    <command>echo building my-other-proj for my-other-repo</command>
pre-build-my-proj-my-repo:
    <command>echo building my-proj for my-repo</command>
Assuming the files are in a directory config/, you can generate them all with:
jenkins-jobs test config/ -o /tmp/myjobs
Or use the name argument to filter the jobs that will be realized:
jenkins-jobs test config/ -o /tmp/myjobs '*my-proj*'
# Creates pre-build-my-proj-my-repo
# Skips pre-build-my-other-proj-my-other-repo
