I would like to use a path location in several steps of a GitHub Actions workflow. I tried to follow the "DAY_OF_WEEK" example, but my test failed:
name: env_test
on:
  workflow_dispatch:
  push:
    branches: [ "main" ]
env:
  ILOC: d:/a/destination/
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Installation Location
        shell: cmd
        run: |
          echo "just iloc"
          echo "${ILOC}"
          echo "env"
          echo env.ILOC
          mkdir "${ILOC}"
Here is the relevant part of the log:
Run echo "just iloc"
echo "just iloc"
echo "${ILOC}"
echo "env"
echo env.ILOC
mkdir "${ILOC}"
shell: C:\Windows\system32\cmd.EXE /D /E:ON /V:OFF /S /C "CALL "{0}""
env:
ILOC: d:/a/local
"just iloc"
"${ILOC}"
"env"
env.ILOC
'#' is not recognized as an internal or external command,
operable program or batch file.
'#' is not recognized as an internal or external command,
operable program or batch file.
Error: Process completed with exit code 1.
So how do I properly set a GitHub Actions global variable?
It looks like you are setting it properly; only the way it's accessed needs attention:
name: learn_to_use_actions

on:
  push

env:
  MYVAR: d:/a/destination/
jobs:
  learn:
    name: use env
    runs-on: ubuntu-latest
    steps:
      - name: print env
        if: ${{ env.MYVAR == 'TEST' }}
        run: |
          echo "the var was detected to be test = " $MYVAR

      - name: other print env
        if: ${{ env.MYVAR != 'TEST' }}
        run: |
          echo "the var was NOT detected to be test = " $MYVAR
When used inside a conditional, the whole test has to be wrapped in an expression like:
${{ env.MYVAR != 'TEST' }}
When used in a command, it looks like a *nix environment variable:
echo "the var was detected to be test = " $MYVAR
There has to be something I'm missing, but I just can't see it. I have a staged build. The deploy stage fires as expected, as do all of its phases, except the deploy phase itself. Any idea why?
stages:
  - name: build
  - name: publish
    if: (type == push && branch == rob-release-and-deploy) || tag IS present
  - name: deploy
    if: (type == push && branch == rob-release-and-deploy) || tag IS present
  - name: clean

# ... Other bits until we hit the deploy stage of jobs: include: ...

  - stage: deploy
    name: "Deploy to dev|aut|stg"
    install:
      - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
      - chmod +x ./kubectl
      - mv ./kubectl ${HOME}/.local/bin
    script:
      - echo "Placeholder?"
    before_deploy:
      - aws ecr get-login-password --region "${AWS_REGION}" | docker login --username AWS --password-stdin "${AWS_ECR_REGISTRY_URL}/tmp"
    deploy:
      - provider: script
        script: "bash ./bin/deploy dev"
        skip_cleanup: true
        on:
          branch: rob-release-and-deploy
      - provider: script
        script: "bash ./bin/deploy aut"
        skip_cleanup: true
        on:
          condition: tag IS present && (tag =~ /^\d{8}\.rc\d+$/)
I'm committing code to the rob-release-and-deploy branch (a PR is open on that branch). There's no indication that the deploy: phase is being recognized at all. It's not being skipped with the message I might normally see if I were pushing to a different branch or something; it's simply not doing anything at all.
Here's the end of the build log:
0.00s$ echo "Placeholder?"
Placeholder?
The command "echo "Placeholder?"" exited with 0.

travis_run_after_success: command not found
travis_run_after_failure: command not found
travis_run_after_script: command not found
travis_run_finish: command not found

Done. Your build exited with 0.
What can I try next?
Solved. In my second deploy provider, I was missing tags: true...
      - provider: script
        script: "bash ./bin/deploy aut"
        skip_cleanup: true
        on:
          tags: true
          condition: tag =~ /^\d{8}\.rc\d+$/
I knew it would be something dumb, but I thought I saw an example in the docs that deployed just using condition:. Alas. ¯\_(ツ)_/¯
I have a playbook with a bunch of tasks:
vars:
  params_ENV_SERVER: "{{ lookup('env', 'ENV_SERVER') }}"
  params_UML_SUFFIX: "{{ lookup('env', 'UML_SUFFIX') }}"

tasks:
  - name: delete previous files
    shell: ssh deploy@{{ params_ENV_SERVER }} sudo rm -rf /opt/jenkins-files/*
    become: true
    become_user: deploy

  - name: create build dir
    shell: ssh deploy@{{ params_ENV_SERVER }} sudo mkdir -p /opt/jenkins-files/build
    become: true
    become_user: deploy

  - name: chown build dir
    shell: ssh deploy@{{ params_ENV_SERVER }} sudo chown -R deploy:deploy /opt/jenkins-files
    become: true
    become_user: deploy
which I call from a Jenkinsfile for the PROD and QA environments:
withEnv(["ENV_SERVER=192.168.1.30","UML_SUFFIX=stage-QA"]) {
    sh "ansible-playbook nginx-depl.yml --limit 127.0.0.1"
}

withEnv(["ENV_SERVER=192.168.1.130","UML_SUFFIX=stage-PROD"]) {
    sh "ansible-playbook nginx-depl.yml --limit 127.0.0.1"
}
Is it possible to modify the playbook somehow so that all tasks execute on QA, but only the 2nd and 3rd execute on PROD?
Is this what you are looking for?
- name: delete previous files
  shell: ssh deploy@{{ params_ENV_SERVER }} sudo rm -rf /opt/jenkins-files/*
  become: true
  become_user: deploy
  when: params_UML_SUFFIX == 'stage-QA'

- name: create build dir
  shell: ssh deploy@{{ params_ENV_SERVER }} sudo mkdir -p /opt/jenkins-files/build
  become: true
  become_user: deploy
  when: params_UML_SUFFIX == 'stage-QA' or
        params_UML_SUFFIX == 'stage-PROD'

- name: chown build dir
  shell: ssh deploy@{{ params_ENV_SERVER }} sudo chown -R deploy:deploy /opt/jenkins-files
  become: true
  become_user: deploy
  when: params_UML_SUFFIX == 'stage-QA' or
        params_UML_SUFFIX == 'stage-PROD'
Optionally, the "Ansible way" would be to create an inventory
shell> cat hosts
[prod]
192.168.1.130
[qa]
192.168.1.30
and declare all hosts in the playbook
shell> cat playbook.yml
- hosts: all
  tasks:
    - debug:
        msg: "Delete previous files.
              Execute module file on {{ inventory_hostname }}"
      when: inventory_hostname in groups.qa

    - debug:
        msg: "Create build dir.
              Execute module file on {{ inventory_hostname }}"
      when: inventory_hostname in groups.qa or
            inventory_hostname in groups.prod

    - debug:
        msg: "Chown build dir.
              Execute module file on {{ inventory_hostname }}"
      when: inventory_hostname in groups.qa or
            inventory_hostname in groups.prod
You can omit "become: true" and "become_user: deploy" and declare the remote user on the command-line. For example
shell> ansible-playbook -u deploy -i hosts playbook.yml
gives (abridged)
TASK [debug] ****
skipping: [192.168.1.130]
ok: [192.168.1.30] =>
msg: Delete previous files. Execute module file on 192.168.1.30
TASK [debug] ****
ok: [192.168.1.130] =>
msg: Create build dir. Execute module file on 192.168.1.130
ok: [192.168.1.30] =>
msg: Create build dir. Execute module file on 192.168.1.30
TASK [debug] ****
ok: [192.168.1.30] =>
msg: Chown build dir. Execute module file on 192.168.1.30
ok: [192.168.1.130] =>
msg: Chown build dir. Execute module file on 192.168.1.130
You can limit the execution to particular hosts or groups. For example, the command below would execute on the prod group only
shell> ansible-playbook -u deploy -i hosts playbook.yml --limit prod
gives (abridged)
TASK [debug] ****
skipping: [192.168.1.130]
TASK [debug] ****
ok: [192.168.1.130] =>
msg: Create build dir. Execute module file on 192.168.1.130
TASK [debug] ****
ok: [192.168.1.130] =>
msg: Chown build dir. Execute module file on 192.168.1.130
Notes
"Ansible-way" is to execute modules on the remote hosts.
Replace the debug tasks with file
Integrate into one tasks "create build dir" and "chown build dir"
If you run the playbook as user deploy you can omit the parameter "-u deploy"
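For example, a minimal sketch of the last two notes combined, using the file module in place of the debug placeholders (the path and ownership are taken from the tasks above):

    - name: create and chown build dir
      file:
        path: /opt/jenkins-files/build
        state: directory
        owner: deploy
        group: deploy
      when: inventory_hostname in groups.qa or
            inventory_hostname in groups.prod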
In my Docker project's repo, I have a VERSION file that contains nothing more than the version number.
1.2.3
In Travis, I'm able to cat the file to an environment variable, and use that to tag my build before pushing to Docker Hub.
---
env:
  global:
    - USER=username
    - REPO=my_great_project
    - VERSION=$(cat VERSION)
What is the equivalent of that in GitHub Actions? I tried this, but it's not working.
name: Test
on:
  ...
  ...
env:
  USER: username
  REPO: my_great_project
jobs:
  build_ubuntu:
    name: Build Ubuntu
    runs-on: ubuntu-latest
    env:
      BASE: ubuntu
    steps:
      - name: Check out the codebase
        uses: actions/checkout@v2
      - name: Build the image
        run: |
          VERSION=$(cat VERSION)
          docker build --file ${BASE}/Dockerfile --tag ${USER}/${REPO}:${VERSION} .
  build_alpine:
    name: Build Alpine
    runs-on: ubuntu-latest
    env:
      BASE: alpine
    ...
    ...
    ...
I've also tried this, which doesn't work.
- name: Build the image
  run: |
    echo "VERSION=$(cat ./VERSION)" >> $GITHUB_ENV
    docker build --file ${BASE}/Dockerfile --tag ${USER}/${REPO}:${VERSION} .
I went down the road that Benjamin W. was talking about, with having VERSION in my environment rather than just in that specific step. Values written to $GITHUB_ENV only become available in subsequent steps, which is why setting and using VERSION inside the same run block doesn't work.
This worked for me to set the variable in one step, then use it in separate steps:
- name: Set variables
  run: |
    VER=$(cat VERSION)
    echo "VERSION=$VER" >> $GITHUB_ENV

- name: Build Docker Image
  uses: docker/build-push-action@v2
  with:
    context: .
    file: ${{ env.BASE_DIR }}/Dockerfile
    load: true
    tags: |
      ${{ env.USER }}/${{ env.REPO }}:${{ env.VERSION }}
      ${{ env.USER }}/${{ env.REPO }}:latest
As I want to re-use environment variables between jobs, this is how I do it. I wish I could find a way to minimize this code.
In this example, I read the variable values from my Dockerfile, but it will work with any file.
pre_build:
  runs-on: ubuntu-20.04
  steps:
    ...
    -
      name: Save variables to disk
      run: |
        cat $(echo ${{ env.DOCKERFILE }}) | grep DOCKERHUB_USER= | head -n 1 | grep -o '".*"' | sed 's/"//g' > ~/varz/DOCKERHUB_USER
        cat $(echo ${{ env.DOCKERFILE }}) | grep GITHUB_ORG= | head -n 1 | grep -o '".*"' | sed 's/"//g' > ~/varz/GITHUB_ORG
        cat $(echo ${{ env.DOCKERFILE }}) | grep GITHUB_REGISTRY= | head -n 1 | grep -o '".*"' | sed 's/"//g' > ~/varz/GITHUB_REGISTRY
        echo "$(cat ~/varz/DOCKERHUB_USER)/$(cat ~/varz/APP_NAME)" > ~/varz/DKR_PREFIX
    -
      name: Set ALL variables for this job | à la sauce GitHub Actions
      run: |
        echo "VERSION_HASH_DATE=$(cat ~/varz/VERSION_HASH_DATE)" >> $GITHUB_ENV
        echo "VERSION_HASH_ONLY=$(cat ~/varz/VERSION_HASH_ONLY)" >> $GITHUB_ENV
        echo "VERSION_CI=$(cat ~/varz/VERSION_CI)" >> $GITHUB_ENV
        echo "VERSION_BRANCH=$(cat ~/varz/VERSION_BRANCH)" >> $GITHUB_ENV
    -
      name: Show variables
      run: |
        echo "${{ env.VERSION_HASH_DATE }} < VERSION_HASH_DATE"
        echo "${{ env.VERSION_HASH_ONLY }} < VERSION_HASH_ONLY"
        echo "${{ env.VERSION_CI }} < VERSION_CI"
        echo "${{ env.VERSION_BRANCH }} < VERSION_BRANCH"
    -
      name: Upload variables as artifact
      uses: actions/upload-artifact@master
      with:
        name: variables_on_disk
        path: ~/varz

test_build:
  needs: [pre_build]
  runs-on: ubuntu-20.04
  steps:
    ...
    -
      name: Job preparation | Download variables from artifact
      uses: actions/download-artifact@master
      with:
        name: variables_on_disk
        path: ~/varz
    -
      name: Job preparation | Set variables for this job | à la sauce GitHub Actions
      run: |
        echo "VERSION_HASH_DATE=$(cat ~/varz/VERSION_HASH_DATE)" >> $GITHUB_ENV
        echo "VERSION_HASH_ONLY=$(cat ~/varz/VERSION_HASH_ONLY)" >> $GITHUB_ENV
        echo "VERSION_BRANCH=$(cat ~/varz/VERSION_BRANCH)" >> $GITHUB_ENV
        echo "BRANCH_NAME=$(cat ~/varz/BRANCH_NAME)" >> $GITHUB_ENV
I would like to set up a GitHub Action that runs this command from the pandoc FAQ on a repo when it's pushed to master. Our objective is to convert all the md files in our repo to another format using the pandoc Docker container.
Here is how far I got. In the first example I do not declare an entrypoint and I get the error "/usr/local/bin/docker-entrypoint.sh: exec: line 11: for: not found."
name: Advanced Usage
on:
  push:
    branches:
      - master
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - name: convert md to rtf
        uses: docker://pandoc/latex:2.9
        with:
          args: |
            for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done
In the second example we declare entrypoint: /bin/sh and the result is the error "/bin/sh: can't open 'for': No such file or directory"
name: Advanced Usage
on:
  push:
    branches:
      - master
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - name: convert md to rtf
        uses: docker://pandoc/latex:2.9
        with:
          entrypoint: /bin/sh
          args: |
            for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done
I am a total noob to GitHub Actions and not a technical person, so my guess is this is an easy one for the SO community. I'm just trying some simple workflow automation. Any explicit, beginner-level feedback is appreciated. Thanks - allen
I needed to do a recursive conversion of md files to make a downloadable pack, so this answer extends beyond the OP's goal.
This GitHub Action will:
Make the output directory (mkdir output)
Recurse through the folders, create similarly named folders in an output directory (for d in */; do mkdir output/$d; done)
Find all md files recursively (find ./ -iname '*.md' -type f) and execute a pandoc command (-exec sh -c 'pandoc ${0} -o output/${0%.md}.docx' {} \;)
Note that you have to be careful with double and single quote marks when converting something that works in a terminal into something that is correctly transformed into a single Docker command as part of the GitHub Action.
First iteration
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: convert md to docx
        uses: docker://pandoc/latex:2.9
        with:
          entrypoint: /bin/sh
          args: -c "mkdir output;for d in */; do mkdir output/$d; done;find ./ -iname '*.md' -type f -exec sh -c 'pandoc ${0} -o output/${0%.md}.docx' {} \;"
      - uses: actions/upload-artifact@master
        with:
          name: output
          path: output
This solution was developed using @anemyte's info and this SO post on recursive conversion.
Second iteration from @caleb
name: Generate Word docs
on: push
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-20.04
    container:
      image: docker://pandoc/latex:2.9
      options: --entrypoint=sh
    steps:
      - uses: actions/checkout@v2
      - name: prepare output directories
        run: |
          for d in */; do
            mkdir -p output/$d
          done
      - name: convert md to docx
        run: |
          find ./ -iname '*.md' -type f -exec sh -c 'pandoc ${0} -o output/${0%.md}.docx' {} \;
      - uses: actions/upload-artifact@master
        with:
          name: output
          path: output
You can make your life easier if you do this with just the shell:
name: Advanced Usage
on:
  push:
    branches:
      - master
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - name: convert md to rtf
        run: |
          docker run -v $(pwd):/data -w /data pandoc/latex:2.9 sh -c 'for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done'
The -v flag mounts the current working directory to /data inside the container. The -w flag makes /data the working directory. Everything else you wrote yourself.
The problem you faced is that your args was being interpreted as a sequence of arguments. Docker accepts entrypoint and cmd (args in this case) either as a string or as an array of strings. If it is a string, it is parsed into an array of elements; for became the first element of that array, and since the first element is treated as the executable, Docker tried to execute for and failed.
Unfortunately, it turned out that the action does not support an array of elements at the moment. Check @steph-locke's answer for a solution with the correct args as a string for the action.
I've just started setting up a GitHub Actions workflow for one of my projects. I attempted to run the workflow steps inside a container with this workflow definition:
name: TMT-Charts-CI
on:
  push:
    branches:
      - master
      - actions-ci
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://alpine/helm:2.13.0
    steps:
      - name: Checkout Code
        uses: actions/checkout@v1
      - name: Validate and Upload Chart to Chart Museum
        run: |
          echo "Hello, world!"
          export PAGER=$(git diff-tree --no-commit-id --name-only -r HEAD)
          echo "Changed Components are => $PAGER"
          export COMPONENT="NOTSET"
          for CHANGE in $PAGER; do ENV_DIR=${CHANGE%%/*}; done
          for CHANGE in $PAGER; do if [[ "$CHANGE" != .* ]] && [[ "$ENV_DIR" == "${CHANGE%%/*}" ]]; then export COMPONENT="$CHANGE"; elif [[ "$CHANGE" == .* ]]; then echo "Not a Valid Dir for Helm Chart" ; else echo "Only one component per PR should be changed" && exit 1; fi; done
          if [ "$COMPONENT" == "NOTSET" ]; then echo "No component is changed!" && exit 1; fi
          echo "Initializing Component => $COMPONENT"
          echo $COMPONENT | cut -f1 -d"/"
          export COMPONENT_DIR="${COMPONENT%%/*}"
          echo "Changed Dir => $COMPONENT_DIR"
          cd $COMPONENT_DIR
          echo "Install Helm and Upload Chart If Exists"
          curl -L https://git.io/get_helm.sh | bash
          helm init --client-only
But the workflow fails, stating that the container stopped immediately.
I have tried many images, including the "alpine:3.8" image described in the official documentation, but the container stops.
According to Workflow syntax for GitHub Actions, in the Container section: "A container to run any steps in a job that don't already specify a container." My assumption is that the container would be started and the steps would be run inside the Docker container.
We can achieve this by making a custom Docker image. The GitHub runner somehow stops the running container after executing the entrypoint command, so I made a Docker image with an entrypoint that keeps the container alive, so the container doesn't die after it starts.
Here is the custom Dockerfile (https://github.com/rizwan937/Helm-Image)
You can publish this image to Docker Hub and use it in the workflow file like:
container:
  image: docker://rizwan937/helm
You can add this entrypoint to any Docker image so that it remains alive for further step execution.
This is a temporary solution; if anyone has a better one, let me know.
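For reference, a hypothetical sketch of what such an image could look like (this is not the linked Dockerfile, just an illustration of the idea of an entrypoint that keeps the container alive):

# Hypothetical example, not the author's actual Dockerfile
FROM alpine/helm:2.13.0
# Replace the default entrypoint (helm) with a command that never exits,
# so the job container stays up while the workflow steps run inside it.
ENTRYPOINT ["tail", "-f", "/dev/null"]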