Setting environment variable in CircleCI using a command - circleci

I'm trying to set an environment variable (SHORT_HASH) to a shortened GitHub hash by running a command ('echo $CIRCLE_SHA1 | cut -c -7').
So I'd want the hash 'b1e5ef8acff51c9218ccbf7152fae1d2049d03c5' shortened to 'b1e5ef8'.
Here's a stripped down version of my circle.yml
machine:
  python:
    version: 2.7.3
  services:
    - docker
  environment:
    SHORT_HASH: 'echo $CIRCLE_SHA1 | cut -c -7'
    BUILD_TAG: $CIRCLE_BUILD_NUM-$SHORT_HASH
I looked at the circleci docs, but am not finding anything like this. https://circleci.com/docs/environment-variables

The code is executed in a shell, so you'll want to use backticks or the $() form around the expression you want to evaluate. Like this:
SHORT_HASH: $(echo $CIRCLE_SHA1 | cut -c -7)
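For reference, cut -c -7 keeps only the first seven characters of its input, which is what produces the short hash (the SHA below is the example from the question):

```shell
# Shorten a full 40-character commit SHA to its first seven characters.
sha=b1e5ef8acff51c9218ccbf7152fae1d2049d03c5
echo "$sha" | cut -c -7    # prints: b1e5ef8
```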

One way of doing it is to append export statement to $BASH_ENV
Here is an example:
version: 2
jobs:
  build:
    docker:
      - image: buildpack-deps:jessie
    working_directory: ~/project
    steps:
      - checkout
      - run: |
          bar_var="foo-bar"
          echo "export FOO_ENV_VAR='${bar_var}'" >> $BASH_ENV
      - run:
          command: |
            echo $FOO_ENV_VAR
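CircleCI sources $BASH_ENV at the start of each subsequent step, which is what carries the variable across steps. The mechanism can be sketched outside CircleCI with a temp file standing in for $BASH_ENV (the file path and step boundaries are assumptions for illustration):

```shell
# Stand-in for CircleCI's $BASH_ENV file.
BASH_ENV=$(mktemp)

# "Step 1": bake the current value of bar_var into an export line.
bar_var="foo-bar"
echo "export FOO_ENV_VAR='${bar_var}'" >> "$BASH_ENV"

# "Step 2": a fresh step sources $BASH_ENV before running its command.
unset bar_var FOO_ENV_VAR
. "$BASH_ENV"
echo "$FOO_ENV_VAR"    # prints: foo-bar
rm -f "$BASH_ENV"
```

Note that the value must be baked in at write time (double quotes around the echo argument); a single-quoted export line would only be expanded when sourced, by which point bar_var no longer exists.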

Related

Dynamically replace current time inside docker-compose command

version: '3.7'
services:
  pgdump:
    image: postgres:alpine
    command: pg_dump -f "backup-`date -u -Iseconds`.pg_restore" $DATABASE_URL
This produces a file named
backup-`date -u -Iseconds`.pg_restore
instead of the desired
backup-2021-04-14T16:42:54+00:00.pg_restore.
I also tried:
command: pg_dump -f backup-`date -u -Iseconds`.pg_restore $DATABASE_URL
command: pg_dump -f "backup-${date -u -Iseconds}.pg_restore" $DATABASE_URL
command: pg_dump -f backup-${date -u -Iseconds}.pg_restore $DATABASE_URL
All of them yield different errors.
As of April 2021, command substitution is not supported by docker-compose, according to this GitHub issue.
As a workaround in my use case, one could either use native docker run commands, where substitution works, or use an .env file.
Current command
The date command itself is incorrect. Try running it on its own
date -u -Iseconds
echo `date -u -Iseconds`
From your command, I presume you want the date as UTC seconds since Epoch? Epoch time is UTC by definition, so you just need seconds since Epoch; there is no need for the -u parameter.
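This claim is easy to verify on its own: %s prints seconds since the Epoch regardless of timezone, so -u has no effect on it (a standalone check, not part of the compose file):

```shell
# Epoch seconds are timezone-independent, so these two agree
# (up to a possible one-second tick between the calls).
date -u +%s
date +%s
```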
Solution
Here's the correct command in two forms:
A.
command: pg_dump -f "backup-`date +'%s'`.pg_restore" $DATABASE_URL
B.
command: pg_dump -f "backup-$(date +'%s').pg_restore" $DATABASE_URL
Explanation
There are multiple things to watch out for in the command you provided:
Notice the double quotes around the file name? You cannot nest another double quote within the outer pair without escaping the inner ones with \. Alternatively, you can use as many single-quote pairs as you want within a pair of double quotes. See this answer and this excerpt about 2.2.2 Single-Quotes and 2.2.3 Double-Quotes.
For command substitution, you can use either the $() or `` notation, but not within single quotes, as noted above.
As a dry-run test, create a file directly with said notations:
vi "backup-`date +'%s'`.txt"
vi "backup-$(date +'%s').txt"
As for the date format: both GNU date and BSD date accept %s to represent seconds since Epoch. Find "%s" in ss64, man7, or cyberciti.
Docker-related: watch out for what command does. Source:
command overrides the default command declared by the container image (i.e. by the Dockerfile's CMD).
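The quoting rules above can be checked in isolation: single quotes nest safely inside double quotes, and both substitution forms produce the same filename (the echo stands in for actually creating the file):

```shell
# Single quotes inside double quotes are literal characters, not delimiters.
echo "backup-`date +'%s'`.pg_restore"
# The $() form is equivalent and easier to nest:
echo "backup-$(date +'%s').pg_restore"
```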
You can create the filename and store it as a variable with shell command before doing the pg_dump:
version: '3.7'
services:
  pgdump:
    image: postgres:alpine
    entrypoint: ["/bin/sh","-c"]
    command: >
      "FILENAME=backup-`date -u -Iseconds`.pg_restore
      && pg_dump -f $$FILENAME $$DATABASE_URL"
Successfully tested against Docker image for postgres 13.6.

How might I run Pandoc 'convert all files in Dir' command in Github Actions

I would like to set up a GitHub Action that runs this command from the pandoc FAQ on a repo when it's pushed to master. Our objective is to convert all md files in our repo from md to another format using the pandoc docker container.
Here is where I've got so far. In the first example I do not declare an entrypoint, and I get the error "/usr/local/bin/docker-entrypoint.sh: exec: line 11: for: not found."
name: Advanced Usage
on:
  push:
    branches:
      - master
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - name: convert md to rtf
        uses: docker://pandoc/latex:2.9
        with:
          args: |
            for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done
In the second example we declare entrypoint: /bin/sh and the result is error "/bin/sh: can't open 'for': No such file or directory"
name: Advanced Usage
on:
  push:
    branches:
      - master
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - name: convert md to rtf
        uses: docker://pandoc/latex:2.9
        with:
          entrypoint: /bin/sh
          args: |
            for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done
I am a total noob to GitHub Actions and not a technical person, so my guess is this is an easy one for the SO community. Just trying some simple workflow automation. Any explicit, beginner-level feedback is appreciated. Thanks - allen
I needed to do a recursive conversion of md files to make a downloadable pack, so this answer extends beyond the OP's goal.
This github action will:
Make the output directory (mkdir output)
Recurse through the folders, create similarly named folders in an output directory (for d in */; do mkdir output/$d; done)
Find all md files recursively (find ./ -iname '*.md' -type f) and execute a pandoc command (-exec sh -c 'pandoc ${0} -o output/${0%.md}.docx' {} \;)
Note that you have to be careful with double and single quote marks when converting from stuff that works in terminal to things that get correctly transformed into a single docker command as part of the github action.
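The ${0%.md} inside the -exec shell is POSIX suffix removal: it strips a trailing .md so the .docx extension can be appended, while keeping the directory prefix needed for the mirrored output tree (the path below is a made-up example):

```shell
# %.md removes the shortest trailing match of ".md" from the value.
f=docs/guide/notes.md
echo "output/${f%.md}.docx"    # prints: output/docs/guide/notes.docx
```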
First iteration
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: convert md to docx
        uses: docker://pandoc/latex:2.9
        with:
          entrypoint: /bin/sh
          args: -c "mkdir output;for d in */; do mkdir output/$d; done;find ./ -iname '*.md' -type f -exec sh -c 'pandoc ${0} -o output/${0%.md}.docx' {} \;"
      - uses: actions/upload-artifact@master
        with:
          name: output
          path: output
This solution was developed using @anemyte's info and this SO post on recursive conversion.
Second iteration from @caleb
name: Generate Word docs
on: push
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-20.04
    container:
      image: docker://pandoc/latex:2.9
      options: --entrypoint=sh
    steps:
      - uses: actions/checkout@v2
      - name: prepare output directories
        run: |
          for d in */; do
            mkdir -p output/$d
          done
      - name: convert md to docx
        run: |
          find ./ -iname '*.md' -type f -exec sh -c 'pandoc ${0} -o output/${0%.md}.docx' {} \;
      - uses: actions/upload-artifact@master
        with:
          name: output
          path: output
You can make your life easier if you do this with just shell:
name: Advanced Usage
on:
  push:
    branches:
      - master
jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - name: convert md to rtf
        run: |
          docker run -v $(pwd):/data -w /data pandoc/latex:2.9 sh -c 'for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done'
The -v flag mounts the current working directory to /data inside the container. The -w flag makes /data the working directory. Everything else you wrote yourself.
The problem you faced is that your args is being interpreted as a sequence of arguments. Docker accepts entrypoint and cmd (args in this case) either as a string or as an array of strings. If it is a string, it is parsed to create an array of elements. for became the first element of that array, and since the first element is treated as the executable, Docker tried to execute for and failed.
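That splitting can be reproduced in a plain shell: word-splitting the unquoted string yields an argument array whose first element would be treated as the executable (a sketch of the parsing, not of Docker's internals):

```shell
# Simulate how the string form of args is split into an argument array.
args='for f in *.md; do pandoc "$f" -s -o "${f%.md}.rtf"; done'
set -f                      # disable globbing so *.md stays literal for the demo
set -- $args                # word-split the string into positional parameters
echo "first element: $1"    # prints: first element: for
```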
Unfortunately, it turned out that the action does not support an array of elements at the moment. Check @steph-locke's answer for a solution with correct args as a string for the action.

CircleCI branch build failing but tag build succeeds

I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0-9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with Couldn't connect to Docker daemon at ... - is it running? when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
The problem was occurring on the Save Version Number step. Sometimes that version would be .${CIRCLE_BUILD_NUM} since no tag was passed. Docker dislikes these tags starting with ., so I added a conditional check to see if CIRCLE_TAG was empty, and if it was, use some default version: v0.1.0-build.
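A sketch of that guard (VERSION_NUM and the .env path come from the question; the default v0.1.0-build is the one mentioned above; the build number is an example value):

```shell
# Fall back to a default version when the build was not triggered by a tag,
# so the resulting Docker tag never begins with ".".
CIRCLE_BUILD_NUM=${CIRCLE_BUILD_NUM:-42}    # example build number
if [ -z "$CIRCLE_TAG" ]; then
  CIRCLE_TAG="v0.1.0-build"
fi
echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}"
```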

docker compose Logstash - specify config file and install plugin?

I'm trying to copy my Logstash config and install a plugin at the same time. I've tried multiple methods so far to no avail; Logstash exits with errors every time.
this fails:
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  command: bash -c bin/logstash-plugin install logstash-filter-translate
this fails:
command: logstash -f /etc/logstash/conf.d/logstash.conf bash -c bin/logstash-plugin install logstash-filter-translate
this fails:
command: logstash -f /etc/logstash/conf.d/logstash.conf && bash -c bin/logstash-plugin install logstash-filter-translate
this also fails
command: bash -c logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate
I'm having no luck here, and I bet the answer is simple... can anyone point me in the right direction?
Thanks
I use the image that I have locally with the config below, then it's working fine. Hope it helps.
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
Sample output
logstash_1 | [2017-12-06T15:27:29,120][WARN ][logstash.agent ] stopping pipeline {:id=>".monitoring-logstash"}
logstash_1 | Validating logstash-filter-translate
logstash_1 | Installing logstash-filter-translate
Try this if it's an Ubuntu image:
command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
Otherwise, if it's an Alpine image, use:
command: sh -c "command to run"

How to get all Travis CI environment variables, excluding the system defaults?

I want to pass into docker run all the environment variables I've configured in the Travis web UI.
I'm able to run env > .env to save them to a file and then pass that into docker via --env-file .env.
Unfortunately, this also overrides system ones such as PATH that interfere with the container.
I'm able to filter out PATH using env | grep -vE "^(PATH=)" > .env but I'm wondering whether there's a way to get just the Travis ones?
Here's my .travis.yml:
language: bash
sudo: required
services:
  - docker
before_install:
  - env | grep -vE "^(PATH=)" > .env
install:
  - docker build -t mycompany/myapp .
script:
  - docker run -i --env-file .env mycompany/myapp nosetests
after_success:
  - echo "SUCCESS!"
I don't recommend passing all your environment variables, but if you whitelist them by prefixing them with something like, say, TRAVIS_, you could do something like:
export TRAVIS_WUT=foo
export TRAVIS_FOO=asdf
docker run $(printenv | grep -E '^TRAVIS_' | sed 's/TRAVIS_/-e /g')
# would run -> docker run -e FOO=asdf -e WUT=foo something
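The pipeline can be checked on its own before wiring it into docker run (TRAVIS_FOO is a made-up variable for the demonstration):

```shell
# The grep keeps only TRAVIS_-prefixed variables; the sed turns each
# "TRAVIS_NAME=value" line into a "-e NAME=value" docker run flag.
export TRAVIS_FOO=asdf
printenv | grep -E '^TRAVIS_' | sed 's/TRAVIS_/-e /g'
```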