I would like to share a variable across two steps.
I define it like:
- export MY_VAR="FOO-$BITBUCKET_BUILD_NUMBER"
but then when I try to print it in another step:
- echo $MY_VAR
it's empty.
How can I share such a variable?
As Mr-IDE and Rik Tytgat explained, you can export your environment variables by writing them to a file and then sharing that file with a following step as an artifact. One way to do so is to write your variables to a shell script in one step, declare it as an artifact and then source it in the next step.
definitions:
  steps:
    - step: &build
        name: Build
        script:
          - MY_VAR="FOO-$BITBUCKET_BUILD_NUMBER"
          - echo $MY_VAR
          - echo "export MY_VAR=$MY_VAR" >> set_env.sh
        artifacts: # define the artifacts to be passed to each future step
          - set_env.sh
    - step: &deploy
        name: Deploy
        script:
          # use the artifact from the previous step
          - cat set_env.sh
          - source set_env.sh
          - echo $MY_VAR

pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
NB: In my case, the step that publishes set_env.sh as an artifact is not always part of my pipelines. In that case, be sure to check whether the file exists in the next step before using it.
- step: &deploy
    name: Deploy
    image: alpine
    script:
      # check if env file exists
      - if [ -e set_env.sh ]; then
      - cat set_env.sh
      - source set_env.sh
      - fi
For some reason, exported environment variables are not retained between the child items of a "step:" or between the top-level "step:" items (more info about these definitions here). But you can copy all the environment variables to a file, then read them back again, because files are preserved between steps:
1. Share variables between the child items of a "step:"
How to share variables between "script:" and "after-script:"
pipelines:
  default:
    - step:
        script:
          # Export some variables
          - export MY_VAR1="FOO1-$BITBUCKET_BUILD_NUMBER"
          - export MY_VAR2="FOO2-$BITBUCKET_BUILD_NUMBER"
          - echo $MY_VAR1
          - echo $MY_VAR2
          # Copy all the environment variables to a file, as KEY=VALUE, to share to other steps
          - printenv > ENVIRONMENT_VARIABLES.txt
        after-script:
          # If the file exists, read all the previous environment variables
          # from the file, and export them again
          - |
            if [ -f ENVIRONMENT_VARIABLES.txt ]; then
              export $(cat ENVIRONMENT_VARIABLES.txt | xargs)
            fi
          - echo $MY_VAR1
          - echo $MY_VAR2
Note: Try to avoid using strings that contain spaces or newline characters (in either the keys or the values). The export command will have trouble reading them and can throw errors. One possible workaround is to use sed to automatically delete any line that has a space character in it:
# Copy all the environment variables to a file, as KEY=VALUE, to share to other steps
- printenv > ENVIRONMENT_VARIABLES.txt
# Remove lines that contain spaces, to avoid errors on re-import (then delete the temporary file)
- sed -i -e '/ /d' ENVIRONMENT_VARIABLES.txt ; find . -name "ENVIRONMENT_VARIABLES.txt-e" -type f -print0 | xargs -0 rm -f
More info:
https://www.cyberciti.biz/faq/linux-list-all-environment-variables-env-command/ -- for printenv command
Set environment variables from file of key/value pairs
2. Share variables between the top-level "step:" items
pipelines:
  default:
    - step:
        script:
          - export MY_VAR1="FOO1-$BITBUCKET_BUILD_NUMBER"
    - step:
        script:
          - echo $MY_VAR1 # This will not work
In this scenario, Bitbucket Pipelines will treat the two "step:" items as completely independent builds, so the second "step:" will start from scratch with a blank folder and a new git clone.
So you should share files between steps by using declared artifacts, as shown in the answer by belgacea (19 Dec 2019).
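For completeness, here is a minimal, untested sketch of how the failing example above could be rewritten with that artifact pattern (same idea as belgacea's answer; the file name set_env.sh is just a convention):
pipelines:
  default:
    - step:
        script:
          - export MY_VAR1="FOO1-$BITBUCKET_BUILD_NUMBER"
          # write the variable to a file so it can be handed to the next step
          - echo "export MY_VAR1=$MY_VAR1" >> set_env.sh
        artifacts:
          - set_env.sh
    - step:
        script:
          # restore the variable from the artifact produced by the previous step
          - source set_env.sh
          - echo $MY_VAR1 # now prints the value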
I'm afraid it is not possible to share an environment variable from one step to another directly, but you can define global environment variables for all steps in the project settings, under the Pipelines category.
Settings -> Pipelines -> Repository Variables
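As a small sketch (assuming a repository variable named MY_VAR has been created there), the variable is then available in every step without any extra plumbing:
pipelines:
  default:
    - step:
        script:
          # MY_VAR is injected by Bitbucket from Settings -> Pipelines -> Repository Variables
          - echo $MY_VAR
    - step:
        script:
          - echo $MY_VAR # also available here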
I know this question is rather old, but I've found a cleaner approach without uploading and downloading artifacts across steps.
Instead of defining an anchored step, you can anchor a script containing the export commands in the definitions section and reuse it explicitly as part of a step. Note that the script defined in a script anchor is a one-liner and needs && between multiple commands.
definitions:
  commonItems:
    &setEnv export MY_VAR="FOO-$BITBUCKET_BUILD_NUMBER" &&
      export MY_VAR_2="Hey" &&
      export MY_VAR_3="What you're building"
Here's how you would call it in your steps.
steps:
  - step:
      name: First step
      script:
        - *setEnv
        - echo $MY_VAR # FOO-1
        - echo $MY_VAR_2 # Hey
        - echo $MY_VAR_3 # What you're building
  - step:
      name: Second step
      script:
        - *setEnv
        - echo $MY_VAR # FOO-1
        - echo $MY_VAR_2 # Hey
        - echo $MY_VAR_3 # What you're building
Related
I need a wasm runtime to unit test my code on GitLab, so I have the following in my .gitlab-ci.yml:
default:
  image: emscripten/emsdk
  before_script:
    - curl https://wasmtime.dev/install.sh -sSf | bash
    - source /root/.bashrc
The wasmtime.dev script installs the binaries and updates PATH in ~/.bashrc. Running my tests fails with the message wasmtime: command not found (the job is specified as below):
unit-test:
  stage: test
  script:
    - bash test.sh
What do I need to do to make sure the changes of the wasmtime install script apply? Thanks!
Edit
Adding export PATH="$PATH:$HOME/.wasmtime/bin" before bash test.sh in the unit-test job successfully got the wasmtime binary on the path, but I'm not quite sure I'm happy with this solution - what if the path of wasmtime changes later on? Shouldn't sourcing .bashrc do this? Thanks!
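For reference, this is roughly what that workaround looks like in the job (a sketch of the current setup, with the wasmtime path hard-coded):
unit-test:
  stage: test
  script:
    # work around the PATH update in ~/.bashrc not being visible to the job shell
    - export PATH="$PATH:$HOME/.wasmtime/bin"
    - bash test.sh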
I have set up a GitHub trigger on Google Cloud Build. I already have a Dockerfile that performs the build steps. Since there is no provision to pass substitution variables directly to it, I am trying to build with a Cloud Build configuration file (YAML) and passing the path to the Dockerfile in that configuration file.
This is the Cloud Build configuration file, where I need to pass 5 variables for the Dockerfile to consume:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    env:
      - 'ACCESS_TOKEN=$ACCESS_TOKEN'
      - 'BRANCH=$BRANCH'
      - 'UTIL_BRANCH=$UTIL_BRANCH'
      - 'ARG_ENVIRONMENT=$ARG_ENVIRONMENT'
      - 'ARG_PYTHON_SCRIPT=$ARG_PYTHON_SCRIPT'
    args:
      - build
      - "--tag=gcr.io/$PROJECT_ID/quickstart-image"
      - "--file=./twitter/dax/processing_scripts/Dockerfile"
      - .
When the trigger runs the build, I get an error in one of the build steps in the Dockerfile saying that the variable is not available. It's clear that the environment variable passed in the YAML file is not being passed to the Dockerfile for consumption.
And this is how I have filled in the substitution variables on the trigger page.
Here is the build output from step 5, where the error occurs (the steps before it are just apt-get update commands):
Step 5/28 : RUN git config --global url."https://${ACCESS_TOKEN}:@github.com/".insteadOf "https://github.com/" && echo $(ACCESS_TOKEN)
---> Running in f7b94bc2a0d9
/bin/sh: 1: ACCESS_TOKEN: not found
Removing intermediate container f7b94bc2a0d9
---> 30965207dcec
Step 6/28 : ARG BRANCH
---> Running in 93e36589ac48
Removing intermediate container 93e36589ac48
---> 1d1508b1c1d9
Step 7/28 : RUN git clone https://github.com/my_repo45/twitter.git -b "${BRANCH}"
---> Running in fbeb93dbb113
Cloning into 'twitter'...
remote: Repository not found.
fatal: Authentication failed for 'https://github.com/my_repo45/twitter.git/'
The command '/bin/sh -c git clone https://github.com/my_repo45/twitter.git -b "${BRANCH}"' returned a non-zero code: 128
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 128
Could anyone please point out what the issue is and validate whether the environment declaration is proper?
You would still need to add the variables to the docker build command. In your current setup, the substitution variables are only available within Cloud Build itself, not as regular environment variables inside the Docker build. A simplified example of how to pass those variables through to the build is outlined below. Note that you can only consume them with ARG in your Dockerfile, not with ENV.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    env:
      - 'ACCESS_TOKEN=$_ACCESS_TOKEN'
    args:
      - build
      - "--build-arg=ACCESS_TOKEN=${ACCESS_TOKEN}"
      - "--tag=gcr.io/$PROJECT_ID/quickstart-image"
      - "--file=./twitter/dax/processing_scripts/Dockerfile"
      - . # build context
Another option is to export the environment variable within the same build step your docker build command is in:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    env:
      - 'ACCESS_TOKEN=$_ACCESS_TOKEN'
    args:
      - '-c'
      - |
        export ACCESS_TOKEN=${ACCESS_TOKEN}
        docker build \
          --tag=gcr.io/$PROJECT_ID/quickstart-image \
          --file=./twitter/dax/processing_scripts/Dockerfile \
          .
Of course, you could argue whether the env field is still needed in this setup; I'll leave that up to you.
We have a repository that needs to go get a private repo. To do this, we are using an SSH key to access the private repo/module.
We are storing this SSH key using Google Secret Manager and passing it to Docker using the build-arg flag. Now, when we do this locally, the Dockerfile builds and runs as intended. This is the command we use for a local build:
export SSH_PRIVATE_KEY="$(gcloud secrets versions access latest --secret=secret-data)" && \
docker build --build-arg SSH_PRIVATE_KEY -t my-image .
However, when we try to move this setup to Google Cloud Build, we run into 403 forbidden errors from Bitbucket, which leads me to believe that the SSH key is either not being read or formatted correctly.
The full 403 error is:
https://api.bitbucket.org/2.0/repositories/my-repo?fields=scm: 403 Forbidden
Step #0 - "Build": server response: Access denied. You must have write or admin access.
What is even stranger is that when I run the Cloud Build local emulator, it works fine using this command: cloud-build-local --config=builder/cloudbuild-prod.yaml --dryrun=false .
I've tried many different formats and methods, so out of desperation I am asking the community for help. What could be the problem?
Here is our cloudbuild.yaml:
steps:
  # Get secret
  - id: 'Get Secret'
    name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        gcloud secrets versions access latest --secret=secret-data > /workspace/SSH_PRIVATE_KEY.txt
  # Build
  - id: 'Build'
    name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt) &&
        docker build --build-arg SSH_PRIVATE_KEY -t my-image .
With Cloud Build, when you want to reference a local Linux variable rather than a substitution variable, you have to escape the $ with another $. Look at this:
# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt)
      docker build --build-arg $$SSH_PRIVATE_KEY -t my-image .
The SSH_PRIVATE_KEY is prefixed by $$ to say: don't look at the substitution variable, look at the Linux variable.
I also removed the && at the end of the export line. The | (YAML literal block) means: run each command in succession, with a line break separating each command.
Thanks for all the help! This one was pretty weird. Turns out it's not an issue with Cloud Build or Secret Manager but the Dockerfile I was using.
Instead of setting GOPRIVATE with the command in the Dockerfile below, I was using a statement like RUN export GOPRIVATE="bitbucket.org/odds".
In case anyone runs into something like this again, here's the full Dockerfile that works.
FROM golang:1.15.1
WORKDIR $GOPATH/src/bitbucket.org/gml/my-srv
ENTRYPOINT ["./my-srv"]
ARG CREDENTIALS
RUN git config \
    --system \
    url."https://${CREDENTIALS}@bitbucket.org/".insteadOf \
    "https://bitbucket.org/"
RUN go env -w GOPRIVATE="bitbucket.org/my-team"
COPY . .
RUN make build
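For illustration, a hypothetical local build against this Dockerfile would fetch the secret and pass it as the CREDENTIALS build argument (assuming the same secret-data secret name used above):
# fetch the token (assumed secret name) and hand it to the CREDENTIALS build arg
export CREDENTIALS="$(gcloud secrets versions access latest --secret=secret-data)"
docker build --build-arg CREDENTIALS -t my-image .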
I've just started setting up a GitHub Actions workflow for one of my projects. I attempted to run the workflow steps inside a container with this workflow definition:
name: TMT-Charts-CI
on:
  push:
    branches:
      - master
      - actions-ci
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://alpine/helm:2.13.0
    steps:
      - name: Checkout Code
        uses: actions/checkout@v1
      - name: Validate and Upload Chart to Chart Museum
        run: |
          echo "Hello, world!"
          export PAGER=$(git diff-tree --no-commit-id --name-only -r HEAD)
          echo "Changed Components are => $PAGER"
          export COMPONENT="NOTSET"
          for CHANGE in $PAGER; do ENV_DIR=${CHANGE%%/*}; done
          for CHANGE in $PAGER; do if [[ "$CHANGE" != .* ]] && [[ "$ENV_DIR" == "${CHANGE%%/*}" ]]; then export COMPONENT="$CHANGE"; elif [[ "$CHANGE" == .* ]]; then echo "Not a Valid Dir for Helm Chart" ; else echo "Only one component per PR should be changed" && exit 1; fi; done
          if [ "$COMPONENT" == "NOTSET" ]; then echo "No component is changed!" && exit 1; fi
          echo "Initializing Component => $COMPONENT"
          echo $COMPONENT | cut -f1 -d"/"
          export COMPONENT_DIR="${COMPONENT%%/*}"
          echo "Changed Dir => $COMPONENT_DIR"
          cd $COMPONENT_DIR
          echo "Install Helm and Upload Chart If Exists"
          curl -L https://git.io/get_helm.sh | bash
          helm init --client-only
But the workflow fails, stating that the container stopped immediately.
I have tried many images, including the "alpine:3.8" image described in the official documentation, but the container still stops.
According to Workflow syntax for GitHub Actions, in the Container section: "A container to run any steps in a job that don't already specify a container." My assumption is that the container would be started and the steps would be run inside the Docker container.
We can achieve this by making a custom Docker image. GitHub runners somehow stop the running container after executing the entrypoint command, so I made a Docker image whose entrypoint keeps the container alive, so the container doesn't die after it starts.
Here is the custom Dockerfile (https://github.com/rizwan937/Helm-Image)
You can publish this image to Docker Hub and use it in the workflow file like this:
container:
  image: docker://rizwan937/helm
You can add this entrypoint to any Docker image so that it remains alive while the remaining steps execute.
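For illustration, a keep-alive entrypoint of this kind could look roughly like the sketch below (the base image and the tail trick are assumptions; the linked Dockerfile may differ):
# Dockerfile sketch: helm image whose entrypoint keeps the container running
FROM alpine/helm:2.13.0
# override the default entrypoint so the container stays alive for later workflow steps
ENTRYPOINT ["/bin/sh", "-c", "tail -f /dev/null"]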
This is a temporary solution; if anyone has a better one, let me know.
I am testing a GitLab CI pipeline with gitlab-runner exec. During a script, Boost ran into an error, and it created a log file. I want to view this log file, but I do not know how to.
.gitlab-ci.yml in project directory:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
I test this on my machine with:
sudo gitlab-runner exec docker build --timeout 3600
The last several lines of the output:
Building Boost.Build engine with toolset ...
Failed to build Boost.Build build engine
Consult 'bootstrap.log' for more details
ERROR: Job failed: exit code 1
FATAL: exit code 1
bootstrap.log is what I would like to view.
Appending - cat bootstrap.log to .gitlab-ci.yml does not output the file contents because the runner exits before reaching this line. I tried looking through past containers with sudo docker ps -a, but this does not show the one that GitLab Runner used. How can I open bootstrap.log?
You can declare an artifact for the log:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  artifacts:
    when: on_failure
    paths:
      - include/boost/bootstrap.log
Afterwards, you will be able to download the log file via the web interface.
Note that using when: on_failure will ensure that bootstrap.log will only be collected if the build fails, saving disk space on successful builds.