How to use value of variable from one job to another - circleci

Objective: I am trying to pass a variable set in one step of Job A to Job B in CircleCI, or, if that is possible, to programmatically create an environment variable in a CircleCI context.
What I have done so far:
I have the config below, but as part of one step I need to store a string value ($key in the snippet) and use it in the next job.
However, I read in the documentation that each run step executes in its own shell. I have seen exporting variables to $BASH_ENV suggested as a workaround, but I am using a Windows executor, so that is not an option.
version: 2.1
jobs:
  self-hosted-agent-test:
    machine: true
    resource_class: xxxxxxx/devops-self-hosted-agent
    steps:
      - checkout
      - run:
          command: az login --service-principal -u $xxxxxx -p $xxxxx --tenant $xxxx
      - run:
          command: az group create --location $xxxxx --name $xxxxxxxx
      - run:
          command: az storage account create --name $xxxxxxxx --resource-group $xxxxxxx --location $xxxxx --sku Standard_LRS
      - run:
          command: az storage container create --name xxxxxx --account-name $xxxxxxxx
      - run:
          command: Connect-AzAccount --service-principal -u $xxx -p $xxx --tenant $xxxx
      - run:
          shell: powershell.exe
          command: $key=(Get-AzStorageAccountKey -ResourceGroupName $xxxxxxxxx -AccountName $xxxxxxx)[0].Value
  self-hosted-agent-test1:
    machine: true
    resource_class: xxxxxxx/devops-self-hosted-agent
    steps:
      - checkout
      - run:
          name: Storage key persistence check
          command: Write-Host $key
workflows:
  my-workflow:
    jobs:
      - self-hosted-agent-test:
          context:
            - abcd
      - self-hosted-agent-test1
Can you suggest how to achieve this, or how to create an environment variable during the build (not manually through the UI) from this $key, so that I can refer to it in the next job of the workflow?
Update:
I have used the below, but I am getting a parsing error:
name: "Create Context"
command:
  curl --request POST \
    --url https://circleci.com/api/v2/context \
    --header 'authorization: Basic xxxxxxxxxxxxxxxxxxxxxxxxxx' \
    --header 'content-type:' 'application/json' \
    --data '{"name":"string","owner":{"id":"string","type":"organization"}}'
Error:
Unable to parse YAML
mapping values are not allowed here
in 'string', line 23, column 36:
--header 'authorization: xxxxxxxxxxxxxxx ...

You could use the CircleCI API v2: https://circleci.com/docs/api/v2/#operation/addEnvironmentVariableToContext
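For reference, here is a minimal sketch of what that might look like as a step in the first job. It assumes two hypothetical placeholders you would have to provide yourself: CONTEXT_ID (the UUID of the target context) and CIRCLE_API_TOKEN (a personal API token). The addEnvironmentVariableToContext endpoint is a PUT, and a YAML block scalar (command: |) avoids the "mapping values are not allowed here" error, since the colon inside the header line is no longer parsed as a YAML mapping:

- run:
    name: Push storage key into the context
    shell: powershell.exe
    command: |
      # CONTEXT_ID and CIRCLE_API_TOKEN are placeholders, not built-ins
      $key = (Get-AzStorageAccountKey -ResourceGroupName $xxxxxxxxx -AccountName $xxxxxxx)[0].Value
      Invoke-RestMethod -Method Put `
        -Uri "https://circleci.com/api/v2/context/$env:CONTEXT_ID/environment-variable/STORAGE_KEY" `
        -Headers @{ "Circle-Token" = $env:CIRCLE_API_TOKEN } `
        -ContentType "application/json" `
        -Body (@{ value = $key } | ConvertTo-Json)

The next job could then read $env:STORAGE_KEY as a normal environment variable, provided it also declares the same context and runs after this job.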

Related

How to read secrets in a Dockerfile

Below is the GitHub Actions workflow step:
name: Build image
uses: docker/build-push-action@v3.2.0
with:
  build-args:
  secrets: |
    "github_token=${{ inputs.token }}"
    "uname=${{ github.actor }}"
    "Mysecret=SecretValue"
In the Dockerfile:
RUN --mount=type=secret,id=github_token \
    cat /run/secrets/github_token
RUN --mount=type=secret,id=github_actor \
    cat /run/secrets/github_actor
RUN --mount=type=secret,id=github_actor \
    varu=$(cat /run/secrets/github_actor)
RUN --mount=type=secret,id=github_token \
    var=$(cat /run/secrets/github_token)
RUN echo $var
I'm not able to consume the secrets: they print fine, but I'm not able to assign a secret to a variable for use in a later statement.
If I want to read the Mysecret value into a variable in the Dockerfile, how do I do it?
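A side note that may explain the behaviour: each RUN instruction starts its own shell, so a variable assigned in one RUN is gone by the next one. A minimal sketch, assuming the Mysecret id from the workflow above, reads and consumes the secret inside a single RUN:

# Read and use the secret within one RUN instruction;
# shell variables do not survive across RUN boundaries.
RUN --mount=type=secret,id=Mysecret \
    MYVAR=$(cat /run/secrets/Mysecret) && \
    echo "secret is ${#MYVAR} characters long"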

Couchbase with Docker Compose: Unable to insert - DurabilityImpossibleError

I'm trying to set up Couchbase as part of a collection of servers using Docker-Compose. The sole purpose of this is for local application development.
The problem is that, once set up, I'm unable to write to the database. Insert and Upsert operations give me a DurabilityImpossibleError.
Docker Compose file:
version: '3.4'
services:
  ...
  couchbase-db:
    image: couchbase/server
    volumes:
      - ./docker-data/couchbase/node3:/opt/couchbase/var
      - ./provision/couchbase:/opt/startup/
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210
The startup bash script, run after building, is an attempt to perform database setup without requiring a manual step:
#!/bin/bash
# Enables job control
set -m
# Enables error propagation
set -e

# Run the server and send it to the background
/entrypoint.sh couchbase-server &

# Check if couchbase server is up
check_db() {
  curl --silent http://${1}:8091/pools > /dev/null
  echo $?
}

# Variable used in echo
i=1
# Echo with a counter and timestamp
log() {
  echo "[$i] [$(date +"%T")] $@"
  i=`expr $i + 1`
}

# Wait until main server is ready
until [[ $(check_db 127.0.0.1) = 0 ]]; do
  >&2 log "Waiting for Couchbase Server to be available ..."
  sleep 1
done
couchbase-cli cluster-init -c localhost:8091 \
--cluster-username Administrator --cluster-password password \
--cluster-password password --services data,index,query --cluster-ramsize 512 \
--cluster-index-ramsize 256 || true
couchbase-cli setting-cluster -c localhost:8091 -u Administrator -p password \
--cluster-username Administrator --cluster-password password \
--cluster-password password --cluster-ramsize 512 \
--cluster-index-ramsize 256;
couchbase-cli setting-cluster -c localhost:8091 \
-u Administrator -p password --cluster-username Administrator \
--cluster-password password --cluster-ramsize 512 --cluster-index-ramsize 256;
curl -v -X POST http://localhost:8091/pools/default/buckets \
-u Administrator:password \
-d name=organisations \
-d bucketType=couchbase \
-d ramQuotaMB=512 \
-d durabilityMinLevel=majorityAndPersistActive
curl -v -X POST -u Administrator:password \
http://localhost:8091/settings/indexes \
-d indexerThreads=4 \
-d logLevel=verbose \
-d maxRollbackPoints=10 \
-d storageMode=plasma \
-d memorySnapshotInterval=150 \
-d stableSnapshotInterval=40000
# Need to wait until query service is ready to process N1QL queries
echo "$(date +"%T") Waiting ........."
sleep 20
# Create bucket1 indexes
echo "$(date +"%T") Create bucket1 indexes ........."
cbq -u Administrator -p password -s "CREATE PRIMARY INDEX idx_primary ON \`organisations\`;"
cbq -u Administrator -p password -s "CREATE INDEX idx_type ON \`organisations\`(_type);"
If I try to add a document via the web interface, I get:
Errors from server: ["Unexpected server error, request logged."]
If I try to add a document via the JavaScript SDK, I get:
DurabilityImpossibleError durability impossible
details:
{
  name: 'DurabilityImpossibleError',
  cause: LibcouchbaseError: libcouchbase error 308
      at Object.translateCppError (/app/node_modules/couchbase/dist/bindingutilities.js:174:21)
      at /app/node_modules/couchbase/dist/connection.js:245:54 {
    code: 308
  },
  context: KeyValueErrorContext {
    status_code: 0,
    opaque: 0,
    cas: CbCas {
      '0': <Buffer 00 00 00 00 00 00 00 00>
    },
    key: '22738bd4-7972-4370-85a3-71399d96ef05',
    bucket: '',
    collection: '',
    scope: '',
    context: '',
    ref: ''
  }
}
I've also attempted to send the following settings with the insert/upsert, to no effect:
insertOptions: {
  durabilityLevel: 0,
  durabilityPersistTo: 1,
  durabilityReplicateTo: 0,
  timeout: 5000,
},
My most recent attempt at a fix was to build a cluster of 3 nodes within docker-compose, and call the API commands to "add server" as part of a build script. However, "add server" takes a static IP, so the second time I run the servers, the IPs change and the database becomes unresponsive. I do get a functioning database on that first run though.
I'm looking for either a fix for a single node system (or an idea of where I'm going wrong), or a way of getting a cluster working in Docker-Compose after a down/up cycle. Anything that will give me a stable environment to develop in.
Thanks!
The bucket is created with -d durabilityMinLevel=majorityAndPersistActive, and by default the bucket is created with one replica.
On a single-node cluster you will not have enough data nodes to satisfy that durability level (https://docs.couchbase.com/server/current/learn/data/durability.html). You can either disable replicas via the UI and rebalance for the change to take effect, or change the bucket settings to not include a minimum durability level.
I have no idea about the three-node Docker Compose error.
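To make the single-node fix concrete, here is a sketch of the bucket-creation call from the startup script with replicas and the durability minimum dropped (parameter names per the Couchbase REST API; treat the values as dev-only assumptions):

curl -v -X POST http://localhost:8091/pools/default/buckets \
  -u Administrator:password \
  -d name=organisations \
  -d bucketType=couchbase \
  -d ramQuotaMB=512 \
  -d replicaNumber=0 \
  -d durabilityMinLevel=none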

How do I set docker-credential-ecr-login in my PATH before anything else in GitLab CI

I'm using AWS ECR to host a private Docker image, and I would like to use it in GitLab CI.
According to the documentation, I need to set up docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else happens. This is my .gitlab-ci.yml file:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Thank you.
I can confirm the feature at stake is not yet available in GitLab CI; however, I've recently seen that it is possible to implement a generic workaround to run a dedicated CI script within a container taken from a private Docker image.
The template file .gitlab-ci.yml below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI documentation on dind:
stages:
  - test

variables:
  IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
  REGION: "ap-northeast-1"

tests:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none # uncomment if "git clone" is unneeded for this job
  before_script:
    - ': before_script'
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region "$REGION")
    - docker pull "$IMAGE"
  script:
    - ': script'
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
        export PS4='+ \e[33;1m($CI_JOB_NAME # line \$LINENO) \$\e[0m ' # optional
        set -ex
        ## TODO insert your multi-line shell script here ##
        echo \"One comment\" # quotes must be escaped here
        : A better comment
        echo $PWD # interpolated outside the container
        echo \$PWD # interpolated inside the container
        bundle install
        bundle exec rspec
        ## (cont'd) ##
      "
    - ': done'
  allow_failure: true # for now as we do not have tests
This example assumes the Docker $IMAGE contains the /bin/bash binary, and relies on the so-called block style of YAML.
The above template already contains comments, but to be self-contained:
You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
You also need to escape local Bash variables (cf. the \$PWD above); otherwise these variables will be resolved prior to running the docker run … "$IMAGE" /bin/bash -c "…" command itself.
I replaced the echo "stuff" commands with their more efficient colon counterpart:
set -x
: stuff
: note that these three shell commands do nothing
: but printing their args thanks to the -x option.
[Feedback is welcome, as I can't directly test this config (I'm not an AWS ECR user), but I'm puzzled that the OP's example contained both apt and apk commands at the same time…]
Related remark on a pitfall of set -e
Beware that the following script is buggy: if command1 fails, set -e does not abort the script, because a failure on the left-hand side of && is exempt from it, so execution falls through to command3:
set -e
command1 && command2
command3
Instead, write:
set -e
command1 ; command2
command3
or:
set -e
( command1 && command2 )
command3
To be convinced about this, you can try running:
bash -e -c 'false && true; echo $?; echo this should not be run'
→ 1
→ this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
bash -e -c '( false && true ); echo $?; echo this should not be run'
(the last two commands print nothing: thanks to -e, the shell exits as soon as the failing command is reached)
From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings, under Settings > CI/CD > Variables. Then add this to your before_script:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $( aws ecr get-login --no-include-email )
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Also, you had a typo: it is awscli, not awsclir. Then add the build, test and push steps accordingly.
I think you have a logic error here: image in the build configuration is the image the CI script runs in, not the image you build and deploy.
You normally shouldn't need your project's own image there, since the runner image just provides utilities and the connection to GitLab CI; it shouldn't have any dependencies on your project.
Please check examples like this one https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c to get a clearer picture of the correct way to do it.
I faced the same problem using the docker executor mode of GitLab Runner.
SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available inside the job containers, I had to mount this binary into the GitLab Runner container:
gitlab-runner register -n \
  --url '${gitlab_url}' \
  --registration-token '${registration_token}' \
  --template-config /tmp/gitlab_runner.template.toml \
  --executor docker \
  --tag-list '${runner_name}' \
  --description 'gitlab runner for ${runner_name}' \
  --docker-privileged \
  --docker-image "alpine" \
  --docker-disable-cache=true \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache" \
  --docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
  --docker-volumes "/home/gitlab-runner/.docker:/root/.docker"
More information on this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948
We have a similar setup where we need to run CI jobs based on an image hosted on ECR.
Steps to follow:
Follow the guide here: https://github.com/awslabs/amazon-ecr-credential-helper
The gist of the link above, if you are on Amazon Linux 2:
sudo amazon-linux-extras enable docker
sudo yum install amazon-ecr-credential-helper
Open ~/.docker/config.json on your GitLab Runner host in an editor and paste this code into it:
{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}
source ~/.bashrc
systemctl restart docker
Also remove any references to DOCKER_AUTH_CONFIG from your GitLab Settings > CI/CD > Variables.
That's it.

Kubectl: command not found on travis ci

I am trying to deploy to a Kubernetes cluster using Travis CI and I get the following error:
EDIT:
invalid argument "myAcc/imgName:" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
./deploy.sh: line 1: kubectl: command not found
This is my travis config file
travis.yml
sudo: required
services:
  - docker
env:
  global:
    - SHA-$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
before-install:
  - openssl aes-256-cbc -K $encrypted_0c35eebf403c_key -iv $encrypted_0c35eebf403c_iv -in service-account.json.enc -out service-account.json -d
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project robust-chess-234104
  - gcloud config set compute/zone asia-south1-a
  - gcloud container clusters get-credentials standard-cluster-1
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
This is my deploy script
deploy.sh
docker build -t myAcc/imgName:$SHA
docker push myAcc/imgName:$SHA
kubectl apply -k8s
I guess the gcloud components update kubectl command is not working. Any ideas?
Thanks!
The first issue, invalid argument "myAcc/imgName:" for "-t, --tag" flag: invalid reference format, occurs because the variable $SHA is not defined as expected. There is a syntax issue in the variable definition: you should use = instead of - after SHA, so it should be:
- SHA=$(git rev-parse HEAD)
For the second issue, which is related to kubectl, you need to install it using the following command, according to the docs:
gcloud components install kubectl
Update:
After testing this file on Travis CI I was able to figure out the issue: you should use before_install instead of before-install, so in your case the before-install steps never got executed.
# travis.yml
---
env:
  global:
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components install kubectl
script: kubectl version
And the final part of the build result:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7", GitCommit:"65ecaf0671341311ce6aea0edab46ee69f65d59e", GitTreeState:"clean", BuildDate:"2019-01-24T19:32:00Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

CircleCI branch build failing but tag build succeeds

I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0-9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with Couldn't connect to Docker daemon at ... - is it running? when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
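For illustration, an image entry of this shape (names hypothetical) would reproduce the misleading error whenever the variable expands to an invalid reference, such as a tag beginning with a dot:

services:
  api:
    build: .
    # reported as "Couldn't connect to Docker daemon ..." when
    # VERSION_NUM expands to an invalid tag such as ".42"
    image: myAcc/imgName:${VERSION_NUM}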
The problem was occurring in the Save Version Number step: sometimes that version would be .${CIRCLE_BUILD_NUM}, since no tag was passed. Docker rejects tags starting with ., so I added a conditional check for whether CIRCLE_TAG was empty and, if it was, used a default version: v0.1.0-build.
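A sketch of what that check might look like in the Save Version Number step (the fallback string follows the default version mentioned above; the rest mirrors the original step):

- run:
    name: Save Version Number
    command: |
      # On branch builds CIRCLE_TAG is empty; fall back to a default
      # version so the Docker tag never starts with a "."
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi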
