How to set an env variable from GitLab CI/CD dynamically - Docker

I would like to edit the GitLab CI/CD variables through my pipeline script.
Flow:
Submit a merge request
The CI pipeline retrieves the version number (a variable stored in the project's CI/CD settings)
Increment the version by 1 and write it back into the CI/CD settings
git tag the file based on the version number
The file I want to git tag is a bash script, and I am using a Linux Docker image. Please advise.
Current script:
variables:
  PROFILE_NAME: default
default:
  image: docker-image
stages:
  - tagging
Tag:
  stage: tagging
  script:
    - yum install git -y
    - git --version
    - git remote set-url --push origin ${CI_SERVER_PROTOCOL}://${GITLAB_PERSONAL_ACCESS_TOKEN_NAME}:${GITLAB_PERSONAL_ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git
    - export "VERSION=$(($VERSION + 1))" > $INC_VERSION
    - echo 'after version:' $VERSION
    - echo 'after increment version:' $INC_VERSION
    - git push origin --tags
  only:
    refs:
      - merge_requests
    variables:
      - ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^dev/ || $CI_COMMIT_BRANCH =~ /^dev/ || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^release-/ || $CI_COMMIT_BRANCH =~ /^release-/)
  except:
    variables:
      - ($CI_COMMIT_BEFORE_SHA == '0000000000000000000000000000000000000000' && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME !~ /^./)
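For the "write it back into the CI/CD settings" step, one common approach is to call the GitLab CI/CD variables API from the job. Below is a minimal sketch, assuming GITLAB_PERSONAL_ACCESS_TOKEN has the api scope (an assumption, not shown in the config above); the VERSION variable name is from the question, and CI_API_V4_URL and CI_PROJECT_ID are predefined GitLab variables:

  script:
    - NEW_VERSION=$((VERSION + 1))
    # Update the VERSION project variable via the GitLab CI/CD variables API
    # (assumes the token has "api" scope)
    - 'curl --request PUT --header "PRIVATE-TOKEN: ${GITLAB_PERSONAL_ACCESS_TOKEN}" --form "value=${NEW_VERSION}" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VERSION"'
    # Tag the commit with the new version and push the tag
    - git tag "v${NEW_VERSION}"
    - git push origin "v${NEW_VERSION}"

Note that a plain export only changes the variable for the current job; writing the value back via the API is what makes it visible to later pipelines.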

Related

Bitbucket pipelines: How to find the directories/paths to cache apt-get installed packages?

I need help caching packages in my Bitbucket pipeline that were installed via apt-get.
For packages installed by other means, you can usually find the install paths online. However, I'm not sure which directories to cache for apt-get installed packages.
For example, I have the following command in my pipeline script:
apt-get update && apt-get install -y curl unzip git
I defined a cache directory in definitions like so:
caches:
  apt-cache: /var/cache/apt
However, it's only caching 164 bytes, and I don't think it's caching all of the packages that are actually installed.
Is there a way to find where these packages are installed so I can cache them?
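(As a general way to see where a package's files actually land, assuming a Debian-based image like the php:8.2-fpm one below, dpkg -L lists every file a package installed; this suggestion is not part of the original pipeline:)

  # List every file installed by the git package on Debian/Ubuntu-based images
  dpkg -L git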
Here is my full pipeline script below:
image: php:8.2-fpm

definitions:
  # set the paths for where the packages are installed that we are caching
  # these paths are used to download the packages from the cache to speed up deploys
  caches:
    install-php-extensions: /usr/local/bin/
    phpunit: web-app/vendor/bin/
    composer: /usr/local/bin
    # directory where apt package cache is
    apt-cache: /var/cache/apt
    php-extensions: /usr/lib/php/
    sonar: ~/.sonar
  steps:
    - step: &testing
        name: Test
        caches:
          - install-php-extensions
          - phpunit
          - composer
          - apt-cache
          - php-extensions
        services:
          - docker
        script:
          # Install apt packages
          - apt-get update && apt-get install -y curl unzip git
          # xdebug is needed to run the code coverage later on and to generate the code coverage report
          - pecl install xdebug-3.2.0 && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini
          # Install php extensions, set permissions to execute, required for snowflake, pdo, etc
          # The PDO installation is required later so the composer install doesn't fail with an undefined constant
          - curl -sSLf -o /usr/local/bin/install-php-extensions https://github.com/mlocati/docker-php-extension-installer/releases/download/1.5.49/install-php-extensions
          - chmod +x /usr/local/bin/install-php-extensions
          - install-php-extensions bcmath odbc pdo_odbc soap
          # Install phpunit dependencies and run the phpunit tests with code coverage
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - cd web-app
          - composer require phpunit/phpunit --dev
          - XDEBUG_MODE=coverage vendor/bin/phpunit --testdox -d memory_limit=-1 --log-junit test-results/test-execution-results.xml --cache-result --coverage-cache=./coverage/cache --coverage-clover=phpunit-coverage.xml tests/Unit
        artifacts:
          - test-results/test-execution-results.xml
    - step: &sonarqube
        name: Sonarqube coverage report
        caches:
          - sonar
        script:
          - cd web-app
          - pipe: sonarsource/sonarqube-scan:1.0.0
            variables:
              SONAR_HOST_URL: ${SONAR_HOST_URL} # Get the value from the repository/workspace variable.
              SONAR_TOKEN: ${SONAR_TOKEN}
              DEBUG: "true"
    - step: &deploy
        name: Deploy
        caches:
          - docker
          - apt-cache
        services:
          - docker
        script:
          # Install apt packages
          - apt-get update && apt-get install -y unzip awscli
          - TAG=${BITBUCKET_COMMIT}
          # Set aws credentials
          - aws configure set aws_access_key_id "${AWS_ACCESS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET_KEY}"
          - aws configure set region "${AWS_REGION}"
          # Get credentials for laravel from secrets manager and
          # write to .env file
          - aws secretsmanager get-secret-value --secret-id ${ENV_SECRET_ID} --query SecretString --output text >> .env
          # Write odbc snowflake definition for connecting to database
          - aws secretsmanager get-secret-value --secret-id ${SNOWFLAKE_SECRET_ID} --query SecretString --output text > ./docker/php/snowflake/odbc.ini
          # Authenticate bitbucket-deployment user
          - aws ecr get-login-password --region us-west-2 | docker login -u AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com
          # Build/deploy nginx image
          - NGINX_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/nginx"
          - docker build -f Dockerfile-nginx -t $NGINX_IMAGE .
          # Push the :latest image
          - docker push $NGINX_IMAGE:latest
          # Tag and push the image with the bitbucket commit
          - docker tag $NGINX_IMAGE $NGINX_IMAGE:${BITBUCKET_COMMIT}
          - docker push $NGINX_IMAGE:${BITBUCKET_COMMIT}
          # Build/deploy php image
          - PHP_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/php-app"
          - docker build -f Dockerfile-php -t $PHP_IMAGE .
          # Push the :latest image
          - docker push $PHP_IMAGE:latest
          # Tag and push the image with the bitbucket commit
          - docker tag $PHP_IMAGE $PHP_IMAGE:${BITBUCKET_COMMIT}
          - docker push $PHP_IMAGE:${BITBUCKET_COMMIT}
          # Start ecs migration task
          - aws ecs run-task --cluster nova-api-cluster --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=['${PUBLIC_SUBNET_A}','${PUBLIC_SUBNET_B}'],securityGroups=['${SECURITY_GROUP}'],assignPublicIp=ENABLED}" --task-definition nova-api-migration-task
          # Force new ecs task deployment
          - aws ecs update-service --cluster ${CLUSTER_NAME} --service ${SERVICE_NAME} --region ${AWS_REGION} --force-new-deployment
    - step: &auto_merge_down
        name: Auto Merge Down
        image: atlassian/default-image:3
        script:
          - ./autoMerge.sh stage || true
          - ./autoMerge.sh dev || true

pipelines:
  branches:
    dev:
      - step:
          <<: *testing
      - step:
          <<: *deploy
          deployment: Dev
    stage:
      - step:
          <<: *testing
      - step:
          <<: *deploy
          deployment: Staging
    prod:
      - step:
          <<: *testing
      - step:
          <<: *sonarqube
      - step:
          <<: *deploy
          deployment: Production
      - step:
          <<: *auto_merge_down
Found another answer on the Atlassian community here: https://community.atlassian.com/t5/Bitbucket-questions/Any-way-to-cache-apt-get-install-y-zip-in-bitbucket-pipelines/qaq-p/622876, thanks @Chase Han.
Basically, you run the following command in your pipeline script, or in a local Docker container that matches the image used in the pipeline:
which <package-name-here>
e.g.
which git
It will then output the path where that package lives,
e.g.
/usr/bin/git
Then you just need to include that path in a cache definition.
e.g.
caches:
  # /usr/bin-located packages like git, curl, etc.
  usr-bin: /usr/bin
And then you can use that cache definition in your steps.
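For example, a step could then reference it like any other cache (a minimal sketch reusing the usr-bin definition above):

pipelines:
  default:
    - step:
        name: Test
        caches:
          - usr-bin
        script:
          - apt-get update && apt-get install -y curl unzip git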

Deploy phase of a stage not firing

There has to be something I’m missing, but I just can’t see it. I have a staged build. The deploy stage is firing as expected, as are all of its phases, but not the deploy phase. Any idea why?
stages:
  - name: build
  - name: publish
    if: (type == push && branch == rob-release-and-deploy) || tag IS present
  - name: deploy
    if: (type == push && branch == rob-release-and-deploy) || tag IS present
  - name: clean

# ... Other bits until we hit the deploy stage of jobs: include: ...

- stage: deploy
  name: "Deploy to dev|aut|stg"
  install:
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - mv ./kubectl ${HOME}/.local/bin
  script:
    - echo "Placeholder?"
  before_deploy:
    - aws ecr get-login-password --region "${AWS_REGION}" | docker login --username AWS --password-stdin "${AWS_ECR_REGISTRY_URL}/tmp"
  deploy:
    - provider: script
      script: "bash ./bin/deploy dev"
      skip_cleanup: true
      on:
        branch: rob-release-and-deploy
    - provider: script
      script: "bash ./bin/deploy aut"
      skip_cleanup: true
      on:
        condition: tag IS present && (tag =~ /^\d{8}\.rc\d+$/)
I’m committing code to the rob-release-and-deploy branch (a PR open on that branch). There’s no indication that the deploy: phase is being recognized at all. It’s not being skipped with the message I might normally see if I were pushing to a different branch or something…it’s simply not doing anything at all.
Here's the end of the build log:
$ echo "Placeholder?"
Placeholder?
The command "echo "Placeholder?"" exited with 0.

travis_run_after_success: command not found
travis_run_after_failure: command not found
travis_run_after_script: command not found
travis_run_finish: command not found

Done. Your build exited with 0.
What can I try next?
Solved. In my second deploy provider, I was missing tags: true:
- provider: script
  script: "bash ./bin/deploy aut"
  skip_cleanup: true
  on:
    tags: true
    condition: tag =~ /^\d{8}\.rc\d+$/
I knew it would be something dumb, but I thought I saw an example in the docs that deployed just using condition:. Alas. ¯\_(ツ)_/¯

Bitbucket Pipeline git fetch with public key fails

With the help of the article below, I've set up SSH keys for Bitbucket so I can use them in Pipelines:
https://support.atlassian.com/bitbucket-cloud/docs/set-up-an-ssh-key/
When tested in a terminal window with the following command, it works fine:
$ ssh -T git@bitbucket.org
but when I run my pipeline, it fails.
I added the public key under my Bitbucket profile.
My pipeline:
image:
  name: abhisheksaxena7/salesforcedockerimg

pipelines:
  branches:
    feature/**:
      - step:
          script:
            - ant -buildfile build/build.xml deployEmptyCheckOnly -Dsfdc.username=$SFDC_USERNAME -Dsfdc.password=$SFDC_PASS$SFDC_TOKEN -Dsfdc.serverurl=https://$SFDC_SERVERURL
    # master:
    #   - step:
    #       script:
    #         - ant -buildfile build/build.xml deployCode -Dsfdc.username=$SFDC_USERNAME -Dsfdc.password=$SFDC_PASS$SFDC_TOKEN -Dsfdc.serverurl=https://$SFDC_SERVERURL
    Admin-Changes:
      - step:
          script:
            - echo my_known_hosts
            # Set up SSH key; follow instructions at https://confluence.atlassian.com/display/BITBUCKET/Set+up+SSH+for+Bitbucket+Pipelines
            - (mkdir -p ~/.ssh ; cat my_known_hosts >> ~/.ssh/known_hosts; umask 077 ; echo $SSH_KEY | base64 --decode -i > ~/.ssh/id_rsa)
            # Read update_to_trigger_pipelines.txt into commitmsg variable
            - commitmsg="$(<update_to_trigger_pipelines.txt)"
            # Set up repo and checkout master
            - echo git@bitbucket.org:$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG.git
            - git remote set-url origin git@bitbucket.org:$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG.git
            - git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/*
            - git fetch
            - git checkout master
            # Get metadata from server
            - ant -buildfile build/build.xml getCode -Dsfdc.username=$SFDC_USERNAME -Dsfdc.password=$SFDC_PASS$SFDC_TOKEN -Dsfdc.serverurl=https://$SFDC_SERVERURL
            # Commit any changes to master
            - git add force-app/main/default/*
            - git config user.name "$GIT_USERNAME"
            - git config user.email "$GIT_EMAIL"
            - if [[ -n $(git status -s) ]] ; then filelist=`git status -s` ; git commit -a -m "$commitmsg" -m "$filelist" ; git push origin master:master ; else echo "No changes detected"; fi
I was adding my local server's SSH key to my profile instead of the repository SSH key, so I had to get the repository Pipelines SSH key and add that to my profile.

Travis CI variables not accessible from .travis.yml

I am trying to deploy the build to GitHub Pages from Travis CI, but I am not able to access the variables from within the git commands, even though the variables are accessible when I simply echo them.
jobs:
  include:
    - stage: "lint"
      name: "Check for code smell"
      script: yarn lint
    - stage: "deploy"
      name: "Deploy to GH Pages"
      script:
        - git config --global user.name ${Name}
        - git config --global user.email ${Email}
        - git remote rm origin
        - git remote add origin https://linux-nerd:${GITHUB_TOKEN}@${GH_REF}
        - yarn run deploy
        - echo ${Email}
        - echo $Email
        - echo https://linux-nerd:${GITHUB_TOKEN}@${GH_REF}
The last three echoes print correctly, but the git commands do not pick up the correct values.
What am I missing?
This is how I configure the git user and email:
git config --global user.name "username"
git config --global user.email "email"
Note the use of quotes ("") above; that's what you are missing in your file.
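Applied to the job above, the deploy script would look something like this (a sketch based on the original config, with the expansions quoted):

- stage: "deploy"
  name: "Deploy to GH Pages"
  script:
    - git config --global user.name "${Name}"
    - git config --global user.email "${Email}"
    - git remote rm origin
    - git remote add origin "https://linux-nerd:${GITHUB_TOKEN}@${GH_REF}"
    - yarn run deploy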

CircleCI branch build failing but tag build succeeds

I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0-9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with "Couldn't connect to Docker daemon at ... - is it running?" when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
The problem was occurring in the Save Version Number step. When no tag was passed, the version would be .${CIRCLE_BUILD_NUM}, and Docker rejects tags starting with ., so I added a conditional check: if CIRCLE_TAG is empty, fall back to a default version, v0.1.0-build.
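That check might look something like the following (a sketch; the v0.1.0-build default comes from the answer above, the exact layout is illustrative):

    - run:
        name: Save Version Number
        command: |
          # Fall back to a default version when the build is not triggered by a git tag,
          # since Docker rejects image tags that begin with "."
          if [ -z "${CIRCLE_TAG}" ]; then
            echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
          else
            echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
          fi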
