How to use the same environment variable in multiple steps in Bitbucket Pipelines

The script below gives this error:
The deployment environment 'staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline. Please refer to our documentation for valid environments and their ordering.
image: python:3.8
options:
  docker: true
pipelines:
  branches:
    master:
      - step:
          deployment: staging
          name: Setup stage
          script:
            - echo ${db_name}
      - step:
          deployment: staging
          name: Setup cli prod
          script:
            - echo ${db_name}
      - step:
          deployment: staging
          name: Setup cli sandbox
          script:
            - echo ${db_name}
I want to use the same deployment environment ('staging') and its variables in all steps of my pipeline. Please guide me on how to do this.

This can't be done, because steps marked as deployments must each use a unique environment.
Deployment variables are those needed to deploy your code to a particular deployment stage. If you are setting up administrative tasks as pipelines, those are NOT deployments (a sketch for that case follows the example below).
If those are actual deployments, abide by the error message you are getting and make sure each declared deployment stage is unique:
image: python:3.8
options:
  docker: true
definitions:
  yaml-anchors:
    - &deploy-step
      script:
        - echo ${db_name}
pipelines:
  branches:
    master:
      - step:
          <<: *deploy-step
          deployment: staging
          name: Deploy staging
      - step:
          <<: *deploy-step
          deployment: production
          name: Deploy prod
      - step:
          <<: *deploy-step
          deployment: sandbox
          name: Deploy sandbox
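If these steps really are administrative tasks rather than deployments, a simpler option is to drop the deployment keyword entirely and define db_name as a repository-level variable (Repository settings > Pipelines > Repository variables), which is visible to every step. A minimal sketch, assuming db_name has been defined there:

image: python:3.8
pipelines:
  branches:
    master:
      - step:
          name: Setup stage
          script:
            # db_name resolves from repository variables, so no deployment environment is needed
            - echo ${db_name}
      - step:
          name: Setup cli prod
          script:
            - echo ${db_name}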

Related

Is it possible to execute folder-specific and commit-specific steps with a bitbucket-pipelines.yml file in Bitbucket?

I have one common project including both a frontend and a backend. Sometimes the backend and sometimes the frontend gets new commits, but my pipeline YAML runs for both of them and deploys both to the server even when one of them has no changes. In other words, if I add one line of code to the frontend, the pipeline deploys the backend too. Here is my bitbucket-pipelines.yml:
# This is an example Starter pipeline configuration
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
In this example the frontend is not deployed yet, but I will activate it. What I need is to execute a step based on which folder/project received the commit, e.g. if there is a commit under mybackend, then only deploy the backend, and likewise for the frontend. Is it possible to execute a step for a specific folder?
Yes, this is achievable by using the condition keyword:
This allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.
Your end result should look similar to this:
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
          condition:
            changesets:
              includePaths:
                - "myfrontend/**"
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
See the Bitbucket Pipelines documentation on step conditions for more details.

Error integrating Bitbucket Pipelines and SonarCloud

ALM used: Bitbucket Cloud
CI system used: Bitbucket Cloud
Languages of the repository: Angular (Other (for JS, TS, Go, Python, PHP, …))
Error observed
ERROR: Error during SonarScanner execution
ERROR: Not authorized. Please check the property sonar.login or SONAR_TOKEN env variable
Steps to reproduce
SONAR_TOKEN is already generated and added to my environment variables.
My bitbucket-pipelines.yml:
image: 'node:12.22'
clone:
  depth: full  # SonarCloud scanner needs the full history to assign issues properly
definitions:
  caches:
    sonar: ~/.sonar/cache  # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - sonar
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.1
            variables:
              EXTRA_ARGS: '-Dsonar.host.url=https://sonarcloud.io -Dsonar.login=${SONAR_TOKEN}'
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
pipelines:
  branches
Potential workaround
No idea.
If you have already installed the SonarCloud app in your workspace, there is no need to pass the Sonar URL again; the integration handles the URL part. You should also add your Sonar token variable to the workspace or repository environment. After that, log in to your SonarCloud organization account and bind your repository to SonarCloud so it can be analyzed. Here is my SonarCloud setup.
bitbucket-pipelines.yml file:
image:
  name: <base image>
clone:
  # SonarCloud scanner needs the full history to assign issues properly
  depth: full
definitions:
  caches:
    # Caching SonarCloud artifacts will speed up your build
    sonar: ~/.sonar/cache
pipelines:
  pull-requests:
    '**':
      - step:
          name: "Code Quality and Security on PR"
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.1
              variables:
                SONAR_TOKEN: '$SONAR_CLOUD_TOKEN'
                SONAR_SCANNER_OPTS: -Xmx512m
                DEBUG: "true"
  branches:
    master:
      - step:
          name: "Code Quality and Security on master"
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.1
              variables:
                SONAR_TOKEN: '$SONAR_CLOUD_TOKEN'
                SONAR_SCANNER_OPTS: -Xmx512m
                DEBUG: "true"
  tags:
    '*.*.*-beta*':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - docker
          clone:
            depth: 1
          script:
            - <build script>
      - step:
          name: "Deploy"
          deployment: beta
          clone:
            enabled: false
          script:
            - <deploy script>
    '*.*.*-prod':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - docker
          clone:
            depth: 1
          script:
            - <build script>
      - step:
          name: "Deploy"
          deployment: prod
          clone:
            enabled: false
          script:
            - <deploy script>
sonar-project.properties file:
sonar.organization=<sonar cloud organization name>
sonar.projectKey=<project key>
sonar.projectName=<project name>
sonar.sources=<sonar evaluation path>
sonar.language=<repo language>
sonar.sourceEncoding=UTF-8

Gitlab pipeline fails, even though deployment happened on GCP

I just created my first CI/CD pipeline on GitLab. It builds a Docker container for a Next.js app and deploys it on Google Cloud Run.
My cloudbuild.yaml:
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/inook-web', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/inook-web']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'inook-web', '--image', 'gcr.io/$PROJECT_ID/inook-web', '--region', 'europe-west1', '--platform', 'managed', '--allow-unauthenticated']
My .gitlab-ci.yml:
# File: .gitlab-ci.yml
image: docker:latest
stages:  # List of stages for jobs, and their order of execution
  - deploy-test
  - deploy-prod
deploy-test:
  stage: deploy-test
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json  # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
I get the following error message in the CI/CD pipeline:
https://ibb.co/ZXLWrj1
However, the deployment actually succeeds on GCP: https://ibb.co/ZJjtXzG
Any idea what I can do to fix the pipeline error?
What worked for me was to add a custom bucket for gcloud builds submit to push logs to. Thanks @slauth for pointing me in the right direction.
Updated command:
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://inook_test_logs
If you add a bucket at the end of the command, then it works.
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://my_bucket_name_on_gcp
Remember to create a bucket on GCP :D
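The bucket must exist before the build runs. A minimal sketch of creating it with the Cloud SDK's gsutil (the bucket name and region are placeholders; pick your own globally unique name):

# one-time setup: create the bucket that gcloud builds submit will write logs to
gsutil mb -l europe-west1 gs://my_bucket_name_on_gcp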

gitlab-ci.yml deployment on multiple hosts

I need to deploy my application on multiple servers.
I host my source code on GitLab and use GitLab CI.
I have set up environment variables and a .gitlab-ci.yml file.
It works great for a single server: I can build Docker images and push them to a registry.
Then I deploy these images on a Kubernetes infrastructure.
All operations are described in .gitlab-ci.yml.
What I need is to "repeat" the .gitlab-ci.yml steps for each server.
I need a different set of environment variables for each server (I will need one Docker image per server, for each upgrade of my application).
Is there a way to do this with GitLab CI?
Thanks
** EDIT **
Here is my .gitlab-ci.yml:
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - docker image build -t my_ci_registry_url/myimagename .
    - docker login -u "${CI_REGISTRY_USER}" -p "${CI_REGISTRY_PASSWORD}" "${CI_REGISTRY}"
    - docker push my_ci_registry_url/myimagename
deploy:
  stage: deploy
  environment: production
  script:
    - kubectl delete --ignore-not-found=true secret mysecret
    - kubectl create secret docker-registry mysecret --docker-server=$CI_REGISTRY --docker-username=$CI_REGISTRY_USER --docker-password=$CI_REGISTRY_PASSWORD
    - kubectl apply -f myapp.yml
    - kubectl rollout restart deployment/myapp-deployment
In order to run the same job with different environment variables, you can use YAML anchors.
For example:
stages:
  - build
  - deploy
.deploy: &deploy
  stage: deploy
  environment: production
  script:
    - some use of $SPECIAL_ENV        # from `variables` defined in each job
    - some use of $OTHER_SPECIAL_ENV  # from `variables` defined in each job
build:
  stage: build
  script:
    - ...
deploy env 1:
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_1              # from `CI/CD > Variables`
    OTHER_SPECIAL_ENV: $OTHER_SPECIAL_ENV_1  # from `CI/CD > Variables`
  <<: *deploy
deploy env 2:
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_2              # from `CI/CD > Variables`
    OTHER_SPECIAL_ENV: $OTHER_SPECIAL_ENV_2  # from `CI/CD > Variables`
  <<: *deploy
deploy env 3:
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_3              # from `CI/CD > Variables`
    OTHER_SPECIAL_ENV: $OTHER_SPECIAL_ENV_3  # from `CI/CD > Variables`
  <<: *deploy
That way, the three deploy jobs will run in parallel in the deploy stage.
You can save the variables under Settings > CI/CD > Variables if they contain sensitive data. If not, just write them in your .gitlab-ci.yml.
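If you prefer to avoid anchor syntax, GitLab's extends keyword achieves the same reuse of a hidden job template. A minimal sketch of the same layout, using the job and variable names from the example above:

.deploy:
  stage: deploy
  environment: production
  script:
    - some use of $SPECIAL_ENV
deploy env 1:
  extends: .deploy
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_1
deploy env 2:
  extends: .deploy
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_2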

Bitbucket Pipelines share SOME steps between branches

Is it possible to share steps between branches and still run branch-specific steps? For example, the develop and release branches have the same build process but upload to separate S3 buckets.
pipelines:
  default:
    - step:
        script:
          - cd source
          - npm install
          - npm build
  develop:
    - step:
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://develop
  staging:
    - step:
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://staging
I saw this post (Bitbucket Pipelines - multiple branches with same steps) but it's for the same steps.
Use YAML anchors:
definitions:
  steps:
    - step: &Test-step
        name: Run tests
        script:
          - npm install
          - npm run test
    - step: &Deploy-step
        name: Deploy to staging
        deployment: staging
        script:
          - npm install
          - npm run build
          - fab deploy
pipelines:
  default:
    - step: *Test-step
    - step: *Deploy-step
  branches:
    master:
      - step: *Test-step
      - step:
          <<: *Deploy-step
          name: Deploy to production
          deployment: production
          trigger: manual
Docs: https://confluence.atlassian.com/bitbucket/yaml-anchors-960154027.html
Although it's not officially supported yet, you can pre-define steps by using YAML anchors. I got this tip from Bitbucket staff when I had an issue running the same steps across a subset of branches.
definitions:
  step: &Build
    name: Build
    script:
      - npm install
      - npm build
pipelines:
  default:
    - step: *Build
  branches:
    master:
      - step: *Build
      - step:
          name: deploy
          # do some deploy from master only
I think Bitbucket can't do it. You can use one pipeline and check the branch name:
pipelines:
  default:
    - step:
        script:
          - cd source
          - npm install
          - npm build
          - if [[ $BITBUCKET_BRANCH = develop ]]; then s3cmd put --config s3cmd.cfg ./build s3://develop; fi
          - if [[ $BITBUCKET_BRANCH = staging ]]; then s3cmd put --config s3cmd.cfg ./build s3://staging; fi
The last two commands check $BITBUCKET_BRANCH, so each upload runs only on its matching branch.
You can define and re-use steps with YAML anchors:
- anchor (&) to define a chunk of configuration
- alias (*) to refer to that chunk elsewhere
The source branch is saved in a default variable called BITBUCKET_BRANCH. You'd also need to pass the build results (in this case the build/ folder) from one step to the next, which is done with artifacts.
Combining all three will give you the following config:
definitions:
  steps:
    - step: &build
        name: Build
        script:
          - cd source
          - npm install
          - npm build
        artifacts: # defining the artifacts to be passed to each future step
          - ./build
    - step: &s3-transfer
        name: Transfer to S3
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://${BITBUCKET_BRANCH}
pipelines:
  default:
    - step: *build
  branches:
    develop:
      - step: *build
      - step: *s3-transfer
    staging:
      - step: *build
      - step: *s3-transfer
You can now also use glob patterns, as mentioned in the referenced post, to cover the develop and staging branches in one go:
  branches:
    "{develop,staging}":
      - step: *build
      - step: *s3-transfer
Apparently it's in the works. Hopefully available soon.
https://bitbucket.org/site/master/issues/12750/allow-multiple-steps?_ga=2.262592203.639241276.1502122373-95544429.1500927287
