I have followed the guide described in Conditional steps in jobs and conditional workflows and written the below code for my CircleCI pipeline.
version: 2.1
workflows:
  version: 2.1
  workflowone:
    when:
      condition: false
    jobs:
      - samplejob
  workflowtwo:
    when:
      condition: true
    jobs:
      - jobone
jobs:
  samplejob:
    docker:
      - image: buildpack-deps:stable
    steps:
      - run:
          name: Sample Job in WF 1
          command: |
            echo "This job is in workflowone and the workflow should not run"
  jobone:
    docker:
      - image: buildpack-deps:stable
    steps:
      - run:
          name: Sample Job in WF 2
          command: |
            echo "This job is in workflowtwo and the workflow should run"
When I run the above code, the output is not what I expect: the first workflow should not run because its condition is false, yet both workflows start running when the pipeline is triggered. Can anyone point out the missing piece here?
According to the CircleCI docs, the workflow level (specifically) does not accept the condition key:
Note: When using logic statements at the workflow level, do not
include the condition: key (the condition key is only needed for job
level logic statements).
See logic-statement-examples (scroll to the bottom of that section to see the note).
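For example, here is a minimal sketch of the same two workflows with the condition: key dropped, gated instead on a hypothetical run-first pipeline parameter (the parameter name is illustrative, not from the original post):
version: 2.1
parameters:
  run-first:
    type: boolean
    default: false
workflows:
  workflowone:
    when: << pipeline.parameters.run-first >> # logic statement goes directly under `when`, no `condition:` key
    jobs:
      - samplejob
  workflowtwo:
    unless: << pipeline.parameters.run-first >>
    jobs:
      - jobone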
I'm trying to run some steps in a CircleCI pipeline conditionally, based on something that happened in a previous step. I tried a lot of tricks, such as exposing a value from step 1 through global variables and picking it up in step 2; I can see and print the variable in step 2, but the when block always evaluates as empty. From my searching I learned that logic conditions are evaluated before the jobs run. Is there an alternative way to execute steps in a second job when a condition occurred in step 1?
Here is the example I'm trying to fix:
version: 2.1
orbs:
workflows:
  test-and-deploy:
    jobs:
      - set-data:
          context: my-context
      - read-data:
          context: my-context
          requires:
            - set-data
definitions:
  node_image: &node-image
    docker:
      - image: cimg/node:14.15.5
executors:
  base-12-14-0:
    description: |
      Single Docker container with Node 12.14.0 and Cypress dependencies
      see https://github.com/cypress-io/cypress-docker-images/tree/master/base.
      Use example: `executor: cypress/base-12-14-0`.
    docker:
      - image: cypress/base:12.14.0
jobs:
  set-data:
    <<: *node-image
    description: Sets the data
    steps:
      - run: echo "VAR=app" > global-vars
      - persist_to_workspace:
          root: .
          paths:
            - global-vars
  read-data:
    <<: *node-image
    description: read the data
    steps:
      - attach_workspace:
          at: .
      - run: ls
      - run: cat global-vars # I can see the correct VAR inside global-vars here
      - run: cat global-vars >> $BASH_ENV
      - run: echo "Test $VAR" # successfully printed
      - when:
          condition:
            matches: { pattern: "app", value: $VAR }
          steps:
            - run: echo "Condition Executed"
It's not possible to use environment variables in logic statements. The reason is that logic statements are evaluated at configuration compilation time, whereas environment variables are interpolated at run time.
The only workaround I know of is to use the CircleCI dynamic configuration functionality to set pipeline parameters' values in the "setup workflow" that you then pass to the "continuation" workflow.
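For illustration, a minimal sketch of that split, assuming the circleci/continuation orb (the job name, parameter name, and continuation file path are hypothetical; check the orb registry for the exact interface):
# .circleci/config.yml - the setup workflow
version: 2.1
setup: true
orbs:
  continuation: circleci/continuation@0.3.1
jobs:
  decide:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - continuation/continue:
          configuration_path: .circleci/continue-config.yml # hypothetical continuation config
          parameters: '{ "deploy-app": true }' # computed at run time, consumed as pipeline parameters
workflows:
  setup:
    jobs:
      - decide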
And by the way, you're not using $BASH_ENV correctly (https://circleci.com/docs/env-vars#setting-an-environment-variable-in-a-shell-command). But again, even if you did, you wouldn't be able to use an environment variable in a logic statement.
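For reference, the documented $BASH_ENV pattern appends export statements rather than the raw file contents, e.g.:
- run: echo 'export VAR=app' >> "$BASH_ENV" # each appended line must be a valid shell statement
- run: echo "Test $VAR" # interpolated at run time in later steps, but still unusable in logic statements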
parameters:
  - name: App_VariableGroup
    type: string
    default: my-defaults
    values:
      - my-defaults
trigger:
  - main
pool:
  vmImage: ubuntu-latest
container: ubuntu:20.04
variables:
  - group: ${{ parameters.App_VariableGroup }}
steps:
  - checkout: self
    submodules: true
  - script: |
      echo Hello, world! \n
      ls -al
    displayName: 'Run a one-line script'
  - task: AzureStaticWebApp@0
    inputs:
      app_location: $(publish_path)
      api_location: ''
      output_location: ''
      skip_app_build: true
      azure_static_web_apps_api_token: $(swa_deployment_token)
This code fails with container: ubuntu:20.04 and gives the following error:
##[warning]Environment variable AGENT_CONTAINERMAPPING is a multiline string and cannot be added to the build environment.
/usr/bin/bash /__w/_tasks/AzureStaticWebApp_18aad896-e191-4720-88d6-8ced4806941a/0.200.0/launch-docker.sh
/__w/_tasks/AzureStaticWebApp_18aad896-e191-4720-88d6-8ced4806941a/0.200.0/launch-docker.sh: line 1: docker: command not found
##[error]Error: The process '/usr/bin/bash' failed with exit code 127
Finishing: AzureStaticWebApp
But the AzureStaticWebApp@0 task works fine with just the vmImage and no container.
I remember the docker-in-docker (docker:dind) concept I used in GitLab CI/CD, but could anyone advise on what is going wrong here?
One of the problems is that you need docker installed in your container. You can use this guide for how to do that.
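A quick hypothetical diagnostic step reproduces the exit-127 failure until docker is present in the image:
- script: docker --version # exits 127 ("command not found") while the container image lacks docker
  displayName: 'Verify docker is available inside the container'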
However, there is another issue which seems to be a bug in the task itself causing it to fail, possibly related to not being able to load the AGENT_CONTAINERMAPPING environment variable. I ran into this bug myself with an Ubuntu container loaded with docker and other tools specific to my pipeline.
Please reference this bug I submitted to the Microsoft/azure-pipelines-task project for more details and to include your voice.
Recently I made some configuration changes to my team's GitHub CircleCI setup. I needed to use a when statement to divide the CI logic. I referenced this document (https://circleci.com/docs/2.0/configuration-reference/#logic-statements), but the document seems incorrect.
Below is my step definition:
...
image_build_step:
  executor: golang_executor
  steps:
    - checkout
    - setup_remote_docker:
        version: 18.09.3
        docker_layer_caching: true
    - define_svc_name:
        jobname: ${CIRCLE_JOB} # On this step set $SVC variable
    - when:
        conditon:
          equal: [ "${SVC}", "SVC_A" ]
        - aws-ecr/build-and-push-image:
            repo: SVC_A_REPO
            dockerfile: ./Dockerfile
            tag: "latest,${CIRCLE_SHA1},build-${CIRCLE_BUILD_NUM}"
...
I also already tried this:
...
image_build_step:
  executor: golang_executor
  steps:
    - checkout
    - setup_remote_docker:
        version: 18.09.3
        docker_layer_caching: true
    - define_svc_name:
        jobname: ${CIRCLE_JOB} # On this step set $SVC variable
    - when:
        equal: [ "${SVC}", "SVC_A" ]
        - aws-ecr/build-and-push-image:
            repo: SVC_A_REPO
            dockerfile: ./Dockerfile
            tag: "latest,${CIRCLE_SHA1},build-${CIRCLE_BUILD_NUM}"
...
I cannot figure out my mistake in using the when statement on CircleCI. Additionally, the config already passed circleci config validate .circleci/config.yaml before I pushed this commit.
What is the correct usage of the when statement in CircleCI? Joining the CircleCI forum with a GitHub account is annoying, so I'm leaving my question on Stack Overflow.
It's not possible to use environment variables in logic statements. The reason is that logic statements are evaluated at configuration compilation time, whereas environment variables are interpolated at run time.
The only workaround I know of is to use the CircleCI dynamic configuration functionality to set pipeline parameters' values in the "setup workflow" that you then pass to the "continuation" workflow.
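As a sketch of what that looks like here, a hypothetical svc pipeline parameter (set by the setup workflow) replaces the $SVC environment variable, and the job-level when keeps its condition: and nested steps: keys:
version: 2.1
parameters:
  svc:
    type: string
    default: ""
# ... in the job's steps:
    - when:
        condition:
          equal: [ "SVC_A", << pipeline.parameters.svc >> ]
        steps:
          - aws-ecr/build-and-push-image:
              repo: SVC_A_REPO
              dockerfile: ./Dockerfile
              tag: "latest,${CIRCLE_SHA1},build-${CIRCLE_BUILD_NUM}"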
Jenkins has a UI concept with dropdown lists, etc. that allows users to specify variables at run time. This has proven essential in our builds for making decisions in the pipeline (i.e. which agent to run on, which code base to choose, etc.). By allowing parameters, we are able to have a single pipeline definition handle the same task for many clients/releases/environments.
I have watched many people ask for this over the past year, to cut down the number of almost-identical build definitions. Is there a best practice for handling this? It would be nice to have a single build definition for a specific task that is smart enough to handle parameters.
Edit: example of possible pseudo-code to build on levi lu-MSFT's suggestion.
parameters:
  - name: ClientName
    displayName: Pool Image
    default: Select client
    values: powershell
    valuesScript: [
      assemble curl request to http://myUrl.com/Clients/GetAll
    ]
  - name: TargetEnvironment
    displayName: Client Environment
    type: string
    values: powershell
    valuesScript: [
      assemble curl request using above parameter value to
      https://myUrl.com/Clients/$(ClientName)/GetEnvironments
    ]
trigger: none
jobs:
  - job: build
    displayName: Run pipeline job
    pool:
      vmImage: windows-latest
    parameters:
      ClientName: $(ClientName)
      TargetEnvironment: $(TargetEnvironment)
    steps:
      - script: echo building $(Build.BuildNumber)
Runtime parameters are available now. You can set runtime parameters at the beginning of your pipeline YAML using parameters, as in the example below:
parameters:
  - name: image
    displayName: Pool Image
    default: ubuntu-latest
    values:
      - windows-latest
      - vs2017-win2016
      - ubuntu-latest
      - ubuntu-16.04
      - macOS-latest
      - macOS-10.14
  - name: test
    displayName: Run Tests?
    type: boolean
    default: false
trigger: none
jobs:
  - job: build
    displayName: Build and Test
    pool:
      vmImage: ${{ parameters.image }}
    steps:
      - script: echo building $(Build.BuildNumber)
      - ${{ if eq(parameters.test, true) }}:
          - script: echo "Running all the tests"
The example above is from the official Microsoft documentation; click here to learn more about runtime parameters.
When you run the above YAML pipeline, you will be able to select the parameter's value from a dropdown list.
Update: to set variables dynamically at runtime, you can use the task.setvariable logging command in scripts.
In the example below, $resultValue is the value from a REST API call, and its value is assigned to the variable VariableName:
- powershell: |
    $resultValue = call from Rest API
    echo "##vso[task.setvariable variable=VariableName]$resultValue"
Check the documentation here for more information.
I am trying to run the sonarcloud-quality-gate check after performing sonarcloud-scan, because I want the Bitbucket build pipeline to fail if the quality gate check fails.
Doing this, I get an error like this:
Quality Gate failed: Could not get scanner report: [Errno 2] No such file or directory: '/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/sonarsource/sonarcloud-scan/sonarcloud-scan.log'
This is how my bitbucket-pipelines.yml looks:
image: node:10.15.3
clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly
definitions:
  caches:
    sonar: ~/.sonar/cache # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - node
          - sonar
        script:
          - npm install --quiet
          - npm run test:coverage
          - pipe: sonarsource/sonarcloud-scan:0.1.5
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: '-Dsonar.sources=src -Dsonar.tests=src -Dsonar.test.inclusions="**.test.jsx" -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.1
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}
pipelines:
  default:
    - step: *build-test-sonarcloud
The sonarcloud-scan pipe itself, though, runs successfully.
The problem is that the sonarsource/sonarcloud-quality-gate pipe requires a newer version of the sonarsource/sonarcloud-scan pipe. (This has been the case ever since the first release of the sonarsource/sonarcloud-quality-gate pipe.)
Change your pipeline configuration like this:
- pipe: sonarsource/sonarcloud-scan:1.0.1
  variables:
    SONAR_TOKEN: ${SONAR_TOKEN}
    EXTRA_ARGS: '-Dsonar.sources=src -Dsonar.tests=src -Dsonar.test.inclusions="**.test.jsx" -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'
- pipe: sonarsource/sonarcloud-quality-gate:0.1.3
  variables:
    SONAR_TOKEN: ${SONAR_TOKEN}
An easy way to see the latest versions is in the pipeline editor. When you edit the bitbucket-pipelines.yml file, a sidebar opens where you can filter the list of pipes by entering "sonar". Click on a pipe to see its details, and note the version used.