CircleCI: Skip entire workflow

Basically, I'm trying to skip the build if it's not a pull request or a certain branch. However, I don't seem to be able to skip a job or part of the workflow when that check fails. So far the problem is that circleci step halt does nothing in my pipelines. Example config here:
version: 2.1
orbs:
  hello: circleci/hello-build@0.0.5
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          command: |
            if [[ $(echo "$CIRCLE_PULL_REQUEST $CIRCLE_PULL_REQUESTS" | grep -c "pull") -gt 0 ]]; then
              echo "Do stuff if it's a PR"
            else
              echo "Not a PR, Skipping."
              circleci step halt # does nothing
              circleci-agent step halt # does nothing
              exit 0
            fi
workflows:
  "Hello Workflow":
    jobs:
      - hello/hello-build:
          requires:
            - build
          filters:
            branches:
              only:
                - testing
                - /^(?!pull\/).*$/
            tags:
              only:
                - /^pull\/.*$/
      - build:
          filters:
            branches:
              only:
                - testing
                - /^(?!pull\/).*$/
            tags:
              only:
                - /^pull\/.*$/
This does not fail, and it runs on pull requests, but hello/hello-build is executed anyway despite the circleci step halt commands.
Any help would be appreciated, thanks!

After creating a thread in their forums, this is what worked: https://discuss.circleci.com/t/does-circleci-step-halt-works-with-version-2-1/36674/4
Go to Account Settings -> Personal API Tokens -> New token. Once you have the token, go to the project and create a new environment variable, something like CIRCLE_TOKEN, and save the token there.
Then in the config.yml you can run something like this to cancel the current workflow:
curl -X POST https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/cancel -H 'Accept: application/json' -u "${CIRCLE_TOKEN}:"
The workflow then shows up as cancelled and the remaining jobs do not run.
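Putting the two pieces together, a minimal sketch of what the build job's run step might look like (assuming CIRCLE_TOKEN is stored as a project environment variable as described above; the step name is illustrative):

      - run:
          name: Cancel workflow if not a PR
          command: |
            if [[ $(echo "$CIRCLE_PULL_REQUEST $CIRCLE_PULL_REQUESTS" | grep -c "pull") -gt 0 ]]; then
              echo "PR detected, continuing."
            else
              echo "Not a PR, cancelling the whole workflow."
              curl -X POST "https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/cancel" \
                -H 'Accept: application/json' \
                -u "${CIRCLE_TOKEN}:"
            fi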

Related

Execute multiple steps in parallel in CircleCI

I have a CircleCI job with the following structure.
jobs:
  test:
    steps:
      - checkout
      - run #1
        ...<<install dependencies>>
      - run #2
        ...<<execute server-side test>>
      - run #3
        ...<<execute frontend test 1>>
      - run #4
        ...<<execute frontend test 2>>
I want to execute step #1 first, and then steps #2-4 in parallel.
#1, #2, #3, and #4 take around ~4 min., ~1 min., ~1 min., and ~1 min., respectively.
I tried to split the steps to different jobs and use workspaces to pass the installed artifacts from #1 to #2-4. However, because of the large size of the artifacts, it took around ~2 min. to persist & attach workspace, so the advantage of splitting jobs was cancelled out.
Is there a smart way to run #2-4 in parallel without significant overhead?
If you want to run the commands in parallel, you need to move them into separate jobs; otherwise CircleCI follows the structure of your steps, running each command only when the previous one has finished. Let me give you an example. I created a basic configuration with 4 jobs:
npm install
test1 (runs at the same time as test2, but only after npm install finishes)
test2 (runs at the same time as test1, but only after npm install finishes)
deploy (runs only after the 2 tests are done)
Basically, you need to split the commands between jobs and set the dependencies you want.
See my config file:
version: 2.1
jobs:
  install_deps:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run: echo "running npm install"
      - run: npm install
      - persist_to_workspace:
          root: .
          paths:
            - '*'
  test1:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the first test and also will run the test2 in parallel"
      - run: npm test
  test2:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the second test in parallel with the first test1"
      - run: npm test
  deploy:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: .
      - run: echo "running the deploy job only when the test1 and test2 are finished"
      - run: npm run build
# Orchestrate our job run sequence
workflows:
  test_and_deploy:
    jobs:
      - install_deps
      - test1:
          requires:
            - install_deps
      - test2:
          requires:
            - install_deps
      - deploy:
          requires:
            - test1
            - test2
Now see the logic above: install_deps runs with no dependency, but test1 and test2 will not run until install_deps has finished. Also, deploy will not run until both tests are finished.
I've run this config: first the other jobs wait for install_deps to finish, then both tests run in parallel while the deploy job waits for them, and finally the deploy job runs.
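If the workspace persist/attach overhead mentioned in the question is the bottleneck, one option is to persist only the files the tests actually need instead of '*'. A minimal sketch, assuming node_modules is the heavy artifact (adjust the paths to your project):

      - persist_to_workspace:
          root: .
          paths:
            - node_modules
            - package.json
            - package-lock.json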

Cloud Build failure, unable to find logs to see what is going on

I am kicking off a Dataflow flex template using Cloud Build. In my Cloud Build file I am attempting to do 3 things:
build an image
publish it
run a flex template job using that image
This is my yaml file:
substitutions:
  _IMAGE: my_logic:latest4
  _JOB_NAME: 'pipelinerunner'
  _TEMP_LOCATION: ''
  _REGION: us-central1
  _FMPKEY: ''
  _PYTHON_VERSION: '3.8'
# checkout this link https://github.com/davidcavazos/python-docs-samples/blob/master/dataflow/gpu-workers/cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      [ 'build'
      , '--build-arg=python_version=$_PYTHON_VERSION'
      , '--tag=gcr.io/$PROJECT_ID/$_IMAGE'
      , '.'
      ]
  # Push the image to Container Registry.
  - name: gcr.io/cloud-builders/docker2
    args: [ 'push', 'gcr.io/$PROJECT_ID/$_IMAGE' ]
  - name: gcr.io/$PROJECT_ID/$_IMAGE
    entrypoint: python
    args:
      - /dataflow/template/main.py
      - --runner=DataflowRunner
      - --project=$PROJECT_ID
      - --region=$_REGION
      - --job_name=$_JOB_NAME
      - --temp_location=$_TEMP_LOCATION
      - --sdk_container_image=gcr.io/$PROJECT_ID/$_IMAGE
      - --disk_size_gb=50
      - --year=2018
      - --quarter=QTR1
      - --fmpkey=$_FMPKEY
      - --setup_file=/dataflow/template/setup.py
options:
  logging: CLOUD_LOGGING_ONLY
# Use the Compute Engine default service account to launch the job.
serviceAccount: projects/$PROJECT_ID/serviceAccounts/$PROJECT_NUMBER-compute@developer.gserviceaccount.com
And this is the command I am launching:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
The error message I am getting is this:
Logs are available at [https://console.cloud.google.com/cloud-build/builds/0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7?project=111111111].
ERROR: (gcloud.beta.builds.submit) build 0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7 completed with status "FAILURE"
but I cannot access the logs from the URL mentioned above.
I cannot see the logs, so I am unable to see what is wrong, but I strongly suspect something in my run.yaml is not quite right.
Note: before this, I was building the image myself by launching this command:
gcloud builds submit --project=$PROJECT_ID --tag $TEMPLATE_IMAGE .
and my run.yaml just contained one step, the last one, and everything worked fine.
But I am trying to see if I can do everything in the yaml file.
Could anyone advise on what might be incorrect? I don't have much experience with yaml files for Cloud Build.
Thanks and regards,
Marco
I guess the pipeline does not work because (in the second step) the container gcr.io/cloud-builders/docker2 does not exist (check https://gcr.io/cloud-builders/ - there is a docker container, but not a docker2 one).
This second step pushes the final container to the registry, and it is a dependency of the third step, which will fail too.
You can build the container and push it to the container registry in just one step:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE_NAME', '<path_to_docker-file>']
images: ['gcr.io/$PROJECT_ID/$IMAGE_NAME']
OK, sorted: the problem was the way I was launching the build command.
This is the original:
gcloud beta builds submit \
--config run.yaml \
--substitutions _REGION=$REGION \
--substitutions _FMPKEY=$FMPKEY \
--no-source
Apparently, when I removed the --no-source flag, everything worked fine.
I think I copied and pasted the command without really understanding it.
Regards
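For reference, a sketch of the working invocation under that assumption (only --no-source dropped, so the source is uploaded from the current directory; the two substitutions are combined into a single flag here):

gcloud beta builds submit \
  --config run.yaml \
  --substitutions _REGION=$REGION,_FMPKEY=$FMPKEY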

Kubernetes - How to set up parallel clusters (Prod, Dev) sharing the same repositories/pipeline

I am currently facing a situation where I need to deploy a small cluster (only 4 pods for now) which will be containing 4 different microservices. This cluster has to be duplicated so I can have one PRODUCTION cluster and one DEVELOPMENT cluster.
Even if it's not hard from my point of view (creating a cluster and then deploying docker images to the pods with parameters so they use the right resource connection strings), I am stuck at the CI/CD part.
From a Cloud Build trigger, I have absolutely no idea how to push the Docker image to the right cluster's pods...
Here is my cloudbuild.yaml:
steps:
  # step 1 - Getting the previous (current) image
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
      '-c',
      'docker pull gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest || exit 0'
    ]
  # step 2 - Build the image and push it to gcr.io
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'build',
      '-t',
      'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
      '-t',
      'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest',
      '.'
    ]
  # step 3 - Deploy our container to our cluster
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'service.yaml', '--force']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
  # step 4 - Set the image
  - name: 'gcr.io/cloud-builders/kubectl'
    args: [
      'set',
      'image',
      'deployment',
      '{SERVICE_NAME}',
      '{SERVICE_NAME}=gcr.io/{PROJECT_ID}/{SERVICE_NAME}'
    ]
    env:
      - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
# push images to Google Container Registry with tags
images: [
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'
]
Can anyone help me out? I don't really know which direction to go in...
Do you know about Helm charts? They are designed for deploying to different environments.
With a different values.yaml file per environment, you can quickly deploy the same source code base to each environment.
For example, you can name the values.yaml files after the environment:
values-dev.yaml
values-sit.yaml
values-prod.yaml
The only differences are some variables, such as the environment (dev/sit/prod) and the namespaces.
So when you run the deployment, it will be:
env=${ENVIRONMENT}
helm install -f values-${env}.yaml myredis ./redis
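As a rough illustration (the keys below - environment, namespace, replicaCount, image.tag - are made up for the example, not taken from the answer), the per-environment files might differ like this:

values-dev.yaml:
environment: dev
namespace: my-app-dev
replicaCount: 1
image:
  tag: latest

values-prod.yaml:
environment: prod
namespace: my-app-prod
replicaCount: 3
image:
  tag: v1.2.3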
So my question is: how are you triggering these builds? Manually? A GitHub trigger? An HTTP trigger using the REST API?
You're almost there for the building/pushing part; you would need to use substitution variables: https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values
If you trigger the builds manually, you would edit the build trigger and change the substitution variable to what you want it to be.
GitHub trigger - this is a little more complex, as you might want to do releases or branches.
HTTP trigger - same as manual; in your request you change the substitution variable.
So here's part of one of our repo build files. As you will see, there are different substitution variables we use; sometimes we want to build the image AND deploy to the cluster, other times we just want to build or deploy.
steps:
  # pull docker image
  - name: 'gcr.io/cloud-builders/docker'
    id: pull-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        docker pull $${_TAG_DOCKER_IMAGE} || exit 0
  # build docker image
  - name: 'gcr.io/cloud-builders/docker'
    id: build-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_BUILD_IMAGE}" == "true" ]]; then
          docker build -t ${_DOCKER_IMAGE_TAG} --cache-from $${_TAG_DOCKER_IMAGE} .;
        else
          echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
        fi
  # push docker image
  - name: 'gcr.io/cloud-builders/docker'
    id: push-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_BUILD_IMAGE}" == "true" ]]; then
          docker push ${_DOCKER_IMAGE_TAG};
        else
          echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
        fi
  # tag docker image
  - name: 'gcr.io/cloud-builders/gcloud'
    id: tag-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_BUILD_IMAGE}" == "true" ]]; then
          gcloud container images add-tag ${_DOCKER_IMAGE_TAG} $${_TAG_DOCKER_IMAGE} -q;
        else
          echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
        fi
  # update service image on environment
  - name: 'gcr.io/cloud-builders/kubectl'
    id: update service deployment image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_UPDATE_CLUSTER}" == "true" ]]; then
          /builder/kubectl.bash set image deployment $REPO_NAME master=${_DOCKER_IMAGE_TAG} --namespace=${_DEFAULT_NAMESPACE};
        else
          echo "skipping ... UPDATE_CLUSTER=${_UPDATE_CLUSTER}";
        fi
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
# subs are needed because of our different ENVs
# _DOCKER_IMAGE_TAG = ['gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA', 'other']
# _COMPANY_ENV = ['dev', 'staging', 'prod']
# _DEFAULT_NAMESPACE = ['default'] or ['custom1', 'custom2']
# _CLOUDSDK_CONTAINER_CLUSTER = ['dev', 'prod']
# _CLOUDSDK_COMPUTE_ZONE = ['us-central1-a']
# _BUILD_IMAGE = ['true', 'false']
# _UPDATE_CLUSTER = ['true', 'false']
substitutions:
  _DOCKER_IMAGE_TAG: $DOCKER_IMAGE_TAG
  _COMPANY_ENV: dev
  _DEFAULT_NAMESPACE: default
  _CLOUDSDK_CONTAINER_CLUSTER: dev
  _CLOUDSDK_COMPUTE_ZONE: us-central1-a
  _BUILD_IMAGE: 'true'
  _UPDATE_CLUSTER: 'true'
options:
  substitution_option: 'ALLOW_LOOSE'
  env:
    - _TAG_DOCKER_IMAGE=gcr.io/$PROJECT_ID/$REPO_NAME:${_COMPANY_ENV}-latest
    - DOCKER_IMAGE_TAG=gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA
tags:
  - '${_COMPANY_ENV}'
  - 'build-${_BUILD_IMAGE}'
  - 'update-${_UPDATE_CLUSTER}'
We have two workflows:
a GitHub trigger builds and deploys under the 'dev' environment;
we trigger via the REST API https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds/create (we replace the variables via the request.json) - this method also works using the gcloud builds CLI with --substitutions.
Hope that answers your question!
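For illustration only (the values are made up; the substitution names match the config above), overriding the substitutions from the CLI to target a different environment could look like:

gcloud builds submit \
  --config cloudbuild.yaml \
  --substitutions _COMPANY_ENV=prod,_CLOUDSDK_CONTAINER_CLUSTER=prod,_BUILD_IMAGE=true,_UPDATE_CLUSTER=true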
The short answer for this is to apply GitOps deployment practices in your workflow.
All your Kubernetes YAMLs or Helm charts live in a single git repository which is used by a GitOps operator.
In your CI pipeline, you only have to build and push the docker images.
The GitOps operator watches for new image versions and commits the corresponding change to the target Kubernetes YAML in the repository.
Every change in the GitOps repository is applied to the cluster.
See https://fluxcd.io/
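As a rough sketch of the idea (not from the answer; the resource names and repository URL are made up, and the apiVersions depend on your Flux release), the operator is pointed at the config repository roughly like this:

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/app-config   # hypothetical config repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-dev
  namespace: flux-system
spec:
  interval: 5m
  path: ./overlays/dev   # a prod cluster would point at ./overlays/prod
  prune: true
  sourceRef:
    kind: GitRepository
    name: app-config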

How to trigger a specific job in GitLab

I want to run a specific job in a pipeline. I thought assigning a tag to the job and then specifying this tag again in the POST method would fulfill my needs. The problem is that when I trigger using the API (POST), all the jobs in the pipeline are triggered, even though only one of them is tagged.
gitlab-ci.yml:
job1:
  script:
    - echo "helloworld!"
  tags: [myTag]
job2:
  script:
    - echo "hello gitlab!"
The API call:
curl -X POST -F token="xxx" -F ref="myTag" https://gitlab.com/api/v4/projects/12345678/trigger/pipeline
Add a variable to your trigger API call as shown here:
https://docs.gitlab.com/ee/ci/triggers/#making-use-of-trigger-variables
Then use the only property inside your gitlab-ci.yml file, as shown here:
https://docs.gitlab.com/ee/ci/variables/#environment-variables-expressions
Then the job will be executed only if the variable exists. For example:
job1:
  script: echo "HELLO"
  only:
    variables:
      - $variables[API_CALL]=true
Changes in GitLab have probably made the answers above stop working. The
only:
  variables:
    - $variables[....]
syntax triggers a CI Lint error.
For others that come here like me, here's how I trigger a specific job:
job1:
  script:
    - echo "HELLO for job1"
    - "curl
      --request POST
      --form token=$CI_JOB_TOKEN
      --form ref=master
      --form variables[TRIGGER_JOB]=job2
      https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
  except:
    - pipelines
job2:
  script: echo "HELLO for job2"
  only:
    variables:
      - $TRIGGER_JOB == "job2"
⚠️ Note the except: - pipelines in job1; otherwise you end up in an infinite child-pipeline loop!
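On newer GitLab versions, the same gating is usually written with rules: instead of only:/except:. A rough sketch (not from the original answers), gating job2 on the trigger variable:

job2:
  script: echo "HELLO for job2"
  rules:
    - if: '$TRIGGER_JOB == "job2"'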
By using variables, you can do the following.
Use this curl command to trigger the pipeline with a variable:
curl --request POST --form token=${TOKEN} --form ref=master --form "variables[TRIGERRED_JOB]=job1" "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
Of course, you have to set the variable accordingly.
Define your jobs with the appropriate variable:
job1:
  script: echo "HELLO for job1"
  only:
    variables:
      - $variables[TRIGERRED_JOB] == "JOB1"
job2:
  script: echo "HELLO for job2"
  only:
    variables:
      - $variables[TRIGERRED_JOB] == "JOB2"
If you are running the curl from inside another (or the same) job, you can use ${CI_JOB_TOKEN} instead of $TOKEN; see
https://docs.gitlab.com/ee/ci/triggers/#making-use-of-trigger-variables

Travis manually confirm next stage

I have a test stage and a production stage. I would like to manually confirm the deployment to production. Is there a way to achieve this?
You can make use of Conditional Deployments. This allows you to specify whether you push to production or test.
Combine it with e.g. a check-live-deployment.sh script and differentiate between branches and/or tagged commits.
For example:
#!/bin/bash
set -e

contains() {
  if [[ $TRAVIS_TAG = *"-live"* ]]
  then
    # "-live" is in $TRAVIS_TAG
    echo "true"
  else
    # "-live" is not in $TRAVIS_TAG
    echo "false"
  fi
}

echo "============== CHECKING IF DEPLOYMENT CONDITION IS MET =============="
export LIVE=$(contains)
and .travis.yml for a dev/staging/live-deployment to Cloud Foundry:
sudo: false
language: node_js
node_js:
  - '8.9.4'
branches:
  only:
    - master
    - "/v*/"
script:
  - printenv
before_install:
  - chmod +x -R ci
install:
  - source ci/check_live_deployment.sh
  - ./ci/check_live_deployment.sh
deploy:
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_DEV CF_MANIFEST=manifest-dev.yml ci/deploy_to_cf.sh
    on:
      tags: false
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_STAGING CF_MANIFEST=manifest-staging.yml ci/deploy_to_cf.sh
    on:
      tags: true
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_LIVE CF_MANIFEST=manifest-live.yml ci/deploy_to_cf.sh
    on:
      tags: true
      condition: $LIVE = true
This example pushes to dev if the branch is master and no tag is present, to staging if it is a tagged commit, and to staging plus live if it is a tagged commit on master (a release) and the deployment condition is met.
Granted: maybe not the prettiest solution, but it definitely works. And this is not Travis waiting for you to manually confirm the live deployment (which would somewhat defeat the whole automated-deployment principle, in my opinion), but it is a way to guarantee that you have to trigger the pipeline manually in a specific way.
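To take the live path under this setup, you would push a tag whose name contains "-live", for example (the tag name itself is just an illustration):

git tag v1.0.0-live
git push origin v1.0.0-live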
