Travis: manually confirm next stage

I have stages for test and production. I would like to manually confirm the deployment to production. Is there a way to achieve this?

You can make use of Conditional Deployments. These let you specify whether a given deploy goes to production or test.
Combine them with e.g. a check-live-deployment.sh script and differentiate between branches and/or tagged commits.
For example:
#!/bin/bash
set -e

contains() {
  if [[ $TRAVIS_TAG = *"-live"* ]]
  then
    # "-live" is in $TRAVIS_TAG
    echo "true"
  else
    # "-live" is not in $TRAVIS_TAG
    echo "false"
  fi
}

echo "============== CHECKING IF DEPLOYMENT CONDITION IS MET =============="
export LIVE=$(contains)
and the .travis.yml for a dev/staging/live deployment to Cloud Foundry:
sudo: false
language: node_js
node_js:
  - '8.9.4'
branches:
  only:
    - master
    - "/v*/"
script:
  - printenv
before_install:
  - chmod +x -R ci
install:
  - source ci/check_live_deployment.sh
  - ./ci/check_live_deployment.sh
deploy:
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_DEV CF_MANIFEST=manifest-dev.yml ci/deploy_to_cf.sh
    on:
      tags: false
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_STAGING CF_MANIFEST=manifest-staging.yml ci/deploy_to_cf.sh
    on:
      tags: true
  - provider: script
    skip_cleanup: true
    script: env CF_SPACE=$CF_SPACE_LIVE CF_MANIFEST=manifest-live.yml ci/deploy_to_cf.sh
    on:
      tags: true
      condition: $LIVE = true
This example pushes to dev if the branch is master and no tag is present, to staging if it is a tagged commit, and to staging+live if it is a tagged commit on master (a release) and the deployment condition is met.
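For instance, with the check script above keying on "-live" in $TRAVIS_TAG, a live release would be triggered by pushing a tag like the following (the tag name is illustrative; note it also matches the /v*/ branch pattern):
git tag v1.0.0-live
git push origin v1.0.0-live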
Granted: maybe not the prettiest solution, but it definitely works. And this is not Travis waiting for you to manually confirm the live deployment (which would somewhat defeat the whole automated-deployment principle, imo), but it is a way to guarantee that you have to manually trigger the pipeline in a specific way.

mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory

I am new to using Bitbucket Pipelines. I have an issue with deploying my dist files to an FTP server: the error "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory" occurs when I try to deploy the project.
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.

image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH from the yml to see what happens. But first of all, I do not understand whether my pipeline has access to the FTP server at all. How can I check that? Then I need to understand how to replace the dist folder's files on the FTP server. Maybe my bitbucket-pipelines.yml file is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally, as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
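As for checking whether the pipeline can reach the FTP server at all: a minimal sketch (assuming curl, which the Debian-based node:16 image ships with) is to add a step before the deploy that lists the remote directory:
- step:
    name: Check FTP connectivity
    script:
      # Lists the remote root directory; the step fails if the host or credentials are wrong
      - curl -v "ftp://$FTP_HOST/" --user "$FTP_USERNAME:$FTP_PASSWORD"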

Github action not executing shell script with K6

I have a shell script which runs k6 scenarios. This shell script runs successfully both locally and on TeamCity via Docker.
I'm trying to set up a GitHub Action that runs the Docker image, which in turn runs the script, so that it executes each time a PR is merged. But, without giving any error, the script is not actually running: the GitHub Action logs print neither the echo statements from the script nor any k6 output.
Custom action - action.yml:
name: 'k6 Load Test'
description: 'K6 action created similar to grafana for running load test with k6 in Antman project.'
inputs:
cloud:
description: |
To run in the k6 cloud, provide your k6 cloud token as a secret to the input `token`.
required: false
default: false
token:
description: |
k6 Cloud Token. Only required for using the cloud service.
required: false
default: ''
filename:
description: |
Path to the test script to execute, relative to the workspace.
required: true
default: './src/scenarios/full-card-visa/index.js'
flags:
description: |
Additional argument, flags and environment variables to provide to the k6 CLI.
required: false
default: ''
runs:
using: 'docker'
image: 'Dockerfile'
env:
K6_CLOUD_TOKEN: ${{ inputs.token }}
args:
- ${{ inputs.cloud }}
- ${{ inputs.filename || './src/scenarios/full-card-visa/index.js' }}
- ${{ inputs.flags }}
GitHub workflow that uses action.yml:
name: K6 Local Cloud test
on:
  push:
    branches:
      - 'main'
      - 'task/ANT-4-github-action'
  pull_request:
    types: [opened]
jobs:
  k6_load_test:
    name: k6 Load Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout branch
        uses: actions/checkout@v3
        with:
          ref: task/ANT-4-github-action
      - name: Run load test using action code from commit
        uses: ./
        with:
          filename: ./src/scenarios/full-card-visa/index.js
          cloud: true
          token: <my token>
Dockerfile:
FROM loadimpact/k6:0.34.1
COPY ./src/lib /lib
COPY ./src/scenarios /scenarios
COPY ./src/k6-run-all.sh /k6-run-all.sh
WORKDIR /
ENTRYPOINT []
CMD ["sh", "-c", "./src/k6-run-all.sh"]
Sample GitHub Actions run: it completes successfully but doesn't actually run k6 (screenshot omitted).
Please note, I do not have permission to use grafana/k6 directly, hence I created my own action from their code.
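A hedged observation on the setup above (no accepted answer is recorded here): the Dockerfile copies the script to /k6-run-all.sh, yet CMD calls ./src/k6-run-all.sh, a path that does not exist inside the image. Moreover, because action.yml passes args and the ENTRYPOINT is cleared, Docker uses those args as the container command and the CMD is ignored entirely; the first arg is the string true, a binary that exits 0 without output, which would match the silent "success". A sketch of a Dockerfile that would actually invoke the script and receive the action's args as positional parameters:
FROM loadimpact/k6:0.34.1
COPY ./src/lib /lib
COPY ./src/scenarios /scenarios
COPY ./src/k6-run-all.sh /k6-run-all.sh
WORKDIR /
# Run the script via the ENTRYPOINT so the action's args are appended to it
# ($1=cloud, $2=filename, $3=flags) instead of replacing the command.
ENTRYPOINT ["sh", "/k6-run-all.sh"]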

Access (clone) a Bitbucket repository from the pipeline of another Bitbucket repo via SSH

I have a Flutter web project in Bitbucket and I am building a pipeline for CI/CD. The problem I have is that the project depends on a package that lives in another Bitbucket repository. I have not been able to find a way to configure the private SSH key in Bitbucket so that I can access that project via git without problems when doing the build. It gives me the following error:
Downloading Web SDK... 2,828ms
Downloading CanvasKit... 569ms
Running "flutter pub get" in build...
Git error. Command: `git clone --mirror ssh://git@bitbucket.org/... /root/.pub-cache/git/cache/barest-playground-47e65fcf6973f19ceed46038aa27a70e7bc4d47b`
stdout:
stderr: Cloning into bare repository '/root/.pub-cache/git/cache/'...
Warning: Permanently added the RSA host key for IP address '18.205.93.0' to the list of known hosts.
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
My pipeline:
image: cirrusci/flutter

pipelines:
  branches:
    develop:
      - step:
          name: Build
          caches:
            - node
          size: 2x
          script:
            - ./run.sh dev
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: dev
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'true'
    master:
      - step:
          name: Build
          size: 2x
          script:
            - ./run.sh prod
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: prod
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'false'
First of all, thanks!
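No answer was recorded for this question. A common approach (stated here as an assumption based on standard Bitbucket Pipelines functionality, not an accepted answer) is to generate an SSH key pair under Repository settings -> Pipelines -> SSH keys in the repository that runs the pipeline, then register the public key as an access key on the dependency's repository; Pipelines injects the private key into the build container, so git clone over SSH can authenticate.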

CircleCI: Skip entire workflow

Basically I'm trying to skip the build if it's not a pull request or a certain branch; however, I don't seem to be able to skip a job or part of the workflow. So far the problem is that circleci step halt does nothing in my pipelines. Example config here:
version: 2.1
orbs:
  hello: circleci/hello-build@0.0.5
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          command: |
            if [[ $(echo "$CIRCLE_PULL_REQUEST $CIRCLE_PULL_REQUESTS" | grep -c "pull") -gt 0 ]]; then
              echo "Do stuff if it's a PR"
            else
              echo "Not a PR, Skipping."
              circleci step halt # does nothing
              circleci-agent step halt # does nothing
              exit 0
            fi
workflows:
  "Hello Workflow":
    jobs:
      - hello/hello-build:
          requires:
            - build
          filters:
            branches:
              only:
                - testing
                - /^(?!pull\/).*$/
            tags:
              only:
                - /^pull\/.*$/
      - build:
          filters:
            branches:
              only:
                - testing
                - /^(?!pull\/).*$/
            tags:
              only:
                - /^pull\/.*$/
This does not fail, and it works on pull requests, but hello/hello-build is executed anyway despite the circleci step halt commands.
Any help would be appreciated, thanks!
After creating a thread in their forums, this is what worked: https://discuss.circleci.com/t/does-circleci-step-halt-works-with-version-2-1/36674/4
Go to Account Settings -> Personal API Tokens -> New token. Once you have the token, go to the project and create a new environment variable, something like CIRCLE_TOKEN, and save it there.
Then in the config.yml you can run something like this to cancel the current workflow (note the double quotes around the -u argument, so the token variable actually expands):
curl -X POST https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/cancel -H 'Accept: application/json' -u "${CIRCLE_TOKEN}:"
Then you will see the workflow reported as canceled in the CircleCI UI.
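Putting it together, a minimal sketch of the build job's run step under this approach (assuming CIRCLE_TOKEN is set as a project environment variable; the PR check reuses the one from the question):
- run:
    name: Cancel workflow if not a PR
    command: |
      if [[ $(echo "$CIRCLE_PULL_REQUEST $CIRCLE_PULL_REQUESTS" | grep -c "pull") -eq 0 ]]; then
        echo "Not a PR, canceling workflow."
        curl -X POST "https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/cancel" \
          -H 'Accept: application/json' \
          -u "${CIRCLE_TOKEN}:"
      fi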

Kubernetes - How to set up parallel clusters (Prod, Dev) sharing the same repositories/pipeline

I am currently facing a situation where I need to deploy a small cluster (only 4 pods for now) containing 4 different microservices. This cluster has to be duplicated so I can have one PRODUCTION cluster and one DEVELOPMENT cluster.
Even though that part is not hard from my point of view (creating a cluster and then pushing Docker images to pods with parameters so they use the right resource connection strings), I am stuck at the CI/CD part.
From a Cloud Build trigger, how do I push the Docker image to the right cluster's pod? I have absolutely no idea how to achieve it.
Here is my cloudbuild.yaml:
steps:
  # step 1 - Get the previous (current) image
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
      '-c',
      'docker pull gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest || exit 0'
    ]
  # step 2 - Build the image and push it to gcr.io
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'build',
      '-t',
      'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
      '-t',
      'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest',
      '.'
    ]
  # step 3 - Deploy our container to our cluster
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'service.yaml', '--force']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
  # step 4 - Set the image
  - name: 'gcr.io/cloud-builders/kubectl'
    args: [
      'set',
      'image',
      'deployment',
      '{SERVICE_NAME}',
      '{SERVICE_NAME}=gcr.io/{PROJECT_ID}/{SERVICE_NAME}'
    ]
    env:
      - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'

# push images to Google Container Registry with tags
images: [
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'
]
Can anyone help me out? I don't really know which direction to go in.
Do you know about Helm charts? Helm is designed for deploying to different environments.
With a different values.yaml file per environment, you can quickly deploy to each environment from the same source code base.
For example, you can name the values.yaml files after the environments:
values-dev.yaml
values-sit.yaml
values-prod.yaml
The only differences between them are some variables, such as the environment (dev/sit/prod) and the namespaces.
So when you run the deployment, it will be:
env=${ENVIRONMENT}
helm install -f values-${env}.yaml myredis ./redis
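For illustration, a hedged sketch of what two of those values files might contain (keys and values are hypothetical):
# values-dev.yaml
environment: dev
namespace: myapp-dev
replicaCount: 1

# values-prod.yaml
environment: prod
namespace: myapp-prod
replicaCount: 3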
So my question is: how are you triggering these builds? Manually? GitHub trigger? HTTP trigger using the REST API?
You're almost there for the building/pushing part; you would need to use substitution variables: https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values
If you trigger the builds manually, you edit the build trigger and change the substitution variable to what you want it to be.
GitHub trigger -- this is a little more complex, as you might want to do releases or branches.
HTTP trigger -- same as manual: in your request you change the substitution variable.
So here's part of one of our repo build files. As you will see, there are different substitution variables we use: sometimes we want to build the image AND deploy to the cluster, other times we just want to build or deploy.
steps:
  # pull docker image
  - name: 'gcr.io/cloud-builders/docker'
    id: pull-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        docker pull $${_TAG_DOCKER_IMAGE} || exit 0
  # build docker image
  - name: 'gcr.io/cloud-builders/docker'
    id: build-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_BUILD_IMAGE}" == "true" ]]; then
          docker build -t ${_DOCKER_IMAGE_TAG} --cache-from $${_TAG_DOCKER_IMAGE} .;
        else
          echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
        fi
  # push docker image
  - name: 'gcr.io/cloud-builders/docker'
    id: push-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_BUILD_IMAGE}" == "true" ]]; then
          docker push ${_DOCKER_IMAGE_TAG};
        else
          echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
        fi
  # tag docker image
  - name: 'gcr.io/cloud-builders/gcloud'
    id: tag-docker-image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_BUILD_IMAGE}" == "true" ]]; then
          gcloud container images add-tag ${_DOCKER_IMAGE_TAG} $${_TAG_DOCKER_IMAGE} -q;
        else
          echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
        fi
  # update service image on environment
  - name: 'gcr.io/cloud-builders/kubectl'
    id: update service deployment image
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        if [[ "${_UPDATE_CLUSTER}" == "true" ]]; then
          /builder/kubectl.bash set image deployment $REPO_NAME master=${_DOCKER_IMAGE_TAG} --namespace=${_DEFAULT_NAMESPACE};
        else
          echo "skipping ... UPDATE_CLUSTER=${_UPDATE_CLUSTER}";
        fi
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'

# subs are needed because of our different ENVs
# _DOCKER_IMAGE_TAG = ['gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA', 'other']
# _COMPANY_ENV = ['dev', 'staging', 'prod']
# _DEFAULT_NAMESPACE = ['default'] or ['custom1', 'custom2']
# _CLOUDSDK_CONTAINER_CLUSTER = ['dev', 'prod']
# _CLOUDSDK_COMPUTE_ZONE = ['us-central1-a']
# _BUILD_IMAGE = ['true', 'false']
# _UPDATE_CLUSTER = ['true', 'false']
substitutions:
  _DOCKER_IMAGE_TAG: $DOCKER_IMAGE_TAG
  _COMPANY_ENV: dev
  _DEFAULT_NAMESPACE: default
  _CLOUDSDK_CONTAINER_CLUSTER: dev
  _CLOUDSDK_COMPUTE_ZONE: us-central1-a
  _BUILD_IMAGE: 'true'
  _UPDATE_CLUSTER: 'true'
options:
  substitution_option: 'ALLOW_LOOSE'
  env:
    - _TAG_DOCKER_IMAGE=gcr.io/$PROJECT_ID/$REPO_NAME:${_COMPANY_ENV}-latest
    - DOCKER_IMAGE_TAG=gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA
tags:
  - '${_COMPANY_ENV}'
  - 'build-${_BUILD_IMAGE}'
  - 'update-${_UPDATE_CLUSTER}'
We have two workflows:
GitHub trigger: builds and deploys under the 'dev' environment.
REST API trigger: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds/create (we replace the variables via the request.json) -- this method also works using the gcloud builds CLI with --substitutions.
Hope that answers your question!
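For example, a sketch of the CLI variant (the substitution values are illustrative):
gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_COMPANY_ENV=prod,_CLOUDSDK_CONTAINER_CLUSTER=prod,_BUILD_IMAGE=true,_UPDATE_CLUSTER=true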
The short answer for this is to apply GitOps deployment practices in your workflow.
All your Kubernetes YAMLs or Helm charts live in a single git repository which is used by a GitOps operator.
In your CI pipeline, you only have to build and push the Docker images.
The GitOps operator then picks up new image versions and commits the corresponding changes to the target Kubernetes YAML in the repository.
Every change in the GitOps repository is applied to the cluster.
See https://fluxcd.io/
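As an illustrative sketch of the CI side under this model (the config repo name, overlay layout, and image name "app" are hypothetical; kustomize is one of several ways to rewrite the tag):
# Build and push the image, then commit the new tag to the GitOps repo
docker build -t "gcr.io/$PROJECT_ID/$SERVICE_NAME:$SHORT_SHA" .
docker push "gcr.io/$PROJECT_ID/$SERVICE_NAME:$SHORT_SHA"

git clone git@github.com:example/k8s-config.git
cd k8s-config/overlays/dev
kustomize edit set image "app=gcr.io/$PROJECT_ID/$SERVICE_NAME:$SHORT_SHA"
git commit -am "Deploy $SERVICE_NAME $SHORT_SHA to dev"
git push
# The GitOps operator (e.g. Flux) picks up the commit and applies it to the cluster.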
