Deploy phase of a stage not firing - travis-ci

There has to be something I’m missing, but I just can’t see it. I have a staged build. The deploy stage is firing as expected, as are all of its phases, but not the deploy phase. Any idea why?
stages:
  - name: build
  - name: publish
    if: (type == push && branch == rob-release-and-deploy) || tag IS present
  - name: deploy
    if: (type == push && branch == rob-release-and-deploy) || tag IS present
  - name: clean
# ... Other bits until we hit the deploy stage of jobs: include: ...
- stage: deploy
  name: "Deploy to dev|aut|stg"
  install:
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - mv ./kubectl ${HOME}/.local/bin
  script:
    - echo "Placeholder?"
  before_deploy:
    - aws ecr get-login-password --region "${AWS_REGION}" | docker login --username AWS --password-stdin "${AWS_ECR_REGISTRY_URL}/tmp"
  deploy:
    - provider: script
      script: "bash ./bin/deploy dev"
      skip_cleanup: true
      on:
        branch: rob-release-and-deploy
    - provider: script
      script: "bash ./bin/deploy aut"
      skip_cleanup: true
      on:
        condition: tag IS present && (tag =~ /^\d{8}\.rc\d+$/)
I’m committing code to the rob-release-and-deploy branch (with a PR open on that branch). There’s no indication that the deploy: phase is being recognized at all. It isn’t being skipped with the message I’d normally see if I were pushing to a different branch; it’s simply not doing anything at all.
Here's the end of the build log:
0.00s$ echo "Placeholder?"
Placeholder?
The command "echo "Placeholder?"" exited with 0.

travis_run_after_success: command not found
travis_run_after_failure: command not found
travis_run_after_script: command not found
travis_run_finish: command not found

Done. Your build exited with 0.
What can I try next?

Solved. In my second deploy provider, I was missing tags: true...
- provider: script
  script: "bash ./bin/deploy aut"
  skip_cleanup: true
  on:
    tags: true
    condition: tag =~ /^\d{8}\.rc\d+$/
I knew it would be something dumb, but I thought I saw an example in the docs that deployed just using condition:. Alas. ¯\_(ツ)_/¯
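For completeness, here's a sketch of how the full deploy: section reads with the fix applied (same scripts, branch, and tag pattern as above):
deploy:
  - provider: script
    script: "bash ./bin/deploy dev"
    skip_cleanup: true
    on:
      branch: rob-release-and-deploy
  - provider: script
    script: "bash ./bin/deploy aut"
    skip_cleanup: true
    on:
      tags: true
      condition: tag =~ /^\d{8}\.rc\d+$/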

Related

Trying and Failing with GitLab CI and Google Cloud Run

This is my first time trying to set up CI to Google Cloud from GitLab, and so far the journey has been very painful, but I think I'm getting closer.
I followed the instructions from:
https://medium.com/google-cloud/deploy-to-cloud-run-using-gitlab-ci-e056685b8eeb
and adapted the .gitlab-ci.yml and the cloudbuild.yaml to my needs.
After several tries, I finally managed to set up all the roles, permissions and service accounts. But I've had no luck building my Dockerfile and pushing the image to Container Registry or Artifact Registry.
This is the failure log from GitLab:
Running with gitlab-runner 14.6.0~beta.71.gf035ecbf (f035ecbf)
on green-3.shared.runners-manager.gitlab.com/default Jhc_Jxvh
Preparing the "docker+machine" executor
Using Docker executor with image google/cloud-sdk:latest ...
Pulling docker image google/cloud-sdk:latest ...
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
Preparing environment
Running on runner-jhcjxvh-project-32231297-concurrent-0 via runner-jhcjxvh-shared-1641939667-f7d79e2f...
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/ProjectsD/node-projects/.git/
Created fresh repository.
Checking out 1f1e41f0 as dev...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
$ echo $GCP_SERVICE_KEY > gcloud-service-key.json
$ gcloud auth activate-service-account --key-file=gcloud-service-key.json
Activated service account credentials for: [gitlab-ci-cd@pdnodejs.iam.gserviceaccount.com]
$ gcloud config set project $GCP_PROJECT_ID
Updated property [core/project].
$ gcloud builds submit . --config=cloudbuild.yaml
Creating temporary tarball archive of 47 file(s) totalling 100.8 MiB before compression.
Some files were not included in the source upload.
Check the gcloud log [/root/.config/gcloud/logs/2022.01.11/22.23.29.855708.log] to see which files and the contents of the
default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn
more).
Uploading tarball of [.] to [gs://pdnodejs_cloudbuild/source/1641939809.925215-a19e660f1d5040f3ac949d2eb5766abb.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/pdnodejs/locations/global/builds/577417e7-67b9-419e-b61b-f1be8105dd5a].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/577417e7-67b9-419e-b61b-f1be8105dd5a?project=484193191648].
gcloud builds submit only displays logs from Cloud Storage. To view logs from Cloud Logging, run:
gcloud beta builds submit
BUILD FAILURE: Build step failure: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build 577417e7-67b9-419e-b61b-f1be8105dd5a completed with status "FAILURE"
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
.gitlab-ci
# file: .gitlab-ci.yml
stages:
  # - docker-build
  - deploy_dev

# docker-build:
#   stage: docker-build
#   image: docker:latest
#   services:
#     - docker:dind
#   before_script:
#     - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
#   script:
#     - docker build --pull -t "$CI_REGISTRY_IMAGE" .
#     - docker push "$CI_REGISTRY_IMAGE"

deploy_dev:
  stage: deploy_dev
  image: google/cloud-sdk:latest
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # google cloud service account key
    - gcloud auth activate-service-account --key-file=gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/node-projects', '.' ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/node-projects' ]
  # deploy to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/node-projects', '--region', 'us-central4', '--platform', 'managed', '--allow-unauthenticated']
options:
  logging: CLOUD_LOGGING_ONLY
Is there any other configuration I'm missing inside GCP, or is something wrong with my files?
😮‍💨
UPDATE: I finally got it working.
I started over from scratch and have now achieved a correct deploy.
.gitlab-ci
stages:
  - build
  - push

default:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY

docker-build:
  stage: build
  only:
    refs:
      - main
      - dev
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists
  interruptible: true
  environment:
    name: build/$CI_COMMIT_REF_NAME

push:
  stage: push
  only:
    refs:
      - main
      - dev
  script:
    - apk upgrade --update-cache --available
    - apk add openssl
    - apk add curl python3 py-crcmod bash libc6-compat
    - rm -rf /var/cache/apk/*
    - curl https://sdk.cloud.google.com | bash > /dev/null
    - export PATH=$PATH:/root/google-cloud-sdk/bin
    - echo $GCP_SERVICE_KEY > gcloud-service-key-push.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file gcloud-service-key-push.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud auth configure-docker us-central1-docker.pkg.dev
    - tag=":$CI_COMMIT_REF_SLUG"
    - docker pull "$CI_REGISTRY_IMAGE${tag}"
    - docker tag "$CI_REGISTRY_IMAGE${tag}" us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
    - docker push us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
  environment:
    name: push/$CI_COMMIT_REF_NAME
  when: on_success
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args:
      [
        'build',
        '-t',
        'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
        '.',
      ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      [
        'beta',
        'run',
        'deploy',
        'dreamslear',
        '--image',
        'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
        '--region',
        'us-central1',
        '--platform',
        'managed',
        '--port',
        '3000',
        '--allow-unauthenticated',
      ]
And that worked!
If anyone wants to suggest an optimised workflow or any other advice, that would be great!

how to to reuse a docker container between jobs

I have the following code in my GitLab CI YAML:
stages:
  - unit_test
  - deploy

Test:
  stage: unit_test
  script:
    - docker run --rm -d --name myimage widgets:0.1 bash -c "tail -f /dev/null"
    - docker exec -w /opt/source-code/tests myimage pwsh -c "dotnet test --test-adapter-path:. --logger:\"junit;LogFilePath=..\TestResults\test-results.xml;MethodFormat=Class;FailureBodyFormat=Verbose\""
    - docker cp myimage:/opt/source-code/TestResults/test-results.xml ./
  artifacts:
    when: always
    paths:
      - ./test-results.xml
    reports:
      junit:
        - ./test-results.xml
  tags:
    - docker-azure

deploy_to_dev:
  stage: deploy
  script:
    - docker exec myimage pwsh -c "./mydeploymentscript.ps1"
  only:
    - master
  tags:
    - docker-azure
What the team wants is for (a) unit tests to always run whenever the pipeline is triggered, but (b) the actual deployment logic to run only if the branch is master.
The pipeline is currently failing when it gets to the deploy stage with the error:
Error: No such container: myimage
I was trying to see whether I could reuse the same container between jobs, since I'm not explicitly doing a "docker stop" on it in the unit test job, but apparently not.
I know I can repeat all the same commands / do another docker run in the deploy stage, but wondering if there's another way that I just don't know about.
Thank you
I'm not sure I understand your question. If you want to execute your job when you create a merge request, you can use rules like this:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
The result of your tests will then be available in your MR. If the job fails, the pipeline fails and the MR is not merged.
The same applies to the deploy part: if that job fails, the pipeline fails too.
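To come back to the original question: the container started in the Test job isn't available in a later job (as the "No such container" error shows), so the usual approach is to re-create it in the deploy job and restrict that job to master. A sketch, assuming the same widgets:0.1 image, deployment script, and runner tag as above:
deploy_to_dev:
  stage: deploy
  rules:
    # run the deployment logic only on master
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    # the container from the unit test job no longer exists here, so start a fresh one
    - docker run --rm -d --name myimage widgets:0.1 bash -c "tail -f /dev/null"
    - docker exec myimage pwsh -c "./mydeploymentscript.ps1"
    - docker stop myimage
  tags:
    - docker-azure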

Github Actions workflow fails when running steps in a container

I've just started setting up a GitHub Actions workflow for one of my projects. I attempted to run the workflow steps inside a container with this workflow definition:
name: TMT-Charts-CI
on:
  push:
    branches:
      - master
      - actions-ci
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://alpine/helm:2.13.0
    steps:
      - name: Checkout Code
        uses: actions/checkout@v1
      - name: Validate and Upload Chart to Chart Museum
        run: |
          echo "Hello, world!"
          export PAGER=$(git diff-tree --no-commit-id --name-only -r HEAD)
          echo "Changed Components are => $PAGER"
          export COMPONENT="NOTSET"
          for CHANGE in $PAGER; do ENV_DIR=${CHANGE%%/*}; done
          for CHANGE in $PAGER; do if [[ "$CHANGE" != .* ]] && [[ "$ENV_DIR" == "${CHANGE%%/*}" ]]; then export COMPONENT="$CHANGE"; elif [[ "$CHANGE" == .* ]]; then echo "Not a Valid Dir for Helm Chart" ; else echo "Only one component per PR should be changed" && exit 1; fi; done
          if [ "$COMPONENT" == "NOTSET" ]; then echo "No component is changed!" && exit 1; fi
          echo "Initializing Component => $COMPONENT"
          echo $COMPONENT | cut -f1 -d"/"
          export COMPONENT_DIR="${COMPONENT%%/*}"
          echo "Changed Dir => $COMPONENT_DIR"
          cd $COMPONENT_DIR
          echo "Install Helm and Upload Chart If Exists"
          curl -L https://git.io/get_helm.sh | bash
          helm init --client-only
But the workflow fails, stating that the container stopped immediately.
I have tried many images, including the "alpine:3.8" image described in the official documentation, but the container still stops.
According to Workflow syntax for GitHub Actions, in the Container section: "A container to run any steps in a job that don't already specify a container." My assumption is that the container would be started and the steps would be run inside the Docker container.
We can achieve this by making a custom Docker image. The GitHub runner stops the container once its entrypoint command exits, so I made a Docker image whose entrypoint keeps the container alive, so it doesn't die right after starting.
Here is the custom Dockerfile (https://github.com/rizwan937/Helm-Image)
You can publish this image to Docker Hub and use it in the workflow file like this:
container:
  image: docker://rizwan937/helm
You can add an entrypoint like this to any Docker image so that it stays alive while the remaining steps execute.
This is a temporary solution; if anyone has a better one, let me know.
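For illustration, a minimal sketch of what such an image might look like; this is an assumption based on the description above, not the exact contents of the linked Dockerfile:
# Hypothetical image based on the helm image used in the workflow above
FROM alpine/helm:2.13.0
# Override the entrypoint so the container keeps running and the job's steps can execute inside it
ENTRYPOINT ["tail", "-f", "/dev/null"]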

CircleCI branch build failing but tag build succeeds

I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN} :x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0.9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with "Couldn't connect to Docker daemon at ... - is it running?" when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
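In other words, if the compose file builds the image reference from an environment variable, as in this hypothetical excerpt, an empty or malformed tag makes the image name invalid and can surface as the misleading daemon error described in that bug report:
# Hypothetical deployment/dev/docker-compose.yml excerpt
version: "3"
services:
  app:
    # if VERSION_NUM is empty or starts with ".", this image reference is invalid
    image: "myregistry/myapp:${VERSION_NUM}"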
The problem was occurring in the Save Version Number step. Sometimes the version would be .${CIRCLE_BUILD_NUM} because no tag was passed, and Docker rejects image tags that start with ".". So I added a conditional check: if CIRCLE_TAG is empty, fall back to a default version, v0.1.0-build, as sketched below.
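A sketch of what that guarded step could look like (the exact fallback naming is an assumption; the final config wasn't shared):
- run:
    name: Save Version Number
    command: |
      # Fall back to a default version when no tag is present,
      # so the resulting image tag never starts with "."
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi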

travis script deployment timeout

I have the following deploy section in my .travis.yml
deploy:
  provider: script
  script: bash scripts/deploy.sh
  skip_cleanup: true
  on:
    all_branches: true
The problem is that bash scripts/deploy.sh can take anywhere between 7 and 10 minutes, which means it occasionally goes over the 10-minute timeout that Travis applies by default. But not to worry: Travis offers travis_wait. Here is my updated .travis.yml:
deploy:
  provider: script
  script: travis_wait 30 bash scripts/deploy.sh
  skip_cleanup: true
  on:
    all_branches: true
The problem is that this fails with Script failed with status 127.
Is it possible to use travis_wait within script deployment?
Status 127 usually means "command not found", so travis_wait doesn't appear to be available during the deploy phase. I worked around this by wrapping my deploy command (npm run deploy) in a simple script:
#!/bin/bash
npm run deploy &
export PID=$!
# Print something every 9 minutes so Travis doesn't kill the job for inactivity
while [[ `ps -p $PID | tail -n +2` ]]; do
  echo 'Deploying'
  sleep 540
done
# Propagate the deploy command's exit status so a failed deploy fails the build
wait $PID
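The wrapper can then be called from the deploy section in place of the long-running command itself (the script name here is hypothetical):
deploy:
  provider: script
  # wrap_deploy.sh is a stand-in name for the wrapper script above
  script: bash scripts/wrap_deploy.sh
  skip_cleanup: true
  on:
    all_branches: true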
