Gitlab pipeline fails, even though deployment happened on GCP - docker

I just created my first CI/CD pipeline on GitLab. It builds a Docker container for a Next.js app and deploys it to Google Cloud Run.
My cloudbuild.yaml:
# File: cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/inook-web', '.']
# push the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/inook-web']
# deploy to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'inook-web', '--image', 'gcr.io/$PROJECT_ID/inook-web', '--region', 'europe-west1', '--platform', 'managed', '--allow-unauthenticated']
My .gitlab-ci.yml:
# File: .gitlab-ci.yml
image: docker:latest
stages: # List of stages for jobs, and their order of execution
  - deploy-test
  - deploy-prod
deploy-test:
  stage: deploy-test
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
I get the following error message in the CI/CD pipeline:
https://ibb.co/ZXLWrj1
However, the deployment actually succeeds on GCP: https://ibb.co/ZJjtXzG
Any idea what I can do to fix the pipeline error?

What worked for me was to add a custom bucket for gcloud builds submit to push logs to. Thanks @slauth for pointing me in the right direction.
Updated command:
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://inook_test_logs

If you add a log bucket at the end of the command, it works.
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://my_bucket_name_on_gcp
Remember to create a bucket on GCP :D
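For reference, the bucket can be created up front with gsutil (the bucket name and region below are placeholders, not from the thread):
# placeholder name and region; bucket names must be globally unique
gsutil mb -p $GCP_PROJECT_ID -l europe-west1 gs://my_bucket_name_on_gcp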

Related

How to set Bitbucket Pipelines to be manually triggered?

I wrote a pipeline in the Bitbucket environment, but I would like the pipeline to be triggered only when a user runs it, not automatically on push or commit.
Here is the code:
pipelines:
  branches:
    new_ui_apk:
      - step:
          name: Build apk
          size: 2x
          script:
            - JAVA_OPTS="-Xmx2048m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
            - docker build -t app-release:1.0.0 .
          services:
            - docker
definitions:
  services:
    docker:
      memory: 7128
Currently I use the skip-ci trick to avoid it, but if another team member pushes or commits any change, the pipeline will run. How else can I avoid this, please?
If you put the definition under the "custom" property, it stops listening to branch pushes and only runs when a user triggers it.
Use this:
pipelines:
  custom:
    new_ui_apk:
      - step:
          name: Build apk
          size: 2x
          script:
            - JAVA_OPTS="-Xmx2048m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
            - docker build -t app-release:1.0.0 .
          services:
            - docker
definitions:
  services:
    docker:
      memory: 7128
The answer above is not ideal; you only need to add trigger: manual:
- step:
    image: XXX
    name: XXXX
    deployment: XXXX
    trigger: manual
    script:
      - whatever....
An option to run it will then be shown inside the pipeline options.

Authentication Error when Building and Pushing docker image to ACR using Azure DevOps Pipelines and docker-compose

I am trying to build and push a docker image to ACR using Azure DevOps pipelines. I have to build it with a docker-compose.yml file to be able to use openvpn in the container.
When I run the pipeline I get the following error. Does anyone have an idea of how to solve this?
Starting: DockerCompose
==============================================================================
Task : Docker Compose
Description : Build, push or run multi-container Docker applications. Task can be used with Docker or Azure Container registry.
Version : 0.183.0
Author : Microsoft Corporation
Help : https://aka.ms/azpipes-docker-compose-tsg
==============================================================================
/usr/local/bin/docker-compose -f /home/vsts/work/1/s/src/docker-compose.yml -f /home/vsts/agents/2.188.2/.docker-compose.1624362077551.yml -p Compose up -d
Creating network "composeproject_default" with the default driver
Pulling getstatus (***/getstatus:)...
Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]Creating network "composeproject_default" with the default driver
##[error]Pulling getstatus (***/getstatus:)...
##[error]Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]The process '/usr/local/bin/docker-compose' failed with exit code 1
Finishing: DockerCompose
My azure-pipelines.yml looks like this:
# Docker
# Build and push an image to Azure Container Registry
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
- main
resources:
- repo: self
variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: '*****************************'
  imageRepository: 'getstatus'
  containerRegistry: 'composeproject.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: DockerCompose@0
      inputs:
        containerregistrytype: 'Azure Container Registry'
        dockerComposeFile: '**/docker-compose.yml'
        action: 'Run a Docker Compose command'
        dockerComposeCommand: 'up -d'
And the docker-compose.yml looks like this:
version: "3.3"
services:
  getstatus:
    image: composeproject.azurecr.io/getstatus
    restart: always
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    volumes:
      - /etc/timezone:/etc/timezone:ro
I think your Docker Compose task is missing a couple of parameters.
Try adding azureContainerRegistry: composeproject.azurecr.io
and azureSubscriptionEndpoint: $(dockerRegistryServiceConnection).
Not sure why the credentials supplied in the Docker@2 task don't persist since they're in the same stage, but then I could fill an encyclopedia with what I'm not sure about when it comes to Azure Pipelines.
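Applied to the task above, the suggested inputs would look roughly like this (a sketch of the answer's proposal; it assumes, as the answer does, that the same service connection variable works as the subscription endpoint):
    - task: DockerCompose@0
      inputs:
        containerregistrytype: 'Azure Container Registry'
        azureSubscriptionEndpoint: $(dockerRegistryServiceConnection)
        azureContainerRegistry: composeproject.azurecr.io
        dockerComposeFile: '**/docker-compose.yml'
        action: 'Run a Docker Compose command'
        dockerComposeCommand: 'up -d'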

Preferred way to Build/Test/Deploy docker images in GitLab CI/CD

I am trying to build a CI/CD pipeline in GitLab. The goal is to build a docker image from a Dockerfile, run tests on the running container, push the image to DockerHub, then deploy it to a Kubernetes cluster. This is what I currently have for my gitlab-ci.yml.
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2
  CONTAINER_IMAGE: ${DOCKER_USER}/my_app
services:
  - docker:19.03.12-dind
build:
  image: docker:19.03.12
  stage: build
  script:
    - echo ${DOCKER_PASSWORD} | docker login --username ${DOCKER_USER} --password-stdin
    - docker pull ${CONTAINER_IMAGE}:latest || true
    - docker build --cache-from ${CONTAINER_IMAGE}:latest --tag ${CONTAINER_IMAGE}:$CI_COMMIT_SHA --tag ${CONTAINER_IMAGE}:latest .
    - docker push ${CONTAINER_IMAGE}:$CI_COMMIT_SHA
    - docker push ${CONTAINER_IMAGE}:latest
deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl get pods -A # <- Won't work until I pass a Kubeconfig file with cluster details
I have a few main questions:
1. How can I deploy this image? I know I need to pass a KUBECONFIG file to bitnami/kubectl, but I am not sure how to do that with GitLab CI/CD (see the sketch after this list).
2. Can I pass the built image to a test stage before pushing it to DockerHub?
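For question 1, a minimal sketch of one common approach (not from this thread): store the kubeconfig in a GitLab CI/CD variable of type "File" (KUBECONFIG_FILE below is a hypothetical name) and point KUBECONFIG at the path GitLab writes it to:
deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  script:
    # KUBECONFIG_FILE is a hypothetical File-type CI/CD variable; GitLab saves
    # its value to a temp file and exposes that file's path in the variable
    - export KUBECONFIG="$KUBECONFIG_FILE"
    - kubectl get pods -A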
---
stages:
  - test app
  - build
  - test
  - deploy
test app:
  stage: test app
  image: node:latest
  script:
    - git clone (path to code)
    - npm install
    - npm run lint
    - npm audit fix
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
test image:
  stage: test
  image: anchore:latest # one you have built yourself, or use another testing suite
  script:
    - anchore-cli image add user/image:v1
    - anchore-cli image wait user/image:v1
    - anchore-cli image content user/image:v1
    - anchore-cli image vuln user/image:v1 all
    - anchore-cli evaluate check user/image:v1 > result.txt
    - if [ "$(grep -ci 'fail' result.txt)" -ge 1 ]; then exit 1; fi
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
deploy image:
  image:
    name: kubectl:latest # build your own image with kubectl installed
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build image
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=my-service
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
1. Have variables passed in for the cluster address, cert data, and token, so you can target other clusters: pre-prod, prod, QA...
2. You can't test an image that isn't in a repo, as the testing suite needs to pull the image from somewhere. You should have a clean-up script running to clean up old images in your repo anyway, so the initial push should be to a test location,
like: docker push untrusted/image:v1
You should also have before_script and after_script blocks: before_script calls docker login and after_script calls docker logout, as sketched below.
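A minimal sketch of that login/logout pairing, reusing the credential variables from the question:
before_script:
  - echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin
after_script:
  - docker logout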
I do not have an answer for deploying to Kubernetes, but I do recommend publishing a test image to DockerHub when working on a merge request/development branch, and only deploying the latest image when you merge the branch to master.
---
stages:
  - build
  - test
  - deploy
build image:
  stage: build
  script:
    - docker build -t your_image:test .
    - docker push your_image:test
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
test image:
  stage: test
  image: your_image:test
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
---
stages:
  - build
  - test
  - deploy
build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
test image:
  stage: test
  image: your_image:$CI_COMMIT_REF_NAME
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'
deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
    # recover the source branch name from the merge commit title
    - export BRANCH=${CI_COMMIT_TITLE#*\'}; export BRANCH=${BRANCH%\' into*}
    # remove the branch-tagged image locally; deleting it from a remote registry needs the registry's API
    - docker rmi your_image:$BRANCH
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'

How to use docker image from build step in deploy step in CircleCI 2.0?

I have been struggling for the last few days to migrate from CircleCI 1.0 to 2.0, and while the build process works, deployment is still a big issue. The CircleCI documentation is not really much help.
Here is a similar config.yml to what I have:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
The issue is in the deploy job. I have to specify the docker: image key, but I want to reuse the environment from the build job, where all the required stuff is already installed. Surely I could just install it in the deploy job, but having multiple deploy jobs leads to code duplication, which is something I do not want.
You probably want to persist to a workspace and attach it in your deploy job.
You won't need to use '- checkout' after that.
https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
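One caveat worth flagging (not from the answer): the workspace persists files, not the images living in the remote Docker engine, so the deploy job may still not see the project image. A common workaround, sketched below, is to docker save the image into the workspace during build and docker load it during deploy (the deploy job then also needs a setup_remote_docker step):
      # in the build job, after the Build step:
      - run:
          name: Save image into workspace
          command: docker save -o project.tar project
      # in the deploy job, after attach_workspace and setup_remote_docker:
      - run:
          name: Load image
          command: docker load -i project.tar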
If you label the image built by the build stage, you can then reference it in the deploy stage: https://docs.docker.com/compose/compose-file/#labels

Kubernetes deployment.extensions not found

I get the following error message in my Gitlab CI pipeline and I can't do anything with it. Yesterday the pipeline still worked, but I didn't change anything in the yml and I don't know where I made the mistake. I also reset my code to the last working commit, but the error still occurs.
$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
Error from server (NotFound): deployments.extensions "ft-backend" not found
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
    - kubectl apply -f deployment.yml
I suppose that when you invoke the command:
kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
the deployment ft-backend does not exist in your cluster. Does the command kubectl get deployment ft-backend return the same result?
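If so, one sketch of a fix implied by the job above: run kubectl apply before kubectl set image, so the deployment exists by the time it is patched:
    - kubectl apply -f deployment.yml
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}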
Use this command to create deployments; the old way (kubectl run) is not supported for this in newer versions:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
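Applied to this question, that would be something like the following one-off (hypothetical; normally the kubectl apply -f deployment.yml step creates the deployment):
$ kubectl create deployment ft-backend --image=registry.gitlab.com/projectX/ft-backend:latest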
