I tried following this guide to build a docker image using Kaniko in GitLab.
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
  rules:
    - if: $CI_COMMIT_TAG
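The `auth` value that the `echo` line writes into `/kaniko/.docker/config.json` is just `user:password` base64-encoded. It can be reproduced and checked locally with placeholder credentials (the values below are made up):

```shell
# Compose the auth entry the same way the CI line does (placeholder credentials).
CI_REGISTRY_USER="gitlab-ci-token"
CI_REGISTRY_PASSWORD="example-password"
auth=$(printf "%s:%s" "$CI_REGISTRY_USER" "$CI_REGISTRY_PASSWORD" | base64 | tr -d '\n')
echo "$auth"
# Decoding it should round-trip back to user:password.
echo "$auth" | base64 -d
```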
I have a use case where I need to create a container from the newly built image and run a Python unit test against it as part of the same pipeline. In other words, I want to run a docker command against the built image by starting a container from it. Here is a related Stack Overflow post, but it was not answered - link
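Kaniko itself only builds and pushes images; it has no daemon, so it cannot start containers. A common pattern is to let the Kaniko job push the image first and then run the tests in a follow-up job that uses the freshly pushed image as its job image. A sketch (the stage name and test command are assumptions, not part of the original pipeline):

```yaml
test:
  stage: test                                       # runs after the Kaniko build job
  image: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"    # the image Kaniko just pushed
  script:
    - python -m unittest discover                   # assumed test command
  rules:
    - if: $CI_COMMIT_TAG
```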
This is my first time trying CI to Google Cloud from GitLab. So far the journey has been very painful, but I think I'm getting closer.
I followed the instructions from:
https://medium.com/google-cloud/deploy-to-cloud-run-using-gitlab-ci-e056685b8eeb
and adapted the .gitlab-ci.yml and the cloudbuild.yaml to my needs.
After several tryouts, I finally managed to set up all the roles, permissions and service accounts. But no luck building my Dockerfile into Container Registry or Artifact Registry.
This is the failure log from the GitLab job:
Running with gitlab-runner 14.6.0~beta.71.gf035ecbf (f035ecbf)
on green-3.shared.runners-manager.gitlab.com/default Jhc_Jxvh
Preparing the "docker+machine" executor
Using Docker executor with image google/cloud-sdk:latest ...
Pulling docker image google/cloud-sdk:latest ...
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
Preparing environment
Running on runner-jhcjxvh-project-32231297-concurrent-0 via runner-jhcjxvh-shared-1641939667-f7d79e2f...
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/ProjectsD/node-projects/.git/
Created fresh repository.
Checking out 1f1e41f0 as dev...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
$ echo $GCP_SERVICE_KEY > gcloud-service-key.json
$ gcloud auth activate-service-account --key-file=gcloud-service-key.json
Activated service account credentials for: [gitlab-ci-cd@pdnodejs.iam.gserviceaccount.com]
$ gcloud config set project $GCP_PROJECT_ID
Updated property [core/project].
$ gcloud builds submit . --config=cloudbuild.yaml
Creating temporary tarball archive of 47 file(s) totalling 100.8 MiB before compression.
Some files were not included in the source upload.
Check the gcloud log [/root/.config/gcloud/logs/2022.01.11/22.23.29.855708.log] to see which files and the contents of the
default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn
more).
Uploading tarball of [.] to [gs://pdnodejs_cloudbuild/source/1641939809.925215-a19e660f1d5040f3ac949d2eb5766abb.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/pdnodejs/locations/global/builds/577417e7-67b9-419e-b61b-f1be8105dd5a].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/577417e7-67b9-419e-b61b-f1be8105dd5a?project=484193191648].
gcloud builds submit only displays logs from Cloud Storage. To view logs from Cloud Logging, run:
gcloud beta builds submit
BUILD FAILURE: Build step failure: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build 577417e7-67b9-419e-b61b-f1be8105dd5a completed with status "FAILURE"
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
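Note that the GitLab log only says that step 1 (`gcr.io/cloud-builders/docker`) exited non-zero; the actual `docker build` error message lives in the Cloud Build log. It can be fetched with the build ID printed above, or viewed at the console URL from the job output:

```shell
# Stream the detailed log of the failed build (ID taken from the job output above)
gcloud builds log 577417e7-67b9-419e-b61b-f1be8105dd5a
```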
.gitlab-ci.yml
# file: .gitlab-ci.yml
stages:
  # - docker-build
  - deploy_dev

# docker-build:
#   stage: docker-build
#   image: docker:latest
#   services:
#     - docker:dind
#   before_script:
#     - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
#   script:
#     - docker build --pull -t "$CI_REGISTRY_IMAGE" .
#     - docker push "$CI_REGISTRY_IMAGE"

deploy_dev:
  stage: deploy_dev
  image: google/cloud-sdk:latest
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # google cloud service accounts
    - gcloud auth activate-service-account --key-file=gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/node-projects', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/node-projects']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/node-projects', '--region', 'us-central4', '--platform', 'managed', '--allow-unauthenticated']
options:
  logging: CLOUD_LOGGING_ONLY
Is there any other configuration I'm missing inside GCP, or is something wrong with my files?
😮💨
UPDATE: I finally got it to work.
I started over from scratch, and now I achieve a correct deploy.
.gitlab-ci.yml
stages:
  - build
  - push

default:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY

docker-build:
  stage: build
  only:
    refs:
      - main
      - dev
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists
  interruptible: true
  environment:
    name: build/$CI_COMMIT_REF_NAME
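The branch-to-tag logic in the docker-build job can be factored into a small shell function and checked locally (a sketch; the function name and argument order are my own):

```shell
# Reproduce the tag-selection logic from the docker-build job.
compute_tag() {
  # $1 = current branch, $2 = default branch, $3 = ref slug
  if [ "$1" = "$2" ]; then
    echo ""        # default branch -> no suffix, i.e. the 'latest' image
  else
    echo ":$3"     # any other branch -> :<ref-slug> suffix
  fi
}

compute_tag "main" "main" "main"   # prints nothing (latest)
compute_tag "dev" "main" "dev"     # prints ":dev"
```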
push:
  stage: push
  only:
    refs:
      - main
      - dev
  script:
    - apk upgrade --update-cache --available
    - apk add openssl
    - apk add curl python3 py-crcmod bash libc6-compat
    - rm -rf /var/cache/apk/*
    - curl https://sdk.cloud.google.com | bash > /dev/null
    - export PATH=$PATH:/root/google-cloud-sdk/bin
    - echo $GCP_SERVICE_KEY > gcloud-service-key-push.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key-push.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud auth configure-docker us-central1-docker.pkg.dev
    - tag=":$CI_COMMIT_REF_SLUG"
    - docker pull "$CI_REGISTRY_IMAGE${tag}"
    - docker tag "$CI_REGISTRY_IMAGE${tag}" us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
    - docker push us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
  environment:
    name: push/$CI_COMMIT_REF_NAME
  when: on_success
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args:
      [
        'build',
        '-t',
        'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
        '.',
      ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      [
        'beta',
        'run',
        'deploy',
        'dreamslear',
        '--image',
        'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
        '--region',
        'us-central1',
        '--platform',
        'managed',
        '--port',
        '3000',
        '--allow-unauthenticated',
      ]
And that worked!
If someone wants to suggest an optimised workflow or any other advice, that would be great!
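One possible simplification (a sketch only, not tested against this project): authenticate Docker to Artifact Registry directly in the build job using the `_json_key` login mechanism, and push there in one step. That removes the pull/retag/push round-trip through the GitLab registry in the push stage:

```yaml
docker-build-and-push:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json
    # Docker can log in to Artifact Registry directly with a service-account key
    - cat gcloud-service-key.json | docker login -u _json_key --password-stdin https://us-central1-docker.pkg.dev
    - docker build --pull -t us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app:$CI_COMMIT_REF_SLUG .
    - docker push us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app:$CI_COMMIT_REF_SLUG
```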
Job getting success status when docker build failed
CI sample:
.release:build:template: &release_builder
  image: docker:stable
  stage: release:build
  services:
    - docker:dind
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
  after_script:
    - export DOCKER_DEPLOY_IMAGE=$DOCKER_IMAGE_REPO:$(cat DOCKER_TAG)
    - docker build --build-arg DOCKER_BASE_IMAGE --build-arg DOCKER_FRONT_IMAGE --build-arg DOCKER_VENDOR_IMAGE --compress -t $DOCKER_DEPLOY_IMAGE . -f .deploy/docker/$CI_ENVIRONMENT_NAME/Dockerfile; echo $?
    - docker push $DOCKER_DEPLOY_IMAGE
  artifacts:
    paths:
      - DOCKER_TAG

.release_build_prod:
  <<: *release_builder
  only:
    - tags
  script:
    - echo $CI_COMMIT_TAG>DOCKER_TAG
  environment:
    name: prod
I was trying to fix this with `docker build ... || exit 1`, and also `echo $?` after the build shows nothing; these steps didn't help me resolve the issue.
So how can I check whether the `docker build` command in GitLab finished without errors, and FAIL the job when it got an error?
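The root cause here is that the `docker build` runs in `after_script`. GitLab runs `after_script` after the job's result is already decided, and a non-zero exit code there never fails the job. Moving the build and push into `script` makes any build failure fail the job. A sketch based on the template above (the build-args are omitted for brevity):

```yaml
.release_build_prod:
  <<: *release_builder
  only:
    - tags
  script:
    - echo $CI_COMMIT_TAG > DOCKER_TAG
    - export DOCKER_DEPLOY_IMAGE=$DOCKER_IMAGE_REPO:$(cat DOCKER_TAG)
    # In `script`, a non-zero exit from docker build fails the job immediately
    - docker build --compress -t $DOCKER_DEPLOY_IMAGE . -f .deploy/docker/$CI_ENVIRONMENT_NAME/Dockerfile
    - docker push $DOCKER_DEPLOY_IMAGE
  environment:
    name: prod
```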
I have built a docker image successfully and tagged it as testdock:latest ($CI_REGISTRY_IMAGE:latest); the $CI_REGISTRY variable is kept in the GitLab project variables.
I have another stage to scan the testdock image using Trivy:
the process just gets stuck without progress. My guess is that the image cannot be found, or that something is wrong with the docker environment in GitLab.
Where is the `docker image (testdock)` stored?
This is the command I used for Trivy to scan the testdock image:
$ TRIVY_INSECURE=true trivy --skip-update --output "$CI_PROJECT_DIR/scanning-report.json" $CI_REGISTRY_IMAGE:latest
The YAML:
build:
  stage: build
  image: $CI_REGISTRY/devops/docker:latest
  services:
    - $CI_REGISTRY/devops/docker:dind-nx1.0
  #tags:
  #  - docker
  variables:
    # No need to clone the repo, we exclusively work on artifacts. See
    # https://docs.gitlab.com/ee/ci/runners/README.html#git-strategy
    TRIVY_USERNAME: "$CI_REGISTRY_USER"
    TRIVY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    TRIVY_AUTH_URL: "$CI_REGISTRY"
    FULL_IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # Tell docker CLI how to talk to Docker daemon.
    DOCKER_HOST: tcp://localhost:2375/
    # Use the overlayfs driver for improved performance.
    DOCKER_DRIVER: overlay2
    # Disable TLS since we're running inside local network.
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build -t $FULL_IMAGE_NAME .
    # - docker push $CI_REGISTRY_IMAGE:latest

security_scan:
  stage: test
  image:
    name: $CI_REGISTRY/devops/trivy/trivy:0.20.1
    entrypoint: [""]
  services:
    - $CI_REGISTRY/devops/docker:dind-nx1.0
  #tags:
  #  - docker
  variables:
    # No need to clone the repo, we exclusively work on artifacts. See
    # https://docs.gitlab.com/ee/ci/runners/README.html#git-strategy
    # GIT_STRATEGY: none
    TRIVY_USERNAME: "$CI_REGISTRY_USER"
    TRIVY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    TRIVY_AUTH_URL: "$CI_REGISTRY"
    FULL_IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # Tell docker CLI how to talk to Docker daemon.
    DOCKER_HOST: tcp://localhost:2375/
    # Use the overlayfs driver for improved performance.
    DOCKER_DRIVER: overlay2
    # Disable TLS since we're running inside local network.
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - git config --global http.sslVerify false
    - git clone $CI_REPOSITORY_URL
    - echo "the project directory is - $CI_PROJECT_DIR"
    - echo "the CI_REGISTRY_IMAGE variable is - $CI_REGISTRY_IMAGE"
    - echo "the full image name is - $FULL_IMAGE_NAME"
    - ls -la
    - trivy -h | grep cache
    - mkdir -p /root/.cache/trivy/db
    - ls -la
    - cp "eval-trivy-2/trivy-offline.db.tgz" "/root/.cache/trivy/db"
    - cd /root/.cache/trivy/db
    - tar xvf trivy-offline.db.tgz
    - ls -la
  script:
    - trivy --version
    - time trivy image --clear-cache
    # running 1 hr and stopped.
    #- TRIVY_INSECURE=true trivy --skip-update $CI_REGISTRY_IMAGE:latest
    #- TRIVY_INSECURE=true trivy --skip-update -f json -o scanning-report.json $CI_REGISTRY/devops/aquasec/trivy:0.16.0
    - TRIVY_INSECURE=true trivy --skip-update -o "$CI_PROJECT_DIR/scanning-report.json" $FULL_IMAGE_NAME
    # keeps loading when using testdock:latest
    #- TRIVY_INSECURE=true trivy --skip-update -o "$CI_PROJECT_DIR/scanning-report.json" testdock:latest
    #- TRIVY_INSECURE=true trivy --skip-update --exit-code 1 --severity CRITICAL $CI_REGISTRY/devops/aquasec/trivy:0.16.0
  artifacts:
    when: always
    reports:
      container_scanning: scanning-report.json
All jobs run isolated. Therefore job A normally does not know what job B produced, unless you tell the job specifically to pass things on to the next job with the artifacts directive.
In your case you build your image in one job, but since you did not push it, it is just throwaway data and is lost by the next stage. The easiest way is to push it to a docker registry and use it from there. E.g. a common practice is to tag it with the commit SHA instead of latest. This way you can ensure you are always hitting the right image.
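In shell terms the pattern looks like this (registry path and SHA below are placeholders):

```shell
# Tag images with the commit SHA so every job resolves the exact same build.
CI_REGISTRY_IMAGE="registry.example.com/group/project"   # placeholder
CI_COMMIT_SHA="4f2a9b7c0d1e"                             # placeholder
FULL_IMAGE_NAME="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
echo "$FULL_IMAGE_NAME"   # prints registry.example.com/group/project:4f2a9b7c0d1e
# build job: docker build -t "$FULL_IMAGE_NAME" . && docker push "$FULL_IMAGE_NAME"
# scan job:  trivy image "$FULL_IMAGE_NAME"   # resolved via the registry, not a local daemon
```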
Final .gitlab-ci.yml, which works well now:
variables:
  # Tell docker CLI how to talk to Docker daemon.
  DOCKER_HOST: tcp://localhost:2375/
  # Use the overlayfs driver for improved performance.
  DOCKER_DRIVER: overlay2
  # Disable TLS since we're running inside local network.
  DOCKER_TLS_CERTDIR: ""

services:
  - $CI_REGISTRY/devops/docker:dind-nx1.0

stages:
  - build
  - test

#include:
#  # Trivy integration with GitLab Container Scanning
#  - remote: "https://github.com/aquasecurity/trivy/raw/master/contrib/Trivy.gitlab-ci.yml"

build:
  image: $CI_REGISTRY/devops/docker:latest
  stage: build
  variables:
    IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  script:
    - docker info
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t $IMAGE .
    - docker tag $IMAGE $CI_REGISTRY/$IMAGE
    - docker push $CI_REGISTRY/$IMAGE

Trivy_container_scanning:
  stage: test
  image:
    name: $CI_REGISTRY/devops/trivy/trivy:0.20.1
  variables:
    # Override the GIT_STRATEGY variable in your `.gitlab-ci.yml` file and set it to `fetch` if you want to provide a `clair-whitelist.yml`
    # file. See https://docs.gitlab.com/ee/user/application_security/container_scanning/index.html#overriding-the-container-scanning-template
    # for details
    GIT_STRATEGY: none
    IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  allow_failure: true
  before_script:
    - trivy image --reset
    - git config --global http.sslVerify false
    - git clone $CI_REPOSITORY_URL
    - echo "the project directory is - $CI_PROJECT_DIR"
    - echo "the registry image is - $CI_REGISTRY_IMAGE"
    - ls -la
    - trivy -h | grep cache
    - mkdir -p /root/.cache/trivy/db
    - ls -la
    - cp "eval-trivy-4/trivy-offline.db.tgz" "/root/.cache/trivy/db"
    - cd /root/.cache/trivy/db
    - tar xvf trivy-offline.db.tgz
    - ls -la
    #- export TRIVY_VERSION=${TRIVY_VERSION:-v0.19.2}
    #- apk add --no-cache curl docker-cli
    #- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    #- curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin ${TRIVY_VERSION}
    #- curl -sSL -o /tmp/trivy-gitlab.tpl https://github.com/aquasecurity/trivy/raw/${TRIVY_VERSION}/contrib/gitlab.tpl
  script:
    - TRIVY_INSECURE=true trivy image --skip-update -f json -o "$CI_PROJECT_DIR/gl-container-scanning-report.json" $CI_REGISTRY/$IMAGE
    # unable to write results: failed to initialize template writer: error retrieving template from path: open /tmp/trivy-gitlab.tpl: no such file or directory
    #- TRIVY_INSECURE=true trivy image --skip-update --format template --template "@/tmp/trivy-gitlab.tpl" -o gl-container-scanning-report.json $CI_REGISTRY/$IMAGE
    # scan error
    #- trivy --skip-update --format template --template "@/tmp/trivy-gitlab.tpl" -o gl-container-scanning-report.json $CI_REGISTRY/$IMAGE
    #- trivy --exit-code 0 --cache-dir .trivycache/ --no-progress --format template --template "@/tmp/trivy-gitlab.tpl" -o gl-container-scanning-report.json $IMAGE
  # cache:
  #   paths:
  #     - .trivycache/
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report.json
Reference (modified for my environment):
https://gitlab.com/aquasecurity/trivy-ci-test/-/blob/master/.gitlab-ci.yml
I am trying to run a GitLab CI job with Anchore Engine to scan a docker image. The command in the script section fails with a permission denied error. I found out the command requires root permissions. Sudo is not installed in the docker image I'm using as the job image, and only the non-sudo user anchore exists in the container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" == "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at the `ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools` line in the script section.
I have tried adding a user in the entrypoint section:
image:
  name: anchore/anchore-engine:latest
  entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have tried running the container locally on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I simulate the same in a GitLab CI job?
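One way to sidestep the need for root entirely is to install the helper into a directory the unprivileged anchore user can write to, such as $HOME/bin, and put that on PATH instead of symlinking into the root-owned /usr/local/bin. A sketch of the technique (the here-doc below is a placeholder standing in for the real curl download of anchore_ci_tools.py):

```shell
# Install tooling into a user-writable directory instead of /usr/local/bin.
mkdir -p "$HOME/bin"
# Placeholder script; in the real job this would be:
#   curl -o "$HOME/bin/anchore_ci_tools" https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
cat > "$HOME/bin/anchore_ci_tools" <<'EOF'
#!/bin/sh
echo "anchore_ci_tools placeholder: $@"
EOF
chmod +x "$HOME/bin/anchore_ci_tools"
export PATH="$HOME/bin:$PATH"
anchore_ci_tools --setup   # now resolves without touching root-owned paths
```

If you control the runner, forcing the container to run as root at the runner level is another avenue, but that requires runner configuration rather than anything in .gitlab-ci.yml.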