I am trying to teach my Gitlab Runner image to get custom builder images from my private Docker Registry (GCR running in the Google Cloud).
What did not work out?
I created a custom Gitlab Runner image with the ServiceAccount properly set. I started it in non-privileged mode but with the wormhole pattern (via docker.sock). On exec-ing into that container (which is based on gitlab/gitlab-runner:v11.3.0) I had to realise that I cannot run any docker commands in there (neither as root nor as gitlab-user). How the gitlab-runner starts the builder containers afterwards is way above my cognitive capabilities. ;)
# got started via eu.gcr.io/my-project/gitlab-runner:0.0.5 which got taught the GCR credentials
stages:
  - build

build:
  image: eu.gcr.io/my-project/gitlab-builder-docker:0.0.2
  stage: build
  script:
    # only for test if I have access to private docker registry
    - docker pull eu.gcr.io/my-project/gitlab-builder-docker:0.0.1
What worked out?
According to this tutorial you can authenticate in a before_script block in your .gitlab-ci.yml files. That worked out.
# got started via gitlab/gitlab-runner:v11.3.0
stages:
  - build

before_script:
  - apk add --update curl python which bash
  - curl -sSL https://sdk.cloud.google.com | bash
  - export PATH="$PATH:/root/google-cloud-sdk/bin"
  - gcloud components install docker-credential-gcr
  - gcloud auth activate-service-account --key-file=/key.json
  - gcloud auth configure-docker --quiet

build:
  image: docker:18.03.1-ce
  stage: build
  script:
    # only for test if I have access to private docker registry
    - docker pull eu.gcr.io/my-project/gitlab-builder-docker:0.0.1
The Question
This means that I have to do this (install gcloud & authenticate) in each build run - I would prefer to have this done once in the gitlab-runner image. Do you have an idea how to achieve this?
Finally I found a way to get this done.
Teach the vanilla gitlab-runner how to pull from your private GCR Docker Repo
GCP
Create a service account with no permissions in IAM & Admin
Download the JSON key
Add permissions in the Storage Browser:
Select the bucket holding your images (e.g. eu.artifacts.my-project.appspot.com)
Grant the Storage Object Admin permission to the service account
Local Docker Container
Launch a library/docker container and exec into it (using the Docker wormhole pattern, i.e. the docker.sock volume mount)
Log in to GCR via the following command (check the URL of your repo; in my case it is located in Europe, hence the eu prefix in the URL):
docker login -u _json_key --password-stdin https://eu.gcr.io < /etc/gitlab-runner/<MY_KEY>.json
Verify that it works via a docker pull <MY_GCR_IMAGE>
Copy the content of ~/.docker/config.json
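For reference, the relevant part of ~/.docker/config.json typically looks roughly like this (assuming no credential helper is configured, so the token is stored inline; the key of the auths entry depends on the URL you used for docker login):
{
  "auths": {
    "https://eu.gcr.io": {
      "auth": "<TOKEN-FROM-DOCKER-CONFIG-FILE>"
    }
  }
}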
Gitlab config.toml configuration
Add the following into your config.toml file
[[runners]]
environment = ["DOCKER_AUTH_CONFIG={ \"auths\": { \"myregistryurl.com:port\": { \"auth\": \"<TOKEN-FROM-DOCKER-CONFIG-FILE>\" } } }"]
Vanilla Gitlab Runner Container
Run the runner, e.g. like this:
docker run -it \
--name gitlab-runner \
--rm \
-v <FOLDER-CONTAINING-GITLAB-RUNNER-CONFIG-FILE>:/etc/gitlab-runner:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:v11.3.0
Your .gitlab-ci.yml file
Verify that everything works via a .gitlab-ci.yml
Use an image which is located in your private GCP Container Registry.
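A minimal sketch (image names taken from the question above; if the job starts at all, the runner was able to pull the private image):
stages:
  - build

build:
  image: eu.gcr.io/my-project/gitlab-builder-docker:0.0.2
  stage: build
  script:
    - echo "the runner pulled the builder image from the private GCR"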
Teach your builder images how to push to your private GCR Docker Repo
GCP
Add permissions to your service account
Grant the Storage Legacy Bucket Reader permission to your service account in the Storage Browser
Custom Docker Builder Image
Add your service account key file to your custom image:
FROM docker:18.03.1-ce
ADD key.json /<MY_KEY>.json
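Build and push this builder image from a machine that is already authenticated against GCR, e.g. (tag taken from the question above):
docker build -t eu.gcr.io/my-project/gitlab-builder-docker:0.0.2 .
docker push eu.gcr.io/my-project/gitlab-builder-docker:0.0.2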
Your .gitlab-ci.yml file
Add the following command to your before_script section:
docker login -u _json_key --password-stdin https://eu.gcr.io < /key.json
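Put together, such a pipeline job might look roughly like this (a sketch; my-app is a hypothetical image name used only for illustration):
build:
  image: eu.gcr.io/my-project/gitlab-builder-docker:0.0.2
  stage: build
  before_script:
    - docker login -u _json_key --password-stdin https://eu.gcr.io < /key.json
  script:
    # my-app is a hypothetical image name
    - docker build -t eu.gcr.io/my-project/my-app:$CI_COMMIT_SHA .
    - docker push eu.gcr.io/my-project/my-app:$CI_COMMIT_SHA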
Final Thoughts
Now the vanilla gitlab-runner can pull your custom images from your private GCR Docker repo. Furthermore, those custom images are themselves capable of talking to your private GCR Docker repo and can, e.g., push the resulting images of your build pipeline.
That was quite complicated stuff. Maybe GitLab will enhance support for this use case in the future.
This example config worked for me in values.yaml:
config: |
  [[runners]]
    [runners.docker]
      image = "google/cloud-sdk:alpine"
    [runners.kubernetes]
      namespace = "{{.Release.Namespace}}"
      image = "google/cloud-sdk:alpine"
    [runners.cache]
      Type = "gcs"
      Path = "runner"
      Shared = true
      [runners.cache.gcs]
        BucketName = "runners-cache"
    [[runners.kubernetes.volumes.secret]]
      name = "service-account-credentials"
      mount_path = "keys"
      read_only = true
Where service-account-credentials is a Kubernetes secret containing credentials.json.
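If the secret does not exist yet, it can be created along these lines (the namespace and the key-file path are placeholders):
kubectl create secret generic service-account-credentials \
  --namespace <RUNNER-NAMESPACE> \
  --from-file=credentials.json=/path/to/credentials.json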
Then in .gitlab-ci.yml you can do:
gcloud auth activate-service-account --key-file=/keys/credentials.json
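For example, a minimal job sketch using the mounted secret (the job name and the project id are illustrative):
build:
  stage: build
  script:
    - gcloud auth activate-service-account --key-file=/keys/credentials.json
    - gcloud config set project <MY-PROJECT-ID>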
Hope it helps
Have you tried using Google Cloud Build?
I had the same problem and solved it like this:
echo ${GCR_AUTH_KEY} > key.json
gcloud auth activate-service-account --key-file key.json
gcloud auth configure-docker
gcloud builds submit . --config=cloudbuild.yaml --substitutions _CI_PROJECT_NAME=$CI_PROJECT_NAME,_CI_COMMIT_TAG=${CI_COMMIT_TAG},_CI_PROJECT_NAMESPACE=${CI_PROJECT_NAMESPACE}
cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    id: builder
    args:
      - 'build'
      - '-t'
      - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:$_CI_COMMIT_TAG'
      - '.'
  - name: gcr.io/cloud-builders/docker
    id: tag-runner-image
    args:
      - 'tag'
      - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:$_CI_COMMIT_TAG'
      - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:latest'
images:
  - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:$_CI_COMMIT_TAG'
  - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:latest'
Just use google/cloud-sdk:alpine as the image in the gitlab-ci stage.
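Put together, the corresponding .gitlab-ci.yml stage could look roughly like this (a sketch reusing the commands and the GCR_AUTH_KEY variable from above):
stages:
  - build

build:
  stage: build
  image: google/cloud-sdk:alpine
  script:
    - echo ${GCR_AUTH_KEY} > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud builds submit . --config=cloudbuild.yaml --substitutions _CI_PROJECT_NAME=$CI_PROJECT_NAME,_CI_COMMIT_TAG=${CI_COMMIT_TAG},_CI_PROJECT_NAMESPACE=${CI_PROJECT_NAMESPACE}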
Related
I work on a Spring Boot based project and use a local machine as a test environment to deploy it as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything between building and deploying. For this pipeline I make use of a self-hosted runner (Docker) that runs on the same machine and the same Docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker) and push the Docker image into my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since it runs inside a container itself, it was not running against the top-level Docker daemon.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give access to this file, unless there is a better solution.
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker and stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker
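For reference, this is roughly how the self-hosted runner container itself gets access to the host Docker daemon via the docker.sock mount mentioned above (the runner image name is a placeholder; the per-step containers do not automatically get this mount):
docker run -d \
  --name bitbucket-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  <BITBUCKET-RUNNER-IMAGE>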
I am trying to push a Docker image to Google Container Registry via the GitLab CI pipeline.
The image builds, but when it's time to push it to the registry I get the following error.
denied: Token exchange failed for project 'xxx-dev01-xxxxx'. Org
Policy Violated: 'eu' violates constraint
'constraints/gcp.resourceLocations'
.gitlab.yaml
deploy:dev:
  allow_failure: true
  extends:
    - .prod
  stage: Deploy
  image: google/cloud-sdk
  services:
    - docker:dind
  variables:
    IMAGE_TAG: "eu.gcr.io/$PROJECT_ID/testapp"
  before_script:
    - echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
    - gcloud config set project $DEV_PROJECT_ID
    - gcloud auth configure-docker
    - gcloud services enable containerregistry.googleapis.com
    - docker login -u _json_key --password-stdin https://eu.gcr.io < ${HOME}/gcloud-service-key.json
  script:
    - docker build . -t "$IMAGE_TAG"
    - docker push $IMAGE_TAG:latest
  when: manual
It seems to violate one of your organisation policies, the "resource location" one.
According to the documentation, it looks like your company is preventing you from storing data in this location (region):
https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations
You might want to try replacing eu.gcr.io with another registry from this list:
https://cloud.google.com/container-registry/docs/pushing-and-pulling#add-registry
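For example, assuming your organisation policy allows the US multi-region, the job above could be pointed at gcr.io instead of eu.gcr.io (a sketch showing only the changed lines):
variables:
  IMAGE_TAG: "gcr.io/$PROJECT_ID/testapp"
before_script:
  # ...same as before, but log in to the matching registry host
  - docker login -u _json_key --password-stdin https://gcr.io < ${HOME}/gcloud-service-key.json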
I'm trying to set up automatic publishing using Docker + Bitbucket Pipelines; unfortunately, I have a problem. I read the pipelines deploy instructions on Docker Hub, and I created the following template:
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=paweltest/tester:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t paweltest/tester .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push paweltest/tester:tagname
I have filled in the data, but after pushing, I get the following error when the build starts:
unable to prepare context: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such file or directory
What do I want to achieve? After pushing changes to the repository, I'd like an image to be automatically built and sent to Docker Hub, and preferably deployed to the target server where the application runs.
I've looked for a solution and tried different combinations. For now, I have about 200 commits with Failed status and no further ideas.
Bitbucket Pipelines is a CI/CD service: you can build your applications and deploy resources to a production or test server instance. You can build and deploy Docker images too - it shouldn't be a problem unless you do something wrong...
All scripts defined in the bitbucket-pipelines.yml file run in a container created from the indicated image (atlassian/default-image:2 in your case).
You should have a Dockerfile in the project, and from this file you can build and publish a Docker image.
I created a simple repository without a Dockerfile and started a build:
unable to prepare context: unable to evaluate symlinks in Dockerfile
path: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such
file or directory
I need a Dockerfile in my project to build an image (at the same level as the bitbucket-pipelines.yml file):
FROM node:latest
WORKDIR /src/
EXPOSE 4000
In the next step I created a public Docker Hub repository.
I also changed your bitbucket-pipelines.yml file (you forgot to tag the new image):
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t appngpl/stackoverflow-question-56065689 .
          # add new image tag
          - docker tag appngpl/stackoverflow-question-56065689 appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
Result:
Everything works fine :)
Bitbucket repository: https://bitbucket.org/krzysztof-raciniewski/stackoverflow-question-56065689
Docker Hub image repository: https://hub.docker.com/r/appngpl/stackoverflow-question-56065689
I have to run the already-configured sonar-scanner from Bitbucket. The thing is that I am new to all of these: Bitbucket, sonar-scanner and Docker, and I need to integrate them in a way that, for now, I can just run the sonar-scanner from Bitbucket, and later use more advanced analysis from sonar-scanner.
I tried to use a Docker image with sonar-scanner, but didn't manage to build it. So I got it from GitHub directly, but didn't manage to use it from Bitbucket.
I took a look at this thread, but it is using GitLab, even though it is similar to what I need:
Launching Sonar Scanner from a gitlab docker runner
bitbucket-pipelines.yml
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=emeraldsquad/sonar-scanner:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          #RETURNS ERROR - docker build -t $IMAGE_NAME .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push $IMAGE_NAME
I am having issues while pushing my Docker image to my private GCP registry.
I created a new service account with the Owner role in Google Cloud Platform. Then I created a service key and copied the content of the JSON file (that I downloaded for the service account) into the $GCP_SERVICE_KEY variable in GitLab CI/CD Variables.
This is my .gitlab-ci.yaml file:
image: python:3.6

stages:
  - push

before_script:
  - mkdir -p $HOME/.docker
  - echo "$GCP_SERVICE_KEY" >> "$HOME/.docker/config.json"

dockerpush:
  stage: push
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build --build-arg MONGODB_URI=$MONGODB_URI -t my_image_name .
    - docker login -u _json_key --password-stdin https://gcr.io < $HOME/.docker/config.json
    - docker tag my_image_name eu.gcr.io/my_project_id/my_image_name
    - docker push eu.gcr.io/my_project_id/my_image_name
When I check the console logs, I see "Login Succeeded", but I cannot push to my GCP registry. I checked the project ID and the roles of my user; everything seems okay. So why do I still see the "unauthorized" error?
$ docker login -u _json_key -p "$GCP_SERVICE_KEY" https://gcr.io
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker tag my_image_name eu.gcr.io/my_project_id/my_image_name
$ docker push eu.gcr.io/my_project_id/my_image_name
The push refers to repository
Preparing
Preparing
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials.
To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
You're logging in to https://gcr.io, but pushing to https://eu.gcr.io.
Update your docker login command to use https://eu.gcr.io.
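A minimal sketch of the corrected commands, based on the log output above (only the registry host changes):
- echo "$GCP_SERVICE_KEY" | docker login -u _json_key --password-stdin https://eu.gcr.io
- docker tag my_image_name eu.gcr.io/my_project_id/my_image_name
- docker push eu.gcr.io/my_project_id/my_image_name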