How to define private repo image in Gitlab Runner - docker

I have a gitlab runner registered with the docker+machine executor, and I have set up my `.gitlab-ci.yml` as below:
stages:
  - RUN_TESTS
  - CLEAN

image:
  name: <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG>
The image <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG> is in a private AWS ECR repository, and the job fails every time because the repository is private.
How can I configure this to pull this private image?
I got the ECR password using the aws ecr get-login-password --region us-east-2 command, and it gave me a password.
I also looked into the docker-credential-ecr-login tool and installed it on the runner instance. I configured the AWS credentials using aws configure, and the credentials are now at ~/.aws/credentials.
I also added the following block to ~/.docker/config.json:
"credHelpers": {
"<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
}
But when I try to docker pull <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG> it gives me the following error:
Error response from daemon: Get https://Account-ID.dkr.ecr.REGION.amazonaws.com/v2/spot-runner-image/manifests/latest: no basic auth credentials
Does this have anything to do with docker-credential-ecr-login?

You're getting that error because your runner (and/or the job) isn't authenticating with the ECR registry before it tries to pull an image. You'll need to provide the auth data in one of the following forms:
a $DOCKER_AUTH_CONFIG variable in the job (see the docs for info: https://docs.gitlab.com/ee/ci/variables/README.html#create-a-custom-variable-in-gitlab-ciyml)
a $DOCKER_AUTH_CONFIG project variable from your project's Settings -> CI/CD page (see the docs here: https://docs.gitlab.com/ee/ci/variables/#create-a-custom-variable-in-the-ui)
a $DOCKER_AUTH_CONFIG variable in your runners config.toml file (https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#configuring-a-runner)
The docs on setting the DOCKER_AUTH_CONFIG are here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry.
To determine what should be in the DOCKER_AUTH_CONFIG variable, you can log into your registry from your local machine with docker login example-registry.example.com --username my_user --password my_password. This will create a config.json file in the ~/.docker directory. However, on my mac the credentials are stored in my keychain, and the config.json file can't be used by Gitlab. In that case, you'll have to create the content manually. All this information and more is in the docs: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#determining-your-docker_auth_config-data
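For an ECR registry like the one above, the variable's value is simply the registry host mapped to a base64-encoded username:password pair; for ECR the username is the literal string AWS and the password is the output of aws ecr get-login-password. A minimal sketch of what that JSON could look like (all values are placeholders):

{
  "auths": {
    "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com": {
      "auth": "<base64 of AWS:<password from aws ecr get-login-password>>"
    }
  }
}

Keep in mind that ECR authorization tokens expire after about 12 hours, so a static DOCKER_AUTH_CONFIG has to be refreshed periodically; the credential-helper route avoids that.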

Related

Access image in GCP Artifact Registry from Gitlab CI/CD using a service-account key

I am trying to run a Gitlab CI job using an image from a private GCP Artifact Registry, i.e.:
build job:
  stage: build_job
  image: toto-tata-docker.pkg.dev/my-gcp/my-project/my-image:${IMAGE_VERSION}
  variables:
    ...
I have read in many places, including in the GitLab docs, that using a DOCKER_AUTH_CONFIG CI/CD variable could do the trick, and it is suggested to use
docker login toto-tata-docker.pkg.dev --username my_username --password my_password
and copy whatever is in ~/.docker/config.json after that.
Now my problem is that I am using a service account to authenticate to that Artifact Registry, and I have no idea how to find the username and password needed to generate the "right" ~/.docker/config.json... :'(
I tried using a Credential Helper as suggested in the GCP docs (that is actually what my ~/.docker/config.json looks like on my laptop when I docker login with the service account key):
{
  "credHelpers": {
    "toto-tata-docker.pkg.dev": "gcloud"
  }
}
but the job fails because of a denied permission:
...
denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on reso
...
It would be awesome if anyone could advise !!
Thank you !!
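One lead worth trying, echoing the GCR answer further down this page: Google's registries also accept the literal username _json_key with the raw service-account key file piped in as the password, which produces a ~/.docker/config.json you can copy into DOCKER_AUTH_CONFIG. An untested sketch (the key file path is a placeholder):

docker login -u _json_key --password-stdin https://toto-tata-docker.pkg.dev < my-service-account-key.json

The artifactregistry.repositories.downloadArtifacts error above is an IAM issue rather than a login-format issue, so the service account probably also needs roles/artifactregistry.reader on the repository or project.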

How to build and push a docker image in a Terraform Docker Provider by GCP Cloud Build

References
Terraform Docker Provider
Terraform Google Provider
GCP Cloud Build
Context Details
The deployment is done by CI/CD based on GCP Cloud Build (and the Cloud Build service account has an 'owner' role for the relevant projects).
Inside a 'cloudbuild.yaml' file there is a step with a 'hashicorp/terraform' worker and a command like 'terraform apply'.
Goal
To build and push a docker image into a GCP Artefact Registry, so that it can be used in a container optimised compute engine deployment in other TF resources.
Issue
As the Terraform Google Provider does not have resources to work with the Artefact Registry docker images, I have to use the Terraform Docker Provider.
The docker image is described as:
resource "docker_registry_image" "my_image" {
name = "europe-west2-docker.pkg.dev/${var.my_project_id}/my-docker-reg/my-vm-image:test"
build {
context = "${path.module}/image"
dockerfile = "Dockerfile"
}
}
According to the comment on Creating and pushing a docker image to a google cloud registry using terraform: "For pushing images, the only way to set credentials is to declare them at the provider level."
Therefore the registry_auth block is to be provided as described in the Terraform Docker Provider documentation.
On one hand, as described in the GCP Artefact Registry authentication documentation, "You do not need to configure authentication for Cloud Build." So I use this configuration (as the step is executed under the Cloud Build service account):
provider "docker" {
registry_auth {
address = "europe-west2-docker.pkg.dev"
}
}
and the Cloud Build job (terraform step) failed with an error:
Error: Error loading registry auth config: could not open config file from filePath: /root/.docker/config.json. Error: open /root/.docker/config.json: no such file or directory
with provider["registry.terraform.io/kreuzwerker/docker"],
on my-vm-image.tf line 6, in provider "docker":
6: provider "docker" {
as the Docker Provider apparently requires some credentials for authentication...
So, another option is to try an 'access token' as described in the comment and documentation.
The access token for the cloud build service account can be retrieved by a step in the cloud build yaml:
## Get Cloud Build access token
- id: "=> get CB access token =>"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'sh'
  args:
    - '-c'
    - |
      access_token=$(gcloud auth print-access-token) || exit 1
      echo ${access_token} > /workspace/access_token || exit 1
and later used in the TF step as a variable value:
...
access_token=$(cat /workspace/access_token)
...
terraform apply -var 'access_token=${access_token}' ....
...
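For the -var flag above to work, the Terraform configuration presumably declares a matching input variable; a minimal sketch:

variable "access_token" {
  type      = string
  sensitive = true
}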
So the Terraform Docker Provider is supposed to be configured according to the example gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://LOCATION-docker.pkg.dev from the GCP Artefact Registry authentication documentation:
provider "docker" {
registry_auth {
address = "europe-west2-docker.pkg.dev"
username = "oauth2accesstoken"
password = var.access_token
}
}
But the Cloud Build job (terraform step) failed again:
Error: Error pushing docker image: Error pushing image: unauthorized: failed authentication
Questions
So, unless I try a completely different alternative approach: how does the Terraform Docker Provider work within GCP Cloud Build, and what needs to be done for correct authentication?
"As the Terraform Google Provider does not have resources to work with the Artefact Registry docker images"
First, I don't understand the above sentence. Here is Google's Artifact Registry resource.
Second, why use docker_registry_image? Or even docker provider?
If you provide your service account with the right role (no need for full ownership, roles/artifactregistry.writer will do) then you can push images built by Cloud Build to Artifact Registry without any problem. Just set the image name to docker in the necessary build steps.
For example:
steps:
  - id: build
    name: docker
    args:
      - build
      - .
      - '-t'
      - LOCATION-docker.pkg.dev/PROJECT_ID/ARTIFACT_REGISTRY_REPO/IMAGE
  - id: push
    name: docker
    args:
      - push
      - LOCATION-docker.pkg.dev/PROJECT_ID/ARTIFACT_REGISTRY_REPO/IMAGE
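If the push still fails with a permission error, granting the role mentioned above to the Cloud Build service account is a single command; a hedged sketch (project ID and project number are placeholders):

gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"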

Docker compose up [ecs context] with private repo

Trying to upload compose.yml to aws with docker-compose [ecs context];
Have my private repositories in https://hub.docker.com/.
Created ecs context, started to use it (docker context use)
Executed docker login -> login succeeded
Executed docker compose up
It fails and returns the error
ServerService TaskFailedToStart: CannotPullContainerError: inspect image has been retried 1 time(s): failed to resolve ref "docker.io/myrepo/server:latest": pull access denied, the repository does not exist or may require authorization: server message: insufficient_scope: authorization...'
How should I get access to this 'docker ecs compose' tool? Is it related somehow to aws credentials?
You want to use the x-aws-pull_credentials key, which points to a secretsmanager ARN, as described here: https://docs.docker.com/cloud/ecs-integration/#private-docker-images
Create a secret using docker secret:
echo '{"username":"joe","password":"hunter2"}' | docker secret create myToken -
arn:aws:secretsmanager:eu-west-3:12345:secret:myToken
In your compose file:
services:
  worker:
    image: mycompany/privateimage
    x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-3:12345:secret:myToken"

Gitlab Runner Image with GCP credentials

I am trying to teach my Gitlab Runner image to get custom builder images from my private Docker Registry (GCR running in the Google Cloud).
What did not work out?
I created a custom Gitlab Runner image with the ServiceAccount properly set. I started it in non-privileged mode but with the wormhole pattern (via docker.sock). On exec-ing into that container (which is based on gitlab/gitlab-runner:v11.3.0) I had to recognise that I cannot run any docker commands in there (neither as root nor as gitlab-user). How the gitlab-runner starts the builder containers afterwards is way above my cognitive capabilities. ;)
# got started via eu.gcr.io/my-project/gitlab-runner:0.0.5 which got taught the GCR credentials
stages:
  - build

build:
  image: eu.gcr.io/my-project/gitlab-builder-docker:0.0.2
  stage: build
  script:
    # only for test if I have access to private docker registry
    - docker pull eu.gcr.io/my-project/gitlab-builder-docker:0.0.1
What worked out?
According to this tutorial you can authenticate in a before_script block in your .gitlab-ci.yml files. That worked out.
# got started via gitlab/gitlab-runner:v11.3.0
stages:
  - build

before_script:
  - apk add --update curl python which bash
  - curl -sSL https://sdk.cloud.google.com | bash
  - export PATH="$PATH:/root/google-cloud-sdk/bin"
  - gcloud components install docker-credential-gcr
  - gcloud auth activate-service-account --key-file=/key.json
  - gcloud auth configure-docker --quiet

build:
  image: docker:18.03.1-ce
  stage: build
  script:
    # only for test if I have access to private docker registry
    - docker pull eu.gcr.io/my-project/gitlab-builder-docker:0.0.1
The Question
This means that I have to do this (install gcloud & authenticate) in each build run - I would prefer to have done this in the gitlab-runner image. Do you have an idea how to achieve this?
Finally I found a way to get this done.
Teach the vanilla gitlab-runner how to pull from your private GCR Docker Repo
GCP
Create a service account with no permissions in IAM & Admin
Download its JSON key
Add permissions in the Storage Browser:
Select the bucket holding your images (eg eu.artifacts.my-project.appspot.com)
Grant the Storage Object Admin permission to the service account
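A hedged command-line equivalent of that grant (bucket and service-account names are placeholders):

gsutil iam ch \
  serviceAccount:gitlab-runner@my-project.iam.gserviceaccount.com:objectAdmin \
  gs://eu.artifacts.my-project.appspot.com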
Local Docker Container
Launch a library/docker container and exec into it (with Docker Wormhole Pattern docker.sock volume mount)
Log in to GCR via the following (check the URL of your repo; in my case it is located in Europe, therefore the eu prefix in the URL)
docker login -u _json_key --password-stdin https://eu.gcr.io < /etc/gitlab-runner/<MY_KEY>.json
Verify if it works via some docker pull <MY_GCR_IMAGE>
Copy the content of ~/.docker/config.json
Gitlab config.toml configuration
Add the following into your config.toml file
[[runners]]
environment = ["DOCKER_AUTH_CONFIG={ \"auths\": { \"myregistryurl.com:port\": { \"auth\": \"<TOKEN-FROM-DOCKER-CONFIG-FILE>\" } } }"]
Vanilla Gitlab Runner Container
Run the runner eg like this
docker run -it \
--name gitlab-runner \
--rm \
-v <FOLDER-CONTAINING-GITLAB-RUNNER-CONFIG-FILE>:/etc/gitlab-runner:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:v11.3.0
Your .gitlab-ci.yml file
Verify the done work via a .gitlab-ci.yml
Use an image which is located in your private GCP Container Registry
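A minimal verification job could look like this sketch (the image path reuses the placeholder names from above):

verify-pull:
  image: eu.gcr.io/my-project/gitlab-builder-docker:0.0.1
  script:
    - echo "private image pulled successfully"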
Teach your builder images how to push to your private GCR Docker Repo
GCP
Add permissions to your service account
Grant permission Storage Legacy Bucket Reader to your service account in the Storage Browser
Custom Docker Builder Image
Add your Service Account key file to your custom image
FROM docker:18.03.1-ce
ADD key.json /<MY_KEY>.json
Your .gitlab-ci.yml file
Add the following script into your before_script section
docker login -u _json_key --password-stdin https://eu.gcr.io < /key.json
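Wrapped into the .gitlab-ci.yml, that could look roughly like:

before_script:
  - docker login -u _json_key --password-stdin https://eu.gcr.io < /key.json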
Final Thoughts
Now the vanilla gitlab-runner can pull your custom images from your private GCR Docker Repo. Furthermore, those pullable custom images are also capable of talking to your private GCR Docker Repo and, e.g., pushing the resulting images of your build pipeline.
That was quite complicated stuff. Maybe Gitlab will enhance the support for this use case in the future.
This example config worked for me in values.yaml:
config: |
  [[runners]]
    [runners.docker]
      image = "google/cloud-sdk:alpine"
    [runners.kubernetes]
      namespace = "{{.Release.Namespace}}"
      image = "google/cloud-sdk:alpine"
    [runners.cache]
      Type = "gcs"
      Path = "runner"
      Shared = true
      [runners.cache.gcs]
        BucketName = "runners-cache"
    [[runners.kubernetes.volumes.secret]]
      name = "service-account-credentials"
      mount_path = "keys"
      read_only = true
Where service-account-credentials is a secret containing credentials.json
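Creating that secret is a single kubectl command; a sketch assuming a local credentials.json and the runner's namespace:

kubectl create secret generic service-account-credentials \
  --from-file=credentials.json \
  --namespace <runner-namespace>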
then in .gitlab-ci.yml you can do:
gcloud auth activate-service-account --key-file=/keys/credentials.json
Hope it helps
Have you tried to use Google Cloud Build?
I had the same problem and solved it like this:
echo ${GCR_AUTH_KEY} > key.json
gcloud auth activate-service-account --key-file key.json
gcloud auth configure-docker
gcloud builds submit . --config=cloudbuild.yaml --substitutions _CI_PROJECT_NAME=$CI_PROJECT_NAME,_CI_COMMIT_TAG=${CI_COMMIT_TAG},_CI_PROJECT_NAMESPACE=${CI_PROJECT_NAMESPACE}
cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    id: builder
    args:
      - 'build'
      - '-t'
      - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:$_CI_COMMIT_TAG'
      - '.'
  - name: gcr.io/cloud-builders/docker
    id: tag-runner-image
    args:
      - 'tag'
      - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:$_CI_COMMIT_TAG'
      - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:latest'
images:
  - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:$_CI_COMMIT_TAG'
  - 'eu.gcr.io/projectID/$_CI_PROJECT_NAMESPACE-$_CI_PROJECT_NAME:latest'
Just use google/cloud-sdk:alpine as the image in the gitlab-ci stage.
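Putting it together, the corresponding .gitlab-ci.yml job could look roughly like this (GCR_AUTH_KEY is assumed to be a CI/CD variable holding the service account key):

build:
  stage: build
  image: google/cloud-sdk:alpine
  script:
    - echo ${GCR_AUTH_KEY} > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud builds submit . --config=cloudbuild.yaml --substitutions _CI_PROJECT_NAME=$CI_PROJECT_NAME,_CI_COMMIT_TAG=${CI_COMMIT_TAG},_CI_PROJECT_NAMESPACE=${CI_PROJECT_NAMESPACE}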

Ballerina: build image and push to gcr.io via k8s plugin

I'm using simple Ballerina code to build my program (a simple hello world) with ballerinax/kubernetes annotations. The service compiles successfully and is accessible via the specified bind port on localhost.
When configuring a Kubernetes deployment I'm specifying the image build and push flags:
@kubernetes:Deployment {
    replicas: 2,
    name: "hello-deployment",
    image: "gcr.io/<gct-project-name>/hello-ballerina:0.0.2",
    imagePullPolicy: "always",
    buildImage: true,
    push: true
}
When building the source code:
ballerina build hello.bal
This is what I'm getting:
Compiling source
hello.bal
Generating executable
./target/hello.balx
#docker - complete 3/3
Run following command to start docker container:
docker run -d -p 9090:9090 gcr.io/<gcr-project-name>/hello-ballerina:0.0.2
#kubernetes:Service - complete 1/1
#kubernetes:Deployment - complete 1/1
error [k8s plugin]: Unable to push docker image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Note that when pushing it manually via docker on my local machine it works fine and the new image gets pushed.
What am I missing? Is there a way to tell ballerina about docker registry credentials via the kubernetes package?
Ballerina doesn't support the gcloud Docker registry yet, but it supports Docker Hub.
Please refer to sample6 for more info.
Basically, you can export the Docker registry username and password as environment variables.
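As a rough illustration of that suggestion (the exact variable names the plugin reads are shown in sample6 and are assumptions here):

export DOCKER_USERNAME=<registry-username>
export DOCKER_PASSWORD=<registry-password>
ballerina build hello.bal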
Please create an issue at https://github.com/ballerinax/kubernetes/issues to track this.
Seems like a problem with Container Registry, you are not able to authenticate.
To authenticate to Container Registry, use gcloud as a Docker credential helper. To do so, run the following command:
gcloud auth configure-docker
You need to run this command once to authenticate to Container Registry.
We strongly recommend that you use this method when possible. It provides secure, short-lived access to your project resources.
You can check yourself the steps for Container Registry Authentication Methods
