Docker compose up [ecs context] with private repo

Trying to deploy my compose.yml to AWS with docker compose [ecs context].
My private repositories are on https://hub.docker.com/.
Created an ecs context and switched to it (docker context use)
Executed docker login -> login succeeded
Executed docker compose up
It fails and returns the error:
ServerService TaskFailedToStart: CannotPullContainerError: inspect image has been retried 1 time(s): failed to resolve ref "docker.io/myrepo/server:latest": pull access denied, the repository does not exist or may require authorization: server message: insufficient_scope: authorization...'
How do I give this 'docker ecs compose' tool access to my private repository? Is it somehow related to AWS credentials?

You want to use the x-aws-pull_credentials key, which points to a Secrets Manager ARN, as described here: https://docs.docker.com/cloud/ecs-integration/#private-docker-images
Create a secret using docker secret; the command prints the secret's ARN:
echo '{"username":"joe","password":"hunter2"}' | docker secret create myToken -
arn:aws:secretsmanager:eu-west-3:12345:secret:myToken
In your compose file:
services:
  worker:
    image: mycompany/privateimage
    x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-3:12345:secret:myToken"
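If the stack still fails to pull, it's worth confirming the secret actually exists in the region you deploy to. A quick check with the AWS CLI (a sketch, assuming the ARN from above and credentials for the same account):

# fetch the secret to verify it exists and holds the Docker Hub credentials
aws secretsmanager get-secret-value \
  --secret-id arn:aws:secretsmanager:eu-west-3:12345:secret:myToken \
  --region eu-west-3 \
  --query SecretString --output text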

How to build and push a docker image with the Terraform Docker Provider in GCP Cloud Build

References
Terraform Docker Provider
Terraform Google Provider
GCP Cloud Build
Context Details
The deployment is done by CI/CD based on GCP Cloud Build (and the Cloud Build service account has an 'owner' role for the relevant projects).
Inside the 'cloudbuild.yaml' file there is a step with a 'hashicorp/terraform' worker and a command like 'terraform apply'.
Goal
To build and push a docker image into a GCP Artifact Registry repository, so that it can be used in a container-optimized Compute Engine deployment in other TF resources.
Issue
As the Terraform Google Provider does not have resources to work with Artifact Registry docker images, I have to use the Terraform Docker Provider.
The docker image is described as:
resource "docker_registry_image" "my_image" {
name = "europe-west2-docker.pkg.dev/${var.my_project_id}/my-docker-reg/my-vm-image:test"
build {
context = "${path.module}/image"
dockerfile = "Dockerfile"
}
}
According to a comment on 'Creating and pushing a docker image to a google cloud registry using terraform', for pushing images the only way to set credentials is to declare them at the provider level.
Therefore the registry_auth block is to be provided as described in the Terraform Docker Provider documentation.
On one hand, the GCP Artifact Registry authentication documentation says that 'You do not need to configure authentication for Cloud Build'. So I use this configuration (as it is executed under the Cloud Build service account):
provider "docker" {
registry_auth {
address = "europe-west2-docker.pkg.dev"
}
}
and the Cloud Build job (terraform step) failed with an error:
Error: Error loading registry auth config: could not open config file from filePath: /root/.docker/config.json. Error: open /root/.docker/config.json: no such file or directory
with provider["registry.terraform.io/kreuzwerker/docker"],
on my-vm-image.tf line 6, in provider "docker":
6: provider "docker" {
since the Docker Provider insists on getting some credentials for authentication...
So another option is to try an 'access token', as described in the comment and the documentation.
The access token for the Cloud Build service account can be retrieved by a step in the Cloud Build yaml:
## Get Cloud Build access token
- id: "=> get CB access token =>"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'sh'
  args:
    - '-c'
    - |
      access_token=$(gcloud auth print-access-token) || exit 1
      echo ${access_token} > /workspace/access_token || exit 1
and later used in the TF step as a variable value (double quotes so the shell expands the variable):
...
access_token=$(cat /workspace/access_token)
...
terraform apply -var "access_token=${access_token}" ...
...
So the Terraform Docker Provider is supposed to be configured according to the example gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://LOCATION-docker.pkg.dev from the GCP Artifact Registry authentication documentation:
provider "docker" {
registry_auth {
address = "europe-west2-docker.pkg.dev"
username = "oauth2accesstoken"
password = var.access_token
}
}
But the Cloud Build job (terraform step) failed again:
Error: Error pushing docker image: Error pushing image: unauthorized: failed authentication
Questions
So, if I don't try a completely different alternative approach, how does the Terraform Docker Provider work within GCP Cloud Build? What needs to be done for correct authentication?
"As the Terraform Google Provider does not have resources to work with the Artifact Registry docker images"
First, I don't understand the above sentence. Here is Google's Artifact Registry resource.
Second, why use docker_registry_image? Or even docker provider?
If you provide your service account with the right role (no need for full ownership; roles/artifactregistry.writer will do), then you can push images built by Cloud Build to Artifact Registry without any problem. Just use docker as the step's name (the builder image) in the necessary build steps.
For example:
steps:
  - id: build
    name: docker
    args:
      - build
      - .
      - '-t'
      - LOCATION-docker.pkg.dev/PROJECT_ID/ARTIFACT_REGISTRY_REPO/IMAGE
  - id: push
    name: docker
    args:
      - push
      - LOCATION-docker.pkg.dev/PROJECT_ID/ARTIFACT_REGISTRY_REPO/IMAGE
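Granting that role is one command; this sketch assumes the default Cloud Build service account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com), so substitute your own project values:

# allow the Cloud Build service account to push to Artifact Registry
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"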

Error response from daemon: pull access denied for registry.gitlab.com repository does not exist or may require 'docker login'

Dockerfile
FROM openjdk:8-jre-alpine
WORKDIR /app1/backend
COPY ./target/app1-backend.jar app1-backend.jar
ADD cloudfront_private_key.pem /host_files/
EXPOSE 9000
ENTRYPOINT [ "java", "-cp", "app1-backend.jar", "hsnbe.app1"]
docker-compose.yml
version: '3.4'
services:
  app1:
    logging:
      driver: awslogs
      options:
        awslogs-region: eu-west-1
    image: app1-server:development
    container_name: health_backend
    build:
      context: .
      dockerfile: ./build/DockerfileHS.dev
      target: app1
    restart: unless-stopped
    volumes:
      - ~/.ssh/health_backend_dev_cloudfront_private_key.pem:${HAPP_AWS_CLOUDFRONT_KEY_FILE_PATH:-/host_files/health_backend_dev_cloudfront_private_key.pem}
    ports:
      - ${APP1_PORT:-9000}:9000
    depends_on:
      - postgres
    links:
      - postgres
Error:
Reason CannotPullContainerError: Error response from daemon: pull access denied for registry.gitlab.com/app1/backend, repository does not exist or may require 'docker login'
What I've tried already:
docker login succeeded, but if I try a docker pull from the registry it returns:
Error response from daemon: Get https://registry.gitlab.com/v2/app1/backend/manifests/latest: denied: access forbidden
TL;DR:
Check:
Are you logged in? Look in ~/.docker/config.json for an auths section (see the check below this list)
The auth token needs read_registry and write_registry scopes
Does the project exist on GitLab? You can't push to an arbitrary namespace
Did you build / tag with the same name?
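A quick way to verify the first item locally (a sketch; if your Docker uses a credential store, the tokens won't appear in the file itself):

# show which registries you are logged in to
cat ~/.docker/config.json
# a logged-in GitLab registry looks like: "auths": { "registry.gitlab.com": { "auth": "..." } }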
Access Denied for GitLab Container Registry
The 'access denied' can occur because the personal access token (PAT) you are using does not have access to the project that owns this container registry.
The PAT must have the read_registry scope to be able to pull, and also the write_registry scope if you want to push to the container registry.
The GitLab User who owns the PAT must have permission to access the GitLab Project which owns this container registry:
For a docker push to a Private project, you need at least Developer access to that project
For docker pull, it is enough to be a Guest
You cannot use deploy tokens with the public API. They are only useful for CI/CD jobs.
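For example, a login with a PAT might look like this ($GITLAB_USERNAME and $GITLAB_PAT are placeholder variable names, not from the original):

# log in with a personal access token that has read_registry/write_registry scopes
echo "$GITLAB_PAT" | docker login registry.gitlab.com -u "$GITLAB_USERNAME" --password-stdin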
Invalid tag name
Another reason for 'access denied' can be that the project does not actually exist. You can't push to arbitrary namespaces in GitLab. In your example, the error says:
access denied for registry.gitlab.com/app1/backend
There is no "backend" project in app1's namespace, and that may not even be your namespace: http://gitlab.com/app1
There's not quite enough detail in the question, but as a guess, you should probably be doing this, substituting your username or group name for $GITLAB_USERNAME and the project's namespace for $GITLAB_PROJECT_NAMESPACE:
docker build -t registry.gitlab.com/$GITLAB_USERNAME/$GITLAB_PROJECT_NAMESPACE/app1/backend:1.0 .
docker push registry.gitlab.com/$GITLAB_USERNAME/$GITLAB_PROJECT_NAMESPACE/app1/backend:1.0
I had installed gitlab-runner locally via apt-get and had the same issue.
I was able to fix it only after I removed the installed gitlab-runner and reinstalled it via the official repository script:
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

How to define a private repo image in GitLab Runner

I have a GitLab runner registered with the docker+machine executor, and I have set up my .gitlab-ci.yml as below:
stages:
  - RUN_TESTS
  - CLEAN

image:
  name: <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG>
The image <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG> is in a private AWS ECR repository, and the job fails every time because of that.
How can I configure the runner to pull this private image?
I got the ECR password using the aws ecr get-login-password --region us-east-2 command.
I also looked into the docker-credential-ecr-login tool and installed it on the runner instance. I configured the AWS credentials using aws configure, so the credentials are now at ~/.aws/credentials.
I also added the following block to ~/.docker/config.json:
"credHelpers": {
  "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
}
But when I try to docker pull <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG> it gives me the following error:
Error response from daemon: Get https://Account-ID.dkr.ecr.REGION.amazonaws.com/v2/spot-runner-image/manifests/latest: no basic auth credentials
Is there anything more I need to do with docker-credential-ecr-login?
You're getting that error because your runner (and/or the job) isn't authenticating with the ECR registry before it tries to pull an image. You'll need to provide the auth data in one of the following forms:
a $DOCKER_AUTH_CONFIG variable in the job (see the docs for info: https://docs.gitlab.com/ee/ci/variables/README.html#create-a-custom-variable-in-gitlab-ciyml)
a $DOCKER_AUTH_CONFIG project variable from your project's Settings -> CI/CD page (see the docs here: https://docs.gitlab.com/ee/ci/variables/#create-a-custom-variable-in-the-ui)
a $DOCKER_AUTH_CONFIG variable in your runner's config.toml file (https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#configuring-a-runner)
The docs on setting the DOCKER_AUTH_CONFIG are here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry.
To determine what should be in the DOCKER_AUTH_CONFIG variable, you can log into your registry from your local machine with docker login example-registry.example.com --username my_user --password my_password. This will create a config.json file in the ~/.docker directory. However, on my Mac the credentials are stored in the keychain, and that config.json file can't be used by GitLab; in that case you'll have to create the content manually. All this information and more is in the docs: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#determining-your-docker_auth_config-data
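Since the question already sets up docker-credential-ecr-login, the DOCKER_AUTH_CONFIG value can reference the credential helper instead of a static auths entry; a sketch, assuming the helper and AWS credentials are available wherever the job actually runs:

# example DOCKER_AUTH_CONFIG value (set as a CI/CD variable or in config.toml)
DOCKER_AUTH_CONFIG='{"credHelpers": {"<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com": "ecr-login"}}'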

Using private registry docker images in Kubernetes when launched using docker stack deploy

I have a simple docker-compose file like the following:
version: "3.7"
services:
mongo:
image: asia.gcr.io/myproj/mymongo:latest
hostname: mongo
volumes:
- type: bind
source: $MONGO_DB_DATA
target: /data/db
command: [ "--bind_ip_all", "--replSet", "rs0", "--wiredTigerCacheSizeGB", "1.5"]
I am launching it in Kubernetes using the following command
docker-compose config | docker stack deploy --orchestrator kubernetes --compose-file - mystack
However, the pod fails with this error:
Failed to pull image "asia.gcr.io/myproj/mymongo:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
My private registry is the gcloud one. I have already logged in to Docker as follows, using the service account keyfile:
docker login -u _json_key -p "$(cat keyfile.json)" https://asia.gcr.io
The image is pulled correctly when I run
docker-compose pull
From this link https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/, I found that I need to create imagePullSecrets.
I have two questions.
How can I write the imagePullSecrets syntax in my docker-compose file so that it is referenced correctly?
The method that the link mentions asks you to use the .docker/config.json file. However, my config.json has
"auths": {
  "asia.gcr.io": {}
}
It doesn't include the username and password, since I configured it using the keyfile. How can I handle this?
Or is there any simpler way to do this?
I solved this issue by first creating a secret like this:
kubectl create secret docker-registry regcred --docker-server=https://<docker registry> --docker-username=_json_key --docker-password="<json key>" --docker-email=<email>
and then adding it to the default service account:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
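With the keyfile from the question, that first command might look like the following (a sketch; the email flag just needs a syntactically valid address, and _json_key matches the docker login above):

# create the pull secret from the GCR service-account keyfile
kubectl create secret docker-registry regcred \
  --docker-server=https://asia.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)" \
  --docker-email=ci@example.com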

Ballerina: build image and push to gcr.io via k8s plugin

I'm using simple Ballerina code to build my program (a simple hello world) with ballerinax/kubernetes annotations. The service compiles successfully and is accessible via the specified bind port from localhost.
When configuring a Kubernetes deployment I'm specifying the image build and push flags:
@kubernetes:Deployment {
    replicas: 2,
    name: "hello-deployment",
    image: "gcr.io/<gcr-project-name>/hello-ballerina:0.0.2",
    imagePullPolicy: "always",
    buildImage: true,
    push: true
}
When building the source code:
ballerina build hello.bal
This is what I'm getting:
Compiling source
    hello.bal
Generating executable
    ./target/hello.balx
@docker - complete 3/3
Run following command to start docker container:
docker run -d -p 9090:9090 gcr.io/<gcr-project-name>/hello-ballerina:0.0.2
@kubernetes:Service - complete 1/1
@kubernetes:Deployment - complete 1/1
error [k8s plugin]: Unable to push docker image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Note that when pushing manually via docker on my local machine it works fine and the new image is pushed.
What am I missing? Is there a way to tell Ballerina about docker registry credentials via the kubernetes package?
Ballerina doesn't support the gcloud docker registry yet, but it supports Docker Hub.
Please refer to sample6 for more info.
Basically, you can export the docker registry username and password as environment variables.
Please create an issue at https://github.com/ballerinax/kubernetes/issues to track this.
This seems like a problem with Container Registry: you are not able to authenticate.
To authenticate to Container Registry, use gcloud as a Docker credential helper. To do so, run the following command:
gcloud auth configure-docker
You only need to run this command once to authenticate to Container Registry.
We strongly recommend that you use this method when possible. It provides secure, short-lived access to your project resources.
You can check the steps yourself under Container Registry Authentication Methods.
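Put together with the question's workflow, the sequence would roughly be the following (a sketch; it assumes the k8s plugin's push goes through the local Docker credential store, which gcloud configures):

# one-time: register gcloud as a Docker credential helper for gcr.io
gcloud auth configure-docker
# rebuild so the plugin can push using the stored credentials
ballerina build hello.bal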
