Access ECR images from within a Jenkins Docker ECS container

Hello Jenkins / Docker experts -
Stuff that is working:
Using the approach suggested here, I was able to get the Jenkins Docker image running in an AWS ECS cluster. Using -v volume mounts for the Docker socket (/var/run/docker.sock) and the Docker binary (/usr/bin/docker), I am able to access the Docker daemon from inside the Jenkins container as well.
Stuff that isn't:
The last problem I am facing is pulling / pushing images to and from the AWS ECR registry. When I try to execute docker pull / push commands, I end up with: no basic auth credentials.
I stumbled upon this link explaining my problem. But I am unable to use the solutions suggested there, as there is no ~/.docker/config.json on the host machine to share with the Jenkins Docker container.
Any suggestions?

Amazon ECR users require permissions to call ecr:GetAuthorizationToken
before they can authenticate to a registry and push or pull any images
from any Amazon ECR repository. Amazon ECR provides several managed
policies to control user access at varying levels; for more
information, see ecr_managed_policies
AmazonEC2ContainerRegistryPowerUser
This managed policy allows power user access to Amazon ECR, which allows read and write access to repositories, but does not allow users to delete repositories or change the policy documents applied to them.
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:GetRepositoryPolicy",
            "ecr:DescribeRepositories",
            "ecr:ListImages",
            "ecr:DescribeImages",
            "ecr:BatchGetImage",
            "ecr:InitiateLayerUpload",
            "ecr:UploadLayerPart",
            "ecr:CompleteLayerUpload",
            "ecr:PutImage"
        ],
        "Resource": "*"
    }]
}
So, instead of using ~/.docker/config.json, attach a role with the above policy to your ECS task, and your Docker container service will be able to push and pull images from ECR.
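For instance, a role with this managed policy attached could be created with the AWS CLI along these lines (the role name and trust-policy file here are hypothetical; the trust policy must allow ecs-tasks.amazonaws.com to assume the role):
# Create a role that ECS tasks can assume; ecs-tasks-trust.json is a
# hypothetical trust policy whose Principal is ecs-tasks.amazonaws.com.
aws iam create-role \
  --role-name jenkinsEcrTaskRole \
  --assume-role-policy-document file://ecs-tasks-trust.json
# Attach the AWS-managed AmazonEC2ContainerRegistryPowerUser policy quoted above.
aws iam attach-role-policy \
  --role-name jenkinsEcrTaskRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser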
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that
can be used by the containers in a task. Applications must sign their
AWS API requests with AWS credentials, and this feature provides a
strategy for managing credentials for your applications to use,
similar to the way that Amazon EC2 instance profiles provide
credentials to EC2 instances. Instead of creating and distributing
your AWS credentials to the containers or using the EC2 instance’s
role, you can associate an IAM role with an ECS task definition or
RunTask API operation. The applications in the task’s containers can
then use the AWS SDK or CLI to make API requests to authorized AWS
services.
Benefits of Using IAM Roles for Tasks
Credential Isolation: A container can only retrieve credentials for
the IAM role that is defined in the task definition to which it
belongs; a container never has access to credentials that are intended
for another container that belongs to another task.
Authorization: Unauthorized containers cannot access IAM role
credentials defined for other tasks.
Auditability: Access and event logging is available through CloudTrail
to ensure retrospective auditing. Task credentials have a context of
taskArn that is attached to the session, so CloudTrail logs show which
task is using which role.
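To associate that role with the Jenkins task, reference it when registering the task definition, for example (a sketch; the family name, account ID, and container-definitions file are placeholders):
# Register a task definition whose containers run with the ECR role.
# jenkins-containers.json stands in for your existing container definitions.
aws ecs register-task-definition \
  --family jenkins \
  --task-role-arn arn:aws:iam::123456789012:role/jenkinsEcrTaskRole \
  --container-definitions file://jenkins-containers.json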
But you still have to run this command to get an auth token:
eval $(aws ecr get-login --no-include-email)
You will get a response like
Login Succeeded
Now you can push and pull images once you have obtained the auth token from ECR.
docker push xxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/nodejs:test
Automate ECR login
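One minimal way to automate it, assuming AWS CLI v1 as in the command above (the region and image name are taken from the example; ECR tokens expire after 12 hours, so run this before each push/pull, e.g. as a pipeline step or cron job):
#!/bin/bash
set -e
# Refresh the ECR auth token; it is only valid for 12 hours.
eval "$(aws ecr get-login --no-include-email --region us-west-2)"
# Subsequent pushes/pulls against the registry are now authenticated.
docker push xxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/nodejs:test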

Related

GCP - Not able to Push docker image from Compute Engine to Container Registry

I'm not able to push Docker images from a Compute Engine VM to Container Registry. I have added the credentials to the service account, but I still get:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Your service account should have the roles/storage.legacyBucketWriter role on GCR's backing bucket (artifacts.PROJECT-ID.appspot.com for gcr.io). More documentation is here.
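For instance, the role can be granted on the bucket with gsutil (the service account email is a placeholder):
# Grant the service account write access to GCR's backing bucket.
gsutil iam ch \
  serviceAccount:my-sa@PROJECT-ID.iam.gserviceaccount.com:roles/storage.legacyBucketWriter \
  gs://artifacts.PROJECT-ID.appspot.com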

How to inject docker registry username/password in docker-compose file?

In order to deploy an application using docker and a remote registry:
Using the Docker client, execute docker login so that the credentials are stored either directly in $HOME/.docker/config.json or in a credential store that is itself specified in $HOME/.docker/config.json. Then use the docker create command to start the application.
In Kubernetes, a secret can be generated from the Docker registry username and password. The secret can then be injected into the Helm chart using imagePullSecrets, and the helm install command can instruct the kubelet to pull the image into the created container inside the scheduled pod. To update the image registry, the image name and pull secret can be updated before re-installation.
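Such a secret is usually created like this (a sketch; the secret name, registry URL, and account values are placeholders):
# Create a registry secret that pods can reference via imagePullSecrets.
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password="$REGISTRY_PASSWORD"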
I have three questions:
1. How can I set the username and password or inject these credentials for the services in docker-compose without having to run docker login first on each deployment host? (as in nu
2. Can I populate a credential store specified in $HOME/.docker/config.json using the docker login command on one machine, then specify the same credential store in $HOME/.docker/config.json of another machine, and then use the answer to the previous question to inject or pull the credentials?
3. If the Docker daemon checks for the credentials inside the credential store that is specified in $HOME/.docker/config.json, then what is the use of the helper program?

How to pull/push from/to GCR from GKE node

I'm building an application that I will run in GKE. This application will use shell commands (for now) to build Docker images and try to push them to GCR. I'm finding that when I try to do this from a pod running in GKE, I get authentication problems, and I'm having trouble figuring out why.
Here's a list of all of the debugging I've done so far. At the highest level, my GKE clusters have the https://www.googleapis.com/auth/devstorage.read_write OAuth scope. When I examine the permissions on the underlying GCE instance, I see these permissions - note the Read Write value for Storage:
Now, when I SSH into that instance using the console and list the Docker images, I see the image used by GKE when spinning up my pod:
paymahn@gke-prod-478557c-default-pool-e9314f46-d9mn ~ $ docker image ls
REPOSITORY                   TAG      IMAGE ID       CREATED      SIZE
gcr.io/gadic-310112/server   latest   8f8a22237c31   2 days ago   1.85GB
...
However, if I try to manually pull that image while SSH-ed into the GCP instance, I get an authentication problem:
paymahn@gke-prod-478557c-default-pool-e9314f46-d9mn ~ $ docker pull gcr.io/gadic-310112/server:latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I also looked at the service account 65106360748-compute@developer.gserviceaccount.com, which is the default Compute Engine service account. Here are the permissions it has (I manually added the Storage Object Creator role):
Adding the Storage Object Creator role to that service account didn't help.
Is my approach to authentication here fundamentally flawed? It seems like I have all the right pieces in place to pull from and push to GCR from GKE. Maybe there's an extra step I need for the Docker client to authenticate?
Figured it out. I had to:
- make a service account with the roles/storage.objectAdmin role
- generate a key for that service account
- store that key as a secret in GKE
- mount that secret into my pods
- run gcloud auth activate-service-account --key-file <path to key>
- run gcloud auth configure-docker
Once all of that was done, my pods could pull from and push to GCR.
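A sketch of those steps, with hypothetical names (service account gcr-pusher, secret gcr-key, mount path /secrets):
# Create a key for the service account and store it as a GKE secret.
gcloud iam service-accounts keys create sa-key.json \
  --iam-account=gcr-pusher@PROJECT-ID.iam.gserviceaccount.com
kubectl create secret generic gcr-key --from-file=key.json=sa-key.json
# Then, inside the pod with the secret mounted at /secrets:
gcloud auth activate-service-account --key-file /secrets/key.json
gcloud auth configure-docker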

How to authenticate to GitLab's container registry before building a Docker image?

I have a private GitLab project with a pipeline for building and pushing a Docker image. Therefore I have to authenticate to GitLab's Docker registry first.
Research
I read Authenticating to the Container Registry with GitLab CI/CD:
There are three ways to authenticate to the Container Registry via GitLab CI/CD which depend on the visibility of your project.
Available for all projects, though more suitable for public ones:
Using the special CI_REGISTRY_USER variable: The user specified by this variable is created for you in order to push to the Registry connected to your project. Its password is automatically set with the CI_REGISTRY_PASSWORD variable. This allows you to automate building and deploying your Docker images and has read/write access to the Registry. This is ephemeral, so it’s only valid for one job. You can use the following example as-is:
docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
For private and internal projects:
Using a personal access token: You can create and use a personal access token in case your project is private:
For read (pull) access, the scope should be read_registry.
For read/write (pull/push) access, use api.
Replace the <username> and <access_token> in the following example:
docker login -u <username> -p <access_token> $CI_REGISTRY
Using the GitLab Deploy Token: You can create and use a special deploy token with your private projects. It provides read-only (pull) access to the Registry. Once created, you can use the special environment variables, and GitLab CI/CD will fill them in for you. You can use the following example as-is:
docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
and Container Registry:
With the updated permission model we also extended the support for accessing Container Registries for private projects.
Your jobs can access all container images that you would normally have access to. The only implication is that you can push to the Container Registry of the project for which the job is triggered.
This is what an example usage can look like:
test:
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY/group/other-project:latest
    - docker run $CI_REGISTRY/group/other-project:latest
I tried the first and the fourth way, and I could authenticate with both.
Question
What are the pros and cons? I guess the third way is for deployment only, not for building and pushing. The same could be true of the second way. Is that right?
And why is the fourth way not listed in the other documentation? Is that way deprecated?
I prefer the fourth option. A note: "If a user creates one named gitlab-deploy-token, the username and token of the deploy token are automatically exposed to the CI/CD jobs as CI/CD variables: CI_DEPLOY_USER and CI_DEPLOY_PASSWORD respectively."
When creating a deploy token, you can grant it read/write permission to the registry/package registry.
The CI_REGISTRY_PASSWORD is ephemeral, so avoid using it if you have multiple deploy jobs (which need to pull a private image) running in parallel.
I believe the differences are just about user skill and permissions.
The first way anyone can use, since the variables are automatically present in a running job.
Second, anyone, with any permissions, can create a personal access token (but it has an extra step compared to the first: creating the access token).
Third, someone with the correct permissions could create a deploy token. Deploy tokens don't give access to the API the way personal access tokens can, and here they only have permission to pull/read the data in the repository; they cannot write/push.
The fourth option allows you both to read/pull container images from the registry and to push to the registry. This is helpful if you have a CI step that builds an app into an image, or anything else where you're generating a container image and want to push it into the registry (so another step in the pipeline can pull it down and use it). My guess is that this option isn't listed with the others since it's meant for the building of container images. You probably could use it like any of the others, though.
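A typical build-and-push sequence using the fourth option inside a job's script could look like this (a sketch; CI_REGISTRY_IMAGE is GitLab's predefined variable for the project's registry path):
# Authenticate with the ephemeral per-job token, then build and push.
docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"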

How to stop gcloud docker -a overwriting long-lived credentials?

We are using the Google Container Registry to store our Docker images.
To authorize our build instances we place long-lived access tokens in .docker/config.json as described in the docs.
This works perfectly fine until someone (e.g., some Makefile) uses gcloud docker -- push ... to push to the registry (instead of plain docker push ...). gcloud will replace the existing long-lived credentials with short-lived ones that expire after some time, so subsequent builds may fail, depending on the exact timing.
My Question: How can I prevent gcloud docker ... from messing with my provisioned credentials?
I've tried chattr +i .docker/config.json, but this just makes gcloud complain.
From https://cloud.google.com/sdk/gcloud/reference/docker:
The gcloud docker command group wraps docker commands, so that gcloud can inject the appropriate fresh authentication token into requests that interact with the docker registry.
The only thing that gcloud docker does is change these credentials, then invoke the docker CLI. If you don't want it to change the credentials, there's no reason not to just call docker directly.
One workaround might be to use an alternate configuration file location for your long-lived credentials; per https://docs.docker.com/engine/reference/commandline/cli/:
Options:
--config string Location of client config files (default "/root/.docker")
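For example, the long-lived credentials could live in a dedicated config directory that gcloud never touches (the directory path here is hypothetical):
# Keep the long-lived credentials in their own config directory...
mkdir -p /opt/ci/docker-config
cp ~/.docker/config.json /opt/ci/docker-config/config.json
# ...and point docker at it explicitly, so gcloud docker's rewrites of
# ~/.docker/config.json no longer affect these pushes.
docker --config /opt/ci/docker-config push gcr.io/my-project/my-image:latest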
