Pulling Docker image from private protected repository within bash script - docker

I am trying to write a bash script to automate the setup of a multi-container environment.
Each container is built from images pulled from a private protected repository.
The problem is that when the script runs docker-compose up for the first time, access to the repository is denied, as if it did not know that I had already done docker login before running the script.
If I docker pull an image manually, that image is no longer a problem when the script builds its container. But when the script has to docker pull on its own from a Dockerfile definition, it gets access denied.
Since I would like this script to be portable to other devs' environments, how can I get it to access the repository using the credentials each dev has already set up on their machine with docker login?

You can do something like:
#!/bin/bash
cat ~/pwd.txt | docker login <servername> -u <username> --password-stdin
docker pull <image-name>
This reads the password from pwd.txt and logs in to the specified server.
In case you have multiple servers you want to log in to, you can try:
#!/bin/bash
serverlist="server1.com server2.com"
for server in $serverlist; do
  cat ~/"${server}"_pwd.txt | docker login "$server" -u <username> --password-stdin
done
docker pull <image-name>
This reads the passwords from files like server1.com_pwd.txt.
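Once the login has succeeded for every registry your images come from, docker-compose reuses the credentials stored in ~/.docker/config.json, so the pulls triggered by the build no longer fail. A minimal sketch of a portable setup script along the lines of the question (registry name and password file are placeholders, and it assumes each dev's registry username matches their local $USER):
#!/bin/bash
set -euo pipefail
REGISTRY="registry.example.com"   # hypothetical registry, replace with yours
# Log in non-interactively; the password/token lives in a file outside the repo.
cat ~/"${REGISTRY}"_pwd.txt | docker login "$REGISTRY" -u "$USER" --password-stdin
# docker-compose picks up the credentials written to ~/.docker/config.json.
docker-compose pull
docker-compose up -d
Note that if the script runs docker-compose via sudo, the Docker client may read root's ~/.docker/config.json instead of yours, which is a common reason a previous docker login appears to be ignored.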

Related

GCP: Unable to pull docker images from our GCP private container registry on ubuntu/debian VM instances

I am trying to pull a Docker image from our private GCP container registry on a regular VM instance (e.g. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull Docker images from my private container registry on an Ubuntu or Debian VM, even though Docker works perfectly well pulling images from other registries (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download the docker-credential-gcr release:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the binary and install it:
tar xzvf ./docker-credential-gcr_linux_amd64-2.0.0.tar.gz && sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
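As a quick check afterwards, a pull using the fully qualified Container Registry name should now succeed (project and image names below are placeholders):
docker pull gcr.io/my-project/my-image:latest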
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project.
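If you suspect the scopes are the problem, you can inspect the scopes attached to the VM's service account with something along these lines (instance name and zone are placeholders); pulling from Container Registry needs at least the read-only storage scope https://www.googleapis.com/auth/devstorage.read_only:
# Show the service account and access scopes attached to the instance
gcloud compute instances describe my-vm-name --zone europe-west1-b --format="yaml(serviceAccounts)"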
If you run gcloud auth configure-docker, the auth information is saved under your own home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands either with sudo or both without it.
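In other words, keep the two commands consistent. A quick sketch, using the image name from the question:
# Either do everything as your own user ...
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01
# ... or do everything as root (root then needs its own gcloud credentials,
# and both commands read/write /root/.docker/config.json):
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01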

How to run an AWS Lambda Layer in a Docker container?

I would like to run a Docker container to see what is in a public Lambda Layer.
Following the AWS SAM layers docs, using a SAM app with only the PyTorch layer, I produced the Docker tag, then tried pulling the Docker image, which fails with "pull access denied / repo may require auth".
I did try aws ecr get-login --no-include-email to authenticate correctly, though I still couldn't access the image.
So I think the issue may be that I am not authorised to pull the image of the Lambda Layer, or the image doesn't exist. It is not clear to me which.
Alternatively, it would be good to download the public Lambda Layer, and then I could use https://github.com/lambci/docker-lambda to inspect it.
More context about what I tried
So the Lambda Layer I would like to investigate is:
arn:aws:lambda:eu-west-1:934676248949:layer:pytorchv1-py36:1
The Docker tag I produced is:
python3.6-0ffbca5374c4d95e8e10dbba8
Then I tried pulling the Docker image with:
docker run -it --entrypoint=/bin/bash samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i
docker run -it --entrypoint=/bin/bash <aws_account_id>.dkr.ecr.<region>.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i
Which both failed with the error:
docker: Error response from daemon: pull access denied for samcli/lambda, repository does not exist or may require 'docker login'.
Just a quick potential answer (I've not read the links you provided as I am not at my computer), given you mentioned aws ecr get-login --no-include-email I am assuming you are trying to pull a docker image from AWS's docker repository service.
The line docker run -it --entrypoint=/bin/bash samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i, with default config, will look at Docker Hub's registry. If you are trying to pull a docker image in AWS, I would expect something more like docker run -it --entrypoint=/bin/bash aws_account_id.dkr.ecr.region.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i (again, not saying that command will work, but something like it, to go along with your AWS repo sign-in command).
Since https://hub.docker.com/samcli/lambda is a 404, I suspect this is one of those occasions where the error message is exactly right: the repo does not exist.
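For reference, if the image really does live in an ECR repository you control, the non-deprecated login flow with AWS CLI v2 looks roughly like this (account ID and region are placeholders):
# Fetch a temporary registry password and pipe it straight into docker login
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
# Then pull with the full registry-qualified name
docker pull 123456789012.dkr.ecr.eu-west-1.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8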

how to authenticate docker build when using private gitlab repo

When running docker build on my Dockerfile, I pull the most up-to-date code from a private GitLab repo using a FROM statement. I am getting an access forbidden error, as I have not provided my credentials. How do you provide your credentials so that I can pull from this private repo?
(Assuming you are talking about Gitlab Container Registry)
To be able to pull docker images from private registries, you need to first run this at the command line:
$ docker login -u $DOCKER_USER -p $DOCKER_PASS
If you are running this in a CI environment, you should set these as secret environment variables.
With Gitlab, I believe it is something along these lines:
$ docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.example.com
See the above linked page (search for "login") to see more examples and instructions.
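On recent GitLab versions the registry host and an ephemeral job token are exposed as predefined CI/CD variables, so inside a job the login is typically something like the sketch below (exact variable names depend on your GitLab version; older releases used CI_BUILD_TOKEN instead of CI_JOB_TOKEN):
# $CI_REGISTRY, $CI_REGISTRY_USER, $CI_REGISTRY_IMAGE and $CI_JOB_TOKEN are predefined CI/CD variables
echo "$CI_JOB_TOKEN" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"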

Docker in Docker unable to push

I'm trying to execute docker commands inside a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc.), but when I try to push to my remote (GitLab) registry I get access denied. Yes, I did do a docker login and was able to successfully log in.
When looking at the GitLab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.
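Another thing worth checking in this kind of setup: the Docker CLI inside the container reads its own /root/.docker/config.json, not the host's. A possible workaround (a sketch, assuming the login was done as the invoking user on the host) is to mount the host's credential file into the container read-only alongside the socket:
# Reuse the host's registry login inside the container
sudo docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME/.docker/config.json":/root/.docker/config.json:ro \
  -it my_docker_image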

GitLab CI ssh registry login

I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?
I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (in this case simply being stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
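For example, you can verify what the auth entry contains and copy the file to the target machine once (hostname is a placeholder):
# The "auth" value is simply base64("<username>:<password>"):
echo -n "myuser:mypassword" | base64
# Copy the existing credentials to the deployment target once:
ssh user@my-gce-vm "mkdir -p ~/.docker"
scp ~/.docker/config.json user@my-gce-vm:~/.docker/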
Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and directly talk to the remote server's Docker API from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
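Outside of CI you can test the same remote connection from any shell by exporting the standard Docker client variables (values are placeholders, and the TLS client certificates must already be in place):
# Point the local docker CLI at the remote, TLS-protected daemon
export DOCKER_HOST="tcp://<target-server>:2376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/remote-certs"   # ca.pem, cert.pem, key.pem
docker info   # should now report the remote engine
docker run registry.gitlab.com/my-group/my-project:tag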
We had the same "problem" on other hosting providers. Our solution is to use a custom script which runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever).
So you could just trigger the remote host to do the docker login and upgrade your service from gitlab-ci, without granting SSH access.
