To deploy an application using Docker and a remote registry:
With the Docker client, run docker login so that the credentials are stored either directly in $HOME/.docker/config.json or in a credential store that is itself specified in $HOME/.docker/config.json. Then use docker run (or docker create followed by docker start) to start the application.
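For illustration, here is a minimal sketch of the two config.json layouts described above; registry.example.com, the example auth value, and the helper name desktop are all placeholders:
# Credentials stored directly ("auth" is base64 of user:password):
{
  "auths": {
    "registry.example.com": { "auth": "dXNlcjpwYXNzd29yZA==" }
  }
}
# Or delegated to a credential store via a helper binary (docker-credential-desktop):
{
  "credsStore": "desktop"
}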
In Kubernetes, a secret can be generated from the Docker registry username and password. The secret can then be referenced in the Helm chart via imagePullSecrets, and helm install deploys the chart so that the kubelet uses the secret to pull the image for the containers in the scheduled pod. To switch to a different image registry, the image name and pull secret can be updated before re-installation.
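As a concrete sketch of that flow (the registry URL, credentials, chart, and secret names are placeholders, and the exact values path depends on how the chart consumes imagePullSecrets):
# Create a docker-registry secret from the registry credentials
kubectl create secret docker-registry my-pull-secret \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

# Install the chart, passing the secret name as a value
helm install my-app ./my-chart --set imagePullSecrets[0].name=my-pull-secret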
I have three questions:
1. How can I set the username and password, or inject these credentials, for the services in docker-compose without having to run docker login first on each deployment host?
2. Can I populate a credential store specified in $HOME/.docker/config.json using the docker login command on one machine, then specify the same credential store in the $HOME/.docker/config.json of another machine, and then use the answer to the previous question to inject or pull the credentials?
3. If the Docker client checks for credentials in the credential store specified in $HOME/.docker/config.json, then what is the purpose of the helper program?
Related
I am trying to pull and push images between Docker Desktop, Azure, and Visual Studio 2019.
Currently I can push from VS2019 via the Publish option, both to Docker Hub and to Azure Container Registry.
How do I pull from Azure to Docker? I believe there is an issue with security accounts between the two systems. After all, my Docker account is not my Azure account. I came across this article:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-service-principal
which contains a script. Is this the right article to solve my problem? I made a copy of the script, but I am struggling to run it. If I save it to an assignpermissions.sh file and run wsl ./assignpermissions.sh, it complains that az does not exist.
So:
1. Is that the right article to help me (eventually) pull and push between Azure and Docker?
2. How do I run the script when calling az causes an error?
3. Are there any other things I need to watch out for in the next step?
Log in to a registry
There are several ways to authenticate to your private container registry.
Azure CLI
The recommended method when working in a command line is with the Azure CLI command az acr login. For example, to log in to a registry named myregistry, log into the Azure CLI and then authenticate to your registry:
az login
az acr login --name myregistry
Azure PowerShell
The recommended method when working in PowerShell is with the Azure PowerShell cmdlet Connect-AzContainerRegistry. For example, to log in to a registry named myregistry, log into Azure and then authenticate to your registry:
Connect-AzAccount
Connect-AzContainerRegistry -Name myregistry
You can also log in with docker login. For example, you might have assigned a service principal to your registry for an automation scenario. When you run the following command, interactively provide the service principal appID (username) and password when prompted. For best practices to manage login credentials, see the docker login command reference:
docker login myregistry.azurecr.io
Both commands return Login Succeeded once completed.
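For non-interactive scenarios such as CI, a hedged sketch (the SP_PASSWORD variable and the service principal appID are placeholders) can pipe the password on stdin instead of typing it at the prompt:
# Non-interactive service principal login; placeholders must be filled in
echo $SP_PASSWORD | docker login myregistry.azurecr.io \
  --username <service-principal-appid> \
  --password-stdin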
Note: You might want to use Visual Studio Code with the Docker extension for a faster and more convenient login.
Tip: Always specify the fully qualified registry name (all lowercase) when you use docker login and when you tag images for pushing to your registry. In the examples in this article, the fully qualified name is myregistry.azurecr.io.
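The tagging step referenced below is not shown in this excerpt; assuming a local image pulled from Docker Hub as nginx, it would look something like:
# Tag the local image (assumed to be named nginx) with the fully qualified registry path
docker tag nginx myregistry.azurecr.io/samples/nginx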
Push the image to your registry
Now that you've tagged the image with the fully qualified path to your private registry, you can push it to the registry with docker push:
docker push myregistry.azurecr.io/samples/nginx
Pull the image from your registry
Use the docker pull command to pull the image from your registry:
docker pull myregistry.azurecr.io/samples/nginx
Since Docker Hub only allows one private repo, I wonder if there is any way to use GitHub or GitLab, etc., to download the images? For instance:
FROM git#github.com/username/repo
...
...
...
Very easy with an account on gitlab.com. GitLab provides a Docker registry linked to each project, and you can have unlimited private projects:
Create a project my-docker-project
Go to Packages & Registries > Container Registry; you should see a few commands to access your registry
Connect your machine to this registry using a command like:
# Will prompt for login/pass
docker login registry.gitlab.com
You'll need an access token or deploy token with the read_registry and write_registry scopes. You can generate one via your profile under Preferences > Access Tokens. The login is the token name and the password is the secret token provided.
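For scripted use, a hedged sketch that avoids the interactive prompt (the token name and secret are placeholders):
# Non-interactive login; <token-name> and <secret-token> are placeholders
echo <secret-token> | docker login registry.gitlab.com \
  --username <token-name> --password-stdin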
You can now push Docker images with commands such as:
# Push an image
docker push registry.gitlab.com/YourUsernameOrGroup/my-docker-project
# Push an image on a sub-path
docker push registry.gitlab.com/YourUsernameOrGroup/my-docker-project/myimage
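These push commands assume the image was already built and tagged with the full registry path; a hedged sketch of that step (the Dockerfile location and the latest tag are assumptions):
# Build and tag the image with the registry path in one step
docker build -t registry.gitlab.com/YourUsernameOrGroup/my-docker-project:latest .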
You can then use the image in a Dockerfile by referencing its URL such as:
FROM registry.gitlab.com/YourUsernameOrGroup/my-docker-project
# ...
Of course, the machine from which you build must be authenticated against the corresponding GitLab registry using the docker login command above (or the project must be public).
Both have excellent package registry services:
For GitHub, the GitHub Package Registry (GPR)
For GitLab, the GitLab Container Registry
Both have excellent features, such as using an image directly in a Dockerfile, as you want in your example.
I have a public Node.js example on GitHub that uses GPR to store the built image/package.
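For instance, a hedged sketch of referencing such an image in a Dockerfile (the owner, image name, and tag are placeholders; GitHub serves container images from ghcr.io):
FROM ghcr.io/username/my-image:latest
# ... the rest of your build; the ghcr.io path above is illustrative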
I have a private GitLab project with a pipeline for building and pushing a Docker image. Therefore I have to authenticate to GitLab's Docker registry first.
Research
I read Authenticating to the Container Registry with GitLab CI/CD:
There are three ways to authenticate to the Container Registry via GitLab CI/CD which depend on the visibility of your project.
Available for all projects, though more suitable for public ones:
Using the special CI_REGISTRY_USER variable: The user specified by this variable is created for you in order to push to the Registry connected to your project. Its password is automatically set with the CI_REGISTRY_PASSWORD variable. This allows you to automate building and deploying your Docker images and has read/write access to the Registry. This is ephemeral, so it’s only valid for one job. You can use the following example as-is:
docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
For private and internal projects:
Using a personal access token: You can create and use a personal access token in case your project is private:
For read (pull) access, the scope should be read_registry.
For read/write (pull/push) access, use api.
Replace the <username> and <access_token> in the following example:
docker login -u <username> -p <access_token> $CI_REGISTRY
Using the GitLab Deploy Token: You can create and use a special deploy token with your private projects. It provides read-only (pull) access to the Registry. Once created, you can use the special environment variables, and GitLab CI/CD will fill them in for you. You can use the following example as-is:
docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
and Container Registry:
With the updated permission model we also extended the support for accessing Container Registries for private projects.
Your jobs can access all container images that you would normally have access to. The only implication is that you can push to the Container Registry of the project for which the job is triggered.
This is what an example usage can look like:
test:
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY/group/other-project:latest
    - docker run $CI_REGISTRY/group/other-project:latest
I tried the first and the fourth way and I could authenticate.
Question
What are the pros and cons? I guess the third way is for deployment only, not for building and pushing. The same could be true for the second way. Is that right?
And why is the fourth way not listed in the other documentation? Is that way deprecated?
I prefer the fourth option. A note: "If a user creates one named gitlab-deploy-token, the username and token of the deploy token is automatically exposed to the CI/CD jobs as CI/CD variables: CI_DEPLOY_USER and CI_DEPLOY_PASSWORD respectively."
When creating a deploy token, you can grant it read/write permission to the registry/package registry.
CI_REGISTRY_PASSWORD is ephemeral, so avoid using it if you have multiple deploy jobs (which need to pull private images) running in parallel.
I believe the differences are just about user skill and permissions.
The first way anyone can use, since the variables are automatically present in a running job.
Second, anyone with any permissions can create a personal access token (but this has an extra step compared to 1: creating the access token).
Third, someone with the correct permissions could create a deploy token. Deploy tokens don't give access to the API like personal access tokens do, and only have permission to pull/read the data in the repository; they cannot write/push.
Fourth option: it allows you both to read/pull container images from the registry and to push to the registry. This is helpful if you have a CI step that builds an app in an image, or anything else where you're generating a container image and want to push it into the registry (so another step in the pipeline can pull it down and use it). My guess is that this option isn't listed with the others because it's meant for building container images. You could probably use it like any of the others, though.
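As an illustration of that build-and-push pattern, a hedged .gitlab-ci.yml sketch (the job name, stage, and tag scheme are assumptions; CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are predefined GitLab CI variables):
# Builds the project image and pushes it to the project's registry
build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA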
I'm using the official Docker registry image and have configured it as a pull-through cache.
My clients can log in and push/pull local images, like this:
docker login -u username -p secret docker.example.local:5000
docker pull docker.example.local:5000/myimage
I've configured my clients to use the Docker registry server as a mirror:
root@server:/# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.example.local:5000"]
}
But when my clients try to pull images not already present on the registry server, I get an error. Example pull command:
docker pull alpine
The registry server then responds with this message in its log file:
error authorizing context: basic authentication challenge for realm \"Registry Realm\": invalid authorization credential
I came across this SO post suggesting putting an Nginx proxy server in front, but this seems like a hack, and I'd prefer a cleaner way of doing it if possible.
How have others set up their registry server in pull-through cache mode? Did you find a better solution than putting an Nginx proxy in front of the registry server?
You are using the wrong registry server name.
Do not use the https:// prefix:
docker login -u username -p secret docker.example.local:5000
You should ensure that you either provide the environment variable REGISTRY_HTTP_HOST=https://docker.example.local:5000 or specify it in the /etc/docker/registry/config.yml file of the registry image:
http:
  addr: localhost:5000
  prefix: /my/nested/registry/
  host: https://docker.example.local:5000
  # see https://docs.docker.com/registry/configuration/
The reason is that the address used in docker login must match the host configuration of the Docker registry.
It's been a while since I dug through that code, but I believe Docker will attempt to log in to your pull-through cache using your Hub credentials. It only uses that registry's individual credentials when you pull from it directly. So you need to run docker login without a hostname to configure the Hub login. This is only between the Docker engine and the mirror.
From the pull-through cache to Hub, you configure the user/password in the pull-through cache, and anyone who can reach the cache will use those credentials when pulling from Hub. This means you need to ensure the cache is configured with a minimal-access user or is only accessible by devices on the network that you trust.
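For reference, a hedged sketch of the proxy section in the cache's /etc/docker/registry/config.yml (the remote URL is Docker Hub's documented endpoint; the username and password are placeholders for a minimal-access Hub account):
proxy:
  remoteurl: https://registry-1.docker.io
  username: hubuser      # placeholder
  password: hubpassword  # placeholder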
We are using the Google Container Registry to store our Docker images.
To authorize our build instances, we place long-lived access tokens in .docker/config.json as described in the docs.
This works perfectly fine until someone (i.e. some Makefile) uses gcloud docker -- push ... to push to the registry (instead of, e.g., docker push ...). gcloud will replace the existing long-lived credentials with short-lived ones that expire after some time, so subsequent builds may fail, depending on the exact timing.
My Question: How can I prevent gcloud docker ... from messing with my provisioned credentials?
I've tried chattr +i .docker/config.json, but this just makes gcloud complain.
From https://cloud.google.com/sdk/gcloud/reference/docker:
The gcloud docker command group wraps docker commands, so that gcloud can inject the appropriate fresh authentication token into requests that interact with the docker registry.
The only thing that gcloud docker does is change these credentials and then invoke the docker CLI. If you don't want it to change the credentials, there's no reason not to just call docker directly.
One workaround might be to use an alternate configuration file location for your long-lived credentials; per https://docs.docker.com/engine/reference/commandline/cli/:
Options:
--config string Location of client config files (default "/root/.docker")
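A hedged sketch of that workaround (the config directory, key file, and image path are placeholders; _json_key is the documented username for key-file logins to gcr.io): keep the long-lived credentials in a separate config directory that gcloud never touches, and point docker at it explicitly.
# One-time: store the long-lived key under an alternate config directory
cat keyfile.json | docker --config /opt/docker-longlived login -u _json_key --password-stdin https://gcr.io

# Later builds push with the provisioned credentials, untouched by gcloud
docker --config /opt/docker-longlived push gcr.io/my-project/my-image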