Kubernetes & Gitlab: How to store password for private registry? - docker

I want to run my application that is hosted in a private container registry on a Kubernetes cluster. I followed the instructions here and created a secret like this:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> \
--docker-username=<your-name> \
--docker-password=<your-pword> \
--docker-email=<your-email>
which is used in my deployment like this:
containers:
- image: registry.gitlab.com/xxxxx/xxxx
  name: dockerdemo
  resources: {}
imagePullSecrets:
- name: regcred
K8s is now able to pull the image from my private registry. However, I don't feel comfortable that my username and password are stored in plain text in the cluster. Is there a better/more secure way to give the K8s cluster access to the registry, maybe via a token?

Since I am using GitLab, the solution for me is not to store my user credentials in Kubernetes. Instead I am using a Deploy Token that can be revoked at any time and that only has access to the container registry.
The following steps are necessary here:
Open Gitlab and go to your project
Settings > Repository > Deploy Tokens
Create a token with scope read_registry
Create secret in K8S: kubectl create secret docker-registry regcred --docker-server=registry.gitlab.com --docker-username=<token_username> --docker-password=<token>
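To see what actually lands in the cluster, note that the Secret still only base64-encodes the token (it is not encrypted), which is exactly why a revocable deploy token is preferable to a real password. A minimal sketch of the .dockerconfigjson payload that kubectl generates (the token username and value below are made-up placeholders):

```shell
# Hypothetical deploy-token values; substitute your own.
USER="gitlab+deploy-token-123"
TOKEN="s3cr3t-deploy-token"

# Docker-style registry auth is just base64("username:password").
AUTH=$(printf '%s:%s' "$USER" "$TOKEN" | base64 | tr -d '\n')
cat > dockerconfig.json <<EOF
{"auths":{"registry.gitlab.com":{"auth":"$AUTH"}}}
EOF

# Decoding the "auth" field recovers the raw credentials, so anyone who
# can read the Secret can read the token -- hence: revocable tokens only.
printf '%s' "$AUTH" | base64 -d
# prints gitlab+deploy-token-123:s3cr3t-deploy-token
```

Against a real cluster, the stored payload can be inspected the same way with `kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d`.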
Thank you @Jonas for your links, but this solution is what I was looking for.

However, I don't feel comfortable that my username and password are stored in plain text in the cluster. Is there a better/more secure way to give the K8s cluster access to the registry, maybe via a token?
See Encrypting Secret Data at Rest
for how to ensure that your Secrets are encrypted in etcd.
Alternatively, you can consider using Vault to store secrets. See e.g. How Monzo bank security team handle secrets.
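For the encryption-at-rest route, a minimal sketch of the API server configuration follows (file paths and the key name are assumptions; see the linked doc for the authoritative format):

```shell
# Generate a random 32-byte AES key, base64-encoded (aescbc requires 32 bytes).
KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')

cat > /tmp/encryption-config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${KEY}
      - identity: {}    # fallback so not-yet-encrypted data stays readable
EOF

# The kube-apiserver must then be started with:
#   --encryption-provider-config=/tmp/encryption-config.yaml
# and existing Secrets rewritten once so they get encrypted:
#   kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```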

Related

Create kubectl secrets when pulling from private registry using crio

I want to pull images from a private registry; since I'm not using Docker, I need another way to do this step.
Is there any alternative to this command for CRI-O:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
This works for creating secrets and pulling when using Docker.
My requirement is to create kubectl secrets for a private registry URL and pull images using CRI-O/crictl:
...
imagePullSecrets:
- name: regcred
Oh, here is a strange question, haha.
Your command should work even if it's not a Docker Hub registry; you just have to configure your deployment to use the generated secret:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
I mean, this command isn't tied to Docker Hub only; it works with every private registry as long as you put in the correct URL and credentials. If CRI-O is your container runtime, no problem. Have you tried it?
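To expand on that: the kubelet resolves imagePullSecrets itself and hands the credentials to the runtime over the CRI, so nothing in the `kubectl create secret docker-registry` command is Docker-specific. For checking credentials directly on a CRI-O node, crictl can pull with explicit credentials (the registry, image and credentials below are placeholders):

```shell
IMAGE="registry.example.com/project/app:latest"   # placeholder image

if command -v crictl >/dev/null 2>&1; then
  # --creds passes username[:password] for the registry to the runtime
  crictl pull --creds "myuser:mypassword" "$IMAGE"
else
  echo "crictl not found; on a CRI-O node run: crictl pull --creds user:pass $IMAGE"
fi
```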

Is there a way to configure docker hub pro user in kubernetes?

We've just bought a docker hub pro user so that we don't have to worry about pull rate limits.
Now I'm having a problem trying to set up the Docker Hub Pro user. Is there a way to set the credentials for hub.docker.com globally?
In the Kubernetes docs I found the following article: Kubernetes | Configure nodes for private registry
On every node I executed a docker login with the credentials, copied the config.json to /var/lib/kubelet and restarted kubelet. But I'm still getting an ErrImagePull because of those rate limits.
I've copied the config.json to the following places:
/var/lib/kubelet/config.json
/var/lib/kubelet/.dockercfg
/root/.docker/config.json
/.docker/config.json
There is an option to use a secret for authentication. The problem is that we would need to edit hundreds of StatefulSets, Deployments and DaemonSets. So it would be great to set the Docker user globally.
Here's the config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "[redacted]"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.13 (linux)"
  }
}
To check if it actually logs in with the user I've created an access token in my account. There I can see the last login with said token. The last login was when I executed the docker login command. So the images that I try to pull aren't using those credentials.
Any ideas?
Thank you!
Kubernetes implements this using image pull secrets. This doc does a better job of walking through the process.
Using the Docker config.json:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Or you can pass the settings directly:
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
Then use those secrets in your pod definitions:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
Or to use the secret at a user level (Add image pull secret to service account)
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Open the sa.yaml file, delete the line with the key resourceVersion, add the imagePullSecrets: lines, and save.
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-11-22T21:41:53Z"
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: afad07eb-f58e-4012-9ccf-0ac9762981d5
secrets:
- name: default-token-gkmp7
imagePullSecrets:
- name: regcred
Finally, replace the service account with the updated sa.yaml file:
kubectl replace serviceaccount default -f ./sa.yaml
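The same service-account change can also be applied in a single step with `kubectl patch`, which avoids the export/edit/replace round trip (a sketch; assumes the regcred secret already exists in the namespace):

```shell
PATCH='{"imagePullSecrets":[{"name":"regcred"}]}'

if kubectl cluster-info >/dev/null 2>&1; then
  # Merge the imagePullSecrets list into the default service account.
  kubectl patch serviceaccount default -p "$PATCH"
else
  echo "no cluster reachable; would run: kubectl patch serviceaccount default -p $PATCH"
fi
```

Pods created after the patch inherit the pull secret automatically; already-running pods need to be recreated to pick it up.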
We use docker-registry as a proxy cache in our Kubernetes clusters; Docker Hub credentials may be set in its configuration. The Docker daemons on the Kubernetes nodes are configured to use the proxy by setting registry-mirrors in /etc/docker/daemon.json.
This way, you do not need to modify any Kubernetes manifest to include pull secrets. Our complete setup is described in a blog post.
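A sketch of the node-side part of that setup (the mirror URL is a placeholder; the real file lives at /etc/docker/daemon.json):

```shell
# Point the Docker daemon at a pull-through cache so Docker Hub pulls
# (and their rate limits) go through the mirror instead.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry-cache.internal.example.com"]
}
EOF

# After installing the file on a node:
#   sudo cp /tmp/daemon.json /etc/docker/daemon.json
#   sudo systemctl restart docker
cat /tmp/daemon.json
```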
I ran into the same problem as OP. It turns out that putting the Docker credential file in place for the kubelet works on Kubernetes 1.18 or higher. I have tested this and can confirm that kubelet 1.18 picks up the config.json placed in /var/lib/kubelet correctly and authenticates to the Docker registry.

Does Kubernetes kubelet support Docker Credential Stores for private registries?

Docker has a mechanism for retrieving Docker registry passwords from a remote store, instead of just storing them in a config file - this mechanism is called a Credentials Store. It has a similar mechanism, called Credential Helpers, that is used to retrieve a password for a specific registry.
Basically, it involves defining a value in ~/.docker/config.json that is interpreted as the name of an executable.
{
  "credsStore": "osxkeychain"
}
The value of the credsStore key has the prefix docker-credential- prepended to it, and if that executable (e.g. docker-credential-osxkeychain) exists on the path then it will be executed and is expected to echo the username and password to stdout, which Docker will use to log in to a private registry. The idea is that the executable reaches out to a store and retrieves your password for you, so you don't have to have lots of files lying around in your cluster with your username/password encoded in them.
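The protocol itself is small enough to sketch with a fake helper: Docker invokes `docker-credential-<name> get`, writes the server URL to stdin, and expects a JSON object with Username and Secret on stdout (the helper name and credentials below are made up):

```shell
cat > /tmp/docker-credential-demo <<'EOF'
#!/bin/sh
# "get" is the action Docker uses for pulls: read the server URL from
# stdin and answer with credentials as JSON on stdout.
if [ "$1" = "get" ]; then
  read -r server
  printf '{"ServerURL":"%s","Username":"demo-user","Secret":"demo-pass"}\n' "$server"
fi
EOF
chmod +x /tmp/docker-credential-demo

echo "https://registry.example.com" | /tmp/docker-credential-demo get
# prints {"ServerURL":"https://registry.example.com","Username":"demo-user","Secret":"demo-pass"}
```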
I can't get a Kubernetes kubelet to make use of this credential store. It seems to just ignore it and when Kubernetes attempts to download from a private registry I get a "no basic auth credentials" error. If I just have a config.json with the username / password in it then kubelet works ok.
Does Kubernetes support Docker credential stores/credential helpers and if so, how do I get them to work?
For reference, kubelet is running through systemd, the credential store executable is on the path and the config.json file is being read.
At the time of writing, Kubernetes v1.14 does not support credential helpers, as per the official docs Configuring Nodes to Authenticate to a Private Registry:
Note: Kubernetes as of now only supports the auths and HttpHeaders section of docker config. This means credential helpers (credHelpers or credsStore) are not supported.
Yes, Kubernetes has the same mechanism, called Secrets, but with extended functionality, and it includes a specific secret type called docker-registry. You can create your specific secret with credentials for the Docker registry:
$ kubectl create secret docker-registry myregistrykey \
--docker-server=DOCKER_REGISTRY_SERVER \
--docker-username=DOCKER_USER \
--docker-password=DOCKER_PASSWORD \
--docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
and use it:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey

Openshift imagestream "Import failed (Unauthorized)" for private external secure registry

Maybe I'm not getting something right, but my ImageStream returns "! error: Import failed (Unauthorized): you may not have access to the Docker image "my_registry:5000/project/my_image:latest"".
I have set up all the needed steps to connect to the external registry (created a secret and added it to the current project's serviceaccount/default and serviceaccount/builder accounts). All deploymentconfigs with the specified image: my_registry:5000/project/my_image:latest are working great; the node can successfully pull the image and create a pod.
But when I am making image stream with:
from:
  kind: DockerImage
  name: my_registry:5000/project/my_image:latest
I get an error that I am not authorized.
So what am I doing wrong? Is there any additional account I should give pull rights to?
oc describe sa/builder
Name:                builder
Namespace:           nginx
Labels:              <none>
Image pull secrets:  builder-dockercfg-8ogvt
                     my_registry
Mountable secrets:   builder-token-v6w8q
                     builder-dockercfg-8ogvt
                     my_registry
Tokens:              builder-token-0j8p5
                     builder-token-v6w8q
and
oc describe sa/default
Name:                default
Namespace:           nginx
Labels:              <none>
Image pull secrets:  default-dockercfg-wmm1h
                     my_registry
Mountable secrets:   default-token-st7k9
                     default-dockercfg-wmm1h
Tokens:              default-token-m2aoq
                     default-token-st7k9
The solution depends on your particular infrastructure configuration, but here are some pointers which worked for me:
Assuming your private external registry uses certificates, check that those certificates are properly imported; if that's not the case, add the registry as insecure.
Docker pull, build config and imagestream pull all work in different ways.
It is also recommended that the pull secret name be the same as the hostname of the registry authentication endpoint (if not using an insecure registry).
For example: Registry FQDN Name:5000/yourapp:latest (certificates need this to work properly).
Please take a look here
oc secrets link default <pull_secret_name> --for=pull
I ran into the same problem when I was trying to import an image from a docker registry hosted in another Openshift cluster. After some debugging I found the problem: Unable to find a secret to match https://docker-dev.xxxx.com:443/openshift/token (docker-dev.xxxx.com:443/openshift/token)
The OpenShift Docker registry uses OpenShift's OAuth, so you have to create a secret where --docker-server points to the /openshift/token endpoint, e.g.:
oc secrets new-dockercfg registry.example.com \
--docker-server=https://registry.example.com:443/openshift/token \
--docker-username=default/puller-sa \
--docker-password=<token> \
--docker-email=someone@example.com
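For completeness, the `<token>` above is the puller service account's token; with older oc clients (pre-4.11, matching the era of this answer) it can be read with `oc sa get-token` (the service-account name and namespace here are assumptions):

```shell
# Try the client if present; otherwise just show the command (sketch).
MSG=$(command -v oc >/dev/null 2>&1 && oc sa get-token puller-sa -n default \
      || echo "run on a logged-in client: oc sa get-token puller-sa -n default")
echo "$MSG"
```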

Kubernetes AWS deployment can not set docker credentials

I set up a Kubernetes cluster on AWS using the kube-up script, with one master and two minions. I want to create a pod that uses a private Docker image, so I need to add my credentials to the Docker daemons of each minion in the cluster. But I don't know how to log into the minions created by the AWS script. What is the recommended way to pass credentials to the Docker daemons of each minion?
Probably the best method for you is ImagePullSecrets - you will create a secret (docker config) which will be used for the image pull. Read more about the different concepts of using a private registry: http://kubernetes.io/docs/user-guide/images/#using-a-private-registry
Explained here: https://kubernetes.io/docs/concepts/containers/images/
There are 3 options for imagePullPolicy: Always, IfNotPresent and Never.
1) Example YAML:
...
spec:
  containers:
  - name: uses-private-image
    image: $PRIVATE_IMAGE_NAME
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
2) By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
All pods will have read access to any pre-pulled images.
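The pre-pull alternative from point 2 can be sketched as a loop over the nodes (the node names and image are placeholders; the actual pull is left commented since it needs SSH access to the minions):

```shell
PRIVATE_IMAGE_NAME="registry.example.com/project/app:v1"   # placeholder

for node in minion-1 minion-2; do
  # ssh "$node" docker pull "$PRIVATE_IMAGE_NAME"   # real per-node step
  echo "pre-pull on $node: $PRIVATE_IMAGE_NAME"
done
```

Combined with imagePullPolicy: IfNotPresent, the kubelet then uses the local copy instead of authenticating to the registry.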