I'm using a JFrog repository as my private registry, and I have specified the secret to authenticate with JFrog. But it says:
Failed to pull image "private_registry/image_name": rpc error: code = Unknown desc = failed to pull and unpack image "private_registry/image_name": failed to resolve reference "": failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
Warning Failed 2s (x2 over 14s) kubelet, worker-0 Error: ErrImagePull
I have created the secret file too, but it still doesn't pull the image. When I run docker image pull private_registry/image_name manually, the image gets pulled.
From the question, it's clear that it's not able to authenticate for some reason. It could be a misconfiguration of the secret; it's also possible that the credentials used are not valid. Since the secret.yaml file is not included in the question, I can't tell what format it uses.
In this case, I would suggest looking at two things:
Check that the credentials are correct. (You can verify this by running docker pull after a docker login with the same credentials.)
If you are able to pull the image successfully that way, check that the way you configure or pass the credentials in the chart is correct. There are different ways to pass credentials in different cases; you can take a look at this official doc, and this answer on DevOps Stack Exchange might also help to get some insight. You can also rule out the chart templating by creating the secret imperatively, as sketched below.
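For example, a minimal way to create the pull secret directly, bypassing the chart templating (a sketch; the secret name jfrog-pull-secret and the server address are placeholders for your own setup):

kubectl create secret docker-registry jfrog-pull-secret \
  --docker-server=<your-instance>.jfrog.io \
  --docker-username=<username> \
  --docker-password=<password-or-api-key> \
  --docker-email=<email>

If pulling works with this secret referenced in imagePullSecrets but not with the chart-generated one, the problem is in the chart's secret template.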
And here is what my secret.yaml looks like:
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Chart.Name }}-docker-credentials
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Chart.Name }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
type: kubernetes.io/dockercfg
data:
  .dockercfg: {{ printf "{\"%s/\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}" .Values.REGISTRY.URL .Values.REGISTRY.USER_NAME .Values.REGISTRY.PASSWORD .Values.REGISTRY.EMAIL .Values.REGISTRY.AUTH_TOKEN | b64enc | quote }}
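For completeness, the values this template reads might look like this in values.yaml (a hypothetical sketch; the key names simply mirror the template above, and AUTH_TOKEN is the base64 of username:password, as the legacy .dockercfg format expects):

REGISTRY:
  URL: myregistry.jfrog.io
  USER_NAME: myuser
  PASSWORD: mypassword
  EMAIL: me@example.com
  AUTH_TOKEN: bXl1c2VyOm15cGFzc3dvcmQ=   # base64 of "myuser:mypassword"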
In the deployments I do:

spec:
  imagePullSecrets:
    - name: {{ .Chart.Name }}-docker-credentials
Since you have already resolved it, I hope this helps someone else. 😃
Related
I have a private Docker Hub registry with a (rather large) image in it that I control.
I also have a Helm deployment chart that specifies an imagePullSecret, after having followed the instructions here https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
No matter what I do, though, when installing the Helm chart, I always end up with the following (taken from kubectl describe pod <pod-id>):
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  26m                  default-scheduler  Successfully assigned default/<release>-69584657b7-vkps6 to <node>
Warning  Failed     6m28s (x3 over 20m)  kubelet            Failed to pull image "<registry-username>/<image>:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/<registry-username>/<image>:latest": failed to copy: httpReadSeeker: failed open: server message: invalid_token: authorization failed
Warning  Failed     6m28s (x3 over 20m)  kubelet            Error: ErrImagePull
Normal   BackOff    5m50s (x5 over 20m)  kubelet            Back-off pulling image "<registry-username>/<image>:latest"
Warning  Failed     5m50s (x5 over 20m)  kubelet            Error: ImagePullBackOff
Normal   Pulling    5m39s (x4 over 26m)  kubelet            Pulling image "<registry-username>/<image>:latest"
I have looked high and low on the internet for answers pertaining to this invalid_token output, but have yet to find anything concrete.
I have verified that I can run docker pull manually with the image in question both on the K8s node as well as other boxes. It works just fine.
I have tried using docker.io as the repository URI, as well as (the recommended) https://index.docker.io/v1/.
I have tried using my own Docker Hub password as well as a generated Personal Access Token (I can actually see in Docker Hub that the PAT was, in fact, used, despite the pull failing).
I've examined the secrets via kubectl to verify they're of the expected format and contain the correct data (username, password/token, etc.). They're all fine and match what I'd get when I run docker login on the command line.
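In case it helps anyone run the same check, this is the sort of command I used to decode the secret (regcred-docker-pat is the secret name from the template output further down; the \. escape is needed in jsonpath because the key starts with a dot):

kubectl get secret regcred-docker-pat -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d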
I have used this node to deploy other releases via Helm and they have all worked fine (although at least one has been from a different registry).
I am relatively new to K8s and Helm, but I've used Docker for a long while now and I'm at a loss as to this invalid_token issue.
Any help would be greatly appreciated.
Thank you in advance.
UPDATE
Here's the (sanitized) output of helm template:
---
# Source: <deployment>/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-<deployment>
  labels:
    helm.sh/chart: <deployment>-0.1.0
    app.kubernetes.io/name: <deployment>
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: <deployment>
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: <deployment>
        app.kubernetes.io/instance: release-name
    spec:
      imagePullSecrets:
        - name: regcred-docker-pat
      securityContext:
        {}
      containers:
        - name: <deployment>
          securityContext:
            {}
          image: "<registry-username>/<image>:latest"
          imagePullPolicy: IfNotPresent
          resources:
            {}
I've also confirmed that any secrets I have tried are, in fact, in the same namespace as the pod (in this case, the default namespace).
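For anyone verifying the same thing, a one-liner like this confirms the secret exists in the pod's namespace (names taken from the manifest above):

kubectl get secret regcred-docker-pat -n default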
Is the imagePullSecret created by the Helm chart?
Is the imagePullSecret available when the deployment is created?
Do you apply the deployment before the imagePullSecret is available?
I remember that the order matters when applying the imagePullSecret; if I recall correctly, the pull is not retried after an authentication failure. See the sketch below for the ordering.
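If ordering is the concern, a sequence like this rules it out (a sketch; the secret name matches the chart output above, and the --docker-server value is the usual Docker Hub default):

kubectl create secret docker-registry regcred-docker-pat \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<registry-username> \
  --docker-password=<password-or-PAT>
helm install <release> .   # the secret already exists when the pod is scheduled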
I have successfully installed my helm chart and was trying to change the value of my image version by doing:
helm upgrade --set myAppVersion=1.0.7 myApp . --atomic --reuse-values
The upgrade fails with this error:
Error: UPGRADE FAILED: an error occurred while rolling back the
release. original upgrade error: cannot patch "myappsecret" with
kind Secret: Operation cannot be fulfilled on secrets
"myappsecret": the object has been modified; please apply your
changes to the latest version and try again: cannot patch
"myappsecret" with kind Secret: Operation cannot be fulfilled on
secrets "diffgramsecret": the object has been modified; please apply
your changes to the latest version and try again
It's somehow related to a secret I have in my deployment. This is the YAML file of the secret:
apiVersion: v1
data:
  .dockerconfigjson: {{ .Values.imagePullCredentials.gcrCredentials }}
kind: Secret
metadata:
  creationTimestamp: "2021-01-20T22:54:29Z"
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:.dockerconfigjson: {}
        f:type: {}
      manager: kubectl
      operation: Update
      time: "2021-01-20T22:54:29Z"
  name: myappsecret
  namespace: default
  resourceVersion: "2073"
  uid: 7c99cb08-5576-4fa3-b6f9-d8d11d76d32c
type: kubernetes.io/dockerconfigjson
This secret is used on my deployments to fetch the docker image from our GCR docker registry.
I'm not sure why this is causing problems, because the only value I'm changing is the docker image tag.
Can anybody help me with this?
Could somebody help me, please? I'm trying to build and publish my image to a private Docker registry:
kind: pipeline
name: default

steps:
  - name: docker
    image: plugins/docker
    settings:
      username: ****
      password: ****
      repo: https://*****.com:5000/myfirstimage
      registry: https://*****.com:5000
      tags:
        - latest
But I get the following error:
Error parsing reference: "https://*****.com:5000/myfirstimage:latest" is not a valid repository/tag: invalid reference format
15921 time="2020-10-18T17:52:20Z" level=fatal msg="exit status 1"
But when I push manually, everything works.
What am I doing wrong? I will be grateful for any help.
It is not common to include the scheme in the addresses of a Docker registry and repo.
Try writing those addresses without https://, as in:
repo: <registry-hostname>.com:5000/myrepo/myfirstimage
registry: <registry-hostname>.com:5000
I am running into a strange issue: docker pull works, but when using kubectl create or apply -f with a kind cluster, I get the error below:
Warning Failed 20m kubelet, kind-control-plane Failed to pull image "quay.io/airshipit/kubernetes-entrypoint:v1.0.0": rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/airshipit/kubernetes-entrypoint:v1.0.0": failed to copy: httpReaderSeeker: failed open: failed to do request: Get https://d3uo42mtx6z2cr.cloudfront.net/sha256/b5/b554c0d094dd848c822804a164c7eb9cc3d41db5f2f5d2fd47aba54454d95daf?Expires=1587558576&Signature=Tt9R1O4K5zI6hFG9GYt-tLAWkwlQyLoAF0NDNouFnff2ywZnPlMSo2x2aopKcQJ5cAMYYTHvYBKm2Zwk8W80tE9cRet1PfP6CnAmo2lzsYzKnRRWbgQhgsyJK8AmAvKzw7iw6lbYdP91JjEiUcpfjMAj7dMPj97tpnEnnd72kljRew8VfgBhClblnhNFvfR9fs9lRS7wNFKrZ1WUSGpNEEJZjNcc9zBNIbOyKeDPfvIpdJ6OthQMJ3EKaFEFfVN6asiyz3lOgM2IMjJ0uBI2ChhCyDx7YHTdNZCOoYAEmw8zo5Ma0n8EQpX3EwU1qSR0IwoGNawF0qV6tFAZi5lpbQ__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA: x509: certificate signed by unknown authority
Here is the ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRNd09Gb1hEVE13TURReU1ERXhNVE13T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlvCkNiYlFYblBxbXpUV0hwdnl6ZXdPcWo5L0NCSmFLV1lrSEVCZzJHcXhjWnFhWG92aVpOdkt3NVZsQmJvTUlSOTMKVUxiWGFVeFl4MHJyQ3pWanNKU09lWDd5VjVpY3JTOXRZTkF1eHhPZzBMM1F3SElxUEFKWkY5b1JwWG55VnZMcwpIcVBDQ2ZRblhBYWRpM3VsM2J5bjcrbVFhcU5mV0NSQkZhRVJjcXF5cDltbzduRWZ2YktybVM0TUdIUHN3eUV0CkYxeXJjc041Vlo5QkM5TWlXZnhEY1dUL2R5SXIrUjFtL3hWYlU0aGNjdkowYi9CQVJ3aUhVajNHVFpnYUtmbGwKNUE5elJsVFRNMjV6c0t5WHVLOFhWcFJlSTVCNTNqUUo3VGRPM0lkc0NqelNrbnByaFI0YmNFcll5eVNhTWN6cgo4c1l0RHNWYmtWOE9rd0pFTnlNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHdEFyckYrKzdjOGZEN09RZWxHWldxSHpyazkKbFQ2MHhmZjBtTzRFdWI3YUxzZGdSTmZuSjB5UDRVelhIeXBkZEhERFhzUHFzVHZzZ2h6MXBPNFQrVTVCVmRqQQpGWjdxZW9iUWN2NkhnSERZSjhOdy9sTHFObGMyeUtPYVJSNTNsbjRuWERWYkROaTcyeEJTbUlNN0hhOFJQSVNFCmttTndHeHFKQVM3UmFOanN0SDRzbC9LR2xKcUowNFdRZnN0b1lkTUY4MERuc0prYlVuSkQyb29oOGVHTlQ5WGsKOTZPbGdoa05yZ09ybmFOR2hTZlQxYjlxdDJZOFpGUlRrKzhhZGNNczlHWW50RzZZTW1WRzVVZDh0L1phbVlRSwpIWlJ6WDRxM3NoY1p3NWRmR2JZUmRPelVTZkhBcE9scHFOQ1FmZGxyOWMyeDMxdkRpOW4vZE9RMHVNbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:32768
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJSWNDdHVsWUhYaVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBME1qSXhNVEV6TURoYUZ3MHlNVEEwTWpJeE1URXpNVEJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTArZ0JKWHBpUncxK09WaGEKVjU0bG5IMndHTzRMK1hIZjBnUjFadU01MnUwUFV3THQ5cDNCd2d5WDVHODhncUFIMmh3K1c4U2lYUi9WUUM5MgpJd3J3cnc1bFlmcTRrWDZhWEcxdFZLRjFsU2JMUHd4Nk4vejFMczlrbnlRb2piMHdXZkZ2dUJrOUtCMjJuSVozCmdOUEZZVmNVcWwyM2s3ck5yL0xzdGZncEJoVTRaYWdzbCsyZG53Qll2MVh4Z1M1UGFuTGxUcFVYODIxZ3RzQ0QKbUN1aFFyQlQzdzZ0NXlqUU5MSGNrZ3M4Y1JXUkdxZFNnZGMrdGtYczkzNDdoSzRjazdHYUw0OHFBMTgzZzBXKwpNZEllcDR3TUxGbU9XTCtGS2Q5dC83bXpMbjJ5RWdsRXlvNjFpUWRmV2s1S2Q1c1BqQUtVZXlWVTIrTjVBSlBLCndwaGFyUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGNXp5d1hSaitiakdzSG1OdjgwRXNvcXBjOEpSdVY4YnpNUQoxV0dkeGl3Mzk3TXBKVHFEaUNsTlZoZjZOOVNhVmJ2UXp2dFJqWW5yNmIybi9HREdvZDdyYmxMUWJhL2NLN1hWCm1ubTNHTXlqSzliNmc0VGhFQjZwUGNTa25yckRReFFHL09tbXE3Ulg5dEVCd2RRMHpXRGdVOFU0R0t3a3NyRmgKMFBYNE5xVnAwdHcyaVRDeE9lU0FpRnBCQ0QzS3ZiRTNpYmdZbHNPUko5S0Y3Y00xVkpuU0YzUTNZeDNsR3oxNgptTm9JanVHNWp2a3NDejc3TlFIL3Ztd2dXRXJLTndCZ0NDeEVQY1BjNFRZREU1SzBrUTY1aXc1MzR6bHZuaW5JCjZRTGYvME9QaHRtdC9FUFhRSU5PS0dKWEpkVFo1ZU9JOStsN0lMcGROREtkZjlGU3pNND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMCtnQkpYcGlSdzErT1ZoYVY1NGxuSDJ3R080TCtYSGYwZ1IxWnVNNTJ1MFBVd0x0CjlwM0J3Z3lYNUc4OGdxQUgyaHcrVzhTaVhSL1ZRQzkySXdyd3J3NWxZZnE0a1g2YVhHMXRWS0YxbFNiTFB3eDYKTi96MUxzOWtueVFvamIwd1dmRnZ1Qms5S0IyMm5JWjNnTlBGWVZjVXFsMjNrN3JOci9Mc3RmZ3BCaFU0WmFncwpsKzJkbndCWXYxWHhnUzVQYW5MbFRwVVg4MjFndHNDRG1DdWhRckJUM3c2dDV5alFOTEhja2dzOGNSV1JHcWRTCmdkYyt0a1hzOTM0N2hLNGNrN0dhTDQ4cUExODNnMFcrTWRJZXA0d01MRm1PV0wrRktkOXQvN216TG4yeUVnbEUKeW82MWlRZGZXazVLZDVzUGpBS1VleVZVMitONUFKUEt3cGhhclFJREFRQUJBb0lCQUZzYWsrT1pDa2VoOVhLUwpHY1V4cU5udTc1YklRVDJ0UjV6emJjWWVTdkZrbWdJR2NHaG15cmF5MDFyU3VDRXd6QzlwbFNXL0ZFOFZNSW0zCjNnS1M0WWRobVJUV3hpTkhXdllCMWM5YzIwQ1V2UzBPSUQyUjg1ZDhjclk0eFhhcXIrNzdiaHlvUFRMU0U0Q1kKRHlqRDQwaEdPQXhHM25ZVkNmbHJaM21VaDQ2bEo4YlROcXB5UzFCcVdNZnZwekt1ZDB6TElmMWtTTW9Cbm1XeQo0RzBrNC9qWVdEOWNwdGtSTGxvZXp5WVlCMTRyOVdNQjRENkQ5eE84anhLL0FlOEQraTl2WCtCaUdGOURSYllJCmVVQmRTQzE2QnQybW5XWGhXMmhSRFFqRmR2dzJIQ0gxT0ppcVZuWUlwbGJEcjFYVXI1NzFYWTZQMFJlQ0JRc3kKOUZpMG44RUNnWUVBMUQ3Nmlobm5YaEZyWFE2WkVEWnp3ZGlwcE5mbHExMGJxV0V5WUVVZmpPd2p3ZnJ4bzVEYgppUmoySm5Fei96bDhpVDFEbmh3bFdWZlBNbWo3bUdFMVYwMkFWSkJoT20vdU1tZnhYVmcvWGwxOVEzODdJT0tpCjBKSmdabGZqVjEyUGdRU3NnbnRrckdJa3dPcisrOUFaL3R0UVVkVlU0bFgxWWRuazZ5T1V6YWNDZ1lFQS81Y1kKcHJxMVhuNGZBTUgxMzZ2dVhDK2pVaDhkUk9xS1Zia2ZtWUw0dkI0dG9jL2N1c1JHaGZPaTZmdEZCSngwcDhpSgpDU1ZCdzIxbmNHeGRobDkwNkVjZml2ZG0vTXJlSmlyQmFlMlRRVWdsMjh1cmU3MWJEdXpjbWMrQVRQa1VXVDgyCmJpaDM5b3A1SEo5N2NlU3JVYU5zRTgxaEdIaXNSSzJEL2pCTjU0c0NnWUVBcUExeHJMVlQ5NnlOT1BKZENYUkQKOWFHS3VTWGxDT2xCQkwwYitSUGlKbCsyOUZtd3lGVGpMc3RmNHhKUkhHMjFDS2xFaDhVN1lXRmdna2FUcDVTWQplcGEzM0wwdzd1Yy9VQlB6RFhqWk8rdUVTbFJNU2Y2SThlSmtoOFJoRW9UWElrM0VGZENENXVZU3VkbVhxV1NkCm9LaWdFUnQ4Q1hZTVE3MFdQNFE5eHhNQ2dZQnBkVTJ0bGNJNkQrMzQ0UTd6VUR5VWV1OTNkZkVjdTIyQ3UxU24KZ1p2aCtzMjNRMDMvSGZjL1UreTNnSDdVelQxdzhWUmhtcWJNM1BwZUw4aFRKbFhWZFdzMWFxbHF5c1hvbDZHZwpkRzlhODByenF0REJ5THFtcU9MSThBNHZOR0xLQkVRUUpkQ0J3RmNDa1dkYzhnNGlMRHp1MnNJaVY4QTB3aWVCCkhTczN5d0tCZ1FDeXl2Tk45enk5S3dNOW1nMW5GMlh3WUVzMzB4bmsrNXJmTGdRMzQvVm1sSVA5Y1cyWS9oTWQKWnVlNWd4dnlYREcrZW9GU3Njc281TmcwLytWUDI0Sjk0cGJIcFJWV3FIWENvK2gxZjBnKzdET2p0dWp2aGVBRwpSb240NmF1clJRSG5HUStxeldWcWtpS2l1dDBybFpHd2ZzUGs4eWptVjcrWVJuamxES1hUWUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
I ran into a similar issue (I think) on OpenShift: I could pull images, but I couldn't push them or get k8s to pull them. To resolve it, I had to update the docker config at /etc/sysconfig/docker and add the registry as an insecure registry. For OpenShift, the default route was required.
OPTIONS=' <some existing config stuff here> --insecure-registry=<fqdn-of-your-registry>'
Then systemctl restart docker to have the changes take effect.
You might also need to create a docker pull secret with your credentials in Kubernetes to allow it to access the registry. Details here
I have built a 4 node kubernetes cluster running multi-container pods all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core
I have also done the above with the .docker/config.json
I have added the secret to the kube master and added the following to the Pod configuration file:

imagePullSecrets:
  - name: docker.io
When I create the pod I get the error message:

Error: image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said: as of Docker 1.7, the use of .dockercfg has been deprecated in favor of a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys and a different type in the YAML:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64-encoded output must appear on a single line, so -w0 disables line wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME                  TYPE                                  DATA
default-token-olob7   kubernetes.io/service-account-token   2
registrypullsecret    kubernetes.io/dockerconfigjson        1
Then, in your pod's YAML (or in the pod template of a replication controller), reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: yourusername/privateimage:version
  imagePullSecrets:
    - name: registrypullsecret
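After creating the pod, a quick way to see whether the pull succeeded (pod name taken from the manifest above):

kubectl create -f my-private-pod.yaml
kubectl describe pod my-private-pod    # check the Events section for pull errors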
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after the secrets section:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
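Alternatively, the same change can be applied in one step with a patch instead of the get/edit/replace cycle (same secret name as above):

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'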
I can confirm that imagePullSecrets was not working for me when set on the deployment, but you can do the following:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end, after the secrets section, then save and exit.
And it works. Tested with Kubernetes 1.6.7.
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For CentOS 7, the docker config file is under /root/.dockercfg:
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
It worked for me; I hope it can also help you.
The easiest way to create the secret with the same credentials as your Docker configuration is:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes data in base64.
If you can download the images with docker, then Kubernetes should be able to download them too. But you need to add this to your Kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
Go the easy way, but do not forget to define --type and add the secret to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
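To double-check the result, you can inspect the type and decode the payload (the \. escape is needed in jsonpath because the key starts with a dot):

kubectl get secret YOURS-SECRET-NAME -o jsonpath='{.type}'
kubectl get secret YOURS-SECRET-NAME -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d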