Pulling images from private registry in Kubernetes - docker

I have built a 4 node kubernetes cluster running multi-container pods, all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core.
I have also done the above with the .docker/config.json.
I have added a secret to the kube master and added imagePullSecrets with name: docker.io to the Pod configuration file.
When I create the pod I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.

To add to what @rob said, as of Docker 1.7 the use of .dockercfg has been deprecated and Docker now uses a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding must appear on a single line, so -w0 disables line wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME                  TYPE                                  DATA
default-token-olob7   kubernetes.io/service-account-token   2
registrypullsecret    kubernetes.io/dockerconfigjson        1
Then, in your pod's yaml (or in the pod template of a replication controller) you need to reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: yourusername/privateimage:version
  imagePullSecrets:
    - name: registrypullsecret

If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after secrets:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
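Alternatively, if you prefer not to edit the YAML by hand, the same change can be applied with a single patch command (a minimal sketch; adjust the secret name to whatever you created above):
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'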

I can confirm that imagePullSecrets was not working for me with a deployment, but you can:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end, after secrets, then save and exit.
And it works. Tested with Kubernetes 1.6.7.

Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.

For CentOS 7, the docker config file is under /root/.dockercfg:
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
And it worked for me, hope that could also help.

The easiest way to create the secret with the same credentials as your docker configuration is:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes data in base64.
If you can download the images with docker, then kubernetes should be able to download them too. But it is required to add this to your kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
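If pods still fail to pull, it can help to verify that the secret really contains what you expect; a quick check, assuming the secret name myregistry from above:
kubectl get secret myregistry -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode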

Go the easy way, but do not forget to define --type and add it to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson

Related

Can I use `envoyExtAuthzHttp` with Anthos for OIDC?

Currently I am handling OIDC using OAuth2-proxy and Istio. We would now like to upgrade to Anthos since we are mainly on GCP. Everything works but I need to configure envoyExtAuthzHttp. Previously I would run kubectl edit configmap istio -n istio-system and add the following...
extensionProviders:
- name: oauth2-proxy
  envoyExtAuthzHttp:
    service: http-oauth-proxy.istio-system.svc.cluster.local
    port: 4180
    includeRequestHeadersInCheck: ['cookie']
    headersToUpstreamOnAllow: ['authorization']
    headersToDownstreamOnDeny: ['content-type', 'set-cookie']
However, ASM does not seem to install that config map...
Error from server (NotFound): configmaps "istio" not found
I noticed there is an istio-asm-managed config map, so I tried adding the config to that, but then I am not sure how to restart ASM, since the command I am used to (kubectl rollout restart deployment/istiod -n istio-system) isn't working.
When I try to go to the site instead of being redirected I see...
RBAC: access denied
What worked for me, after studying what asmcli does when you follow the migration steps here, is setting this configmap in istio-system before enabling the Anthos Service Mesh:
apiVersion: v1
data:
  mesh: |
    extensionProviders:
    ...<your settings here>...
kind: ConfigMap
metadata:
  name: istio-asm-managed-rapid
  namespace: istio-system
I have not verified whether it was actually necessary to do this before enabling the ASM, but that is how I did it.
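For reference, combining this with the extensionProviders block from the question, the resulting ConfigMap might look roughly like this (a sketch only; the provider list goes verbatim under the mesh literal block):
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-asm-managed-rapid
  namespace: istio-system
data:
  mesh: |
    extensionProviders:
    - name: oauth2-proxy
      envoyExtAuthzHttp:
        service: http-oauth-proxy.istio-system.svc.cluster.local
        port: 4180
        includeRequestHeadersInCheck: ['cookie']
        headersToUpstreamOnAllow: ['authorization']
        headersToDownstreamOnDeny: ['content-type', 'set-cookie']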

How can I set JNDI configuration in a Docker overrides.yaml file?

If I have a java configuration bean, saying:
package com.mycompany.app.configuration;
// whatever imports
public class MyConfiguration {
private String someConfigurationValue = "defaultValue";
// getters and setters etc
}
If I set that using jetty for local testing I can do so using a config.xml file in the following form:
<myConfiguration class="com.mycompany.app.configuration.MyConfiguration" context="SomeContextAttribute">
<someConfigurationValue>http://localhost:8080</someConfigurationValue>
</myConfiguration>
However, in the deployed environment in which I need to test, I will need to use docker to set these configuration values; we use jboss.
Is there a way to directly set these JNDI values? I've been looking for examples for quite a while but cannot find any. This would be in the context of a yaml file which is used to configure a k8s cluster. Apologies for the pseudocode; I would post the real code but it's all proprietary so I can't.
What I have so far for the overrides.yaml snippet is of the form:
env:
  'MyConfig.SomeContextAttribute':
    class_name: 'com.mycompany.app.configuration.MyConfiguration'
    someConfigurationValue: 'http://localhost:8080'
However this is a complete guess.
You can achieve this by using a ConfigMap.
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
First, you need to create a ConfigMap from your file using the command below:
kubectl create configmap <map-name> <data-source>
Where <map-name> is the name you want to assign to the ConfigMap and <data-source> is the directory, file, or literal value to draw the data from. You can read more about it here.
Here is an example:
Download the sample file:
wget https://kubernetes.io/examples/configmap/game.properties
You can check what is inside this file using cat command:
cat game.properties
You will see that there are some variables in this file:
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
Create the ConfigMap from this file:
kubectl create configmap game-config --from-file=game.properties
You should see output that ConfigMap has been created:
configmap/game-config created
You can display details of the ConfigMap using command below:
kubectl describe configmaps game-config
You will see output as below:
Name:         game-config
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
game.properties:
----
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
You can also see how yaml of this ConfigMap will look using:
kubectl get configmaps game-config -o yaml
The output will be similar:
apiVersion: v1
data:
  game.properties: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
kind: ConfigMap
metadata:
  creationTimestamp: "2022-01-28T12:33:33Z"
  name: game-config
  namespace: default
  resourceVersion: "2692045"
  uid: 5eed4d9d-0d38-42af-bde2-5c7079a48518
The next goal is connecting the ConfigMap to the Pod. It can be added in the yaml file of the Pod configuration.
As you can see, under containers there is an envFrom section. The name is the name of the ConfigMap which I created in the previous step. You can read about envFrom here.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      envFrom:
      - configMapRef:
          name: game-config
Create a Pod from yaml file using:
kubectl apply -f <name-of-your-file>.yaml
The final step is checking the environment variables in this Pod using the command below:
kubectl exec -it test-pod -- env
As you can see below, the environment variables come from the file downloaded in the first step:
game.properties=enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
The way to do this is as follows:
If you are attempting to set a value that looks like this in terms of fully qualified name:
com.mycompany.app.configuration.MyConfiguration#someConfigurationValue
Then that will look like the following in a yaml file:
com_mycompany_app_configuration_MyConfiguration_someConfigurationValue: 'blahValue'
It really is that simple. It does need to be set as an environment variable in the yaml, but I'm not sure whether it needs to be under env: or if that's specific to us.
I don't think there's a way of setting something in YAML that in XML would be an attribute, however. I've tried figuring that part out, but I haven't been able to.
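For illustration, in a standard Kubernetes container spec that mapping would be a plain name/value entry in the env list (a sketch; the exact nesting depends on how your overrides.yaml is consumed by your charts):
env:
  - name: com_mycompany_app_configuration_MyConfiguration_someConfigurationValue
    value: 'http://localhost:8080'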

container labels in kubernetes

I am building my docker image with jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
  --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
  --build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME \
I was using Docker but I am migrating to k8s.
With docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes?
I am aware of adding those labels in .Metadata.labels of my yaml files, but I don't like it that much because:
- it links that information to the deployment and not the container itself
- it can be modified anytime
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
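One way to do the last part is to bake the build arguments you already pass into the image itself, so the application can report them at runtime; a hypothetical Dockerfile fragment (VCS_REF and BUILD_DATE are the same build args used in the question's docker build command):
# accept the values passed with --build-arg
ARG VCS_REF
ARG BUILD_DATE
# make them visible to the running process as environment variables
ENV VCS_REF=$VCS_REF \
    BUILD_DATE=$BUILD_DATE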
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
app.kubernetes.io/name - the name of the application
app.kubernetes.io/instance - a unique name identifying the instance of an application
app.kubernetes.io/version - the current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component - the component within the architecture
app.kubernetes.io/part-of - the name of a higher level application this one is part of
app.kubernetes.io/managed-by - the tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1
kind: Pod  # or set the same labels on a Deployment's pod template
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
And use the downward api (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running Container:
1 ) Environment variables.
2 ) Volume Files.
Below is an example using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args:        # < ------ We're using the mounted volumes inside the container
      - while true; do
          if [[ -e /etc/podinfo/labels ]]; then
            echo -en '\n\n'; cat /etc/podinfo/labels; fi;
          if [[ -e /etc/podinfo/annotations ]]; then
            echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
          sleep 5;
        done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:         # < -------- We're mounting the pod's labels and annotations
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
Besides labels and annotations, the downward API exposes multiple additional fields, such as:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A container's CPU limit, CPU request, memory limit, and memory request.
See full list in here.
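If you prefer environment variables (option 1 above) over volume files, the same kind of fields can be injected directly; a minimal sketch of a container's env section (POD_NAME and APP_VERSION are hypothetical names, and the version label key is taken from the pod above):
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: APP_VERSION
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['version']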
(*) A nice blog discussing the downward API.
(**) You can view all your pods labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or viewing only specific label with $ kubectl get pods -L <label_name>.

kubernetes cannot pull local image

I am using kubernetes on a single machine for testing. I have built a custom image from the nginx docker image, but when I try to use the image in kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
  myfrontend:
    Container ID:
    Image:           my/nginx:latest
    Image ID:
    QoS Tier:
      memory:        BestEffort
      cpu:           BestEffort
    State:           Waiting
      Reason:        ErrImagePull
    Ready:           False
    Restart Count:   0
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim-1
    ReadOnly:   false
  default-token-64w08:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or changing to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann did only post this above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to the local registry:
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your yaml file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
    - name: <name>
      image: localhost:5000/<local-image-name>
      imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
5 (optional). Allow connections to the insecure registry on the worker nodes (a plain sudo echo ... > redirection runs the redirect as your own user and can fail, so pipe through sudo tee instead, and restart the Docker daemon afterwards):
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
    - name: <name>
      image: <local-image-name>
      imagePullPolicy: Never
The easiest way to further analyse ErrImagePull problems is to ssh into the node and try to pull the image manually by doing docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case, your yaml file should have imagePullPolicy: Never, see below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
found this here
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install docker (I think), but you don't need to start it. Minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps you should see a bunch of minikube containers already running.
kvm utils are probably already on your machine, but they can be installed like this on centos/rhel:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually a "docker-desktop" (i.e. not a remote cluster).
(Right click on Docker icon, then select "Kubernetes" in menu)
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, do a suitable docker tag, and create the manifest.
Kubernetes doesn't directly pull from the registry. First it searches for the image in local storage and then in the docker registry.
Pull latest nginx image
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to the docker registry to pull the image; it will bring up the pod instantly.
If the image is not present on the local machine, it will try to pull it from the docker registry and fail with an ErrImagePull error.
Also, if you set imagePullPolicy: Never, it will never look at the registry for the image and will fail with an ErrImageNeverPull error if the image is not found locally.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
      - image: test:test8970
        name: test
        imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
ContainerD (and Windows)
I had the same error, while trying to run a custom windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl images. However, it didn't show up in crictl images.
Thanks to a comment on Github, I found out that the actual problem is a different namespace of ContainerD.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
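For completeness, the export step might look something like this (a sketch; the image name is a placeholder and the exact ctr flags may differ between containerd versions):
# export the image from the default namespace
ctr -n default image export exported-app-image.tar <your-image>:<tag>
# import it into the namespace the kubelet uses
ctr -n k8s.io image import exported-app-image.tar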
I was facing a similar issue: the image was present locally but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
I rebuilt the image, redeployed the deployment yaml, and it worked.

Kubernetes imagePullSecrets not working; getting "image not found"

I have an off-the-shelf Kubernetes cluster running on AWS, installed with the kube-up script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:
> kubectl get pod
NAME                     READY   STATUS                                        RESTARTS   AGE
maestro-kubetest-d37hr   0/1     Error: image csats/maestro:latest not found   0          22m
I've created a secret containing a .dockercfg file. I've confirmed it works by running the script posted here:
> kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
> docker pull csats/maestro
latest: Pulling from csats/maestro
I've confirmed I'm not using the new format of .dockercfg; mine looks like this:
> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"eng@csats.com"}}
I've tried running the Base64 encode on Debian instead of OS X, no luck there. (It produces the same string, as might be expected.)
Here's the YAML for my Replication Controller:
---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
name: "maestro-kubetest"
spec:
replicas: 1
selector:
app: "maestro"
ecosystem: "kubetest"
version: "1"
template:
metadata:
labels:
app: "maestro"
ecosystem: "kubetest"
version: "1"
spec:
imagePullSecrets:
- name: "docker-hub-csatsinternal"
containers:
- name: "maestro"
image: "csats/maestro"
imagePullPolicy: "Always"
restartPolicy: "Always"
dnsPolicy: "ClusterFirst"
kubectl version:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Any ideas?
Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container.
For example, if your Deployment yaml looks like
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: kube-system
Then you must make sure the Secret yaml uses a matching namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: kube-system
data:
  .dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson
If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue so I thought I'd share it here in the hope I can save somebody else the time.
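If you create the secret with kubectl rather than from a yaml file, the namespace can be given directly on the command line (a sketch, reusing the hypothetical names from the yaml above):
kubectl create secret docker-registry mysecret \
  --namespace kube-system \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL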
Docker generates a config.json file in ~/.docker/
It looks like:
{
  "auths": {
    "index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
what you actually want is:
{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "email#company.com"}}
note 3 things:
1) there is no auths wrapping
2) there is https:// in front of the
URL
3) it's one line
Then you base64 encode that and use it as the data for the .dockercfg key:
apiVersion: v1
kind: Secret
metadata:
  name: registry
data:
  .dockercfg: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
type: kubernetes.io/dockercfg
Note again the .dockercfg line is one line (base64 tends to generate a multi-line string)
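To produce that single-line base64 value from the one-line JSON, something like the following should work (a sketch; -w0 is GNU base64, while macOS base64 does not wrap by default and lacks that flag):
echo -n '{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "email@company.com"}}' | base64 -w0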
Another reason you might see this error is due to using a kubectl version different than the cluster version (e.g. using kubectl 1.9.x against a 1.8.x cluster).
The format of the secret generated by the kubectl create secret docker-registry command has changed between versions.
A 1.8.x cluster expects a secret with the format:
{
  "https://registry.gitlab.com": {
    "username": "...",
    "password": "...",
    "email": "...",
    "auth": "..."
  }
}
But the secret generated by the 1.9.x kubectl has this format:
{
  "auths": {
    "https://registry.gitlab.com": {
      "username": "...",
      "password": "...",
      "email": "...",
      "auth": "..."
    }
  }
}
So, double check the value of the .dockercfg data of your secret and verify that it matches the format expected by your kubernetes cluster version.
I've been experiencing the same problem. What I did notice is that in the example (https://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) .dockercfg has the following format:
{
  "https://index.docker.io/v1/": {
    "auth": "ZmFrZXBhc3N3b3JkMTIK",
    "email": "jdoe@example.com"
  }
}
While the one generated by docker in my machine looks something like this:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
By checking the source code, I found that there is actually a test for this use case (https://github.com/kubernetes/kubernetes/blob/6def707f9c8c6ead44d82ac8293f0115f0e47262/pkg/kubelet/dockertools/docker_test.go#L280).
I can confirm that if you drop the auths wrapping and encode just its contents, as in the example, it will work for you.
The documentation should probably be updated. I will raise a ticket on github.
