Why is Kubernetes not attaching my secret to my pod? - docker

I already created my secret as recommended by the Kubernetes documentation and followed the tutorial, but the pod doesn't have my secret attached.
As you can see, I created the secret and described it.
After that, I created my pod.
$ kubectl get secret my-secret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
{"auths":{"my-private-repo.com":{"username":"<username>","password":"<password>","email":"<email>","auth":"<randomAuth>="}}}
$ kubectl create -f my-pod.yaml
pod "my-pod" created
$ kubectl describe pods trunfo
Name: my-pod
Namespace: default
Node: gke-trunfo-default-pool-07eea2fb-3bh9/10.233.224.3
Start Time: Fri, 28 Sep 2018 16:41:59 -0300
Labels: <none>
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container container-trunfo
Status: Pending
IP: 10.10.1.37
Containers:
container-trunfo:
Container ID:
Image: <my-image>
Image ID:
Port: 9898/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hz4mf (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-hz4mf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hz4mf
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned trunfo to gke-trunfo-default-pool-07eea2fb-3bh9
Normal SuccessfulMountVolume 4s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 MountVolume.SetUp succeeded for volume "default-token-hz4mf"
Normal Pulling 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 pulling image "my-private-repo.com/my-image:latest"
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Failed to pull image "my-private-repo.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-private-repo.com/v1/_ping: dial tcp: lookup my-private-repo.com on 169.254.169.254:53: no such host
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ErrImagePull
Normal BackOff 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Back-off pulling image "my-private-repo.com/my-image:latest"
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ImagePullBackOff
What can I do to fix it?
EDIT
This is my pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-private-repo/images/<my-image>
    ports:
    - containerPort: 9898
  imagePullSecrets:
  - name: my-secret
As we can see, the secret is defined as expected, but not attached correctly.

You have not gotten as far as secrets yet. Your logs say:
Failed to pull image "my-private-repo.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-private-repo.com/v1/_ping: dial tcp: lookup my-private-repo.com on 169.254.169.254:53: no such host
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ErrImagePull
This means that your pod cannot even start, because the image is not available: the node cannot resolve the registry hostname my-private-repo.com at all. Fix that first, and if you still have a problem with secrets once you observe the pod in the "Ready" state, post your YAML definition.
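The error is a DNS failure: the GKE node's resolver (169.254.169.254, the GCP metadata server) cannot resolve my-private-repo.com. A quick way to confirm, as a rough sketch (my-private-repo.com stands in for your real registry host, and the node name is the one from the events above):

# On the node itself (image pulls use the node's resolver, not cluster DNS):
gcloud compute ssh gke-trunfo-default-pool-07eea2fb-3bh9
nslookup my-private-repo.com              # expect an A record, not NXDOMAIN
curl -v https://my-private-repo.com/v2/   # expect to reach the registry API

If the hostname only resolves on your corporate or VPN network, the nodes will need a DNS entry (or a registry reachable from GKE) before imagePullSecrets even come into play.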

Related

MongoDb ImagePullBackOff error in Kubernetes despite trying every SOF solution

I'm using minikube on a Fedora-based machine to run a simple MongoDB deployment on my local machine, but I'm constantly getting an ImagePullBackOff error. Here is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I tried to pull the image locally using docker pull mongo, minikube image pull mongo and minikube image pull mongo-express several times, restarting Docker and minikube several times.
Logging in to Docker Hub (both in the browser and through the terminal) didn't work.
I also tried logging in to Docker with the docker login command, then modified my /etc/resolv.conf to add nameserver 8.8.8.8 and restarted Docker using sudo systemctl restart docker, but even that failed to work.
On running kubectl describe pod command I get this output:
Name: mongodb-deployment-6bf8f4c466-85b2h
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 29 Aug 2022 23:04:12 +0530
Labels: app=mongodb
pod-template-hash=6bf8f4c466
Annotations: <none>
Status: Pending
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'>
Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'>
Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-vlcxl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube
Warning Failed 18m (x2 over 20m) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull
Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image "mongo:latest"
Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff
Normal Pulling 17m (x3 over 22m) kubelet Pulling image "mongo:latest"
Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image "mongo:latest"
Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull
Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image "mongo:latest"
Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff
PS: Ignore any spacing errors.
I think your internet connection is slow. The image pull timeout is 120 seconds, so the kubelet could not pull the image in time.
First, pull the image via Docker
docker image pull mongo
Then load the downloaded image to minikube
minikube image load mongo
And then everything will work, because the kubelet will now use the image that is stored locally.
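One caveat worth adding as a hedged note: with image: mongo (no tag), Kubernetes defaults imagePullPolicy to Always, so the kubelet may still try to contact Docker Hub even after the image is loaded. You can verify the load and pin the policy roughly like this:

# Confirm the image landed in minikube's container runtime
minikube image ls | grep mongo

# In the deployment's container spec, prefer the local copy:
      containers:
      - name: mongodb
        image: mongo
        imagePullPolicy: IfNotPresent   # skip the pull when the image exists locally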

How to resolve pods failing due to exceeding pull rate limit from docker?

kubectl get all -n migration:
NAME READY STATUS RESTARTS AGE
pod/nginx2-7b8667968c-zxtq7 0/1 ImagePullBackOff 0 5m38s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx2 0/1 1 0 5m38s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx2-7b8667968c 1 1 0 5m38s
kubectl describe pod nginx2-7b8667968c-zxtq7 -n migration:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned migration/nginx2-7b8667968c-zxtq7 to k8s-master01
Normal SandboxChanged 33s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 18s (x2 over 43s) kubelet Pulling image "nginx"
Warning Failed 14s (x2 over 34s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 14s (x2 over 34s) kubelet Error: ErrImagePull
Normal BackOff 3s (x4 over 32s) kubelet Back-off pulling image "nginx"
Warning Failed 3s (x4 over 32s) kubelet Error: ImagePullBackOff
After logging in with a different Docker account:
I can manually pull an image from Docker Hub using docker pull nginx.
But when deploying the Deployment, it shows the same error again.
Deployment yaml is as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
  namespace: migration
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx2
  template:
    metadata:
      labels:
        name: nginx2
    spec:
      containers:
      - name: nginx2
        imagePullPolicy: Always
        image: nginx
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: game-demo
          mountPath: /usr/src/app/config
        - name: secret-basic-auth
          mountPath: /usr/src/app/secret
        - name: site-data2
          mountPath: /var/www/html
      volumes:
      - name: game-demo
        configMap:
          name: game-demo
      - name: secret-basic-auth
        secret:
          secretName: secret-basic-auth
      - name: site-data2
        persistentVolumeClaim:
          claimName: demo-pvc-claim2
Also, since the nginx image is present locally, I tried modifying imagePullPolicy to Never as well as IfNotPresent.
But nothing works. Please guide.
Here is your error:
Warning Failed 14s (x2 over 34s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Basically, Docker Hub now enforces pull rate limits; once you exceed them, you have to authenticate (or pay for a higher tier) to keep pulling.
You can use the publicly hosted image of "nginx" from Amazon's ECR Public Gallery instead:
docker pull public.ecr.aws/nginx/nginx:latest
That should be the same image as the one you are using; just double-check here:
https://gallery.ecr.aws/nginx/nginx
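Alternatively, if you want to keep pulling from Docker Hub, authenticated pulls get a higher rate limit. A minimal sketch, assuming a Docker Hub account (replace the placeholder credentials):

kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email> \
  --namespace=migration

Then reference it in the pod template of your Deployment:

    spec:
      imagePullSecrets:
      - name: dockerhub-cred
      containers:
      - name: nginx2
        image: nginx

Note that the kubelet does not necessarily reuse your interactive docker login session; imagePullSecrets is the reliable way to authenticate cluster pulls, which is why your manual pull worked while the Deployment kept failing.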

Harbor Registry: Failed to pull image in Kubernetes

I set up a Harbor registry which worked successfully for a couple of weeks. For each deployment and namespace I have a secret with the credentials from my ~/.docker/config.json file to get access to the registry. Since last weekend I have not been able to pull images from that registry anymore, and I didn't change anything! The cluster is running on GKE v1.12.5, btw.
What works?
I can pull and push images from my local machine with Docker.
What does not work?
My Kubernetes cluster cannot pull images anymore and runs into a timeout.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned k8s-test7/nginx-k8s-test7-6f7b8fdd79-2ffmp to gke-k8s-cloudops-test-default-pool-72fccd21-hrhk
Normal SandboxChanged 12m kubelet, gke-k8s-cloudops-test-default-pool-72fccd21-hrhk Pod sandbox changed, it will be killed and re-created.
Warning Failed 11m (x3 over 12m) kubelet, gke-k8s-cloudops-test-default-pool-72fccd21-hrhk Failed to pull image "core.k8s-harbor-test.my-domain.com/nginx-test/nginx:1.15.10": rpc error: code = Unknown desc = Error response from daemon: Get https://core.k8s-harbor-test.my-domain.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 11m (x3 over 12m) kubelet, gke-k8s-cloudops-test-default-pool-72fccd21-hrhk Error: ErrImagePull
Normal BackOff 11m (x7 over 12m) kubelet, gke-k8s-cloudops-test-default-pool-72fccd21-hrhk Back-off pulling image "core.k8s-harbor-test.my-domain.com/nginx-test/nginx:1.15.10"
Normal Pulling 10m (x4 over 13m) kubelet, gke-k8s-cloudops-test-default-pool-72fccd21-hrhk pulling image "core.k8s-harbor-test.my-domain.com/nginx-test/nginx:1.15.10"
Warning Failed 3m2s (x38 over 12m) kubelet, gke-k8s-cloudops-test-default-pool-72fccd21-hrhk Error: ImagePullBackOff
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-k8s-test7
  namespace: k8s-test7
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-k8s-test7
    spec:
      containers:
      - name: nginx-k8s-test7
        image: core.k8s-harbor-test.my-domain.com/nginx-test/nginx:1.15.10
        volumeMounts:
        - name: webcontent
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: webcontent
        configMap:
          name: webcontent
      imagePullSecrets:
      - name: harborcred
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: webcontent
  namespace: k8s-test7
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
The secret "harborcred" is part of every namespace so that the deployment can access it. The secret was created per kubernetes documentation:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret generic harborcred \
  --from-file=.dockerconfigjson=~/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  --namespace=k8s-test7
Any help would be appreciated!
Hi, at first glance, could you please:
Change the image source to a public one, e.g. nginx, to verify your deployment doesn't have other issues.
The guide at https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ also provides more details about inspecting secrets.
Please also perform additional connectivity tests directly from your node, as described in the post How to debug "ImagePullBackOff"?
Additional steps to find the root cause:
1. Decode your secret's data:
kubectl get secret harborcred -n k8s-test7 \
  --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
2. Compare the decoded "auth" field from step 1 with your docker credentials using:
echo "your auth data" | base64 --decode
3. To find the root cause, please also use:
kubectl get events -n k8s-test7 | grep pull
Please share your logs.
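For the node-level connectivity test, a rough sketch (the URL comes from the kubelet error in the events above; adjust to your registry):

# From the GKE node (gcloud compute ssh <node-name>):
curl -v https://core.k8s-harbor-test.my-domain.com/v2/

# A hang or timeout here reproduces the kubelet's "Client.Timeout exceeded"
# error and points at networking, firewall rules, or TLS rather than the secret.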

Why Kubernetes pods sometimes got CrashLoopBackOff?

Using a Kubernetes cluster: 3 hosts (1 master and 2 nodes).
Kubernetes version: 1.7
Deployed a Rails app to the Kubernetes cluster.
Here is the deployment.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
  - port: 80
  selector:
    app: server
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: server
        tier: backend
    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always
Deploy it:
$ kubectl create -f deployment.yaml
Then check the pods status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-962161505-kw3jf 0/1 CrashLoopBackOff 6 9m
server-962161505-lxcfb 0/1 CrashLoopBackOff 6 9m
server-962161505-mbnkn 0/1 CrashLoopBackOff 6 9m
At the beginning its status was Completed, but it soon went to CrashLoopBackOff. Is there anything wrong in the config YAML file?
(By the way, I don't want to run an entrypoint.sh script here; I use a job.yaml file that calls the Kubernetes Job kind to do that instead.)
Edit
Result of kubectl describe pod server-962161505-kw3jf:
Name: server-962161505-kw3jf
Namespace: default
Node: node1/192.168.33.11
Start Time: Mon, 13 Nov 2017 17:45:47 +0900
Labels: app=server
pod-template-hash=962161505
tier=backend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"server-962161505","uid":"0acadda6-c84f-11e7-84b8-02178ad2db9a","...
Status: Running
IP: 10.42.254.104
Created By: ReplicaSet/server-962161505
Controlled By: ReplicaSet/server-962161505
Containers:
server:
Container ID: docker://29eca3d9a20c60c83314101b036d742c5868c3bf25a39f28c5e4208bcdbfcede
Image: 192.168.33.13/myapp/server
Image ID: docker-pullable://192.168.33.13/myapp/server@sha256:0e056e3ff5b1f1084e0946bc4211d33c6f48bc06dba7e07340c1609bbd5513d6
Port: 3000/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 14 Nov 2017 10:13:12 +0900
Finished: Tue, 14 Nov 2017 10:13:13 +0900
Ready: False
Restart Count: 26
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-csjqn (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-csjqn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-csjqn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 22m kubelet, node1 MountVolume.SetUp succeeded for volume "default-token-csjqn"
Normal SandboxChanged 22m kubelet, node1 Pod sandbox changed, it will be killed and re-created.
Warning Failed 20m (x3 over 21m) kubelet, node1 Failed to pull image "192.168.33.13/myapp/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get http://192.168.33.13/v2/: dial tcp 192.168.33.13:80: getsockopt: connection refused"}
Normal BackOff 20m (x5 over 21m) kubelet, node1 Back-off pulling image "192.168.33.13/myapp/server"
Normal Pulling 4m (x7 over 21m) kubelet, node1 pulling image "192.168.33.13/myapp/server"
Normal Pulled 4m (x4 over 20m) kubelet, node1 Successfully pulled image "192.168.33.13/myapp/server"
Normal Created 4m (x4 over 20m) kubelet, node1 Created container
Normal Started 4m (x4 over 20m) kubelet, node1 Started container
Warning FailedSync 10s (x99 over 21m) kubelet, node1 Error syncing pod
Warning BackOff 10s (x91 over 20m) kubelet, node1 Back-off restarting failed container
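A hedged reading of the describe output: the pull eventually succeeds (Successfully pulled image), and the last state is Terminated with reason Completed and exit code 0, so the container's main process is exiting normally right after starting; the Rails server is apparently not staying in the foreground. Checking what the previous container instance printed is the quickest next step:

# Logs of the previous (exited) container in the crash-looping pod
kubectl logs server-962161505-kw3jf --previous

# Watch the restart cycle live
kubectl get pods -w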

Kubernetes doesn't pull from private Docker Registry

I've deployed a private registry and can pull from it with docker pull x.x.x/name. The thing is that I can't make Kubernetes pull from that repository. I think I've followed all the answers on other topics, but they don't seem to do the trick.
.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
  - name: uses-private-image
    image: x.x.x/nginx_1
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
  - name: registrypullsecret
kubectl get pods:
NAME READY STATUS RESTARTS AGE
private-image-test-1 0/1 Image: x.x.x/nginx_1 is ready, container is creating 0 4m
kubectl describe pods private-image-test-1
Name: private-image-test-1
Namespace: default
Node: 37.72.163.69/37.72.163.69
Start Time: Fri, 06 May 2016 08:04:45 +0000
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
uses-private-image:
Container ID:
Image: x.x.x/nginx_1
Image ID:
Port:
Command:
echo
SUCCESS
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: Image: x.x.x/nginx_1 is ready, container is creating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-zrn4n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zrn4n
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {scheduler } scheduled Successfully assigned private-image-test-1 to 37.72.163.69
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD pulled Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD failed Failed to create docker container with error: no such image
4m 8s 30 {kubelet 37.72.163.69} failedSync Error syncing pod, skipping: no such image
Any help is welcome at this point, thanks!
In most cases where I've come across this issue, it is almost always the credential secret being incorrect. The proper format should be along the lines of:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: {BASE64 encoding of your config}
type: kubernetes.io/dockerconfigjson
From memory, the type field has changed in recent versions of k8s so definitely check that you have the correct type listed.
Also, your YAML example has bad indenting, but that's likely an SO editor issue.
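Rather than hand-building the base64 payload, you can also let kubectl generate a correctly typed secret. A minimal sketch with placeholder values (substitute your registry host and account):

kubectl create secret docker-registry registrypullsecret \
  --docker-server=x.x.x \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>

# Verify the type and payload afterwards:
kubectl get secret registrypullsecret --output=yaml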
