Gitlab CI/CD using Kubernetes agent and the private container registry - docker

I will preface this by saying that I know very little about how to manage Kubernetes clusters, so I am probably doing something dumb. Here goes:
I have created a Kubernetes cluster using GCP's Autopilot mode, and I (think I) registered the cluster to my Gitlab repository using the "Infrastructure->Kubernetes Clusters" menu (it shows as online).
Using Gitlab's CI/CD, I have a build stage which pushes an image to the repo's container registry (I can see that the image is indeed there).
I then have a deploy stage with the following script: (censored some data for privacy)
kubectl config use-context <my_username>/<my_reponame>:<my_reponame>
kubectl delete secret regcred --ignore-not-found
kubectl create secret docker-registry regcred --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD"
kubectl delete pod <my_reponame> --ignore-not-found
kubectl apply -f kube.yaml
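For reference, this is roughly how that script is wired into the pipeline - a sketch of the deploy job in .gitlab-ci.yml, assuming the bitnami/kubectl image seen in the job log below (the entrypoint override is my assumption, since that image uses kubectl as its entrypoint):
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config use-context <my_username>/<my_reponame>:<my_reponame>
    - kubectl delete secret regcred --ignore-not-found
    - kubectl create secret docker-registry regcred --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD"
    - kubectl delete pod <my_reponame> --ignore-not-found
    - kubectl apply -f kube.yaml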
My kube.yaml contains the following:
apiVersion: v1
kind: Pod
metadata:
  name: <my_reponame>
spec:
  containers:
  - name: <my_reponame>
    image: registry.gitlab.com/<my_username>/<my_reponame>:latest
  imagePullSecrets:
  - name: regcred
I also have a .gitlab/agents/<my_reponame>/config.yaml file with the following:
ci_access:
  projects:
  - id: <my_username>/<my_reponame>
This is the output of the deploy stage:
Running with gitlab-runner 14.8.0~beta.44.g57df0d52 (57df0d52)
on blue-2.shared.runners-manager.gitlab.com/default XxUrkriX
Preparing the "docker+machine" executor
00:12
Using Docker executor with image bitnami/kubectl:latest ...
Pulling docker image bitnami/kubectl:latest ...
Using docker image sha256:208d070e071a0165e48ad7bf20b30c054328bcaaad76b0c53a9270a5e8627480 for bitnami/kubectl:latest with digest bitnami/kubectl@sha256:51eb9cb7d811e74bba30f97700cb433424d4025aabe70e8ff80e1289a964ab9c ...
Preparing environment
00:01
Running on runner-xxurkrix-project-32365648-concurrent-0 via runner-xxurkrix-shared-1646113353-c926e727...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/<my_username>/<my_reponame>/.git/
Created fresh repository.
Checking out 23743f89 as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:13
Using docker image sha256:208d070e071a0165e48ad7bf20b30c054328bcaaad76b0c53a9270a5e8627480 for bitnami/kubectl:latest with digest bitnami/kubectl@sha256:51eb9cb7d811e74bba30f97700cb433424d4025aabe70e8ff80e1289a964ab9c ...
$ kubectl config use-context <my_username>/<my_reponame>:<my_reponame>
Switched to context "<my_username>/<my_reponame>:<my_reponame>".
$ kubectl delete secret regcred --ignore-not-found
secret "regcred" deleted
$ kubectl create secret docker-registry regcred --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD"
secret/regcred created
$ kubectl delete pod <my_reponame> --ignore-not-found
$ kubectl apply -f kube.yaml
Warning: Autopilot set default resource requests for Pod default/<my_reponame>, as resource requests were not specified. See http://g.co/gke/autopilot-defaults.
pod/<my_reponame> created
Cleaning up project directory and file based variables
00:00
Job succeeded
However, the deploy to the cluster did not actually work.
Here is some data from running commands in the gcloud shell:
<my_username>@cloudshell:~ (<my_reponame>-342620)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
<my_reponame> 0/1 ImagePullBackOff 0 13h
<my_reponame>-69775f4b-cft8w 0/1 ImagePullBackOff 0 13h
kubectl describe pod <my_reponame>
Name: <my_reponame>
Namespace: default
Priority: 0
Node: gk3-<my_reponame>-nap-73l1ao51-a8e3ee2d-wjd8/10.132.0.5
Start Time: Tue, 01 Mar 2022 19:11:18 +0000
Labels: <none>
Annotations: autopilot.gke.io/resource-adjustment:
{"input":{"containers":[{"name":"<my_reponame>"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requ...
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Pending
IP: 10.34.0.77
IPs:
IP: 10.34.0.77
Containers:
<my_reponame>:
Container ID:
Image: registry.gitlab.com/<my_username>/<my_reponame>:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wsd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-96wsd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s gke.io/optimize-utilization-scheduler Successfully assigned default/<my_reponame> to gk3-<my_reponame>-nap-73l1ao51-a8e3ee2d-wjd8
Warning Failed 14s kubelet Failed to pull image "registry.gitlab.com/<my_username>/<my_reponame>:latest": rpc error: code = Unknown desc = failed to pull and unpack image "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to copy: httpReaderSeeker: failed open: failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
Normal BackOff 14s kubelet Back-off pulling image "registry.gitlab.com/<my_username>/<my_reponame>:latest"
Warning Failed 14s kubelet Error: ImagePullBackOff
Normal Pulling 2s (x2 over 16s) kubelet Pulling image "registry.gitlab.com/<my_username>/<my_reponame>:latest"
Warning Failed 1s (x2 over 14s) kubelet Error: ErrImagePull
Warning Failed 1s kubelet Failed to pull image "registry.gitlab.com/<my_username>/<my_reponame>:latest": rpc error: code = Unknown desc = failed to pull and unpack image "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to resolve reference "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
kubectl get pod <my_reponame> -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    autopilot.gke.io/resource-adjustment: '{"input":{"containers":[{"name":"<my_reponame>"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"<my_reponame>"}]},"modified":true}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"<my_reponame>","namespace":"default"},"spec":{"containers":[{"image":"registry.gitlab.com/<my_username>/<my_reponame>:latest","name":"<my_reponame>"}],"imagePullSecrets":[{"name":"regcred"}]}}
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2022-03-01T19:11:18Z"
  name: <my_reponame>
  namespace: default
  resourceVersion: "1299911"
  uid: d8a636ee-1dfc-40e9-9ba4-26b5674d3259
spec:
  containers:
  - image: registry.gitlab.com/<my_username>/<my_reponame>:latest
    imagePullPolicy: Always
    name: <my_reponame>
    resources:
      limits:
        cpu: 500m
        ephemeral-storage: 1Gi
        memory: 2Gi
      requests:
        cpu: 500m
        ephemeral-storage: 1Gi
        memory: 2Gi
    securityContext:
      capabilities:
        drop:
        - NET_RAW
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-96wsd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: regcred
  nodeName: gk3-<my_reponame>-nap-73l1ao51-a8e3ee2d-wjd8
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: gke.io/optimize-utilization-scheduler
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-96wsd
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    message: 'containers with unready status: [<my_reponame>]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    message: 'containers with unready status: [<my_reponame>]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-03-01T19:11:18Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: registry.gitlab.com/<my_username>/<my_reponame>:latest
    imageID: ""
    lastState: {}
    name: <my_reponame>
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: 'rpc error: code = Unknown desc = failed to pull and unpack image
          "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to resolve reference
          "registry.gitlab.com/<my_username>/<my_reponame>:latest": failed to authorize: failed
          to fetch oauth token: unexpected status: 401 Unauthorized'
        reason: ErrImagePull
  hostIP: 10.132.0.5
  phase: Pending
  podIP: 10.34.0.77
  podIPs:
  - ip: 10.34.0.77
  qosClass: Guaranteed
  startTime: "2022-03-01T19:11:18Z"
From the last command's output, it looks like the pod was not able to pull from the container registry - I assume I passed the credentials incorrectly, but I could not find any example or documentation on how to do this properly.
I'm of course willing to give any info if necessary.
Thanks in advance :)

Try https://chris-vermeulen.com/using-gitlab-registry-with-kubernetes/ - your cluster cannot pull from the registry because it has no access to it.
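A minimal sketch of what that usually looks like, assuming you create a GitLab deploy token with the read_registry scope (Settings > Repository > Deploy tokens); <deploy_token_user> and <deploy_token> are placeholders for the values GitLab generates. One common explanation for the 401 is that $CI_REGISTRY_PASSWORD is a job-scoped token that stops working once the CI job finishes, whereas a deploy token keeps working when the kubelet retries the pull later:
kubectl delete secret regcred --ignore-not-found
kubectl create secret docker-registry regcred \
  --docker-server=registry.gitlab.com \
  --docker-username=<deploy_token_user> \
  --docker-password=<deploy_token>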

Related

MongoDb ImagePullBackOff error in Kubernetes despite trying every SOF solution

I'm using minikube on a Fedora based machine to run a simple mongo-db deployment on my local machine but I'm constantly getting ImagePullBackOff error. Here is the yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I tried to pull the image locally using docker pull mongo, minikube image pull mongo & minikube image pull mongo-express several times, while restarting docker and minikube several times.
Logging into dockerhub (both in the browser and through the terminal) didn't work.
I also tried to log into docker using the docker login command, then modified my /etc/resolv.conf by adding nameserver 8.8.8.8 and restarted docker using sudo systemctl restart docker, but even that failed to work.
On running the kubectl describe pod command I get this output:
Name: mongodb-deployment-6bf8f4c466-85b2h
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 29 Aug 2022 23:04:12 +0530
Labels: app=mongodb
pod-template-hash=6bf8f4c466
Annotations: <none>
Status: Pending
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'>
Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'>
Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-vlcxl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube
Warning Failed 18m (x2 over 20m) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull
Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image "mongo:latest"
Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff
Normal Pulling 17m (x3 over 22m) kubelet Pulling image "mongo:latest"
Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image "mongo:latest"
Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull
Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image "mongo:latest"
Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff
PS: Ignore any spacing errors.
I think your internet connection is slow. The default timeout to pull an image is 120 seconds, so the kubelet could not pull the image in under 120 seconds.
First, pull the image via Docker:
docker image pull mongo
Then load the downloaded image into minikube:
minikube image load mongo
Now everything should work, because the cluster will use the image that is stored locally.
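One extra thing worth checking (this is my assumption about the defaults, not something from the question): an image reference like mongo with no tag defaults to imagePullPolicy: Always, so the kubelet may still try to reach Docker Hub even after the image is loaded into minikube. Pinning the policy in the container spec avoids that:
      containers:
      - name: mongodb
        image: mongo
        imagePullPolicy: IfNotPresent   # use the image already loaded into minikube instead of pulling again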

Rails in Kubernetes not picking up environment variables provided by configmap

I have a simple .env file with content like this
APP_PORT=5000
I add the values of that file with Kustomize. When I apply my Rails app, it crashes because it cannot find the environment vars:
I also tried to place a puts ENV['APP_PORT'] in application.rb but that one is nil
Rails Version & environment: 6.1.4.1 - development
! Unable to load application: KeyError: key not found: "APP_PORT"
Did you mean? "APP_HOME"
bundler: failed to load command: puma (/app/vendor/bundle/ruby/2.7.0/bin/puma)
KeyError: key not found: "APP_PORT"
Did you mean? "APP_HOME"
/app/config/environments/development.rb:2:in `fetch'
/app/config/environments/dev
When I change my image to image: nginx, the env vars are still not there:
env
KUBERNETES_SERVICE_PORT_HTTPS=443
ELASTICSEARCH_PORT_9200_TCP_PORT=9200
KUBERNETES_SERVICE_PORT=443
ELASTICSEARCH_PORT_9200_TCP_ADDR=10.103.1.6
ELASTICSEARCH_SERVICE_HOST=10.103.1.6
HOSTNAME=myapp-backend-api-56b44c7445-h9g5m
ELASTICSEARCH_PORT=tcp://10.103.1.6:9200
PWD=/
ELASTICSEARCH_PORT_9200_TCP=tcp://10.103.1.6:9200
PKG_RELEASE=1~buster
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
ELASTICSEARCH_SERVICE_PORT_9200=9200
NJS_VERSION=0.6.2
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
ELASTICSEARCH_SERVICE_PORT=9200
ELASTICSEARCH_PORT_9200_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.21.3
_=/usr/bin/env
This is my current state:
kustomization.yml
kind: Kustomization
configMapGenerator:
- name: backend-api-configmap
  files:
  - .env
bases:
- ../../base
patchesStrategicMerge:
- api-deployment.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Never # the image is assumed to exist locally; no attempt is made to pull it
        envFrom:
        - configMapRef:
            name: backend-api-configma
This is the describe pod output:
❯ k describe pod xxx-backend-api-56774c796d-s2zkd
Name: xxx-backend-api-56774c796d-s2zkd
Namespace: default
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/xxx-backend-api-56774c796d
Containers:
xxx-backend-api:
Container ID: docker://5ee3112b0805271ebe4b32d7d8e5d1b267d8bf4e220f990c085638f7b975c41f
Image: xxx-backend-api:latest
Image ID: docker://sha256:55d96a68267d80f19e91aa0b4d1ffb11525e9ede054fcbb6e6ec74356c6a3c7d
Port: 5000/TCP
Ready: False
Restart Count: 3
Environment Variables from:
backend-api-configmap-99fbkbc4c9 ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w5czc (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-w5czc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55s default-scheduler Successfully assigned default/xxx-backend-api-56774c796d-s2zkd to minikube
Normal Pulled 8s (x4 over 54s) kubelet Container image "xxx-backend-api:latest" already present on machine
Normal Created 8s (x4 over 54s) kubelet Created container xxx-backend-api
Normal Started 8s (x4 over 54s) kubelet Started container xxx-backend-api
Warning BackOff 4s (x4 over 48s) kubelet Back-off restarting failed container
And this is the describe configmap output:
❯ k describe configmaps backend-api-configmap-99fbkbc4c9
Name: backend-api-configmap-99fbkbc4c9
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
.env:
----
RAILS_MAX_THREADS=5
APPLICATION_URL=localhost:8000/backend
FRONTEND_URL=localhost:8000
APP_PORT=3000
BinaryData
====
Events: <none>
I got it:
Instead of using files: in my configMapGenerator I had to use envs: like so:
configMapGenerator:
- name: backend-api-configmap
  envs:
  - .env
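The difference, roughly sketched (generated ConfigMap names get a content-hash suffix): with files: the generated ConfigMap holds the whole file under a single .env key, which envFrom cannot map to individual variables, while envs: turns each line of the file into its own key:
# generated with "files:" - one key containing the whole file
data:
  .env: |
    APP_PORT=5000
# generated with "envs:" - one key per line, which is what envFrom expects
data:
  APP_PORT: "5000"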

k3s pods are not mounting secrets defined in helm deployment's imagePullSecrets

I am creating a deployment in CircleCI that deploys my containerized application to a k3s server I have set up. I have set up a secret using the commands found here.
The secret is created using the command:
kubectl create secret docker-registry regkeyname --docker-server=https://index.docker.io/v1/ \
  --docker-username=username \
  --docker-password=password \
  --docker-email=my@email.com \
  --namespace=external
My secret is as follows when running kubectl get secret regkeyname --namespace=external --output=yaml:
apiVersion: v1
data:
  .dockerconfigjson: secretbase64stuff
kind: Secret
metadata:
  creationTimestamp: "2020-11-24T13:11:07Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:.dockerconfigjson: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-11-24T13:11:07Z"
  name: regkeyname
  namespace: external
  resourceVersion: "16929381"
  selfLink: /api/v1/namespaces/external/secrets/regkeyname
  uid: 51b87508-9cf2-490b-b871-0b5a342ab64c
type: kubernetes.io/dockerconfigjson
I'm using helm to deploy my application and the Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.labels.app }}
  labels:
    app: {{ .Values.labels.app }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.labels.app }}
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: {{ .Values.labels.app }}
        env: {{ .Values.labels.env }}
    spec:
      imagePullSecrets:
      - name: regkeyname
      containers:
      - name: my-service
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.imagePullPolicy }}
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 5
          successThreshold: 1
After deploying, however, the images fail to pull and it appears that my secret "regkeyname" is not used/mounted in the pods. The result is as follows:
Name: my-service-856454c6cd-qcp7w
Namespace: external
Priority: 0
Node: worker-2/192.168.1.13
Start Time: Tue, 24 Nov 2020 07:20:08 -0600
Labels: app=my-service
env=development
pod-template-hash=856454c6cd
Annotations: <none>
Status: Pending
IP: 10.42.2.196
Controlled By: ReplicaSet/my-service-856454c6cd
Containers:
auth-service:
Container ID:
Image: my-repo/my-service:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Readiness: http-get http://:8080/health delay=10s timeout=1s period=10s #success=1 #failure=5
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-l9b4k (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-l9b4k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-l9b4k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned external/auth-service-856454c6cd-qcp7w to worker-2
Normal Pulling 32m (x4 over 34m) kubelet, worker-2 Pulling image "my-repo/my-service:latest"
Warning Failed 32m (x4 over 34m) kubelet, worker-2 Failed to pull image "my-repo/my-service:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/my-repo/my-service:latest": failed to resolve reference "docker.io/my-repo/my-service:latest": failed to do request: Head https://registry-1.docker.io/v2/my-repo/my-service/manifests/latest: dial tcp: lookup registry-1.docker.io: Try again
Warning Failed 32m (x4 over 34m) kubelet, worker-2 Error: ErrImagePull
Warning Failed 31m (x6 over 34m) kubelet, worker-2 Error: ImagePullBackOff
Normal BackOff 3m54s (x127 over 34m) kubelet, worker-2 Back-off pulling image "my-repo/my-service:latest"
I had this working when running locally with kubernetes so I am assuming the issue must have something to do either with k3s or the fact that now the server is remote rather than local. Any insight would be greatly appreciated. Thanks in advance!
The controller is trying to pull the image from the official Docker registry:
failed to resolve reference "docker.io/my-repo/my-service:latest"
While creating the imagePullSecret, make sure that you put the correct URL (i.e. the URL of your private registry) for performing authentication and pulling the image.
$ cat ~/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": { # <------ change here
      "auth": "..........="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.5 (linux)"
  }
}
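If the image actually lives in a private registry rather than Docker Hub, that means recreating the pull secret with your own registry as the server - a sketch with placeholder values (registry.example.com stands in for your registry):
kubectl create secret docker-registry regkeyname \
  --docker-server=registry.example.com \
  --docker-username=username \
  --docker-password=password \
  --docker-email=my@email.com \
  --namespace=external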

How to make kubernetes docker secret work?

Data showing "xxx" has been masked.
Problem description:
Success scenario: when I make my image public in the docker registry, my pod is created successfully.
Failure scenario: when I make my image private in the docker registry, my image pull fails on the kubernetes cluster.
Please see the details below and help.
I have my image published to the docker registry.
Following is my kubernetes secret:
c:\xxxxxxx\temp>kubectl get secret regcredx -o yaml
apiVersion: v1
data:
  .dockerconfigjson: xxxxxx
kind: Secret
metadata:
  creationTimestamp: 2018-10-25T21:38:18Z
  name: regcredx
  namespace: default
  resourceVersion: "1174545"
  selfLink: /api/v1/namespaces/default/secrets/regcredx
  uid: 49a71ba5-d89e-11e8-8bd2-005056b7126c
type: kubernetes.io/dockerconfigjson
Here is my pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: whatever
spec:
  containers:
  - name: whatever
    image: xxxxxxxxx/xxxxxx:123
    imagePullPolicy: Always
    command: [ "sh", "-c", "tail -f /dev/null" ]
  imagePullSecrets:
  - name: regcredx
Here is my pod config in cluster:
c:\Sharief\temp>kubectl get pod whatever -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 100.96.1.81/32
  creationTimestamp: 2018-10-26T20:49:11Z
  name: whatever
  namespace: default
  resourceVersion: "1302024"
  selfLink: /api/v1/namespaces/default/pods/whatever
  uid: 9783b81f-d960-11e8-94ca-005056b7126c
spec:
  containers:
  - command:
    - sh
    - -c
    - tail -f /dev/null
    image: xxxxxxxxx/xxxxxxx:123
    imagePullPolicy: Always
    name: whatever
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-4db4c
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: regcredx
  nodeName: xxxx-pvt
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-4db4c
    secret:
      defaultMode: 420
      secretName: default-token-4db4c
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-10-26T20:49:33Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-10-26T20:49:33Z
    message: 'containers with unready status: [whatever]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-10-26T20:49:11Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: xxxxxxxxx/xxxxxxx:123
    imageID: ""
    lastState: {}
    name: whatever
    ready: false
    restartCount: 0
    state:
      waiting:
        message: Back-off pulling image "xxxxxxxxx/xxxxxxx:123"
        reason: ImagePullBackOff
  hostIP: xx.xxx.xx.xx
  phase: Pending
  podIP: xx.xx.xx.xx
  qosClass: BestEffort
  startTime: 2018-10-26T20:49:33Z
Here is my pod description:
c:\xxxxxxx\temp>kubectl describe pod whatever
Name: whatever
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: co2-vmkubwrk01company-pvt/xx.xx.xx.xx
Start Time: Fri, 26 Oct 2018 15:49:33 -0500
Labels: <none>
Annotations: cni.projectcalico.org/podIP=xxx.xx.xx.xx/xx
Status: Pending
IP: xxx.xx.x.xx
Containers:
whatever:
Container ID:
Image: xxxxxxxxx/xxxxxxx:123
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
tail -f /dev/null
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4db4c (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-4db4c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4db4c
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned whatever to xxx
Normal SuccessfulMountVolume 26m kubelet, co2-vmkubwrk01company-pvt MountVolume.SetUp succeeded for volume "default-token-4db4c"
Normal Pulling 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt pulling image "xxxxxxxxx/xxxxxxx:123"
Warning Failed 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt Failed to pull image "xxxxxxxxx/xxxxxxx:123": rpc error: code = Unknown desc = repository docker.io/xxxxxxxxx/xxxxxxx not found: does not exist or no pull access
Warning Failed 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt Error: ErrImagePull
Normal BackOff 16m (x41 over 26m) kubelet, co2-vmkubwrk01company-pvt Back-off pulling image "xxxxxxxxx/xxxxxxx:123"
Warning Failed 1m (x106 over 26m) kubelet, co2-vmkubwrk01company-pvt Error: ImagePullBackOff
Kubernetes could not find your repository; the image path is wrong, so you need to fix this:
image: xxxxxxxxx/xxxxxx:123
One thing you can try, to test that assumption, is to pre-pull the image on the node on which the deployment is going to happen. Run docker images, note the correct uri/repo:tag, and update it in your deployment.
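A quick way to test that assumption from the node that will run the pod (the masked image name is just a placeholder):
docker login                        # same credentials as in the regcredx secret
docker pull xxxxxxxxx/xxxxxxx:123   # confirm the repository and tag actually exist and are pullable
docker images                       # note the exact repo:tag and use that in the pod spec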

Kubernetes doesn't pull from private Docker Registry

I've deployed a private registry and can pull from it with docker pull x.x.x/name. The thing is that I can't make Kubernetes pull from that repository. I think I've followed all the answers on other topics, but they don't seem to do the trick.
.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
  - name: uses-private-image
    image: x.x.x/nginx_1
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
  - name: registrypullsecret
kubectl get pods:
NAME READY STATUS RESTARTS AGE
private-image-test-1 0/1 Image: x.x.x/nginx_1 is ready, container is creating 0 4m
kubectl describe pods private-image-test-1
Name: private-image-test-1
Namespace: default
Node: 37.72.163.69/37.72.163.69
Start Time: Fri, 06 May 2016 08:04:45 +0000
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
uses-private-image:
Container ID:
Image: x.x.x/nginx_1
Image ID:
Port:
Command:
echo
SUCCESS
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: Image: x.x.x/nginx_1 is ready, container is creating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-zrn4n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zrn4n
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {scheduler } scheduled Successfully assigned private-image-test-1 to 37.72.163.69
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD pulled Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD failed Failed to create docker container with error: no such image
4m 8s 30 {kubelet 37.72.163.69} failedSync Error syncing pod, skipping: no such image
Any help is welcome at this point, thanks!
In most cases where I've come across this issue, the credential secret is incorrect. The proper format should be along the lines of:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: {BASE64 encoding of your config}
type: kubernetes.io/dockerconfigjson
From memory, the type field has changed in recent versions of k8s, so definitely check that you have the correct type listed.
Also, your yaml example has bad indenting, but that's likely an SO editor issue.
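If you'd rather not assemble the base64 payload by hand, the same Secret can be generated with kubectl - a sketch with placeholder registry and credentials:
kubectl create secret docker-registry registrypullsecret \
  --docker-server=x.x.x \
  --docker-username=<user> \
  --docker-password=<password> \
  --dry-run=client -o yaml    # prints the manifest so you can check the type and data fields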
