I've been trying to create a custom Helm chart, but I get ErrImagePull no matter what image I add to my chart. I can re-create this really easily:
helm create my-chart
(using default nginx docker image):
helm install my-chart .
NAME: my-chart
LAST DEPLOYED: Fri Jan 17 12:26:13 2020
NAMESPACE: example
STATUS: deployed
REVISION: 1
NOTES:
Change values.yaml for a different image (nginx -> ubuntu):
image:
  repository: ubuntu
  pullPolicy: IfNotPresent
Update the Helm deployment:
helm upgrade my-chart .
Release "my-chart" has been upgraded. Happy Helming!
NAME: my-chart
LAST DEPLOYED: Fri Jan 17 12:30:13 2020
NAMESPACE: example
STATUS: deployed
REVISION: 2
NOTES:
Pod status:
kubectl get pods
NAME READY STATUS RESTARTS AGE
my-chart-54fb9969dd-gnpt9 0/1 ImagePullBackOff 0 32s
my-chart-56485d7b7-hc25q 1/1 Running 0 4m32s
Describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned example/my-chart-54fb9969dd-gnpt9 to aw
Normal Pulling 15s (x3 over 62s) kubelet, aw Pulling image "ubuntu:1.16.0"
Warning Failed 13s (x3 over 59s) kubelet, aw Failed to pull image "ubuntu:1.16.0": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/ubuntu:1.16.0": no available registry endpoint: docker.io/library/ubuntu:1.16.0 not found
Warning Failed 13s (x3 over 59s) kubelet, aw Error: ErrImagePull
Normal BackOff 1s (x3 over 58s) kubelet, aw Back-off pulling image "ubuntu:1.16.0"
Warning Failed 1s (x3 over 58s) kubelet, aw Error: ImagePullBackOff
The issue is caused by the fact that the Helm template defaults to the chart's appVersion as the image tag:
image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
Change it to use the tag value instead:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
After helm create my-chart, go to templates/deployment.yaml and change the image: section to use the tag variable, then add something like this to values.yaml:
image:
  repository: ubuntu
  tag: latest
  pullPolicy: IfNotPresent
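A middle ground, sketched here, is to fall back to the chart's appVersion only when no explicit tag is set; newer helm create scaffolds generate this pattern:
# templates/deployment.yaml - explicit tag wins, appVersion is the fallback
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
With this line, tag: latest in values.yaml pulls ubuntu:latest, and removing the tag falls back to the appVersion behaviour shown above.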
Related
I have a single-node k8s server.
I want to run a k8s pod from a .tar.gz file.
I do this:
$ docker load -i myapp.tar.gz
and I verify it with:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myapp 1.0 f6b7a8b41894 2 months ago 33MB
I have a values.yaml file:
myapp:
  myapp:
    image: "myapp:1.0"
    imagePullPolicy: IfNotPresent
    ifname: eth0
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    autoscale:
      minReplicas: 1
      maxReplicas: 2
      targetCPUUtilizationPercentage: 90
I install it with Helm, and then I check the deployed pod:
$ kubectl get pods -A
and I see ImagePullBackOff error.
When I describe the pod, I get this error:
Normal Scheduled 3m17s default-scheduler Successfully assigned cs1/myapp-7589554bd4-4pprj to masternode1
Normal Pulling 100s (x4 over 3m16s) kubelet Pulling image "myapp:1.0"
Warning Failed 99s (x4 over 3m14s) kubelet Failed to pull image "myapp:1.0": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/myapp:1.0": failed to resolve reference "docker.io/library/myapp:1.0": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Any idea how can I resolve this issue?
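A note on a common cause here, offered as a sketch rather than a definitive answer: if the node's container runtime is containerd rather than Docker, images loaded with docker load go into the Docker daemon and stay invisible to the kubelet. Assuming containerd and that myapp.tar.gz is a docker save archive, the image can be imported where the kubelet looks:
# check which runtime the node reports (CONTAINER-RUNTIME column)
kubectl get nodes -o wide
# import the archive into the namespace kubernetes uses
gunzip myapp.tar.gz
sudo ctr -n k8s.io images import myapp.tar
# verify the kubelet-visible image store now has it
sudo crictl images | grep myapp
With imagePullPolicy: IfNotPresent, the pod should then start without contacting docker.io.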
A different image is fetched with the same image name when Helm does a chart install than when I do an ordinary docker pull. My values.yaml has
image:
  repository: gcr.io/rsi-api-test/rsi-api
  tag: latest
  pullPolicy: IfNotPresent
The only image in the container registry is
gcr.io/rsi-api-test/rsi-api@sha256:473bd9e31df8fd5d5424e11a57cabfd308179c8507683d6b9219c804bb315116
But Helm somehow finds an image with this digest:
gcr.io/rsi-api-test/rsi-api@sha256:c5cc78caa54ac4cf855c5fdb6a3448ff74ab641581fcda35d3e4e245c3154766
I believe it found some old version and for some reason refuses to get the latest one. Where is the "cached" or local collection of repositories used by Helm, and how can I force it to be cleared?
The registry is a GCP Artifact Registry
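One way to see which tags and digests the registry actually holds, sketched here assuming an authenticated gcloud CLI:
# list every tag and digest in the repository
gcloud container images list-tags gcr.io/rsi-api-test/rsi-api
# inspect what a tag resolves to without pulling it
docker manifest inspect gcr.io/rsi-api-test/rsi-api:latest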
>helm version
Client: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
UPDATE, CHANGED TAGGING
I edited the tags on the image in GCP, removing both the "latest" and "v0.1.0" tags and adding the tag "v1.0.0". I changed the respective values in the values.yaml file (above) to
image:
  repository: gcr.io/rsi-api-test/rsi-api
  tag: 1.0.0
  pullPolicy: IfNotPresent
Then, I did
docker image pull gcr.io/rsi-api-test/rsi-api:latest
and confirmed there was no longer a latest tag (I'm not sure where it's getting these images from).
then
docker run --detach gcr.io/rsi-api-test/rsi-api:1.0.0
Docker fetched the 1.0.0 version, and it ran as I expected/wanted.
As for helm, I re-ran helm install, and it didn't work due to an image error (kubectl logs shown below).
helm install rsapi
>kubectl logs pod/sullen-lemur-rsiapi-7b8d6d656c-7xmxh
Error from server (BadRequest): container "rsiapi" in pod "sullen-lemur-rsiapi-7b8d6d656c-7xmxh" is waiting to start: trying and failing to pull image
>kubectl describe pod/sullen-lemur-rsiapi-7b8d6d656c-7xmxh
Name: sullen-lemur-rsiapi-7b8d6d656c-7xmxh
Namespace: default
Priority: 0
Node: gke-helm-cluster-default-pool-7c542461-sdpv/10.128.0.6
Start Time: Thu, 26 May 2022 23:19:00 -0400
Labels: app.kubernetes.io/instance=sullen-lemur
app.kubernetes.io/name=rsiapi
pod-template-hash=7b8d6d656c
Annotations: <none>
Status: Pending
IP: 10.72.1.13
IPs:
IP: 10.72.1.13
Controlled By: ReplicaSet/sullen-lemur-rsiapi-7b8d6d656c
Containers:
rsiapi:
Container ID:
Image: gcr.io/rsi-api-test/rsi-api:1.0.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: http-get http://:http/ delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=20s timeout=1s period=10s #success=1 #failure=3
Environment:
SPRING_CLOUD_KUBERNETES_CONFIG_NAME: sullen-lemur-rsiapi
MANAGEMENT_ENDPOINT_RESTART_ENABLED: true
SPRING_CLOUD_KUBERNETES_RELOAD_ENABLED: true
SPRING_CLOUD_KUBERNETES_RELOAD_STRATEGY: refresh
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fvgg8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-fvgg8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/sullen-lemur-rsiapi-7b8d6d656c-7xmxh to gke-helm-cluster-default-pool-7c542461-sdpv
Normal Pulling 12m (x4 over 13m) kubelet Pulling image "gcr.io/rsi-api-test/rsi-api:1.0.0"
Warning Failed 12m (x4 over 13m) kubelet Failed to pull image "gcr.io/rsi-api-test/rsi-api:1.0.0": rpc error: code = NotFound desc = failed to pull and unpack image "gcr.io/rsi-api-test/rsi-api:1.0.0": failed to resolve reference "gcr.io/rsi-api-test/rsi-api:1.0.0": gcr.io/rsi-api-test/rsi-api:1.0.0: not found
Warning Failed 12m (x4 over 13m) kubelet Error: ErrImagePull
Warning Failed 11m (x6 over 13m) kubelet Error: ImagePullBackOff
Normal BackOff 3m32s (x42 over 13m) kubelet Back-off pulling image "gcr.io/rsi-api-test/rsi-api:1.0.0"
>helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
sullen-lemur 1 Thu May 26 23:19:00 2022 DEPLOYED rsiapi-0.1.0 1.0 default
The same image ID was tied to multiple references:
> docker image rm 8d0bfa85e8f9
Error response from daemon: conflict: unable to delete 8d0bfa85e8f9 (must be forced) - image is referenced in multiple repositories
>docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/rsi-api-test/rsi-api latest 8d0bfa85e8f9 10 hours ago 379MB
gcr.io/rsi-api-test/rsi-api v1.0.0 8d0bfa85e8f9 10 hours ago 379MB
>docker image rm -f 8d0bfa85e8f9
Untagged: gcr.io/rsi-api-test/rsi-api:latest
Untagged: gcr.io/rsi-api-test/rsi-api:v1.0.0
Untagged: gcr.io/rsi-api-test/rsi-api@sha256:473bd9e31df8fd5d5424e11a57cabfd308179c8507683d6b9219c804bb315116
Deleted: sha256:8d0bfa85e8f929221a7c6b66e5fd6008151e496407ed9e74072dd3e02314ad12
BONUS points for suggesting a way/policy to increment the version tag for Helm. This is a GitHub Workflow Maven build of a Spring Boot application.
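One common policy, sketched as a hypothetical GitHub Actions fragment (the step name and chart path are assumptions): read the version from the Maven POM, tag the image with it, and pass the same value to Helm:
# .github/workflows/deploy.yml (fragment, hypothetical)
- name: Build, push and deploy with the Maven version as the tag
  run: |
    VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
    docker build -t gcr.io/rsi-api-test/rsi-api:$VERSION .
    docker push gcr.io/rsi-api-test/rsi-api:$VERSION
    helm upgrade --install rsiapi ./rsiapi --set image.tag=$VERSION
Bumping the <version> in the POM (for example via the Maven release plugin) then automatically produces a new, unique image tag.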
Also, I'm running Helm on my personal Linux machine, but want it to target a GCP cluster. However, I also tried installing helm and using it on a Minikube installation. What do I need to do to make sure I fully switch?
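Helm deploys to whatever cluster the current kubeconfig context points at, so switching is a kubectl concern. A quick sketch (the zone is a placeholder):
# see which contexts exist and which is active
kubectl config get-contexts
# fetch GKE credentials and make that cluster the current context
gcloud container clusters get-credentials helm-cluster --zone us-central1-a
# or switch back to the local cluster
kubectl config use-context minikube
With Helm 2 specifically, also make sure Tiller is installed in the cluster the context points at (helm init, then helm version should report both client and server).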
I am trying to pull an image locally from a gitlab repository.
The yaml file looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: tester
    image: registry.gitlab.com/<my-project>/<components>
    imagePullPolicy: Always
    securityContext:
      privileged: true
  imagePullSecrets:
  - name: my-token
---
apiVersion: v1
data:
  .dockerconfigjson: <my-key>
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: my-token
  labels:
    app: tester
Then I execute: kubectl apply -f pullImage.yaml
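For reference, the my-token secret can also be generated rather than hand-encoded, sketched here with placeholder credentials:
kubectl create secret docker-registry my-token \
  --docker-server=registry.gitlab.com \
  --docker-username=<user> \
  --docker-password=<personal-access-token>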
The kubectl describe pod private-reg returns:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m1s default-scheduler Successfully assigned default/private-reg to antonis-dell
Normal Pulled 6m46s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 1m14.136699854s
Normal Pulled 6m43s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 1.808412857s
Normal Pulled 6m27s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 3.046153429s
Normal Pulled 5m56s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 4.143342874s
Normal Created 5m56s (x4 over 6m46s) kubelet Created container ches
Normal Started 5m56s (x4 over 6m46s) kubelet Started container ches
Normal Pulling 5m16s (x5 over 8m1s) kubelet Pulling image "registry.gitlab.com/<my-project>/<components>"
Normal Pulled 5m13s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 2.783360345s
Warning BackOff 2m54s (x19 over 6m42s) kubelet Back-off restarting failed container
However I cannot find the image locally.
The docker image ls returns:
REPOSITORY TAG IMAGE ID CREATED SIZE
moby/buildkit buildx-stable-1 440639846006 6 days ago 142MB
registry 2 1fd8e1b0bb7e 12 months ago 26.2MB
I expect that the image registry.gitlab.com/<my-project>/<components> would be there.
Am I missing something here?
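Worth noting: the events above show the pull succeeding and the container then crash-looping, and a pulled image lands in the node's container runtime, which is not necessarily the host Docker daemon. A sketch of where to look instead, assuming a containerd-based node:
# images visible to the kubelet
sudo crictl images | grep gitlab
# or via containerd directly, in the namespace kubernetes uses
sudo ctr -n k8s.io images ls | grep gitlab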
Docker image:
docker images | grep -i "gcc"
gcc-docker latest 84c4359e6fc9 21 minutes ago 1.37GB
docker run -it gcc-docker:latest
hello,world
Kubernetes pod deployed:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/hello-world to master-node
Normal Pulling 4s kubelet, master-node Pulling image "gcc-docker:latest"
Warning Failed 0s kubelet, master-node Failed to pull image "gcc-docker:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcc-docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 0s kubelet, master-node Error: ErrImagePull
Normal BackOff 0s kubelet, master-node Back-off pulling image "gcc-docker:latest"
Warning Failed 0s kubelet, master-node Error: ImagePullBackOff
YAML file used to deploy the pod:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    type: hello-world
spec:
  containers:
  - name: hello-world
    image: gcc-docker:latest
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 60']
    ports:
    - containerPort: 80
I tried pulling gcc-docker and got the same error. You may have this image present on your system already, and it's not on Docker Hub.
If you know the repository for this image, try using it, and for authentication create a docker-registry secret and reference it as an imagePullSecret.
Also, one more thing: you are running the container on the master node, and I assume it's Minikube or some local setup.
Minikube uses a dedicated VM to run Kubernetes which is not the same as the machine on which you have installed minikube.
So images available on your laptop will not be available to minikube.
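To make a locally built image visible to Minikube, one of these usually works (a sketch, not the only route):
# copy the image from the host Docker daemon into minikube
minikube image load gcc-docker:latest
# or build directly against minikube's Docker daemon
eval $(minikube docker-env)
docker build -t gcc-docker:latest .
Combined with imagePullPolicy: IfNotPresent (or Never), the kubelet then uses the local copy instead of trying Docker Hub.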
I tried to use kind as an alternative to Minikube to bootstrap a K8S cluster on my local machine.
The cluster is created successfully.
But when I tried to create some pods/deployments from images, it failed.
$ kubectl run nginx --image=nginx
$ kubectl run hello --image=hello-world
After some minutes, kubectl get pods shows a failed status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello 0/1 ImagePullBackOff 0 11m
nginx 0/1 ImagePullBackOff 0 22m
I am afraid this is another Great Firewall problem in China.
kubectl describe pods/nginx
Name: nginx
Namespace: default
Priority: 0
Node: dev-control-plane/172.19.0.2
Start Time: Sun, 30 Aug 2020 19:46:06 +0800
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mgq96 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mgq96:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mgq96
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned default/nginx to dev-control-plane
Normal BackOff 40m kubelet, dev-control-plane Back-off pulling image "nginx"
Warning Failed 40m kubelet, dev-control-plane Error: ImagePullBackOff
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: unexpected EOF
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Error: ErrImagePull
Normal Pulling 13m (x4 over 56m) kubelet, dev-control-plane Pulling image "nginx"
When I entered the kindest/node container, there was no docker in it. I'm not sure how kind works; I originally understood that it deploys a K8S cluster into a Docker container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a644f8b61314 kindest/node:v1.19.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 127.0.0.1:52301->6443/tcp dev-control-plane
$ docker exec -it a644f8b61314 /bin/bash
root@dev-control-plane:/# docker -v
bash: docker: command not found
After reading the kind docs, I cannot find an option to set a repository mirror like the one in Minikube.
BTW, I am using the latest Docker Desktop beta on Windows 10.
First pull the image on your local system using docker pull nginx, then use the command below to load that image into the kind cluster:
kind load docker-image nginx --name kind-cluster-name
kind uses containerd instead of Docker as its runtime; that's why docker is not installed on the nodes.
Alternatively, you can use the crictl tool to pull and check images inside the kind node:
crictl pull nginx
crictl images
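On the mirror question above: kind does accept containerd configuration patches in its cluster config, which can point docker.io at a mirror. A sketch, with the mirror endpoint as a placeholder:
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://registry-mirror.example.com"]
Create the cluster with it: kind create cluster --name dev --config kind-config.yaml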
I ran into the same issue because I had exported http_proxy and https_proxy to a local proxy (127.0.0.1) before creating the cluster, and that address is unreachable from inside the cluster. After unsetting http(s)_proxy and recreating the cluster, everything ran fine.
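A minimal recovery sequence for that case, assuming the cluster is named dev:
unset http_proxy https_proxy
kind delete cluster --name dev
kind create cluster --name dev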