How to Helm --set an array of objects (array of maps)? - jenkins

I am trying to install Jenkins with Helm onto a Kubernetes cluster, but with TLS (cert-manager, Let's Encrypt).
The difficulty is that the key master.ingress.tls takes an array of objects.
helm install --name jenkins --namespace jenkins \
  --set master.serviceType=ClusterIP,master.ingress.enabled=true,master.ingress.hostName=jenkins.mydomain.com,master.ingress.annotations."certmanager\.k8s\.io\/cluster-issuer"=letsencrypt-prod,master.ingress.tls={hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert} \
  stable/jenkins
The relevant part is:
master.ingress.tls={hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}
Different errors arise with this, and also when I try changing it:
no matches found:
master.serviceType=ClusterIP,master.ingress.enabled=true,master.ingress.hostName=jenkins.mydomain.com,master.ingress.annotations.certmanager.k8s.io/cluster-issuer=letsencrypt-prod,master.ingress.tls={master.ingress.tls[0].secretName=jenkins-cert}
or
release jenkins failed: Ingress in version "v1beta1" cannot be handled
as a Ingress: v1beta1.Ingress.Spec: v1beta1.IngressSpec.TLS:
[]v1beta1.IngressTLS: readObjectStart: expect { or n, but found ",
error found in #10 byte of ...|],"tls":["secretName|..., bigger
context
...|eName":"jenkins","servicePort":8080}}]}}],"tls":["secretName:jenkins-cert"]}}
Trying this returns the first error above.
Different solutions tried:
- {hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}
- {"hosts[0]=jenkins.mydomain.com","secretName=jenkins-cert"}
- {hosts[0]:jenkins.mydomain.com,secretName:jenkins-cert}
- "{hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}"
- master.ingress.tls[0].secretName=jenkins-cert
- {master.ingress.tls[0].hosts[0]=jenkins.mydomain.com,master.ingress.tls[0].secretName=jenkins-cert}
How do I --set this correctly with Helm?

This was solved by adding a custom my-values.yaml.
my-values.yaml:
master:
  jenkinsUrlProtocol: "https"
  ingress:
    enabled: true
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      kubernetes.io/ssl-redirect: "true"
    hostName: jenkins.mydomain.com
    tls:
      - hosts:
          - jenkins.mydomain.com
        secretName: cert-name
Install command:
helm install --name jenkins -f my-values.yaml stable/jenkins

Paul Boone describes it in his blog article
For me and my chart it worked like this:
--set 'global.ingressTls[0].secretName=abc.example.com' --set 'global.ingressTls[0].hosts[0]=abc.example.com'
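Applied to the Jenkins chart values from the original question, the same indexed syntax would look roughly like this (a sketch, not verified against that chart version; the single quotes also stop zsh from globbing the brackets, which is likely where the "no matches found" error above comes from):

helm install --name jenkins --namespace jenkins \
  --set 'master.ingress.tls[0].hosts[0]=jenkins.mydomain.com' \
  --set 'master.ingress.tls[0].secretName=jenkins-cert' \
  stable/jenkins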

Related

GCP instance group doesn't start containers

I have an instance template that is supposed to run my app in a container running on Google Cloud's Container-Optimized OS. When I create a single VM from this template it runs just fine, but when I use it to create an instance group the containers don't start.
According to the logs the machine didn't even try to start them.
I tried to compare the output of gcloud compute instances describe <instance-name> for the instance that works against one of the instances in the MIG, but apart from some differences in the network interfaces, and some that are due to one instance being managed by an instance group and the other one not, I don't see anything unusual.
I also noticed that when I SSH to the instance that works, I get this message:
########################[ Welcome ]########################
# You have logged in to the guest OS. #
# To access your containers use 'docker attach' command #
###########################################################
but when I SSH to one of the instances in the MIG, I don't see it.
Is there a problem with using the container-optimized OS in an instance group?
My instance template is defined as follows:
creationTimestamp: '2022-11-09T03:25:29.896-08:00'
description: ''
id: '757769630202081478'
kind: compute#instanceTemplate
name: server-using-docker-hub-1
properties:
  canIpForward: false
  confidentialInstanceConfig:
    enableConfidentialCompute: false
  description: ''
  disks:
  - autoDelete: true
    boot: true
    deviceName: server-using-docker-hub
    index: 0
    initializeParams:
      diskSizeGb: '10'
      diskType: pd-balanced
      sourceImage: projects/cos-cloud/global/images/cos-stable-101-17162-40-20
    kind: compute#attachedDisk
    mode: READ_WRITE
    type: PERSISTENT
  keyRevocationActionType: NONE
  labels:
    container-vm: cos-stable-101-17162-40-20
  machineType: e2-micro
  metadata:
    fingerprint: 76mZ3i--POo=
    items:
    - key: gce-container-declaration
      value: |-
        spec:
          containers:
          - name: server-using-docker-hub-1
            image: docker.io/rinbar/kwik-e-mart
            env:
            - name: AWS_ACCESS_KEY_ID
              value: <redacted>
            - name: AWS_SECRET_ACCESS_KEY
              value: <redacted>
            - name: SECRET_FOR_SESSION
              value: <redacted>
            - name: SECRET_FOR_USER
              value: <redacted>
            - name: MONGODBURL
              value: mongodb+srv://<redacted>@cluster0.<redacted>.mongodb.net/kwik-e-mart
            - name: DEBUG
              value: server:*
            - name: PORT
              value: '80'
            stdin: false
            tty: false
          restartPolicy: Always
        # This container declaration format is not public API and may change without notice. Please
        # use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine.
    kind: compute#metadata
  networkInterfaces:
  - kind: compute#networkInterface
    name: nic0
    network: https://www.googleapis.com/compute/v1/projects/rons-project-364411/global/networks/default
    stackType: IPV4_ONLY
    subnetwork: https://www.googleapis.com/compute/v1/projects/rons-project-364411/regions/me-west1/subnetworks/default
  reservationAffinity:
    consumeReservationType: ANY_RESERVATION
  scheduling:
    automaticRestart: true
    onHostMaintenance: MIGRATE
    preemptible: false
    provisioningModel: STANDARD
  serviceAccounts:
  - email: 629139871582-compute@developer.gserviceaccount.com
    scopes:
    - https://www.googleapis.com/auth/devstorage.read_only
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/monitoring.write
    - https://www.googleapis.com/auth/servicecontrol
    - https://www.googleapis.com/auth/service.management.readonly
    - https://www.googleapis.com/auth/trace.append
  shieldedInstanceConfig:
    enableIntegrityMonitoring: true
    enableSecureBoot: false
    enableVtpm: true
  tags:
    items:
    - http-server
selfLink: https://www.googleapis.com/compute/v1/projects/rons-project-364411/global/instanceTemplates/server-using-docker-hub-1
I'm unable to replicate your issue; it worked for me.
I wonder whether your issue is container registry permissions? I don't use MIGs but assume a MIG runs as a service account and that perhaps yours doesn't have appropriate permission (to access the container registry)?
The one caveat is that gcloud compute instance-templates create-with-container is confusing and I am unable to resolve how to use --create-disk and --disk flags. I ended up using the Console to create the template. The Console's tool to generate the equivalent gcloud command is also incorrect (submitted feedback).
Q="74331370"
PROJECT="$(whoami)-$(date +%y%m%d)-${Q}"
ZONE="us-west1-c"
TEMPLATE="tmpl"
GROUP="group"
IMAGE="gcr.io/kuar-demo/kuard-amd64:blue"
SIZE="2"
MIN=${SIZE}
MAX=${MIN}
# This command is confusing
# Ultimately I used the console to save time
gcloud compute instance-templates create-with-container ${TEMPLATE} \
--project=${PROJECT} \
--machine-type=f1-micro \
--tags=http-server \
--container-image=${IMAGE} \
--create-disk=image-project=cos-cloud,image-family=cos-stable,mode=rw,size=10,type=pd-balanced \
--disk=auto-delete=yes,boot=yes,device-name=${TEMPLATE}
gcloud beta compute instance-groups managed create ${GROUP} \
--project=${PROJECT} \
--base-instance-name=${GROUP} \
--size=${SIZE} \
--template=${TEMPLATE} \
--zone=${ZONE} \
--list-managed-instances-results=PAGELESS
gcloud beta compute instance-groups managed set-autoscaling ${GROUP} \
--project=${PROJECT} \
--zone=${ZONE} \
--min-num-replicas=${MIN} \
--max-num-replicas=${MAX} \
--mode=off
INSTANCES=$(\
gcloud compute instance-groups managed list-instances ${GROUP} \
--project=${PROJECT} \
--zone=${ZONE} \
--format="value(instance)")
for INSTANCE in ${INSTANCES}
do
gcloud compute ssh ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE} \
--command="docker container ls"
done
Yields (edited for clarity):
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
dd902f2d5e29 ${IMAGE} "/kuard" 4 minutes ago Up 4 minutes klt-tmpl-rqhp
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
0182f3e7f3dc ${IMAGE} "/kuard" 4 minutes ago Up 4 minutes klt-tmpl-azxs
Since the instances in the group have no external IP addresses, you need to enable Private Google Access or Cloud NAT to allow the instances to pull the container image from Container Registry / Artifact Registry / Docker Hub / any other container registry.
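If you go the Cloud NAT route, a minimal sketch would look roughly like this (router and NAT names are placeholders; the region matches the subnetwork in the template above):

gcloud compute routers create nat-router \
  --network=default \
  --region=me-west1

gcloud compute routers nats create nat-config \
  --router=nat-router \
  --region=me-west1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges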

issues with adding second nginx ingress controller in AKS with Helm chart

I needed to spin up a second nginx ingress controller on my AKS cluster. I've followed https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/, especially the "or if installing with Helm" section, since the ingress installation is being done with Helm.
So I've added the following lines, as described in the article, to the something.yaml file that I reference at installation time:
controller:
  replicaCount: 2
  image:
    registry: imnottheer.azurecr.io
    digest: ""
    pullPolicy: Always
  ingressClassResource:
    # -- Name of the ingressClass
    name: "internal-nginx"
    # -- Is this ingressClass enabled or not
    enabled: true
    # -- Is this the default ingressClass for the cluster
    default: false
    # -- Controller-value of the controller that is processing this ingressClass
    controllerValue: "k8s.io/internal-ingress-nginx"
  admissionWebhooks:
    patch:
      image:
        registry: imnottheer.azurecr.io
        digest: ""
  service:
    annotations:
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
      "service.beta.kubernetes.io/azure-load-balancer-internal-subnet": myAKSsubnet
    loadBalancerIP: "10.0.0.10"
  watchIngressWithoutClass: true
  ingressClassResource:
    default: true
defaultBackend:
  enabled: true
  image:
    registry: imnottheer.azurecr.io
    digest: ""
So, the changes I’ve made are basically in line with the documentation Installing with Helm
However, when I run helm install internal-nginx2 oci://imnotthere.azurecr.io/helm/ingress-nginx --version 4.1.3 -n ingress-nginx2 -f something.yaml getting:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "internal-nginx2": current value is "nginx-ingress"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ingress-nginx2": current value is "ingress-nginx"
But it works when I add --set controller.ingressClassResource.name="nginx-devices" to the command: helm install internal-nginx2 oci://imnotthere.azurecr.io/helm/ingress-nginx --version 4.1.3 -n ingress-nginx2 --set controller.ingressClassResource.name="nginx-devices" -f something.yaml
It creates the second ingress controller, but when inspecting it I don't understand where these values are coming from:
(screenshot: ingress inspection)
What am I doing wrong?
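One observation worth checking: the something.yaml above defines ingressClassResource twice under controller, and most YAML parsers let the later block win, so the name: "internal-nginx" may be dropped in favour of the chart's default "nginx", which would match the ownership error. The override that worked on the command line can be expressed in the values file instead, roughly like this (a sketch; the key point is that each controller needs its own IngressClass name and controller value):

controller:
  ingressClassResource:
    name: "nginx-devices"
    enabled: true
    default: false
    controllerValue: "k8s.io/nginx-devices"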

Error when deploying stable jenkins charts via kubernetes: curl performs SSL certificate verification by default

I have installed Rancher 2 and created a Kubernetes cluster of internal VMs (no AWS / gcloud).
The cluster is up and running. We are behind a corporate proxy.
1) Installed kubectl and executed kubectl cluster-info. It listed my cluster information correctly.
2) Installed helm
3) Configured helm referencing Rancher Helm Init
4) Tried installing Jenkins charts via helm
helm install --namespace jenkins --name jenkins -f values.yaml stable/jenkins
The values.yaml has proxy details.
---
Master:
  ServiceType: "ClusterIP"
  AdminPassword: "adminpass111"
  Cpu: "200m"
  Memory: "256Mi"
  InitContainerEnv:
    - name: "http_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
    - name: "https_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
  ContainerEnv:
    - name: "http_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
    - name: "https_proxy"
      value: "http://proxyuser:proxypass@proxyname:8080"
  JavaOpts: >-
    -Dhttp.proxyHost=proxyname
    -Dhttp.proxyPort=8080
    -Dhttp.proxyUser=proxyuser
    -Dhttp.proxyPassword=proxypass
    -Dhttps.proxyHost=proxyname
    -Dhttps.proxyPort=8080
    -Dhttps.proxyPassword=proxypass
    -Dhttps.proxyUser=proxyuser
  Persistence:
    ExistingClaim: "jk-volv-pvc"
    Size: "10Gi"
5) The workloads are created. However, the Pods are stuck. The logs complain about SSL certificate verification.
How do I turn SSL verification off? I don't see an option to set it in values.yaml.
We cannot turn off installing plugins during deployment either.
Do we need to add an SSL cert when deploying the charts?
Any idea how to solve this issue?
I had the same issues as you had. In my case it was due to the fact that my DNS domain had a wildcard A record, so updates.jenkins.io.mydomain.com would resolve fine. After removing the wildcard that lookup fails, so the host then properly resolves updates.jenkins.io as updates.jenkins.io.
This is fully documented here:
https://github.com/kubernetes/kubernetes/issues/64924
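A quick way to check whether a wildcard record is interfering is to resolve a name that clearly should not exist under your domain; if it still returns an address, the wildcard is in play (the domain here is a placeholder):

nslookup updates.jenkins.io.mydomain.com
nslookup some-name-that-should-not-exist.mydomain.com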

container labels in kubernetes

I am building my docker image with jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
  --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
  --build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME .
I was using Docker but I am migrating to k8s.
With docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes?
I am aware of adding those labels in .metadata.labels of my YAML files, but I don't like that much because
- it ties that information to the deployment and not to the container itself
- it can be modified at any time
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
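For instance, the commit ID can be baked into the image tag at build time; a sketch reusing the build args from the question (the tag scheme is illustrative):

docker build \
  --build-arg VCS_REF=$GIT_COMMIT \
  --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
  --build-arg BUILD_NUMBER=$BUILD_NUMBER \
  -t $IMAGE_NAME:$(git rev-parse --short HEAD) .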
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
Key                            Description
app.kubernetes.io/name         The name of the application
app.kubernetes.io/instance     A unique name identifying the instance of an application
app.kubernetes.io/version      The current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component    The component within the architecture
app.kubernetes.io/part-of      The name of a higher level application this one is part of
app.kubernetes.io/managed-by   The tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1 # apps/v1 if using a Deployment
kind: Pod      # or via Deployment
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
And use the downward api (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running Container:
1 ) Environment variables.
2 ) Volume Files.
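For the first approach, a minimal sketch using environment variables (the pod name and the version label are illustrative; the fieldRef paths are the downward API's own):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-env-example
  labels:
    version: 4.5.6
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $APP_VERSION; sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: APP_VERSION # reads the pod's 'version' label
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['version']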
Below is an example of using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args: # < ------ We're using the mounted volumes inside the container
        - while true; do
            if [[ -e /etc/podinfo/labels ]]; then
              echo -en '\n\n'; cat /etc/podinfo/labels; fi;
            if [[ -e /etc/podinfo/annotations ]]; then
              echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
            sleep 5;
          done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes: # < -------- We're mounting in our example the pod's labels and annotations
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
Beside labels and annotations, the downward API exposes multiple additional options, like:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A container's CPU limit, CPU request, memory limit, memory request.
See full list in here.
(*) A nice blog discussing the downward API.
(**) You can view all your pods labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or view only a specific label with $ kubectl get pods -L <label_name>.
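And for a docker inspect-style lookup of a single label's value on one pod, a jsonpath query along these lines works (the pod name is a placeholder; the run label matches the sample output above):

kubectl get pod <pod-name> -o jsonpath='{.metadata.labels.run}'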

kubernetes cannot pull local image

I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann did only post this above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to the local registry:
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your YAML file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
    - name: <name>
      image: localhost:5000/<local-image-name>
      imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
5 (optional). Allow connections to the insecure registry on the worker nodes (a plain sudo echo ... > file won't work here, because the redirection runs without root):
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
    - name: <name>
      image: <local-image-name>
      imagePullPolicy: Never
The easiest way to further analyze ErrImagePull problems is to SSH into the node and try to pull the image manually with docker pull my/nginx:latest. I've never set up Kubernetes on a single machine but could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case your YAML file should have imagePullPolicy: Never; see below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
found this here
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it. Minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps you should see a bunch of minikube containers already running.
kvm utils are probably already on your machine, but they can be installed like this on centos/rhel:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually a "docker-desktop" (i.e. not a remote cluster).
(Right click on Docker icon, then select "Kubernetes" in menu)
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, do a suitable docker tag and create the manifest.
Kubernetes doesn't directly pull from the registry; first it searches for the image in local storage and then in the Docker registry.
Pull latest nginx image
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to docker registry to pull the image. It will bring up the pod instantly.
And if image is not present on local machine it will try to pull from docker registry and fail with ErrImagePull error.
Also, if you set imagePullPolicy: Never, it will never look to the registry to pull the image, and will fail with ErrImageNeverPull if the image is not found locally.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
        - image: test:test8970
          name: test
          imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl images. However, it didn't show up in crictl images.
Thanks to a comment on Github, I found out that the actual problem is a different namespace of ContainerD.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
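For completeness, the export half of that round trip from the default namespace would be something like this (the image reference is a placeholder):

ctr -n default images export exported-app-image.tar <your-image>:<tag>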
I was facing a similar issue: the image was present locally but k8s was not able to pick it up.
So I went to the terminal, deleted the old image and ran eval $(minikube -p minikube docker-env).
Rebuilt the image and redeployed the deployment YAML, and it worked.
