Issues with adding a second nginx ingress controller in AKS with the Helm chart

I needed to spin up a second nginx ingress controller on my AKS cluster. I've followed https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/, especially the "or if installing with Helm" section, since the ingress installation is done with Helm.
So I've basically added the following lines, as described in the article, to the something.yaml file that I reference at installation time:
controller:
  replicaCount: 2
  image:
    registry: imnottheer.azurecr.io
    digest: ""
    pullPolicy: Always
  ingressClassResource:
    # -- Name of the ingressClass
    name: "internal-nginx"
    # -- Is this ingressClass enabled or not
    enabled: true
    # -- Is this the default ingressClass for the cluster
    default: false
    # -- Controller-value of the controller that is processing this ingressClass
    controllerValue: "k8s.io/internal-ingress-nginx"
  admissionWebhooks:
    patch:
      image:
        registry: imnottheer.azurecr.io
        digest: ""
  service:
    annotations:
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
      "service.beta.kubernetes.io/azure-load-balancer-internal-subnet": myAKSsubnet
    loadBalancerIP: "10.0.0.10"
  watchIngressWithoutClass: true
  ingressClassResource:
    default: true
defaultBackend:
  enabled: true
  image:
    registry: imnottheer.azurecr.io
    digest: ""
So the changes I've made are basically in line with the "Installing with Helm" documentation.
However, when I run helm install internal-nginx2 oci://imnotthere.azurecr.io/helm/ingress-nginx --version 4.1.3 -n ingress-nginx2 -f something.yaml I get:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "internal-nginx2": current value is "nginx-ingress"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ingress-nginx2": current value is "ingress-nginx"
But it works when I add --set controller.ingressClassResource.name="nginx-devices" to the command --> helm install internal-nginx2 oci://imnotthere.azurecr.io/helm/ingress-nginx --version 4.1.3 -n ingress-nginx2 --set controller.ingressClassResource.name="nginx-devices" -f something.yaml
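As far as I understand, that --set flag is equivalent to nesting the key in something.yaml, roughly:
controller:
  ingressClassResource:
    name: "nginx-devices"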
It creates the second ingress controller, but while inspecting it I don't understand where these values are coming from:
(screenshot: ingress inspection)
What am I doing wrong?

Related

OpenShift: any deployment results in "Application is not available"

First time deploying to OpenShift (actually Minishift on my Windows 10 Pro machine). Any sample application I deployed successfully resulted in "Application is not available".
From the Web Console I see a weird message, "Build #1 is pending", although I saw it completed successfully from PowerShell.
I found someone fixing a similar issue by changing to 0.0.0.0 (enter link description here), but I gave it a try and it isn't the solution in my case.
Here are the full logs and how I am deploying:
PS C:\to_learn\docker-compose-to-minishift\first-try> oc new-app https://github.com/openshift/nodejs-ex
warning: Cannot check if git requires authentication.
--> Found image 93de123 (16 months old) in image stream "openshift/nodejs" under tag "10" for "nodejs"
Node.js 10.12.0
---------------
Node.js available as docker container is a base platform for building and running various Node.js applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Tags: builder, nodejs, nodejs-10.12.0
* The source repository appears to match: nodejs
* A source build using source code from https://github.com/openshift/nodejs-ex will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Use 'start-build' to trigger a new build
* WARNING: this source repository may require credentials.
Create a secret with your git credentials and use 'set build-secret' to assign it to the build config.
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"
--> Creating resources ...
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc get bc/nodejs-ex -o yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2020-02-20T20:10:38Z
  labels:
    app: nodejs-ex
  name: nodejs-ex
  namespace: samplepipeline
  resourceVersion: "1123211"
  selfLink: /apis/build.openshift.io/v1/namespaces/samplepipeline/buildconfigs/nodejs-ex
  uid: 1003675e-541d-11ea-9577-080027aefe4e
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: nodejs-ex:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      uri: https://github.com/openshift/nodejs-ex
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:10
        namespace: openshift
    type: Source
  successfulBuildsHistoryLimit: 5
  triggers:
  - github:
      secret: c3FoC0RRfTy_76WEOTNg
    type: GitHub
  - generic:
      secret: vlKqJQ3ZBxfP4HWce_Oz
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: 172.30.1.1:5000/openshift/nodejs@sha256:3cc041334eef8d5853078a0190e46a2998a70ad98320db512968f1de0561705e
    type: ImageChange
status:
  lastVersion: 1

Kubernetes Helm install stable/jenkins deprecation error calling Master.* values

Hey, I'm trying to install Jenkins on a GKE cluster with this command:
helm install stable/jenkins -f test_values.yaml --name myjenkins
My versions of helm and kubectl, in case it matters:
helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Values were downloaded with this command, helm inspect values stable/jenkins > test_values.yaml, and then modified:
cat test_values.yaml
Master:
  adminPassword: 34LbGfq5LWEUgw // local testing
  resources:
    limits:
      cpu: '500m'
      memory: '1024'
  podLabels:
  nodePort: 32323
  serviceType: ClusterIp
Persistence:
  storageClass: 'managed-nfs-storage'
  size: 5Gi
rbac:
  create: true
And some weird new error after the update:
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
As I can see, you're trying to install stable/jekins, which isn't in the helm repo, instead of stable/jenkins. Please update your question if it's just a misspelling and I'll update my answer, but I've tried your command:
$helm install stable/jekins --name myjenkins -f test_values.yaml
and got the same error:
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
EDIT: To solve subsequent errors like:
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:258:11: executing "jenkins/templates/deprecation.yaml" at <fail "Master.* values have been renamed, please check the documentation">: error calling fail: Master.* values have been renamed, please check the documentation
and
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:354:10: executing "jenkins/templates/deprecation.yaml" at <fail "Persistence.* values have been renamed, please check the documentation">: error calling fail: Persistence.* values have been renamed, please check the documentation
and so on, you also need to edit test_values.yaml:
master:
  adminPassword: 34LbGfq5LWEUgw
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
  podLabels:
  nodePort: 32323
  serviceType: ClusterIP
persistence:
  storageClass: 'managed-nfs-storage'
  size: 5Gi
rbac:
  create: true
And after that it's deployed successfully:
$helm install stable/jenkins --name myjenkins -f test_values.yaml
NAME: myjenkins
LAST DEPLOYED: Wed Jan 8 15:14:51 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME AGE
myjenkins 1s
myjenkins-tests 1s
==> v1/Deployment
NAME AGE
myjenkins 0s
==> v1/PersistentVolumeClaim
NAME AGE
myjenkins 1s
==> v1/Pod(related)
NAME AGE
myjenkins-6c68c46b57-pm5gq 0s
==> v1/Role
NAME AGE
myjenkins-schedule-agents 1s
==> v1/RoleBinding
NAME AGE
myjenkins-schedule-agents 0s
==> v1/Secret
NAME AGE
myjenkins 1s
==> v1/Service
NAME AGE
myjenkins 0s
myjenkins-agent 0s
==> v1/ServiceAccount
NAME AGE
myjenkins 1s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default myjenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=myjenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
The stable repo is going to be deprecated very soon and is no longer being updated. I suggest using the Jenkins chart from the Helm Hub.
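For example, a minimal sketch of switching to the chart's current home (assuming the official repo at https://charts.jenkins.io and Helm 3 syntax, which drops --name):
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install myjenkins jenkins/jenkins -f test_values.yaml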

How to Helm -set an array of objects (array of maps)?

I am trying to install Jenkins with Helm onto a Kubernetes cluster, but with TLS (cert-manager, Let's Encrypt).
The difficulty is that the key master.ingress.tls takes an array of objects.
helm install --name jenkins --namespace jenkins --set
master.serviceType=ClusterIP,master.ingress.enabled=true,
master.ingress.hostName=jenkins.mydomain.com,
master.ingress.annotations."certmanager\.k8s\.io\/cluster-issuer"=letsencrypt-prod,
master.ingress.tls={hosts[0]=jenkins.mydomain.com,
secretName=jenkins-cert} stable/jenkins
The relevant part is:
master.ingress.tls={hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}
Different errors arise with this and also if I try changing it:
no matches found:
master.serviceType=ClusterIP,master.ingress.enabled=true,master.ingress.hostName=jenkins.mydomain.com,master.ingress.annotations.certmanager.k8s.io/cluster-issuer=letsencrypt-prod,master.ingress.tls={master.ingress.tls[0].secretName=jenkins-cert}
or
release jenkins failed: Ingress in version "v1beta1" cannot be handled
as a Ingress: v1beta1.Ingress.Spec: v1beta1.IngressSpec.TLS:
[]v1beta1.IngressTLS: readObjectStart: expect { or n, but found ",
error found in #10 byte of ...|],"tls":["secretName|..., bigger
context
...|eName":"jenkins","servicePort":8080}}]}}],"tls":["secretName:jenkins-cert"]}}
Trying this returns the first error above.
Different solutions tried:
- {hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}
- {"hosts[0]=jenkins.mydomain.com","secretName=jenkins-cert"}
- {hosts[0]:jenkins.mydomain.com,secretName:jenkins-cert}
- "{hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}"
- master.ingress.tls[0].secretName=jenkins-cert
- {master.ingress.tls[0].hosts[0]=jenkins.mydomain.com,master.ingress.tls[0].secretName=jenkins-cert}
How to Helm -set this correctly?
This was solved by adding a custom my-values.yaml.
my-values.yaml:
master:
  jenkinsUrlProtocol: "https"
  ingress:
    enabled: true
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      kubernetes.io/ssl-redirect: "true"
    hostName: jenkins.mydomain.com
    tls:
      - hosts:
          - jenkins.mydomain.com
        secretName: cert-name
Install command:
helm install --name jenkins -f my-values.yaml stable/jenkins
Paul Boone describes it in his blog article.
For me and my chart it worked like this:
--set 'global.ingressTls[0].secretName=abc.example.com' --set 'global.ingressTls[0].hosts[0]=abc.example.com'
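Applied to the keys from the question, the same index syntax would look roughly like this (a sketch; the hostname and secret name are the placeholders used above):
helm install --name jenkins --namespace jenkins \
  --set master.ingress.enabled=true \
  --set master.ingress.hostName=jenkins.mydomain.com \
  --set 'master.ingress.tls[0].hosts[0]=jenkins.mydomain.com' \
  --set 'master.ingress.tls[0].secretName=jenkins-cert' \
  stable/jenkins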

Jenkins X extraValues.yaml overrides helm values in preview environment

I'm using ECR to store Docker images. In a preview environment, I'm making a few changes in values.yaml so that the image gets pulled from ECR.
cat pim/dam/preview/values.yaml
expose:
  Annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
  config:
    exposer: Ingress
    http: true
    tlsacme: false
cleanup:
  Args:
    - --cleanup
  Annotations:
    helm.sh/hook: pre-delete
    helm.sh/hook-delete-policy: hook-succeeded
preview:
  image:
    repository: abc.dkr.ecr.us-east-1.amazonaws.com/pim-dam
    tag:
    pullPolicy: IfNotPresent
When I run jx preview --app pim-dam --dir ../.., I can see an extraValues.yaml file is getting created which overrides my values.yaml file. The problem with extraValues.preview.image.repository is that it adds the organisation after the registry name, which is not the case with ECR.
How do I override extraValues.yaml? Or how do I tell Jenkins X not to include $ORG in extraValues.yaml?
current:
extraValues.preview.image.repository: $DOCKER_REGISTRY/$ORG/$APPNAME
required:
extraValues.preview.image.repository: $DOCKER_REGISTRY/$APPNAME
cat extraValues.yaml
expose:
  Annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
  config:
    domain: 54.183.236.166.nip.io
    exposer: Ingress
    http: "true"
preview:
  image:
    repository: abcd.dkr.ecr.us-east-1.amazonaws.com/tejesh-unbxd/pim-dam
    tag: 0.0.0-SNAPSHOT-PR-11-2
The output of jx version is:
NAME VERSION
jx 1.3.980
jenkins x platform 0.0.3513
Kubernetes cluster v1.10.6
kubectl v1.12.1
helm client v2.11.0+g2e55dbe
helm server v2.12.2+g7d2b0c7
git git version 2.14.4
Operating System Unknown Linux distribution Linux version 4.14.72-73.55.amzn2.x86_64 (mockbuild@ip-10-0-1-219) (gcc version 7.3.1 20180303 (Red Hat 7.3.1-5) (GCC)) #1 SMP Thu Sep 27 23:37:24 UTC 2018

kubernetes cannot pull local image

I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann only posted this above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
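A minimal sketch of the explicit pull policy suggested in that comment, reusing the container from the pod spec in the question:
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: IfNotPresent  # avoids the implicit Always that the :latest tag triggers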
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to the local registry:
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your yaml file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
    - name: <name>
      image: localhost:5000/<local-image-name>
      imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster, and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
5 (optional). Allow connection to insecure registry in worker nodes:
sudo echo '{"insecure-registries":["<registry-hostname>:5000"]}' > /etc/docker/daemon.json
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
    - name: <name>
      image: <local-image-name>
      imagePullPolicy: Never
The easiest way to further analyze ErrImagePull problems is to ssh into the node and try to pull the image manually by doing docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case, your yaml file should have imagePullPolicy: Never; see below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
Found this here:
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install docker (I think), but you don't need to start it. Minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps, you should see a bunch of minikube containers already running.
KVM utils are probably already on your machine, but they can be installed like this on CentOS/RHEL:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually a "docker-desktop" (i.e. not a remote cluster).
(Right click on Docker icon, then select "Kubernetes" in menu)
All you need to do is just do a docker build from your dockerfile, or get all the images on the nodes of your cluster, do a suitable docker tag and create the manifest.
Kubernetes doesn't directly pull from the registry. First it searches for the image in local storage and then in the Docker registry.
Pull latest nginx image
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to docker registry to pull the image. It will bring up the pod instantly.
And if the image is not present on the local machine, it will try to pull it from the Docker registry and fail with an ErrImagePull error.
Also, if you set imagePullPolicy: Never, it will never look at the registry to pull the image and will fail with ErrImageNeverPull if the image is not found.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
        - image: test:test8970
          name: test
          imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl image. However, it didn't show up in crictl images.
Thanks to a comment on GitHub, I found out that the actual problem is that ContainerD uses a different namespace.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
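The export half of that round trip isn't shown above; a sketch of both steps might look like this (the image name is hypothetical):
ctr -n default image export exported-app-image.tar my-app:1.0
ctr -n k8s.io image import exported-app-image.tar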
I was facing a similar issue. The image was present locally, but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
I rebuilt the image and redeployed the deployment yaml, and it worked.
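A sketch of that sequence (the image and manifest names are placeholders):
eval $(minikube -p minikube docker-env)   # point this shell's docker CLI at minikube's daemon
docker build -t my-app:dev .
kubectl apply -f deployment.yaml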
