Please see my images below:
I then run this:
kubectl run my-app --image=iansimage:latest --port=5000
and this:
kubectl expose deployment my-app --type=LoadBalancer --port=8080 --target-port=5000
However, I then see this:
Notice the warning in the above screenshot: "Error response from daemon: pull access denied for iansimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied".
Why is kubectl trying to find iansimage:latest on the internet? iansimage:latest is a local image I created as per my last question: Create an image from a Dockerfile.
Please note that I am new to Kubernetes, so this may be something simple.
Update
Following on from Burak Serdars's comment. Say I have a command like this, which would normally build an image: docker build -t "app:latest" .
How would I build this image inside a Kubernetes pod?
"Latest" is a special tag, it means that Docker always check if the downloaded image is the latest available searching the registry.
Retag your image with other tag than latest, like this :
docker tag iansimage:latest iansimage:v1
Then change your YAML and use iansimage:v1.
That solves your problem.
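For example, the relevant fragment of the deployment YAML might look like the sketch below (the container name is just illustrative). Note that with any tag other than latest, Kubernetes defaults imagePullPolicy to IfNotPresent, so the locally built image will be used:
spec:
  containers:
  - name: my-app
    image: iansimage:v1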
When you use kubectl run, I believe it will create a deployment resource with a field named imagePullPolicy, which I believe defaults to Always. You might be able to change this with kubectl edit deployment my-app to set this field to IfNotPresent.
You might also consider @Enzo's suggestion to tag the image to a particular version.
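As a sketch of that approach (assuming the deployment and container created by kubectl run are both named my-app), a strategic merge patch avoids opening an editor:
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","imagePullPolicy":"IfNotPresent"}]}}}}'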
Related
I am trying to run a Docker image that I have built locally with Kubernetes.
I am getting the below error:
Failed to pull image "myImage": rpc error: code = Unknown desc = Error response from daemon: pull access denied for myImage, repository does not exist or may require 'docker login'
In the YAML file I have given:
image: myImage
imagePullPolicy: IfNotPresent
Locally I am using Docker Desktop and minikube.
I have tried multiple ways, but the only thing that works is to make a tar of myImage and load it into minikube.
I have tried using eval $(minikube docker-env), but after this my image fails to build because it pulls the base image from our organization's Nexus server.
Can anyone suggest any other way?
Unfortunately I can't comment yet, so I have to post an answer. The image you're trying to pull, myImage, does not exist in the local image cache of your Kubernetes cluster. Running docker image ls should yield a list of images that are available locally. If Docker doesn't find an image locally, it will (by default) go to Docker Hub to find it. Seeing as the image listed has no prefix like someOrganization/, it is assumed to be an officially published image from Docker Hub itself. Since your locally built image isn't an official Docker Hub image, Docker doesn't know what to run.
So the core of the problem is that your minikube doesn't have access to wherever you built your image. Unfortunately I haven't used minikube before, so I'm unable to comment on any of its intricacies. I would be remiss if I left my answer like that, though, so looking at the docs for minikube (REF: https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env) you're doing the right thing with the eval.
Sooo... your minikube isn't stock/vanilla and it pulls from a company repo? Sounds like you need to alter your minikube or you should re-evaluate the base image you're using and fix the Dockerfile.
To fix this, set imagePullPolicy to Never.
Make sure to run eval $(minikube docker-env) before building the image.
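Putting those two points together, a rough workflow sketch could look like this (image and container names are placeholders; note that Docker repository names must be lowercase, so myImage would need to become something like my-image):
eval $(minikube docker-env)     # build against minikube's Docker daemon
docker build -t my-image:1 .    # the image now exists inside minikube
and in the pod/deployment spec:
containers:
- name: my-app
  image: my-image:1
  imagePullPolicy: Never        # never try to pull from a registry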
Maybe this is a little bit late but... if anyone has the same issue, here is how I solved something like this:
You need to pass "Never" as imagePullPolicy:
imagePullPolicy: Never
You need to load the image inside minikube:
minikube image load myImage
After all this, just continue as usual:
kubectl apply -f whereverTheFileIs.yaml
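If you want to confirm the image actually landed inside minikube before applying the manifest, recent minikube releases can list the cluster's images (a hedged sketch; the image name is a placeholder):
minikube image load my-image:1
minikube image ls | grep my-image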
In the K8s world, I have a shell script with the below content:
#!/bin/bash
docker build -t imagename:1 .
kubectl delete -f services/nodeport.service.yml
kubectl delete -f deployments/springapsimplehellopod.deployment.yml
kubectl apply -f deployments/springapsimplehellopod.deployment.yml
kubectl apply -f services/nodeport.service.yml
kubectl apply -f services/loadbalancer.service.yml
Am I following correct standards? What are the disadvantages of not using an image registry, i.e. keeping the built image in the local Docker daemon and creating deployments and services from that newly created image?
As you are using a script, you can just point kubectl at the folder with your YAMLs. It's mentioned in the Kubernetes docs:
Use kubectl apply -f <directory>. This looks for Kubernetes configuration in all .yaml, .yml, and .json files in <directory> and passes it to apply.
$ kubectl apply -f ./tst
service/nginx created
statefulset.apps/web created
service/nginx unchanged
persistentvolume/pv0003 created
Also, you don't need to delete your deployments, as kubectl apply will reconfigure the deployment.
$ kubectl apply -f deployment.yaml
deployment.apps/mywebtestapp-deployment created
user@cloudshell:~ (project)$ vi deployment.yaml # editing container name from nginx to httpd
user@cloudshell:~ (project)$ kubectl apply -f deployment.yaml
deployment.apps/mywebtestapp-deployment configured
Until you want to change an immutable field:
The Service "nginx" is invalid: spec.clusterIP: Invalid value: "": field is immutable
In this situation, you would need to delete this resource and create a new one.
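A sketch of that delete-and-recreate step for a service manifest (the file name is just an example); kubectl replace --force -f service.yaml does the same in one command:
kubectl delete -f service.yaml
kubectl apply -f service.yaml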
Regarding the best practice of using an image registry, it might be opinion-based and depends on your needs.
You can check this article or check this thread.
You could also check Helm, the package manager for Kubernetes, if you would like, for example, to divide your application into Helm charts.
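If you want to try that route, a minimal Helm sketch (chart and release names are placeholders):
helm create my-chart                 # scaffolds a chart with deployment and service templates
helm install my-release ./my-chart   # renders and applies the templates to the cluster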
Let's say I have an image foo with tag v1.
So I deployed it on Kubernetes by foo:v1.
However, for some reason (e.g. monoversion in a monorepo), I pushed the exact same image to the container registry with tag v2.
And I changed the k8s manifest to foo:v2.
In this situation, I want to update the pod only when the image digests of v1 and v2 are different. So in the case of foo, the digests are the same, so the container running foo:v1 should keep running.
Is this possible? If so, how?
Thanks
There is no way to update an image tag without restarting the pod.
The only way to make it work is to use the digest explicitly instead of tags.
So now the image spec would look like this:
spec:
  containers:
  - name: foo
    image: foo@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566
This way your image does not depend on tags. Digests can be found either on Docker Hub or by running the command docker images --digests <image-name>.
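For example, reusing the digest from the snippet above (the deployment and container names foo are assumptions based on the question):
docker images --digests foo
kubectl set image deployment/foo foo=foo@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566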
After a recent update to Docker I find myself unable to create any new containers in Docker. I've already rebooted my operating system and Docker itself. I've tried specifying the tags to specific versions any way I could. I can manually pull the images I want with Docker. But it refuses to run or create any new containers. Already existing containers start up just fine. The full error message is below.
Unable to find image 'all:latest' locally
Error response from daemon: pull access denied for all, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
These aren't from private repositories. These are all public projects from Docker Hub. Any suggestions?
This is correct. You're trying to build using an image called all:latest but if you look on the docker registry that doesn't exist.
https://hub.docker.com/_/all
Are you sure you're not trying to build from a private repository?
I found the issue. I started taking my Docker command apart and found there was an environment variable that had the word "all" in it. Docker was completely ignoring whatever I had for the image and using the environment variable for the image. As soon as I removed this environment variable Docker started working again correctly.
The variable in question is -e NVIDIA_VISIBLE_DEVICES: "all" \, used to make sure the Plex container can see that there is an NVIDIA GPU available. I was using the wrong guide and found out it's supposed to be -e NVIDIA_VISIBLE_DEVICES=all \ instead.
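For reference, a rough sketch of the corrected flag in a docker run command (the image name is a placeholder). With the colon form, the shell likely passes "all" as a separate argument, which docker run then treats as the image name, hence the attempt to pull all:latest:
# wrong: -e NVIDIA_VISIBLE_DEVICES: "all" \
# right:
docker run -d -e NVIDIA_VISIBLE_DEVICES=all some-plex-image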
I have an image to which I should add a dependency. Therefore I have tried to change the image while it is running in the container and create a new image from it.
I have followed this article, using the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the shell of the container, I installed the dependency using "pip install NAME_OF_Dependency".
Then I exited the shell of the container and, as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit in Google Container Engine?
As of K8s version 1.8, there is no way to hot-fix changes directly into images, for example by committing a new image from a running container. If you change or add something by using exec, it will only stay while the container is running. It's not best practice in the K8s ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your necessity and requirements. After that, you can push that image to a registry (public/private) and deploy it with a K8s manifest file.
Solution to your issue
Create a Dockerfile for your images.
Build the image by using Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the K8s cluster.
Now, if you want to change or modify something, you just need to change the Dockerfile and follow the remaining steps, as in the sketch below.
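A compact sketch of that loop for the case in the question, assuming the dependency can be baked on top of the existing image (image, project and deployment/container names are taken from, or modelled on, the question and may need adjusting):
# Dockerfile: add the dependency at build time instead of exec'ing into a running pod
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_Dependency
Then build, push and roll out the new tag:
docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2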
As you know, containers are short-lived creatures which do not persist changed behaviour (modified configuration, changes to the file system). Therefore, it's better to put new behaviour or modifications in the Dockerfile.
Kubernetes Mantra
Kubernetes is a cloud-native product, which means it does not matter whether you are using Google Cloud, AWS, or Azure. It needs to have consistent behaviour on each cloud provider.