I am trying to push my container up to Kubernetes on GCP in my cluster. My pod runs locally, but it won't run on GCP; it comes back with this error: Error response from daemon: No command specified: CreateContainerError
It works if I run it locally in Docker, but once I push it up to the container registry on GCP and apply the deployment YAML using kubectl apply -f in my namespace, it never comes up and just keeps saying:
gce-exporting-fsab83222-85sc4 0/1 CreateContainerError 0 5m6s
I can't get any logs out of it either:
Error from server (BadRequest): container "gce" in pod "gce-exporting-fsab83222-85sc4" is waiting to start: CreateContainerError
Here are my files:
Dockerfile:
FROM alpine:3.8
WORKDIR /build
COPY test.py /build
RUN chmod 755 /build/test.py
CMD ["python --version"]
CMD ["python", "test.py"]
Python Script:
#!/usr/bin/python3
import time

def your_function():
    print("Hello, World")

while True:
    your_function()
    time.sleep(10)  # make the loop sleep for 10 seconds
yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gce-exporting
  namespace: "monitoring"
spec:
  selector:
    matchLabels:
      app: gce
  template:
    metadata:
      labels:
        app: gce
    spec:
      containers:
      - name: gce
        image: us.gcr.io/lab-manager/lab/gce-exporting:latest
I have tried using CMD and ENTRYPOINT at the end to make sure the pod runs, but no luck.
This is the output of kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 60s default-scheduler Successfully assigned monitoring/gce-exporting-fsab83222-85sc4 to gke-dev-lab-standard-8efad9b6-8m66
Normal Pulled 5s (x7 over 59s) kubelet, gke-dev-lab-standard-8efad9b6-8m66 Container image "us.gcr.io/lab-manager/lab/gce-exporting:latest" already present on machine
Warning Failed 5s (x7 over 59s) kubelet, gke-dev-lab-standard-8efad9b6-8m66 Error: Error response from daemon: No command specified
It turned out to be a malformed character in my Dockerfile that caused it to crash.
You might need to update your Dockerfile to the following:
FROM python
WORKDIR /build
ENV PYTHONUNBUFFERED=1
ENV PYTHONIOENCODING=UTF-8
COPY test.py /build
RUN chmod 755 /build/test.py
CMD ["python", "test.py"]
Then build and push the Docker image and recreate the pod. Hope it helps!
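A minimal sketch of that rebuild/redeploy cycle, reusing the image path and namespace from the question (kubectl rollout restart needs kubectl 1.15+; deleting the pod also works):
docker build -t us.gcr.io/lab-manager/lab/gce-exporting:latest .
docker push us.gcr.io/lab-manager/lab/gce-exporting:latest
kubectl -n monitoring rollout restart deployment/gce-exporting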
I found a library that can replace docker-compose for Podman, but it is still under development. My question is: how can I run multiple containers together? Currently I use a bash script to run them all, but that only works the first time; it doesn't update the containers.
I'd prefer a way to do this in Podman itself rather than using some other tool.
library (under development) --> https://github.com/muayyad-alsadi/podman-compose
I think the Kubernetes Pod concept is what you're looking for, or at least it allows you to run multiple containers together by following a well-established standard.
My first approach was, like yours, to do everything on the command line to see it working, something like:
# Create a pod, publishing port 8080/TCP from internal 80/TCP
$ podman pod create \
--name my-pod \
--publish 8080:80/TCP \
--publish 8113:113/TCP
# Create a first container inside the pod
$ podman run --detach \
--pod my-pod \
--name cont1-name \
--env MY_VAR="my val" \
nginxdemos/hello
# Create a second container inside the pod
$ podman run --detach \
--pod my-pod \
--name cont2-name \
--env MY_VAR="my val" \
greboid/nullidentd
# Check by
$ podman container ls; podman pod ls
Now that you have a pod, you can export it as a Pod manifest by using podman generate kube my-pod > my-pod.yaml.
As soon as you try your own examples, you will see how not everything is exported as you would expect (like networks or volumes), but at least it serves you as a base where you can continue to work.
Assuming the same example, the exported Pod manifest my-pod.yaml looks like this:
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-pod
  name: my-pod
spec:
  containers:
  # Create the first container: dummy identd server on 113/TCP
  - name: cont2-name
    image: docker.io/greboid/nullidentd:latest
    command: [ "/usr/sbin/inetd", "-i" ]
    env:
    - name: MY_VAR
      value: my val
    # Ensure not to overlap other 'containerPort' values within this pod
    ports:
    - containerPort: 113
      hostPort: 8113
      protocol: TCP
    workingDir: /
  # Create the second container
  - name: cont1-name
    image: docker.io/nginxdemos/hello:latest
    command: [ "nginx", "-g", "daemon off;" ]
    env:
    - name: MY_VAR
      value: my val
    # Ensure not to overlap other 'containerPort' values within this pod
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    workingDir: /
  restartPolicy: Never
status: {}
When this file is used like this:
# Use a Kubernetes-compatible Pod manifest to create and run a pod
$ podman play kube my-pod.yaml
# Check
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a53a5c0f076 docker.io/nginxdemos/hello:latest nginx -g daemon o... 8 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp my-pod-cont1-name
351065b66b55 docker.io/greboid/nullidentd:latest /usr/sbin/inetd -... 10 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp my-pod-cont2-name
e61c68752e35 k8s.gcr.io/pause:3.2 14 seconds ago Up 7 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp b586ca581129-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
b586ca581129 my-pod Running 14 seconds ago e61c68752e35 3
You will be able to access the 'Hello World' served by nginx at 8080, and the dummy identd server at 8113.
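A quick way to verify both published ports from the host (assuming curl and nc are available on it):
# Check the nginx 'Hello World' on 8080 and the dummy identd on 8113
$ curl -s http://localhost:8080
$ nc -v localhost 8113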
This is a bit tricky. I have a K8s cluster up and running, I am able to run a Docker image inside that cluster, and I can see the output of the command "kubectl get pods -o wide". Now I have GitLab set up with this K8s cluster.
I have set up the variables $KUBE_URL, $KUBE_USER and $KUBE_PASSWORD in GitLab for this cluster.
The GitLab runner console displays all this information as shown in the log below; at the end it fails with:
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
Here is full console log:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on WotC-Docker-ip-10-102-0-70 d457d50a
Using Docker executor with image docker:latest …
Pulling docker image docker:latest …
Using docker image sha256:062267097b77e3ecf374b437e93fefe2bbb2897da989f930e4750752ddfc822a for docker:latest …
Running on runner-d457d50a-project-185-concurrent-0 via ip-10-102-0-70…
Fetching changes…
Removing cluster1-config
HEAD is now at 25846c4 Initial commit
From https://git.com/core-systems/gatling
25846c4..bcaa89b master -> origin/master
Checking out bcaa89bf as master…
Skipping Git submodules setup
$ uname -a
Linux runner-d457d50a-project-185-concurrent-0 4.14.67-66.56.amzn1.x86_64 #1 SMP Tue Sep 4 22:03:21 UTC 2018 x86_64 Linux
$ apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/4) Installing nghttp2-libs (1.32.0-r0)
(2/4) Installing libssh2 (1.8.0-r3)
(3/4) Installing libcurl (7.61.1-r1)
(4/4) Installing curl (7.61.1-r1)
Executing busybox-1.28.4-r1.trigger
OK: 6 MiB in 18 packages
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
95 37.3M 95 35.8M 0 0 37.8M 0 --:--:-- --:--:-- --:--:-- 37.7M
100 37.3M 100 37.3M 0 0 38.3M 0 --:--:-- --:--:-- --:--:-- 38.3M
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin/kubectl
$ kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true
Cluster "nosebit" set.
$ kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
User "admin" set.
$ kubectl config set-context default --cluster=nosebit --user=admin
Context "default" created.
$ kubectl config use-context default
Switched to context "default".
$ cat $HOME/.kube/config
apiVersion: v1
clusters:
cluster:
insecure-skip-tls-verify: true
server: https://18.216.8.240:443
name: nosebit
contexts:
context:
cluster: nosebit
user: admin
name: default
current-context: default
kind: Config
preferences: {}
users:
name: admin
user:
password: |-
MIIDOzCCAiOgAwIBAgIJALOrUrxmhgpHMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV
BAMMDTEzLjU4LjE3OC4yNDEwHhcNMTgxMTI1MjIwNzE1WhcNMjgxMTIyMjIwNzE1
WjAYMRYwFAYDVQQDDA0xMy41OC4xNzguMjQxMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA4jmyesjEiy6T2meCdnzzLfSE1VtbY//0MprL9Iwsksa4xssf
PXrwq97I/aNNE2hWZhZkpPd0We/hNKh2rxwNjgozQTNcXqjC01ZVjfvpvwHzYDqj
4cz6y469rbuKqmXHKsy/1docA0IdyRKS1JKWz9Iy9Wi2knjZor6/kgvzGKdH96sl
ltwG7hNnIOrfNQ6Bzg1H6LEmFP+HyZoylWRsscAIxD8I/cmSz7YGM1L1HWqvUkRw
GE23TXSG4uNYDkFaqX46r4nwLlQp8p7heHeCV/mGPLd0QCUaCewqSR+gFkQz4nYX
l6BA3M0Bo4GHMIGEMB0GA1UdDgQW
BBQqsD7FUt9vBW2LcX4xbqhcO1khuTBIBgNVHSMEQTA/gBQqsD7FUt9vBW2LcX4x
bqhcO1khuaEcpBowGDEWMBQGA1UEAwwNMTMuNTguMTc4LjI0MYIJALOrUrxmhgpH
MAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAY
6mxGeQ90mXYdbLtoVxOUSvqk9+Ded1IzuoQMr0joxkDz/95HCddyTgW0gMaYsv2J
IZVH7JQ6NkveTyd42QI29fFEkGfPaPuLZKn5Chr9QgXJ73aYrdFgluSgkqukg4rj
rrb+V++hE9uOBtDzcssd2g+j9oNA5j3VRKa97vi3o0eq6vs++ok0l1VD4wyx7m+l
seFx50RGXoDjIGh73Gh9Rs7/Pvc1Pj8uAGvj8B7ZpAMPEWYmkkc4F5Y/14YbtfGc
2VlUJcs5p7CbzsqI5Tqm+S9LzZXtD1dVnsbbbGqWo32CIm36Cxz/O/FCf8tbITpr
u2O7VjBs5Xfm3tiW811k
username: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdzZqdDYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFiMjc2YzIxLWYxMDAtMTFlOC04YjM3LTAyZDhiMzdkOTVhMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNQifQ.RCQQWjDCSkH8YckBeck-EIdvOnTKBmUACXVixPfUp9gAmUnit5qIPvvFnav-C-orfYt552NQ5GTLOA3yR5-jmxoYJwCJBfvPRb1GqqgiiJE2pBsu5Arm30MOi2wbt5uCNfKMAqcWiyJQF98M2PFc__jH6C1QWPXgJokyk7i8O6s3TD69KrrXNj_W4reDXourLl7HwHWoWwNKF0dgldanug-_zjvE06b6VZBI-YWpm9bpe_ArIOrMEjl0JRGerWahcQFVJsmhc4vgw-9-jUsfKPUYEfDItJdQKyV9dgdwShgzMINuuHlU7w7WBxmJT6cqMIvHRnDHuno3qMKTJTuh-g
$ kubectl config view --minify > cluster1-config
$ export KUBECONFIG=$HOME/.kube/config
$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
default nosebit admin
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
==================================================================================================
Here is my .gitlab-ci.yml content. Could you suggest why kubectl get pods is not displaying pods of the remote cluster even though the KUBECONFIG setup completes successfully?
image: docker:latest
variables:
  CONTAINER_DEV_IMAGE: https://hub.docker.com/r/tarunkumard/gatling/:$CI_COMMIT_SHA
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - docker
  script:
    - 'uname -a'
    - 'apk add --no-cache curl'
    - 'curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl'
    - 'chmod +x ./kubectl'
    - 'mv ./kubectl /usr/local/bin/kubectl'
    - 'kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true'
    - 'kubectl config set-credentials admin --username=" " --password="$KUBE_PASSWORD"'
    - 'kubectl config set-context default --cluster=nosebit --user=admin'
    - 'kubectl config use-context default'
    - 'cat $HOME/.kube/config'
    - 'kubectl config view --minify > cluster1-config'
    - 'export KUBECONFIG=$HOME/.kube/config'
    - 'kubectl --kubeconfig=cluster1-config config get-contexts'
    - 'kubeconfig=cluster1-config kubectl get pods -o wide'
Why is the GitLab runner failing to get pods from the Kubernetes cluster? (Note: this cluster is up and running, and I am able to see pods locally using the kubectl get pods command.)
Basically,
kubectl config view --minify > cluster1-config
Won't do it, because the output will be something like this with no actual credentials/certs:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<kube-apiserver>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
You need:
kubectl config view --raw > cluster1-config
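Then point kubectl at the regenerated file the same way the job already does for config get-contexts:
kubectl --kubeconfig=cluster1-config get pods -o wide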
If that's not the issue, it could be that your credentials don't have the right RBAC permissions. I would try to find the ClusterRoleBinding or RoleBinding that is bound to that admin user. Something like:
$ kubectl get clusterrolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
$ kubectl get rolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
Once you find the role, you can see if it has the right permissions to view pods. For example:
$ kubectl get clusterrole cluster-admin -o=yaml
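As a quick sanity check (a sketch, assuming the user is literally named admin as in the kubeconfig), you can also ask the API server whether that user may list pods, and, if not, bind a read-only cluster role to it; the binding name admin-view here is arbitrary:
$ kubectl auth can-i list pods --as=admin
$ kubectl create clusterrolebinding admin-view --clusterrole=view --user=admin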
I'm trying (for test purposes) to expose to Kubernetes a very simple HTTP pong image:
FROM golang:onbuild
EXPOSE 8000
I built the docker image:
docker build -t pong .
I started a private registry (with certificates):
docker run -d --restart=always --name registry -v `pwd`/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2.6.2
I created a secret:
kubectl create secret docker-registry regsecret --docker-server=localhost --docker-username=johndoe --docker-password=johndoe --docker-email=johndoe@yopmail.com
I uploaded the image:
docker tag 9c0bb659fea1 localhost/pong
docker push localhost/pong
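As a sanity check, the push can be verified against the registry's catalog endpoint (-k skips TLS verification for the self-signed certs; the credentials are the ones from the secret above and are ignored if the registry has no auth configured):
curl -k -u johndoe:johndoe https://localhost/v2/_catalog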
I have an insecure-registry configuration:
{
  "storage-driver": "aufs",
  "insecure-registries": [
    "localhost"
  ],
  "debug": true,
  "experimental": true
}
So I tried to create my Kubernetes pod with:
apiVersion: v1
kind: Pod
metadata:
  name: pong
spec:
  containers:
  - name: pong
    image: localhost/pong:latest
    imagePullPolicy: Always
  imagePullSecrets:
  - name: regsecret
I'm on MacOS with docker Version 17.12.0-ce-mac49 (21995).
If I use image: localhost/pong:latest I got:
waiting:
message: 'rpc error: code = Unknown desc = Error response from daemon: error
parsing HTTP 404 response body: invalid character ''d'' looking for beginning
of value: "default backend - 404"'
reason: ErrImagePull
I've been stuck on this since the beginning of the week, without success.
It was not a problem with the registry configuration. I forgot to mention that I was using minikube.
For the flags to be taken into account, I had to delete the minikube configuration and recreate it:
minikube delete
minikube start --insecure-registry="10.0.4.0/24"
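To confirm the flag was taken into account after recreating, one way (a sketch, assuming the Docker runtime inside the minikube VM) is to check the daemon's settings from inside minikube:
minikube ssh -- docker info | grep -A2 'Insecure Registries'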
Hey, try browsing your registry using this nice front-end app: https://hub.docker.com/r/konradkleine/docker-registry-frontend/
Perhaps this will give you some hints; it looks like the registry has a configuration issue...
Instead of deleting the cluster first (minikube delete), the configuration JSON may be edited at ~/.minikube/config/config.json to add this section:
{
  ...
  "HostOptions": {
    ...
    "InsecureRegistry": [
      "private.docker.registry:5000"
    ],
    ...
  },
  ...
}
This only works on started clusters, as the configuration file won't be populated otherwise. The answer above using minikube start --insecure-registry=... is fine as well.
I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: my/nginx:latest
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  imagePullSecrets:
  - name: myregistrykey
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann only posted the above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to the local registry:
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your yaml file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
  - name: <name>
    image: localhost:5000/<local-image-name>
    imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster, and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
Optional: allow connections to the insecure registry on the worker nodes:
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
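The Docker daemon then has to be restarted to pick up the daemon.json change; on a systemd-based node that is roughly:
sudo systemctl restart docker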
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
  - name: <name>
    image: <local-image-name>
    imagePullPolicy: Never
The easiest way to further analyze ErrImagePull problems is to ssh into the node and try to pull the image manually by doing docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case your YAML file should have imagePullPolicy: Never; see below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: my/nginx:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  imagePullSecrets:
  - name: myregistrykey
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
Found this here:
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it; minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps you should see a bunch of minikube containers already running.
KVM utils are probably already on your machine, but they can be installed like this on CentOS/RHEL:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually a "docker-desktop" (i.e. not a remote cluster).
(Right click on Docker icon, then select "Kubernetes" in menu)
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, do a suitable docker tag, and create the manifest.
Kubernetes doesn't always pull directly from the registry: depending on the imagePullPolicy, it first looks for the image in local storage and only then goes to the Docker registry.
Pull latest nginx image
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to the Docker registry to pull the image; it will bring up the pod instantly.
If the image is not present on the local machine, it will try to pull it from the Docker registry and fail with an ErrImagePull error.
Also, if you set imagePullPolicy: Never, it will never look to the registry for the image and will fail with ErrImageNeverPull if the image is not found locally.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
      - image: test:test8970
        name: test
        imagePullPolicy: Never
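To see which of the two failure modes you hit, checking the pod status and its events is usually enough, for example:
kubectl get pods -l run=test
kubectl describe pod -l run=test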
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
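For reference, with the io.fabric8 docker-maven-plugin configured like this, the image with both tags is normally produced (and optionally pushed) with the plugin's standard goals, roughly:
mvn clean package docker:build
mvn docker:push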
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl image ls. However, it didn't show up in crictl images.
Thanks to a comment on Github, I found out that the actual problem is a different namespace of ContainerD.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
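For completeness, the matching export step looks roughly like this (the image reference is a placeholder for your application image), followed by the import command above:
ctr -n default images export exported-app-image.tar docker.io/example/my-app:1.0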
I was facing a similar issue: the image was present locally, but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
Then I rebuilt the image, redeployed the deployment YAML, and it worked.
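Roughly, that sequence as a sketch (image and manifest names are placeholders):
eval $(minikube -p minikube docker-env)   # point the docker CLI at minikube's daemon
docker rmi my-image:latest                # delete the stale image
docker build -t my-image:latest .         # rebuild against minikube's daemon
kubectl apply -f deployment.yaml          # redeploy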