What registry do I have to pass when installing kamel on Kubernetes? - docker

I have installed Kubernetes on an AWS EC2 instance. I'm not using minikube or OpenShift. I'm trying to install kamel on top of Kubernetes to run my integration code. When I try to run the kamel install command it throws the error below:
Error: cannot find automatically a registry where to push images
When I tried running it as the root user, the following error was thrown:
Error: cannot get current namespace: open /root/.kube/config: no such file or directory
I'd like to know what registry I have to pass when running the kamel install command. I have a Docker Hub account with a demo repository. Should I pass something like:
kamel install --registry hubusername/reponame
What confuses me is that after I passed a value, I got the success message below:
Camel K installed in namespace default
When I try to run a sample Groovy script, it hangs after the following message:
kamel run hello.groovy --dev
Integration "hello" created
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default camel-k-operator-587b579567-m26rs 0/1 Pending 0 30m <none> <none> <none> <none>
Name: camel-k-operator-587b579567-m26rs
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: camel.apache.org/component=operator
name=camel-k-operator
pod-template-hash=587b579567
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/camel-k-operator-587b579567
Containers:
camel-k-operator:
Image: docker.io/apache/camel-k:0.3.3
Port: <none>
Host Port: <none>
Command:
camel-k
Environment:
WATCH_NAMESPACE: default (v1:metadata.namespace)
OPERATOR_NAME: camel-k
POD_NAME: camel-k-operator-587b579567-m26rs (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from camel-k-operator-token-prjhp (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
camel-k-operator-token-prjhp:
Type: Secret (a volume populated by a Secret)
SecretName: camel-k-operator-token-prjhp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 38s (x23 over 31m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Can you please help me out here? Thank you for your time.

If you installed a single-node Kubernetes cluster, chances are your only node is a master node, which is why Kubernetes won't schedule your job.
Check this by running:
kubectl get node
If your only node shows 'master' in its ROLES column, then you need to untaint it to allow scheduling:
kubectl taint nodes --all node-role.kubernetes.io/master-
Try to rerun your kamel job after that.
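For the registry flag itself, Camel K expects a registry host plus an organization rather than a username/reponame pair. As a rough sketch for Docker Hub (flag names are taken from the Camel K docs and vary between versions, so double-check them with kamel install --help; newer versions also accept --registry-auth-username and --registry-auth-password):
kamel install --registry docker.io --organization hubusername
Once the node is untainted you can confirm the taint is gone and watch the operator pod leave Pending:
kubectl describe nodes | grep Taints
kubectl get pods -w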

Related

Kubernetes can't pull a private image from Docker

I hope somebody can help me.
I'm trying to pull a private Docker image with no success. I have already tried several solutions I found, without luck.
Docker, Gitlab, Gitlab-Runner, Kubernetes all run on the same server
Insecure Registry
$ sudo cat /etc/docker/daemon.json
{ "insecure-registries":["10.0.10.20:5555"]}
Config.json
$ cat .docker/config.json
{
  "auths": {
    "10.0.10.20:5555": {
      "auth": "NDUwNjkwNDcwODoxMjM0NTZzIQ=="
    },
    "https://index.docker.io/v1/": {
      "auth": "NDUwNjkwNDcwODpGcGZHMXQyMDIyQCE="
    }
  }
}
Secret
$ kubectl create secret generic regcred \
--from-file=.dockerconfigjson=~/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
I'm trying to create a Kubernetes pod from a private docker image. However, I get the following error:
Name: private-reg
Namespace: default
Priority: 0
Node: 10.0.10.20
Start Time: Thu, 12 May 2022 12:44:22 -0400
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.244.0.61
IPs:
IP: 10.244.0.61
Containers:
private-reg-container:
Container ID:
Image: 10.0.10.20:5555/development/app-image-base:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stjn4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-stjn4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 2m7s (x465 over 107m) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Normal Pulling 17s (x3 over 53s) kubelet Pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 17s (x3 over 53s) kubelet Failed to pull image "10.0.10.20:5555/development/expedicao-api-image-base:latest": rpc error: code = Unknown desc = failed to pull and unpack image "10.0.10.20:5555/development/app-image-base:latest": failed to resolve reference "10.0.10.20:5555/development/app-image-base:latest": failed to do request: Head "https://10.0.10.20:5555/v2/development/app-image-base/manifests/latest": http: server gave HTTP response to HTTPS client
Warning Failed 17s (x3 over 53s) kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 29s) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 3s (x2 over 29s) kubelet Error: ImagePullBackOff
When I pull the image directly with Docker, no problem occurs, even with the secret:
Pull image
$ docker login 10.0.10.20:5555
Username: 4506904708
Password:
WARNING! Your password will be stored unencrypted in ~/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull 10.0.10.20:5555/development/app-image-base:latest
latest: Pulling from development/app-image-base
Digest: sha256:1385a8aa2bc7bac1a8d3e92ead66fdf5db3d6625b736d908d1fec61ba59b6bdc
Status: Image is up to date for 10.0.10.20:5555/development/app-image-base:latest
10.0.10.20:5555/development/app-image-base:latest
Can someone help me?
First, you need to create the file /etc/containerd/config.toml with a mirror entry for your insecure registry:
# Config file is parsed as version 1 by default.
# To use the long form of plugin names set "version = 2"
[plugins.cri.registry.mirrors]
  [plugins.cri.registry.mirrors."10.0.10.20:5555"]
    endpoint = ["http://10.0.10.20:5555"]
Second, restart containerd:
$ systemctl restart containerd
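Since the registry also requires authentication, the pod spec still has to reference the regcred secret created earlier. A minimal sketch (the pod and image names simply mirror the ones from the question):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: 10.0.10.20:5555/development/app-image-base:latest
  imagePullSecrets:
  - name: regcred
You can also sanity-check the mirror config from the node with crictl pull 10.0.10.20:5555/development/app-image-base:latest (assuming crictl is installed there).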

Why do I get exec failed: container_linux.go:380 when I go inside a Kubernetes pod?

I started learning about Kubernetes and I installed minikube and kubectl on Windows 7.
After that I created a pod with command:
kubectl run firstpod --image=nginx
And everything is fine:
[screenshot: https://i.stack.imgur.com/xAcMP.jpg]
Now I want to go inside the pod with the command kubectl exec -it firstpod -- /bin/bash, but it's not working and I get this error:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container
process caused: exec: "C:/Program Files/Git/usr/bin/bash.exe": stat C:/Program
Files/Git/usr/bin/bash.exe: no such file or directory: unknown
command terminated with exit code 126
How can I resolve this problem?
Another question is about this firstpod pod. With the command kubectl describe pod firstpod I can see information about the pod:
Name: firstpod
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Mon, 08 Nov 2021 16:39:07 +0200
Labels: run=firstpod
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Containers:
firstpod:
Container ID: docker://59f89dad2ddd6b93ac4aceb2cc0c9082f4ca42620962e4e692e3d6bcb47d4a9e
Image: nginx
Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 08 Nov 2021 16:39:14 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9b8mx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-9b8mx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/firstpod to minikube
Normal Pulling 32m kubelet Pulling image "nginx"
Normal Pulled 32m kubelet Successfully pulled image "nginx" in 3.677130128s
Normal Created 31m kubelet Created container firstpod
Normal Started 31m kubelet Started container firstpod
So I can see a Docker container ID and that the container has started, and also the image, but if I run docker images or docker ps there is nothing. Where are these images and containers? Thank you!
One error for certain is Git Bash rewriting the path into a Windows path. You can disable that with a double slash:
kubectl exec -it firstpod -- //bin/bash
This command will only work if you have bash in the image. If you don't, you'll need to pick a different command to run, e.g. /bin/sh. Some images are distroless or based on scratch to explicitly not include things like shells, which will prevent you from running commands like this (intentionally, for security).
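Another option, assuming you are running kubectl from Git Bash, is to disable MSYS path conversion only for that command, or to fall back to a shell that most images ship:
MSYS_NO_PATHCONV=1 kubectl exec -it firstpod -- /bin/bash
kubectl exec -it firstpod -- //bin/sh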

CrashLoopBackOff while deploying pod using image from private registry

I am trying to create a pod using my own docker image on localhost.
This is the Dockerfile used to create the image:
FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
CMD ["/usr/bin/bash"]
The YAML file used to create the pod is this:
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: server
    imagePullPolicy: Never
    image: localhost:5000/server
    ports:
    - containerPort: 80
root@node1:~/test/server# docker images | grep server
server latest 82c5228a553d 3 hours ago 948MB
localhost.localdomain:5000/server latest 82c5228a553d 3 hours ago 948MB
localhost:5000/server latest 82c5228a553d 3 hours ago 948MB
The image has been pushed to localhost registry.
Following is the error I receive.
root@node1:~/test/server# kubectl get pods
NAME READY STATUS RESTARTS AGE
server 0/1 CrashLoopBackOff 5 5m18s
The output of describe pod :
root@node1:~/test/server# kubectl describe pod server
Name: server
Namespace: default
Priority: 0
Node: node1/10.0.2.15
Start Time: Mon, 07 Dec 2020 15:35:49 +0530
Labels: app=server
Annotations: cni.projectcalico.org/podIP: 10.233.90.192/32
cni.projectcalico.org/podIPs: 10.233.90.192/32
Status: Running
IP: 10.233.90.192
IPs:
IP: 10.233.90.192
Containers:
server:
Container ID: docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
Image: localhost:5000/server
Image ID: docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 07 Dec 2020 15:41:33 +0530
Finished: Mon, 07 Dec 2020 15:41:33 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tb7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/server to node1
Normal Pulled 4m34s (x5 over 5m59s) kubelet Container image "localhost:5000/server" already present on machine
Normal Created 4m34s (x5 over 5m59s) kubelet Created container server
Normal Started 4m34s (x5 over 5m59s) kubelet Started container server
Warning BackOff 56s (x25 over 5m58s) kubelet Back-off restarting failed container
I get no logs:
root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server#
I am unable to figure out whether the issue is with the container or with the YAML file for creating the pod. Any help would be appreciated.
Posting this as Community Wiki.
As pointed out by David Maze in the comment section:
If docker run exits immediately, a Kubernetes Pod will always go into CrashLoopBackOff state. Your Dockerfile needs to COPY in or otherwise install an application and set its CMD to run it.
The root cause can also be determined from the exit code. In the 3) Check the exit code article, you can find a few exit codes like 0, 1, 128 and 137 with descriptions.
3.1) Exit Code 0
This exit code implies that the specified container command completed 'successfully', but it exits too often for Kubernetes to accept it as working.
In short, your container was created, all the actions mentioned were executed, and as there was nothing else to do, it exited with Exit Code 0.
A CrashLoopBackOff error occurs when a pod startup fails repeatedly in Kubernetes.
Your image, based on centos with a few additional installations, did not leave any process running in the background, so it was categorized as Completed. As this happened so quickly, Kubernetes restarted it and it fell into a loop.
$ kubectl run centos --image=centos
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
centos 0/1 CrashLoopBackOff 1 5s
centos 0/1 Completed 2 17s
centos 0/1 CrashLoopBackOff 2 31s
centos 0/1 Completed 3 46s
centos 0/1 CrashLoopBackOff 3 58s
centos 1/1 Running 4 88s
centos 0/1 Completed 4 89s
centos 0/1 CrashLoopBackOff 4 102s
$ kubectl describe po centos | grep 'Exit Code'
Exit Code: 0
But when you used sleep 3600 in your container, the sleep command kept running for an hour; after that time it would also exit with Exit Code 0.
Hope this clarifies things.
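As a rough sketch of that workaround (keeping the container alive with sleep until there is a real application to run as the CMD), the pod spec from the question could be extended like this:
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: server
    imagePullPolicy: Never
    image: localhost:5000/server
    # keep a foreground process running so the container does not exit immediately with code 0
    command: ["sleep", "3600"]
    ports:
    - containerPort: 80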

Unable to create a RabbitMQ instance using RabbitMQ cluster Kubernetes operator

I'm trying to create a RabbitMQ instance using RabbitMQ cluster Kubernetes operator, but there is an issue with PersistentVolumeClaims. I'm running Kubernetes 1.18.8 using Docker Desktop for Windows.
I have installed the operator like this:
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
I have created this very simple configuration for the instance according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: nccrabbitmqcluster
It seems to create all of the objects it is supposed to create, but the pod gets stuck in Pending state:
$ kubectl get all | grep rabbit
pod/nccrabbitmqcluster-server-0 0/1 Pending 0 14m
service/nccrabbitmqcluster ClusterIP 10.100.186.115 <none> 5672/TCP,15672/TCP 14m
service/nccrabbitmqcluster-nodes ClusterIP None <none> 4369/TCP,25672/TCP 14m
statefulset.apps/nccrabbitmqcluster-server 0/1 14m
There seems to be an unbound PVC according to the pod's events:
$ kubectl describe pod/nccrabbitmqcluster-server-0 | tail -n 5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "nccrabbitmqcluster-server-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "nccrabbitmqcluster-server-0": pod has unbound immediate PersistentVolumeClaims
According to the events of the PVC, it is waiting for a volume to be created:
$ kubectl describe pvc persistence-nccrabbitmqcluster-server-0
Name: persistence-nccrabbitmqcluster-server-0
Namespace: default
StorageClass: hostpath
Status: Pending
Volume:
Labels: app.kubernetes.io/component=rabbitmq
app.kubernetes.io/name=nccrabbitmqcluster
app.kubernetes.io/part-of=rabbitmq
Annotations: volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: nccrabbitmqcluster-server-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 27s (x23 over 19m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
My understanding is that docker.io/hostpath is the correct provisioner:
$ kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
hostpath (default) docker.io/hostpath Delete Immediate false 20d
I can't see any PVs related to the PVCs:
$ kubectl get pv | grep rabbit
Why isn't the volume created automatically and what should I do?
Yes, your local hostpath cannot work as a dynamic volume provisioner. This operator needs a StorageClass that can dynamically create PVs.
In your case, the operator keeps waiting for a PV to be created. Instead, you can manually create a PV and PVC if you are working on a local machine (see the sketch below).
Check this example - https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/multiple-disks/rabbitmq.yaml
If you are going to try a cloud provider like AWS, it's pretty easy: deploy the EBS CSI driver in your cluster, which will create a StorageClass for you, and that StorageClass will provision dynamic volumes.
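For the local-machine route, a minimal sketch of a manually created hostPath PV that the operator's PVC could bind to might look like the following; the path and size are assumptions you would adjust to your setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv-0
spec:
  capacity:
    storage: 10Gi              # must be at least as large as the PVC request
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath   # must match the PVC's StorageClass
  hostPath:
    path: /tmp/rabbitmq-data   # assumed local path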

Kind Kubernetes cluster fails to pull Docker images

I tried to use KinD as an alternative to Minikube to bootstrap a K8S cluster on my local machine.
The cluster is created successfully.
But when I tried to create some pods/deployments from images, it failed.
$ kubectl run nginx --image=nginx
$ kubectl run hello --image=hello-world
After a few minutes, kubectl get pods shows a failed status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello 0/1 ImagePullBackOff 0 11m
nginx 0/1 ImagePullBackOff 0 22m
I am afraid this is another Global Firewall problem in China.
kubectl describe pods/nginx
Name: nginx
Namespace: default
Priority: 0
Node: dev-control-plane/172.19.0.2
Start Time: Sun, 30 Aug 2020 19:46:06 +0800
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mgq96 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mgq96:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mgq96
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned default/nginx to dev-control-plane
Normal BackOff 40m kubelet, dev-control-plane Back-off pulling image "nginx"
Warning Failed 40m kubelet, dev-control-plane Error: ImagePullBackOff
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: unexpected EOF
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Error: ErrImagePull
Normal Pulling 13m (x4 over 56m) kubelet, dev-control-plane Pulling image "nginx"
When I entered the kindest/node container, there was no docker in it. I'm not sure how KinD works; my original understanding was that it deploys a K8S cluster into a Docker container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a644f8b61314 kindest/node:v1.19.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 127.0.0.1:52301->6443/tcp dev-control-plane
$ docker exec -it a644f8b61314 /bin/bash
root@dev-control-plane:/# docker -v
bash: docker: command not found
After reading the Kind docs, I could not find an option to set a repository mirror like the one in Minikube.
BTW, I am using the latest Docker Desktop beta on Windows 10.
First pull the image on your local system using docker pull nginx, and then use the command below to load that image into the kind cluster:
kind load docker-image nginx --name kind-cluster-name
Kind uses containerd instead of Docker as its runtime, which is why docker is not installed on the nodes.
Alternatively, you can use the crictl tool to pull and check images inside the kind node:
crictl pull nginx
crictl images
I ran into the same issue because I had exported http_proxy and https_proxy to a local proxy (127.0.0.1) before creating the cluster, and that proxy is unreachable from inside the cluster. After unsetting http(s)_proxy and recreating the cluster, everything runs fine.
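Regarding the repository mirror question: kind does let you configure one through containerdConfigPatches in the cluster config. A rough sketch, where the mirror endpoint is just a placeholder for whatever mirror is reachable for you:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://<your-mirror-host>"]
Create the cluster with kind create cluster --config <file>, and pulls of docker.io images will go through the mirror.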
