I'm trying to set up ingress using Ambassador for my local kind cluster, working off this guide: https://kind.sigs.k8s.io/docs/user/ingress, but I receive rpc i/o timeout errors. Things I've attempted so far: using Ingress NGINX instead (same error), amending the Docker Hub path so the URL contains just the username and image, and deleting and recreating the pods. I've also looked at other previous questions, and their solutions do not seem to work.
NAMESPACE NAME READY STATUS RESTARTS AGE
ambassador ambassador-operator-67668967b8-w28b2 0/1 ImagePullBackOff 0 28m
default bar-app 0/1 ErrImagePull 0 9m19s
default foo-app 0/1 ImagePullBackOff 0 9m19s
kube-system coredns-74ff55c5b-m7s8r 1/1 Running 0 38m
kube-system coredns-74ff55c5b-tgcdg 1/1 Running 0 38m
kube-system etcd-kind8-control-plane 1/1 Running 0 38m
kube-system kindnet-dch9w 1/1 Running 0 37m
kube-system kindnet-dm5gn 1/1 Running 0 38m
kube-system kindnet-sxxdk 1/1 Running 0 37m
kube-system kube-apiserver-kind8-control-plane 1/1 Running 0 38m
kube-system kube-controller-manager-kind8-control-plane 1/1 Running 0 38m
kube-system kube-proxy-n84kf 1/1 Running 0 38m
kube-system kube-proxy-twtsf 1/1 Running 0 37m
kube-system kube-proxy-zjq6t 1/1 Running 0 37m
kube-system kube-scheduler-kind8-control-plane 1/1 Running 0 38m
local-path-storage local-path-provisioner-78776bfc44-kkrht 1/1 Running 0 38m
This is the event log, from kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp':
kube-system 28m Normal Created pod/coredns-74ff55c5b-tgcdg Created container coredns
kube-system 28m Normal Created pod/coredns-74ff55c5b-m7s8r Created container coredns
kube-system 28m Normal Started pod/coredns-74ff55c5b-m7s8r Started container coredns
local-path-storage 28m Normal Started pod/local-path-provisioner-78776bfc44-kkrht Started container local-path-provisioner
local-path-storage 28m Normal Created pod/local-path-provisioner-78776bfc44-kkrht Created container local-path-provisioner
local-path-storage 28m Normal LeaderElection endpoints/rancher.io-local-path local-path-provisioner-78776bfc44-kkrht_c5300431-393d-4ce5-bee6-9fa03b2567e8 became leader
kube-system 28m Normal Started pod/coredns-74ff55c5b-tgcdg Started container coredns
ambassador 20m Normal ScalingReplicaSet deployment/ambassador-operator Scaled up replica set ambassador-operator-67668967b8 to 1
ambassador 20m Normal SuccessfulCreate replicaset/ambassador-operator-67668967b8 Created pod: ambassador-operator-67668967b8-w28b2
ambassador 20m Normal Scheduled pod/ambassador-operator-67668967b8-w28b2 Successfully assigned ambassador/ambassador-operator-67668967b8-w28b2 to kind8-worker
ambassador 15m Normal Pulling pod/ambassador-operator-67668967b8-w28b2 Pulling image "docker.io/datawire/ambassador-operator:v1.2.9"
ambassador 3s Warning Failed pod/ambassador-operator-67668967b8-w28b2 Error: ImagePullBackOff
ambassador 5m1s Normal BackOff pod/ambassador-operator-67668967b8-w28b2 Back-off pulling image "docker.io/datawire/ambassador-operator:v1.2.9"
ambassador 14m Warning Failed pod/ambassador-operator-67668967b8-w28b2 Error: ErrImagePull
ambassador 19m Warning Failed pod/ambassador-operator-67668967b8-w28b2 Failed to pull image "docker.io/datawire/ambassador-operator:v1.2.9": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/datawire/ambassador-operator:v1.2.9": failed to resolve reference "docker.io/datawire/ambassador-operator:v1.2.9": failed to do request: Head https://registry-1.docker.io/v2/datawire/ambassador-operator/manifests/v1.2.9: dial tcp 18.214.230.110:443: i/o timeout
ambassador 17m Warning Failed pod/ambassador-operator-67668967b8-w28b2 Failed to pull image "docker.io/datawire/ambassador-operator:v1.2.9": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/datawire/ambassador-operator:v1.2.9": failed to resolve reference "docker.io/datawire/ambassador-operator:v1.2.9": failed to do request: Head https://registry-1.docker.io/v2/datawire/ambassador-operator/manifests/v1.2.9: dial tcp 3.211.199.249:443: i/o timeout
ambassador 16m Warning Failed pod/ambassador-operator-67668967b8-w28b2 Failed to pull image "docker.io/datawire/ambassador-operator:v1.2.9": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/datawire/ambassador-operator:v1.2.9": failed to resolve reference "docker.io/datawire/ambassador-operator:v1.2.9": failed to do request: Head https://registry-1.docker.io/v2/datawire/ambassador-operator/manifests/v1.2.9: dial tcp 54.236.165.68:443: i/o timeout
ambassador 14m Warning Failed pod/ambassador-operator-67668967b8-w28b2 Failed to pull image "docker.io/datawire/ambassador-operator:v1.2.9": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/datawire/ambassador-operator:v1.2.9": failed to resolve reference "docker.io/datawire/ambassador-operator:v1.2.9": failed to do request: Head https://registry-1.docker.io/v2/datawire/ambassador-operator/manifests/v1.2.9: dial tcp 54.236.131.166:443: i/o timeout
default 38s Normal Scheduled pod/foo-app Successfully assigned default/foo-app to kind8-worker
default 38s Normal Scheduled pod/bar-app Successfully assigned default/bar-app to kind8-worker
default 37s Normal Pulling pod/bar-app Pulling image "hashicorp/http-echo:0.2.3"
This is the YAML:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ambassadorinstallations.getambassador.io
spec:
additionalPrinterColumns:
- JSONPath: .spec.version
name: VERSION
type: string
- JSONPath: .spec.updateWindow
name: UPDATE-WINDOW
type: integer
- JSONPath: .status.lastCheckTime
description: Last time checked
name: LAST-CHECK
type: string
- JSONPath: .status.conditions[?(@.type=='Deployed')].status
description: Indicates if deployment has completed
name: DEPLOYED
type: string
- JSONPath: .status.conditions[?(@.type=='Deployed')].reason
description: Reason for deployment completed
name: REASON
priority: 1
type: string
- JSONPath: .status.conditions[?(@.type=='Deployed')].message
description: Message for deployment completed
name: MESSAGE
priority: 1
type: string
- JSONPath: .status.deployedRelease.appVersion
description: Deployed version of Ambassador
name: DEPLOYED-VERSION
type: string
- JSONPath: .status.deployedRelease.flavor
description: Deployed flavor of Ambassador (OSS or AES)
name: DEPLOYED-FLAVOR
type: string
group: getambassador.io
names:
kind: AmbassadorInstallation
listKind: AmbassadorInstallationList
plural: ambassadorinstallations
singular: ambassadorinstallation
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: AmbassadorInstallation is the Schema for the ambassadorinstallations
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: AmbassadorInstallationSpec defines the desired state of AmbassadorInstallation
properties:
baseImage:
description: An (optional) image to use instead of the image specified
in the Helm chart.
type: string
helmRepo:
description: An (optional) Helm repository.
type: string
installOSS:
description: 'Installs [Ambassador OSS](https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/)
instead of [AES](https://www.getambassador.io/docs/latest/topics/install/).
Default is false which means it installs AES by default. TODO: 1.
AES/AOSS is not installed and the user installs using `installOSS:
true`, then we straightaway install AOSS. 2. AOSS is installed via
operator and the user sets `installOSS: false`, then we perform the
migration as detailed here - https://www.getambassador.io/docs/latest/topics/install/upgrade-to-edge-stack/
3. AES is installed and the user sets `installOSS: true`, then we
point users to the docs which gives them pointers on how to do
that themselves.'
type: boolean
logLevel:
description: 'An (optional) log level: debug, info...'
enum:
- info
- debug
- warn
- warning
- error
- critical
- fatal
type: string
updateWindow:
description: "`updateWindow` is an optional item that will control when
the updates can take place. This is used to force system updates to
happen late at night if that’s what the sysadmins want. \n * There
can be any number of `updateWindow` entries (separated by commas).
\ * `Never` turns off automatic updates even if there are other entries
in the comma-separated list. `Never` is used by sysadmins to disable
all updates during blackout periods by doing a `kubectl apply`
or using our Edge Policy Console to set this. * Each `updateWindow`
is in crontab format (see https://crontab.guru/) Some examples of
`updateWindows` are: - `* 0-6 * * * SUN`: every Sunday, from _0am_
to _6am_ - `* 5 1 * * *`: every first day of the month, at _5am_
* The Operator cannot guarantee minute time granularity, so specifying
\ a minute in the crontab expression can lead to some updates happening
\ sooner/later than expected."
type: string
version:
description: "We are using SemVer for the version number and it can
be specified with any level of precision and can optionally end in
`*`. These are interpreted as: \n * `1.0` = exactly version 1.0 *
`1.1` = exactly version 1.1 * `1.1.*` = version 1.1 and any bug fix
versions `1.1.1`, `1.1.2`, `1.1.3`, etc. * `2.*` = version 2.0 and
any incremental and bug fix versions `2.0`, `2.0.1`, `2.0.2`, `2.1`,
`2.2`, `2.2.1`, etc. * `*` = all versions. * `3.0-ea` = version `3.0-ea1`
and any subsequent EA releases on `3.0`. Also selects the final
3.0 once the final GA version is released. * `4.*-ea` = version `4.0-ea1`
and any subsequent EA release on `4.0`. Also selects the final GA
`4.0`. Also selects any incremental and bug fix versions `4.*` and
`4.*.*`. Also selects the most recent `4.*` EA release i.e., if
`4.0.5` is the last GA version and there is a `4.1-EA3`, then this
\ selects `4.1-EA3` over the `4.0.5` GA. \n You can find the reference
docs about the SemVer syntax accepted [here](https://github.com/Masterminds/semver#basic-comparisons)."
type: string
type: object
status:
description: AmbassadorInstallationStatus defines the observed state of
AmbassadorInstallation
properties:
conditions:
description: List of conditions the installation has experienced.
items:
description: AmbInsCondition defines an Ambassador installation condition,
as well as the last time there was a transition to this condition..
properties:
lastTransitionTime:
format: date-time
type: string
message:
type: string
reason:
type: string
status:
type: string
type:
type: string
required:
- status
- type
type: object
type: array
deployedRelease:
description: the currently deployed Helm chart
nullable: true
properties:
appVersion:
type: string
flavor:
type: string
manifest:
type: string
name:
type: string
version:
type: string
type: object
lastCheckTime:
description: Last time a successful update check was performed.
format: date-time
nullable: true
type: string
required:
- conditions
type: object
type: object
version: v2
versions:
- name: v2
served: true
storage: true
I figured this out. I'd previously disabled the Docker bridge (bridge0) with this entry under
/etc/docker/daemon.json
{
"iptables": false,
"bridge": "none"
}
To fix it, I simply deleted the entry and restarted Docker:
systemctl restart docker
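In full, the recovery looked roughly like this; a minimal sketch, assuming the cluster is named kind8 (taken from the node names above) and that daemon.json contains nothing else you need to keep:
$ sudo rm /etc/docker/daemon.json      # or just remove the "iptables" and "bridge" entries
$ sudo systemctl restart docker
$ kind delete cluster --name kind8
$ kind create cluster --name kind8     # then re-apply the ingress manifests from the kind guide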
Related
I hope somebody can help me.
I'm trying to pull a private Docker image, with no success so far. I have already tried some solutions that I found, but none of them worked.
Docker, Gitlab, Gitlab-Runner, Kubernetes all run on the same server
Insecure Registry
$ sudo cat /etc/docker/daemon.json
{ "insecure-registries":["10.0.10.20:5555"]}
Config.json
$ cat .docker/config.json
{
"auths": {
"10.0.10.20:5555": {
"auth": "NDUwNjkwNDcwODoxMjM0NTZzIQ=="
},
"https://index.docker.io/v1/": {
"auth": "NDUwNjkwNDcwODpGcGZHMXQyMDIyQCE="
}
}
}
Secret
$ kubectl create secret generic regcred \
--from-file=.dockerconfigjson=~/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
I'm trying to create a Kubernetes pod from a private docker image. However, I get the following error:
Name: private-reg
Namespace: default
Priority: 0
Node: 10.0.10.20
Start Time: Thu, 12 May 2022 12:44:22 -0400
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.244.0.61
IPs:
IP: 10.244.0.61
Containers:
private-reg-container:
Container ID:
Image: 10.0.10.20:5555/development/app-image-base:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stjn4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-stjn4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 2m7s (x465 over 107m) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Normal Pulling 17s (x3 over 53s) kubelet Pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 17s (x3 over 53s) kubelet Failed to pull image "10.0.10.20:5555/development/expedicao-api-image-base:latest": rpc error: code = Unknown desc = failed to pull and unpack image "10.0.10.20:5555/development/app-image-base:latest": failed to resolve reference "10.0.10.20:5555/development/app-image-base:latest": failed to do request: Head "https://10.0.10.20:5555/v2/development/app-image-base/manifests/latest": http: server gave HTTP response to HTTPS client
Warning Failed 17s (x3 over 53s) kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 29s) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 3s (x2 over 29s) kubelet Error: ImagePullBackOff
When I pull the image directly with Docker, no problem occurs, even with the same credentials.
Pull image
$ docker login 10.0.10.20:5555
Username: 4506904708
Password:
WARNING! Your password will be stored unencrypted in ~/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull 10.0.10.20:5555/development/app-image-base:latest
latest: Pulling from development/app-image-base
Digest: sha256:1385a8aa2bc7bac1a8d3e92ead66fdf5db3d6625b736d908d1fec61ba59b6bdc
Status: Image is up to date for 10.0.10.20:5555/development/app-image-base:latest
10.0.10.20:5555/development/app-image-base:latest
Can someone help me?
First, you need to create the file /etc/containerd/config.toml. The insecure-registries entry in /etc/docker/daemon.json only configures Docker itself; Kubernetes pulls images through containerd, so containerd needs its own mirror entry pointing at the plain-HTTP registry:
# Config file is parsed as version 1 by default.
# To use the long form of plugin names set "version = 2"
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."10.0.10.20:5555"]
endpoint = ["http://10.0.10.20:5555"]
Second, restart containerd:
$ systemctl restart containerd
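After restarting containerd, a quick sanity check that the plain-HTTP mirror is picked up; a sketch, assuming crictl is installed on the node and using the registry account shown above (replace <password> with the real one):
$ sudo crictl pull --creds 4506904708:<password> 10.0.10.20:5555/development/app-image-base:latest
$ kubectl delete pod private-reg       # recreate the pod afterwards so kubelet retries the pull right away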
I created a cronjob with the following spec in GKE:
# cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: collect-data-cj-111
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: Allow
startingDeadlineSeconds: 100
suspend: false
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: collect-data-cj-111
image: collect_data:1.3
restartPolicy: OnFailure
I create the cronjob with the following command:
kubectl apply -f collect_data.yaml
When I later watch whether it is running (I scheduled it to run every 5th minute for the sake of testing), here is what I see:
$ kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 0s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ContainerCreating 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 3s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 17s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 30s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 44s
It does not seem to be able to pull the image from Artifact Registry. I have both GKE and Artifact Registry created under the same project.
What can be the reason? After spending several hours in the docs, I still could not make progress, and I am quite new to the world of GKE.
If you recommend checking anything, I would really appreciate it if you also describe where in GCP I should check or control it.
ADDENDUM:
When I run the following command:
kubectl describe pods
The output is quite large but I guess the following message should indicate the problem.
Failed to pull image "collect_data:1.3": rpc error: code = Unknown
desc = failed to pull and unpack image "docker.io/library/collect_data:1.3":
failed to resolve reference "docker.io/library/collect_data:1.3": pull
access denied, repository does not exist or may require authorization:
server message: insufficient_scope: authorization failed
How do I solve this problem step by step?
From the error shared, I can tell that the image is not being pulled from Artifact Registry: by default, GKE pulls images directly from Docker Hub unless the image name says otherwise, and there is no collect_data image there, hence the error.
The correct way to specify an image stored in Artifact Registry is as follows:
image: <location>-docker.pkg.dev/<project>/<repo-name>/<image-name:tag>
Be aware that the Artifact Registry repository format has to be set to "Docker" if you are using a Docker container image.
Take a look at the Quickstart for Docker guide, which explains how to push and pull Docker images to and from Artifact Registry, along with the permissions required.
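For example, with illustrative values (us-central1 as the location, my-project and my-repo standing in for your own project and repository), the container spec in the CronJob would become:
containers:
- name: collect-data-cj-111
  image: us-central1-docker.pkg.dev/my-project/my-repo/collect_data:1.3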
I am trying to create a pod using my own docker image on localhost.
This is the Dockerfile used to create the image:
FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
CMD ["/usr/bin/bash"]
The YAML file used to create the pod is this:
---
apiVersion: v1
kind: Pod
metadata:
name: server
labels:
app: server
spec:
containers:
- name: server
imagePullPolicy: Never
image: localhost:5000/server
ports:
- containerPort: 80
root@node1:~/test/server# docker images | grep server
server latest 82c5228a553d 3 hours ago 948MB
localhost.localdomain:5000/server latest 82c5228a553d 3 hours ago 948MB
localhost:5000/server latest 82c5228a553d 3 hours ago 948MB
The image has been pushed to the localhost registry.
Following is the error I receive.
root@node1:~/test/server# kubectl get pods
NAME READY STATUS RESTARTS AGE
server 0/1 CrashLoopBackOff 5 5m18s
The output of describe pod:
root@node1:~/test/server# kubectl describe pod server
Name: server
Namespace: default
Priority: 0
Node: node1/10.0.2.15
Start Time: Mon, 07 Dec 2020 15:35:49 +0530
Labels: app=server
Annotations: cni.projectcalico.org/podIP: 10.233.90.192/32
cni.projectcalico.org/podIPs: 10.233.90.192/32
Status: Running
IP: 10.233.90.192
IPs:
IP: 10.233.90.192
Containers:
server:
Container ID: docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
Image: localhost:5000/server
Image ID: docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 07 Dec 2020 15:41:33 +0530
Finished: Mon, 07 Dec 2020 15:41:33 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tb7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/server to node1
Normal Pulled 4m34s (x5 over 5m59s) kubelet Container image "localhost:5000/server" already present on machine
Normal Created 4m34s (x5 over 5m59s) kubelet Created container server
Normal Started 4m34s (x5 over 5m59s) kubelet Started container server
Warning BackOff 56s (x25 over 5m58s) kubelet Back-off restarting failed container
I get no logs:
root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server#
I am unable to figure out whether the issue is with the container or with the YAML file for creating the pod. Any help would be appreciated.
Posting this as Community Wiki.
As pointed out by @David Maze in the comment section:
If docker run exits immediately, a Kubernetes Pod will always go into CrashLoopBackOff state. Your Dockerfile needs to COPY in or otherwise install an application and set its CMD to run it.
The root cause can also be determined from the exit code. In the "3) Check the exit code" article, you can find a few exit codes, like 0, 1, 128 and 137, with descriptions.
3.1) Exit Code 0
This exit code implies that the specified container command completed 'successfully', but too often for Kubernetes to accept it as working.
In short, your container was created, every action mentioned was executed, and as there was nothing else to do, it exited with Exit Code 0.
A CrashLoopBackOff error occurs when a pod startup fails repeatedly in Kubernetes.
Your image, based on centos with a few additional installations, did not leave any process running in the background, so it was categorized as Completed. As this happened so fast, Kubernetes restarted it and it fell into a loop.
$ kubectl run centos --image=centos
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
centos 0/1 CrashLoopBackOff 1 5s
centos 0/1 Completed 2 17s
centos 0/1 CrashLoopBackOff 2 31s
centos 0/1 Completed 3 46s
centos 0/1 CrashLoopBackOff 3 58s
centos 1/1 Running 4 88s
centos 0/1 Completed 4 89s
centos 0/1 CrashLoopBackOff 4 102s
$ kubectl describe po centos | grep 'Exit Code'
Exit Code: 0
But if you had used sleep 3600 in your container, the sleep command would run for an hour; after that time it would also exit with Exit Code 0.
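If you just want the pod from the question to stay up while you work on it, here is a minimal sketch of the pod spec; the sleep command is only a placeholder for a real long-running process:
spec:
  containers:
  - name: server
    image: localhost:5000/server
    imagePullPolicy: Never
    command: ["sleep", "infinity"]   # placeholder; replace with your actual server entrypoint
    ports:
    - containerPort: 80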
Hope this clarifies it.
I tried to use KinD as an alternative to Minikube to bootstrap a K8S cluster on my local machine.
The cluster is created successfully.
But when I tried to create some pods/deployments from images, it failed.
$ kubectl run nginx --image=nginx
$ kubectl run hello --image=hello-world
After some minutes, kubectl get pods shows a failed status.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello 0/1 ImagePullBackOff 0 11m
nginx 0/1 ImagePullBackOff 0 22m
I am afraid this is another Global Firewall problem in China.
kubectl describe pods/nginx
Name: nginx
Namespace: default
Priority: 0
Node: dev-control-plane/172.19.0.2
Start Time: Sun, 30 Aug 2020 19:46:06 +0800
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mgq96 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mgq96:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mgq96
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned default/nginx to dev-control-plane
Normal BackOff 40m kubelet, dev-control-plane Back-off pulling image "nginx"
Warning Failed 40m kubelet, dev-control-plane Error: ImagePullBackOff
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: unexpected EOF
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Error: ErrImagePull
Normal Pulling 13m (x4 over 56m) kubelet, dev-control-plane Pulling image "nginx"
When I entered the kindest/node container, there was no docker in it. I'm not sure how KinD works; originally I understood that it deploys a K8S cluster inside a Docker container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a644f8b61314 kindest/node:v1.19.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 127.0.0.1:52301->6443/tcp dev-control-plane
$ docker exec -it a644f8b61314 /bin/bash
root@dev-control-plane:/# docker -v
bash: docker: command not found
After reading the Kind docs, I cannot find an option to set a registry mirror there like the one in Minikube.
BTW, I am using the latest Docker Desktop beta on Windows 10.
First pull the image on your local system using docker pull nginx, and then use the command below to load that image into the kind cluster:
kind load docker-image nginx --name kind-cluster-name
Kind uses containerd instead of Docker as the container runtime; that's why docker is not installed on the nodes.
Alternatively, you can use the crictl tool to pull and check images inside the kind node:
crictl pull nginx
crictl images
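Putting it together for the failing nginx pod above, a minimal sketch; the cluster name dev is assumed from the node name dev-control-plane, so check yours with kind get clusters:
$ docker pull nginx
$ kind load docker-image nginx --name dev
$ kubectl delete pod nginx
$ kubectl run nginx --image=nginx --image-pull-policy=IfNotPresent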
I ran into the same issue because I had exported http_proxy and https_proxy to a local proxy (127.0.0.1) before creating the cluster, and that address is unreachable from inside the cluster. After unsetting http(s)_proxy and recreating the cluster, everything runs fine.
I have installed Kubernetes on an AWS EC2 instance. I'm not using Minikube or OpenShift. I'm trying to install kamel on top of Kubernetes to run my integration code. When I try to run the kamel install command, it throws the error below:
Error: cannot find automatically a registry where to push images
When I tried running it as the root user, the error below is thrown:
Error: cannot get current namespace: open /root/.kube/config: no such file or directory
I'd like to know what registry I have to pass when running the kamel install command. I have a Docker Hub account with a demo repository. Should I pass something like
kamel install --registry hubusername/reponame
What I don't understand is that after I passed a value, I got the success message below:
Camel K installed in namespace default
When I try to run a sample Groovy script, it hangs after the following message:
kamel run hello.groovy --dev
Integration "hello" created
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default camel-k-operator-587b579567-m26rs 0/1 Pending 0 30m <none> <none> <none> <none>
Name: camel-k-operator-587b579567-m26rs
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: camel.apache.org/component=operator
name=camel-k-operator
pod-template-hash=587b579567
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/camel-k-operator-587b579567
Containers:
camel-k-operator:
Image: docker.io/apache/camel-k:0.3.3
Port: <none>
Host Port: <none>
Command:
camel-k
Environment:
WATCH_NAMESPACE: default (v1:metadata.namespace)
OPERATOR_NAME: camel-k
POD_NAME: camel-k-operator-587b579567-m26rs (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from camel-k-operator-token-prjhp (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
camel-k-operator-token-prjhp:
Type: Secret (a volume populated by a Secret)
SecretName: camel-k-operator-token-prjhp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 38s (x23 over 31m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Can you please help me out here? Thank you for your time.
If you installed a single-node Kubernetes cluster, chances are your only node is a master node, which is why Kubernetes won't schedule your workload.
Check this by running:
kubectl get node
If your only node shows 'master' in its ROLES column, then you need to untaint it to allow scheduling:
kubectl taint nodes --all node-role.kubernetes.io/master-
Try to rerun your kamel job after that.
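To double-check that the taint is gone and the operator pod gets scheduled, something like this should do; replace <node-name> with whatever kubectl get nodes reports:
$ kubectl describe node <node-name> | grep Taints
$ kubectl get pods -n default -w       # camel-k-operator should move from Pending to Running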