Hey, I'm trying to install Jenkins on a GKE cluster with this command:
helm install stable/jenkins -f test_values.yaml --name myjenkins
My versions of helm and kubectl, in case it matters:
helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Values were downloaded with helm inspect values stable/jenkins > test_values.yaml and then modified:
cat test_values.yaml
Master:
  adminPassword: 34LbGfq5LWEUgw // local testing
  resources:
    limits:
      cpu: '500m'
      memory: '1024'
  podLabels:
  nodePort: 32323
  serviceType: ClusterIp
Persistence:
  storageClass: 'managed-nfs-storage'
  size: 5Gi
rbac:
  create: true
and now I get some weird new error after updating:
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
As I can see, you're trying to install stable/jekins, which isn't in the helm repo, instead of stable/jenkins. Please update your question if it's just a misspelling and I'll update my answer, but I've tried your command:
$helm install stable/jekins --name myjenkins -f test_values.yaml
and got the same error:
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
EDIT: To solve subsequent errors like:
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:258:11: executing "jenkins/templates/deprecation.yaml" at <fail "Master.* values have been renamed, please check the documentation">: error calling fail: Master.* values have been renamed, please check the documentation
and
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:354:10: executing "jenkins/templates/deprecation.yaml" at <fail "Persistence.* values have been renamed, please check the documentation">: error calling fail: Persistence.* values have been renamed, please check the documentation
and so on, you also need to edit test_values.yaml:
master:
  adminPassword: 34LbGfq5LWEUgw
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
  podLabels:
  nodePort: 32323
  serviceType: ClusterIP
persistence:
  storageClass: 'managed-nfs-storage'
  size: 5Gi
rbac:
  create: true
And after that it's deployed successfully:
$helm install stable/jenkins --name myjenkins -f test_values.yaml
NAME: myjenkins
LAST DEPLOYED: Wed Jan 8 15:14:51 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME AGE
myjenkins 1s
myjenkins-tests 1s
==> v1/Deployment
NAME AGE
myjenkins 0s
==> v1/PersistentVolumeClaim
NAME AGE
myjenkins 1s
==> v1/Pod(related)
NAME AGE
myjenkins-6c68c46b57-pm5gq 0s
==> v1/Role
NAME AGE
myjenkins-schedule-agents 1s
==> v1/RoleBinding
NAME AGE
myjenkins-schedule-agents 0s
==> v1/Secret
NAME AGE
myjenkins 1s
==> v1/Service
NAME AGE
myjenkins 0s
myjenkins-agent 0s
==> v1/ServiceAccount
NAME AGE
myjenkins 1s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default myjenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=myjenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
The stable repo is going to be deprecated very soon and is not being updated. I suggest using the jenkins chart from Helm Hub.
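For example (a sketch, assuming the Helm 2 CLI used in the question and the chart's new home at charts.jenkins.io):
helm repo add jenkinsci https://charts.jenkins.io
helm repo update
helm install jenkinsci/jenkins --name myjenkins -f test_values.yaml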
I am trying to create a Kubernetes cluster using the kubeadm tool. For this I installed the supported Docker version as specified here.
I could also install kubeadm successfully. I initiated the cluster with the below command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/14 --apiserver-advertise-address=172.16.0.11
and I got the message to use kubeadm join to join the cluster as shown below
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.0.11:6443 --token ptdibx.r2uu0s772n6fqubc \
--discovery-token-ca-cert-hash sha256:f3e36a8e82fb8166e0faf407235f12e256daf87d0a6d0193394f4ce31b50255c
I used flannel for networking:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
After that, when I try to use kubectl to check the pod/node status, it fails:
$ sudo kubectl get nodes
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ sudo kubectl get pods --all-namespaces
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ echo $KUBECONFIG
/etc/kubernetes/admin.conf:/home/ltestzaman/.kube/config
The Docker and Kubernetes versions are as follows:
$ sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2",
GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean",
BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
$ sudo docker version
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 08:10:07 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.0-ce
How do I make the cluster work?
Output of admin.conf is as follows:
$ sudo cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpN
QjRYRFRJd01EUXhPREV6TlRnek1sb1hEVE13TURReE5qRXpOVGd6TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjdvCnhERGVP
UWd4TGsyWE05K0pxc0hYRkRPWHB2YXYxdXNyeXNYMldOZzlOOVFaeDVucFA2UTRpYkQrZDFjWFcyL0oKY0ZXUjhBSDliaG5hZlhLRUhiUVZWT0R2UTcwbFRmNXVtVlQ4Qk5ZUjRQTmlCWjFxVDNMNnduWlYrekl6a0ZKMwp0
eVVNK0prUy80K2dMazI3b01HMFZ4Rnpjd1ozREMxWEFqRXVxb3FrYVF5UGUzMk9XZmZ2N082TjhxeWNCNkdnClNxbWxNWldBMk1DL0J1cFpZWXRYNkUyYUtNUloxZjkzRlpCaFdYNG9DYjVQSGdSUEdoNTFiNERKZExoYlk4
aWMKdVRSa0EyTi95UDVrTlRIMW5pSTU3bTlUY2phcDZpV0p3dFhsdlpOTUpCYmhQS1VjTEFhZG1tTHFtWTNMTmhiaApGZ2orK0s4T3hXVk5KYWVuQnI4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0Ex
VWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHS2VwQURtZTEva0orZWpob3p4RXVIdXFwQTYKT3dkK3VNQlNPMWYzTTBmSzkxQmhWYkxWakZZeEUwSjVqc1BLNzNJM3cxRU5rb2p2UGdnc0pV
NHBjNnoyeGdsVgpCQ0tESWhWSEVPOVlzRVNpdERnd2g4QUNyQitpeEc4YjBlbnFXTzhBVjZ6dGNESGtJUXlLdDAwNmgxNUV1bi9YCmg0ZUdBMDQrRmNTZVltZndSWHpMVmFFS3F2UHZZWVdkTHBJTktWRFNHZ3J3U3cvbnU5
K2g1U09Ddms1YncwbEYKODhZNnlTaHk3U1B6amRNUHdRcks5cmhWY1ZXK1VvS3d6SE80aUZHdWpmNDR0WHRydTY4L1NGVm5uVnRHWkYyKwo2WmJYeE81Z3I2c1NBQU9NK0E1RmtkQlNmcXZDdmwvUzZFQk04V2czaGNjOUZL
cEFCV0tadHNoRlMxOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.0.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJS2RLRGs4MUpNKzh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp
1WlhSbGN6QWVGdzB5TURBME1UZ3hNelU0TXpKYUZ3MHlNVEEwTVRneE16VTRNelphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUl
CSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW44elI0RlVVM2F1QjluVGkKeWxJV1ZpZ3JkV1dHSXY0bmhsRnZuQU1mWVJyVklrMGN6eTZPTmZBQzNrb01tZ3ZPQnA5MmpsWmlvYXpJUGg1aAovaUR
xalE3dzN4cFhUN1QxT1kySy9mVyt1S1NRUVI3VUx1bjM4MTBoY1ZRSm5NZmV4UGJsczY2R3RPeE9WL2RQCm1tcEEyUFlzL0lwWWtLUEhqNnNvb0NXU1JEMUZIeG1SdWFhYXhpL0hYQXdJODZEN01uWS90KzZJQVIyKzZRM0s
KY2pPRFdEWlRpbHYyMXBCWFBadW9FTndoZ0s2bWhKUU5VRmc5VmVFNEN4NEpEK2FYbmFSUW0zMndid29oYXk1OAo3L0FnUjRoMzNXTjNIQ1hPVGUrK2Z4elhDRnhwT1NUYm13Nkw1c1RucFkzS2JFRXE5ZXJyNnorT2x5ZVl
GMGMyCkZCV3J4UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCVDVKckdCS1NxS1VkclNYNUEyQVZHNmFZSHl
1TkNkeWp1cQpIVzQrMU5HdWZSczJWZW1kaGZZN2VORDJrSnJXNldHUnYyeWlBbDRQUGYzYURVbGpFYm9jVmx0RjZHTXhRRVU2CkJtaEJUWGMrVndaSXRzVnRPNTUzMHBDSmZwamJrRFZDZWFFUlBYK2RDd3hiTnh0ZWpacGZ
XT28zNGxTSGQ3TFYKRDc5UHAzYW1qQXpyamFqZE50QkdJeHFDc3lZWE9Rd1BVL2Nva1RJVHRHVWxLTVV5aUdOQk5rQ3NPc3RiZHI2RApnQVRuREg5YWdNck9CR2xVaUlJN0Qvck9rU3IzU2QvWnprSGdMM1c4a3V5dXFUWWp
wazNyNEFtU3NXS1M4UUdNCjZ6bHUwRk4rYnVJbGRvMnBLNmhBZlBnSTBjcDZWQWk1WW5lQVVGQ2EyL2pJeXI3cXRiND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBbjh6UjRGVVUzYXVCOW5UaXlsSVdWaWdyZFdXR0l2NG5obEZ2bkFNZllSclZJazBjCnp5Nk9OZkFDM2tvTW1
ndk9CcDkyamxaaW9heklQaDVoL2lEcWpRN3czeHBYVDdUMU9ZMksvZlcrdUtTUVFSN1UKTHVuMzgxMGhjVlFKbk1mZXhQYmxzNjZHdE94T1YvZFBtbXBBMlBZcy9JcFlrS1BIajZzb29DV1NSRDFGSHhtUgp1YWFheGkvSFh
Bd0k4NkQ3TW5ZL3QrNklBUjIrNlEzS2NqT0RXRFpUaWx2MjFwQlhQWnVvRU53aGdLNm1oSlFOClVGZzlWZUU0Q3g0SkQrYVhuYVJRbTMyd2J3b2hheTU4Ny9BZ1I0aDMzV04zSENYT1RlKytmeHpYQ0Z4cE9TVGIKbXc2TDV
zVG5wWTNLYkVFcTllcnI2eitPbHllWUYwYzJGQldyeFFJREFRQUJBb0lCQVFDQXhjRHJFaVQ2Yk5jUwpFQ2NoK3Z4YytZbnIxS0EvV3FmbktZRFRMQUVCYzJvRmRqYWREbHN6US9KTHgwaFlhdUxmbTJraVVxS3d2bGV2CkZ
6VElZU1loL2NSRlJTak81bmdtcE5VNHlldWpSNW1ub0h4RVFlNjVnbmNNcURnR3kxbk5SMWpiYnV6R3B4YUsKOUpTRlR0SnJCQlpFZkFmYXB1Q04rZE9IR2ovQUZJbWt2ZXhSckwyTXdIem0zelJkMG5UdkRyOUh1dy9IMjE
1RAprNXBHZjluV1ZsNnZxSGZFYVF0S0NNemY2WE5MdEFjcEJMcmNwSExwWEFObVNMWTAvcFFnV0s5eVpkbVA5b0xCCjhvU1J0eFRsZlU0V1RLdERpNlozK0tTSytidnF4MDRGZTJYb2RlVUM3eDN1d3lwamszOXZjSG55UkM
2Tmhlem4KTExJcnVEbVJBb0dCQU1VbG8zRkpVTUJsczYrdHZoOEpuUjJqN2V6bU9SNllhRmhqUHVuUFhkeTZURFVxOFM2aQprSTZDcG9FZEFkeUE4ejhrdU01ZlVFOENyOStFZ05DT3lHdGVEOFBaV2FCYzUxMit6OXpuMXF
3SVg3QjY1M01lCk5hS2Y1Z3FYbllnMmdna2plek1lbkhQTHFRLzZDVjZSYm93Q3lFSHlrV0FXS3I4cndwYXNNRXQ3QW9HQkFNK0IKRGZZRU50Vmk5T3VJdFNDK0pHMHY1TXpkUU9JT1VaZWZRZTBOK0lmdWwrYnRuOEhNNGJ
aZmRUNmtvcFl0WmMzMQptakhPNDZ5NHJzcmEwb1VwalFscEc5VGtVWDRkOW9zRHoydlZlWjBQRlB2em53R1JOUGxzaTF1cUZHRkdyY0dTClJibzZiTjhKMmZqV0hGb2ppekhVb3Rkb1BNbW1qL0duM0RmVEw2Ry9Bb0dBQk8
rNVZQZlovc2ROSllQN003bkEKNW1JWmJnb2h1Z05rOFhtaXRLWU5tcDVMbERVOERzZmhTTUE2dlJibDJnaWNqcU16d1c4ZmlxcnRqbkk1NjM3Mwp3OEI2TXBRNXEwdElPOCt3VXI2M1lGMWhVQUR6MUswWCtMZDZRaCtqd1N
wa1BTaFhTR05tMVh0dkEwaG1mYWkwCmxPcm82c1hSSUEvT0NEVm5UUENJMFFzQ2dZQWZ3M0dQcHpWOWxKaEpOYlFFUHhiMFg5QjJTNmdTOG40cTU0WC8KODVPSHUwNGxXMXFKSUFPdEZ3K3JkeWdzTk9iUWtEZjZSK0V5SDF
NaVdqeS9oWXpCVkFXZW9SU1lhWjNEeWVHRwpjRGNkZzZHQ3I5ZzNOVE1XdXpiWjRUOGRaT1JVTFQvZk1mSlljZm1iemFxcFlhZDlDVCtrR2FDMGZYcXJVemF5CmxQRkZvUUtCZ0E4ck5IeG1DaGhyT1ExZ1hET2FLcFJUUWN
xdzd2Nk1PVEcvQ0lOSkRiWmdEU3g3dWVnVHFJVmsKd3lOQ0o2Nk9kQ3VOaGpHWlJld2l2anJwNDhsL2plM1lsak03S2k1RGF6NnFMVllqZkFDSm41L0FqdENSZFRxTApYQ3B1REFuTU5HMlFib0JuSU1UaXNBaXJlTXduS1Z
BZjZUMzM4Tjg5ZEo2Wi93QUlOdWNYCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
Not sure why most of the entries are null, as shown below:
$ sudo kubectl config view
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Most probably the kubeconfig file is not set up properly:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Also, this should work:
sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
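As an aside, this also explains why plain sudo kubectl get nodes failed even though KUBECONFIG was set: sudo normally resets the environment. A quick hedged check, assuming the default sudoers env_reset policy:
sudo sh -c 'echo $KUBECONFIG'   # likely prints an empty line, since sudo dropped the variable
sudo -E kubectl get nodes       # -E preserves the caller's environment, including KUBECONFIG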
I changed the environment variable KUBECONFIG to /home/ltestzaman/.kube/config and it works:
$ echo $KUBECONFIG
/home/ltestzaman/.kube/config
$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
kubemaster-001 Ready master 117m v1.18.2
Or you need to mention --kubeconfig, as identified by @Arghya Sadhu:
$ sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
NAME STATUS ROLES AGE VERSION
kubemaster-001 Ready master 120m v1.18.2
You have to start the Kubernetes cluster, either with minikube start or, if you are connecting to a cloud service, by making sure Kubernetes is started.
This is a bit tricky. I have a K8s cluster up and running, I am able to execute a Docker image inside that cluster, and I can see the output of "kubectl get pods -o wide". Now I have GitLab set up with this K8s cluster.
I have set up the variables $KUBE_URL, $KUBE_USER, and $KUBE_PASSWORD in GitLab for the above K8s cluster.
The GitLab runner console displays all of this information as shown in the console log below; at the end it fails with:
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
Here is the full console log:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on WotC-Docker-ip-10-102-0-70 d457d50a
Using Docker executor with image docker:latest ...
Pulling docker image docker:latest ...
Using docker image sha256:062267097b77e3ecf374b437e93fefe2bbb2897da989f930e4750752ddfc822a for docker:latest ...
Running on runner-d457d50a-project-185-concurrent-0 via ip-10-102-0-70...
Fetching changes...
Removing cluster1-config
HEAD is now at 25846c4 Initial commit
From https://git.com/core-systems/gatling
25846c4..bcaa89b master -> origin/master
Checking out bcaa89bf as master...
Skipping Git submodules setup
$ uname -a
Linux runner-d457d50a-project-185-concurrent-0 4.14.67-66.56.amzn1.x86_64 #1 SMP Tue Sep 4 22:03:21 UTC 2018 x86_64 Linux
$ apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/4) Installing nghttp2-libs (1.32.0-r0)
(2/4) Installing libssh2 (1.8.0-r3)
(3/4) Installing libcurl (7.61.1-r1)
(4/4) Installing curl (7.61.1-r1)
Executing busybox-1.28.4-r1.trigger
OK: 6 MiB in 18 packages
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--      0
 95 37.3M   95 35.8M    0     0  37.8M      0 --:--:-- --:--:-- --:--:--  37.7M
100 37.3M  100 37.3M    0     0  38.3M      0 --:--:-- --:--:-- --:--:--  38.3M
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin/kubectl
$ kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true
Cluster "nosebit" set.
$ kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
User "admin" set.
$ kubectl config set-context default --cluster=nosebit --user=admin
Context "default" created.
$ kubectl config use-context default
Switched to context "default".
$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://18.216.8.240:443
  name: nosebit
contexts:
- context:
    cluster: nosebit
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: |-
      MIIDOzCCAiOgAwIBAgIJALOrUrxmhgpHMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV
      BAMMDTEzLjU4LjE3OC4yNDEwHhcNMTgxMTI1MjIwNzE1WhcNMjgxMTIyMjIwNzE1
      WjAYMRYwFAYDVQQDDA0xMy41OC4xNzguMjQxMIIBIjANBgkqhkiG9w0BAQEFAAOC
      AQ8AMIIBCgKCAQEA4jmyesjEiy6T2meCdnzzLfSE1VtbY//0MprL9Iwsksa4xssf
      PXrwq97I/aNNE2hWZhZkpPd0We/hNKh2rxwNjgozQTNcXqjC01ZVjfvpvwHzYDqj
      4cz6y469rbuKqmXHKsy/1docA0IdyRKS1JKWz9Iy9Wi2knjZor6/kgvzGKdH96sl
      ltwG7hNnIOrfNQ6Bzg1H6LEmFP+HyZoylWRsscAIxD8I/cmSz7YGM1L1HWqvUkRw
      GE23TXSG4uNYDkFaqX46r4nwLlQp8p7heHeCV/mGPLd0QCUaCewqSR+gFkQz4nYX
      l6BA3M0Bo4GHMIGEMB0GA1UdDgQW
      BBQqsD7FUt9vBW2LcX4xbqhcO1khuTBIBgNVHSMEQTA/gBQqsD7FUt9vBW2LcX4x
      bqhcO1khuaEcpBowGDEWMBQGA1UEAwwNMTMuNTguMTc4LjI0MYIJALOrUrxmhgpH
      MAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAY
      6mxGeQ90mXYdbLtoVxOUSvqk9+Ded1IzuoQMr0joxkDz/95HCddyTgW0gMaYsv2J
      IZVH7JQ6NkveTyd42QI29fFEkGfPaPuLZKn5Chr9QgXJ73aYrdFgluSgkqukg4rj
      rrb+V++hE9uOBtDzcssd2g+j9oNA5j3VRKa97vi3o0eq6vs++ok0l1VD4wyx7m+l
      seFx50RGXoDjIGh73Gh9Rs7/Pvc1Pj8uAGvj8B7ZpAMPEWYmkkc4F5Y/14YbtfGc
      2VlUJcs5p7CbzsqI5Tqm+S9LzZXtD1dVnsbbbGqWo32CIm36Cxz/O/FCf8tbITpr
      u2O7VjBs5Xfm3tiW811k
    username: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdzZqdDYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFiMjc2YzIxLWYxMDAtMTFlOC04YjM3LTAyZDhiMzdkOTVhMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNQifQ.RCQQWjDCSkH8YckBeck-EIdvOnTKBmUACXVixPfUp9gAmUnit5qIPvvFnav-C-orfYt552NQ5GTLOA3yR5-jmxoYJwCJBfvPRb1GqqgiiJE2pBsu5Arm30MOi2wbt5uCNfKMAqcWiyJQF98M2PFc__jH6C1QWPXgJokyk7i8O6s3TD69KrrXNj_W4reDXourLl7HwHWoWwNKF0dgldanug-_zjvE06b6VZBI-YWpm9bpe_ArIOrMEjl0JRGerWahcQFVJsmhc4vgw-9-jUsfKPUYEfDItJdQKyV9dgdwShgzMINuuHlU7w7WBxmJT6cqMIvHRnDHuno3qMKTJTuh-g
$ kubectl config view --minify > cluster1-config
$ export KUBECONFIG=$HOME/.kube/config
$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
default nosebit admin
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
==================================================================================================
Here is my .gitlab-ci.yml content. Could you suggest why kubectl get pods is not displaying the pods of the remote cluster even though the KUBECONFIG setup is done successfully?
image: docker:latest
variables:
  CONTAINER_DEV_IMAGE: https://hub.docker.com/r/tarunkumard/gatling/:$CI_COMMIT_SHA
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - docker
  script:
    - 'uname -a'
    - 'apk add --no-cache curl'
    - 'curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl'
    - 'chmod +x ./kubectl'
    - 'mv ./kubectl /usr/local/bin/kubectl'
    - 'kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true'
    - 'kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"'
    - 'kubectl config set-context default --cluster=nosebit --user=admin'
    - 'kubectl config use-context default'
    - 'cat $HOME/.kube/config'
    - 'kubectl config view --minify > cluster1-config'
    - 'export KUBECONFIG=$HOME/.kube/config'
    - 'kubectl --kubeconfig=cluster1-config config get-contexts'
    - 'kubeconfig=cluster1-config kubectl get pods -o wide'
Why is the GitLab runner failing to get pods from the Kubernetes cluster? (Note: this cluster is up and running, and I am able to see pods using the kubectl get pods command.)
Basically,
kubectl config view --minify > cluster1-config
Won't do it, because the output will be something like this with no actual credentials/certs:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<kube-apiserver>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
You need:
kubectl config view --raw > cluster1-config
If that's not the issue, it could be that your credentials don't have the right RBAC permissions. I would try to find the ClusterRoleBinding or RoleBinding that is bound to that admin user. Something like:
$ kubectl get clusterrolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
$ kubectl get rolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
Once you find the role, you can see if it has the right permissions to view pods. For example:
$ kubectl get clusterrole cluster-admin -o=yaml
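As a quicker sanity check, assuming a kubectl client new enough to have the auth subcommand, you can also ask the API server directly whether those credentials may list pods:
kubectl --kubeconfig=cluster1-config auth can-i list pods   # prints "yes" or "no"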
Using minikube and docker on my local Ubuntu workstation I get the following error in the Minikube web UI:
Failed to pull image "localhost:5000/samples/myserver:snapshot-180717-213718-0199": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
after I have created the below deployment config with:
kubectl apply -f hello-world-deployment.yaml
hello-world-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/samples/myserver:snapshot-180717-213718-0199
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
And output from docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
samples/myserver latest aa0a1388cd88 About an hour ago 435MB
samples/myserver snapshot-180717-213718-0199 aa0a1388cd88 About an hour ago 435MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 months ago 97MB
Based on this guide:
How to use local docker images with Minikube?
I have also run:
eval $(minikube docker-env)
and based on this:
https://github.com/docker/for-win/issues/624
I have added:
"InsecureRegistry": [
"localhost:5000",
"127.0.0.1:5000"
],
to /etc/docker/daemon.json
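For reference, the key documented for /etc/docker/daemon.json is insecure-registries (lowercase, plural); a minimal sketch of the file, followed by a daemon restart such as sudo systemctl restart docker, would be:
{
  "insecure-registries": [
    "localhost:5000",
    "127.0.0.1:5000"
  ]
}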
Any suggestion on what I'm missing to get the image pull to work in Minikube?
I have followed the steps in the below answer but when I get to this step:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
it just hangs like this:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
and I get the same error in minikube dashboard after I create my deploymentconfig.
Based on the answer from BMitch I have now tried to create a local Docker registry and push an image to it with:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu:v1
$ docker push localhost:5000/ubuntu:v1
Next when I do docker images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 74f8760a2a8b 4 days ago 82.4MB
localhost:5000/ubuntu v1 74f8760a2a8b 4 days ago 82.4MB
I have then updated my deploymentconfig hello-world-deployment.yaml to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/ubuntu:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
and
kubectl create -f hello-world-deployment.yaml
But in Minikube I still get a similar error:
Failed to pull image "localhost:5000/ubuntu:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
So it seems Minikube is not able to see the local registry I just created?
It looks like you're facing a problem with localhost on your computer versus localhost used within the context of the minikube VM.
To have the registry working, you have to set up additional port forwarding.
If your minikube installation is currently broken due to a lot of attempts to fix registry problems, I would suggest restarting the minikube environment:
minikube stop && minikube delete && rm -fr $HOME/.minikube && minikube start
Next, get the kube registry yaml file:
curl -O https://gist.githubusercontent.com/coco98/b750b3debc6d517308596c248daf3bb1/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml
Then, apply it on minikube:
kubectl create -f kube-registry.yaml
Test if the registry inside the minikube VM works:
minikube ssh && curl localhost:5000
On Ubuntu, forward ports to reach the registry at port 5000:
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
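With the port-forward running in one shell, a hedged check from a second shell (assuming the registry speaks the Docker v2 API) would be:
curl http://localhost:5000/v2/_catalog   # should return JSON such as {"repositories":[]}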
If you would like to share your private registry from your machine, you may be interested in the "sharing local registry for minikube" blog entry.
If you're specifying the image source as the local registry server, you'll need to run a registry server there, and push your images to it.
You can self host a registry server with multiple 3rd party options, or run this one that is packaged inside a docker container: https://hub.docker.com/_/registry/
This only works on a single-node environment unless you set up TLS keys, trust the CA, or tell all other nodes of the additional insecure registry.
You can also specify the imagePullPolicy as Never.
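For example, a minimal sketch of the container spec from the question with pulling disabled (this assumes the image has been built against minikube's Docker daemon via eval $(minikube docker-env)):
containers:
- name: hello-world
  image: samples/myserver:snapshot-180717-213718-0199
  imagePullPolicy: Never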
Both of these solutions were already in your linked question and I'm not seeing any evidence of you trying either in this question. Without showing how you tried those steps and experienced a different problem, this question should probably be closed as a duplicate.
It is unclear from your question how many nodes you have.
If you have more than one, your problem is in your deployment with replicas: 1.
If not, please ignore this answer.
You don't know where that replica will be scheduled. So if you don't have a local Docker registry on all of your nodes, and you get unlucky and Kubernetes tries to schedule onto a node without a Docker registry, you will end up with that error.
The same thing happened to me: the same connection-refused error, because the deployment went to a node without the local Docker registry.
As I am typing this, I think this can be resolved with an Ingress.
You run the registry as a Deployment, add a Service, add a volume for images, and expose it through an Ingress.
A little more work, but at least all of your nodes (all of your pods, sorry) will be in sync.
I am facing a weird issue with my pods. I am launching around 20 pods in my environment, and every time some random 3-4 of them hang with Init:0/1 status. On checking the pod status, the init container shows Running (it should terminate once its task is finished) and the app container shows the Waiting/PodInitializing stage. The same init container image and specs are used across all 20 pods, but this issue happens with some random pods every time. And on terminating these stuck pods, they get stuck in the Terminating state. If I SSH to the node on which such a pod is launched and run docker ps, it shows the init container in a running state, but running docker exec throws an error that the container doesn't exist. This init container pulls configs from the Consul server, and on checking the volume (obtained from docker inspect), I found that it has pulled all the key-value pairs correctly and saved them under the defined file name. I have checked resources on all the nodes and more than enough is available on all.
Below is detailed example of on the pod acting like this.
Kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Pods:
kubectl get pods -n dev1|grep -i session-service
session-service-app-75c9c8b5d9-dsmhp 0/1 Init:0/1 0 10h
session-service-app-75c9c8b5d9-vq98k 0/1 Terminating 0 11h
Pod status:
kubectl describe pods session-service-app-75c9c8b5d9-dsmhp -n dev1
Name: session-service-app-75c9c8b5d9-dsmhp
Namespace: dev1
Node: ip-192-168-44-18.ap-southeast-1.compute.internal/192.168.44.18
Start Time: Fri, 27 Apr 2018 18:14:43 +0530
Labels: app=session-service-app
pod-template-hash=3175746185
release=session-service-app
Status: Pending
IP: 100.96.4.240
Controlled By: ReplicaSet/session-service-app-75c9c8b5d9
Init Containers:
initpullconsulconfig:
Container ID: docker://c658d59995636e39c9d03b06e4973b6e32f818783a21ad292a2cf20d0e43bb02
Image: shr-u-nexus-01.myops.de:8082/utils/app-init:1.0
Image ID: docker-pullable://shr-u-nexus-01.myops.de:8082/utils/app-init@sha256:7b0692e3f2e96c6e54c2da614773bb860305b79922b79642642c4e76bd5312cd
Port: <none>
Args:
-consul-addr=consul-server.consul.svc.cluster.local:8500
State: Running
Started: Fri, 27 Apr 2018 18:14:44 +0530
Ready: False
Restart Count: 0
Environment:
CONSUL_TEMPLATE_VERSION: 0.19.4
POD: sand
SERVICE: session-service-app
ENV: dev1
Mounts:
/var/lib/app from shared-volume-sidecar (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bthkv (ro)
Containers:
session-service-app:
Container ID:
Image: shr-u-nexus-01.myops.de:8082/sand-images/sessionservice-init:sitv12
Image ID:
Port: 8080/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/etc/appenv from shared-volume-sidecar (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bthkv (ro)
Conditions:
Type Status
Initialized False
Ready False
PodScheduled True
Volumes:
shared-volume-sidecar:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-bthkv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bthkv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Container status on node:
sudo docker ps|grep -i session
c658d5999563 shr-u-nexus-01.myops.de:8082/utils/app-init@sha256:7b0692e3f2e96c6e54c2da614773bb860305b79922b79642642c4e76bd5312cd "/usr/bin/consul-t..." 10 hours ago Up 10 hours k8s_initpullconsulconfig_session-service-app-75c9c8b5d9-dsmhp_dev1_c2075f2a-4a18-11e8-88e7-02929cc89ab6_0
da120abd3dbb gcr.io/google_containers/pause-amd64:3.0 "/pause" 10 hours ago Up 10 hours k8s_POD_session-service-app-75c9c8b5d9-dsmhp_dev1_c2075f2a-4a18-11e8-88e7-02929cc89ab6_0
f53d48c7d6ec shr-u-nexus-01.myops.de:8082/utils/app-init@sha256:7b0692e3f2e96c6e54c2da614773bb860305b79922b79642642c4e76bd5312cd "/usr/bin/consul-t..." 10 hours ago Up 10 hours k8s_initpullconsulconfig_session-service-app-75c9c8b5d9-vq98k_dev1_42837d12-4a12-11e8-88e7-02929cc89ab6_0
c26415458d39 gcr.io/google_containers/pause-amd64:3.0 "/pause" 10 hours ago Up 10 hours k8s_POD_session-service-app-75c9c8b5d9-vq98k_dev1_42837d12-4a12-11e8-88e7-02929cc89ab6_0
On running docker exec (same result with kubectl exec):
sudo docker exec -it c658d5999563 bash
rpc error: code = 2 desc = containerd: container not found
A Pod can be stuck in Init status for many reasons.
PodInitializing or Init status means that the Pod contains an init container that hasn't finished (init containers are specialized containers that run before the app containers in a Pod; they can contain utilities or setup scripts). If the pod's status is Init:0/1, one init container has not finished; Init:N/M means the Pod has M init containers, and N have completed so far.
Gathering information
For these scenarios the best approach is to gather information, since the root cause can be different for every PodInitializing issue.
kubectl describe pods pod-XXX: with this command you can get a lot of information about the pod, and you can check whether there is any meaningful event as well. Save the init container name.
kubectl logs pod-XXX: this command prints the logs for a container in a pod or specified resource.
kubectl logs pod-XXX -c init-container-xxx: this is the most accurate, as it prints the logs of the init container. You can get the init container name by describing the pod, in order to replace "init-container-xxx", for example with "copy-default-config".
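For instance, with the pod and init container from this question, those commands would be:
kubectl describe pod session-service-app-75c9c8b5d9-dsmhp -n dev1
kubectl logs session-service-app-75c9c8b5d9-dsmhp -n dev1 -c initpullconsulconfig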
The output of kubectl logs pod-XXX -c init-container-xxx can reveal meaningful information about the issue. For reference:
In the screenshot originally attached here (not reproduced), the root cause was that the init container couldn't download the plugins from Jenkins (a timeout); from there you can check the connection config, proxy, and DNS, or just modify the yaml to deploy the container without the plugins.
Additional:
kubectl describe node node-XXX: describing the pod will give you the name of its node, which you can also inspect with this command.
kubectl get events: lists the cluster events.
journalctl -xeu kubelet | tail -n 10: kubelet logs on systemd (journalctl -xeu docker | tail -n 1 for docker).
Solutions
The solution depends on the information gathered once the root cause is found. When you find a log with an insight into the root cause, you can investigate that specific cause.
Some examples:
1 > There, this happened when the init container was deleted; it can be fixed by deleting the pod so that it is recreated, or by redeploying it. Same scenario in 1.1.
2 > If you found "bad address 'kube-dns.kube-system'", the PVC may not be recycled correctly; the solution provided in 2 is running /opt/kubernetes/bin/kube-restart.sh.
3 > There, an sh file was not found; the solution would be to modify the yaml file, or remove the container if unnecessary.
4 > A FailedSync was found, and it was solved by restarting docker on the node.
In general you can modify the yaml, for example to avoid using an outdated URL, try to recreate the affected resource, or just remove the init container that causes the issue from your deployment. However, the specific solution will depend on the specific root cause.
My problem was related to the ebs-csi-controller (AWS EKS 1.24)
The EBS add-on needs access to an IAM role, and in my case the role trust relationship was broken. It uses OIDC, so I had to add my cluster's OIDC provider manually into the IAM identity provider section.
kubectl logs deployment/ebs-csi-controller -n kube-system -c ebs-plugin
helped diagnose this, as well as
https://aws.amazon.com/premiumsupport/knowledge-center/eks-troubleshoot-ebs-volume-mounts/
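If you manage the cluster with eksctl, a hedged one-liner for registering the cluster's OIDC provider with IAM (the cluster name is a placeholder) is:
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve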
I'm trying to run my first kubernetes pod locally.
I've run the following command (from here):
export ARCH=amd64
docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2
Then, I tried to run the following:
kubectl create -f ./run-aii.yaml
run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: true
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: true
      - name: kafka
        image: kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      - name: ws
        image: ws
        ports:
        - containerPort: 3000
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath:
          path: /home/aii/general/core
      - name: genome-aii
        hostPath:
          path: /home/aii/general/genome
      - name: main-aii
        hostPath:
          path: /home/aii/general/main
Now, when I run: kubectl get pods
I'm getting:
NAME READY STATUS RESTARTS AGE
aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m
aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m
aii-pod 0/3 ImagePullBackOff 0 23h
k8s-etcd-127.0.0.1 1/1 Running 0 2d
k8s-master-127.0.0.1 4/4 Running 0 2d
k8s-proxy-127.0.0.1 1/1 Running 0 2d
nginx-198147104-9kajo 1/1 Running 0 2d
BTW: docker images returns:
REPOSITORY TAG IMAGE ID CREATED SIZE
ws latest fa7c5f6ef83a 7 days ago 706.8 MB
kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB
aii latest bd12c4acbbaf 9 days ago 1.421 GB
node 4.4 1a93433cee73 11 days ago 647 MB
gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB
nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB
docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB
spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB
wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB
centos latest 61b442687d68 4 months ago 196.6 MB
centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB
gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB
gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB
sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB
Why do I get the ImagePullBackOff?
By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it.
You can run a local Kubernetes registry with the registry cluster addon.
Then tag your images with localhost:5000:
docker tag aii localhost:5000/dev/aii
Push the image to the Kubernetes registry:
docker push localhost:5000/dev/aii
And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image.
Alternatively, you can run a private Docker registry through one of the providers that offers this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get set up with a local Kubernetes Docker registry.
One issue that may cause an ImagePullBackOff, especially if you are pulling from a private registry, is the pod not being configured with the imagePullSecret of that registry.
An authentication error may then cause an ImagePullBackOff.
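A minimal sketch of wiring such a secret into a pod spec (the secret name and image below are placeholders; the secret itself would be created beforehand with kubectl create secret docker-registry):
spec:
  imagePullSecrets:
  - name: my-registry-secret   # hypothetical secret holding the registry credentials
  containers:
  - name: app
    image: registry.example.com/team/app:1.0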
I had the same problem. What caused it was that I had already created a pod from the docker image via the .yml file, but I mistyped the tag, i.e. test-app:1.0.1 when I needed test-app:1.0.2 in my .yml file. So I did kubectl delete pods --all to remove the faulty pod, then redid kubectl create -f name_of_file.yml, which solved my problem.
You can also specify imagePullPolicy: Never in the container's spec:
containers:
- name: nginx
  imagePullPolicy: Never
  image: custom-nginx
  ports:
  - containerPort: 80
The issue arises when the image is not present on the cluster and the k8s engine has to pull it from the respective registry.
The k8s engine supports 3 types of imagePullPolicy:
Always: it always pulls the image for the container, irrespective of changes in the image
Never: it will never pull a new image for the container
IfNotPresent: it will pull the image only if it is not already present
Best practice: it is always recommended to tag the new image both in the Dockerfile and in the k8s deployment file, so that it pulls the new image for the container.
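A hedged illustration of that practice, pinning an explicit tag together with an explicit policy (the image name and tag are placeholders):
containers:
- name: my-app
  image: registry.example.com/my-app:1.0.2   # explicit tag instead of :latest
  imagePullPolicy: Always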
I too had this problem; when I checked, the image that I was pulling from the private registry had been removed.
If we describe the pod, it will show the pulling events and the image it's trying to pull:
kubectl describe pod <POD_NAME>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 18h (x35 over 20h) kubelet, gsk-kub Pulling image "registeryName:tag"
Normal BackOff 11m (x822 over 20h) kubelet, gsk-kub Back-off pulling image "registeryName:tag"
Warning Failed 91s (x858 over 20h) kubelet, gsk-kub Error: ImagePullBackOff
Despite all the other great answers, none helped me until I found a comment that pointed out this note in Updating images:
The default pull policy is IfNotPresent which causes the kubelet to skip pulling an image if it already exists.
That's exactly what I wanted, but didn't seem to work.
Reading further, it said the following:
If you would like to always force a pull, you can do one of the following:
omit the imagePullPolicy and use :latest as the tag for the image to use.
When I replaced latest with a version (that I had pushed to minikube's Docker daemon), it worked fine.
$ kubectl create deployment presto-coordinator \
--image=warsaw-data-meetup/presto-coordinator:beta0
deployment.apps/presto-coordinator created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
presto-coordinator 1/1 1 1 3s
Find the pod of the deployment (using kubectl get pods) and use kubectl describe pod to find out more about the pod.
Debugging step:
kubectl get pod [name] -o yaml
Run this command to get the YAML configuration of the pod (Get YAML for deployed Kubernetes services?). In my case, it was under this section:
state:
  waiting:
    message: 'rpc error: code = Unknown desc = Error response from daemon: Get
      https://repository:9999/v2/abc/location/image/manifests/tag:
      unauthorized: BAD_CREDENTIAL'
    reason: ErrImagePull
My issue got resolved upon adding the appropriate tag to the image I wanted to pull from Docker Hub.
Previously:
containers:
- name: nginx
  image: alex/my-app-image
Corrected Version:
containers:
- name: nginx
  image: alex/my-app-image:1.1
The image has only one version, which was 1.1. Since I skipped that initially, it threw an error.
After correctly specifying the version, it worked fine!
I had a similar problem when using minikube over Hyper-V with 2048 MB of memory.
I found that in Hyper-V Manager the Memory Demand was higher than allocated.
So I stopped minikube and assigned somewhere between 4096-6144 MB. It worked fine after that, all pods running!
I don't know if this can nail down the issue in every case, but just have a look at the memory and disk allocated to minikube.
I faced the same issue.
ImagePullBackOff means it is not able to pull the Docker image from the registry, or there is some issue with your registry.
The solution would be as below:
1. Check your image registry name.
2. Check the image pull secrets.
3. Check that the image is present with the same tag and name.
4. Check that your registry is reachable.
ImagePullBackOff means you have not passed the secret in your yaml, the secret is wrong, or the image name is wrong.
If you are pulling an image from a private registry, you have to provide an image pull secret; then it will be able to pull the image.
You also need to create the secret before you deploy the pod. You can use the command below to create the secret:
kubectl create secret docker-registry regcred --docker-server=artifacts.exmple.int --docker-username=<username> --docker-password=<password> -n <namespace>
You can pass the secret in your yaml like below:
imagePullSecrets:
- name: regcred
I had this error when I tried to create a ReplicationController. The issue was that I had misspelled the nginx image name in the template definition.
Note: this error occurs when Kubernetes is unable to pull the specified image from the repository.
I had the same issue.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-zcr5d 0/1 ImagePullBackOff 0 6m21s
Later I found that the Docker daemon on which the pod is created uses a private registry for images, and nginx was not present in it.
I changed the Docker registry to the default and reloaded the daemon.
After that, the issue got resolved.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-7cbjf 1/1 Running 0 33s
[mayur@mayur_cloudtest ~]$ kubectl exec -it nginx-598b589c46-7cbjf -- /bin/bash
root@nginx-598b589c46-7cbjf:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@nginx-598b589c46-7cbjf:/#
In my case, Kubernetes was not able to communicate with my private registry running on localhost:5000 after updating to macOS Monterey. It was running fine previously. The reason was that Apple AirPlay now listens on port 5000.
In order to resolve this issue, I disabled the Apple AirPlay receiver.
Go to System Preferences > Sharing > disable the checkbox for AirPlay Receiver.
Source Link: https://developer.apple.com/forums/thread/682332
To handle this error, you just have to create a Kubernetes secret and use it in your manifest.yaml file.
If it is a private repository, then it is mandatory to use an image pull secret.
To generate the secret:
kubectl create secret docker-registry docker-secrets --docker-server=https://index.docker.io/v1/ --docker-username=ExampleName --docker-password=ExamplePassword --docker-email=example@gmail.com
For --docker-server, use https://index.docker.io/v1/.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: ExampleUsername/test:tagname
    ports:
    - containerPort: 3015
  imagePullSecrets:
  - name: docker-secrets