Trying to implement JupyterHub on Kubernetes

I am trying to implement JupyterHub on a set of 8 unclustered, completely identical computers at my school. My instructions were first to cluster the 8 systems (all running Ubuntu 18.04 LTS) and then to implement JupyterHub on that cluster.
After searching the net, these are the instructions that I followed:
1. Installed Docker on all systems using these instructions.
2. (Tried to) implement a Kubernetes cluster using these instructions and this.
3. Implemented JupyterHub using the zero-to-jupyterhub instructions.
Using those instructions I managed to do steps 1 and 2 already. But after installing Helm per the zero-to-jupyterhub guide, I came across an error when doing step 2 of the "Installing JupyterHub" section of this webpage.
My exact error is:
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%D(MISSING)jhub%!(MISSING)OWNER%D(MISSING)TILLER%!D(MISSING)DEPLOYED: dial tcp 10.96.0.1:443: i/o timeout
Error: UPGRADE FAILED : Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%D(MISSING)jhub%!(MISSING)OWNER%D(MISSING)TILLER%!D(MISSING)DEPLOYED: dial tcp 10.96.0.1:443: i/o timeout
Then, when I view the link, I get this: [https://10.96.0.1:443/api/v1/namespaces/...]
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "configmaps is forbidden: User \"system:anonymous\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"",
"reason": "Forbidden",
"details": {
"kind": "configmaps"
},
"code": 403
}
Has anyone encountered this problem? What did you do?
Thanks to anyone who answers.
Also, feel free to tell me if my implementation is wrong, as I am open to new ideas. If you have a better way to do this, please leave instructions on how to implement it. Thank you very much.

It looks like you have RBAC enabled and are trying to access resources that your account is not permitted to access.
Did you follow the instructions to set up Helm/Tiller? There should be two commands that create the proper permissions to deploy JupyterHub:
kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
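If you want to double-check that those permissions exist, the objects created by the two commands above can be inspected directly:
# both should come back without a "NotFound" error
kubectl --namespace kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller -o wide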
Hope this helps!

I had exactly the same issue when I upgraded my minikube. In my case I had to delete the cluster and init it again; everything worked fine from there.
In your case it seems like requests from Tiller are blocked and can't reach the API. Since your cluster is fresh, I think the issue might be an incorrect CNI configuration, but to confirm that you would have to add information on which CNI you used, whether you passed the --pod-network-cidr= flag, and any other steps that could end up conflicting with or blocking the Tiller requests.
Before adding that information, I can only recommend running:
kubeadm reset
Let's assume you want to use Calico:
kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
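Before installing anything on top, it is worth confirming the CNI actually came up; assuming the Calico manifests above, the node agents carry the k8s-app=calico-node label:
kubectl get pods --namespace kube-system -l k8s-app=calico-node
kubectl get nodes
# nodes should report Ready once the pod network is up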
Install Helm:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
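Give Tiller a moment to start before deploying anything; once its pod is Running, helm version should report both a client and a server:
kubectl --namespace kube-system get pods | grep tiller
helm version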
Now follow the JupyterHub tutorial:
Create the config.yaml as described here.
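For reference, at chart version 0.8.0 the only value the chart strictly required was proxy.secretToken; a minimal sketch of that file, generating the token inline:
# writes a minimal config.yaml with a freshly generated secret token
cat > config.yaml <<EOF
proxy:
  secretToken: "$(openssl rand -hex 32)"
EOF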
And install JupyterHub:
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.0 \
--values config.yaml
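To check on the deployment, you can watch the pods come up and then look for the public address of the proxy service (variables as set above):
kubectl get pods --namespace $NAMESPACE --watch
kubectl get service --namespace $NAMESPACE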

Related

How do I check which runtime (OCI, CRI-O, containerd, or runc) is being used?

I have deployed pods using the kubectl apply command and I can see the pods running:
$ kubectl describe pod test-pod -n sample | grep -i container
Containers:
Container ID: containerd://ce6cd9XXXXXX69538XXX
ContainersReady True
Can I say that it's using the containerd runtime? How do I verify the runtime used by the containers?
I am also getting some errors like the one below in the pod:
kubectl logs test-pod -n sample
'docker.images' is not supported: Cannot fetch data: Get http://1.28/images/json: dial unix /var/run/docker.sock: connect: no such file or directory.
Is it because I am not using the Docker runtime?
As I already mentioned in a comment, the command is:
kubectl get nodes -o wide
It returns the container runtime for each node.
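Incidentally, the containerd:// prefix on the Container ID in your describe output already points at containerd, and the docker.sock error is consistent with that: something in the pod is trying to talk to the Docker socket, which won't exist on a containerd node. For a script-friendly check, the runtime is also recorded in each node's status:
# prints "<node name> <tab> <runtime>://<version>" per node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'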

Unable to install Jenkins on Minikube using Helm due to the permission

I've been trying to install Jenkins using Helm on Minikube, according to the official article:
https://www.jenkins.io/doc/book/installing/kubernetes/
It turns out that I can't bring up the Jenkins Pod; kubectl logs -f jenkins-0 -c init -n jenkins gives me this error:
disable Setup Wizard
/var/jenkins_config/apply_config.sh: 4: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/jenkins.install.UpgradeWizard.state: Permission denied
My assumption is that this issue relates to permissions in the Dockerfile, or it might relate to the values defined in jenkins-values.yaml. I've changed some parameters to the recommended values:
storageClass: jenkins-pv
serviceAccount:
  create: false
  name: jenkins
  annotations: {}
serviceType: NodePort
Release detail:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
jenkins jenkins 1 2021-01-04 15:58:00.022465588 +0700 +07 deployed jenkins-3.0.14 2.263.1
Is there any way to fix this?
Thanks
It seems that, for some reason, the volume is mounted with insufficient access rights. You can try running your container as the root user; that may solve the issue. Put these lines into your values.yaml:
runAsUser: 0
fsGroup: 0
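Then roll the change out and watch the init container again; a sketch, assuming the chart repo was added as jenkinsci per the article and using the jenkins-values.yaml from your question:
# assumes: helm repo add jenkinsci https://charts.jenkins.io
helm upgrade jenkins jenkinsci/jenkins -n jenkins -f jenkins-values.yaml
kubectl logs -f jenkins-0 -c init -n jenkins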

Minikube running in Docker mode returns 503 when launching the dashboard

I have started to learn Minikube using some of this tutorial and a bit of this one. My plan is to use the "none" driver to use Docker rather than the standard VirtualBox.
My purpose is to learn some infra/operations techniques that are more flexible than Docker Swarm. There are a few docker run switches that Swarm does not support, so I am looking at alternatives.
When setting this up, I had a couple of false starts, as I did not specify --vm-driver=none initially, and I had to do a sudo rm -rf ~/.minikube and/or a sudo minikube delete to stop using VirtualBox. (Although I don't think it is relevant, I will mention anyway that I am working inside a VirtualBox Linux Mint VM, as a matter of long-standing security preference.)
So, I think I have a mostly working installation of Minikube, but something is not right with the dashboard, and since the Hello World tutorial asks me to get that working, I would like to persist with this.
Here is the command and error:
$ sudo minikube dashboard
🔌 Enabling dashboard ...
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
💣 http://127.0.0.1:41303/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
{snipped many more of these}
Minikube itself looks OK:
$ sudo minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 10.0.2.15
However, it looks like some components have not been able to start, though there is no indication of why they are having trouble:
$ sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-2br2c 0/1 CrashLoopBackOff 16 62m
kube-system coredns-fb8b8dccf-nq4b8 0/1 CrashLoopBackOff 16 62m
kube-system etcd-minikube 1/1 Running 2 60m
kube-system kube-addon-manager-minikube 1/1 Running 3 61m
kube-system kube-apiserver-minikube 1/1 Running 2 61m
kube-system kube-controller-manager-minikube 1/1 Running 3 61m
kube-system kube-proxy-dzqsr 1/1 Running 0 56m
kube-system kube-scheduler-minikube 1/1 Running 2 60m
kube-system kubernetes-dashboard-79dd6bfc48-94c8l 0/1 CrashLoopBackOff 12 40m
kube-system storage-provisioner 1/1 Running 3 62m
I am assuming that a zero in the READY column means that something was not able to start.
I have been issuing commands either with or without sudo, so that might be related. Sometimes there are config files in my non-root ~/.minikube folder that are owned by root, and I have been forced to use sudo to progress further.
This, from kubectl cluster-info, seems OK:
Kubernetes master is running at https://10.0.2.15:8443
KubeDNS is running at https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Incidentally, I don't really know what these various status commands do, or whether they are relevant - I have found some similar posts here and on GitHub, and their respective authors used these commands to write questions and bug reports.
This API status looks like it is in a pickle, but I don't know whether it is relevant (I found it through semi-random digging):
https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
{
"kind": "Status",
"apiVersion": "v1",
"metadata": { },
"status": "Failure",
"message": "services \"kube-dns:dns\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
"reason": "Forbidden",
"details": {
"name": "kube-dns:dns",
"kind": "services"
},
"code": 403
}
I also managed to cause a Go crash, seen in sudo minikube logs:
panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "secrets" in API group "" in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc42011c2e0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x35e
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1367500, 0xc4200d0120, 0xc4200d0120, 0x1213a6e)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x64
main.initAuthManager(0x13663e0, 0xc420301b00, 0xc4204cdcd8, 0x1)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:185 +0x12c
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:103 +0x26b
I expect that would correspond to the 503 I am getting, which is a server error of some kind.
Some versions:
$ minikube version
minikube version: v1.0.0
$ docker --version
Docker version 18.09.2, build 6247962
$ sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Related links:
503 dashboard errors on a Mac with Hyperkit - I am on Linux Mint and not using Hyperkit.
Lots of folks with 503 dashboard errors - the main advice here is to delete the cluster with minikube delete, which I have already done for other reasons.
What can I try next to debug this?
It looks like I needed the rubber-ducking of this question in order to find an answer. The Go crash was the thing to have researched, and is documented in this bug report.
The command to create the missing role binding is:
$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created
Then we need to get the name of the system pod for the dashboard:
$ sudo kubectl get pods -n kube-system
Finally, delete the dashboard pod, using the name of your own dashboard pod in place of kubernetes-dashboard-5498ccf677-dq2ct:
$ kubectl delete pods -n kube-system kubernetes-dashboard-5498ccf677-dq2ct
pod "kubernetes-dashboard-5498ccf677-dq2ct" deleted
I think this removes the misconfigured dashboard, leaving a new one to spawn in its place when you issue this command:
sudo minikube dashboard
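If you want to confirm that the replacement pod actually spawned before relaunching, the dashboard pods typically carry the k8s-app=kubernetes-dashboard label:
# watch the new dashboard pod appear and go Ready (adjust the label if yours differs)
sudo kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard -w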
To my mind, the Go error looks sufficiently naked and unhandled that it needs catching, but then I am not au fait with Go. The bug report has been auto-closed by a CI bot, and several attempts to reopen it seem to have failed.
At a guess, I could have avoided this pain by setting the role config to start with. However, this is not noted in the Hello World tutorial, so it would not be reasonable to expect beginners to avoid this trap:
sudo minikube start --vm-driver=none --extra-config='apiserver.Authorization.Mode=RBAC'

How to fix "failed to ensure load balancer" error for nginx ingress

When setting up a new nginx-ingress using Helm and a static IP on Azure, the nginx controller never gets the static IP assigned; it always says <pending>.
I install the Helm chart as follows:
helm install stable/nginx-ingress --name <my-name> --namespace <my-namespace> --set controller.replicaCount=2 --set controller.service.loadBalancerIP="<static-ip-address>"
It says it installs correctly, but there is an error listed as well:
E0411 06:44:17.063913 13264 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:57881->127.0.0.1:57886: write tcp4 127.0.0.1:57881->127.0.0.1:57886: wsasend: An established connection was aborted by the software in your host machine.
I then do a kubectl get all -n <my-namespace> and everything is listed correctly, just with the external IP as <pending> for the controller.
I then do a kubectl describe -n <my-namespace> service/<my-name>-nginx-ingress-controller and this error is listed under Events:
Warning CreatingLoadBalancerFailed 11s (x4 over 47s) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service my-namespace/my-name-nginx-ingress-controller: timed out waiting for the condition.
Thank you kindly
For your issue, the likely reason is that your public IP is not in the same resource group and region as the AKS cluster. See the steps in Create an ingress controller with a static public IP address in Azure Kubernetes Service (AKS).
You can get the AKS node resource group through the CLI, like this:
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
If your public IP is in a different group or region, it will give the timeout error you are seeing.
Also make sure that the static IP for the ingress is in the node resource group, and that its SKU is Basic, not Standard.
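For illustration, a hedged sketch of creating such an IP inside the node resource group returned by the command above (the group and IP names here are placeholders):
az network public-ip create \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name myAKSPublicIP \
    --sku Basic \
    --allocation-method Static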

Failed to set up Kubernetes plugin for Jenkins

I have a brand new Kubernetes v1.8 cluster with two nodes (RBAC enabled). Jenkins is deployed as a StatefulSet, and the recommended ServiceAccount/Role and RoleBindings were created as well (from here). Cluster info:
$ kubectl cluster-info
Kubernetes master is running at https://10.182.255.35:6443
When I try to set up the Kubernetes cloud in the Jenkins settings, I get a 403 (Forbidden) error. I followed the plugin guide, created 'Kubernetes Service Account' credentials in Jenkins, and tried to configure the new cloud. Jenkins configuration screenshot. Here is the debug log from the plugin:
Nov 02, 2017 7:40:57 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesFactoryAdapter
Creating Kubernetes client: KubernetesFactoryAdapter [serviceAddress=https://10.182.255.35:6443, namespace=default, caCertData=null, credentials=org.csanchez.jenkins.plugins.kubernetes.ServiceAccountCredential#99ee54b6, skipTlsVerify=true, connectTimeout=0, readTimeout=0]
Nov 02, 2017 7:40:57 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud
Error connecting to https://10.182.255.35:6443
java.io.IOException: Unexpected response code for CONNECT: 403
at okhttp3.internal.connection.RealConnection.createTunnel(RealConnection.java:371)
...(skipped)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:605)
Caused: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list] for kind: [Pod] with name: [null] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
...(skipped)
At the same time, if I try to make an API call using this ServiceAccount from inside the pod, it works:
$ kubectl exec -ti jenkins-0 bash (open a shell in the pod)
bash-4.3$ KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
bash-4.3$ curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
    https://10.182.255.35:6443/api/v1/namespaces/default/pods
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/default/pods",
"resourceVersion": "90645"
},
"items": [
{
...(skipped)
Answering my own question: the problem was with my proxy settings. You need to specify the instance IP in the no_proxy environment variable during cluster setup.
I don't have enough points to vote up, but I just want to confirm that this was related to proxy settings, as mentioned by @Symydo. So either add the instance IP to the NO_PROXY env variable of the Pod, or remove the proxy settings if they are not necessary.
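As a sketch of the first option, assuming the StatefulSet is simply named jenkins (the pod jenkins-0 in the question suggests it is), the variable can be injected without editing manifests by hand:
# adds NO_PROXY to the pod template; 10.182.255.35 is the API server from the question
kubectl set env statefulset/jenkins NO_PROXY=10.182.255.35,localhost,127.0.0.1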
