kubernetes stack not writable - docker

I was able to follow all instructions mentioned here and create a cluster.
https://github.com/digitalocean/doks-example
I changed the image to my own, custom image.
# /usr/local/bin/kubectl --kubeconfig="k8s-1-14-2-do-0-blr1-1558848628228-kubeconfig.yaml" apply -f manifest1.yaml
service/doks-example1 created
deployment.extensions/doks-example1 created
The new app is deployed successfully...
# /usr/local/bin/kubectl --kubeconfig="k8s-1-14-2-do-0-blr1-1558848628228-kubeconfig.yaml" get service
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
doks-example    LoadBalancer   10.245.92.169   139.59.48.36    80:31378/TCP     14m
doks-example1   LoadBalancer   10.245.250.95   139.59.49.155   8887:32137/TCP   3m1s
kubernetes      ClusterIP      10.245.0.1      <none>          443/TCP          22m
But I am not able to create a new Jupyter notebook after logging in. I get a "Forbidden" error.
How do I make the container writable?
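If the image itself is read-only for the user the container runs as, one workaround is to mount a writable emptyDir volume over the notebook directory. A minimal sketch against the deployment in manifest1.yaml, assuming the image keeps notebooks under /home/jovyan (adjust the path and names to your image):

spec:
  template:
    spec:
      containers:
        - name: doks-example1
          image: your-custom-image      # placeholder for your image
          volumeMounts:
            - name: notebooks
              mountPath: /home/jovyan   # assumed notebook directory
      volumes:
        - name: notebooks
          emptyDir: {}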

This seems related to the cache issue discussed in this issue. Pressing F5 worked quickly with your configuration.

Related

Kubernetes - how to solve secret exchange problems during pod creation

This question relates to the problem
Deployment of Ingress-controller with Helm failed
but I also want to understand more about the background.
The basic situation is: pod creation fails with this error:
{"err":"Get "https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission": dial tcp 10.96.0.1:443: i/o timeout","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:232","time":"2022-02-22T10:47:49Z"}
I can see that the pod tries to get something from my Kubernetes cluster IP, which listens on 443:
NAMESPACE     NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP                  121d
default       nextcloud-service   ClusterIP   10.98.154.93   <none>        82/TCP                   13d
kube-system   kube-dns            ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   120d
My questions are:
Can I somehow check with a command whether the URL path really exists? (See the diagnostic sketch below.)
When will this secret be created, and how can I observe this?
Can I manipulate the cluster to use another port for this, like 8080 (non-secure)?
When I check my secrets with the command kubectl get secrets -A, I see the following results:
NAMESPACE       NAME                                  TYPE                                  DATA   AGE
default         default-token-95b8q                   kubernetes.io/service-account-token   3      122d
ingress-nginx   default-token-fbvmd                   kubernetes.io/service-account-token   3      21h
ingress-nginx   ingress-nginx-admission-token-cdfbf   kubernetes.io/service-account-token   3      11m
Can I somehow tell the deployment script (in values.yaml) the exact name of this secret?
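Regarding the first question, a rough way to check both that the secret exists and that the API endpoint is reachable, assuming you can run kubectl against the cluster and exec into some running pod:

# does the secret exist at all?
kubectl -n ingress-nginx get secret ingress-nginx-admission

# reproduce the failing request from inside a pod, using the standard
# in-cluster service-account token mount
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission

If the curl also times out, the problem is pod-to-apiserver networking (the CNI layer), not the secret itself.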

Unable to install Jenkins on Minikube using Helm due to the permission on mac

I've tried to install Jenkins on Minikube according to this article:
https://www.jenkins.io/doc/book/installing/kubernetes/
When I type kubectl logs pod/jenkins-0 init -n jenkins, I get:
disable Setup Wizard
/var/jenkins_config/apply_config.sh: 4: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/jenkins.install.UpgradeWizard.state: Permission denied
I'm almost sure that I have some problem with the file system on my Mac.
I did not create the serviceAccount from the article, because Helm did not see it and returned an error.
Instead, I changed this in jenkins-values.yaml:
serviceAccount:
  create: true
  name: jenkins
  annotations: {}
Then I tried setting the following values to 0. It had no effect.
runAsUser: 1000
fsGroup: 1000
Additional info:
kubectl get all -n jenkins
NAME            READY   STATUS                  RESTARTS   AGE
pod/jenkins-0   0/2     Init:CrashLoopBackOff   7          15m

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/jenkins         ClusterIP   10.104.114.29    <none>        8080/TCP    15m
service/jenkins-agent   ClusterIP   10.104.207.201   <none>        50000/TCP   15m

NAME                       READY   AGE
statefulset.apps/jenkins   0/1     15m
I also tried using different directories for the volume, like /Volumes/data, and adding 777 permissions to them.
There are a couple of potential causes here, but there is a solution that avoids switching to runAsUser: 0 (which breaks security assessments).
The folder /data/jenkins-volume is created as root by default, with a 755 permission set, so you can't create persistent data in this directory with the default Jenkins build.
To fix this, enter Minikube with $ minikube ssh and run: $ chown 1000:1000 /data/jenkins-volume
The other thing that could be biting you (after fixing the folder permissions) is SELinux policy, when you are running Kubernetes on a RHEL-based OS.
To fix this: $ chcon -R -t svirt_sandbox_file_t /data/jenkins-volume
It was resolved:
I just set runAsUser to 0 everywhere.
Setting runAsUser to 0 everywhere worked, but this is not the ideal solution due to potential security issues. It is good for a dev environment but not for prod.
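A middle ground that keeps the main container unprivileged is an init container that fixes ownership of the Jenkins home volume before Jenkins starts. A sketch only, assuming the volume is named jenkins-home and mounted at /var/jenkins_home (names vary between chart versions):

initContainers:
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
    securityContext:
      runAsUser: 0              # root only for this one-shot fix
    volumeMounts:
      - name: jenkins-home
        mountPath: /var/jenkins_home

This way only the short-lived init step runs as root, while the Jenkins container itself keeps runAsUser: 1000.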

Kubernetes cannot access Grafana and Prometheus from Google Cloud Platform

I have followed this link to install Grafana/Prometheus on Google Cloud Kubernetes. I believe it deployed successfully; please find the following output for reference.
Service created successfully:
kubectl --namespace=monitoring get services
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana      NodePort   10.27.249.8     <none>        3000:32703/TCP   1h
prometheus   NodePort   10.27.249.233   <none>        9090:31939/TCP   1h
Namespace created successfully:
kubectl get namespaces
NAME          STATUS   AGE
default       Active   19h
kube-public   Active   19h
kube-system   Active   19h
monitoring    Active   1h
Pods response:
kubectl --namespace=monitoring get pods
NAME                          READY   STATUS             RESTARTS   AGE
grafana-1323121529-8614m      1/1     Running            0          1h
node-exporter-lw8cr           0/1     CrashLoopBackOff   17         1h
node-exporter-nv85s           0/1     CrashLoopBackOff   17         1h
node-exporter-r2rfl           0/1     CrashLoopBackOff   17         1h
prometheus-3259208887-x2zjc   1/1     Running            0          1h
Now I am trying to expose an external IP, but I keep getting the following error: "Error from server (AlreadyExists): services "prometheus" already exists"
kubectl --namespace=monitoring expose deployment/prometheus --type=LoadBalancer
Error from server (AlreadyExists): services "prometheus" already exists
Edited
kubectl -n monitoring edit service prometheus
Edit cancelled, no changes made.
You have already deployed the Prometheus service manifest in the monitoring namespace, and now you are trying to deploy a service with the same name in the same namespace. That is not allowed: two services cannot coexist in the same namespace with the same name.
Solutions for your problem:
I would use the following command to edit the already-deployed service:
kubectl -n monitoring edit service prometheus
Your favourite text editor will pop up; you just need to update
type: LoadBalancer
and save. The service will be updated in place.
Edited:
If you are not able to use the above command, then follow these steps:
Edit the Prometheus service manifest file and update it with type: LoadBalancer.
Then apply it with kubectl apply -f prometheus-service.yaml.
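If you would rather avoid the interactive editor entirely, kubectl patch can flip the service type in one non-interactive command:

kubectl -n monitoring patch service prometheus -p '{"spec": {"type": "LoadBalancer"}}'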

Flannel fails in kubernetes cluster due to failure of subnet manager

I am running etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on a master node, as well as kubelet and kube-proxy on a minion node, as follows (all kube binaries are from Kubernetes 1.7.4):
# [master node]
./etcd
./kube-apiserver --logtostderr=true --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.10.10.0/24 --insecure-port 8080 --secure-port=0 --allow-privileged=true --insecure-bind-address 0.0.0.0
./kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080
./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080
# [minion node]
./kubelet --logtostderr=true --address=0.0.0.0 --api_servers=http://$MASTER_IP:8080 --allow-privileged=true
./kube-proxy --master=http://$MASTER_IP:8080
After this, if I execute kubectl get all --all-namespaces and kubectl get nodes, I get
NAMESPACE   NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     svc/kubernetes   10.10.10.1   <none>        443/TCP   27m

NAME       STATUS   AGE   VERSION
minion-1   Ready    27m   v1.7.4+793658f2d7ca7
Then, I apply flannel as follows:
kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml
Now, I see a pod is created, but with an error:
NAMESPACE     NAME                    READY   STATUS             RESTARTS   AGE
kube-system   kube-flannel-ds-p8tcb   1/2     CrashLoopBackOff   4          2m
When I check the logs inside the failed container in the minion node, I see the following error:
Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
My question is: how do I resolve this? Is this an SSL issue? What step am I missing in setting up my cluster?
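One quick diagnostic, assuming the flannel manifests created a flannel service account in kube-system, is to check whether that account actually has a token attached; without the ServiceAccount admission plugin enabled on the apiserver, no token gets mounted into pods:

kubectl -n kube-system get serviceaccounts
kubectl -n kube-system describe serviceaccount flannel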
Maybe your flannel YAML file has something wrong in it. You can try the following to reinstall flannel.
Check for an old flannel link:
ip link
If it shows flannel, delete it:
ip link delete flannel.1
Then install it; the default pod network CIDR is 10.244.0.0/16:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
You could try passing --etcd-prefix=/your/prefix and --etcd-endpoints=address to flanneld instead of --kube-subnet-mgr, so flannel gets its net-conf from the etcd server instead of the API server.
Keep in mind that you must push the net-conf to the etcd server first.
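Pushing the net-conf could look like the following, assuming the etcd v2 API and flannel's default /coreos.com/network prefix (match it to whatever you pass via --etcd-prefix):

etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'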
UPDATE
The problem (/var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory) can appear when the apiserver is executed without --admission-control=...,ServiceAccount,..., or when the kubelet runs inside a container (e.g. hyperkube); the latter was my case. If you want to run Kubernetes components inside a container, you need to pass the 'shared' option to the kubelet volume:
/var/lib/kubelet/:/var/lib/kubelet:rw,shared
Furthermore, enable the same option for Docker in docker.service:
MountFlags=shared
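Applied as a systemd drop-in, that could look like this (the drop-in path is a conventional choice, not mandated):

# /etc/systemd/system/docker.service.d/mount-flags.conf
[Service]
MountFlags=shared

Then reload and restart: systemctl daemon-reload && systemctl restart docker.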
Now the question is: is there a security hole with a shared mount?

Google Container Engine: Kubernetes is not exposing external IP after creating container

I am trying to create a "Hello Node" sample application in Google Container Engine, following this tutorial
However, even after running the command kubectl expose rc hello-node --type="LoadBalancer", it is not exposing an external IP to access the port.
vagrant@docker-host:~/node-app$ kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node:v1 --port=8080
replicationcontroller "hello-node" created
vagrant@docker-host:~/node-app$ kubectl expose rc hello-node --type="LoadBalancer"
service "hello-node" exposed
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR         AGE
hello-node   10.163.248.xxx                 8080/TCP   run=hello-node   14s
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR         AGE
hello-node   10.163.248.xxx                 8080/TCP   run=hello-node   23s
After a few moments, the external IP of the load balancer is listed in
the IP(s) column of the service
Usually it takes 1-2 minutes. You were waiting only 23 seconds. Try to wait a few moments more and it will be OK.
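You can also watch the service until the external IP shows up, instead of re-running the command by hand:

kubectl get services hello-node --watch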
