Kubernetes - how to solve secret exchange problems during pod creation - docker

This question relates to the problem
Deployment of Ingress-controller with Helm failed
but I also want to understand more about the background.
The basic situation is: pod creation fails with this error:
{"err":"Get \"https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission\": dial tcp 10.96.0.1:443: i/o timeout","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:232","time":"2022-02-22T10:47:49Z"}
I can see that the pod tries to get something from my Kubernetes cluster IP, which listens on 443:
NAMESPACE     NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP                  121d
default       nextcloud-service   ClusterIP   10.98.154.93   <none>        82/TCP                   13d
kube-system   kube-dns            ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   120d
My questions are:
Can I somehow check with a command whether that URL path really exists?
When will this secret be created, and how can I observe that?
Can I make the cluster use another port for this, e.g. 8080 (insecure)?
When I check my secrets with the command kubectl get secrets -A, I see the following results:
NAMESPACE       NAME                                  TYPE                                  DATA   AGE
default         default-token-95b8q                   kubernetes.io/service-account-token   3      122d
ingress-nginx   default-token-fbvmd                   kubernetes.io/service-account-token   3      21h
ingress-nginx   ingress-nginx-admission-token-cdfbf   kubernetes.io/service-account-token   3      11m
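Secret values are stored base64-encoded, so inspecting one of the tokens listed above means piping it through a decoder. The decoding step itself looks like this (the value below is a hypothetical stand-in for what `kubectl get secret <name> -o jsonpath='{.data.token}'` would return):

```shell
# hypothetical base64-encoded secret payload
encoded='dG9rZW4tdmFsdWU='
# decode it, as you would with real data pulled via kubectl
echo "$encoded" | base64 -d
```

which prints token-value.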
Can I somehow tell the deployment script (in values.yaml) the exact name of this secret?

Kubernetes: why does my NodePort service not get an external IP?

Environment information:
Cluster details: one master node and four worker nodes, all CentOS Linux release 7.8.2003 (Core).
Kubernetes version: v1.18.0
Zero to JupyterHub version: 0.9.0
Helm version: v2.11.0
Recently I tried to deploy "Zero to JupyterHub" on Kubernetes. My JupyterHub config file is shown below:
config.yaml
proxy:
  secretToken: "2fdeb3679d666277bdb1c93102a08f5b894774ba796e60af7957cb5677f40706"
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
    capacity: 10Gi
Note: I set the service type to NodePort because I don't have a cloud provider (this is deployed on my lab's server cluster). I also tried nginx-ingress and that failed too, which is why I am not using LoadBalancer.
But when I use this config file to install JupyterHub via Helm, I cannot access JupyterHub from a browser, even though all pods are running. The pod details are below:
kubectl get pod --namespace jhub
NAME                              READY   STATUS    RESTARTS   AGE
continuous-image-puller-8gxxk     1/1     Running   0          27m
continuous-image-puller-8tmdh     1/1     Running   0          27m
continuous-image-puller-lwdcx     1/1     Running   0          27m
continuous-image-puller-pszsr     1/1     Running   0          27m
hub-7b9cbbcf59-fbppq              1/1     Running   0          27m
proxy-6b699b54c8-2pxmb            1/1     Running   0          27m
user-scheduler-65f4cbb9b7-9vmfr   1/1     Running   0          27m
user-scheduler-65f4cbb9b7-lqfrh   1/1     Running   0          27m
and its services look like this:
kubectl get service --namespace jhub
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
hub            ClusterIP   10.10.55.78    <none>        8081/TCP                     28m
proxy-api      ClusterIP   10.10.27.133   <none>        8001/TCP                     28m
proxy-public   NodePort    10.10.97.11    <none>        443:30443/TCP,80:30080/TCP   28m
It seems to work well, right? (So I guessed.) But in fact I cannot use the IP 10.10.97.11 to access the Jupyter main page, and I did not get any external IP either.
So, my problems are:
Is there anything wrong with my config?
How do I get an external IP?
Finally, thank you so much for saving my day!
For a NodePort service you will not get an EXTERNAL-IP. You cannot use the CLUSTER-IP to access it from outside the Kubernetes cluster, because the CLUSTER-IP is only reachable from inside the cluster, typically from another pod. To access the service from outside the cluster you need to use NodeIP:NodePort, where NodeIP is the IP address of one of your Kubernetes nodes.
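As a sketch: 30080 is the http nodePort from the config above, and the node address is looked up with kubectl (the IP in the curl line is a hypothetical example; substitute one of your own nodes' INTERNAL-IP values):

```shell
# list node IPs (see the INTERNAL-IP column)
kubectl get nodes -o wide

# then hit the NodePort on any node; 192.168.1.10 is a placeholder
curl http://192.168.1.10:30080
```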

kubernetes stack not writable

I was able to follow all the instructions mentioned here and create a cluster:
https://github.com/digitalocean/doks-example
I changed the image to my own custom image.
# /usr/local/bin/kubectl --kubeconfig="k8s-1-14-2-do-0-blr1-1558848628228-kubeconfig.yaml" apply -f manifest1.yaml
service/doks-example1 created
deployment.extensions/doks-example1 created
The new app is deployed successfully...
# /usr/local/bin/kubectl --kubeconfig="k8s-1-14-2-do-0-blr1-1558848628228-kubeconfig.yaml" get service
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
doks-example    LoadBalancer   10.245.92.169   139.59.48.36    80:31378/TCP     14m
doks-example1   LoadBalancer   10.245.250.95   139.59.49.155   8887:32137/TCP   3m1s
kubernetes      ClusterIP      10.245.0.1      <none>          443/TCP          22m
But I am not able to create a new Jupyter notebook after logging in; I get a "Forbidden" error.
How do I make the container writable?
This seems related to the cache issue discussed in this issue. Pressing F5 worked quickly with your configuration.

Problems accessing a deployed application in a multi-node Kubernetes environment in VirtualBox

I have created a multi-node Kubernetes environment; my node details are:
kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
16-node-121     Ready    <none>   32m   v1.14.1   192.168.0.121   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://18.9.2
master-16-120   Ready    master   47m   v1.14.1   192.168.0.120   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://18.9.2
I created and exposed a service with the following command:
kubectl expose deployment hello-world --port=80 --target-port=8080
service/hello-world exposed
The service is created and exposed; its details are:
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
hello-world   ClusterIP   10.105.7.156   <none>        80/TCP    33m
Unfortunately, when I try to access my service with curl, I get a timeout error. The full service description is:
master-16-120@master-16-120:~$ kubectl describe service hello-world
Name:              hello-world
Namespace:         default
Labels:            run=hello-world
Annotations:       <none>
Selector:          run=hello-world
Type:              ClusterIP
IP:                10.105.7.156
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         192.168.1.2:8080
Session Affinity:  None
Events:            <none>
curl http://10.105.7.156:80
curl: (7) Failed to connect to 10.105.7.156 port 80: Connection timed out
Here I'm using Calico for the cluster network, installed from:
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
My Pod networking specification is:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
At last I found the solution. Thanks to Daniel's comment, which pointed me to it: the pod network CIDR 192.168.0.0/16 overlapped with my hosts' own LAN (my nodes live on 192.168.0.120 and 192.168.0.121), so pod and service traffic was misrouted. I changed my Kubernetes pod network CIDR (and the matching Calico configuration) as follows:
--pod-network-cidr=10.10.0.0/16
I also configured the hosts file (/etc/hosts) on the master, master-16-120:
192.168.0.120 master-16-120
192.168.0.121 16-node-121
and the same on the node 16-node-121 (/etc/hosts):
192.168.0.120 master-16-120
192.168.0.121 16-node-121
Now my Kubernetes cluster is ready to go.
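The fix can be sketched end to end. The CIDR must match in both the kubeadm flag and Calico's manifest (the CALICO_IPV4POOL_CIDR value in calico.yaml); 10.10.0.0/16 is the value chosen in this answer:

```shell
# re-initialise the control plane with a pod CIDR that does not
# overlap the hosts' 192.168.0.x LAN
sudo kubeadm init --pod-network-cidr=10.10.0.0/16

# in calico.yaml, make the IP pool match before applying:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.10.0.0/16"
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
```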

Kubernetes: cannot access Grafana and Prometheus from Google Cloud Platform

I have followed this link to install Grafana/Prometheus on Google Cloud Kubernetes. I believe it deployed successfully; please see the following output for reference.
Services created successfully:
kubectl --namespace=monitoring get services
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana      NodePort   10.27.249.8     <none>        3000:32703/TCP   1h
prometheus   NodePort   10.27.249.233   <none>        9090:31939/TCP   1h
Namespace created successfully:
kubectl get namespaces
NAME          STATUS   AGE
default       Active   19h
kube-public   Active   19h
kube-system   Active   19h
monitoring    Active   1h
Pod status:
kubectl --namespace=monitoring get pods
NAME                          READY   STATUS             RESTARTS   AGE
grafana-1323121529-8614m      1/1     Running            0          1h
node-exporter-lw8cr           0/1     CrashLoopBackOff   17         1h
node-exporter-nv85s           0/1     CrashLoopBackOff   17         1h
node-exporter-r2rfl           0/1     CrashLoopBackOff   17         1h
prometheus-3259208887-x2zjc   1/1     Running            0          1h
Now I am trying to expose an external IP for Grafana, but I keep getting the following error: "Error from server (AlreadyExists): services "prometheus" already exists"
kubectl --namespace=monitoring expose deployment/prometheus --type=LoadBalancer
Error from server (AlreadyExists): services "prometheus" already exists
Edit:
kubectl -n monitoring edit service prometheus
Edit cancelled, no changes made.
You have already deployed the Prometheus service manifest in the monitoring namespace, and now you are trying to create a service with the same name in the same namespace. That is not allowed: two Services cannot co-exist in the same namespace with the same name.
Solutions for your problem:
I would use the following command to edit the already deployed Service:
kubectl -n monitoring edit service prometheus
Your favourite text editor will then pop up; you just need to change the service type to
type: LoadBalancer
and save. Your service will be updated in place.
Edit:
If you are not able to use the above command, then do the following:
Edit the Prometheus service manifest file and update it with type: LoadBalancer.
Then apply it: kubectl apply -f prometheus-service.yaml
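If the interactive edit keeps getting cancelled (as in the "Edit cancelled, no changes made." output above), a non-interactive alternative is kubectl patch:

```shell
# switch the existing Service to LoadBalancer without opening an editor
kubectl -n monitoring patch service prometheus -p '{"spec":{"type":"LoadBalancer"}}'

# then watch for the external IP to be assigned
kubectl -n monitoring get service prometheus --watch
```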

OpenShift: internal Docker registry repo address is not the same as the docker-registry service cluster IP

My steps on one of my cluster master servers:
Create the router:
# oadm router ose-router --replicas=1 --credentials='/etc/origin/master/openshift-router.kubeconfig' --images='openshift-register.com.cn:5000/openshift3/ose-${component}:v3.4' --service-account=router
Create the Docker registry:
# oadm registry --config=/etc/origin/master/admin.kubeconfig --service-account=registry --images='openshift-register.com.cn:5000/openshift3/ose-${component}:v3.4'
The router and registry are created successfully, and we can see the docker-registry cluster IP address is 172.30.182.170:
# oc get svc
NAME              CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.182.170   <none>        5000/TCP                  22s
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     6d
ose-router        172.30.80.196    <none>        80/TCP,443/TCP,1936/TCP   1m
Querying the image streams in namespace openshift, the Docker repo IP is 172.30.137.159:
# oc get is -n openshift
NAME         DOCKER REPO                                TAGS
jenkins      172.30.137.159:5000/openshift/jenkins      2,1
mariadb      172.30.137.159:5000/openshift/mariadb      10.1
mongodb      172.30.137.159:5000/openshift/mongodb      3.2,2.6,2.4
mysql        172.30.137.159:5000/openshift/mysql        5.7,5.6,5.5
nodejs       172.30.137.159:5000/openshift/nodejs       4,0.10
perl         172.30.137.159:5000/openshift/perl         5.24,5.20,5.16
php          172.30.137.159:5000/openshift/php          5.5,7.0,5.6
postgresql   172.30.137.159:5000/openshift/postgresql   9.5,9.4,9.2
python       172.30.137.159:5000/openshift/python       3.4,3.3,2.7 + more...
redis        172.30.137.159:5000/openshift/redis        3.2
ruby         172.30.137.159:5000/openshift/ruby         2.3,2.2,2.0
My concern and question: the Docker repo address should use my docker-registry service cluster IP by default, so why was a different IP generated, and how/where can I find where this Docker repo IP comes from? I'm really new to OpenShift, so any suggestion would be appreciated.
[root@ocp-master01 ~]# oc get svc
NAME              CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.182.170   <none>        5000/TCP                  49m
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     7d
ose-router        172.30.80.196    <none>        80/TCP,443/TCP,1936/TCP   50m
[root@ocp-master01 ~]# oc get svc -o yaml | grep IP
clusterIP: 172.30.182.170
sessionAffinity: ClientIP
type: ClusterIP
clusterIP: 172.30.0.1
sessionAffinity: ClientIP
type: ClusterIP
clusterIP: 172.30.80.196
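One plausible explanation (an assumption, not something the output above proves): an image stream records the registry's service address in its status at the time it is created, so if the docker-registry service was ever deleted and recreated, its ClusterIP changes while existing image streams keep the old address. You can compare the two directly:

```shell
# current ClusterIP of the registry service
oc get svc docker-registry -o jsonpath='{.spec.clusterIP}'

# repository address recorded on one of the image streams
oc get is jenkins -n openshift -o jsonpath='{.status.dockerImageRepository}'
```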
