Google Container Engine: Kubernetes is not exposing external IP after creating container

I am trying to create a "Hello Node" sample application in Google Container Engine, following this tutorial.
However, even after running the command kubectl expose rc hello-node --type="LoadBalancer", no external IP is exposed for accessing the port.
vagrant@docker-host:~/node-app$ kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node:v1 --port=8080
replicationcontroller "hello-node" created
vagrant@docker-host:~/node-app$ kubectl expose rc hello-node --type="LoadBalancer"
service "hello-node" exposed
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR         AGE
hello-node   10.163.248.xxx                 8080/TCP   run=hello-node   14s
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR         AGE
hello-node   10.163.248.xxx                 8080/TCP   run=hello-node   23s

After a few moments, the external IP of the load balancer is listed in the IP(s) column of the service.
Usually it takes 1-2 minutes; you waited only 23 seconds. Wait a few moments more and it will be OK.
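If you don't want to keep re-running the command, kubectl can watch the service for you. A minimal sketch, using the service name from the question:
kubectl get services hello-node --watch
kubectl describe services hello-node
The --watch flag prints a new line whenever the service changes, and describe shows the load-balancer events in case the external IP never arrives.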

Related

Kubernetes - how to solve secret exchange problems during pod creation

This question relates to the problem
Deployment of Ingress-controller with Helm failed
but I also want to understand more about the background.
The basic situation is: pod creation fails with this error:
{"err":"Get \"https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission\": dial tcp 10.96.0.1:443: i/o timeout","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:232","time":"2022-02-22T10:47:49Z"}
I can see that the pod tries to get something from my Kubernetes cluster IP, which listens on 443:
NAMESPACE     NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP                  121d
default       nextcloud-service   ClusterIP   10.98.154.93   <none>        82/TCP                   13d
kube-system   kube-dns            ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   120d
My questions are:
Can I somehow check with a command whether the URL path really exists?
When will this secret be created, and how can I observe this?
Can I make the cluster use another port for this, like 8080 (insecure), or so?
When I check my secrets with the command kubectl get secrets -A, I see the following results:
NAMESPACE       NAME                                  TYPE                                  DATA   AGE
default         default-token-95b8q                   kubernetes.io/service-account-token   3      122d
ingress-nginx   default-token-fbvmd                   kubernetes.io/service-account-token   3      21h
ingress-nginx   ingress-nginx-admission-token-cdfbf   kubernetes.io/service-account-token   3      11m
Can I somehow tell the deployment script (in values.yaml) the exact name of this secret?
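To answer the first question in command form: you can request the same API path yourself from inside a pod, using the pod's service account token. A minimal sketch, assuming a shell inside a pod in the ingress-nginx namespace with curl available:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://10.96.0.1:443/api/v1/namespaces/ingress-nginx/secrets/ingress-nginx-admission
A 200 response means the path exists, a 404 means the secret has not been created yet, a 403 means the service account lacks permission, and an i/o timeout like the one in your log points at a networking problem (CNI, kube-proxy, firewall) rather than a missing secret.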

kubernetes stack not writable

I was able to follow all instructions mentioned here and create a cluster.
https://github.com/digitalocean/doks-example
I changed the image to my own, custom image.
# /usr/local/bin/kubectl --kubeconfig="k8s-1-14-2-do-0-blr1-1558848628228-kubeconfig.yaml" apply -f manifest1.yaml
service/doks-example1 created
deployment.extensions/doks-example1 created
The new app is deployed successfully...
# /usr/local/bin/kubectl --kubeconfig="k8s-1-14-2-do-0-blr1-1558848628228-kubeconfig.yaml" get service
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
doks-example    LoadBalancer   10.245.92.169   139.59.48.36    80:31378/TCP     14m
doks-example1   LoadBalancer   10.245.250.95   139.59.49.155   8887:32137/TCP   3m1s
kubernetes      ClusterIP      10.245.0.1      <none>          443/TCP          22m
But I am not able to create a new jupyter notebook after logging in. I get a "Forbidden" error.
How do I make the container writable?
This seems related to the cache issue discussed in this issue. Pressing F5 made it work quickly with your configuration.

Routing between Kubernetes cluster and Docker container in the VM

I have set up a Kubernetes cluster in a VM (Ubuntu 18.04.1 LTS) on the Azure cloud using preconfigured scripts.
A MongoDB Docker container is running alongside the K8s cluster. My aim is to connect MongoDB to the CMS container which is running inside K8s.
Docker containers:
$ docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED        STATUS        PORTS                       NAMES
3883f7b397cf   mongo                                 "docker-entrypoint.s…"   5 hours ago    Up 5 hours    0.0.0.0:27017->27017/tcp    mongodb
299239d90cbb   mirantis/kubeadm-dind-cluster:v1.12   "/sbin/dind_init sys…"   27 hours ago   Up 27 hours   8080/tcp                    kube-node-2
34c8bd5fad2e   mirantis/kubeadm-dind-cluster:v1.12   "/sbin/dind_init sys…"   27 hours ago   Up 27 hours   8080/tcp                    kube-node-1
15a2d6521e6e   mirantis/kubeadm-dind-cluster:v1.12   "/sbin/dind_init sys…"   27 hours ago   Up 27 hours   127.0.0.1:32768->8080/tcp   kube-master
Kubernetes service:
$ kubectl get services -o wide
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        26h    <none>
mycms        LoadBalancer   10.97.53.114   <pending>     80:31664/TCP   112s   app=mycms,tier=frontend
Kubernetes pods:
$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP            NODE          NOMINATED NODE
mycms-dc4978ffc-khvj2   1/1     Running   0          4m8s   10.244.2.13   kube-node-1   <none>
The MongoDB container's IP address is 172.17.0.2.
The Kubernetes master container's IP address is 10.192.0.2.
The Kubernetes node 1 container's IP address is 10.192.0.3.
The Kubernetes node 2 container's IP address is 10.192.0.4.
The CMS pod is running on 10.244.2.13, which is inside the k8s container.
For testing, I have installed a mongo client on the host and tested the connection, which works well.
But the CMS doesn't reach the MongoDB container (I am passing the Mongo connection string to the pod in an environment variable).
CMS pod's log
MongoError: failed to connect to server [172.17.0.2:27017] on first connect [MongoError: connect EHOSTUNREACH 172.17.0.2:27017]
How do I route between the MongoDB container and the CMS container? Is anything wrong or missing in my approach?
Please let me know if you need further information. Thanks!
You need to use the IP address of the host where Docker is installed, not the MongoDB container's internal IP address, to connect to MongoDB from the Kubernetes cluster or from any other host. According to the output of your docker ps -a, you have published port 27017 for the MongoDB container, therefore <hostIP>:27017 should be used, not 172.17.0.2:27017.
By default, Kubernetes places no restrictions on connections to destinations outside the cluster.
Also, you may have firewall rules in Azure that forbid connections between hosts.
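In practice that means the Mongo connection string passed to the pod should point at the host. A minimal sketch of the pod's environment block; the variable name MONGO_URL and database name cms are assumptions for illustration:
env:
- name: MONGO_URL
  value: "mongodb://<hostIP>:27017/cms"  # <hostIP> = the Docker host's address, not 172.17.0.2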

Kubernetes cannot access grafana and prometheus from Google cloud platform

I have followed this link to install Grafana/Prometheus in Google Cloud Kubernetes. I believe it was deployed successfully; please find the following output for reference.
Services created successfully:
kubectl --namespace=monitoring get services
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana      NodePort   10.27.249.8     <none>        3000:32703/TCP   1h
prometheus   NodePort   10.27.249.233   <none>        9090:31939/TCP   1h
Namespace created successfully:
kubectl get namespaces
NAME          STATUS   AGE
default       Active   19h
kube-public   Active   19h
kube-system   Active   19h
monitoring    Active   1h
Pods response:
kubectl --namespace=monitoring get pods
NAME                          READY   STATUS             RESTARTS   AGE
grafana-1323121529-8614m      1/1     Running            0          1h
node-exporter-lw8cr           0/1     CrashLoopBackOff   17         1h
node-exporter-nv85s           0/1     CrashLoopBackOff   17         1h
node-exporter-r2rfl           0/1     CrashLoopBackOff   17         1h
prometheus-3259208887-x2zjc   1/1     Running            0          1h
Now I am trying to expose the external IP for Grafana, but I keep getting the following exception: "Error from server (AlreadyExists): services "prometheus" already exists"
kubectl --namespace=monitoring expose deployment/prometheus --type=LoadBalancer
Error from server (AlreadyExists): services "prometheus" already exists
Edited
kubectl -n monitoring edit service prometheus
Edit cancelled, no changes made.
You have already deployed the Prometheus service manifest in the monitoring namespace, and now you are trying to create a service with the same name in the same namespace. That is not allowed, as two services cannot co-exist in the same namespace with the same name.
Solutions for your problem:
I would use the following command to edit the already deployed service:
kubectl -n monitoring edit service prometheus
Your favourite text editor will pop up; you just need to update the service type to
type: LoadBalancer
and save. Your service will then be edited in place.
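If you prefer a non-interactive route, the same change can be made with kubectl patch (my addition, not part of the steps above):
kubectl -n monitoring patch service prometheus -p '{"spec":{"type":"LoadBalancer"}}'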
Edited
If you are not able to use the above command, then follow these steps:
Edit the Prometheus service manifest file and update it with type: LoadBalancer.
Then apply it with kubectl apply -f prometheus-service.yaml.
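A minimal sketch of what the updated manifest might look like; the port is taken from the NodePort output above (9090:31939/TCP), while the selector label is an assumption, since the original manifest is not shown:
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  type: LoadBalancer   # changed from NodePort
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    app: prometheus    # assumption: match the labels on your Prometheus pods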

kubectl run does not create replicacontroller

I'm a newbie to Kubernetes, and I'm using Google Cloud Container Engine. I just followed the tutorials below:
https://cloud.google.com/container-engine/docs/tutorials/http-balancer
http://kubernetes.io/docs/hellonode/#create-your-pod
In these tutorials, I should get a replication controller after I run kubectl run, but there are no replication controllers, so I cannot run kubectl expose rc to open a port.
Here are the results of the commands:
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl expose rc nginx --target-port=80 --type=NodePort
Error from server: replicationcontrollers "nginx" not found
Here is my result when I run "kubectl get rc,svc,ingress,deployments,pods":
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl get rc,svc,ingress,deployments,pods
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.240.1   <none>        443/TCP   12m

NAME            RULE   BACKEND    ADDRESS           AGE
basic-ingress   -      nginx:80   107.178.247.247   12m

NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           11m

NAME                    READY   STATUS    RESTARTS   AGE
nginx-198147104-zgo7m   1/1     Running   0          11m
One possible solution is to create a YAML file which defines the replication controller. But is there any way to create a replication controller via the kubectl run command, as in the tutorials above?
Thanks,
Now that kubectl run creates a deployment, you specify that the type being exposed is a deployment rather than a replication controller:
kubectl expose deployment nginx --target-port=80 --type=NodePort
The team might still be updating the docs to reflect 1.2. Note the output you got:
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
kubectl run now creates a deployment + replica set.
To view these you can run kubectl get deployments and kubectl get rs respectively.
Deployments are essentially a nicer way to perform rolling updates server side, but there's a little more to it. See the docs: http://kubernetes.io/docs/user-guide/deployments/
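Put together, a quick sketch of the post-1.2 workflow, using the nginx names from the question:
kubectl run nginx --image=nginx --port=80                          # creates a Deployment (plus a ReplicaSet)
kubectl get deployments                                            # shows "nginx"
kubectl get rs                                                     # shows the backing ReplicaSet
kubectl expose deployment nginx --target-port=80 --type=NodePort   # exposes the Deployment, not an rc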
In version 1.15.0, it works as follows.
[root@k8smaster ~]# kubectl run guestbook --image=coolguy/k8s_guestbook:1.0 --port=8080 --generator=run/v1
kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
***replicationcontroller/guestbook created***
In version 1.19.0:
[root@k8smaster ~]# kubectl run guestbook --image=dmsong2008/k8s_guestbook:1.0 --port=8080 --generator=run/v1
***Flag --generator has been deprecated, has no effect and will be removed in the future.***
pod/guestbook created
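For completeness, the YAML route mentioned in the question would look roughly like this. A minimal sketch; the name and labels are assumptions chosen to match the question's nginx example:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
After kubectl apply -f rc.yaml, the original kubectl expose rc nginx command works as in the older tutorials.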
