Minikube running in Docker, and port forwarding

I'm pretty well versed in Docker, but I haven't got Minikube/K8s working yet. I first tried setting up artifactory-oss with Helm but failed to connect to the LoadBalancer. Now I'm just trying the basic hello-minikube NodePort setup as a sanity check.
When I do minikube start, it starts up minikube in Docker:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebabea521ffe gcr.io/k8s-minikube/kicbase:v0.0.18 "/usr/local/bin/entr…" 2 weeks ago Up 36 minutes 127.0.0.1:49167->22/tcp, 127.0.0.1:49166->2376/tcp, 127.0.0.1:49165->5000/tcp, 127.0.0.1:49164->8443/tcp, 127.0.0.1:49163->32443/tcp minikube
So Minikube only has ports 4916(3/4/5/6/7) open?
So I installed hello-minikube:
> kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
> kubectl expose deployment hello-minikube --type=NodePort --port=8080
> minikube ip
192.168.49.2
> minikube service list
|----------------------|------------------------------------|--------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|------------------------------------|--------------|---------------------------|
| default | hello-minikube | 8080 | http://192.168.49.2:30652 |
| default | kubernetes | No node port |
| kube-system | ingress-nginx-controller-admission | No node port |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|------------------------------------|--------------|---------------------------|
> minikube service --url hello-minikube
http://192.168.49.2:30652
I checked the firewall, and it has the ports I've opened:
> sudo firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens192
sources:
services: dhcpv6-client http https ssh
ports: 8000-9000/tcp 30000-35000/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-minikube-6ddfcc9757-hxxmf 1/1 Running 0 40m
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube NodePort 10.97.233.42 <none> 8080:30652/TCP 36m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19d
> kubectl describe services hello-minikube
Name: hello-minikube
Namespace: default
Labels: app=hello-minikube
Annotations: <none>
Selector: app=hello-minikube
Type: NodePort
IP Families: <none>
IP: 10.97.233.42
IPs: 10.97.233.42
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30652/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I've tried every IP and port combination, minikube tunnel, kube proxy, and a few other things, but I just can't find any port to access this service from another machine. I can't get an 'External-IP'. nmap finds a bunch of open ports if I scan from the machine itself.
> nmap -p 1-65000 localhost
Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-26 15:16 SAST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0013s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 64971 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
80/tcp open http
111/tcp open rpcbind
443/tcp open https
631/tcp open ipp
3000/tcp open ppp
5000/tcp open upnp
5050/tcp open mmcc
8060/tcp open unknown
8080/tcp open http-proxy
8082/tcp open blackice-alerts
9090/tcp open zeus-admin
9093/tcp open unknown
9094/tcp open unknown
9100/tcp open jetdirect
9121/tcp open unknown
9168/tcp open unknown
9187/tcp open unknown
9229/tcp open unknown
9236/tcp open unknown
33757/tcp open unknown
35916/tcp open unknown
41266/tcp open unknown
49163/tcp open unknown
49164/tcp open unknown
49165/tcp open unknown
49166/tcp open unknown
49167/tcp open unknown
But if I scan that machine from another machine on the network:
> nmap -p 1-65000 10.20.2.26
Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-26 15:23 SAST
Nmap scan report for 10.20.2.26
Host is up (0.00032s latency).
Not shown: 58995 filtered ports, 6001 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
8060/tcp open unknown
Those ports don't seem to be accessible. Any ideas?
-- EDIT 1:
The sysadmin says only 10.20.x.x IPs are reachable on our network, so 192.168.x.x and 10.96.x.x won't work. Perhaps the --service-cluster-ip-range flag is what I'm looking for; I will try it out next.

I faced a similar issue that I was banging my head against; this documentation was quite helpful. In my case I was accessing a Jenkins build server running in a Kubernetes cluster via minikube on macOS.
I followed these steps to get port forwarding working:
Confirm the port of your pod:
kubectl get pod <podname-f5d-48kbr> --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' -n <namespace-name>
Say the output displays
> 27013
Forward a local port to a port on the Pod like so:
kubectl port-forward <podname-deployment-f5db75f7-48kbr> 8080:27013 -n <namespace-name>
That should start the port forwarding, with output like:
Forwarding from 127.0.0.1:8080 -> 27013
Forwarding from [::1]:8080 -> 27013
Now access your application in the browser via http://localhost:8080/.
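Note that kubectl port-forward binds only to 127.0.0.1 by default, so that URL works only on the machine running kubectl. If you need to reach the forwarded port from another machine on the network (as in the original question), the --address flag makes it listen on other interfaces. A rough sketch, reusing the hello-minikube service from the question:
kubectl port-forward --address 0.0.0.0 service/hello-minikube 8080:8080
# the service is then reachable from other machines at http://<host-ip>:8080/,
# provided the host firewall allows traffic on port 8080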

Posted community wiki for better visibility. Feel free to expand it.
Based on this answer.
It seems there is no way to access a minikube cluster set up with --driver=docker from another host on the same local network.
The workaround is to use a different driver when setting up the minikube cluster:
--driver=virtualbox (recommended) -> use Bridged Adapter setting
--driver=none (potential issues)
For more details (how to setup etc.) please refer to this answer.
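If you switch drivers, recreating the cluster would look roughly like this (a sketch; it assumes VirtualBox is installed, and note that minikube delete destroys the existing cluster):
minikube delete
minikube start --driver=virtualbox
# optionally make virtualbox the default driver for future clusters:
minikube config set driver virtualbox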

Related

Accessing minikube LoadBalancer service using VM host IP

I have a VM (IP: 10.157.156.176) with Linux 7 installed. I am able to access it via SSH using the VM IP.
I have successfully installed kubectl and Minikube, and created a LoadBalancer service with 2 pods.
[10-157-156-176 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 14h v1.21.2
[10-157-156-176 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
customers-engagement-service LoadBalancer 10.106.146.66 <pending> 80:30654/TCP 14h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
[dc-user@ech-10-157-156-176 ~]$
[10-157-156-176 ~]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
customers-engagement-service-6f75f4df4b-vlpb8 1/1 Running 0 13h 172.17.0.6 minikube <none> <none>
customers-engagement-service-6f75f4df4b-zdjmd 1/1 Running 0 4h22m 172.17.0.5 minikube <none> <none>
[10-157-156-176 ~]$ minikube service customers-engagement-service --url
http://192.168.49.2:30654
I am able to access the service from within my VM (10.157.156.176) using the service URL:
[10-157-156-176 ~]$ curl -v http://192.168.49.2:30654/customers
* Trying 192.168.49.2:30654...
* TCP_NODELAY set
* Connected to 192.168.49.2 (192.168.49.2) port 30654 (#0)
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
I would like to access the service from a different machine (which has connectivity to the host VM) using the host VM IP (10.157.156.176) instead of the Minikube VM IP (192.168.49.2).
What changes do I have to make to achieve that?
For type LoadBalancer, you will see that the external IP stays in the pending state. You need to run minikube tunnel to expose it. To use the host IP you need to use a NodePort.
Here is a detailed document: https://minikube.sigs.k8s.io/docs/handbook/accessing/
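As a rough sketch of the two approaches (the local port 8080 below is an arbitrary choice): minikube tunnel populates the EXTERNAL-IP, but that IP is typically only routable from the VM itself, while a port-forward bound to all interfaces lets other machines reach the service through the host's own IP.
# option 1: populate the LoadBalancer EXTERNAL-IP (reachable from the VM)
minikube tunnel
kubectl get svc customers-engagement-service   # EXTERNAL-IP should no longer be <pending>
# option 2: expose the service on the host's own IP for other machines
kubectl port-forward --address 0.0.0.0 svc/customers-engagement-service 8080:80
# then, from another machine: curl http://10.157.156.176:8080/customers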

How to connect to the master node using nodeport Kubernetes

I have two pods running on two different VMs in the cluster, one on the master node and the other on the worker node. The Dockerfile below exposes port 31700 on the server side. The IP address of the server VM node is 192.168.56.105, and the client VM's IP address is 192.168.56.106.
Dockerfile
EXPOSE 31700
Server file
sock = socket()
sock.bind(('0.0.0.0',31700))
Client file
sock.connect(('192.168.56.105',31700))
Nodes: kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 25h v1.19.3
knode Ready worker 25h v1.19.3
Service: kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
myapp-service NodePort 10.108.144.147 <none> 80:31700/TCP 49m
Details of the service are described below:
kubectl describe services myapp-service
Name: myapp-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=edge-server
Type: NodePort
IP: 10.108.144.147
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31700/TCP
Endpoints: 192.168.189.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
When I try to connect with the command below, I get Connection refused on both VMs:
curl -v https://192.168.56.105:31700
I am able to ping the two pods. Please help me with connecting the server and client.
You need to use port-forwarding to access applications in your cluster
(see https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
You can forward a local port to a port on the Pod with:
kubectl port-forward service/kubernetes <local-port>:443
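Applied to the service in this question, that would look something like the following (the local port 8080 is an arbitrary choice; 80 is the Service port shown above):
kubectl port-forward service/myapp-service 8080:80
# the app is then reachable from the machine running kubectl at http://localhost:8080/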
myapp-service is exposing (listening on) NodePort 31700, but it forwards to port 80 inside the cluster, so you should use port 80.
In your Dockerfile it should be EXPOSE 80 instead of EXPOSE 31700 (assuming your container is listening on port 80).
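If the server really must keep listening on 31700 (as in the bind() call above), an alternative is to point the Service at that port instead. A sketch, reusing the app=edge-server selector and the 31700 NodePort from the question:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: edge-server
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 31700 # port the container actually listens on
    nodePort: 31700   # port exposed on every node
# after applying this, curl http://192.168.56.105:31700 should reach the server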

Unable to access service from Kubernetes master node

[root@kubemaster ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1deployment-c8b9c74cb-hkxmq 1/1 Running 0 12s 192.168.90.1 kubeworker1 <none> <none>
[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root@kubemaster ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m <none>
pod1service ClusterIP 10.101.174.159 <none> 80/TCP 16s creator=sai
Curl on master node:
[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
Curl on worker node 1 is successful for the cluster IP (this is the node where the pod is running):
[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
Curl fails on the other worker node as well:
[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
I was facing the same issue so this is what I did and it worked:
Brief: I am running 2 VMs for a 2-node cluster, 1 master node and 1 worker node. A Deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a Service which exposed that set of pods inside the cluster.
Issue: After deploying the Service, kubectl get service gave me the ClusterIP of that service and a port (BTW, I used NodePort instead of ClusterIP when writing the service.yaml). But curling that IP address and port just hung and eventually timed out.
Solution: Then I looked at the hierarchy: you need to contact the node on which the service's pod is located, on the port given by the NodePort (i.e. the one between 30000-32767). So first I ran kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the port (the one between 30000-32767), and curled it. My curl command was curl http://10.0.1.4:30669 and I was able to get the output.
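Condensed into commands (the node IP 10.0.1.4 and NodePort 30669 are the values from that particular cluster):
kubectl get nodes -o wide     # note the INTERNAL-IP of the node running the pod, e.g. 10.0.1.4
kubectl get service -o wide   # note the NodePort in the 30000-32767 range, e.g. 30669
curl http://10.0.1.4:30669    # curl <node-internal-ip>:<nodeport>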
First of all, you should always use the Service DNS name instead of cluster/dynamic IPs to access the deployed application. The Service DNS name would be <service-name>.<service-namespace>.svc.cluster.local, where cluster.local is the default Kubernetes cluster domain unless it has been changed.
Now, coming to the service accessibility: it may be a DNS issue. What you can do is check the kube-dns pod logs in the kube-system namespace. Also, try to curl from a standalone pod and see whether that works:
kubectl run --generator=run-pod/v1 bastion --image=busybox --command -- sleep 3600
kubectl exec -it bastion -- sh
# busybox ships wget rather than curl
wget -qO- pod1service.default.svc.cluster.local
If not, the next questions would be: where is the cluster running, and how was it created?
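To check the kube-dns logs mentioned above, something like the following should work (the k8s-app=kube-dns label is the usual default on the DNS pods):
kubectl logs -n kube-system -l k8s-app=kube-dns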

How to make kubernetes pod have access to PostgreSQL Pod

I am trying out local Kubernetes (Docker-on-Mac) and trying to submit a Spark job. The Spark job connects to a PostgreSQL database and does some calculations.
PostgreSQL is running in my cluster and, since I have published it, I can access it from the host via localhost:5432. However, when the Spark application tries to connect to PostgreSQL, it throws:
Exception in thread "main" org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get service postgresql-published
kubectl describe service spark-store-1588217023181-driver-svc
Name: spark-store-1588217023181-driver-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: spark-app-selector=spark-533ecb8556b6439eb938d487cc77c330,spark-role=driver
Type: ClusterIP
IP: None
Port: driver-rpc-port 7078/TCP
TargetPort: 7078/TCP
Endpoints: <none>
Port: blockmanager 7079/TCP
TargetPort: 7079/TCP
Endpoints: <none>
Session Affinity: None
How can I make my Spark job have access to the PostgreSQL service?
localhost appears in EXTERNAL-IP, but the Kubernetes cluster DNS system (CoreDNS) does not know how to resolve it to an IP address. EXTERNAL-IP is meant to be resolved by an external DNS server and is generally used to connect to Postgres from outside the Kubernetes cluster (i.e. from another system, or from the Kubernetes nodes), not from inside the cluster (i.e. from another pod).
Postgres should be accessible from the Spark pod via 10.106.15.112:5432 or postgresql-published:5432, because the cluster DNS system knows how to resolve those.
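For the Spark job, that usually means pointing its JDBC URL at the service name rather than localhost; a rough sketch (the database name is a placeholder):
# instead of jdbc:postgresql://localhost:5432/mydb, use the service DNS name:
jdbc:postgresql://postgresql-published.default.svc.cluster.local:5432/mydb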
Test the Postgres connectivity
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=<HERE_YOUR_PASSWORD>" --command -- psql --host <HERE_HOSTNAME=SVC_OR_IP> -U <HERE_USERNAME>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORTS
postgresql-published LoadBalancer 10.106.15.112 localhost 5432:31277
This means that the service is accessible within the cluster at 10.106.15.112:5432 or postgresql-published:5432, and externally at localhost:31277.
Please note that for a Pod, localhost is the Pod itself. In this case localhost looks ambiguous, but that is how the exposure works.

Kubernetes keeps removing Heapster & Grafana services due to already-used NodePort

I am running a Kubernetes cluster on Ubuntu (trusty) locally via Docker.
Since I'm using Vagrant to create the Ubuntu VM I had to modify the docker run command from the official Kubernetes guide a bit:
docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
gcr.io/google_containers/hyperkube:v1.3.0 \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--hostname-override=192.168.10.30 \
--config=/etc/kubernetes/manifests-multi \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
Additionally, running a reverse proxy allows me to access my cluster's services via a browser from outside of the VM:
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.3.0 \
/hyperkube proxy --master=http://127.0.0.1:8080 --v=2
These steps work fine and eventually I'm able to access the Kubernetes UI in my browser.
vagrant@trusty-vm:~$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Now I'd like to run Heapster in that Kubernetes cluster with an InfluxDB backend and a Grafana UI, just as described in this guide. In order to do so, I've cloned the Heapster repo and configured grafana-service.yaml to use an external IP by adding type: NodePort:
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana
Creating the services, rcs, etc.:
vagrant@trusty-vm:~/heapster$ kubectl create -f deploy/kube-config/influxdb/
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30593) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "monitoring-grafana" created
replicationcontroller "heapster" created
service "heapster" created
replicationcontroller "influxdb-grafana" created
service "monitoring-influxdb" created
vagrant@trusty-vm:~/heapster$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
vagrant@trusty-vm:~/heapster$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-y2yci 1/1 Running 0 32m
kube-system influxdb-grafana-6udas 2/2 Running 0 32m
kube-system k8s-master-192.168.10.30 4/4 Running 0 58m
kube-system k8s-proxy-192.168.10.30 1/1 Running 0 58m
kube-system kube-addon-manager-192.168.10.30 2/2 Running 0 57m
kube-system kube-dns-v17-y4cwh 3/3 Running 0 58m
kube-system kubernetes-dashboard-v1.1.0-bnbnp 1/1 Running 0 58m
vagrant@trusty-vm:~/heapster$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.0.0.1 <none> 443/TCP 18m
kube-system heapster 10.0.0.234 <none> 80/TCP 3s
kube-system kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 18m
kube-system kubernetes-dashboard 10.0.0.58 <none> 80/TCP 18m
kube-system monitoring-grafana 10.0.0.132 <nodes> 80/TCP 3s
kube-system monitoring-influxdb 10.0.0.197 <none> 8083/TCP,8086/TCP 16m
As you can see, everything seems to run smoothly and I can also access Grafana's UI at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ via a browser.
However, after like 1 minute, both Heapster and Grafana endpoints disappear from kubectl cluster-info.
vagrant@trusty-vm:~/heapster$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Browser output:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "endpoints \"monitoring-grafana\" not found",
  "reason": "NotFound",
  "details": {
    "name": "monitoring-grafana",
    "kind": "endpoints"
  },
  "code": 404
}
Pods are still up & running ...
vagrant@trusty-vm:~/heapster$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-y2yci 1/1 Running 0 32m
kube-system influxdb-grafana-6udas 2/2 Running 0 32m
kube-system k8s-master-192.168.10.30 4/4 Running 0 58m
kube-system k8s-proxy-192.168.10.30 1/1 Running 0 58m
kube-system kube-addon-manager-192.168.10.30 2/2 Running 0 57m
kube-system kube-dns-v17-y4cwh 3/3 Running 0 58m
kube-system kubernetes-dashboard-v1.1.0-bnbnp 1/1 Running 0 58m
... but Heapster and Grafana services have disappeared:
vagrant@trusty-vm:~/heapster$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.0.0.1 <none> 443/TCP 19m
kube-system kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 19m
kube-system kubernetes-dashboard 10.0.0.58 <none> 80/TCP 19m
kube-system monitoring-influxdb 10.0.0.197 <none> 8083/TCP,8086/TCP 17m
While checking the output of kubectl cluster-info dump I discovered the following errors:
I0713 09:31:09.088567 1 proxier.go:427] Adding new service "kube-system/monitoring-grafana:" at 10.0.0.227:80/TCP
E0713 09:31:09.273385 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" (:30593/tcp), skipping this nodePort: listen tcp :30593: bind: address alread$
I0713 09:31:09.395280 1 proxier.go:427] Adding new service "kube-system/heapster:" at 10.0.0.111:80/TCP
E0713 09:31:09.466306 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" (:30593/tcp), skipping this nodePort: listen tcp :30593: bind: address alread$
I0713 09:31:09.480468 1 proxier.go:502] Setting endpoints for "kube-system/monitoring-grafana:" to [172.17.0.5:3000]
E0713 09:31:09.519698 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" (:30593/tcp), skipping this nodePort: listen tcp :30593: bind: address alread$
I0713 09:31:09.532026 1 proxier.go:502] Setting endpoints for "kube-system/heapster:" to [172.17.0.4:8082]
E0713 09:31:09.558527 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" (:30593/tcp), skipping this nodePort: listen tcp :30593: bind: address alread$
E0713 09:31:17.249001 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:22.252280 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:27.257895 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:31.126035 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" (:30593/tcp), skipping this nodePort: listen tcp :30593: bind: address alread$
E0713 09:31:32.264430 1 server.go:294] Starting health server failed: E0709 09:32:01.153168 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" ($
E0713 09:31:37.265109 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:42.269035 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:47.270950 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:52.272354 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:31:57.273424 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0713 09:32:01.153168 1 proxier.go:887] can't open "nodePort for kube-system/monitoring-grafana:" (:30593/tcp), skipping this nodePort: listen tcp :30593: bind: address alread$
E0713 09:32:02.276318 1 server.go:294] Starting health server failed: listen tcp 127.0.0.1:10249: bind: address already in use
I0713 09:32:06.105878 1 proxier.go:447] Removing service "kube-system/monitoring-grafana:"
I0713 09:32:07.175025 1 proxier.go:447] Removing service "kube-system/heapster:"
I0713 09:32:07.210270 1 proxier.go:517] Removing endpoints for "kube-system/monitoring-grafana:"
I0713 09:32:07.249824 1 proxier.go:517] Removing endpoints for "kube-system/heapster:"
Apparently, the services and endpoints of Heapster & Grafana are removed due to nodePort being already in use. I didn't specify a designated nodePort in grafana-service.yaml, which means Kubernetes could choose one that isn't already used - so how can this be an error? Also, are there any ways to fix this?
OS: Ubuntu 14.04.4 LTS (trusty) | Kubernetes: v1.3.0 | Docker: v1.11.2
I ran into a very similar issue.
In the grafana-service.yaml file (and probably the heapster-service.yaml file) you have the line: kubernetes.io/cluster-service: 'true'. This label means that this service will be managed by the addon-manager. When the addon-manager runs its periodic checks, it will see that there is no grafana/heapster service defined in /etc/kubernetes/addons and will remove the service(s).
To work around this, you have two options:
Change the label to kubernetes.io/cluster-service: 'false' (see the sketch after this list).
Move the controller and service yaml files into /etc/kubernetes/addons (or wherever the addon-manager is configured to look for yaml files) on the master node.
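For the first option, the change in grafana-service.yaml would look roughly like this (a sketch based on the manifest in the question):
apiVersion: v1
kind: Service
metadata:
  labels:
    # 'false' tells the addon-manager to leave this Service alone
    kubernetes.io/cluster-service: 'false'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana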
Hope that helps
Same issue in our environment: K8s version 1.3.4, Docker 1.12, Heapster is the latest from the master branch.
