kubernetes - load balancer external endpoint is always localhost - docker

I was using minikube, and when I created a load balancer it would always give me a different IP as the external endpoint, and I was able to access my app.
But now I have changed to Docker's built-in Kubernetes, and when I create a load balancer, it always adds localhost:8181 as the external endpoint.
Here is my YAML:
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    app: app1
spec:
  #externalIPs:
  #  - 172.29.0.0
  ports:
    - protocol: TCP
      name: http
      port: 8181
      targetPort: 8181
  type: LoadBalancer
  selector:
    app: app1
It's the same as: kubectl expose deployment app1 --port=8181 --target-port=8181 --name=app1 --type=LoadBalancer
As you can see, I tried adding externalIPs; when I do that, both localhost and the external IP appear in the dashboard, but using the external IP doesn't work...
I would like it to generate an IP when I create a LoadBalancer so I can access my app from there, like I did with minikube.
Thanks for your time.

The official documentation says:
Type values and their behaviors are:
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
That is why with Kubernetes you have to have a cloud provider enabled (otherwise no external IP will be provisioned):
$ kubectl get svc
NAME   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
app1   LoadBalancer   10.0.2.46    <pending>     8181:30257/TCP   18s
While in minikube it is provisioned for you by running minikube service <service_name>:
$ kubectl get svc
NAME   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
app1   LoadBalancer   10.103.51.13   <pending>     8181:30129/TCP   68s
$ minikube service app1
|-----------|------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL                         |
|-----------|------|-------------|-----------------------------|
| default   | app1 | http/8181   | http://192.168.99.100:30129 |
|-----------|------|-------------|-----------------------------|
I would like it to generate an IP when I create a LoadBalancer so I can access my app from there, like I did with minikube.
There is an awesome post by Ales Nosek on this topic.
In short:
In order to be able to create a service of type LoadBalancer, a cloud provider has to be enabled in the configuration of the Kubernetes cluster. As of version 1.6, Kubernetes can provision load balancers on AWS, Azure, CloudStack, GCE and OpenStack.
It highly depends on what you'd like to achieve, but I believe that you may be interested in Ingress.
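For illustration, a minimal Ingress that routes all HTTP traffic to the app1 service could look like the sketch below (this uses the networking.k8s.io/v1 schema and assumes an ingress controller, e.g. ingress-nginx, is already installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 8181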

Related

Kubernetes cluster is not exposing external ip as <nodes>

Here is my service.yaml code:
kind: Service
apiVersion: v1
metadata:
  name: login
spec:
  selector:
    app: login
  ports:
    - protocol: TCP
      name: http
      port: 5555
      targetPort: login-http
  type: NodePort
I set the service type to NodePort, but when I run kubectl get svc, it does not show the external IP as 'nodes'. Here is the output:
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.100.0.1     <none>        443/TCP          7h
login        NodePort    10.100.70.98   <none>        5555:32436/TCP   5m
Please help me understand the mistake.
There is nothing wrong with your service; you should be able to access it using <your_vm_ip>:32436.
NodePort, as the name implies, opens a specific port on all the nodes (the VMs), and any traffic that is sent to this port is forwarded to the service. So, on your nodes, port 32436 is open and will receive all the external traffic on this port and forward it to the login service.
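For example, to reach the service from outside the cluster, look up a node address and curl the NodePort (keeping the <your_vm_ip> placeholder from above):

$ kubectl get nodes -o wide          # shows each node's INTERNAL-IP / EXTERNAL-IP
$ curl http://<your_vm_ip>:32436     # any node's IP works; traffic is forwarded to the login service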
EDIT:
nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node).
nodePort is unique, so two different services cannot have the same nodePort assigned. Once declared, the k8s master reserves that nodePort for that service. The nodePort is then opened on EVERY node (master and worker), including nodes that do not run a pod of that service; k8s iptables magic takes care of the routing. That way you can make your service request from outside your k8s cluster to any node on the nodePort without worrying about whether a pod is scheduled there or not.
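If you would rather not depend on a randomly assigned port, you can also pin the nodePort yourself. A minimal sketch of the spec (30555 is an arbitrary example; the value must fall within the cluster's node port range, 30000-32767 by default):

spec:
  type: NodePort
  selector:
    app: login
  ports:
    - protocol: TCP
      name: http
      port: 5555
      targetPort: login-http
      nodePort: 30555   # reserved cluster-wide for this service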
See the following article, it shows different ways to expose your services:
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0

How to access Kubernetes NodePort service in browser?

I have created a deployment for Jenkins in Kubernetes.
The pod is running fine. I've created a service to access Jenkins on service-ip:8080, but it doesn't seem to work.
When I create an Ingress on top of the service, I can access it using the public IP.
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: jenkins
spec:
  type: NodePort
  selector:
    app: jenkins
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: ui
I created my service as described above:
$ kubectl get svc --namespace=jenkins
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
jenkins-ui   NodePort   10.47.xx.xx   <none>        8080:30960/TCP   1d
I tried to access 10.47.xx.xx:8080 but I was not able to reach the Jenkins UI. What am I doing wrong? I also tried 10.47.xx.xx:30960.
I want to access my Jenkins UI using a service, but I want to keep it private within my cluster (an Ingress makes it public).
UPDATE:
$ kubectl describe svc jenkins-ui --namespace jenkins
Name:                     jenkins-ui
Namespace:                jenkins
Labels:                   <none>
Annotations:              <none>
Selector:                 app=jenkins
Type:                     NodePort
IP:                       10.47.xx.xx
Port:                     ui  8080/TCP
TargetPort:               8080/TCP
NodePort:                 ui  30960/TCP
Endpoints:                10.44.10.xx:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Accessing the pod on 10.44.10.xx:8080 does not work either.
If I understand correctly, you want any container running in your cluster to be able to access your jenkins service, but you don't want your jenkins service to be accessible outside your cluster to something like your browser?
In this case:
curl http://jenkins-ui.jenkins:8080
curl http://10.47.xx.xx:8080
will work perfectly fine from inside any container in your Kubernetes cluster (the service DNS name is <service>.<namespace>, and your service lives in the jenkins namespace, not default).
Also, you cannot access 10.47.xx.xx:8080 from outside your cluster, because that IP is only valid/available inside your Kubernetes cluster.
If you want to access it from outside the cluster, an ingress controller or connecting to http://<node-ip>:30960 are the only ways to reach the jenkins-ui k8s service and thus the pod behind it.
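To double-check the in-cluster path, one option is a throwaway pod that fetches the service by its DNS name (the pod name "debug" and the busybox image are arbitrary choices for this sketch):

$ kubectl run -it --rm debug --image=busybox --restart=Never -- wget -qO- http://jenkins-ui.jenkins:8080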
EDIT: Use kubectl port-forward
In development mode, if you want to access a container running internally, you can use kubectl port-forward:
kubectl port-forward <jenkins-ui-pod> 9090:8080
This way, http://localhost:9090 will show you the jenkins-ui screen because you have kubectl access.
kubectl port-forward doesn't work for services yet: https://github.com/kubernetes/kubernetes/issues/15180
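Note: newer kubectl versions have since gained this ability, so forwarding straight to the service should work there:

kubectl port-forward -n jenkins svc/jenkins-ui 9090:8080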

Access service on subdomain in Kubernetes

I have the following setup:
Private OpenStack cloud - only the web UI (Horizon) is accessible
(the API is restricted, but maybe I could get access)
I have used CoreOS with a setup of one master and three nodes
Resources are standardized (as the OpenStack defaults)
I followed the getting-started guide for CoreOS on GitHub (i.e. I'm using the default YAMLs for cloud-config provided)
As I read, extensions such as the web UI (kube-ui) can be added as add-ons, which I have done (only kube-ui).
Now if I run a test such as simple-nginx, I get the following output:
creating pods:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
creating service:
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME       LABELS         SELECTOR       IP(S)   PORT(S)
my-nginx   run=my-nginx   run=my-nginx           80/TCP
get service info:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.161.90
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31170/TCP
Endpoints: 10.244.19.2:80,10.244.44.3:80
Session Affinity: None
No events.
I can access my service from every(!) external IP of the nodes.
My question now is as follows:
How can I access any started service with a subdomain, and how can I set this up (for example, I have domain.com as an example)? Or could it be printed out which node IP I have to use to access my service (although I have only two replicas(?!))?
To make my thoughts more understandable, I mean the following:
given domain: domain.com (pointing to master)
start service simple-nginx
service can be accessed with simple-nginx.domain.com
Does your OpenStack cloud provider implementation support services of type LoadBalancer?
If so, the service controller should assign an ingress IP or hostname to the service, which should eventually show up in kubectl describe svc output. You could then set up external DNS for it.
If not, just use type=NodePort, and you'll still get a NodePort on each node. You can then follow the advice in the comment to create an Ingress resource, which can do the port and host remapping.
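For the subdomain scheme you describe, a host rule in an Ingress does the remapping. A sketch using the networking.k8s.io/v1 schema (it assumes an ingress controller is running in the cluster and a wildcard DNS record *.domain.com points at your node(s) or master):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-nginx
spec:
  rules:
    - host: simple-nginx.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx
                port:
                  number: 80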

Kubernetes, Flannel and exposing services

I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Hostnode IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-px6uy   1/1     Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME    LABELS      SELECTOR    IP(S)   PORT(S)    AGE
nginx   run=nginx   run=nginx           9000/TCP   292y
I'm expecting now to be able to reach the nginx container on the host node's IP (192.168.4.57). Have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: this is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?
So the issue here was that there's a missing piece of the puzzle when you use nodePort.
I was also making a mistake with the commands.
Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx
Selector: run=nginx
Type: NodePort
IP: 172.16.92.8
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32033/TCP
Endpoints: 10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
Yes, you would need to use NodePort.
When you hit the service, the destination port should be equal to the NodePort.
The destination IP for the service should be considered local by the nodes; e.g. you could use the host IP of one of the nodes.
A load balancer helps because it would handle situations where your node went down, while other nodes could still serve the service.
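Putting that together with the values from this thread (host node IP 192.168.4.57 and the NodePort 32033 from the describe output above):

$ curl http://192.168.4.57:32033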
If you're running a cluster on bare metal, or at a provider that doesn't provide a load balancer, you can also define the port to be a hostPort on your pod.
You define your container and its ports:
containers:
  - name: nginx
    image: nginx
    ports:
      - containerPort: 80
        hostPort: 80
        name: http
This will bind the container to the host's networking and use the port defined.
The two limitations here are obviously:
1) You can have at most one of these pods on each host.
2) The IP is the host IP of the node it binds to.
This is essentially how the cloud provider load balancers work, in a way.
Using the new DaemonSet features, it's possible to define which node the pod will land on and so fix the IP. However, that necessarily impairs the high-availability aspect, but at some point there is not much choice, as DNS load balancing will not avoid forwarding to dead nodes.
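A rough sketch of that DaemonSet idea, using the current apps/v1 schema (the role=edge label is a hypothetical example; you would apply it with kubectl label node <node> role=edge):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-edge
spec:
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      nodeSelector:
        role: edge            # pods land only on nodes carrying this label
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              hostPort: 80    # binds <node-ip>:80 on each matching node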

How can I access the Kubernetes service through ClusterIP

I am trying to create a Kubernetes cluster using three VMs (Master – 10.x.x.4, Node1 – 10.x.x.150, Node2 – 10.x.x.160).
I was able to create the guestbook application successfully following this link: http://kubernetes.io/v1.0/examples/guestbook/. The only change I made was to frontend-service.yaml: to use NodePort. I can access the frontend service using the nodes' IPs and the port number (10.x.x.150:30724 or 10.x.x.160:30724). So everything is working as expected, but I am not able to access the frontend service using the ClusterIP address (in my case 10.x.x.79).
My understanding of NodePort is that the service can be accessed through the cluster IP and also on a port on each node of the cluster. How can I access the service through the ClusterIP so that I don’t have to go through each node? Am I missing something here?
Service and pod details:
$ sudo kubectl describe service frontend
Name: frontend
Namespace: default
Labels: name=frontend
Selector: name=frontend
Type: NodePort
IP: 10.x.x.79
Port: <unnamed> 80/TCP
NodePort: <unnamed> 30724/TCP
Endpoints: 172.x.x.13:80,172.x.x.14:80,172.x.x.11:80
Session Affinity: None
No events.
$ sudo kubectl describe pod frontend-2b5us
Name: frontend-2b5us
Namespace: default
Image(s): gcr.io/google_samples/gb-frontend:v3
Node: 10.x.x.150/10.x.x.150
Labels: name=frontend
Status: Running
Reason:
Message:
IP: 172.x.x.11
Replication Controllers: frontend (3/3 replicas created)
Containers:
php-redis:
Image: gcr.io/google_samples/gb-frontend:v3
State: Running
Started: Fri, 30 Oct 2015 04:00:40 -0500
Ready: True
Restart Count: 0
I tried to search but could not find any solution for my exact problem, though I did find a similar problem that looks like it is for GCE.
Why can't I access my Kubernetes service via its IP?
You do not have a ClusterIP service; you have a NodePort service. To access it, you connect to the NodePort on any of your nodes in the cluster, as you've already discovered. You do get load balancing here: even though you connect to a particular cluster node, the pod you reach does not necessarily run on that node.
Read the relevant section in the documentation at https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types to learn about additional service types. You probably do not want NodePort on GCP.
Talking about ClusterIP: to access a ClusterIP service for debugging purposes, you can run kubectl port-forward. You will not actually go through the service, but you will connect directly to one of the pods.
For example
kubectl port-forward frontend-2b5us 8080:80
Now connect to localhost:8080
A more sophisticated command, which discovers the pod name on its own given a namespace (-n weave) and a selector. Taken from https://www.weave.works/docs/scope/latest/installing/
kubectl port-forward -n weave \
  "$(kubectl get -n weave pod \
    --selector=weave-scope-component=app \
    -o jsonpath='{.items..metadata.name}')" \
  4040
From where are you trying to access the ClusterIP? The ClusterIP (by default) only works from within the cluster. It is a virtual IP and is not routed.
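Another debugging-only way to reach a ClusterIP service from outside is the API server proxy (assuming the frontend service in the default namespace, as above):

$ kubectl proxy
$ curl http://localhost:8001/api/v1/namespaces/default/services/frontend:80/proxy/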
