How to create a ClusterIP Service? - azure-aks

I followed this guide to create a Windows Server container on Azure Kubernetes Service and this guide to secure it with an ingress controller. I was successful, and the web frontend of the container can now be reached via HTTPS through the ingress controller. However, it can also be reached via the external IP address of the service itself, which is not secure.
Now I have already read something about ClusterIP, which, if I understand it correctly, is a type of Service that has no external IP address, but I wasn't able to find specific documentation on how to create one. I also noticed that my Service already has the type LoadBalancer. Can one Service have multiple types, or do I have to create an additional Service?

A Service can have only one type. If you want to expose your application to the internet, you can use the LoadBalancer type; a ClusterIP Service will not expose the application to the internet.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    run: my-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-service
This is an example of a ClusterIP Service: if you do not specify any type, Kubernetes creates the Service with the default type ClusterIP and assigns it a cluster-internal IP that is only reachable from inside the cluster.
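Since the ingress controller reaches the Service by name inside the cluster, you do not need a second Service; you can switch the existing one from LoadBalancer to ClusterIP. A minimal sketch with kubectl patch, assuming your Service is called my-service (a placeholder; use your own Service name):

# Hypothetical Service name; replace with the name of your existing Service.
# Depending on your Kubernetes version you may also need to clear
# spec.ports[*].nodePort in the same patch.
kubectl patch service my-service -p '{"spec": {"type": "ClusterIP"}}'

# Verify that the EXTERNAL-IP column now shows <none>:
kubectl get service my-service

After this the application is only reachable through the ingress controller.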

Related

How can I access kubernetes service via dns name from localhost on which is Docker Desktop/Kubernetes?

My setup is localhost with Docker Desktop and its built-in Kubernetes.
I deployed, for example, a Cassandra server as a StatefulSet, and I created the Service below to expose individual pods of the StatefulSet. I cannot use a plain cassandra Service, because for some Cassandra operations the default load balancing is toxic. I need to connect to all pods via a headless service, or to selected seed pods; a proxyless option is the best choice in this case by design.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0
  ports:
  - name: cql
    protocol: TCP
    port: 9042
    targetPort: 9042
I want to expose it as cassandra-0.. inside localhost.
What should I do to make this work in the easiest way?
What I imagine right now (but maybe it is not needed):
Configure some localhost DNS to point cassandra-0:9042 to port 9042.
Use nginx to route traffic to specific exposed ports (9042, 9043, ...) for (cassandra-0, cassandra-1, ...).
I plan to run tests from localhost against the pods in Kubernetes.
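What I have in mind concretely, sketched below. This assumes that Docker Desktop publishes LoadBalancer Services on localhost, and that I would add illustrative entries like 127.0.0.1 cassandra-0 and 127.0.0.1 cassandra-1 to /etc/hosts, distinguishing the pods by port:

# Second per-pod Service (one such Service per pod), published on its own port.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-1
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-1
  ports:
  - name: cql
    protocol: TCP
    port: 9043        # reachable from localhost as cassandra-1:9043
    targetPort: 9042  # the container itself still listens on 9042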

How to access a container set up in a host with another machine?

I have deployed a Mosquitto broker with Kubernetes on my Linux machine. Now I want to connect this container to an MQTT client running on my smartphone. How could I do that? Which IP should I connect to?
I have connected to the mosquitto broker with a client inside my machine and it works perfectly.
EDIT: I'm using NodePort:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/mosquitto-entrypoint NodePort 10.152.183.235 <none> 8080:30001/TCP 24h
If your mobile client is on the same network, a NodePort should ideally do the job: you should be able to reach the service through your node's IP on the NodePort 30001 (the cluster IP 10.152.183.235 is only reachable from inside the cluster). But I believe this might not be your scenario.
You can run your service with the LoadBalancer type instead; that will generate an external-facing IP for your cluster. An example is given below:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer
Define a YAML file for your service and apply it via kubectl apply -f <yourfile>.
If you have a DNS server of your own, you may prefer to use an Ingress controller and expose your service to the outside network.
If the host where your service runs is accessible from your smartphone, you could map the service to a NodePort.
For example, if your machine's IP is 192.168.x.y, you map your service to host port / NodePort 5000, and the machine allows incoming connections from your phone while you are connected to the allowed network, then you can reach the service at 192.168.x.y:5000.
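A minimal sketch of such a NodePort Service, assuming the Mosquitto pods carry the label app: mosquitto and listen on the standard MQTT port 1883 (both are assumptions; adjust them to your deployment). The nodePort has to lie in the cluster's NodePort range, 30000-32767 by default:

apiVersion: v1
kind: Service
metadata:
  name: mosquitto-mqtt
spec:
  type: NodePort
  selector:
    app: mosquitto        # assumed pod label
  ports:
  - name: mqtt
    protocol: TCP
    port: 1883            # cluster-internal port
    targetPort: 1883      # port the Mosquitto container listens on
    nodePort: 31883       # port the phone connects to: <node-ip>:31883

The MQTT client on the phone would then connect to the node's LAN IP (e.g. 192.168.x.y) on port 31883.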

How to get browsable url from Docker-for-mac or Docker-for-Windows?

In minikube I can get a service's url via minikube service kubedemo-service --url. How do I get the URL for a type: LoadBalancer service in Docker for Mac or Docker for Windows in Kubernetes mode?
service.yml is:
apiVersion: v1
kind: Service
metadata:
  name: kubedemo-service
spec:
  type: LoadBalancer
  selector:
    app: kubedemo
  ports:
  - port: 80
    targetPort: 80
When I switch to type: NodePort and run kubectl describe svc/kubedemo-service I see:
...
Type: NodePort
LoadBalancer Ingress: localhost
...
NodePort: <unset> 31838/TCP
...
and I can browse to http://localhost:31838/ to see the content. Switching to type: LoadBalancer, I see localhost ingress lines in kubectl describe svc/kubedemo-service but I get ERR_CONNECTION_REFUSED browsing to it.
(I'm familiar with http://localhost:8080/api/v1/namespaces/kube-system/services/kubedemo-service/proxy/ though this changes the root directory of the site, breaking css and js references that assume a root directory. I'm also familiar with kubectl port-forward pods/pod-name though this only connects to pods until k8s 1.10.)
How do I browse to a type: LoadBalancer service in Docker for Win or Docker for Mac?
LoadBalancer will work on Docker for Mac and Docker for Windows as long as you're running a recent build. Flip the type back to LoadBalancer and update. When you check the kubectl describe output, look for the Port: <unset> 80/TCP line, and try hitting http://localhost:80.
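A quick way to check this, sketched below; the commands only use the service defined in the question, and the comments describe what to look for rather than exact output:

kubectl apply -f service.yml
# Once the built-in load balancer has picked the Service up,
# EXTERNAL-IP should be reported as localhost:
kubectl get svc kubedemo-service
# Then the service port should answer on localhost:
curl http://localhost:80/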
How do I browse to a type: ClusterIP service or type: LoadBalancer service in Docker for Win or Docker for Mac?
This is a common confusion when it comes to the scope of Kubernetes network levels and the exposure at the Service level. Here is a quick overview of the types and their scope:
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access. To access it from outside the cluster, you would need to run kubectl proxy (as in the standard dashboard example).
A LoadBalancer service is the standard way to expose a service to the internet. Load balancer access and setup is dependent on cloud provider.
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
That said, the only way to access your service while it is of type ClusterIP is from within one of the containers in the cluster or with the help of a proxy; for LoadBalancer you need a cloud provider. You can also mimic a LoadBalancer with an ingress of your own (an upstream proxy such as nginx sitting in front of a ClusterIP-type service).
Useful link with more in-depth explanation and nice images: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
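For the last option (an ingress in front of a ClusterIP-type service), a minimal sketch using the current networking.k8s.io/v1 API, assuming an NGINX ingress controller is already installed in the cluster and using an illustrative host name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubedemo-ingress
spec:
  rules:
  - host: kubedemo.local           # illustrative host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubedemo-service # the Service from the question
            port:
              number: 80

To browse it locally you would additionally point the illustrative host name at the ingress controller's IP, e.g. via /etc/hosts.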
Updated for LoadBalancer discussion:
As for using LoadBalancer, here is useful reference from documentation (https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/):
The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service.
On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
minikube service name-of-the-service
This automatically opens up a browser window using a local IP address that serves your app on service port.

Simplest approach to expose a HAProxy (port 80) Docker in IBM Cloud Kubernetes

I need to deploy a Docker container running HAProxy, which I already have working on on-premise Docker hosts, into the IBM Cloud (Bluemix) Kubernetes service. I am a bit lost on how to expose ports 80 and 443. In plain Docker that is very straightforward, but it seems complicated in Kubernetes, or at least in IBM Cloud.
I don't need load balancing, virtual hosts, or any extra configuration, as HAProxy will take care of it. I just need to replicate (move) my on-premise HAProxy, exposing ports 80 and 443, into Bluemix. (For multiple reasons I want to use HAProxy, so the request here is very specific: the simplest way to expose HAProxy ports 443 and 80 on a permanent IP address in the IBM Cloud Kubernetes service.)
Could I have a basic example YAML file for kubectl for that? Thanks.
NodePort
To keep the same image running in both environments, you can define a Deployment for the HAProxy containers and a Service to access them via a NodePort on the node IP (or via the ClusterIP from inside the cluster). A NodePort is similar in concept to running docker run -p n:n.
The IP:NodePort would need to be accessible externally, and HAProxy will take over from there. Here's a sample HAProxy setup that uses an AWS ELB to get external users to a node. Most people don't recommend running services via NodePort because Kubernetes offers alternate methods that provide more integration.
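A minimal sketch of such a Deployment plus NodePort Service, assuming your HAProxy image is published somewhere reachable as your-registry/haproxy (a placeholder) and listens on 80 and 443; the node port numbers are only examples from the default 30000-32767 range:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      containers:
      - name: haproxy
        image: your-registry/haproxy:latest   # placeholder image
        ports:
        - containerPort: 80
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy
spec:
  type: NodePort
  selector:
    app: haproxy
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443

External users would then reach HAProxy on <node-ip>:30080 and <node-ip>:30443.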
LoadBalancer
A LoadBalancer Service is specifically for automatic configuration of a cloud provider's load balancer service. I don't believe IBM Cloud's load balancer has any support in Kubernetes, but maybe IBM has added something? If it has, you could use this instead of a NodePort to get to your Service.
Ingress
If you are running Docker locally and Kubernetes externally, you've kind of thrown consistency out the window already, so you could set up Ingress with an Ingress controller based on HAProxy; there are a few available:
https://github.com/appscode/voyager
https://github.com/jcmoraisjr/haproxy-ingress
This gives you the standard Kubernetes abstraction for managing ingress to a service, but using HAProxy underneath. It will not be your HAProxy image, though; it's likely you can configure the same things for the HAProxy Ingress as you do in your HAProxy image.
Voyager's documentation is pretty good:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/test'
        backend:
          serviceName: test-service
          servicePort: '80'
          backendRules:
          - 'acl add_url capture.req.uri -m beg /test-second'
          - 'http-response set-header X-Added-From-Proxy added-from-proxy if add_url'
If you are fine with running this HAProxy on each node that is supposed to expose ports 80/443, then consider running a DaemonSet with hostNetwork: true. That will allow you to create pods that open 80 and 443 directly on the node network. If you have load balancer support in your cluster, you can instead use a Service of type LoadBalancer. It will forward from high node ports (e.g. 32080) to your backing HAProxy pods, and also automatically configure an LB in front of it to give you an external IP and forward 80/443 from that IP to the high node ports (again, assuming your Kubernetes deployment supports LoadBalancer Services).
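A minimal sketch of that DaemonSet, assuming the HAProxy image is published as your-registry/haproxy (a placeholder); with hostNetwork: true the container binds ports 80 and 443 directly on each node it runs on:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy
spec:
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      hostNetwork: true      # bind directly in the node's network namespace
      containers:
      - name: haproxy
        image: your-registry/haproxy:latest   # placeholder image
        ports:
        - containerPort: 80
        - containerPort: 443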
IBM Cloud has built-in solutions for load balancer and Ingress. The docs include sample YAMLs for both.
Load Balancer: https://console.bluemix.net/docs/containers/cs_loadbalancer.html#loadbalancer
Ingress: https://console.bluemix.net/docs/containers/cs_ingress.html#ingress
If you need TLS termination or want to use a route rather than an IP address for accessing your HAProxy, then Ingress would be the best choice. If those options don't matter, then I'd suggest starting with the provided load balancer to see if that meets your needs.
Note that both the load balancer and Ingress require a paid cluster; for lite clusters, only NodePort is available.
Here's a sample YAML that deploys IBM Liberty and exposes it via a load balancer service.
# If you are not logged into the US-South https://api.ng.bluemix.net
# region, change the image registry location to match your region.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ibmliberty-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: ibmliberty
    spec:
      containers:
      - name: ibmliberty
        image: registry.ng.bluemix.net/ibmliberty
---
apiVersion: v1
kind: Service
metadata:
  name: ibmliberty-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: ibmliberty
  ports:
  - protocol: TCP
    port: 9080
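Once applied, the external IP that the IBM Cloud load balancer assigns shows up in kubectl get svc; a quick sketch of the steps (the file name is illustrative):

kubectl apply -f ibmliberty.yaml        # illustrative file name
kubectl get svc ibmliberty-loadbalancer
# Wait until EXTERNAL-IP changes from <pending> to a real address, then:
curl http://<external-ip>:9080/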

Exposing containers without a load balancer

I'm aiming to deploy a small test application to GCE. Every guide I've read seems to point to using a LoadBalancer service to expose the pod to the internet. Unfortunately, this comes with a high associated cost and I'd like to be able to expose the containers without creating a load balancer (or using HAProxy / nginx to roll our own).
Is it possible to do so? If so, what are the steps I need to take and possible other associated costs?
Thanks!
The NGINX ingress controller found at https://github.com/kubernetes/ingress/tree/master/controllers/nginx should satisfy your cost-saving requirement. I would not consider this "rolling your own", as this lives beside the GLBC ingress controller.
There should be sufficient documentation to satisfy your installation requirements, and if there's not, please open an issue on https://github.com/kubernetes/ingress
You can do that by choosing a NodePort as the service type.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    name: myapp
    context: mycontext
spec:
  type: NodePort
  ports:
  # the port that this service should serve on
  - port: 8080
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: myapp
    context: mycontext
This exposes the service on a node port (automatically assigned from the 30000-32767 range unless you set nodePort explicitly) on every node of the cluster, while port 8080 is the cluster-internal service port. All of your nodes have externally accessible IP addresses, and you can use any of them together with the node port for testing (on GCE you may also need a firewall rule that allows the node port).
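A quick sketch of how to find the assigned node port and a node address to test against; the gcloud firewall rule is only an illustrative example and is only needed if your GCE firewall does not already allow the port:

# The node port is the second number in PORT(S), e.g. 8080:3xxxx/TCP
kubectl get svc myapp-service

# Find an external IP of any node
kubectl get nodes -o wide

# GCE only, illustrative rule: open the default NodePort range in the firewall
gcloud compute firewall-rules create allow-nodeports --allow tcp:30000-32767

# Then test from outside the cluster
curl http://<node-external-ip>:<node-port>/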
