i have a handful of dockerized microservices, each is listening for http requests on a certain port, and i have these deployments formalized as kubernetes yaml files
however, i can't figure out a working strategy to expose my deployments on the interwebs (in terms of kubernetes services)
each deployment has multiple replicas, and so i assume each deployment should have a matching load balancer service to expose it to the outside
now i can't figure out a strategy to sanely expose these microservices to the internet... here's what i'm thinking:
the whole cluster is exposed on a domain name, and services are subdomains
say the cluster is available at k8s.mydomain.com
each loadbalancer service (which exposes a corresponding microservice) should be accessible by a subdomain
auth-server.k8s.mydomain.com
profile-server.k8s.mydomain.com
questions-board.k8s.mydomain.com
so requests to each subdomain would be load balanced to the replicas of the matching deployment
so how do i actually achieve this setup? is this desirable?
can i expose each load balancer as a subdomain? is this done automatically?
or do i need an ingress controller?
am i barking up the wrong tree?
i'm looking for general advice on how to expose a single app which is a mosaic of microservices
each service is exposed on the same ip/domain, but each gets its own port
perhaps the whole cluster is accessible at k8s.mydomain.com again
can i map each port to a different load balancer?
k8s.mydomain.com:8000 maps to auth-server-loadbalancer
k8s.mydomain.com:8001 maps to profile-server-loadbalancer
is this possible? it seems less robust and less desirable than strategy 1 above
each service is exposed on its own ip/domain?
perhaps each service specifies a static ip, and my domain has A records pointing each subdomain at each of these ip's in a manual way?
how do i know which static ip's to use? in production? in local dev?
maybe i'm conceptualizing this wrong? can a whole kubernetes cluster map to one ip/domain?
what's the simplest way to expose a bunch of microservices in kubernetes? on the other hand, what's the most robust/ideal way to expose microservices in production? do i need a different strategy for local development in minikube? (i was just going to edit /etc/hosts a lot)
thanks for any advice, cheers
I think the first option is by far the best.
Your Ingress might look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: auth-server.k8s.mydomain.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: profile-server.k8s.mydomain.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  - host: questions-board.k8s.mydomain.com
    http:
      paths:
      - backend:
          serviceName: service3
          servicePort: 80
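Note that the v1beta1 Ingress API shown above was removed in Kubernetes 1.22. On current clusters the equivalent manifest, keeping the same placeholder service1/service2/service3 backends, would use networking.k8s.io/v1:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: auth-server.k8s.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: profile-server.k8s.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
  - host: questions-board.k8s.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service3
            port:
              number: 80
```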
You can read more about it in the Kubernetes docs on Ingress and name-based virtual hosting.
There are also many Ingress controllers to choose from, depending on where you end up setting up your cluster. You mentioned that you will be testing this on Minikube, so I think the NGINX ingress controller will be a good choice here.
If you are thinking about managing your traffic, you could consider Istio.
Here is a nice guide, Setting up HTTP(S) Load Balancing with Ingress, and another one, Configuring Domain Names with Static IP Addresses.
The first method is typically the format everyone follows, i.e. each microservice gets its own subdomain. You can achieve this using a Kubernetes Ingress (for example the NGINX ingress controller: https://kubernetes.github.io/ingress-nginx/)
The subdomains need not be under the same domain either, i.e. you can have both *.example.com and *.example2.com
The second method doesn't scale, as you would have a limited number of available ports, and running on non-standard ports comes with its own issues.
Use an ingress:
https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress
With an ingress, you can assign subdomains to different services, or you can serve all the services under different context roots with some URL rewriting.
I don't suggest exposing services on different ports. Non-standard ports come with other problems.
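As a sketch of the context-root approach, assuming the NGINX ingress controller (the rewrite annotation is nginx-specific, and the service names here are made up to match the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: context-root-ingress
  annotations:
    # nginx-specific rewrite: /auth/foo is forwarded to the backend as /foo
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: k8s.mydomain.com
    http:
      paths:
      - path: /auth(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: auth-server      # assumed service name
            port:
              number: 80
      - path: /profile(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: profile-server   # assumed service name
            port:
              number: 80
```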
Related
I have 1 master and 2 workers in my k8s cluster. It's bare metal, and I can't use any cloud provider; I can only use a DNS load balancer. I want to expose the standard ports (like 80 and 443) on my nodes, so I can't use NodePort. What is the best solution?
My only idea so far was to install Nginx on all of my nodes and proxy the ports to my ClusterIP services. I don't know whether that is a good solution or not.
Things you are doing right:
ClusterIP service - if you don't want your services to be invoked from outside the cluster, ClusterIP is the right choice, instead of NodePort or LoadBalancer.
Things you can do:
Create an Ingress controller and an Ingress resource for your cluster, which will listen on ports 80 and 443 and proxy requests to your services according to the routes defined in the Ingress.
You can create the nginx-ingress controller using this link: https://kubernetes.github.io/ingress-nginx/deploy/
Then create an Ingress resource using this link: https://kubernetes.io/docs/concepts/services-networking/ingress/
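A minimal Ingress resource covering both 80 and 443 might look like this; the host, secret, and service names are placeholders, and the TLS secret is assumed to already exist in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls        # placeholder TLS secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service  # hypothetical ClusterIP service
            port:
              number: 80
```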
I found the solution. I needed to edit /etc/kubernetes/manifests/kube-apiserver.yaml and change service-node-port-range so the range starts at 80 (or whatever I want), then declare my ingress service as type NodePort.
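For reference, a sketch of the relevant flag in the static pod manifest (the exact range is up to you):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767  # widens the default 30000-32767 range
    # ...remaining flags unchanged...
```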
I am slightly a newbie on Kubernetes, and we want to offer some of our products as SaaS to our customers, so I need user-based isolated deployments. After some research I decided to create a namespace for each user, then deploy whatever the user wants from our templates into that user's namespace. But there is a problem with port mapping. Let's say we have 6 users and all of them want to deploy a Django app, so all of them want to access their project on ports 80 and 443. Is there any solution for this in Kubernetes? If so, how should I proceed?
And how can I separate each user's deployments into different networks or VLANs to isolate their networks from each other?
You can either put a dedicated load balancer in front of each of them (the expensive but straightforward solution), or make your Ingress controller accept only requests carrying a hostname and point each hostname at its Service, in its namespace (the cheap but complicated solution).
Load Balancer Solution:
This one is easy if you are with a cloud provider: every time a client exposes an app, you just create a LoadBalancer-type Service pointing to that app. Since each app gets its own load balancer, you have no port collisions. The drawbacks are that you can only do this with a cloud provider, and it will be quite expensive.
Ingress Solution:
This one is the pro solution. It's cheap, but it's also more complex. So, you would create an Ingress resource like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  tls:
  - secretName: tls
  rules:
  - host: site1.domain.com
    http:
      paths:
      - path: /path1/
        backend:
          serviceName: service1
          servicePort: 80
  - host: site2.domain.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  ...
Here, you have just one L7 load balancer, and it's the Ingress controller doing all the routing. Depending on the Ingress controller you might get an L4 load balancer instead (e.g. nginx, Traefik), but it's still the Ingress controller doing the routing.
Complexity? You will have to figure out a way to update the Ingress controller's records without downtime for the other users. Also, on Kubernetes an Ingress controller can't pass a request from one namespace to another, so the Service needs to run in the same namespace where the Ingress resource was created (note: I'm saying Ingress resource, meaning the rules, like the YAML above; not the Ingress controller). This is a known limitation that the Kubernetes team has announced will never be changed, as changing it would introduce a huge security hole.
You will need to create headless Services without selectors in the same namespace as the Ingress object, and separately create Endpoints objects pointing to the Services in the other namespaces. It might look cumbersome, but it's actually quite a pro setup.
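A sketch of that headless-service-plus-Endpoints bridge, with hypothetical names and a made-up pod IP; note the Endpoints object must share the Service's name and namespace:

```yaml
# Headless Service with no selector, in the namespace of the Ingress resource
apiVersion: v1
kind: Service
metadata:
  name: site1-bridge
  namespace: ingress-ns          # hypothetical namespace
spec:
  clusterIP: None
  ports:
  - port: 80
---
# Endpoints object with the same name, pointing into the tenant namespace
apiVersion: v1
kind: Endpoints
metadata:
  name: site1-bridge
  namespace: ingress-ns
subsets:
- addresses:
  - ip: 10.0.0.15                # example pod IP in the other namespace
  ports:
  - port: 80
```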
To separate external access to the apps, you need to deploy an Ingress controller and create a different Ingress pointing to the Service of each application. Every Ingress will hold its own unique URL per application.
To prohibit internal cross-namespace communication, you need to deploy network policies. For this you'll need to deploy an addon.
Or you can solve it by deploying any common service mesh: Istio, Linkerd, Consul, etc.
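For example, a NetworkPolicy that allows ingress traffic only from pods in the same (hypothetical) tenant namespace might look like this, assuming your CNI plugin enforces network policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-other-namespaces
  namespace: tenant-a            # hypothetical tenant namespace
spec:
  podSelector: {}                # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}            # only pods from this same namespace may connect
```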
I'm having an issue because the application was originally configured to run under docker-compose.
I managed to port and rewrite the .yaml deployment files for Kubernetes; however, the problem lies in the communication between the pods.
The frontend communicates with the backend to access the services, and I assume that, since they were on the same network before, the frontend calls the services via localhost.
I don't have access to the code, as it is a proprietary application developed by a company, and it does not support Kubernetes, so modifying the code is out of the question.
I believe the main reason is that the frontend and backend are running in different pods, with different IPs.
When the frontend tries to call the APIs, it does not find the service and returns an error.
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Unfortunately, I do not know how to make a yaml file to create both containers within a single pod.
Is it possible to have both frontend and backend containers running on the same pod, or would there be another way to make the containers communicate (maybe a proxy)?
Yes, you just add entries to the containers section in your yaml file, example:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
  - name: nginx-container
    image: nginx
  - name: debian-container
    image: debian
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Although you already have an accepted answer covering how to run multiple containers in the same pod, I'd like to point out a few details:
Containers should be in the same pod only if they scale together (not just because you want them to communicate over a cluster IP). Your frontend/backend split doesn't really look like a good candidate for cramming them together.
If you do opt for containers in the same pod, they can communicate over localhost: they see each other as if they were two processes running on the same host (except that their file systems are different), so they can use localhost for direct communication, and consequently they can't both bind the same port. Using the cluster IP is like two processes on the same host communicating over an external IP.
The more Kubernetes-minded approach here would be to:
Create a Deployment for the backend.
Create a Service for the backend (exposing the necessary ports).
Create a Deployment for the frontend.
Communicate from the frontend to the backend using the backend Service's name (kube-dns resolves it to the cluster IP of the backend Service) and the designated backend ports.
Optionally (for this example), create a Service for the frontend for external access, or whatever goes outside. Note that here you can use the same port as the backend Service, since they are not living on the same pod (host)...
Some of the benefits of this approach: you can isolate the backend better (backend-frontend communication stays within the cluster, not exposed to the outside world), you can schedule them independently on nodes, you can scale them independently (say you need more backend power but the frontend is handling traffic fine, or vice versa), you can replace either of them independently, etc.
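A minimal sketch of that layout, with hypothetical names and image; the frontend would then reach the backend at http://backend:8080, resolved by cluster DNS:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: my-backend:latest   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend                    # frontend uses this name as the hostname
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
```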
I need to deploy a Docker container running HAProxy, which I already have working on on-premise Docker hosts, into the IBM Cloud (Bluemix) Kubernetes service. I am a bit lost on how to expose ports 80 and 443. In plain Docker that is very straightforward, but it seems complicated in Kubernetes, or at least in IBM Cloud.
I don't need load balancing, virtual hosts, or any extra configuration, as HAProxy will take care of it. I just need to replicate (move) my on-premise HAProxy, exposing ports 80 and 443, into Bluemix. (For multiple reasons I want to use HAProxy, so the request here is very specific: the simplest way to expose HAProxy ports 443 and 80 on a permanent IP address in the IBM Cloud Kubernetes service.)
Could I have a basic example YAML file for kubectl for that? Thanks
NodePort
To keep the same image running in both environments, you can define a Deployment for the HAProxy containers and a Service to access them via a NodePort on the node IP or cluster IP. A NodePort is similar in concept to running docker run -p n:n.
The IP:NodePort would need to be accessible externally, and HAProxy will take over from there. Here's a sample HAProxy setup that uses an AWS ELB to get external users to a node. Most people don't recommend exposing services via NodePort, though, because Kubernetes offers alternate methods that provide more integration.
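A sketch of such a NodePort Service for HAProxy; the selector label and the nodePort values (which must fall inside the cluster's node-port range) are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: haproxy-nodeport
spec:
  type: NodePort
  selector:
    app: haproxy          # label assumed on the HAProxy pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080       # must lie within the cluster's node-port range
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```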
LoadBalancer
A LoadBalancer is specifically for automatic configuration of a cloud provider's load balancer service. I don't believe IBM Cloud's load balancer has any support in Kubernetes; maybe IBM has added something in? If they have, you could use that instead of a NodePort to get to your Service.
Ingress
If you are running Docker locally and Kubernetes externally, you've kind of thrown consistency out the window already, so you could set up Ingress with an Ingress controller based on HAProxy; there are a few available:
https://github.com/appscode/voyager
https://github.com/jcmoraisjr/haproxy-ingress
This gives you the standard Kubernetes abstraction for managing ingress to a service, but using HAProxy underneath. It will not be your HAProxy image, though; most likely you can configure the same things in the HAProxy Ingress as you do in your HAProxy image.
Voyager's docs are pretty good:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/test'
        backend:
          serviceName: test-service
          servicePort: '80'
          backendRules:
          - 'acl add_url capture.req.uri -m beg /test-second'
          - 'http-response set-header X-Added-From-Proxy added-from-proxy if add_url'
If you are fine with running this HAProxy on each node that is supposed to expose ports 80/443, consider running a DaemonSet with hostNetwork: true. That will let you create pods that open 80 and 443 directly on the node's network. If you have load-balancer support in your cluster, you can instead use a Service of type LoadBalancer. It will forward from high node ports (e.g. 32080) to your backing HAProxy pods, and also automatically configure an LB in front of them to give you an external IP and forward 80/443 from that IP to the high node ports (again, assuming your Kubernetes deployment supports LoadBalancer services).
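A minimal sketch of the DaemonSet variant; the image and labels are placeholders, and with hostNetwork: true the container ports bind directly on each node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy
spec:
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      hostNetwork: true        # bind directly to each node's network
      containers:
      - name: haproxy
        image: haproxy:2.4     # any HAProxy image configured for 80/443
        ports:
        - containerPort: 80
        - containerPort: 443
```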
IBM Cloud has built-in solutions for load balancer and Ingress. The docs include sample YAMLs for both.
Load Balancer: https://console.bluemix.net/docs/containers/cs_loadbalancer.html#loadbalancer
Ingress: https://console.bluemix.net/docs/containers/cs_ingress.html#ingress
If you need TLS termination or want to use a route rather than an IP address for accessing your HAProxy, then Ingress would be the best choice. If those options don't matter, I'd suggest starting with the provided load balancer to see if it meets your needs.
Note, both load balancer and Ingress required a paid cluster. For lite clusters, only NodePort is available.
Here's a sample YAML that deploys IBM Liberty and exposes it via a load balancer service.
# If you are not logged into the US-South https://api.ng.bluemix.net
# region, change the image registry location to match your region.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ibmliberty-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: ibmliberty
    spec:
      containers:
      - name: ibmliberty
        image: registry.ng.bluemix.net/ibmliberty
---
apiVersion: v1
kind: Service
metadata:
  name: ibmliberty-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: ibmliberty
  ports:
  - protocol: TCP
    port: 9080
I'm aiming to deploy a small test application to GCE. Every guide I've read seems to point to using a LoadBalancer service to expose the pod to the internet. Unfortunately, this comes with a high associated cost and I'd like to be able to expose the containers without creating a load balancer (or using HAProxy / nginx to roll our own).
Is it possible to do so? If so, what are the steps I need to take and possible other associated costs?
Thanks!
The NGINX ingress controller found at https://github.com/kubernetes/ingress/tree/master/controllers/nginx should satisfy your cost saving requirement. I would not consider this "rolling your own" as this lives beside the GLBC ingress controller.
There should be sufficient documentation to satisfy your installation requirements and if there's not please open an issue on https://github.com/kubernetes/ingress
You can do that by choosing a NodePort as the service type.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    name: myapp
    context: mycontext
spec:
  type: NodePort
  ports:
  # the port that this service should serve on
  - port: 8080
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: myapp
    context: mycontext
This would expose the service on port 8080 of each node in the cluster. All of your nodes then have an externally accessible IP address, and you can use any of them for testing.
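If you need a predictable port instead of a randomly assigned one, you can pin it with the nodePort field; the value here is an assumption and must lie within the cluster's node-port range:

```yaml
  ports:
  - port: 8080
    nodePort: 30080   # fixed external port on every node
```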