Endpoint URL management in Kubernetes

I'm a super beginner with Kubernetes and I'm trying to imagine how to split my monolithic application into different microservices.
Let's say I'm writing my microservices application in Flask and each of them exposes some endpoints like:
Micro service 1:
/v1/user-accounts
Micro service 2:
/v1/savings
Micro service 3:
/v1/auth
If all of them were running as blueprints in a monolithic application, they would all be prefixed with the same IP, that is, the IP of the host server my application is running on, like 10.12.234.69, e.g.
http://10.12.234.69:5000/v1/user-accounts
Now, deploying those 3 "blueprints" on 3 different Pods/Nodes in Kubernetes will change the IP address of each endpoint: maybe 10.12.234.69, then 10.12.234.70 or 10.12.234.75.
How can I write an application that keeps the URL reference constant even if the IP address changes?
Would a Load Balancer Service do the trick?
Maybe the Service Registry feature of Kubernetes does the "DNS" part for me?
I know it may sound like a very obvious question, but I still cannot find any reference/example for this simple problem.
Thanks in advance!
EDIT (as a follow-up to simon's answer):
Questions:
Given that the Ingress spawns a load balancer and all the routes are reachable via HTTP paths prefixed by the load balancer's IP (http://<ADDRESS>/v1/savings), how can I associate an IP with the load balancer that matches the IP of the pod the Flask web server is running on?
In case I add further sub-routes to the same paths, like /v1/savings/get and /v1/savings/get/id/<var_id>, do I have to add all of them to the Ingress HTTP paths for them to be reachable through the load balancer?

A load balancer is what you are looking for.
Kubernetes Services make your pods accessible cluster-internally under a stable hostname.
If you want to make your services accessible from outside the cluster under a single IP and different paths, you can use a load balancer together with Kubernetes HTTP Ingresses. An Ingress defines under which domain and path a service should be reachable, and an ingress controller/load balancer reads it to build its configuration.
Example based on your micro service architecture:
Mocking applications
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-accounts
spec:
  template:
    metadata:
      labels:
        app: user-accounts
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        args:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: savings
spec:
  template:
    metadata:
      labels:
        app: savings
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/savings { return 200 "savings"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/auth { return 200 "auth"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
These deployments represent your services and just return their name via HTTP under /v1/name.
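If you want to sanity-check one of these deployments before any Service exists, you can port-forward straight to it (8080 is an arbitrary local port):
kubectl port-forward deployment/user-accounts 8080:80
curl http://localhost:8080/v1/user-accounts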
Mapping applications to services
---
kind: Service
apiVersion: v1
metadata:
  name: user-accounts
spec:
  type: NodePort
  selector:
    app: user-accounts
  ports:
  - protocol: TCP
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: savings
spec:
  type: NodePort
  selector:
    app: savings
  ports:
  - protocol: TCP
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: auth
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - protocol: TCP
    port: 80
Each of these Services gets a cluster-internal IP and a DNS name derived from its metadata name, and maps it to the pods matched by the given selector. Applications running in the same cluster namespace can reach them under user-accounts, savings and auth.
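A quick way to see this cluster-internal resolution in action is a throwaway pod in the same namespace; busybox's wget is just one convenient choice:
kubectl run debug --rm -it --image=busybox --restart=Never -- wget -qO- http://savings/v1/savings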
Making services reachable via load balancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - http:
      paths:
      - path: /v1/user-accounts
        backend:
          serviceName: user-accounts
          servicePort: 80
      - path: /v1/savings
        backend:
          serviceName: savings
          servicePort: 80
      - path: /v1/auth
        backend:
          serviceName: auth
          servicePort: 80
This Ingress defines under which paths the different services should be reachable. Verify your Ingress via kubectl get ingress:
# kubectl get ingress
NAME      HOSTS     ADDRESS   PORTS     AGE
example   *                   80        1m
If you are running on Google Container Engine (now Google Kubernetes Engine), there is an Ingress controller running in your cluster which will spawn a Google Cloud Load Balancer when you create a new Ingress object. Under the ADDRESS column of the above output, an IP will be displayed under which you can access your applications:
# curl http://<ADDRESS>/v1/user-accounts
user-accounts
# curl http://<ADDRESS>/v1/savings
savings
# curl http://<ADDRESS>/v1/auth
auth

Related

Enable Ingress controller on Docker Desktop with WSL2

Currently, I'm using Docker Desktop with WSL2 integration. I found that Docker Desktop automatically created a cluster for me, which means I don't have to install and use Minikube or Kind to create one.
The problem is: how can I enable an Ingress controller if I use the "built-in" cluster from Docker Desktop?
I tried to create an Ingress to check whether it works, and as I guessed, it didn't.
The YAML file I created is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  minReadySeconds: 30
  selector:
    matchLabels:
      app: webapp
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nodejs-helloworld:v1
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
  - name: http
    port: 3000
    nodePort: 30090 # NodePort must be in the 30000-32767 range
  type: NodePort # ClusterIP if only accessed inside the cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  defaultBackend:
    service:
      name: webapp-service
      port:
        number: 3000
  rules:
  - host: ingress.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 3000
I tried to access ingress.local/ but it was not successful. (I added ingress.local pointing to 127.0.0.1 in my hosts file, and the webapp works fine at kubernetes.docker.internal:30090.)
Could you please help me to know the root cause?
Thank you.
Finally I found the way to fix it. I have to deploy the NGINX ingress controller with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
(Following the instructions at https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac; they work just fine for Docker Desktop for Windows.)
Now I can access http://ingress.local successfully.
You have to install an ingress-nginx controller on your cluster, so that your nodes have ports 80/443 open.
Using helm (v3 - see documentation):
helm install --namespace kube-system nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
Using kubectl (see documentation):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml
Then manually add your ingresses' hostnames to /etc/hosts:
127.0.0.1 ingress.local
127.0.0.1 my.other.service.local
# ...
Then if you make a request on http://ingress.local:
the DNS resolution will route to your cluster node
then the ingress controller will serve the request on port 80
then ingress will route the request to the configured backend service
and the service will route to an available pod
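To confirm the controller from steps 2 and 3 is actually in place, you can look it up by the label the official manifests and Helm chart apply:
kubectl get pods,svc -A -l app.kubernetes.io/name=ingress-nginx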
The newest version of Docker Desktop for Windows already adds a hosts file entry: 127.0.0.1 kubernetes.docker.internal.
You have to use kubernetes.docker.internal as the hostname in your Ingress definition if you want to point to 127.0.0.1. This should be in the docs at kubernetes.github.io/ingress-nginx/deploy, but there is no Docker Desktop for Windows section there.
Your files should look like this:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - name: http
    protocol: TCP
    port: 3000
    nodePort: 30090
Your Ingress file should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: kubernetes.docker.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-service
          servicePort: http
Then you are able to connect to app using http://kubernetes.docker.internal/.
Example you can see here: wsl2-docker-for-desktop.
I used the Docker Desktop instructions to install the nginx-ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#docker-desktop
curl http://kubernetes.docker.internal/
Of course I've not installed any workload yet, but the default ingress controller works just fine.
With Kustomize you can simply use:
helmCharts:
- name: ingress-nginx
  releaseName: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
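For context, a minimal kustomization.yaml built around this generator might look like the sketch below; the version pin and namespace are assumptions, and kustomize needs Helm support enabled at build time:
# kustomization.yaml
namespace: ingress-nginx
helmCharts:
- name: ingress-nginx
  releaseName: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
  version: 4.4.2
Render it with kustomize build --enable-helm . (the trailing dot is the directory).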
This is just to point out that Amel Mahmuzić's comment is still valid with a recent ingress deployment (I used the ingress-nginx Helm chart 4.4.2).
I could not get this to work for far too long (I tried to follow the Strapi foodadvisor example with Docker Desktop's built-in Kubernetes instead of minikube) and always received a 404 from the ingress.
However, after using this YAML with the added annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: foodadvisor.backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foodadvisor-backend
            port:
              number: 1337
  - host: foodadvisor.client
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foodadvisor-frontend
            port:
              number: 3000
it worked immediately. Note that the Kubernetes docs mention that this annotation is deprecated.
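Since the annotation is deprecated, the non-deprecated equivalent in networking.k8s.io/v1 is the ingressClassName field; a minimal variant of the manifest above would start like this (nginx being the class name the ingress-nginx deployment registers by default):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx
  rules:
  # ... same rules as above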

Docker nginx reverse proxy on Kubernetes

I have a couple of applications which run in Docker containers (all on the same VM).
In front of them, I have an nginx container as a reverse proxy.
Now I want to migrate this to Kubernetes.
When I start them locally with docker-compose, everything works as expected.
On Kubernetes it does not.
nginx.conf
http {
    server {
        location / {
            proxy_pass http://app0:80;
        }
        location /app1/ {
            proxy_pass http://app1:80;
            rewrite ^/app1(.*)$ $1 break;
        }
        location /app2/ {
            proxy_pass http://app2:80;
            rewrite ^/app2(.*)$ $1 break;
        }
    }
}
Edit: nginx.conf is not used on Kubernetes; I have to use an ingress controller for that:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app0
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: app0
        image: appscontainerregistry1.azurecr.io/app0:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: nginx
---
# the other apps
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: apps-url.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app0
          servicePort: 80
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 80
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: ingress-nginx
I get a response on / (app0). Unfortunately, the sub-routes are not working. What am I doing wrong?
EDIT
I figured it out. I missed installing the ingress controller. As described on this page (https://kubernetes.io/docs/concepts/services-networking/ingress/), an Ingress does nothing if no controller is installed.
I used ingress-nginx as the controller (https://kubernetes.github.io/ingress-nginx/deploy/) because it had the best-described install guide I was able to find, and I didn't want to use Helm.
I have one more question: how can I change my Ingress so that sub-routes work?
For example, k8url.com/app1/subroute shows me the start page of app1 every time.
And if I use domain name proxying, it rewrites the domain name to the IP every time.
You have created the deployments successfully, but each of them also needs a Service; the NGINX ingress on Kubernetes routes traffic based on Services.
So the flow goes like:
nginx-ingress > service > deployment pod.
You are missing the Services for your applications; create them and add the proper routes based on them to the Kubernetes Ingress.
Add this:
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: ingress-nginx
This is because your LoadBalancer Service did not set targetPort to 80.
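For completeness, the per-app Services this answer asks for could look roughly like the sketch below for app1 (repeat for app0 and app2, with selectors matching the pod labels of your deployments):
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app1
As for the sub-route question in the edit: with ingress-nginx, stripping a path prefix (what the rewrite directives did in the original nginx.conf) is typically done with the nginx.ingress.kubernetes.io/rewrite-target annotation.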

How Does Dynamic Service Discovery Work When Using Docker Compose Or Kubernetes?

Let's say I am creating a chat app with a microservice architecture. I have 2 services:
Gateway service: responsible for user authentication (API endpoint /api/v1/users), and routing requests to appropriate service.
Messaging service: responsible for creating, retrieving, updating, and deleting messages (API endpoint /api/v1/messages).
If I use Docker Compose or Kubernetes, how does my gateway service know which service it should forward a request to when the request targets the /api/v1/messages API endpoint?
I used to write my own dynamic service discovery middleware (https://github.com/zicodeng/tahc-z/blob/master/servers/gateway/handlers/dsd.go). The idea is that I pre-register services with the API endpoints they are responsible for, and my gateway service relies on the request's resource path to decide which service the request should be forwarded to. But how do you do this with Docker Compose or Kubernetes? Do I still need to keep my own version of dynamic service discovery middleware?
Thanks in advance!
If you are using Kubernetes, here are the high-level steps:
Create your micro-service Deployments/Workloads using your docker images
Create Services pointing to these deployments
Create Ingress using Path Based rules pointing to the services
Here are sample manifest/YAML files (change docker images, ports, etc. as needed):
apiVersion: v1
kind: Service
metadata:
  name: svc-gateway
spec:
  ports:
  - port: 80
  selector:
    app: gateway
---
apiVersion: v1
kind: Service
metadata:
  name: svc-messaging
spec:
  ports:
  - port: 80
  selector:
    app: messaging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-gateway
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: gateway/image:v1.0
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-messaging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: messaging
    spec:
      containers:
      - name: messaging
        image: messaging/image:v1.0
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-for-chat-application
spec:
  rules:
  - host: chat.example.com
    http:
      paths:
      - backend:
          serviceName: svc-gateway
          servicePort: 80
        path: /api/v1/users
      - backend:
          serviceName: svc-messaging
          servicePort: 80
        path: /api/v1/messages
If you have other containers running in the same namespace that need to communicate with these services, you can use their service names directly.
For example:
curl http://svc-messaging or curl http://svc-gateway
You don't need to run your own service discovery; that's taken care of by Kubernetes!
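The same works across namespaces with fully qualified service names; for example, assuming the services live in the default namespace:
curl http://svc-messaging.default.svc.cluster.local/api/v1/messages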

How to access MySql hosted with Nginx Ingress+Kubernetes from client

I am new to Kubernetes and the NGINX Ingress tooling, and now I am trying to host a MySQL service using a vhost in NGINX Ingress on AWS. I have created a file like this:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - name: http
          containerPort: 3306
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysql
  labels:
    app: mysql
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mysql.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysql
          servicePort: 3306
My load balancer (created by NGINX Ingress) port configuration looks like:
80 (TCP) forwarding to 32078 (TCP)
Stickiness options not available for TCP protocols
443 (TCP) forwarding to 31480 (TCP)
Stickiness options not available for TCP protocols
mysql.example.com is pointing to my ELB.
I was expecting that from my local box I could connect to MySQL with something like:
mysql -h mysql.example.com -u root -P 80 -p
This is not working out. If I use LoadBalancer instead of NodePort, it creates a new ELB for me, which works as expected.
I am not sure if this is the right approach for what I want to achieve here. Please help me out if there is a way of achieving the same using the Ingress with NodePort.
Kubernetes Ingress as a generic concept does not solve the issue of exposing/routing TCP/UDP services. As stated in https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md, you should use custom ConfigMaps if you want that with ingress-nginx. And please mind that it will never use the hostname for routing, as that is a feature of HTTP, not TCP.
I succeeded in accessing MariaDB/MySQL hosted on Google Kubernetes Engine through ingress-nginx, using the hostname specified in the Ingress created for the database's ClusterIP.
As per the docs, simply create the ConfigMap and expose the port in the Service defined for the Ingress.
This helped me figure out how to set the --tcp-services-configmap and --udp-services-configmap flags.
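For reference, the tcp-services ConfigMap described in those docs maps an exposed port to a namespace/service:port string; for the MySQL Service from the question it could look roughly like this (the ingress-nginx namespace is an assumption that depends on how the controller was installed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": "default/mysql:3306"
The controller must then be started with --tcp-services-configmap=ingress-nginx/tcp-services, and port 3306 must also be exposed on the controller's Service/load balancer.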

How to use kube-proxy to forward an HTTPS service in k8s?

As we know, kube-proxy is used to proxy services so that they can be accessed from an external network. Does kube-proxy support proxying HTTPS services in k8s, or is there any other solution so that we can access them via the apiserver?
You need to expose your HTTPS pods via a Service of type NodePort. Then you can access them via the defined port on any node in the cluster (master or worker), because kube-proxy forwards the requests to the pods that are part of the Service. NodePorts are in the range 30000-32767 by default.
Example configuration for an https service and deployment with nginx:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 443
    name: nginx
    targetPort: 443
    nodePort: 32756
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 443
kube-proxy's iptables mode works at the IP (network) layer; it does not care whether the packet is HTTP or HTTPS.
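So with the NodePort Service above, the TLS stream reaches the pod unmodified, and you could test it from outside the cluster with something like the following (assuming the nginx in the pod is actually configured to serve TLS on 443; -k skips certificate verification for a self-signed cert):
curl -k https://<node-ip>:32756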
