Expose Web API hosted in AKS - azure-aks

I have an application deployed in Azure Kubernetes Service (AKS) which has a built-in Web API service hosted on port 8080. I need to be able to expose this API from the K8s pod to the outside world.
What is the best practice to achieve this?

With a Kubernetes Service and an Azure Load Balancer:
apiVersion: v1
kind: Service
metadata:
  name: public-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: public-app
A Service of type LoadBalancer will create an Azure Load Balancer with a public IP in the AKS-managed node resource group.
Documentation can be found here
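Adapted to the question, where the Web API listens on port 8080 inside the pod, a sketch could map the load balancer's port 80 to that container port (it assumes your pods carry the app: public-app label):
apiVersion: v1
kind: Service
metadata:
  name: public-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80          # port exposed on the Azure Load Balancer
    targetPort: 8080  # the Web API port inside the pod
  selector:
    app: public-app
Once Azure has provisioned the load balancer, kubectl get service public-svc shows the public IP in the EXTERNAL-IP column.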

Related

How to access host's localhost from inside kubernetes cluster

In this application, nodejs pods are running inside Kubernetes, while mongodb itself sits outside, on the host as localhost.
This is indeed not a good design, but it's only for the dev environment. In production there will be a separate mongodb server, so the endpoint can use a non-loopback IP and this won't be a problem.
I have considered the following options for the dev environment:
Use a localhost connection string to connect to mongodb, but it will refer to the pod's own localhost, not the host's localhost.
Use a headless service and provide the localhost IP and port in an Endpoints object. However, endpoints don't allow loopback addresses.
Please suggest if there is a way to access the mongodb database at the host's localhost from inside the cluster (pod / nodejs application).
I'm running on docker for windows, and for me just using host.docker.internal instead of localhost seems to work fine.
For example, my mongodb connection string looks like this:
mongodb://host.docker.internal:27017/mydb
As an aside, my hosts file includes the following lines (which I didn't add, I guess the docker desktop installation did that):
# Added by Docker Desktop
192.168.1.164 host.docker.internal
192.168.1.164 gateway.docker.internal
127.0.0.1 is the IP address of the localhost (lo0) interface. Hosts, nodes and pods each have their own localhost interfaces, and they are not connected to each other.
Your mongodb is running on the host machine and cannot be reached via localhost (or its IP range) from inside a cluster pod or from inside the VM.
In your case, create a headless Service and an Endpoints object for it inside the cluster:
Your mongodb-service.yaml file should look like this:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None
  # no selector here: the Endpoints object below is maintained manually,
  # and a selector would make Kubernetes overwrite it
  ports:
  - protocol: TCP
    port: <multipass-port-you-are-using>
    targetPort: <multipass-port-you-are-using>
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb-service
subsets:
- addresses:
  - ip: 10.62.176.1
  ports:
  - port: <multipass-port-you-are-using>
I have added the IP you mentioned in the comment section.
After creating the Service and Endpoints you can use the mongodb-service name and port <multipass-port-you-are-using> as the destination from inside any pod of this cluster.
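For example, a connection string used from inside a pod would then look something like this (mydb is just a hypothetical database name; keep the port you actually forward):
mongodb://mongodb-service:<multipass-port-you-are-using>/mydb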
Take a look: mysql-localhost, mongodb-localhost.
If you are using minikube to deploy a local Kubernetes cluster, you can reach your local environment using the hostname host.minikube.internal.
I can add one more solution, with an Ingress and an ExternalName service, which may help some of you.
I deploy my complete system locally with a special Kustomize overlay.
When I want to replace one of the deployments with a service running locally in my IDE, I do the following:
I add an ExternalName service which forwards to host.docker.internal:
kind: Service
apiVersion: v1
metadata:
  name: backend-ide
spec:
  type: ExternalName
  externalName: host.docker.internal
and reconfigure my Ingress to forward certain requests from my web app to this external service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: url.used.by.webapp.com
    http:
      paths:
      - path: /customerportal/api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: backend-ide
            port:
              number: 8080
The same way, I can access all other ports on my host.

Creating a connection string in the cloud for Redis

I have a Redis pod, and I expect connection requests to this pod from different clusters and from applications not running in the cloud.
Since Redis does not speak HTTP, accessing it through the Route I have defined below does not work with the connection string "route-redis.local:6379".
route.yml
apiVersion: v1
kind: Route
metadata:
  name: redis
spec:
  host: route-redis.local
  to:
    kind: Service
    name: redis
service.yml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis
You may have encountered this situation. In short, is there any way to access the Redis pod via a Route? If not, how do you solve this problem?
You already discovered that Redis does not work via the HTTP protocol, which is correct as far as I know. Routes work by inspecting the HTTP Host header of each request, which will not work for Redis. This means that you will not be able to use Routes for non-HTTP workloads.
Typically, such non-HTTP services are exposed via a Service of type NodePort. This means that each worker node that is part of your cluster will open this port and forward the traffic to your application.
You can find more information in the Kubernetes documentation:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
You can define a NodePort Service like so (this example is for MySQL, which is also a non-HTTP workload):
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30036
    name: http
  selector:
    name: mysql
Of course, your administrator may limit access to these ports, so it may or may not be possible to use this type of Service on your OpenShift cluster.
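Adapted to the Redis Service from the question, a minimal NodePort sketch could look like this (the name and the node port 30379 are just illustrative values within the default range):
apiVersion: v1
kind: Service
metadata:
  name: redis-nodeport
spec:
  type: NodePort
  ports:
  - port: 6379
    targetPort: 6379
    nodePort: 30379
  selector:
    name: redis
Clients outside the cluster would then connect to <any-worker-node-ip>:30379.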
You can also expose TCP services via an Ingress controller, at least with ingress-nginx:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
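Following that guide, the TCP mapping lives in a ConfigMap that the controller reads via its --tcp-services-configmap flag; a minimal sketch for the Redis Service above might look like this (assuming the controller runs in the ingress-nginx namespace, the Redis Service is in default, and port 6379 is also opened on the controller's own Service):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service name>:<service port>"
  "6379": "default/redis:6379"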

How to do Dynamic Proxying to Kubernetes Pods?

I'm wanting to create a service that can do some kind of dynamic proxying back to Kubernetes Pods. Basically I'll have hundreds of K8s Pods that are running the same application that map to a random port on the host (like 10456). However, each Pod is unique and I want traffic directed at a specific pod based on hostname. So when a request comes in for abc123.app.com, I'll have a proxy layer that does a lookup in a database to find what host and port that domain is running on (like 10.0.0.5:10456), then forward the request there. Is there a service that supports this? I've worked with Nginx a lot before, but I'm not clear if it could support this lookup functionality.
Has anyone built something like this before? What's the best way to build a proxy layer that can do lookups like that? How would I update the database when a pod moves from one host to another?
Thanks in advance!
EDIT:
I should have put this in there the first time, but the types of traffic going to these pods are RPC traffic and Peer to Peer traffic
You're describing something very similar to what kubernetes ingress definitions do for http traffic.
An ingress definition configures an ingress controller to point requests for a hostname at a service. The service selects endpoints (pods) via label selectors. When pods move, kubernetes updates the service automatically.
The work on your end then just becomes pushing out config changes from your database to Kubernetes via one of the API clients, rather than driving a proxy yourself. If your environment is extremely dynamic, requiring reconfiguration all the time, or you need to make dynamic decisions about where traffic should go, you might want to keep looking at a custom proxy, Istio, or OpenResty.
It sounds like you have unique deployments going to kubernetes already, so in addition to that include a service and ingress definition.
A simple example includes a label on the pod, a service that uses that label, and an ingress definition using the service.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: host-abc123
spec:
  containers:
  - name: host-abc123
    image: me/my-app:1.2.1
    ports:
    - containerPort: 10456
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-abc123
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: host-abc123
            port:
              number: 80
apiVersion: v1
kind: Service
metadata:
  name: host-abc123
spec:
  selector:
    app: host-abc123
  ports:
  - protocol: TCP
    port: 80
    targetPort: 10456
A single ingress definition could include all hosts, but I'm not sure how well Kubernetes and the ingress controllers would cope with replacing it regularly.
There are nginx-based ingress controllers too; you end up with an nginx server config per ingress/host definition.

How can we access ubuntu container image from outside the host?

We can access the container through its cluster IP, and web application containers we deploy can be accessed the same way. The issue is how we can access a container from outside the host.
We tried assigning an external IP to the containers.
You can create a Service and bind it to a node port; from outside your cluster you can then access that Service using node_ip:port.
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  ports:
  - port: 80
    name: http
    targetPort: api-http
    nodePort: 30004
  - port: 443
    name: https
    targetPort: api-http
  type: LoadBalancer
  selector:
    run: api-server
If you run kubectl get service you can get the external IP.
The best approach would be to expose your pods with ClusterIP type Services, and then use an Ingress resource along with an Ingress controller to expose HTTP and/or HTTPS routes so you can access your app from outside the cluster.
For testing purposes it's OK to use NodePort or LoadBalancer type Services. You can use NodePort whether you are running on your own infrastructure or a managed solution, while using LoadBalancer requires a cloud provider's load balancer.
Source: Official docs
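For illustration, a minimal ClusterIP Service plus Ingress pair might look like the sketch below (the names web-app and app.example.com, the container port 8080, and the nginx ingress class are all hypothetical placeholders):
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080   # assumed container port
  selector:
    app: web-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
spec:
  ingressClassName: nginx   # assumes an NGINX ingress controller is installed
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app
            port:
              number: 80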

Expose a Kubernetes pod on a bare metal cluster

I'm trying to expose a Kubernetes pod on a single-node bare metal cluster without a domain.
In my understanding I have these options:
Expose using NodePort
Expose using an Ingress controller
Expose using ClusterIP and manually set an external IP
As I mentioned already, I only have a single-node cluster. This means that the master is master and node at the same time, running directly on a Fedora host system.
The simplest solution is to use a NodePort. But the limitation here (if I'm right) is that the service port will be automatically selected from a given port range.
The next better solution is to use an ingress controller. But for this I need a public domain, which I don't have. So the ingress controller doesn't fit my case either.
What other options do I have? I just want to expose my service directly on port 9090.
Why not option 3? You can set externalIPs to your node IP:
apiVersion: v1
kind: Service
...
spec:
  externalIPs:
  - your node ip
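As a fuller sketch for the port 9090 mentioned in the question (the service name, the app: my-app selector, and the node IP 192.168.1.10 are hypothetical placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - port: 9090        # the port you want to expose
    targetPort: 9090
  selector:
    app: my-app       # must match your pod's labels
  externalIPs:
  - 192.168.1.10      # replace with your node's IP
The service is then reachable at 192.168.1.10:9090 from outside the cluster.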
Also with NodePort, the service port can be specified.
You can set a custom port range for NodePort by adding this option to your apiserver settings (/etc/kubernetes/manifests/kube-apiserver.yaml):
--service-node-port-range portRange
Default: 30000-32767
A port range to reserve for services with NodePort visibility. Example:
'30000-32767'.
Inclusive at both ends of the range.
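On a kubeadm-style cluster this flag goes into the kube-apiserver static pod manifest; a sketch of the relevant excerpt (all other flags omitted, and the range value is only an example):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --service-node-port-range=20000-32767
The apiserver is recreated automatically when the static pod manifest changes.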
This is the relevant part of the Kubernetes documentation on Services:
If you want a specific port number, you can specify a value in the
nodePort field, and the system will allocate you that port or else the
API transaction will fail (i.e. you need to take care about possible
port collisions yourself). The value you specify must be in the
configured range for node ports.
The example for this answer was taken from the article Hosting Your Own Kubernetes NodePort Load Balancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  - port: 443
    nodePort: 30443
    name: https
  selector:
    name: nginx
