Define API server authorized IP range within Azure Kubernetes Service (AKS)

Define API server authorized IP range - is this limited to setting the kubeconfig context for executing kubectl, or does it also apply to API calls made by services hosted on AKS pods? How is this different from the nginx IP whitelisting annotation?

The API server authorized IP range feature blocks internet access to the API server endpoint, except for the whitelisted IP addresses you provide. Pods within the cluster access the Kubernetes API through the internal service at kubernetes.default.svc.cluster.local. The whitelisting annotation for the nginx ingress controller (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range), by contrast, blocks non-whitelisted IPs from accessing your application running in Kubernetes behind that ingress controller.
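For concreteness, a hedged sketch of both mechanisms using the Azure CLI and kubectl (resource group, cluster, ingress names, and CIDR ranges are placeholders):

# AKS: restrict the public API server endpoint to the given CIDR ranges.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges 73.140.245.0/24,203.0.113.5/32

# ingress-nginx: restrict which client IPs may reach the application
# behind a given ingress resource.
kubectl annotate ingress my-ingress \
  nginx.ingress.kubernetes.io/whitelist-source-range="203.0.113.0/24" \
  --overwrite

The first command gates who can talk to the cluster's control plane; the second gates who can talk to a workload behind the ingress.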

Related

Cloud Run service-to-service call getting 403 when VPC connector enabled with "Route only requests to private IPs through the VPC connector" option

Do we have to route all outbound requests through the VPC connector? What is the "Route only requests to private IPs through the VPC connector" option for? Is it only for services that don't call another one?
When you set the ingress to internal only, or to internal and cloud load balancing, on your Cloud Run service, you can't access your service from outside (except, in the latter case, through a load balancer).
So, in your case, you route only private IPs through the serverless VPC connector. However, your Cloud Run service is always reachable on the internet, at a public IP. Thus, to access it from your VPC, you need to send all traffic, private and public IPs, through the serverless VPC connector.
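A hedged example with the gcloud CLI (service, connector, and region names are placeholders):

# Send all egress (public and private IPs) from the calling service through
# the Serverless VPC Access connector, so requests to the internal-only
# service arrive via the VPC and pass its ingress check.
gcloud run services update caller-service \
  --region us-central1 \
  --vpc-connector my-connector \
  --vpc-egress all-traffic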

Public Cloud Run Service communication with internal-only ingress Cloud Run Service

I have the following setup:
A VPC network V and a VPC Connector for V using CIDR range "10.8.0.0/28" (EDITED)
The following services A and B are connected to the VPC via the Connector
Cloud Run Service A: This service is set to ingress=internal to secure the API. Its egress is set to private-ranges-only.
Cloud Run Service B: This service provides an API for another Service C within the Azure cloud. B also needs access to Service A's API. Its egress and ingress are set to all, to route all outgoing traffic through the VPC connector and allow successful requests to the internal Service A.
The current problem is the following: requests from Service C -> Service B result in a 504 Gateway Timeout. If the egress of Service B is changed to private-ranges-only, requests from Service C succeed, but in return all requests from B -> A return 403 Forbidden, since traffic is no longer routed through the VPC connector (Cloud Run does not, afaik, route private-ranges-only traffic to Service A's public address through the connector). All requests from Cloud Run services to other Cloud Run services are currently issued to "*.run.app" URLs.
I cannot come up with a possible and convenient fix for this setup. Is there an explanation why egress=all on Service B results in a Gateway Timeout for requests from Service C? I tried to follow logs from the VPC but did not see any causes.
The following changes were necessary to make it run:
Follow this guide to create a static outbound IP for Service B
Remove the previously created VPC connector (it was created with a CIDR range, not with a subnet as in the guide)
Update Cloud Run Service B to use the VPC connector created during step 1
Since removing the static outbound IP breaks the setup, I assume the Azure service demands a static IP to communicate with.
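Roughly, the steps above map to the following gcloud commands (a sketch with placeholder names and region; the static outbound IP guide is the authoritative sequence):

# 1. Subnet-backed serverless VPC connector (/28 subnet, per the guide).
gcloud compute networks subnets create run-subnet \
  --network=my-vpc --range=10.8.0.0/28 --region=us-central1
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 --subnet=run-subnet

# 2. Cloud Router + reserved address + Cloud NAT, so egress from the
#    connector subnet leaves through one fixed IP.
gcloud compute routers create my-router \
  --network=my-vpc --region=us-central1
gcloud compute addresses create my-static-ip --region=us-central1
gcloud compute routers nats create my-nat \
  --router=my-router --region=us-central1 \
  --nat-custom-subnet-ip-ranges=run-subnet \
  --nat-external-ip-pool=my-static-ip

# 3. Route all of Service B's egress through the new connector.
gcloud run services update service-b \
  --region=us-central1 \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic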

Service IP & Port discovery with Kubernetes for external App

I'm creating an App that will have to communicate with a Kubernetes service, via REST APIs. The service hosts a docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I expose a deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
It then creates a service named app-service.
To then locally test this, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/ which, when I enter it in my browser, works fine (I get the JSON responses I'm expecting).
However, it seems the IP & port for that service are always different whenever I start this service up.
Which leads to my question - how is a mobile app supposed to find that new IP:port if it's subject to change? Is the common way to work around this to register that combination with a DNS server (such as Google Cloud's DNS system)?
Or am I missing a step here with setting up Kubernetes public services?
minikube is not meant for production use; it is only meant for development purposes. You should create a real Kubernetes cluster and use a LoadBalancer-type service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice.
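A minimal sketch of that, assuming an ingress controller is installed (on minikube: minikube addons enable ingress) and that app-service's service port is 8080; the hostname is a placeholder:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com        # placeholder; point your DNS record here
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service    # the service created by kubectl expose
            port:
              number: 8080       # adjust to the port kubectl get svc shows
EOF

Clients then use the stable hostname instead of a changing IP:port.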

On Premise - Kubernetes External Endpoint for services

We are analyzing the integration of Kubernetes into our on-premise environment. We have SaaS-based services which can be exposed publicly.
We have doubts about setting up the external endpoints for the services. Is there any way to create external endpoints for them?
We have tried setting the externalIP parameter on the services to the master node's IP address. We are not sure this is the correct way. Once we set the external IP to the master node's IP address, we are able to access the services.
We have also tried with ingress controllers and also there we can access our services with the IP address of the node where the ingress controllers are running.
For example:
Public IP: XXX.XX.XX.XX
Ideally, we would map the public IP to the load balancer's virtual IP, but we cannot find such a setting in Kubernetes.
Is there any way to address this issue?
My suggestion is to use an Ingress Controller that acts as a proxy for all your services in Kubernetes.
Of course, your ingress controller has to be exposed to the outside world somehow. My suggestion is to use the hostNetwork setting for the ingress controller pod; this way, the pod will be listening on your host's physical interface, like any other "traditional" service. A sketch follows below.
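An abbreviated sketch of the hostNetwork idea (assuming the ingress-nginx namespace, ServiceAccount, and RBAC are already installed; names and image tag are illustrative, and the controller args are trimmed to the essentials):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true                   # listen on the node's own interface
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working
      serviceAccountName: ingress-nginx   # assumed to exist already
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4
        args:
        - /nginx-ingress-controller
        - --ingress-class=nginx
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
EOF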
A few resources:
Here are details on how a pod can be reached from outside your k8s cluster.
Here is a nice tutorial on how to set up an ingress controller on k8s.
If you have more than one minion in your cluster, you'll end up having problems load balancing across them. This question can be helpful with that.

Accessing Kubernetes Web UI (Dashboard)

I have installed Kubernetes with the kubeadm tool, and then followed the documentation to install the Web UI (Dashboard). Kubernetes is installed and running on a single node, which is a tainted master node.
However, I'm not able to access the Web UI at https://<kubernetes-master>/ui. Instead, I can access it at https://<kubernetes-master>:6443/ui.
How could I fix this?
The URL you are using to access the dashboard is an endpoint on the API server. By default, kubeadm deploys the API server on port 6443, not on port 443, which is what you would need in order to access the dashboard over https without specifying a port in the URL (i.e. https://<kubernetes-master>/ui).
There are various ways you can expose and access the dashboard. These are ordered by increasing complexity:
If this is a dev/test cluster, you could try making kubeadm deploy the API server on port 443 by using the --api-port flag exposed by kubeadm.
Expose the dashboard using a service of type NodePort (see the sketch after this list).
Deploy an ingress controller and define an ingress point for the dashboard.
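A hedged example of the NodePort option (the dashboard's namespace and service name vary between versions; kube-system matches older manifests, newer ones use a kubernetes-dashboard namespace):

# Switch the dashboard service to NodePort so it is reachable on every node.
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'

# Look up the port that was assigned (shown as 443:3XXXX/TCP).
kubectl -n kube-system get service kubernetes-dashboard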
