Setup AKS Private Cluster with different API Server IP Address - azure-aks

My office is planning to use multiple clusters in Azure AKS.
When I did a POC to provision a new AKS private cluster in a different subnet, the API server IP address was always set to 10.240.0.4, but the API server host/URL was different (random).
Can I set the API server IP address to a different IP address for each cluster in a different virtual network?
FYI, I provision the AKS private cluster through the Azure portal.

Actually, you can get somewhere here. The API server of a private AKS cluster is exposed through the Azure Private Link service, so it needs a private IP address from the subnet that your private AKS cluster is in. Azure handles all of this for you and simply assigns the first available private IP address of the subnet, which is why the API server ends up with 10.240.0.4. You can change the address space of the subnet your AKS cluster is in, and the API server will then get a different private IP address, but it will still be the first available one; you cannot choose which address it gets.
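For example, a minimal Azure CLI sketch of provisioning a private cluster into a specific subnet (the resource group, cluster name, and subnet resource ID below are placeholders):
# create a private AKS cluster whose API server gets a private IP from aks-subnet
az aks create \
  --resource-group myResourceGroup \
  --name myPrivateAKS \
  --enable-private-cluster \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aks-subnet
The API server of the resulting cluster still takes the first free address of aks-subnet; there is no parameter to pin it to a particular IP.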

Related

How do you get the IP address of a Google Cloud Run server?

I have a K8s cluster that should whitelist a Cloud Run server, so I would like to know the IP address or IP range of the Cloud Run server.
As found here:
https://github.com/ahmetb/cloud-run-faq#is-there-a-way-to-get-static-ip-for-outbound-requests
Is there a way to get static IP for outbound requests?
Currently not, since Cloud Run uses a dynamic serverless machine pool by Google and its IP addresses cannot be controlled by Cloud Run users.
However, there is a workaround to route the traffic through a Google Compute Engine instance by running a persistent SSH tunnel inside the container and making your applications use it.
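A rough sketch of that workaround, assuming a GCE VM (my-gce-proxy) with a static external IP and an SSH user named tunnel, both placeholders:
# open a persistent SOCKS5 tunnel through the GCE instance at container startup
ssh -f -N -D 1080 tunnel@my-gce-proxy
# point outbound traffic of proxy-aware applications at the tunnel
export HTTPS_PROXY=socks5://localhost:1080
The whitelisted API then sees the static IP of the GCE instance instead of a dynamic Cloud Run address.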

Define API server authorized IP range within Azure Kubernetes Services

Define API server authorized IP range - is this only about restricting where kubectl (using the kubeconfig context) can be run from, or does it also apply to API calls made by services hosted on AKS pods? How is it different from the nginx IP whitelisting annotation?
The API server authorized IP range feature blocks access from the internet to the API server endpoint, except for the whitelisted IP addresses you provide. Pods within the cluster reach the Kubernetes API through the internal service at kubernetes.default.svc.cluster.local. The whitelist annotation for the nginx ingress controller (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range) blocks non-whitelisted IPs from accessing the application running in Kubernetes behind that ingress controller.
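For reference, the authorized ranges are configured on the cluster itself, for example with the Azure CLI (the resource group, cluster name, and CIDR ranges below are placeholders):
# only these source ranges may reach the API server endpoint
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges 73.140.245.0/24,20.1.2.3/32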

Azure Kubernetes Containers to other internal vlans?

I'm trying to host my docker images behind Kubernetes. But these docker images make calls out to other resources on internal vlans. What I can't figure out is how to enable that communication:
10.3.1.0/24 contains my internal api resources
10.3.2.0/24 contains other resources
10.3.5.0/24 container playground
What I would like to do is, say, host the Kubernetes cluster in something like 10.3.3.0/24 and have the containers be able to access my internal APIs on the 10.3.1.0/24 network.
I can't seem to figure out that part.
I do know that if I manually create an instance of my docker image in the 10.3.5.0 space then I can get to the 10.3.1.0 space.
First of all, Azure only has VNets and the subnets inside a VNet, so what you call a VLAN is a subnet in Azure.
And as the comment says, when you use Azure CNI (advanced) networking, pods get full virtual network connectivity and can be reached directly from outside the cluster, which means they can also access other Azure resources in the different subnets of the same VNet. You can read the article about the behavior differences that exist between kubenet and Azure CNI.
Here is also an example:
You create the AKS cluster with CNI networking in subnet1 and a VM in subnet2, both subnets in the same VNet. You deploy an API server on the VM. Then your pods can access that API server directly via the VM's private IP.
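As a sketch, with placeholder names and a placeholder VM private IP of 10.0.2.4, that setup looks roughly like this:
# create the cluster with Azure CNI networking in subnet1
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/subnet1
# verify reachability from a pod to the VM in subnet2 via its private IP
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- curl http://10.0.2.4:8080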

How to allocate a static IP for an internal load balancer in Azure AKS

The document here describes how to create an AKS service with an internal load balancer associated with it. It explains how to assign an explicit IP address to this load balancer and states that the chosen IP "must not already be assigned to a resource." My question is how do I allocate this IP? The CLI command
az network public-ip create
can be used to allocate a public IP but there is no equivalent command
az network private-ip create
What is the correct procedure for allocating a private static IP in Azure?
Peter
There is no such command to allocate a static private IP for an internal load balancer in Azure AKS, because Azure networking has no visibility into the service IP range of the Kubernetes cluster; see here.
Actually, you can add the loadBalancerIP property to the load balancer YAML manifest to specify a private IP for an internal load balancer. When you do that, the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource. Check the subnet where you deployed the AKS cluster, then select one of the available addresses from its address range, making sure it does not collide with addresses already used by connected devices.
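A minimal sketch of such a Service, applied with kubectl; the name, selector, port, and the 10.240.0.25 address are placeholders, with the address picked from the cluster's subnet range:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    # tells the Azure cloud provider to create an internal load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # the static private IP, taken from the AKS subnet and not already in use
  loadBalancerIP: 10.240.0.25
  ports:
  - port: 80
  selector:
    app: internal-app
EOF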
Hope this will help you.

AWS Load balancing static IP range/Address

I have an API that only whitelisted IP addresses are able to access. I need to allow all AWS Elastic Beanstalk EC2 instances to access this API, so I need to configure, either through the VPC or load balancer settings, a static IP or IP range (x.x.x.x/32) that I can have whitelisted.
I'm lost between the VPC, load balancer, Elastic Beanstalk, etc. I need someone to break it down a bit and point me in the right direction.
Currently the load balancer is setup for SSL and this works correctly.
Thank you for your time
You can set up a NAT Gateway and associate an Elastic IP address with it in your VPC. Configure the routing from your subnets to use the NAT Gateway for egress traffic. Then on the API side you only need to whitelist the Elastic IP address of the NAT Gateway.
Check this guide for more details.
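A rough sketch of those steps with the AWS CLI; the subnet, allocation, and route table IDs are placeholders, and the route table must be the one used by the private subnets hosting the instances:
# allocate an Elastic IP for the NAT Gateway
aws ec2 allocate-address --domain vpc
# create the NAT Gateway in a public subnet using that allocation
aws ec2 create-nat-gateway --subnet-id subnet-0abc1234 --allocation-id eipalloc-0abc1234
# send the private subnets' internet-bound traffic through the NAT Gateway
aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc1234
The API owner then only needs to whitelist the single Elastic IP attached to the NAT Gateway.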
The best way to accomplish this is to place your EB EC2 instances in a private subnet that communicates to the Internet via a NAT Gateway. The NAT Gateway will use an Elastic IP address. Your API endpoint will see the NAT Gateway as the source IP for all instances in the private subnet, thereby supporting adding the NAT Gateway EIP to your whitelist.
To quote Amazon, link below:
Create a public and private subnet for your VPC in each Availability Zone (an Elastic Beanstalk requirement). Then add your public resources, such as the load balancer and NAT, to the public subnet. Elastic Beanstalk assigns them a unique Elastic IP address (a static, public IP address). Launch your Amazon EC2 instances in the private subnet so that Elastic Beanstalk assigns them private IP addresses.
Load-balancing, autoscaling environments
You can assign Elastic IP addresses to the EC2 instances behind the ELB.
First you need to allocate a number of Elastic IP addresses; they will be unassigned by default.
The actual assignment can be triggered from the "User data" script that you specify in the Launch Configuration used by the load-balanced, autoscaling environment. The following two lines in the user data script should assign an IP:
# install the helper tool at instance boot
pip install aws-ec2-assign-elastic-ip
# grab one of the pre-allocated Elastic IPs that is still unassigned
aws-ec2-assign-elastic-ip --region ap-southeast-2 --access-key XXX --secret-key XXX --valid-ips 1.2.3.4,5.6.7.8,9.10.11.12
The list of --valid-ips should be the list of IPs you created in the beginning.
