AKS create with App gateway ingress control fails with IngressAppGwAddonConfigInvalidSubnetCIDRNotContainedWithinVirtualNetwork error - azure-aks

When I try to create an AKS cluster with the Azure CLI using the following command:
az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.2.0.0/16" --generate-ssh-keys
I get the below error.
"(IngressAppGwAddonConfigInvalidSubnetCIDRNotContainedWithinVirtualNetwork) Subnet Prefix '10.2.0.0/16' specified for IngressApplicationGateway addon is not contained within the AKS Agent Pool's Virtual Network address prefixes '[10.224.0.0/12]'.
Code: IngressAppGwAddonConfigInvalidSubnetCIDRNotContainedWithinVirtualNetwork
Message: Subnet Prefix '10.2.0.0/16' specified for IngressApplicationGateway addon is not contained within the AKS Agent Pool's Virtual Network address prefixes '[10.224.0.0/12]'.
Target: AddonProfiles.IngressApplicationGateway"
Any idea why I get this error, or how to fix it?

I see that you have followed the tutorial "Enable the Ingress Controller add-on for a new AKS cluster with a new Application Gateway instance".
I had the same trouble creating a new AKS cluster with a command similar to yours. With azure-cli version 2.35.0, released on April 6, 2022, the command you issued worked fine; something has changed since then that broke the tutorial.
The subnet CIDR you specify with --appgw-subnet-cidr must be a /16 subnet contained within the AKS virtual network's 10.224.0.0/12 address space.
That leaves you with the /16 subnets in the range 10.224.0.0 - 10.239.0.0. I used subnet 10.225.0.0/16 for my deployment.
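A quick way to sanity-check a candidate CIDR before running az aks create is Python's standard ipaddress module; the prefixes below mirror the values from the error message:

```python
import ipaddress

# VNet prefix reported in the error message, plus two candidate add-on subnets
vnet = ipaddress.ip_network("10.224.0.0/12")
bad = ipaddress.ip_network("10.2.0.0/16")     # CIDR from the failing command
good = ipaddress.ip_network("10.225.0.0/16")  # a /16 inside the default VNet range

print(bad.subnet_of(vnet))   # False -> triggers the add-on validation error
print(good.subnet_of(vnet))  # True  -> accepted
```

Any /16 for which subnet_of returns True here should pass the IngressApplicationGateway add-on's CIDR validation.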

It seems your AKS cluster's virtual network address space overlaps with the virtual network of the Application Gateway.
When using an AKS cluster and Application Gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so the documentation sets the Application Gateway virtual network address prefix to 11.0.0.0/8.
I would suggest you refer to the Microsoft document "Enable the AGIC add-on in an existing AKS cluster through Azure CLI" to avoid the error.
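When the gateway lives in its own VNet, the no-overlap requirement can be checked the same way; a minimal sketch with Python's ipaddress module, using the 10.0.0.0/8 and 11.0.0.0/8 prefixes mentioned above:

```python
import ipaddress

aks_vnet = ipaddress.ip_network("10.0.0.0/8")    # default AKS address space
appgw_ok = ipaddress.ip_network("11.0.0.0/8")    # separate, non-overlapping VNet
appgw_bad = ipaddress.ip_network("10.2.0.0/16")  # would collide with the AKS VNet

print(aks_vnet.overlaps(appgw_ok))   # False -> safe to use
print(aks_vnet.overlaps(appgw_bad))  # True  -> address spaces overlap
```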

Related

jgroup_bind_addr in Docker container

I am moving an application from a server to Docker in Azure infrastructure. How do I map jgroup_bind_addr for the ever-changing pod IP?
<TCP bind_port="${jgroups.bind_port}"
bind_addr="${jgroups.bind_addr}"
>
By default, Infinispan images bind to SITE_LOCAL, which means "Picks a site local IP address, e.g. from the 192.168.0.0 or 10.0.0.0 address range".
In the JGroups configuration documentation you can check the other possible values available for bind_addr: look for "The following special values are also recognized for bind_addr" just after the table.
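SITE_LOCAL essentially means JGroups picks an address from the RFC 1918 private ranges, which is why it keeps working as pod IPs change. A rough illustration with Python's ipaddress module (is_private is slightly broader than RFC 1918, but matches for these ranges):

```python
import ipaddress

# Addresses from the RFC 1918 ranges that SITE_LOCAL matches, plus a public one
for addr in ("192.168.1.10", "10.0.0.5", "172.16.4.1", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)  # True, True, True, False
```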

Azure API Management service with external virtual network to Docker

I want to use the Azure API Management service (AMS) to expose an API created with R/Plumber, hosted in a Docker container running on an Ubuntu machine.
Scenario
With R/Plumber I created some APIs that I want to protect. I created a virtual machine on Azure with Ubuntu and installed Docker. The APIs are in a container that I published to the virtual machine with Docker, and I can access them over the internet.
On Azure I created an API Management service and added the APIs from the Swagger OpenAPI documentation.
Problem
I want to secure the APIs and expose only the AMS to the internet. My idea was to remove the public IP from the virtual machine and connect the API Management service to the API over a virtual network, using the internal IP (http://10.0.1.5:8000).
So I tried to set up a virtual network: I clicked on the menu, then External, and then on the row where I can select a network. This virtual network contains one network interface, which is the one the virtual machine is using.
When I save the changes, I have to wait a while and then I receive an error:
Failed to connect to management endpoint at azuks-chi-testapi-d1.management.azure-api.net:3443 for a service deployed in a virtual network. Make sure to follow guidance at https://aka.ms/apim-vnet-common-issues.
I read the following documentation, but I can't understand how to do what I want:
Azure API Management - External Type: gateway unable to access resources within the virtual network?
How to use Azure API Management with virtual networks
Is there any how-to to use? Any advice? What am I doing wrong?
Update
I tried to add more address space in the virtual network.
One of them (10.0.0.2/24) is delegated to the API Management service.
Then, in the network security group, I added port 3443.
From the API Management service I still can't reach the server at its internal IP (10.0.2.5). What did I miss?
See common network configuration issues; it lists all the dependencies that must be exposed for APIM to work. Make sure that your VNet allows ingress on port 3443 to the subnet where the APIM service is located. This configuration must be done on the VNet side, not in APIM.
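One quick way to verify whether a given port is actually reachable from a machine inside the VNet is a plain TCP connect test; a minimal sketch (the endpoint name below is the one from the error message and is only an example, substitute your own):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical endpoint from the error message):
# port_open("azuks-chi-testapi-d1.management.azure-api.net", 3443)
```

If this returns False from inside the VNet, an NSG rule or route is still blocking port 3443.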

Access AKS API from private IP range with api-server-authorized-ip-ranges enabled

I have a VM that hosts an Azure DevOps agent. The VM does not have a public IP. I can run deployments to AKS fine without api-server-authorized-ip-ranges using kubectl apply (getting a .kube config via az).
Once I add an authorized IP range I can no longer run deployments, and I can't add a private IP range because I get this exception:
--api-server-authorized-ip-ranges must be global non-reserved addresses or CIDRs
Due to various policies I am unable to give my VM a public IP. Is there any way around this?
You can't use the --api-server-authorized-ip-ranges option with private IPs. The IPs must be public, as described here, or alternatively you can create a private AKS cluster.
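The "global non-reserved" requirement can be pre-checked locally before calling the CLI; a rough approximation (not the CLI's exact validation) using Python's ipaddress module:

```python
import ipaddress

def allowed_authorized_range(cidr: str) -> bool:
    """Rough pre-check mirroring the CLI error: ranges must be globally routable."""
    return ipaddress.ip_network(cidr).is_global

print(allowed_authorized_range("10.0.0.0/24"))  # False -> private, CLI rejects it
print(allowed_authorized_range("1.2.3.0/24"))   # True  -> globally routable
```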

Activating Container on Standalone Service Fabric Cluster with Open Network Type Configuration Fails on BeginAssignIpAddress

I am running a single node Standalone Service Fabric dev cluster. The node is installed on a Hyper-V virtual machine with two network adapters attached to the external host network. The first VM net adapter is configured with a static IP address and is used as the cluster endpoint. The second VM net adapter is configured through DHCP and has an IP from the same subnet as the first net adapter.
I have looked at the topic "Service Fabric container networking modes" (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes). I have enabled DnsService, IPProviderEnabled, and ContainerNetworkSetup. I skipped step 2 because it applies only to Azure Resource Manager configurations. I have configured the network type to Open in the application manifest. The service hosts a Docker container.
When I publish the application to the cluster I get Warning events in the Microsoft-Service-Fabric\Admin channel with Hosting Category.
Here is the text of some of the messages:
SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c:131816448810648848: End BeginAssignIpAddress. Error FABRIC_E_INVALID_OPERATION
Failed to remove enpoint resource file=C:\ProgramData\SF\vm0\Fabric\work\Applications\SFApplication1Type_App6\ebanking2016xg_ContainerPkg.d526398e-e01e-43fd-b1d4-9cba19bd608c.Endpoints.txt. Error=0x80070002. NodeVersion=6.3.176.9494:0:0.
SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c:131816448810648848: End(Setup->EndCleanupServicePackageEnvironment due to error FABRIC_E_INVALID_OPERATION): error 0x80070002
End(SetupPackageEnvironment): Id=SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c, Version=1.0:1.0:131816452585647419, ErrorCode=FABRIC_E_INVALID_OPERATION
...
Activate: Activate:SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c:1.0:1.0:131816452585647419, ErrorCode=FABRIC_E_INVALID_OPERATION, RetryCount=0
This group of warning messages continues to appear at a 10-second interval, and the application stays in the Activating status on the node.
When I do not set the network type to Open, the application activates successfully using nat mode.
So a couple of questions emerge:
Is network type Open supported on a Standalone Service Fabric installation?
What is the required configuration on host, guest, cluster, and node level?

How to access a 'private' service of Kubernetes in browser?

I have created a Kubernetes cluster in GKE. The first thing I tried was deploying the cluster, creating a deployment and a service (type: NodePort), and creating an Ingress above my service.
I was able to visit my pod using a public IP. This is all working fine, but now I want to create a cluster whose services I can access in my browser using a private IP, without others being able to access them.
I've created a new cluster with the HTTP load-balancing add-on disabled, so it isn't created inside my cluster. Then I made a new deployment and created a new service whose type is ClusterIP.
Now I seem to have a private service, but how can I access this in my browser?
Is it possible to create a VPN solution in GKE to connect to the cluster and get an IP from inside the cluster, which would allow me to access the private services in my cluster?
If I'm misunderstanding something, please feel free to correct me.
