Restoring VPC peering for private GKE cluster - gke-networking

Private GKE clusters use a VPC peering connection between the worker-node VPC and the master (control plane) nodes, which live in a Google-managed project/network.
This VPC peering was accidentally removed, and the worker nodes lost their connection to the master API.
Is there a way to restore that VPC peering? I can see the removal action in the Cloud Activity log, but it isn't very helpful: it doesn't contain the before/after state, so it isn't evident which project/network the peering should be restored to.
Thanks!

I managed to get it working by creating a brand-new private GKE cluster in the same VPC. Google created the VPC peering with the master subnet of the new cluster together with the recently removed master subnet; it looks like they reuse the same VPC/network for peering to a specific customer VPC. I then removed the temporarily created cluster.
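For reference, a minimal sketch of that workaround with gcloud (the cluster name, zone, network names, and master CIDR below are illustrative placeholders, not the values from my environment):

# Create a throwaway private cluster in the same VPC so GKE recreates the peering
gcloud container clusters create temp-restore-peering \
  --zone us-central1-a \
  --network my-vpc --subnetwork my-subnet \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.1.0/28
# Verify the Google-managed peering is back on the VPC
gcloud compute networks peerings list --network my-vpc
# Remove the temporary cluster once the original cluster can reach its master again
gcloud container clusters delete temp-restore-peering --zone us-central1-a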
P.S. Thanks to Leo and Hector Martinez Rodriguez for pointing to the best practice.

Related

VPC access connector GCP - Cloud Run services and AlloyDB in different regions

Quick question: I am trying to configure a Cloud Run service to connect to AlloyDB on GCP.
The problem is that AlloyDB is in a different region than the other services: in this case AlloyDB is in central1 and the services are in east1.
Is there any way to pair them?
Thanks in advance,
There is no connectivity issue. You use a serverless VPC connector to bridge the serverless world (where your Cloud Run service lives) with your VPC. Therefore, with the default configuration, all traffic going to a private IP will arrive in your VPC.
Then you also have your AlloyDB instance peered with your VPC. Because the VPC is global, as long as you are in the VPC (AlloyDB or Cloud Run), any service can reach any resource, whatever its location.
In fact, your main concerns should be latency and egress cost.
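For example, a minimal sketch with gcloud (the connector name, IP range, project, image, and region below are illustrative assumptions):

# Create a Serverless VPC Access connector in the Cloud Run region
gcloud compute networks vpc-access connectors create run-connector \
  --region us-east1 \
  --network default \
  --range 10.8.0.0/28
# Route the service's private-IP traffic through the connector
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --region us-east1 \
  --vpc-connector run-connector \
  --vpc-egress private-ranges-only
# AlloyDB in the other region is then reachable over its private IP, because the VPC is global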

AKS create with App Gateway ingress controller fails with IngressAppGwAddonConfigInvalidSubnetCIDRNotContainedWithinVirtualNetwork error

When I try to create an AKS cluster with the Azure CLI using the following command:
az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.2.0.0/16" --generate-ssh-keys
I get the error below.
"(IngressAppGwAddonConfigInvalidSubnetCIDRNotContainedWithinVirtualNetwork) Subnet Prefix '10.2.0.0/16' specified for IngressApplicationGateway addon is not contained within the AKS Agent Pool's Virtual Network address prefixes '[10.224.0.0/12]'.
Code: IngressAppGwAddonConfigInvalidSubnetCIDRNotContainedWithinVirtualNetwork
Message: Subnet Prefix '10.2.0.0/16' specified for IngressApplicationGateway addon is not contained within the AKS Agent Pool's Virtual Network address prefixes '[10.224.0.0/12]'.
Target: AddonProfiles.IngressApplicationGateway"
Any idea why I get this error, or how to fix it?
I see that you have used the tutorial "Enable the Ingress Controller add-on for a new AKS cluster with a new Application Gateway instance".
I had some trouble creating a new AKS cluster with a command similar to yours. With azure-cli version 2.35.0, released on Apr 06, 2022, the command you issued worked fine.
Something has since changed and broken the tutorial. The subnet CIDR you specify with --appgw-subnet-cidr should be a /16 subnet inside the 10.224.0.0/12 range.
That leaves you with a choice in the range 10.224.0.0 - 10.239.255.255. I used subnet 10.225.0.0/16 for my deployment.
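So, keeping everything else from your command and only moving the add-on subnet inside that range, something like this should go through (10.225.0.0/16 is simply the value that worked for me):

az aks create -n myCluster -g myResourceGroup \
  --network-plugin azure \
  --enable-managed-identity \
  -a ingress-appgw \
  --appgw-name myApplicationGateway \
  --appgw-subnet-cidr "10.225.0.0/16" \
  --generate-ssh-keys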
It seems your AKS cluster's virtual network address space overlaps with the Application Gateway's virtual network.
When using an AKS cluster and Application Gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so we set the Application Gateway virtual network address prefix to 11.0.0.0/8.
I would suggest referring to the Microsoft document "Enable the AGIC add-on in existing AKS cluster through Azure CLI" to avoid the error.
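As a rough sketch of that document's approach, assuming the Application Gateway already exists in its own virtual network (resource names below are placeholders):

# Look up the existing Application Gateway's resource ID
appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
# Enable the AGIC add-on on the existing cluster against that gateway
az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId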

Kubernetes NodePort services in an on-premise Rancher cluster

I have 5 microservices in 5 pods and have exposed each one on a specific port using a NodePort service.
I have a UI app as another service in a separate pod, which is also exposed using a NodePort service.
Since I can't use pod IPs to reach URLs from the UI app (pods live and die), I exposed everything through NodePort services. Can I access all 5 services from the UI app seamlessly using their respective node ports?
Please advise - is this approach going to be reliable?
Yes, you can connect to those NodePort services seamlessly.
But remember, the nodes whose IPs you route that traffic through may need a higher-bandwidth network card and connection if these services receive a lot of traffic.
Also, a NodePort is opened on every node, so you can reach each service via any nodeIP:nodePort. If you like, you can dedicate one node's IP per service (for example, with 5 nodes, point each service's clients at a different node's IP), but this is not mandatory.
I highly recommend using a LoadBalancer service for this. If you have a bare-metal cluster, try MetalLB.
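A minimal sketch of the two options with kubectl (deployment and service names are illustrative; the LoadBalancer variant assumes MetalLB or another provider is installed to hand out external IPs):

# NodePort: reachable on every node at <node-ip>:<allocated-node-port>
kubectl expose deployment orders --port=80 --target-port=8080 --type=NodePort
kubectl get svc orders        # shows the allocated node port
# LoadBalancer: gets a stable external IP from MetalLB's address pool
kubectl expose deployment orders --name=orders-lb --port=80 --target-port=8080 --type=LoadBalancer
kubectl get svc orders-lb     # shows the external IP once assigned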
Edit (after Nagappa LM's comment):
If it's for QA there is no need to worry, but if they load test all the services simultaneously it could be problematic.
A code change only means your Kubernetes Deployment changes, not the Kubernetes Service; the Service is where the node port is defined.

How to access a 'private' service of Kubernetes in browser?

I have created a Kubernetes cluster in GKE. The first thing I tried was deploying the cluster, creating a Deployment, a Service (type: NodePort), and an Ingress in front of my Service.
I was then able to visit my pod using a public IP. This all works fine, but now I want a cluster whose services I can access in my browser using a private IP, without others being able to reach them.
I've created a new cluster with the HTTP load balancing add-on disabled, so no load balancer is created for the cluster. I then made a new Deployment and created a new Service of type ClusterIP.
Now I seem to have a private service, but how can I access this in my browser?
Is it possible to create a VPN solution in GKE to connect to the cluster and get some IP from inside the cluster which will allow me to access the private services in my cluster?
If I'm misunderstanding something, please feel free to correct me.
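For context, the private setup described above boils down to something like this (the deployment name and image are illustrative):

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=ClusterIP
kubectl get svc web   # only a cluster-internal IP, no external address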

AWS Load Balancer EC2 health check request timed out failure

I'm trying to get down and dirty with DevOps and I'm running into a health check request timed out failure. The problem is my Elastic Load Balancer sends a health check to my EC2 instance and gets a network timeout. I'm not sure what I did wrong. I am following this tutorial and I have completed all the steps up to and including "Using a Elastic Load Balancer". My EC2 instance seems to be working fine and I am able to successfully curl localhost on port 9292 from within the EC2 instance.
EC2 instance security group setup:
Elastic Load Balancer setup:
My target group for the ELB routing has port 9292 open via HTTP and here's a screenshot of the target in my target group that is unhealthy.
Health check config:
My EC2 instance is part of a VPC and my ELB is attached to the same VPC. I do not have Apache or nginx installed; to my understanding, I do not need them. I have a Rails Puma server running and I can send successful curl requests to it.
My hunch is that my ELB is not allowed to reach my EC2 instance, resulting in a network timeout and a failed health check. I'm unable to find the cause for this. Any ideas? This SO post didn't help much. Are my security groups misconfigured? What else could potentially block a routing request from ELB to my EC2 instance?
Also, is there a way to view network requests / logs for my EC2 instance? I keep seeing VPC flow logging but I feel like there are simpler alternatives.
Here's something I posted in the AWS forums but to no avail.
UPDATE: I can curl the private IP of the target just fine from within an EC2 instance. I don't think it's the target instance; I think it's something to do with the security group setup. I am unable to identify why, though, because I have basically allowed all traffic from the Load Balancer to the EC2 instance.
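For what it's worth, "allow all traffic from the Load Balancer" in security-group terms looks roughly like this (the group IDs are placeholders; 9292 is my health check port):

# Allow the ELB's security group to reach the instance on the health check port
aws ec2 authorize-security-group-ingress \
  --group-id sg-0instance0000000 \
  --protocol tcp --port 9292 \
  --source-group sg-0loadbalancer000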
I made my mistake during the "Setup your VPC" step. I had just finished creating a subnet for an RDS instance. When I then launched an instance, the default subnet that AWS chose after I switched to my VPC was the subnet I had made for RDS, which was NOT a public subnet. Therefore, nothing, whether another EC2 instance or my load balancer, could reach it, because I had only set up my public subnet to take requests.
The solution was to create a new instance and, this time, pick the correct public subnet. My original EC2 instance was in a private subnet, while the load balancer was pointing at the public subnet.
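If it helps anyone debugging the same thing, here is a rough sketch of how to check which subnet an instance landed in and whether that subnet is public (the instance and subnet IDs are placeholders):

# Which subnet is the instance in?
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].SubnetId"
# Does that subnet's route table have a route to an internet gateway (igw-...)?
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
  --query "RouteTables[].Routes[].GatewayId"
# If nothing comes back, the subnet uses the VPC's main route table; check that one instead.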
Here's a link to a hand drawn image that helped me pin point my problem, hopefully can help anyone else who's having trouble setting up. I didn't put image here directly because it's bigger than 2MB.
Glad to answer any further questions too!
