Cloud Run service to service call getting 403 when VPC Accessor Enabled with "Route only requests to private IPs through the VPC connector" option - google-cloud-run

Do we have to route all outbound requests through the VPC connector? What is the "Route only requests to private IPs through the VPC connector" option for? Is it only for services that don't call another Cloud Run service?

When you set the ingress to "internal" or "internal and cloud load balancing" on your Cloud Run service, you can't access your service from outside the VPC (except if you go through a load balancer).
So, in your case, only traffic to private IPs is routed through the serverless VPC connector. However, the target Cloud Run service is always reachable on the internet, at a public IP (its *.run.app URL), so calls to it bypass the connector and arrive from the internet, which the internal ingress rejects with a 403. To have those calls come in through your VPC and be accepted, you need to route all traffic, private and public IPs, through the serverless VPC connector.
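For example, a minimal sketch with gcloud (the service names, connector name, and region here are placeholders, not your actual resources):
# route all egress from the calling service through the connector, not just private ranges
gcloud run services update calling-service --region europe-west1 --vpc-connector my-connector --vpc-egress all-traffic
# keep the called service restricted to internal traffic
gcloud run services update receiving-service --region europe-west1 --ingress internal
With that, the call to the internal service arrives through the VPC connector and is accepted instead of returning 403.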

Related

AWS EC2 - need to create VPC and Subnets before EC2 instance?

I am trying to create a basic EC2 instance on which I will run a Docker container that runs a Spring Boot web app.
When I go to create the instance I see the below screen.
Do I need to create a VPC and subnets first before I can create an EC2 instance? And is this a new feature of AWS?
I want my instance and Docker container to be accessible via HTTP and HTTPS on the public internet, as Spring Boot exposes a REST API.
If you don't already have one, you can create your own VPC or use the default one, then create a public subnet (with auto-assigned public addresses) in that VPC.
I would recommend creating your own VPC directly.
Since you want your instance to be reachable over HTTP and HTTPS, create a security group that allows connections on ports 80 and 443, and allows connections on port 22 from your personal IP address only, as sketched below.
Port 22 will let you connect to the instance via SSH to set up your Docker container.
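A minimal sketch with the AWS CLI (the VPC ID, security group ID, and the 203.0.113.10/32 address are placeholders for your own values):
# create the security group in your VPC
aws ec2 create-security-group --group-name web-sg --description "web and ssh access" --vpc-id vpc-0123456789abcdef0
# open HTTP and HTTPS to the world
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
# open SSH only from your own address
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32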
Hope it helped!

Connecting a Cloud Run Instance to External RabbitMQ Instance(CloudAMQP)

There is a VPC connector enabled with "Route all traffic through the VPC connector" and a firewall rule that allows all egress traffic for the VPC. Still, I am not able to connect to the RabbitMQ instance (CloudAMQP) due to a timeout.
I realized that the issue was using the Cloud Run VPC connector for all traffic without a NAT gateway. After creating a NAT gateway for the related VPC, the issue was resolved.
Using the VPC connector for only the internal traffic can also help in this case. But if you need a static IP for outbound requests to external endpoints, then you have to use Cloud Run + all traffic through the VPC connector + Cloud NAT, as sketched below.
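A rough sketch of that Cloud NAT setup with gcloud (the router, NAT, and network names and the region are placeholders):
# a Cloud Router in the VPC the connector is attached to
gcloud compute routers create my-router --network my-vpc --region europe-west1
# a NAT gateway so traffic routed through the VPC can still reach the public internet
gcloud compute routers nats create my-nat --router my-router --region europe-west1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges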

Public Cloud Run Service communication with internal-only ingress Cloud Run Service

I have the following setup:
A VPC network V and a VPC Connector for V using CIDR range "10.8.0.0/28" (EDITED)
The following services A and B are connected to the VPC via the Connector:
Cloud Run Service A: This service is set to ingress=internal to secure the API. Its egress is set to private-ranges-only.
Cloud Run Service B: This service provides an API for another Service C within the Azure cloud. B also needs access to Service A's API. Its egress and ingress are set to all, to route all outgoing traffic through the VPC connector and allow successful requests to the internal Service A.
The current problem is the following: requests from Service C -> Service B return a 504 Gateway Timeout. If the egress of Service B is changed to private-ranges-only, the request from Service C succeeds, but in return all requests from B -> A return 403 Forbidden, since traffic is no longer routed through the VPC connector (with private-ranges-only, requests to Service A's "*.run.app" URL are not sent through the connector, afaik). All requests from Cloud Run services to other Cloud Run services are currently issued to "*.run.app" URLs.
I cannot come up with a possible and convenient fix for this setup. Is there an explanation why egress=all on Service B results in a Gateway Timeout for requests from Service C? I tried to follow the logs from the VPC but did not see any causes.
The following changes were necessary to make it work (a sketch of the commands follows the list):
Follow this guide to create a static outbound IP for Service B
Remove the previously created VPC Connector (it was created with a CIDR range, not with a dedicated subnet as in the guide)
Update Cloud Run Service B to use the VPC Connector created during step 1
Since removing the static outbound IP breaks the setup, I assume the Azure service requires a static IP to communicate with.
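Roughly, the static outbound IP setup from that guide looks like this with gcloud (all names, the region, and the IP range are placeholders; the NAT is limited to the connector's subnet so only Cloud Run egress uses the reserved address):
# dedicated subnet that will back the VPC connector
gcloud compute networks subnets create run-subnet --network v-network --region europe-west1 --range 10.8.0.0/28
# connector created from the subnet instead of a raw CIDR range
gcloud compute networks vpc-access connectors create run-connector --region europe-west1 --subnet run-subnet
# reserve a static address and NAT only the connector subnet through it
gcloud compute addresses create run-egress-ip --region europe-west1
gcloud compute routers create run-router --network v-network --region europe-west1
gcloud compute routers nats create run-nat --router run-router --region europe-west1 --nat-custom-subnet-ip-ranges run-subnet --nat-external-ip-pool run-egress-ip
# route all egress from Service B through the connector
gcloud run services update service-b --region europe-west1 --vpc-connector run-connector --vpc-egress all-traffic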

Define API server authorized IP range within Azure Kubernetes Services

Define API server authorized IP range - is this only for restricting where kubectl (with its .config file context) can be used from, or does it also apply to API calls from services hosted on AKS pods? How is it different from the nginx IP whitelisting annotation?
The API server authorized IP range feature blocks access from the internet to the API server endpoint, except for the whitelisted IP addresses you provide. Pods within the cluster reach the Kubernetes API through the internal service at kubernetes.default.svc.cluster.local, so they are not affected. The whitelist annotation for the nginx ingress controller (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range) blocks non-whitelisted IPs from accessing your application running in Kubernetes behind that ingress controller.
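To illustrate the difference (the resource group, cluster, and ingress names and the 203.0.113.0/24 range are placeholders): the first command restricts who can reach the AKS API server from the internet, while the second restricts who can reach one application exposed behind the nginx ingress controller:
# limit internet access to the AKS API server endpoint
az aks update --resource-group my-rg --name my-aks --api-server-authorized-ip-ranges 203.0.113.0/24
# limit client source IPs for an application behind the nginx ingress controller
kubectl annotate ingress my-ingress "nginx.ingress.kubernetes.io/whitelist-source-range=203.0.113.0/24"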

AWS Load balancing static IP range/Address

I have an API that whitelists the IP addresses allowed to access it. I need to allow all AWS Elastic Beanstalk EC2 instances to access this API, so I need to configure, either through the VPC or through load balancer settings, a static IP or IP range x.x.x.x/32 that I can have whitelisted.
I'm lost between the VPC, Load Balancer, Elastic Beanstalk, etc. I need someone to break it down a bit and point me in the right direction.
Currently the load balancer is set up for SSL and this works correctly.
Thank you for your time
You can set up a NAT Gateway and associate an Elastic IP address with it in your VPC. Configure the routing so that your subnets use the NAT Gateway for egress traffic. Then, on your API side, you only need to whitelist the Elastic IP address of your NAT Gateway.
Check this guide for more details.
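A rough sketch with the AWS CLI (the subnet, allocation, and route table IDs are placeholders; the NAT Gateway goes in a public subnet, and the route is added to the private subnets' route table):
# reserve an Elastic IP for the NAT Gateway
aws ec2 allocate-address --domain vpc
# create the NAT Gateway in a public subnet using that allocation
aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
# send the private subnets' internet-bound traffic through the NAT Gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0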
The best way to accomplish this is to place your EB EC2 instances in a private subnet that communicates with the Internet via a NAT Gateway. The NAT Gateway will use an Elastic IP address. Your API endpoint will see the NAT Gateway as the source IP for all instances in the private subnet, so you can simply add the NAT Gateway's EIP to your whitelist.
To quote Amazon, link below:
Create a public and private subnet for your VPC in each Availability Zone (an Elastic Beanstalk requirement). Then add your public resources, such as the load balancer and NAT, to the public subnet. Elastic Beanstalk assigns them a unique Elastic IP address (a static, public IP address). Launch your Amazon EC2 instances in the private subnet so that Elastic Beanstalk assigns them private IP addresses.
Load-balancing, autoscaling environments
You can assign Elastic IP addresses to the EC2 instances behind the load balancer.
First you need to create a number of Elastic IP addresses. They will be unassigned by default.
The actual assignment can be triggered from the "User data" script that you specify when creating the Launch Configuration for the environment's Auto Scaling group. The following two lines in the user data script should assign an IP:
# install the helper tool that picks a free Elastic IP and associates it with this instance
pip install aws-ec2-assign-elastic-ip
# associate one of the pre-allocated addresses with the instance at boot
aws-ec2-assign-elastic-ip --region ap-southeast-2 --access-key XXX --secret-key XXX --valid-ips 1.2.3.4,5.6.7.8,9.10.11.12
The list of --valid-ips should be the list of IPs you created in the beginning.
