Connecting a Cloud Run Instance to an External RabbitMQ Instance (CloudAMQP) - google-cloud-run

I have a VPC connector with "Route all traffic through the VPC connector" enabled, and a firewall rule that allows all egress traffic for the VPC. Still, I am not able to connect to the RabbitMQ instance (CloudAMQP); the connection times out.

I realized that the issue was routing all traffic through the Cloud Run VPC connector without a NAT gateway. After creating a Cloud NAT gateway for the related VPC, the issue was resolved.
Using the VPC connector for only internal traffic would also work in this case. But if you need a static IP for outbound requests to external endpoints, then you have to combine Cloud Run + all traffic through the VPC connector + Cloud NAT.
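A rough sketch of the Cloud NAT setup described above; the router, NAT, network names, and region are placeholders you would replace with your own:

```shell
# 1. Create a Cloud Router in the VPC the connector is attached to.
gcloud compute routers create my-router \
  --network=my-vpc --region=europe-west1

# 2. Attach a NAT config so egress from the VPC (including Cloud Run
#    traffic routed through the connector) gets a public source IP.
gcloud compute routers nats create my-nat \
  --router=my-router --region=europe-west1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```

With `--auto-allocate-nat-external-ips` Google picks the external IPs; if CloudAMQP needs a fixed IP to whitelist, reserve a static address and pass it via `--nat-external-ip-pool` instead.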

Related

Why does an isolated v3 App Service Environment have 2 outbound IPs?

The version 2 App Service Environment gives 1 outbound IP.
But with the v3 isolated App Service Environment I get 2 outbound IPs.
Background: I need to whitelist the outbound IP, and I would prefer to whitelist just 1 IP instead of 2.
Can I delete one of the outbound IPs?
With isolated, it seems I cannot use a virtual network NAT gateway to direct traffic through a static public IP address (App Service VNet integration is greyed out)?
Thanks, Peter
Can I delete one of the outbound IPs?
There are 2 outbound IPs because there are 2 load balancers in the infrastructure VNet for ASEv3. One IP is for the infra roles (multi-role, front end, etc.) and the other is for the workers' outbound connections. We provide both as outbound IPs because outbound traffic can come from the workers (in the case of app outbound traffic) or from the infra layer (like fetching Key Vault references for a custom DNS suffix).
Below is an ASEv3 architecture diagram.
You should account for both IPs, or you run the risk of blocking necessary traffic.
With isolated, it seems I cannot use a virtual network NAT gateway to direct traffic through a static public IP address (App Service VNet integration is greyed out)?
For more details see: https://learn.microsoft.com/en-us/azure/app-service/networking/nat-gateway-integration
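To confirm exactly which outbound IPs need whitelisting, you can query the app itself; the app name and resource group below are placeholders:

```shell
# Lists every outbound IP the app may use, comma-separated.
az webapp show --name my-app --resource-group my-rg \
  --query possibleOutboundIpAddresses --output tsv
```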

Cloud Run service to service call getting 403 when VPC Accessor Enabled with "Route only requests to private IPs through the VPC connector" option

Do we have to route all outbound requests through the VPC connector? What is the "Route only requests to private IPs through the VPC connector" option for? Is it only for services that don't call another one?
When you set the ingress to internal only or internal and cloud load balancing on your Cloud Run service, you can't access the service from outside (except through a load balancer).
So, in your case, only requests to private IPs are routed through the serverless VPC connector. However, your Cloud Run service is always reachable on the internet, on a public IP. Therefore, to reach it from your VPC, you need to route all traffic, both private and public IPs, through the serverless VPC connector.
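A sketch of the two egress/ingress settings in play; the service names, connector name, and region are placeholders:

```shell
# Route ALL egress from the calling service through the connector,
# so requests to the internal-ingress service arrive via the VPC.
gcloud run services update caller-service \
  --region=europe-west1 \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic

# The callee keeps internal-only ingress, rejecting public traffic.
gcloud run services update internal-service \
  --region=europe-west1 \
  --ingress=internal
```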

Public Cloud Run Service communication with internal-only ingress Cloud Run Service

I have the following setup:
A VPC network V and a VPC Connector for V using CIDR range "10.8.0.0/28"
The following services A and B are connected to the VPC via the Connector
Cloud Run Service A: This service is set to ingress=internal to secure the API. Its egress is set to private-ranges-only.
Cloud Run Service B: This service provides an API for another Service C within the Azure Cloud. B also needs access to Service A's API. The egress and ingress are set to all to route all outgoing traffic through the VPC connector and allow for a successful request on internal Service A.
The current problem is the following: requests from Service C -> Service B end in a 504 Gateway Timeout. If the egress of Service B is changed to private-ranges-only, requests from Service C succeed, but in return all requests from B -> A return 403 Forbidden, since traffic is no longer routed through the VPC connector (Cloud Run does not treat Service A's address as a private range, afaik). All requests from Cloud Run services to other Cloud Run services are currently issued to "*.run.app" URLs.
I cannot come up with a workable, convenient fix for this setup. Is there an explanation why egress=all on Service B results in a Gateway Timeout for requests from Service C? I tried to follow the VPC logs but did not see any cause.
The following changes were necessary to make it run:
Follow this guide to create a static outbound IP for Service B
Remove the previously created VPC connector (it was created with a plain CIDR range, not a subnet as in the guide)
Update Cloud Run Service B to use the VPC connector created during step 1
Since removing the static outbound IP breaks the setup, I assume the Azure service requires a static IP to communicate with.
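The steps above can be sketched with gcloud; every name, range, and region below is a placeholder, and this follows the static-outbound-IP pattern from the linked guide rather than an exact transcript of it:

```shell
# 1. Dedicated subnet + connector backed by it (instead of a bare CIDR):
gcloud compute networks subnets create connector-subnet \
  --network=my-vpc --region=europe-west1 --range=10.8.0.0/28
gcloud compute networks vpc-access connectors create my-connector \
  --region=europe-west1 --subnet=connector-subnet

# 2. Reserve a static IP and NAT the connector subnet through it:
gcloud compute addresses create my-static-ip --region=europe-west1
gcloud compute routers create my-router \
  --network=my-vpc --region=europe-west1
gcloud compute routers nats create my-nat \
  --router=my-router --region=europe-west1 \
  --nat-custom-subnet-ip-ranges=connector-subnet \
  --nat-external-ip-pool=my-static-ip

# 3. Point Service B at the new connector and route all egress through it:
gcloud run services update service-b \
  --region=europe-west1 \
  --vpc-connector=my-connector --vpc-egress=all-traffic
```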

SSL with TCP connection on Kubernetes?

I'm running a TCP server (Docker instance / Go) on Kubernetes. It's working, and clients can connect and do the intended stuff. I would like to secure the TCP connection with an SSL certificate. I already got SSL working with an HTTP REST API service running on the same Kubernetes cluster by using ingress controllers, but I'm not sure how to set it up for a plain TCP connection. Can anyone point me in the right direction?
As you can read in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing
services other than HTTP and HTTPS to the internet typically uses a
service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Depending on the platform you are using, you have different kinds of load balancers available which you can use to terminate your SSL traffic. If you have an on-premises cluster, you can set up an additional nginx or HAProxy server in front of your Kubernetes cluster to handle SSL traffic.
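A minimal sketch of the LoadBalancer approach, assuming the server itself (or a TLS proxy sidecar) terminates TLS on its listening port; the names, labels, and port numbers are placeholders:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: tcp-server
spec:
  type: LoadBalancer   # provisions an external L4 load balancer
  selector:
    app: tcp-server    # must match the server pods' labels
  ports:
    - name: tls-tcp
      port: 9000        # port clients connect to with TLS
      targetPort: 9000  # container port where the Go server listens
EOF
```

Since the load balancer here operates at layer 4, it passes the TLS bytes straight through; the certificate lives with the Go server (e.g. `tls.Listen` instead of `net.Listen`), not with the Service.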

AWS Load balancing static IP range/Address

I have an API that whitelists the IP addresses allowed to access it. I need to allow all AWS Elastic Beanstalk EC2 instances to access this API. So I need to configure, either through the VPC or the load balancer settings, a static IP or IP range x.x.x.x/32 that I can have whitelisted.
I'm lost between the VPC, Load Balancer, Elastic Beanstalk, etc. I need someone to break it down a bit and point me in the right direction.
Currently the load balancer is setup for SSL and this works correctly.
Thank you for your time
You can set up a NAT Gateway and associate an Elastic IP address with it in your VPC. Configure the routing from your subnets to use the NAT Gateway for egress traffic. Then, on your API side, you only need to whitelist the Elastic IP address of your NAT Gateway.
Check this guide for more details.
The best way to accomplish this is to place your EB EC2 instances in a private subnet that communicates to the Internet via a NAT Gateway. The NAT Gateway will use an Elastic IP address. Your API endpoint will see the NAT Gateway as the source IP for all instances in the private subnet, thereby supporting adding the NAT Gateway EIP to your whitelist.
To quote Amazon, link below:
Create a public and private subnet for your VPC in each Availability Zone (an Elastic Beanstalk requirement). Then add your public resources, such as the load balancer and NAT, to the public subnet. Elastic Beanstalk assigns them a unique Elastic IP address (a static, public IP address). Launch your Amazon EC2 instances in the private subnet so that Elastic Beanstalk assigns them private IP addresses.
Load-balancing, autoscaling environments
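The NAT Gateway setup described above can be sketched with the AWS CLI; the subnet, allocation, route table, and NAT gateway IDs below are placeholders:

```shell
# 1. Allocate an Elastic IP and create the NAT Gateway in a PUBLIC subnet.
aws ec2 allocate-address --domain vpc          # note the AllocationId
aws ec2 create-nat-gateway \
  --subnet-id subnet-0abc1234 \
  --allocation-id eipalloc-0def5678

# 2. Route the PRIVATE subnet's internet-bound traffic through it,
#    via that subnet's route table.
aws ec2 create-route \
  --route-table-id rtb-0aaa1111 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0bbb2222
```

After this, every Beanstalk instance in the private subnet reaches your API from the single Elastic IP, which is the one address you whitelist.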
You can assign Elastic IP addresses to the EC2 instances behind the ELB.
First you need to create a number of Elastic IP addresses. They will be unassigned by default.
The actual assignment can be triggered from the "User data" script you specify when creating a Launch Configuration for the Auto Scaling group. The following two lines of code in the user data script should assign an IP:
pip install aws-ec2-assign-elastic-ip
aws-ec2-assign-elastic-ip --region ap-southeast-2 --access-key XXX --secret-key XXX --valid-ips 1.2.3.4,5.6.7.8,9.10.11.12
The list of --valid-ips should be the list of IPs you created in the beginning.
