I have the following setup:
A VPC network V and a VPC Connector for V using the CIDR range "10.8.0.0/28".
The following services A and B are connected to the VPC via the Connector
Cloud Run Service A: This service is set to ingress=internal to secure the API. Its egress is set to private-ranges-only.
Cloud Run Service B: This service provides an API for another Service C running in the Azure cloud. B also needs access to Service A's API. Its ingress and egress are both set to all, so that all outgoing traffic is routed through the VPC Connector and requests to the internal Service A can succeed.
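For reference, a setup like the one described above could be configured roughly like this with gcloud (the connector and service names, the region, and everything else not mentioned in the question are placeholders):

```
# Serverless VPC Access connector on network V, using the CIDR range from the question
gcloud compute networks vpc-access connectors create v-connector \
  --network=V --region=europe-west1 --range=10.8.0.0/28

# Service A: only reachable from internal sources, egress only for private ranges
gcloud run services update service-a --region=europe-west1 \
  --ingress=internal \
  --vpc-connector=v-connector --vpc-egress=private-ranges-only

# Service B: reachable from Azure (ingress=all), all egress through the connector
gcloud run services update service-b --region=europe-west1 \
  --ingress=all \
  --vpc-connector=v-connector --vpc-egress=all-traffic
```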
The current problem is the following: requests from Service C -> Service B result in a 504 Gateway Timeout. If the egress of Service B is changed to private-ranges-only, the requests from Service C succeed, but in return all requests from B -> A return 403 Forbidden, since that traffic is no longer routed through the VPC Connector (as far as I know, private-ranges-only egress does not send traffic to Service A through the connector). All requests from Cloud Run services to other Cloud Run services are currently issued against "*.run.app" URLs.
I cannot come up with a possible and convenient fix for this setup. Is there an explanation why egress=all on Service B results in a Gateway Timeout for requests from Service C? I tried to follow the logs of the VPC but did not see any causes.
The following changes were necessary to make it run (a rough sketch of the corresponding commands follows the steps):
1. Follow this guide to create a static outbound IP for Service B.
2. Remove the previously created VPC Connector (it was created with a CIDR range, not with a subnet as in the guide).
3. Update Cloud Run Service B to use the VPC Connector created in step 1.
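The guide's steps boil down to something like the following sketch (all names, the region, and the subnet range are placeholders, not values from the actual project):

```
# Step 1: subnet-based connector, reserved static address, Cloud Router and NAT
gcloud compute networks subnets create b-subnet \
  --network=V --region=europe-west1 --range=10.124.0.0/28

gcloud compute networks vpc-access connectors create b-connector \
  --region=europe-west1 --subnet=b-subnet

gcloud compute addresses create b-static-ip --region=europe-west1

gcloud compute routers create b-router --network=V --region=europe-west1

gcloud compute routers nats create b-nat \
  --router=b-router --region=europe-west1 \
  --nat-custom-subnet-ip-ranges=b-subnet \
  --nat-external-ip-pool=b-static-ip

# Steps 2/3: point Service B at the new connector and keep all egress on it
gcloud run services update service-b --region=europe-west1 \
  --vpc-connector=b-connector --vpc-egress=all-traffic
```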
Since removing the static outbound IP breaks the setup, I assume the Azure service requires a static IP to communicate with.
Related
There is a VPC connector enabled with "Route all traffic through the VPC connector" and a firewall rule that allows all egress traffic for the VPC. Still, I am not able to connect to the RabbitMQ instance (CloudAMQP) due to a timeout.
I realized that the issue was using the Cloud Run VPC connector for all traffic without a NAT gateway. After creating a NAT gateway for the related VPC, the issue was resolved.
Using the VPC connector for only internal traffic can also help in this case. But if you need a static IP for outbound requests to external endpoints, then you have to use Cloud Run + all traffic through the VPC connector + NAT.
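As a rough illustration (the network, router, NAT names, and the region are placeholders), a Cloud NAT gateway for the VPC can be created like this; with it in place, traffic leaving the connector under all-traffic egress can reach the internet again:

```
# Cloud Router + Cloud NAT for the VPC, with automatically allocated external IPs
gcloud compute routers create vpc-router --network=my-vpc --region=europe-west1

gcloud compute routers nats create vpc-nat \
  --router=vpc-router --region=europe-west1 \
  --nat-all-subnet-ip-ranges \
  --auto-allocate-nat-external-ips
```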
Do we have to route all outbound requests through the VPC connector? What is the "Route only requests to private IPs through the VPC connector" option for? Is it only for services that don't call another one?
When you set the ingress to "internal only" or "internal and cloud load balancing" on your Cloud Run service, you can't access your service from outside (except through a load balancer).
So, in your case, you route only traffic to private IPs through the serverless VPC connector. However, your Cloud Run service is always reachable on the internet, at a public IP. Thus, to reach it as internal traffic from your VPC, you need to send all traffic, to private and public IPs alike, through the serverless VPC connector.
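In gcloud terms, that combination looks roughly like the sketch below (service, connector, and region names are placeholders):

```
# The protected service only accepts internal traffic
gcloud run services update service-a --region=europe-west1 --ingress=internal

# The calling service sends all egress (including to public *.run.app addresses)
# through the connector, so its requests arrive as internal traffic
gcloud run services update service-b --region=europe-west1 \
  --vpc-connector=v-connector --vpc-egress=all-traffic
```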
I deployed front and back apps with ECS Fargate; both of them are up and running and I can access them from my browser. They are configured on the same VPC and subnet.
The backend has service discovery configured, and my server's DNS address was inserted into my React application.
I read in this thread cannot-connect-two-ecs-services-via-service-discovery that if I use axios from my browser to access my server via the service discovery name, it will not work.
The error I am getting is: net::ERR_NAME_NOT_RESOLVED
How can I achieve communication between these 2 services with service discovery? Am I missing something?
I have a K8s cluster that should whitelist a Cloud Run server, so I would like to know the IP address or IP range of the Cloud Run server.
As found here:
https://github.com/ahmetb/cloud-run-faq#is-there-a-way-to-get-static-ip-for-outbound-requests
Is there a way to get static IP for outbound requests?
Currently not, since Cloud Run uses a dynamic serverless machine pool by Google and its IP addresses cannot be controlled by Cloud Run users.
However, there is a workaround to route the traffic through a Google Compute Engine instance by running a persistent SSH tunnel inside the container and making your applications use it.
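A very rough sketch of that workaround, assuming a GCE VM with a static external IP and SSH access (the host, user, and port below are made-up placeholders), is to open a SOCKS tunnel from inside the container and point outbound traffic at it; this only helps for clients that honor the proxy environment variables:

```
# Open a SOCKS5 proxy through a GCE VM that has a static external IP
ssh -N -D 1080 tunnel-user@gce-proxy-host &

# Route outbound requests through the tunnel, so they leave via the VM's IP
export HTTPS_PROXY=socks5://localhost:1080
curl https://example.com/
```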
Define API server authorized IP range - is this limited only to setting the context (kubeconfig file) for executing kubectl, or does it also apply to API calls from services hosted on AKS pods? How is this different from the nginx IP whitelisting annotation?
The API server authorized IP range feature blocks access from the internet to the API server endpoint, except for the whitelisted IP addresses you provide. Pods within the cluster access the Kubernetes API through the internal service at kubernetes.default.svc.cluster.local. The whitelisting annotation for the nginx ingress controller (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range) blocks non-whitelisted IPs from accessing your application running in Kubernetes behind that ingress controller.
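To illustrate the difference (resource names and the CIDR are placeholders), the two mechanisms are configured in entirely different places:

```
# Restrict who can reach the AKS API server endpoint from the internet
az aks update --resource-group my-rg --name my-cluster \
  --api-server-authorized-ip-ranges 203.0.113.0/24

# Restrict who can reach an application behind the nginx ingress controller
kubectl annotate ingress my-app-ingress \
  nginx.ingress.kubernetes.io/whitelist-source-range="203.0.113.0/24"
```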