VPC access connector on GCP - Cloud Run services and AlloyDB in different regions

Quick question: I am trying to configure a Cloud Run service to connect to AlloyDB on GCP.
The problem is that AlloyDB is in a different region than the other services: AlloyDB is in us-central1 and the services are in us-east1.
Is there any way to pair them?
Thanks in advance,

There is no connectivity issue. You use a Serverless VPC Access connector to bridge the serverless world (where your Cloud Run services live) with your VPC. With the default configuration, all traffic going to a private IP is therefore routed into your VPC.
Your AlloyDB instance is also peered with your VPC. Because the VPC is global, as long as both ends are in the VPC (AlloyDB and Cloud Run), any service can reach any resource, whatever its location.
In fact, your main concerns should be latency and egress cost.
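As a minimal sketch (the host, database name, and credentials below are placeholder environment variables), a Cloud Run service with the connector attached reaches AlloyDB's private IP like any PostgreSQL database:

import os
import psycopg2  # AlloyDB is PostgreSQL-compatible, so a standard driver works

# Placeholders: set these as environment variables on the Cloud Run service.
conn = psycopg2.connect(
    host=os.environ["ALLOYDB_PRIVATE_IP"],  # e.g. a 10.x.x.x address, routed through the VPC connector
    port=5432,
    dbname=os.environ.get("DB_NAME", "postgres"),
    user=os.environ.get("DB_USER", "postgres"),
    password=os.environ["DB_PASSWORD"],
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()

The us-central1 vs us-east1 split does not change this code at all; it only adds cross-region latency and network egress charges.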

Related

Azure API Management service with external virtual network to Docker

I want to use the Azure API Management Service (AMS) to expose an API created with R/Plumber, hosted in a Docker container that runs on an Ubuntu machine.
Scenario
With R/Plumber I created some APIs that I want to protect. I created a virtual machine on Azure with Ubuntu and installed Docker. The APIs run in a container that I published to the virtual machine with Docker, and I can access them over the internet.
On Azure I created an API Management service and added the APIs from the Swagger OpenAPI documentation.
Problem
I want to secure the APIs and expose only the AMS to the internet. My idea was to remove the public IP from the virtual machine and connect the API Management service to the API through a virtual network, using the internal IP (http://10.0.1.5:8000).
So I tried to set up a virtual network. I clicked on the menu, then External, and then on the row where I can select a network. In this virtual network there is one network interface, which is the one the virtual machine is using.
When I save the changes, I have to wait a while, and then I receive an error:
Failed to connect to management endpoint at azuks-chi-testapi-d1.management.azure-api.net:3443 for a service deployed in a virtual network. Make sure to follow guidance at https://aka.ms/apim-vnet-common-issues.
I read the following documentation, but I can't understand how to do what I want:
Azure API Management - External Type: gateway unable to access resources within the virtual network?
How to use Azure API Management with virtual networks
Is there a how-to I can follow? Any advice? What am I doing wrong?
Update
I tried to add more address space to the virtual network.
One of the ranges (10.0.0.2/24) is delegated to the API Management service.
Then, in the network security group, I opened port 3443.
From the API Management service I still can't reach the server at its internal IP (10.0.2.5). What did I miss?
See common network configuration issues; it lists all the dependencies that must be reachable for APIM to work. Make sure that your VNet allows ingress on port 3443 to the subnet where the APIM service is located. This configuration must be done on the VNet side, not in APIM.
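As a sketch of that VNet-side change (the subscription ID, resource group, and NSG names are placeholders; the portal works just as well), the inbound rule can be created with the Azure Python SDK:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow the APIM management plane to reach the service on port 3443.
rule = client.security_rules.begin_create_or_update(
    resource_group_name="my-rg",                 # placeholder resource group
    network_security_group_name="apim-nsg",      # NSG attached to the APIM subnet
    security_rule_name="AllowApimManagement",
    security_rule_parameters={
        "protocol": "Tcp",
        "source_address_prefix": "ApiManagement",  # Azure service tag
        "source_port_range": "*",
        "destination_address_prefix": "VirtualNetwork",
        "destination_port_range": "3443",
        "access": "Allow",
        "direction": "Inbound",
        "priority": 100,
    },
).result()
print(rule.provisioning_state)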

Kubernetes NodePort services in an on-premise Rancher cluster

I have 5 microservices in 5 pods, each deployed on a specific port using a NodePort service.
I have a UI app as one more service in another pod, also exposed using a NodePort service.
Since I can't use pod IPs to reach URLs from the UI app (pods live and die), I deployed each as a NodePort service. Can I access all 5 services from the UI app seamlessly using their respective node ports?
Please advise: is this approach going to be reliable?
Yes, you can connect to those NodePort services seamlessly.
But remember, you may need a higher-bandwidth network card and connection (to the master nodes) if these services get too much traffic.
Also, if you have a few master nodes, you can try a dedicated master node IP and node port per service (if you have 5 master nodes, each service is accessed through one master node's IP, and so on). This is not mandatory; you can connect to each service using any masterIP:nodePort.
I highly recommend using a LoadBalancer service for this. If you have a bare-metal cluster, try MetalLB.
Edit (after Nagappa LM's comment):
If it's for QA, there is no need to worry, but if they load-test all the services simultaneously, it could be problematic.
A code change means only your k8s Deployment changes, not the Kubernetes Service; the Service is where you define the node port.
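As a minimal sketch of what one of those Services looks like (the name, labels, ports, and namespace are assumptions), created here with the official Kubernetes Python client:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

# A NodePort Service gives the UI app a stable <nodeIP>:30080 address
# even as the pods behind it come and go.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="orders-svc"),   # placeholder name
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "orders"},                    # must match the pod labels
        ports=[client.V1ServicePort(
            port=80,           # cluster-internal port
            target_port=8080,  # container port
            node_port=30080,   # must fall in the default 30000-32767 range
        )],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)

This also illustrates the last point: redeploying your code replaces the Deployment's pods, while the node port lives on this Service object and stays put.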

What are the outbound IP ranges for GCP managed Cloud Run?

I'm developing an app using GCP managed Cloud Run and MongoDB Atlas. If I allow connections from anywhere in the Atlas IP whitelist, Cloud Run works perfectly with MongoDB Atlas. However, I want to restrict connections to only the necessary IPs, but I couldn't find the outbound IPs of Cloud Run. Is there any way to know the outbound IPs?
Update (October 2020): Cloud Run has now launched a VPC egress feature that lets you configure a static IP for outbound requests through Cloud NAT. You can follow this step-by-step guide in the documentation to configure a static IP to whitelist in MongoDB Atlas.
Until Cloud Run supports Cloud NAT or Serverless VPC Access, this is unfortunately not supported.
As @Steren mentioned, you can create a SOCKS proxy by running an SSH client that routes the traffic through a GCE VM instance that has a static external IP address.
I have blogged about it here: https://ahmet.im/blog/cloud-run-static-ip/, and you can find step-by-step instructions with a working example at: https://github.com/ahmetb/cloud-run-static-outbound-ip
Cloud Run (like all scalable serverless products) does not give you dedicated IP addresses that are known to be the origin of outgoing traffic. See also: Possible to get a static IP address for Google Cloud Functions?
Cloud Run services do not get static IPs.
A solution is to send your outbound requests through a proxy that has a static IP.
For example, in Python:

import os

import requests
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    proxy = os.environ.get('PROXY')
    # requests routes both HTTP and HTTPS traffic through the proxy.
    proxy_dict = {
        "http": proxy,
        "https": proxy,
    }
    r = requests.get('http://ifconfig.me/ip', proxies=proxy_dict)
    return 'You connected from IP address: ' + r.text
With the PROXY environment variable containing the IP or URL of your proxy (see the documentation on setting environment variables).
For this proxy, you can either:
- create it yourself, for example with a Compute Engine VM running Squid and a static public IP address; this likely fits within the Compute Engine free tier, or
- use a service that offers a proxy with a static IP, for example https://www.quotaguard.com/static-ip/, which starts at $19/month.
I personally used the second solution. The service gives me a URL that includes a username and password, which I then use as a proxy with the code above.
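For illustration (every value below is a placeholder), you can verify which IP the outside world sees before pointing the whitelist at it:

import os
import requests

# Placeholder proxy URL; the username, password, host, and port come from
# your provider or your own Squid VM.
os.environ["PROXY"] = "http://username:password@proxy.example.com:9293"

proxy = os.environ["PROXY"]
r = requests.get("http://ifconfig.me/ip",
                 proxies={"http": proxy, "https": proxy})
print(r.text)  # should print the proxy's static IP, the one to whitelist in Atlas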
This feature has now been released in beta by the Cloud Run team:
https://cloud.google.com/run/docs/configuring/static-outbound-ip

How can I integrate my application with Kubernetes cluster running Docker containers?

This is more of a research question. If it does not meet the standards of SO, please let me know and I will ask elsewhere.
I am new to Kubernetes and have a few basic questions. I have read a lot of documentation online and was hoping someone could help answer them.
I am trying to create an integration between Kubernetes (user applications running inside Docker containers, to be precise) and my application, which would act as a backup for certain data in the containers.
1. My application currently runs in AWS. Would the Kubernetes cluster need to run in AWS as well, or can it run in any cloud service, or even on-prem, as long as the APIs are available?
2. Does my application need to know only the IP of the master node's API server to do POST/GET requests, and nothing else?
3. For authentication, can I use AD (my application uses AD today for a few things)? That would also give me role-based policies for each user. Or do I always have to use the Kubernetes TokenReview API for authentication?
4. Would the applications running in Kubernetes use the APIs I provide to communicate with my application?
5. Would my application use POST/GET to communicate with the Kubernetes master API server? Do I need to use kubectl for this and for #4 above?
Thanks for your help.
1. Your application needn't run on the same servers as Kubernetes. There are several ways to connect to a k8s cluster, depending on your use case: you can expose the built-in k8s API using kubectl proxy, connect directly to the k8s API on the master, or expose services via a load balancer or a node port.
2. You would only need to know the IP of the master node if you're connecting to the cluster directly through the built-in k8s API, but in most cases you should only use this API to administer your cluster internally. The preferred way of accessing k8s pods is to expose them via a load balancer, which lets you access a service on any node from a single IP. k8s also lets you access a service through a preassigned nodePort on any k8s node (except the master).
3. TokenReview is only one of the k8s auth strategies. I don't know anything about Active Directory auth, but at a glance OpenID Connect tokens seem to support it. You should review whether you need to allow users direct access to the k8s API at all; consider exposing services via a LoadBalancer instead.
4. I'm not sure what you mean by this, but if you deploy your APIs as k8s Deployments, you can expose their endpooints through Services and communicate with your external application however you like.
5. Again, the preferred way to communicate with k8s pods from external applications is through Services exposed as load balancers, not through the built-in API on the k8s master. In the case of Services, it's up to the underlying API to decide which kinds of requests it wants to accept.
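As a rough sketch of point 2 (the API server address is a placeholder, and the token/CA paths shown exist only when running inside a pod; an external application would take equivalent credentials from a kubeconfig), plain GET requests against the built-in API need no kubectl at all:

import requests

API_SERVER = "https://203.0.113.10:6443"  # placeholder master/API-server address
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

with open(f"{SA_DIR}/token") as f:
    token = f.read()

# List pods in the default namespace via the REST API.
resp = requests.get(
    f"{API_SERVER}/api/v1/namespaces/default/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=f"{SA_DIR}/ca.crt",
)
for pod in resp.json()["items"]:
    print(pod["metadata"]["name"])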

Google Cloud Platform DataFlow workers IP addresses

Is it possible to know what range of external IPs the Dataflow workers on GCP are using? The goal is to set up some kind of IP filtering on an external service, so that only our Dataflow jobs running on GCP can access the service.
The best solution would be to upgrade the external service so that you can use SSL or other mechanisms of strong authentication instead of IP filtering.
You can use the --network= option to control the GCE network that the worker VMs are assigned to. Take a look at the GCE docs on networking for details on how to set up a VPN (as Elmar's comment suggested). You could also look at setting up a single machine in the network with a static external IP and using it as a proxy for the other VMs in the network.
This is not a usage pattern we have tested, so there may be issues with latency or throughput of traffic through the proxy/VPN. You will likely need to be careful to send only your own traffic through this proxy, so that you don't accidentally hijack the traffic each worker uses to communicate with the Dataflow service.
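As a sketch of the --network option from Python (the project, bucket, network, and subnetwork names are placeholders), using Apache Beam pipeline options:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Workers launched in this network can then be routed through a NAT or
# proxy VM that owns the static external IP your filter expects.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                                    # placeholder
    region="us-central1",                                    # placeholder
    temp_location="gs://my-bucket/tmp",                      # placeholder
    network="my-vpc",                                        # GCE network for worker VMs
    subnetwork="regions/us-central1/subnetworks/my-subnet",  # placeholder
)

with beam.Pipeline(options=options) as p:
    _ = p | beam.Create(["ping"]) | beam.Map(print)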
