How to bring two Cloud Run Apps under one domain to avoid CORS - google-cloud-run

I have two apps I want to have "fully managed" by Cloud Run. One is a pure Vue.js SPA and the other is its backend server, which is connected to a MySQL database and also fetches some other API endpoints.
I have deployed both apps, but I don't know how to give the frontend app access to the backend app. They should both run on the same domain so the frontend doesn't run into CORS issues.
Current URL of the frontend app: https://myapp-xl23p3zuiq-ew.a.run.app
So I'd love to have the server accessible by: https://myapp-xl23p3zuiq-ew.a.run.app/api
Is this somewhat possible to achieve with Cloud Run?

I was having the same issue. The usual idea is to use path mapping and map / to your client and /server to your backend. After googling for a while I found this:
https://cloud.google.com/run/docs/mapping-custom-domains
Base path mapping: not supported
The term base path refers to the URL path name that is after the domain name. For example, users is the base path of example.com/users. Cloud Run only allows you to map a domain to /, not to a specific base path. So any path routing has to be handled by using a router inside the service's container or by using Firebase Hosting.
Option 1:
I ended up creating an "all in one" Docker image with nginx as a reverse proxy in front of the client (some static files) and the server (in my case a Python application powered by uWSGI).
If you are looking for inspiration, you can check out the public repository here: https://gitlab.com/psono/psono-combo
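For a rough idea of how the routing inside such an "all in one" image can look, here is a minimal nginx sketch; the static-file path, the backend port, and the /api prefix are assumptions for illustration, not taken from the repository above:

# Cloud Run sends traffic to a single port (8080 by default).
server {
    listen 8080;

    # Serve the SPA's static build for everything that is not /api
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Forward API calls to the backend process inside the same container
    location /api/ {
        proxy_pass http://127.0.0.1:5000/;   # assumed backend port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}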
Option 2:
An alternative would be to host your client on client.example.com, your server on server.example.com, and then create a third Cloud Run service running a reverse proxy under example.com.
All requests would be proxied to the client and server. Your users will only interact with example.com, so CORS won't be an issue.
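A minimal sketch of what that third proxy service's nginx config could look like, assuming a path-based split where /api goes to the server (the hostnames and the /api prefix are placeholders):

# Reverse proxy for option 2: one public domain, two upstream services.
server {
    listen 8080;
    server_name example.com;

    # API traffic goes to the server
    location /api/ {
        proxy_pass https://server.example.com/;
        proxy_set_header Host server.example.com;
        proxy_ssl_server_name on;   # send SNI so the upstream TLS handshake succeeds
    }

    # Everything else goes to the client
    location / {
        proxy_pass https://client.example.com/;
        proxy_set_header Host client.example.com;
        proxy_ssl_server_name on;
    }
}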
Option 3:
Configure CORS, so that browsers on example.com can also connect to server.example.com.
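If you go with option 3, the CORS headers can be set either in the backend framework itself or in an nginx in front of it; here is a minimal nginx-side sketch (the allowed origin and the backend address are assumptions):

# Allow the SPA served from https://example.com to call this server.
location / {
    add_header Access-Control-Allow-Origin https://example.com always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;

    # Answer preflight requests directly
    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass http://127.0.0.1:5000;   # assumed backend
}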

Currently this is not possible in Cloud Run, as already mentioned in the comments to your question.
You could check whether there is a Feature Request for this functionality on Buganizer (Google Issue Tracker); currently there seems to be none. If that is indeed the case, you can create a new Feature Request by changing the request type from Bug to Feature Request, and you will be informed as Google works it into their roadmap.
Hope this helped you.

Related

One VPS, multiple services, different projects/domains

This is my first VPS, so I am pretty new to administering my own box. I already have experience with a managed web server, registrars, DNS settings, etc. The basics. Now I'd like to take it a step further and manage my own VPS to run multiple services for different business and private projects.
So far I have a VPS from Contabo; I have updated the system, set up a new user with sudo rights, secured the root user, configured UFW, installed Nginx with server blocks for two domains, and created SSL certificates for one domain using Certbot.
Before I go on with setting up my VPS, I'd like to verify my approach for hosting multiple services for multiple domains makes sense and is a good way to go.
My goal is to host the following services on my VPS. Some of them will be used by all projects, some only by a single one:
static website hosting
dynamic website hosting with a lightweight CMS
send and receive emails
Nextcloud/Owncloud
Ghost blog
My current approach is to run all services except for Nginx and the mail server with Docker, using Nginx as a proxy in front of the Dockerized services (see the sketch below).
Is this overkill, or a valid way to go forward in order to keep the system nice and clean? Since I am new to all of this, I am unsure whether I could also run all of the services without Docker and still serve the different projects on different domains without messing up the system.
Furthermore, I'd like to make sure that access to the services and the stored data is properly separated between the different tenants (projects). And ideally, administration of the services should stay manageable.
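A minimal sketch of the kind of nginx server block this approach implies, assuming, purely as an example, that the Ghost container publishes its default port 2368 on 127.0.0.1 (the domain name is a placeholder):

# One server block per domain, each proxying to a locally published container port.
server {
    listen 80;
    server_name blog.example.org;   # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:2368;   # Ghost container published on the host
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}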

API Gateway to my Elastic Beanstalk Docker-deployed app

My backend is a simple Dockerized Node.js Express app deployed onto Elastic Beanstalk. It is exposed on port 80 and located at something like
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com
I can call my APIs on the backend
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/hello
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/postSomeDataToMe
and they work! Yay.
The URL is not very user friendly, so I was hoping to set up API Gateway to let me simply forward API requests from
api.myapp.com/apiFamily/ to mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com
so I can call api.myapp.com/apiFamily/hello or api.myapp.com/apiFamily/postMeSomeData
Unfortunately, I can't figure out (i) whether I can do this and (ii) how to actually do it.
Can anybody point me to a resource that explains clearly how to do this?
Thanks
Yes, you can do this. For this to happen you need two things:
a custom domain that you own and control, e.g. myapp.com.
a valid, public SSL certificate issued for that domain.
If you don't have them and want to stay within the AWS ecosystem, you can use Route53 to buy and manage your custom domain. For SSL you can use AWS ACM, which will provide you with a free SSL certificate for the domain.
The AWS instructions on how to set it all up are here:
Setting up custom domain names for REST APIs

Elasticsearch: Securing the connection

I am (desperately) new to Elasticsearch (7.9.0) and I currently have a cluster with two nodes running.
After a lot of effort it is performing as I would like it to.
It is running on Docker and also has nginx in front of it to route the traffic, since it is accessed directly from my website (Angular 10).
Elasticsearch is also used from my Laravel backend directly through the Docker container name, so that is secure (I guess).
My problem now is that I cannot find or understand a way to secure the HTTP access from outside Docker (e.g. the normal website).
Going via Laravel is an option, but that is too slow for my purpose.
Is there a way I can securely have HTTP access to Elasticsearch from the web?
Also, is there a way I can restrict the actions to read-only actions?
If you need more info to help out, please let me know, as I am not knowledgeable about what is important here and what is not.
Thanks
Angular is a front-end framework and runs in your user's web browser. If Angular can somehow reach your Elasticsearch instance, everyone can do so, no matter what. You can try to obscure it as much as you want, but if Elasticsearch is directly exposed, it will be reachable.
So you have to either accept this fact, or go the slow way and proxy the requests through Laravel, so it can verify that the requested information is actually available to the user performing the request.
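If you do end up exposing a limited slice of Elasticsearch through the nginx you already have, one pattern is to whitelist a single read path and deny everything else; a sketch, with the index name and the container address assumed. Note that this only restricts what can be done, it does not authenticate anyone:

# Read-only access to one index's search endpoint through the existing nginx.
location /public-index/_search {
    limit_except GET {       # GET also permits HEAD; every other method is denied
        deny all;
    }
    proxy_pass http://elasticsearch:9200;   # assumed Docker container name and port
}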

Concerns with gRPC architecture (gRPC, nginx, docker)

I'm currently trying to create a tracing tool for fun (one that supports gRPC tracing) and was confused as to whether I was thinking about this architecture properly. A tracing tool keeps track of the entire workflow/journey of a request (from the moment a user clicks a button, to when the request goes through the API gateway, between microservices, and back).
Let's say the application is a bookstore, and it is broken up into 2 microservices, maybe account and books. Let's say there is a User Interface, and when you click a button, it allows a user to favorite a book. I'm only using 2 microservices to keep this example simple.
Different parts of the fake/mock-up application:
UI ->
nginx -> I wanted to use this as an API Gateway.
microservice 1 -> (Contains data for all Users of a bookstore)
microservice 2 -> (Contains data for all the books)
So my goal is to figure out a way to trace that request. We can imagine the request goes to nginx first.
Concern #1: When the request goes to nginx, it is HTTP. Cool, but when the request is sent to the microservice, it is a grpc call (or over http2). Can nginx get an http request and then send that request over http2...? Not sure if I'm wording this correctly or not. I know nginx plus supports http2. I also know that grpc has a grpc gateway too.
Concern #2: Containerization. Do I have to containerize both microservices individually, or would I containerize the whole application in a single container? Is it simple to link nginx and Docker?
Concern #3: When tracing gRPC requests (finding out how long it takes for a request to be fulfilled), I'm considering using a middleware logger or a tracing API (OpenTracing, Jaeger, etc.). How else would I figure out how long gRPC requests take?
I was wondering if it was possible to address these concerns, if my thought process is correct, and if this architecture is feasible.
Most solutions in the industry are implemented on top of a container orchestration solution (Kubernetes, Docker Swarm, etc).
It is usually not a good idea to "containerize" and manage the reverse proxy yourself.
The reverse proxy should be aware of the status of all the containers (by hooking into the orchestrator) and dynamically update its configuration when a container is created, crashes, or is relocated (because a machine goes out of service).
Kubernetes handles gRPC using mesh networks; take a look at Kubernetes service meshes.
If you decide to use Traefik and Docker Swarm, check out Traefik's h2c support.
In conclusion, consider more modern alternatives to Nginx when you want to load balance gRPC.
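On concern #1 specifically: open-source nginx (since 1.13.10) can also receive and proxy gRPC traffic with grpc_pass, so a plain nginx gateway in front of gRPC services is possible; a minimal sketch, with the service names and ports assumed:

# nginx as a gRPC-aware gateway; gRPC request paths look like /package.Service/Method.
server {
    listen 80 http2;   # gRPC runs over HTTP/2

    location /account.AccountService/ {
        grpc_pass grpc://accounts:50051;   # assumed container name and port
    }

    location /books.BookService/ {
        grpc_pass grpc://books:50052;
    }
}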

Service Fabric Reverse Proxy

I am facing issues getting the Reverse Proxy right. I keep getting "504 Gateway Timeout" when I am using the Reverse Proxy.
I have followed Microsoft's example to set up the cluster.
I think the cluster set-up is correct; the only difference is that I've specified port 80 for the proxy and did not use SSL for the test environment.
I am trying it out on the test environment at the moment, but the production environment runs the same services, just without the reverse proxy, and it is fine. Also, I have exposed an endpoint for one of the services on the test environment, tried calling it without the reverse proxy, and it worked.
I've read that this could be caused by containers, but I am using Windows Server 2012 R2 Datacenter, which, as far as I am aware, does not use Windows NAT containers. Also, I've read that it could be caused by a 404 error (case #2 in the example doc), where it tries to reload and simply times out trying.
These are some of the summed up details that might be important to know
Service Fabric version: 5.5.219.0
OS: Windows
SKU: 2012-R2-Datacenter
Services are using WebListener
All ports are allowed
1 NodeType (stateless)
Services created with ASP.NET Core Web API template
VS 2015 Enterprise
Service endpoints are configured as follows:
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" />
All services and cluster are healthy.
I have found the cause of this timeout. It was just me not getting the required change in the request URL right.
All of my services contain MVC controllers that are named after the service name. So whenever I called them without the reverse proxy, my request URL would be something like http://mycluster.westeurope.cloudapp.azure.com:8280/Notifications/TestMethod
and this was enough, as the controller could be found via the unique port.
The way I had been trying to call it with the reverse proxy was
http://mycluster.westeurope.cloudapp.azure.com/SomeName.API.Services/Notifications/TestMethod
That is not enough, as 'Notifications' is parsed as the name of the service and not the controller. So I was calling the service and an action without specifying a controller.
The correct way to call it is to include the service name twice, since I've named my controllers the same as the service (I might change that).
Here is the correct URL I have to use:
http://mycluster.westeurope.cloudapp.azure.com/SomeName.API.Services/Notifications/Notifications/TestMethod
I figured it out by looking at the reverse proxy code sample.
