How to set up nginx in front of Node.js in Docker for Cloud Run?

I need to set up an nginx reverse proxy in front of a Node.js app that needs to be deployed on Google Cloud Run.
Use Cases
- Serve assets gzipped via nginx (I don't want to burden Node with gzip compression)
- Block small DDoS attacks
I didn't find any tutorial for setting up nginx and Node on Cloud Run.
I also need to install PM2 for Node.
How do I do this setup in Docker, and how can I configure nginx before deploying?
Thanks in advance

I need to set up an nginx reverse proxy in front of a Node.js app that needs to be deployed on Google Cloud Run.
Cloud Run already provides a reverse proxy: the Cloud Run Proxy. This is the service that load balances, provides custom domains, authentication, etc. for Cloud Run. However, there is nothing in the design of Cloud Run to prevent you from using Nginx as a reverse proxy inside your container. There is also nothing to prevent you from using Nginx as a separate container front-end to another Cloud Run service. Note that in the last case you will be paying twice as much, as you will need two Cloud Run services: one for the Nginx front end and another for the node application.
Use cases: serve assets gzipped via nginx (I don't want to burden Node with gzip compression); block small DDoS attacks.
You can perform compression either in your Node app or in Nginx. The result is the same, and so is the performance impact: Nginx does not provide any overhead savings, though it may be more convenient in some cases.
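If you do go the Nginx route, gzip is only a few directives; the values below are illustrative assumptions to tune for your assets (on the Node side, the common Express compression middleware achieves the equivalent):

```nginx
# Illustrative gzip settings (http or server context); tune types and
# thresholds for your own assets.
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_types text/css application/javascript application/json image/svg+xml;
```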
Regarding your comment about blocking small DDoS attacks: Cloud Run autoscales, which means each Cloud Run instance has only limited exposure to a DoS. As the DDoS traffic increases, Cloud Run will launch more instances of your container. Without a prior request from you, Cloud Run stops scaling at 1,000 instances. Nginx will not provide any benefit that I can think of to mitigate a DDoS attack.
I didn't find any tutorial for setting up nginx and Node on Cloud Run.
I am not aware of a specific document covering Nginx and Cloud Run. However, you do not need one; any document covering Nginx and Docker will be fine. If you want to run Nginx in the same container as your node application, you will need to write a custom script to launch both Nginx and Node.
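A minimal sketch of such a launcher script, assuming your Node app listens on 127.0.0.1:8081 and nginx listens on $PORT (8080) and proxies to it; the paths and ports are placeholders:

```sh
#!/bin/sh
# Hypothetical start script: run the Node app in the background, then run
# nginx in the foreground so the container keeps a long-lived main process.
node /app/server.js &
exec nginx -g 'daemon off;'
```

A production setup would also watch for either process exiting and terminate the container so Cloud Run can restart it.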
Also, I need to install PM2 for Node.
Not possible. PM2 has a user interface and GUI, and Cloud Run only exposes $PORT over HTTP from a Cloud Run instance.
How do I do this setup in Docker? Also, how can I configure nginx before deploying?
There are numerous tutorials on the Internet for setting up Nginx and Docker; two examples are linked below.
How to run NGINX as a Docker container
Deploying NGINX and NGINX Plus on Docker
I have answered each of your questions. Now some advice:
Using Nginx with Cloud Run does not make any sense with a Node.js application. Just run your node application and let Cloud Run Proxy do its job.
Compression is CPU intensive. Cloud Run is designed for HTTP style microservices that are small, fast, and compact. You will pay for increased CPU time. If you have content that needs to be compressed, compress it first and serve the content compressed. There are cases where compression in Cloud Run is useful and/or correct, but look at your design and optimize where possible. Static content should be served by Cloud Storage, for example.
Cloud Run can handle a Node.js application easily with excellent performance and scalability provided that you follow its design criteria and purpose.
Key factors to keep in mind:
- Low cost: you only pay for requests, and overlapping requests cost the same as one request.
- Stateless: containers are shut down when not needed, which means you must design for restarts. Store state elsewhere, such as a database.
- Only serves traffic on port $PORT, which today is 8080 (see the minimal server sketch after this list).
- Public traffic can be either HTTP or HTTPS. Traffic from the Cloud Run Proxy to the container is HTTP.
- Custom domain names: Cloud Run makes HTTPS for URLs very easy.
UPDATE: Only HTTPS is now supported for the public endpoint (Public Traffic).
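For reference, a minimal sketch of a Node.js service that honors the $PORT contract mentioned above (the file name and response are illustrative):

```js
// server.js (name illustrative): listen on the port Cloud Run injects
// via the PORT environment variable (currently 8080).
const http = require('http');

const port = process.env.PORT || 8080;

http
  .createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from Cloud Run\n');
  })
  .listen(port, () => console.log(`Listening on ${port}`));
```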

I think you should consider using a different approach.
Running multiple processes in a single container is not a best practice. The more common implementation of a proxy as you describe is to use two containers (the proxy is often called a sidecar), but this is not possible with Cloud Run.
Google App Engine may be more suitable.
App Engine Flexible permits deployments of containers that are proxied (behind the scenes) by Nginx. You may serve static content with Flexible and can incorporate a CDN. App Engine Standard addresses your needs too.
https://cloud.google.com/appengine/docs/flexible/nodejs/serving-static-files
https://cloud.google.com/appengine/docs/standard/nodejs/runtime
Like Cloud Run, App Engine is serverless, but it provides more flexibility and is a more established service. App Engine integrates with more (all?) GCP services too, whereas Cloud Run is limited to a subset.
Alternatively, you may consider Kubernetes (Engine). This provides almost limitless flexibility but requires more ops. As you're likely aware, there's a Cloud Run implementation that runs atop Kubernetes, Istio and Knative.
Cloud Run is a compelling service, but it is only appropriate if you can meet its (currently) constrained requirements.

I have good news for you. I have written a blog post about exactly what you need, with sample code.
This example puts NGINX in front (listening on port 8080 for Cloud Run) while selectively proxying traffic to another service running in the same container (on port 8081).
Read the blog post: https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/
Source code: https://github.com/ahmetb/multi-process-container-lazy-solution
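Not the blog post's exact configuration, but a minimal sketch of the idea: nginx listens on the Cloud Run port and forwards requests to the app process on 8081:

```nginx
# Sketch only: front nginx on the Cloud Run port, app assumed on 127.0.0.1:8081.
server {
    listen 8080;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```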

Google Cloud Compute Systems
To understand GCP computing, first compare the compute options: [diagram of GCP compute services not reproduced here]
For your case, I recommend using App Engine Flex to deploy your application. It supports Docker containers, Node.js, and more. To learn how to deploy Node.js to GAE Flex, please visit this page: https://cloud.google.com/appengine/docs/flexible/nodejs/quickstart
You can install third-party libraries if you want. Moreover, GCP supports global/internal load balancers, which you can apply to your GAE services.
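As a minimal sketch (see the quickstart linked above for the full walkthrough), the flexible-environment deployment descriptor for a Node.js app is just an app.yaml like this:

```yaml
# Minimal app.yaml for the Node.js flexible environment.
# Deploy with: gcloud app deploy
runtime: nodejs
env: flex
```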

Related

Which GCP service to choose for hosting a Docker image stored in Container Registry?

I have a SPA dockerized with a single Dockerfile (the server side is Kotlin with Spring Boot, the front end is TypeScript with React) and am trying to host that Docker image on GCP as a web app.
At first I thought Cloud Run could be appropriate, but it seems that Cloud Run is a serverless service and not meant for hosting a web app. I understand there are several options: App Engine (flexible environment), Compute Engine, and Kubernetes Engine.
Considering the story above, can the GCP community help me decide which one to choose for these purposes:
- Hosting a Docker image stored in Container Registry
- The app should be publicly deployed, i.e. everyone can access it via a browser like any other website
- The deployed Docker image needs to connect to Cloud SQL to persist its data
- Planning to use Cloud Build for the CI/CD environment
Any help would be greatly appreciated. Thank you!
IMO, you should avoid what you propose (Kubernetes, Compute Engine, and App Engine Flex) and (re)consider Cloud Run and App Engine Standard.
App Engine Standard doesn't accept a container, but you can simply deploy your code and let App Engine Standard build and deploy its own container (with your code inside).
My preference is Cloud Run; it's perfectly suited to web apps, as long as:
- You only perform processing on request (no background processes, no long-running operations of more than 60 minutes)
- You don't need to store data locally (store data in external services instead, such as databases or storage)
I also recommend splitting your front end and your back end:
- Deploy your front end on App Engine Standard or on Cloud Storage
- Deploy your back end on Cloud Run (and thus in a container)
- Put an HTTPS load balancer in front of both to remove CORS issues and to have only one URL to expose (behind your own domain name)
The main advantages are:
- If you serve your files from Cloud Storage, you can leverage caching and thus reduce cost and latency; the same goes for using the CDN capability of the load balancer (see the gsutil sketch after this list). If you host your front end on Cloud Run or any other compute system, you will spend CPU just to serve static files, and you will pay for that CPU/memory needlessly.
- Separating the front end and the back end lets you evolve both parts independently, redeploying only the part that has changed rather than the whole application.
- The proposed pattern is an enterprise-grade pattern. Starting from about $16 per month, you can scale high and globally. You can also activate a WAF on the load balancer to increase security and attack prevention.
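On the caching point, one way to set cache metadata on the static objects in Cloud Storage is gsutil's setmeta; the bucket, path, and max-age below are placeholders:

```sh
# Hypothetical bucket and path: apply a one-hour public cache policy to the
# static assets served from Cloud Storage.
gsutil -m setmeta -h "Cache-Control:public, max-age=3600" "gs://my-frontend-bucket/static/**"
```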
So now, if you agree with that, what are your next questions?

How to handle the concurrency of file uploads in my Kubernetes cluster?

I am designing the architecture of my software instance provisioning system. For this I will use Kubernetes, in such a way that each client will have their own namespace with their own pods. However, the Kubernetes cluster will have a common entry point to all the instances, which will be an nginx server.
My question is as follows: since the software allows file uploads, if several of my clients decide to upload a file at the same time, do I run the risk that the nginx server becomes overloaded and that nobody can access their instance?
Are there any good practices for designing my architecture?
You could use the nginx ingress controller and deploy it with multiple replicas so that it can be scaled up to handle the load. Then your nginx is part of the cluster (rather than a separate server) and can take advantage of the Kubernetes cluster's capacity for horizontal scaling.
The exception is if you are running on-prem with NodePort/HostPort; then you might want to run your nginx as an external load balancer, since in that case you don't get one from a cloud provider, and you can configure rate limiting and throttling directly in nginx. In the cloud, you can also do this with annotations on the nginx ingress (see the sketch below).
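A hedged illustration of those annotations with the ingress-nginx controller; the host, service name, and limit values are placeholders to tune for your traffic:

```yaml
# Illustrative Ingress: per-client request and connection limits enforced
# by the ingress-nginx controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-connections: "20"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```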

How to set up nginx as a reverse proxy for a REST microservice in Kubernetes?

I have a REST microservice and would like to set up nginx as a reverse proxy for it. I am a little confused about which approach to follow:
1. Run nginx in each pod where application code is running.
2. Run nginx in separate pods and redirect HTTP requests to application code running in separate pods.
Can someone explain which one is better?
In my opinion, running nginx in a separate pod is the better option because that way you can scale the application up and down separately from the proxy. Usually, we use one container for the proxy and a few for the API.
Option 1 will work, but it looks like an inefficient way to do what you have described. Nginx is a highly capable server with a small footprint and modest runtime resources, and it can easily serve multiple applications from a separate pod.
So I think option 2 is the better option.
Running nginx separately has the following advantages:
- Efficiency (saving resources and money), because a single nginx can serve multiple applications
- The possibility to use other nginx capabilities in the future (e.g. load balancing)
- Maintainability: only a single pod to maintain, monitor, and troubleshoot (e.g. upgrade rollouts, monitoring), and more
I have had a similar requirement. I used a single nginx in a separate pod to serve multiple (250) application deployments running in different pods. I used the proxy_pass directive to get the job done (sketched below).
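A minimal sketch of that layout (the service names, namespace, and paths are placeholders): one nginx server block routing to several Kubernetes Services via proxy_pass:

```nginx
# One nginx fronting multiple application Services by path.
server {
    listen 80;

    location /app1/ {
        proxy_pass http://app1-service.default.svc.cluster.local:8080/;
    }

    location /app2/ {
        proxy_pass http://app2-service.default.svc.cluster.local:8080/;
    }
}
```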

Advantages of dockerizing a Java Spring Boot application?

We are working with a dockerized Kafka environment. I would like to know the best practices for deploying Kafka connectors and Kafka Streams applications in such a scenario. Currently we deploy each connector and stream as a Spring Boot application started as a systemd service (via systemctl). I do not see a significant advantage in dockerizing each Kafka connector and stream. Please share your insights on this.
To me the Docker vs non-Docker thing comes down to "what does your operations team or organization support?"
Dockerized applications have an advantage in that they all look and act the same: you docker run a Java app the same way as you docker run a Ruby app. Whereas with an approach of running programs with systemd, there's not usually a common abstraction layer around "how do I run this thing?"
Dockerized applications may also abstract some small operational details, like port management, i.e. making sure all your apps' management.ports don't clash with each other. An application in a Docker container will run on one port inside the container, and you can expose that port as some other number outside (either random, or one of your choosing).
Depending on the infrastructure support, a normal Docker scheduler may auto-scale a service when that service reaches some capacity. However, in Kafka Streams applications the concurrency is limited by the number of partitions in the Kafka topics, so scaling up will just mean some consumers in your consumer groups go idle (if there are more consumers than partitions).
But it also adds complications: if you use RocksDB as your local store, you'll likely want to persist that outside the (disposable, and maybe read-only!) container. So you'll need to figure out how to do volume persistence, operationally and organizationally. With plain ol' JARs under systemd, you always have the hard drive, and if the server crashes it will either restart (physical machine) or hopefully be restored from some instance block storage.
By this I mean: Kafka Streams apps are not stateless web apps serving HTTP traffic, where auto-scaling will always give you some more power. The people making these decisions at an organization or operations level may not fully know this. Then again, if everyone writes Docker stuff then the organization/operations team "just" has some Docker scheduler clusters (like a Kubernetes cluster or an Amazon ECS cluster) to manage, and doesn't have to manage VMs as directly anymore.
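To make the volume-persistence point concrete, a hypothetical example (the image name and paths are placeholders, and it assumes the application's Kafka Streams state.dir is configured as /var/lib/kafka-streams):

```sh
# Mount a host directory over the Kafka Streams state directory so the
# RocksDB state survives container restarts.
docker run -d \
  --name my-kstreams-app \
  -v /data/kstreams-state:/var/lib/kafka-streams \
  my-kstreams-app:latest
```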
Dockerizing plus clustering with Kubernetes provides many benefits, such as auto-healing and automatic horizontal scaling.
Auto-healing: if the Spring application crashes, Kubernetes will automatically run another instance and ensure the required number of containers is always up.
Automatic horizontal scaling: if you get a burst of messages, you can tune Spring applications to automatically scale up or down using an HPA, which can also use custom metrics (see the sketch below).
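A minimal HPA sketch (the Deployment name and thresholds are placeholders; swap the CPU metric for a custom one if your scaling signal is, say, consumer lag):

```yaml
# HorizontalPodAutoscaler scaling a Spring/Kafka consumer Deployment on CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stream-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stream-app
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Keep in mind the partition-count ceiling mentioned in the previous answer: setting maxReplicas beyond the topic's partition count only produces idle consumers.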

Container delivery on Amazon ECS

I’m using Amazon ECS to auto deploy my containers on uat/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API and nginx in the same container?
And do the same thing for the other front-end clients?
Or do I have to write one big task definition to bring together all my containers (db, nginx, php, api, clients), which would mean redeploying my whole infrastructure on each push to uat/prod? :(
I'm very confused.
I would avoid including too much in a single container. Try and distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are the harder this is to get right.
Depending on your requirements, I would look into using an ELB instead of nginx; you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
Do not use ECS - it's too crude. I was using it as a platform for our staging/production environments and had odd problems during deployments - sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable, and predictable option: the Docker Cloud service. It's a newer tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a node. Select the Amazon EC2 instance type and the storage, security group, and other parameters. The new instance will have Docker installed along with a management container that handles messages from Docker Cloud (deploy, destroy, and others).
Create a Stackfile, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options such as the deployment strategy, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1.
Define ELB configurations in AWS for your new instances.
P.S. I'm not a member of the Docker team, and I like other AWS services :).
Here are my two cents on the topic. The question is not really specific to ECS; it applies to anybody deploying their apps on Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link between the nginx and the API containers. This allows the nginx process to talk to the API container without the API container exposing its ports to the host (see the task-definition sketch below).
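A trimmed, hypothetical task-definition fragment showing the idea (the image names and memory values are placeholders): both containers run in the same task, and the link lets nginx reach the API by the hostname api without the API publishing a host port. Note that links only apply with the bridge network mode on the EC2 launch type.

```json
{
  "family": "api-with-nginx",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "my-registry/api:latest",
      "memory": 256,
      "essential": true
    },
    {
      "name": "nginx",
      "image": "nginx:stable",
      "memory": 128,
      "essential": true,
      "links": ["api"],
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}
```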
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container is running at all times and dynamically restart it if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes separately. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you lose the advantage of auto-healing for the other process.
Also, moving from nginx to ELB is not a straightforward swap; you may have redirects and other things configured in nginx that are not available on ELB (as of this writing).
If you also need the ELB, there is no harm in forwarding the requests from the ELB to the nginx port.
