Which GCP service should I choose for hosting a Docker image stored in Container Registry?

I have a SPA dockerized with a single Dockerfile (the server side is Kotlin with Spring Boot, the front end is TypeScript with React) and I am trying to host that Docker image on GCP as a web app.
At first I thought Cloud Run could be appropriate, but it seems that Cloud Run is a serverless service and not meant for hosting a web app. I understand there are several options: App Engine (flexible environment), Compute Engine and Kubernetes Engine.
Given the situation above, can the community help me decide which one to choose for these purposes:
Hosting a Docker image stored in Container Registry
The app should be publicly deployed, i.e. everyone can access it via a browser like any other website
The deployed Docker image needs to connect to Cloud SQL to persist its data
Planning to use Cloud Build for the CI/CD environment
Any help would be very appreciated. Thank you!

IMO, you should avoid what you propose (Kubernetes, Compute Engine and App Engine Flex) and (re)consider Cloud Run and App Engine Standard.
App Engine Standard doesn't accept a custom container, but you can simply deploy your code and let App Engine Standard build and deploy its own container (with your code inside).
My preference is Cloud Run; it's well suited to a web app, as long as:
You only perform processing in response to requests (no background processes, no operations running longer than 60 minutes)
You don't store data locally (store it in external services instead, such as databases or object storage)
I also recommend splitting your front end and your backend:
Deploy your front end on App Engine Standard or on Cloud Storage
Deploy your backend in Cloud Run (and thus in a container)
Put an HTTPS load balancer in front of both to remove CORS issues and to expose only one URL (behind your own domain name)
The main advantages are:
If you serve your files from Cloud Storage you can leverage caching and thus reduce cost and latency. The same applies if you use the CDN capability of the load balancer. If you host your front end on Cloud Run or any other compute system, you will use CPU just to serve static files, and you will pay for that CPU/memory -> wasteful
Separating the frontend and the backend lets you evolve each part independently without redeploying the whole application, only the part that has changed.
The proposed pattern is an enterprise-grade pattern. Starting from $16 per month, you can scale high and globally. You can also activate a WAF on the load balancer to increase security and attack prevention.
So now, if you agree with that, what are your next questions?
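As a rough sketch of how the Cloud Build part of this could look, assuming a backend image named gcr.io/$PROJECT_ID/backend, the us-central1 region and a Cloud SQL connection name my-project:us-central1:app-db (all illustrative names, not taken from the question):

```yaml
# cloudbuild.yaml (sketch): build the backend image, push it, deploy to Cloud Run
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/backend', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/backend']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - backend
      - --image=gcr.io/$PROJECT_ID/backend
      - --region=us-central1
      - --allow-unauthenticated
      # Attach the Cloud SQL instance so the Spring Boot app can reach it
      - --add-cloudsql-instances=my-project:us-central1:app-db
images: ['gcr.io/$PROJECT_ID/backend']
```

The static React build can then be synced to a bucket (for example with `gsutil rsync -r build gs://my-frontend-bucket`, where the bucket name is again a placeholder) and attached to the HTTPS load balancer as a backend bucket.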

Related

How should I host a .Net Core Console application that needs to run 24/7?

My application is written in .NET Core as a console app. It consumes a RabbitMQ queue, listens on SignalR sockets, calls third-party APIs and publishes to RabbitMQ queues. It needs to run 24/7.
This is all working great in my local environment, but now that I am ready to deploy to a web server, I am trying to work out how best to host this application. I am leaning towards deploying into a Docker container, but I am unsure if this is advisable for a 24/7 application.
Are containers designed for short-lived workers only, and will they be costly to leave running all the time?
Can I put my container on my web server alongside my Web APIs etc. and host it on the same Windows EC2 box, perhaps to save hosting costs?
How would others approach the deployment of this .Net Core application onto a web hosting environment?
Does your application maintain any state? You can have a long-lived application, but you'll want to handle state if you maintain it. You might be able to use a Compose file to handle everything like volumes, networking, and restart policies.
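To illustrate that Compose suggestion, here is a minimal sketch (service names, the environment variable and the volume are invented for illustration) that keeps a long-running worker and its queue up with restart policies and persistent storage:

```yaml
version: "3.8"
services:
  worker:
    build: .                    # the .NET Core console app's Dockerfile
    restart: unless-stopped     # bring the 24/7 worker back up if it crashes
    environment:
      RABBITMQ_HOST: rabbitmq
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management
    restart: unless-stopped
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq   # persist queue state across restarts
volumes:
  rabbitmq-data:
```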

How to set up nginx in front of Node in Docker for Cloud Run?

I need to set up an nginx reverse proxy in front of a Node.js app that needs to be deployed to Google Cloud Run.
Use Cases
- Need to serve assets gzipped via nginx (I don't want to burden Node with gzip compression)
- To block small DDoS attacks
I didn't find any tutorial on setting up nginx and Node in Cloud Run.
Also, I need to install PM2 for Node.
How do I do this setup in Docker? Also, how can I configure nginx before deploying?
Thanks in advance
I need to set up an nginx reverse proxy in front of a Node.js app that needs to be deployed to Google Cloud Run.
Cloud Run already provides a reverse proxy - Cloud Run Proxy. This is the service that load balances, provides custom domains, authentication, etc. for Cloud Run. However, there is nothing in the design of Cloud Run to prevent you from using Nginx as a reverse proxy inside your container, nor from using Nginx as a separate container front-end to another Cloud Run service. Note that in the latter case you will be paying twice as much, as you will need two Cloud Run services: one for the Nginx service URL and another for the Node application.
Use Cases - Need to serve assets gzipped via nginx (I don't want to burden Node with gzip compression) - To block small DDoS attacks
You can either perform compression in your node app or in Nginx. The result is the same. The performance impact is the same. Nginx does not provide any overhead savings. Nginx may be more convenient in some cases.
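If you do choose nginx for compression anyway, the relevant configuration is small; an illustrative fragment (the type list and threshold are just examples) looks like:

```nginx
# Enable gzip for text-like assets; tune the list and threshold to taste
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;
```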
Regarding your comment about blocking small DDoS attacks: Cloud Run autoscales, which means each Cloud Run instance will have only limited exposure to a DoS. As the DDoS traffic increases, Cloud Run will launch more instances of your container. Without a prior request from you, Cloud Run will stop scaling at 1,000 instances. Nginx will not provide any benefit that I can think of for mitigating a DDoS attack.
I didn't find any tutorial on setting up nginx and Node in Cloud Run.
I am not aware of a specific document covering Nginx and Cloud Run. However, you do not need one. Any document covering Nginx and Docker will be fine. If you want to run Nginx in the same container as your node application you will need to write a custom script to launch both Nginx and Node.
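Such a launcher could be as small as the following sketch (the file path and the internal port split are assumptions, not taken from the question):

```sh
#!/bin/sh
# Start the Node app on an internal port in the background, then run nginx
# in the foreground as the container's main process (nginx listens on $PORT
# and proxies to the Node process).
node /app/server.js &
exec nginx -g 'daemon off;'
```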
Also, I need to install PM2 for Node.
Not possible. PM2 has a user interface (GUI), and Cloud Run only exposes $PORT over HTTP from a Cloud Run instance.
How do I do this setup in Docker? Also, how can I configure nginx before deploying?
There are numerous tutorials on the Internet for setting up Nginx and Docker; two examples are linked below.
How to run NGINX as a Docker container
Deploying NGINX and NGINX Plus on Docker
I have answered each of your questions. Now some advice:
Using Nginx with Cloud Run does not make any sense with a Node.js application. Just run your node application and let Cloud Run Proxy do its job.
Compression is CPU intensive. Cloud Run is designed for HTTP style microservices that are small, fast, and compact. You will pay for increased CPU time. If you have content that needs to be compressed, compress it first and serve the content compressed. There are cases where compression in Cloud Run is useful and/or correct, but look at your design and optimize where possible. Static content should be served by Cloud Storage, for example.
Cloud Run can handle a Node.js application easily with excellent performance and scalability provided that you follow its design criteria and purpose.
Key factors to keep in mind:
Low cost, you only pay for requests. Overlapping requests have the same cost as one request.
Stateless. Containers are shut down when not needed which means you must design for restarts. Store state elsewhere such as a database.
Only serves traffic on port $PORT, which today is 8080.
Public traffic can be either HTTP or HTTPS. Traffic from the Cloud Run Proxy to the container is HTTP.
Custom domain names. Cloud Run makes HTTPS for URLs very easy.
UPDATE: Only HTTPS is now supported for the public endpoint (Public Traffic).
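To make those factors concrete, a minimal Dockerfile for a plain Node.js service on Cloud Run could look like this (file names are placeholders; the application itself must listen on the port Cloud Run injects via $PORT):

```dockerfile
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Cloud Run sets PORT at runtime; 8080 is the default the app should listen on
ENV PORT=8080
CMD ["node", "server.js"]
```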
I think you should consider using a different approach.
Running multiple processes in a single container is not a best practice. The more common implementation of a proxy as you describe is to use 2 containers (the proxy is often called the sidecar) but this is not possible with Cloud Run.
Google App Engine may be more suitable.
App Engine Flexible permits deployments of containers that are proxied (behind the scenes) by Nginx. You may use static content with Flexible and can incorporate a CDN. App Engine Standard addresses your needs too.
https://cloud.google.com/appengine/docs/flexible/nodejs/serving-static-files
https://cloud.google.com/appengine/docs/standard/nodejs/runtime
Like Cloud Run, App Engine is serverless but provides more flexibility and is a more established service. App Engine integrates with more (all?) GCP services too whereas Cloud Run is limited to a subset.
Alternatively, you may consider Kubernetes (Engine). This provides almost limitless flexibility but requires more ops. As you're likely aware, there's a Cloud Run implementation that runs atop Kubernetes, Istio and Knative.
Cloud Run is a compelling service but it is only appropriate if you can meet its (currently) constrained requirements.
I have good news for you. I have written a blog post about exactly what you need, with sample code.
This example puts NGINX in the front (port 8080 on Cloud Run) while proxying the traffic selectively to another service running in the same container (on port 8081).
Read the blog post: https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/
Source code: https://github.com/ahmetb/multi-process-container-lazy-solution
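The wiring that post describes boils down to an nginx server block along these lines (an illustrative fragment, not code taken from the linked post):

```nginx
server {
    listen 8080;                           # the port Cloud Run routes traffic to
    location / {
        proxy_pass http://127.0.0.1:8081;  # the app process in the same container
        proxy_set_header Host $host;
    }
}
```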
Google Cloud Compute Systems
To understand GCP computing, see this overview first: [image: overview of GCP compute options]
For your case, I totally recommend using App Engine Flex to deploy your application. It supports Docker containers, Node.js, and more. To understand how to deploy Node.js to GAE Flex, please visit this page: https://cloud.google.com/appengine/docs/flexible/nodejs/quickstart
You can install third-party libraries if you want. Moreover, GCP supports global/internal load balancers, which you can apply to your GAE services.
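For a containerized app on App Engine Flexible, the deployment descriptor is tiny; a minimal sketch (assuming your own Dockerfile sits in the project root) is:

```yaml
# app.yaml for App Engine Flexible with a custom (Docker) runtime
runtime: custom
env: flex
```

Then `gcloud app deploy` builds the image from your Dockerfile and deploys it.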

Run a web page similar to the Kubernetes dashboard

I want to run a web page similar to the Kubernetes dashboard. The web page takes input from the user and generates a small file, but I want the web page to be loaded without using any server. Kubernetes deploys a pod and brings up the web page; I want to do the same. If Kubernetes is also using a server, how is it using it (is it downloading it directly with the OS in the pod, or how is Kubernetes doing it)?
Overview: I want to know how the Kubernetes dashboard gets deployed. Is it using a server? If so, how does the server get installed in the Kubernetes pod? If not, how does it bring up the UI?
Actually, Kubernetes plays the role of an orchestrator: it provides the means for building communication channels between containers in the cluster and uses Docker by default as the container runtime.
Containers represent the run-time environment for images, while images consist of an OS layer and application binaries; a good explanation can be found here. To build your own image you might consider two approaches: create an image from an existing one on Docker Hub, or compose an image from a Dockerfile. To store the customized image, you can push it to a Docker Hub repository or maintain a private, isolated repository by deploying a Registry server.
When your image is ready and you plan to run the application in a Kubernetes cluster, that's a good time to create your first microservice. Although there are tons of materials about Kubernetes clusters and their run-time architecture, I will focus on the application deployment lifecycle.
A Deployment is the main mechanism that defines how Pods should be created within a cluster and provides specific configuration for the application's run-time workflow.
A Service describes how a particular Pod communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
In the typical Kubernetes Dashboard scenario, the method used is kubectl proxy, which exposes the application via a proxy gateway between the host and the Kubernetes API; this is more for testing purposes and is not secure, compared with the NodePort service type, which offers a more convenient way to make an application accessible outside the cluster, as described in this Stack thread.
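As an illustration of those two objects (names, image and ports are placeholders invented here), a small web UI could be exposed outside the cluster with a Deployment plus a NodePort Service like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-web-ui
  template:
    metadata:
      labels:
        app: my-web-ui
    spec:
      containers:
        - name: web
          image: example/my-web-ui:latest   # image that serves the page
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-ui
spec:
  type: NodePort            # reachable on <node-ip>:30080 from outside the cluster
  selector:
    app: my-web-ui
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```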
I encourage you to read more in the official Kubernetes documentation.

How to deploy a Docker app using docker-compose.yml in Cloud Foundry

I have a docker-compose.yml file which has environment variables and certificates. I would like to deploy these to the Cloud Foundry dev version.
I want to deploy the Microgateway on Cloud Foundry; the link for the Microgateway is below:
https://github.com/CAAPIM/Microgateway
In the cloud-native world, you instantiate services in your foundation beforehand. You can use prebuilt services (e.g. the auto-scaler) available from the marketplace.
If the service you want is not available, you can install a tile (e.g. Redis, MySQL, RabbitMQ), which will add services to the marketplace. Lots of vendors provide tiles that can be installed on PCF (check network.pivotal.io for the full list).
If you have services that are outside of Cloud Foundry (e.g. Oracle, Mongo, or MS SQL Server) and you wish to inject them into your Cloud Foundry foundation, you can do that by creating User Provided Services (CUPS).
Once you have a service, you have to create a service instance. Think of it as provisioning the service for you. After you have provisioned it, i.e. created a service instance, you can bind it to one or more apps.
A service instance is scoped to an org and a space. All apps within that org/space can be bound to the service instance.
You deploy your app individually, by itself, to Cloud Foundry (jar, war, zip). You then bind any needed services to your app (e.g. db, scaling, caching, etc.).
Use a manifest file to do all these steps in one deployment.
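A sketch of that flow, with invented service, plan and app names purely for illustration:

```sh
# Provision a service instance from the marketplace, then push the app
# with a manifest that binds the instance to it.
cf create-service p.mysql db-small my-mysql
cf push -f manifest.yml
```

```yaml
# manifest.yml
applications:
  - name: my-api
    path: target/my-api.jar
    memory: 512M
    services:
      - my-mysql        # the service instance created above
```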
PCF 2.0 is introducing PKS (Pivotal Container Service). It is an implementation of Kubo within PCF. It is still not GA.
Kubo, Kubernetes, and PKS allow you to deploy your containerized applications.
I have played with MiniKube and little bit of Kubo. Still getting my hands wet on PKS.
Hope this helps!

How to develop applications with Docker which are in separate repositories

The stack consists of a few applications/microservices that need to be connected to run locally in development, and each is within its own repository.
E.g. frontend, db, api
If each app has its own Dockerfile and a docker-compose.yml that lists the required services to run that one application, what practices are recommended for development of the whole stack?
This is exactly what we do at work.
Front end: Angular running on Apache (prod) or Node (dev).
All bog-standard requests are handled normally; all requests to the api container have /imanapicall in the URL and are proxied to the api container based on the fact that the URL contains /imanapicall.
This is standard practice. The FE container is stateless.
We have Node running the API; it is stateless and simply requests data from the db and sends it back to the front end.
We have Node running Restify, but Express is more popular.
Most people use MongoDB, but we use some weird db stuff.
The important thing is to expose ports between containers and make sure firewalls aren't being a pain. For dev purposes you probably want to expose ports for all containers to the host as well, so you can debug more easily by, for example, hitting an Express endpoint directly to make sure it's returning what you want.
PS: statelessness is important to support scaling, so I can introduce a load balancer and not worry which server it hits. Only the db container holds state.
FURTHER TO YOUR COMMENT ...
Each tier (db, api, etc) has its own git repo.
Each git repo has an automated Jenkins job that does a build (on a push to the repo) and on success pushes a new docker image.
We then have another git repo that is responsible for pulling it all together. This repo basically consists of a Docker Compose file that pulls all the relevant images and runs them.
Job done.
This gives a simple overview. If you have any more detailed questions, feel free to ask.
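A sketch of that umbrella Compose file (registry, image names and ports are placeholders, not taken from the answer):

```yaml
version: "3.8"
services:
  db:
    image: registry.example.com/myapp/db:latest
    volumes:
      - db-data:/var/lib/db      # only the db tier holds state
  api:
    image: registry.example.com/myapp/api:latest
    depends_on:
      - db
    ports:
      - "3000:3000"              # exposed to the host for easier debugging
  frontend:
    image: registry.example.com/myapp/frontend:latest
    depends_on:
      - api
    ports:
      - "8080:80"
volumes:
  db-data:
```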
FINE DETAIL ...
During development of the db tier no difficulties arise.
However, the api tier, for example, depends on the db tier, so a Docker container for the db tier will need to be running when developing the api tier. You can use Compose for this too.
When developing the front end tier, it relies on both the db and api tiers.
It's best to use a generic solution during development that allows all containers to be stood up in their latest Docker image state, and to ignore those that are irrelevant for current purposes.
For example,
When developing the front end, bring up all three containers from the latest images. Ignore the front end container and use your front end development environment as usual. Point the front end development environment at the api container. Hope this makes sense.
Ignoring the containers that are irrelevant to the tier you are working on means you can use a common approach when bringing up Docker containers for all tiers, without having a specific solution for each.
Hope this makes sense; it's not the easiest thing to explain!
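In practice that workflow is just a couple of commands against the umbrella Compose file sketched above (the port is the same placeholder):

```sh
# Pull the latest pre-built images and start the whole stack in the background
docker compose pull
docker compose up -d
# Then ignore the front-end container and run your local dev server as usual,
# pointed at the api container's published port (e.g. http://localhost:3000).
```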
