How to deploy a Flask Backend and React Front End on Google Cloud - docker

I know this may seem like an opinion-based question, but I can't seem to find any answers anywhere. I'm having trouble figuring out how to deploy my Flask backend and React frontend on Google Cloud. I am using docker-compose on my local machine, but I can't seem to find a way to deploy that on Google Cloud.
My question is: is there a way to deploy them from a docker-compose file using Cloud Build and Cloud Run? Or do I have to create two different Cloud Run services to run the frontend and backend? Or is it better to create a VM instance and run the docker-compose setup on there (and how would one even do this)? I am very new to deployment, so any help is appreciated.
For reference, I saw this but it didn't exactly answer my question. Thanks in advance!

docker-compose is for running multi-container applications on a single host; Cloud Build and Cloud Run don't consume a compose file directly, and in your case it wouldn't add much anyway.
You have a Python backend. You can containerize it and deploy it to Cloud Run, Cloud Functions, App Engine, Google Kubernetes Engine, or even a Compute Engine VM. In my opinion, the most convenient option is Cloud Run.
If your React frontend is a single-page app, it communicates with your Python backend over HTTP. You build the static HTML/CSS/JS files and host them somewhere like a Cloud Storage bucket, optionally fronted by Cloud CDN.
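For example, here is a rough sketch of that split, assuming the Flask app lives in backend/ with its own Dockerfile, the React app is a Create React App project in frontend/, and MY_PROJECT / my-frontend-bucket are placeholder names:

```sh
# Backend: build the Flask image with Cloud Build, then deploy it to Cloud Run.
gcloud builds submit --tag gcr.io/MY_PROJECT/flask-api ./backend
gcloud run deploy flask-api \
  --image gcr.io/MY_PROJECT/flask-api \
  --region us-central1 \
  --allow-unauthenticated

# Frontend: build the static bundle and upload it to a public bucket.
npm run build --prefix frontend
gsutil rsync -r frontend/build gs://my-frontend-bucket
gsutil iam ch allUsers:objectViewer gs://my-frontend-bucket
gsutil web set -m index.html -e index.html gs://my-frontend-bucket
```

The React app then calls the Cloud Run service URL (or a custom domain mapped to it) for its API requests.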

Related

How to save doccano database to Google Cloud Storage after deploying to Cloud Run?

I deployed a doccano docker container to Cloud Run and I am successfully able to reach the WebApp.
Everything works fine, such as logging in, importing data and annotating.
Now I would like to connect the container to Google Cloud Storage in order to save all annotations in a bucket. Currently, all data is lost after the container restarts.
Any hints on how to accomplish that are highly appreciated!
What I (kind of) tried:
The container is up and running and some environment variables are set, but I don't know how I can set a bucket URI within the doccano Docker container (doccano's documentation is a bit sparse in that regard).
Maybe this can be helpful for anyone with a similar use case:
My solution/workaround for deploying doccano on GCP was to deploy the Docker container to Compute Engine (and open a port to the app) instead of Cloud Run. Cloud Run does indeed seem to be the wrong service for this use case. A Compute Engine VM has a persistent disk, which keeps all of the data even if the container has to restart.
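A minimal sketch of that workaround, assuming the doccano/doccano image from Docker Hub, its default port 8000, and placeholder VM, zone and firewall names; the mount path for doccano's data directory is an assumption, so check the image documentation:

```sh
# Run the doccano container on a Container-Optimized OS VM, with a host
# directory mounted into the container so the data survives container restarts.
gcloud compute instances create-with-container doccano-vm \
  --zone europe-west1-b \
  --machine-type e2-small \
  --container-image doccano/doccano \
  --container-mount-host-path host-path=/home/doccano-data,mount-path=/data,mode=rw \
  --tags doccano

# Open the port doccano listens on (8000 by default).
gcloud compute firewall-rules create allow-doccano \
  --allow tcp:8000 \
  --target-tags doccano
```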

How do you deploy multiple services on AWS Copilot at once?

I'm new to AWS Copilot and containers in general (frontend focused engineer helping out with DevOps at my job) and am trying to figure out how to deploy multiple containers at once. The project includes a web container, deployed through copilot as a Load Balanced Web Service, and a worker container, deployed as a Backend Service. The two containers share a code base, so when I update one, I need to update the other, especially when there is a database migration. When I do a copilot deploy, however, it seems to only give the option to choose one service. How would I synchronize the deployment?
You can deploy the two services in parallel by using Copilot's pipelines feature. Tons of info here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/ ... let us know if you have more questions not answered by the docs!
Thanks!
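For reference, the workflow is roughly the following sketch (names are placeholders; on older Copilot versions the last command may be copilot pipeline update):

```sh
# From the workspace that contains the copilot/ directory for both services:
# generate the pipeline manifest and buildspec.
copilot pipeline init

# Commit the generated files so the pipeline can read them from the repo.
git add copilot/ && git commit -m "Add Copilot pipeline" && git push

# Create the CodePipeline; each run builds the repo once and deploys
# every service in the workspace (the web service and the worker) per stage.
copilot pipeline deploy
```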

How to design and implement a containerized architecture for Nginx with ReactJS and Laravel with MongoDB?

I'm using ReactJS for the frontend with an Nginx load balancer, and Laravel for the backend with MongoDB.
In the old architecture, the code is uploaded to GitHub in separate frontend and backend repositories.
We don't use Docker and Kubernetes yet; I want to introduce them in the new architecture. I'm using a private cloud server, so I'm restricted from deploying on AWS/Azure/GCP/etc.
Please share your architecture plan and implementation ideas for a better approach to microservices!
My current thinking (a rough sketch of the build-and-push steps follows this list):
Write a Dockerfile for the React project and one for the Laravel project.
Push the images to a private Docker registry (e.g. a private Docker Hub repository).
Install Docker and Kubernetes on the VM.
Deploy containers for (1) React and (2) Laravel from those images.
Also deploy (3) Nginx and (4) MongoDB containers from the official registry images.
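A rough sketch of building and pushing those images, assuming each app has its own Dockerfile and using placeholder registry and image names:

```sh
# Build one image per app (paths and tags are placeholders).
docker build -t registry.example.com/myteam/react-frontend:1.0.0 ./frontend
docker build -t registry.example.com/myteam/laravel-api:1.0.0 ./backend

# Push them to the private registry.
docker login registry.example.com
docker push registry.example.com/myteam/react-frontend:1.0.0
docker push registry.example.com/myteam/laravel-api:1.0.0
```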
Some of my questions:
How do I make the connections between the containers?
How do I pull a new image into a container when a new version is released?
How do I create replicas for a disaster recovery plan?
How do I monitor errors and performance?
Most important: how do I build the CI/CD pipeline?
How do I set up dev, staging, and production environments?
This is more of a planning question. Most of the tasks can be automated by the developer/DevOps engineer, except for a few administrative tasks like monitoring and environment creation,
and even those can be a shared responsibility if a team is available to manage the product/services.
You can use GitLab, which can attach directly to a Kubernetes provider and can reduce the number of build steps.
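A rough .gitlab-ci.yml sketch of that idea (all names are placeholders, and it assumes the runner has cluster credentials, e.g. via a GitLab Kubernetes agent or a KUBECONFIG CI variable):

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Build and push the backend image to the GitLab container registry.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # Roll the (hypothetical) laravel-api Deployment to the freshly built image.
    - kubectl set image deployment/laravel-api laravel-api="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```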

Kubernetes - from Minikube to production

I have created a simple PHP api application that works with a mysql database to store data. I have been experimenting with Kubernetes on my Windows 10 machine through Minikube.
I have just about got my head round the ideas involved, yet I’m not sure about how to implement this properly. So far I have used Kompose to create a set of yaml files from an existing docker-compose file. This has been half successful.
To get my application code into a pod hosting PHP, I have been using hostPath to share from my local machine. I mount to the minikube machine and share from there. I was having trouble sharing by other means. The application code is hosted in a github repo.
My questions are:
Is mounting my application code into a pod (assuming this is similar to what happens in Docker) the correct way to do this? I'm not clear exactly what information is held in an image retrieved from Docker Hub, although I have read up on how containers isolate the build environment from your machine.
How does this approach translate into a production environment hosted in the cloud? I see there are various storage types. I had, for example, wanted to try deploying on AWS just to see how this would work in practice.
I’m really looking for guidance to go from the tutorials found on the web working on my machine, to something that could be done for a customer hosted on the cloud. This might scale up to a more microservices style architecture over time.
The approach you are describing is mostly for development setups, where you mount your code into the container as a volume so you don't have to rebuild every time your code changes. This is typically done with a docker-compose file.
For production setups, you want the Docker image to work on its own and to only mount volumes for data you want to persist; databases are the typical example. For this, EKS is deeply integrated into the AWS infrastructure and will create EBS volumes on demand, so in most cases you don't need to provision volumes yourself or even think about them (unless you need multiple read-write volumes for scaling).
For a PHP application you really should not persist any data in the pod, because it will create other issues when you need to scale the application. A good approach for managing files that need to persist is S3 (AWS Simple Storage Service).
So generally speaking, you need a Deployment per application, a Service to reach the pods of that application, and an Ingress object to route traffic from the internet to them.
Your application's Docker image is really the core: you build it with your code inside. Make sure to pass configuration via environment variables or a configuration file so the app can connect to the database.
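For instance, a hypothetical Dockerfile for the PHP API, with the code baked into the image instead of mounted via hostPath:

```dockerfile
# The image contains the runtime plus your application code, so the same
# image runs unchanged on Minikube and in production.
FROM php:8.2-apache
COPY . /var/www/html/
# Configuration (e.g. the database host) comes from the environment at runtime.
ENV DB_HOST=mysql
EXPOSE 80
```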
Now for Kubernetes: for each component (e.g. the PHP application, MySQL) you will most likely create a Deployment manifest that points to the Docker image and sets some configuration environment variables.
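A minimal Deployment sketch along those lines (the image name, port and secret name are hypothetical placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-api
  template:
    metadata:
      labels:
        app: php-api
    spec:
      containers:
        - name: php-api
          image: registry.example.com/php-api:1.0.0  # your image with the code baked in
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: mysql                           # the MySQL Service name
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: password
```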
For production, you will also need persistent volumes for the database. On AWS you can simply use EBS-backed volumes.
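For example, a PersistentVolumeClaim sketch for the MySQL data, assuming the cluster's default EBS-backed storage class (gp2 on a stock EKS cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2   # EBS-backed class on EKS; adjust to your cluster
  resources:
    requests:
      storage: 10Gi
```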
To get traffic from the internet to your PHP application, you will need to add one or more k8s components (see the sketch after this list):
A k8s Service manifest that exposes your PHP Deployment's pods on a stable address. If you only have one or very few services, you can use a Service of type LoadBalancer, which on a cloud like AWS will create an ELB/ALB (you might need to add annotations to your Service).
An Ingress, which is backed by a reverse proxy (Contour, NGINX, Traefik). In a cloud environment it will also map to an ALB/ELB. The advantage is that you can have a single load balancer for all your services, i.e. save money, and you can configure routing paths and TLS termination in one place.
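A sketch of that combination, assuming an NGINX ingress controller and a hypothetical api.example.com host:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: php-api
spec:
  selector:
    app: php-api
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-api
spec:
  ingressClassName: nginx          # or whichever controller your cluster runs
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: php-api
                port:
                  number: 80
```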

Can I use docker to automatically spawn instances of whole microservices?

I have a microservice-based application with about 6 separate components.
I am looking to sell instances of this microservice to people who need dedicated versions of it for their project.
Docker seems to be the solution to doing this as easily as possible.
What is still very unclear to me: is it possible to use Docker to deploy whole instances of the microservice stack within a cloud service like GCP or AWS?
Is this something more specific to the Cloud provider itself?
Basically, in the end I'd like to be able to start up, via code, a whole new instance of my microservice stack within its own network, with each component able to talk to the others.
One big problem I see is assigning IPs to the containers so that they can find each other, independent of which network they are in. Is this even possible, or is it not yet feasible with current cloud technology?
Thanks a lot in advance, I know this is a big one...
This is definitely feasible and is nowadays one of the most popular ways to ship and deploy applications. However, the deployment procedure varies slightly depending on the cloud provider you choose.
The good news is that packaging your microservices with Docker is independent of the cloud provider. You basically need to package each component as a Docker image and deploy those images to a cloud platform.
All popular cloud platforms nowadays support running Docker containers. In addition, you can use orchestration frameworks such as Docker Swarm or Kubernetes on these platforms to orchestrate the microservice deployment.
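Regarding the IP concern: containers on the same user-defined Docker network (or in the same Compose project) resolve each other by service name via Docker's built-in DNS, so you never assign IPs by hand. A hedged sketch of spinning up one isolated instance per customer with Compose, assuming your existing docker-compose.yml:

```sh
# Each -p value creates a separate Compose project with its own private
# network; inside it, components reach each other by service name
# (e.g. http://api:8000), so no IP addresses are assigned manually.
docker compose -p customer-a up -d
docker compose -p customer-b up -d

# Inspect the per-project networks that Compose created.
docker network ls --filter name=customer-
```

On a cloud orchestrator the equivalent isolation unit would be a Kubernetes namespace (or an ECS/Copilot environment) per customer, with the same name-based service discovery inside it.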
