I just want to check my understanding of a microservice architecture.
I have 5 different apps, each built and run from its own Dockerfile.
Each Dockerfile is a multi-stage build: it first builds the app, then pulls the Apache httpd image and copies the built files into that server's document root.
This means that all 5 apps have separate httpd servers, each serving its application at a different URL. The apps communicate with each other over HTTP to get the resources they need.
I'm looking to deploy this in Kubernetes.
Is it normal to have a server per service? Or would you create a single server container and copy all the apps' files over to that one?
Yes, it is normal: each microservice should have its own web server, so that they run in isolation and can be scaled individually.
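As a rough illustration, each app's Dockerfile can be a multi-stage build along these lines (the Node toolchain, paths, and directory names here are assumptions, not taken from the question):

    # Stage 1: build the app (toolchain and paths are hypothetical)
    FROM node:20 AS build
    WORKDIR /src
    COPY . .
    RUN npm ci && npm run build    # emits static files into /src/dist

    # Stage 2: serve the built files with Apache httpd
    FROM httpd:2.4
    COPY --from=build /src/dist/ /usr/local/apache2/htdocs/

In Kubernetes, each resulting image then becomes its own Deployment and Service, which is exactly what lets you scale each app independently.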
Related
We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
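For reference, strategy 1 would boil down to a multi-stage Dockerfile that copies the artifacts out of each per-app image (registry paths, tags, and directory layout are assumptions):

    FROM registry.example.com/app1:1.4.0 AS app1
    FROM registry.example.com/app2:2.1.3 AS app2
    FROM registry.example.com/app3:0.9.1 AS app3

    FROM httpd:2.4
    # Merge every app's artifacts into one document root
    COPY --from=app1 /app/dist/ /usr/local/apache2/htdocs/app1/
    COPY --from=app2 /app/dist/ /usr/local/apache2/htdocs/app2/
    COPY --from=app3 /app/dist/ /usr/local/apache2/htdocs/app3/

Any app release then means rebuilding and re-tagging this final image, which is exactly the versioning pain described above.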
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server, and then route them all together behind a front-end reverse proxy (nginx, Traefik, etc.).
Cons: Waste of resources running multiple web servers.
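A minimal sketch of strategy 2 using Traefik as the front-end proxy (image names, tags, and path rules are assumptions):

    services:
      proxy:
        image: traefik:v2.10
        command:
          - "--providers.docker=true"
          - "--entrypoints.web.address=:80"
        ports:
          - "80:80"
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock:ro"
      app1:
        image: registry.example.com/app1:1.4.0   # ships its own web server
        labels:
          - "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
      app2:
        image: registry.example.com/app2:2.1.3
        labels:
          - "traefik.http.routers.app2.rule=PathPrefix(`/app2`)"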
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that mounts the same volumes in order to have access to all the files.
So far I like the 3rd variant best. Whenever a new version of an app needs to be deployed, we simply pull a new version of its image. But it still seems hacky: volumes must be deleted manually, otherwise the volume will not be re-seeded with the new content. And having containers that do nothing isn't really the Docker way, is it?
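A compose sketch of that 3rd variant, including the caveat that a named volume is only seeded from the image the first time the volume is created (image names and paths are assumptions):

    services:
      app1:
        image: registry.example.com/app1:1.4.0
        command: ["sleep", "infinity"]   # does nothing, just owns the files (assumes the image has a shell)
        volumes:
          - app1_files:/app/dist         # seeded from the image on first creation only
      web:
        image: httpd:2.4
        ports:
          - "80:80"
        volumes:
          - app1_files:/usr/local/apache2/htdocs/app1:ro
    volumes:
      app1_files: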
A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
/var/www/
  app1/
    index.html
    css/app.css
  app2/
    index.html
    css/app2.css
    js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
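In nginx terms, that single server might look something like this (the root path and app names are assumptions, and the try_files fallbacks assume each app is an SPA with client-side routing):

    server {
        listen 80;
        root /var/www;

        # Each SPA falls back to its own index.html for client-side routes
        location /app1/ { try_files $uri $uri/ /app1/index.html; }
        location /app2/ { try_files $uri $uri/ /app2/index.html; }
    }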
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker I'd use either of your first two options, but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server is in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server isn't that expensive.
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case: on native Docker only, using named volumes (not any other kind of mount), and only the first time the volume is used; the volume content is never updated afterwards. So you'd have to manually write code to copy content out of an image into a shared hosting location, which is more complicated and doesn't really gain you anything over directly running Webpack on the host.
I have an SPA dockerized with a single Dockerfile (the server side is Kotlin with Spring Boot, the front end is TypeScript with React) and am trying to host that Docker image on GCP as a web app.
At first I thought Cloud Run could be appropriate, but it seems that Cloud Run is a serverless service and not meant for hosting a web app. I understand there are several options: App Engine (flexible environment), Compute Engine, and Kubernetes Engine.
Given the story above, can I ask the GCP community which one to choose for these purposes:
Hosting a Docker image stored in Container Registry
The app should be publicly deployed; i.e. everyone can access it via a browser like any other website
The deployed Docker image needs to connect to Cloud SQL to persist its data
Planning to use Cloud Build for the CI/CD environment
Any help would be much appreciated. Thank you!
IMO, you should avoid the options you propose (Kubernetes, Compute Engine, and App Engine Flex) and (re)consider Cloud Run and App Engine Standard.
App Engine Standard can't run your existing container, but you can simply deploy your code and let it build and deploy its own container (with your code inside).
My preference is Cloud Run; it's well suited to web apps, as long as:
You only perform processing in response to requests (no background processes, no operations running longer than 60 minutes)
You don't need to store data locally (data goes to external services: databases or object storage)
I also recommend splitting your front end and your back end:
Deploy your front end on App Engine Standard or on Cloud Storage
Deploy your back end on Cloud Run (and thus in a container); a deploy sketch follows after the list of advantages below
Put an HTTPS load balancer in front of both to avoid CORS issues and to expose only one URL (behind your own domain name)
The main advantages are:
If you serve your files from Cloud Storage you can leverage caching, which reduces both cost and latency; the same goes if you use the CDN capability of the load balancer. If you host your front end on Cloud Run or any other compute service, you spend CPU and memory just to serve static files, and you pay for that for nothing.
Separating the front end and the back end lets you evolve each part independently, redeploying only the part that has changed rather than the whole application.
The proposed pattern is an enterprise-grade pattern: starting at around $16 per month, you can scale high and globally. You can also enable a WAF on the load balancer to improve security and attack prevention.
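As a sketch of the Cloud Run piece of that split, with the Cloud SQL connection attached (the service, project, region, and instance names here are hypothetical):

    # Deploy the back-end container and attach the Cloud SQL instance
    gcloud run deploy my-backend \
      --image gcr.io/my-project/my-backend:latest \
      --region us-central1 \
      --allow-unauthenticated \
      --add-cloudsql-instances my-project:us-central1:my-db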
So, if you agree with that, what are your next questions?
I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO mainly cover how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all its dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the docker-compose.yml or Dockerfile to any remote web server. All the software and dependencies get installed and everything runs just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to add an extra NGINX for routing.
But a VPS can host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, Hostgator, etc., or is Docker only ideal for development on a local machine and not for deployment to web servers for hosting?
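For reference, the "extra NGINX for routing" mentioned in the question might look something like this (domain names and host ports are assumptions):

    # Route each domain to the container published on a local port
    server {
        listen 80;
        server_name site1.example.com;
        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
        }
    }
    server {
        listen 80;
        server_name site2.example.com;
        location / {
            proxy_pass http://127.0.0.1:8082;
            proxy_set_header Host $host;
        }
    }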
The main benefits of Docker for simple web hosting are, IMO, the following:
isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
separation of concerns: if you split your setup into multiple containers you can easily upgrade or replace one part of it. (Just consider a setup with 2 websites which each need a Postgres database. If each website has its own db container, you won't have any issue bumping the Postgres version for one of the websites without affecting the other.)
reproducibility: you can build the Docker image once, test it on acceptance, and promote the exact same image to staging and later to production. You'll also have the same environment locally as on your server.
environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its own specific environment variables.
security: one can argue about this one, as containers by themselves won't do much for you in terms of security. However, thanks to easier dependency upgrades, separated networking, etc., most people will end up with a setup that is more secure. (Think about the db containers again: they can share a network with your app/website container, and there is no need to expose their port at all.)
Note that you should be careful with Docker's port mapping. It manipulates iptables directly and will bypass the rules of most firewalls (like ufw) by default. There is a repo with information on how to avoid this: https://github.com/chaifeng/ufw-docker
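One common mitigation, if all traffic goes through a reverse proxy anyway, is to publish container ports only on the loopback interface (a compose sketch; the image name is made up):

    services:
      website:
        image: registry.example.com/website:latest
        ports:
          - "127.0.0.1:8081:80"   # reachable by the host/proxy, not from outside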
There are also quite a few projects which make routing requests to the applications (in this case, containers) very pleasant and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a web server with multiple containers that should all be accessible on ports 80 and 443.
My application is written in .NET Core as a console app. It consumes a RabbitMQ queue, listens on SignalR sockets, calls 3rd-party APIs, and publishes to RabbitMQ queues. It needs to run 24/7.
This is all working great in my local environment, but now that I am ready to deploy to a web server, I am trying to work out how best to host this application. I am leaning towards deploying into a Docker container, but I am unsure whether this is advisable for a 24/7 application.
Are containers designed for short-lived workers only, and will they be costly to leave running all the time?
Can I put my container on my web server alongside my Web APIs etc., hosting on the same Windows EC2 box, maybe to save hosting costs?
How would others approach the deployment of this .NET Core application onto a web hosting environment?
Does your application maintain any state? You can have a long-lived application, but you'll want to handle state if you maintain it. You might be able to use a compose file to handle everything: volumes, networking, and restart policies.
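A minimal compose sketch for a worker like this, assuming the broker runs alongside it (image names and the environment variable are assumptions):

    services:
      worker:
        image: registry.example.com/queue-worker:latest
        restart: unless-stopped      # keep the 24/7 process alive across crashes and reboots
        environment:
          RABBITMQ_HOST: rabbitmq
        depends_on:
          - rabbitmq
      rabbitmq:
        image: rabbitmq:3-management
        restart: unless-stopped
        volumes:
          - rabbitmq_data:/var/lib/rabbitmq   # persist broker state
    volumes:
      rabbitmq_data: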
I have an Azure web app running. I need to move this application to Docker so I can flexibly move my apps to different cloud services.
I am not sure whether a web app can be containerized directly with a Dockerfile, or whether I need to move it to an Azure container service first.
Please help.
I have tried creating and spinning up web apps and their respective databases, but I am not sure of the next steps to containerize or dockerize them.
If you only have one Docker image you can stick with Azure Web App for Containers; otherwise you will need to go with Azure Container Service.
You can look at this SO post for a quick comparison.
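If the web app is, say, an ASP.NET Core site (an assumption, as is the project name below), the Dockerfile to containerize it directly can be as small as:

    # Build stage: compile and publish the app
    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish MyWebApp.csproj -c Release -o /out

    # Runtime stage: only the published output plus the ASP.NET runtime
    FROM mcr.microsoft.com/dotnet/aspnet:8.0
    WORKDIR /app
    COPY --from=build /out .
    ENTRYPOINT ["dotnet", "MyWebApp.dll"]

The resulting image can be pushed to a registry and then pointed at from Azure Web App for Containers.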