Hosting multiple Single Page Apps with Docker

We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server. And then route them all together with a front-end reverse proxy (nginx, traefik, etc.).
Cons: Waste of resources running multiple web servers.
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that shares the same volumes in order to have access to all the files.
So far I like the 3rd variant best. Whenever a new version of an app needs to be deployed, we simply do a Docker pull on a new version of its image. But it still seems hacky. Volumes must be deleted manually; otherwise the volume will not be seeded with the new content. Also, having containers that do nothing isn't really the Docker way, is it?

A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
/var/www/
  app1/
    index.html
    css/app.css
  app2/
    index.html
    css/app2.css
    js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
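For illustration, a minimal nginx server block for that layout might look like the following; the root path and the per-app try_files fallbacks (so client-side routes still resolve to each app's index.html) are assumptions:

server {
    listen 80;
    root /var/www;

    # Each app lives under its own path prefix; unknown paths inside a prefix
    # fall back to that app's index.html for client-side routing.
    location /app1/ {
        try_files $uri $uri/ /app1/index.html;
    }
    location /app2/ {
        try_files $uri $uri/ /app2/index.html;
    }
}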
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker I'd use either of your first two options but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server is in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server isn't that high.
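For the combined-image pattern (your option 1), one sketch is to pull each app's artifacts out of its CI image with a multi-stage COPY; the image names and the /usr/share/nginx/html artifact path here are assumptions:

FROM nginx:alpine
# Copy the build artifacts out of each per-app image into a path-prefixed folder.
COPY --from=registry/app1:1.2.3 /usr/share/nginx/html/ /usr/share/nginx/html/app1/
COPY --from=registry/app2:4.5.6 /usr/share/nginx/html/ /usr/share/nginx/html/app2/

Every combination of app versions still produces a new final image, which is the versioning cost the question points out, but rolling one app back is just a rebuild with an older tag.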
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case (on native Docker only, using named volumes but not any other kind of mount, only the first time you use a volume but never updating the volume content), and so you'd have to manually write code to copy content out of an image into a shared hosting location; that's more complicated and doesn't really gain you anything over directly running Webpack on the host.

Related

How many servers in a microservice architecture?

I just want to check my understanding of a microservice architecture.
I have 5 different apps that I'm building and running, each with its own Dockerfile.
Each Dockerfile first builds that app, then pulls the Apache httpd image and copies the built files over to its server.
This means that all 5 apps have separate httpd servers serving that application at different urls. Each app communicates with the other, getting the necessary resources over http.
I'm looking to deploy this in Kubernetes.
Is it normal to have a server per service, or would you create a single server container and copy all the files over to that one?
Yes, it is normal: each microservice should have its own web server, so that they run in isolation and can be scaled individually.
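A minimal multi-stage Dockerfile sketch of that per-app pattern (the Node build step, the dist/ output path, and the npm scripts are assumptions):

# Build stage: compile the front-end app.
FROM node:18 AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build

# Serve stage: copy the built files into Apache httpd's document root.
FROM httpd:2.4
COPY --from=build /src/dist/ /usr/local/apache2/htdocs/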

Why are multi-container Docker apps built?

Can somebody explain it with some examples? Why are multi-container Docker apps built, when you could contain your app in a single Docker container?
When you make a multi-container app you have to deal with networking. Isn't it easier to run a single image as a single container rather than two images as two containers?
There are several good reasons for this:
It's easier to reuse prebuilt images. If you need MySQL, or Redis, or an Nginx reverse proxy, these all exist as standard images on Docker Hub, and you can just include them in a multi-container Docker Compose setup. If you tried to put them into a single image, you'd have to install and configure them yourself.
The Docker tooling is built for single-purpose containers. If you want the logs of a multi-process container, docker logs will generally print out the supervisord logs, which aren't what you want; if you want to restart one of those containers, the docker stop; docker rm; docker run sequence will delete the whole thing. Instead with a multi-process container you need to use debugging tools like docker exec to do anything, which is harder to manage.
You can upgrade one part without affecting the rest. Upgrading the code in a container usually involves building a new image, stopping and deleting the old container, and running a new container from the new image. The "deleting the old container" part is important, and routine; if you need to delete your database to upgrade your application, you risk losing data.
You can scale one part without affecting the rest. More applicable in a cluster environment like Docker Swarm or Kubernetes. If your application is overloaded (especially in production) you'd like to run multiple copies of it, but it's hard to run multiple copies of a standard relational database. That essentially requires you to run these separately, so you can run one proxy, five application servers, and one database.
Setting up a multi-container application shouldn't be especially difficult; the easiest way is to use Docker Compose, which will deal with things like creating a network for you.
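As an illustration, a minimal Compose file along these lines (the service names, port, and Redis dependency are placeholders) runs each piece as its own container on a shared, automatically created network:

version: "3.8"
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      REDIS_HOST: redis   # the service name is resolvable on the Compose network
  redis:
    image: redis:7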
For the sake of simplicity, I would say you should run only one application with a public entry point (like an API) in a single container. This approach is actually recommended by the official Docker documentation.
Microservices
Because of this single constraint, you cannot run microservices that require their own entry points in a single docker container.
It could be more of a discussion on the advantages of a monolith application vs. microservices.
Database
Even if you decide to run only a monolith application, you still need to connect it to some database. As you noticed, Docker adds a network-configuration layer, so if you want to run the database and application locally, the easiest way is to use docker-compose to run both images (the database and your application) inside one automatically configured network.
# Application definition
application: <your app definition>
# Database definition
database:
  image: mysql:5.7
In my example, your main app can connect to the DB using the hostname database and port <port> (plus credentials if needed) and it will work.
Scalability
However, why should we split the database image from the application image? In one word: scalability. For development purposes you want a local DB, perhaps in Docker because it is handy. For production purposes you will run the application image somewhere (Kubernetes, Docker Swarm, Azure App Services, etc.). To handle multiple requests at the same time, you want to run multiple instances of your application. But what about the database? You cannot rely on a DB instance hosted inside the application container, because every other instance of your app would then have its own, completely different set of data (with no synchronization).
Most often you will elect to use a separate database server, whether running in a container or as a fully managed database (like Azure Cosmos DB or MongoDB Atlas), with configuration, scaling, and synchronization dedicated to the DB only. Your app just needs to know the proper URL for it. Most cloud providers expose such services out of the box, so you don't have to handle the configuration yourself.
Easy to change
The last but not least argument is about changing the initial setup over time. You might change the database provider or upgrade the image version in the future (such things are required from time to time). When you separate images, you can modify one without touching the others. This decreases the cost of maintenance significantly.
Also, you can add additional services very easily. A different logging aggregator? No problem. An additional microservice running out of the box? Easy.

Editing application settings of a containerized application after deployment

There might be something I fundamentally misunderstand about Docker and containers, but... my scenario is as follows:
I have created an asp.net core application and a docker image for it.
The application requires some settings to be added / removed at runtime
Also some dll plugins could be added and loaded by the application
These settings would normally be stored in appsettings.json and a few other settings files located in a predefined relative path (e.g. ./PluginsConfig)
I don't know how many plugins there will be or how they will be configured
I didn't want to create any kind of UI in the web application for managing settings and uploading plugins - this was to be done on the backend (I need the solution to be simple and cheap)
I intend to deploy this application on a single server and the admin user would be able and responsible for setting the settings, uploading plugins etc. It's an internal productivity tool - there might be many instances of this application, but they would not be related at all.
The reason I want it in docker is to have it as self-contained as possible, with all the dependencies being there.
But how would I then allow the plugins and config files to be accessed, added, and edited?
I'm sure there's a pattern that would allow this scenario.
What you are looking for are volumes and bind mounts. You can bind files or directories from a host machine to a container. Thus, host and container can share files.
Sample command (a bind mount; there are also other ways):
docker container run -v /path/on/host:/path/in/container image
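Applied to the scenario in the question, a sketch might look like the following; the host paths, container paths, published port, and image name are all assumptions (official ASP.NET Core images typically run the app from /app):

docker container run -d \
  -p 8080:80 \
  -v /srv/mytool/appsettings.json:/app/appsettings.json \
  -v /srv/mytool/PluginsConfig:/app/PluginsConfig \
  -v /srv/mytool/plugins:/app/plugins \
  mytool:latest

The admin can then edit settings or drop plugin DLLs directly on the host and restart the container to pick up the changes.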
See the Docker documentation for detailed information on volumes and bind mounts.

Is it wise to delete the default webapps from a Tomcat-based docker image?

I am containerizing an older Java web application with Docker. My Dockerfile pulls an official Tomcat image from Docker Hub (specifically, tomcat:8.5.49-jdk8-openjdk), copies my .WAR file into the webapps/ directory, and copies in some idiosyncratic configuration files and dependencies. It works.
Now I know that Tomcat comes out-of-the-box with a few directories under webapps/, including the "manager" app, and some others: ROOT, docs, examples, host-manager. I'm thinking I ought to delete these, lest one of my users access them, which might be a security risk and is unprofessional at the least.
Is it a best practice to delete those installed-by-default web apps from an official Tomcat image? Is there any downside to doing so? It seems logical to me, but a web search didn't turn up any expert opinion either way.
Every folder under webapps represents a discrete web application deployed in the Tomcat servlet container after server startup.
None of those web applications has any implicit or explicit dependency on Catalina, Jasper, or any other system component of Tomcat.
You should be quite OK to remove all of those folders (apps), unless you need the Manager application to manage your deployments and the server. Even that can be installed again later on.
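In Dockerfile terms, that could be a sketch like this (the WAR name is a placeholder; /usr/local/tomcat is the official image's CATALINA_HOME):

FROM tomcat:8.5.49-jdk8-openjdk
# Drop the default webapps: ROOT, docs, examples, manager, host-manager.
RUN rm -rf /usr/local/tomcat/webapps/*
# Deploy the application as the root context.
COPY mywebapp.war /usr/local/tomcat/webapps/ROOT.war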

What is the benefit of dockerizing an SPA web app?

I dockerize my SPA web app by using nginx as the base image and then copying in my nginx.conf and build files. As the Dockerize Vue.js App guide mentions, I think many SPA dockerizing solutions are similar.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing/setting up nginx I barely change it at all).
So what's the benefit of dockerizing an SPA?
----- update -----
One answer said "If the app is dockerized each time you are releasing a new version of your app the Nginx server gets all the new updates available for it." I don't agree with that at all. I don't need the latest version of nginx; after all, I only use basic nginx features. Some of my team members just use the nginx version bundled with Linux when doing development. If my Docker image uses the latest nginx, it actually creates a different environment from the development environment.
I realize my question will probably be closed because it will be seen as opinion-based, but I have googled it and can't find a satisfying answer.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing/setting up nginx I barely change it at all).
This is a security concern... it sounds like the server is being set up once and then forgotten ("fire and forget").
If the app is dockerized each time you are releasing a new version of your app the Nginx server gets all the new updates available for it.
Bear in mind that if your app does not release new versions on a weekly basis, then you should consider rebuilding the Docker images at least weekly in order to pick up updates and keep everything current with the latest security patches.
So what's the benefit of dockerizing an SPA?
Same environment across development, staging, and production. This is called 100% parity across all the stages where you run your app, and it is true no matter what type of application you deploy.
If something doesn't work in production you can pull the Docker image by its digest and run it locally to debug and try to understand where the problem is. If you need to SSH into a production server, it means your automation pipeline has failed, or maybe you are not even using one...
Tools like Webpack compile Javascript applications to static files that can then be served with your choice of HTTP server. Once you’ve built your SPA, the built files are indistinguishable from pages like index.html and other assets like image files: they’re just static files that get served by some HTTP server.
A Docker container encapsulates a single running process. It doesn’t really do a good job at containing these static files per se.
You’ll frequently see “SPA Docker containers” that run a developer-oriented HTTP server. There’s no particular benefit to doing this, though. You can get an equally good developer experience just by developing your application locally, running npm run build or whatever to create a dist directory, and then publishing that the same way you’d publish other assets. An automation pipeline is helpful here, but this isn’t a task Docker makes wildly simpler.
(Also remember when you do this that the built application runs on the user’s browser. That means it can’t see any of the Docker-internal networking machinery: it can’t use the Docker-internal IP addresses and it can’t use the built-in Docker DNS service. Everything it reaches has to be on docker run -p published ports and it has to use a DNS name that reaches the host. The browser literally has no idea Docker is involved in this at all.)
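A quick illustration of that constraint (the hostnames and ports are made up):

# The SPA is only reachable through a port published on the host:
docker run -d -p 8080:80 my-spa-image
# And the code running in the browser must call a host-reachable URL,
# e.g. https://api.example.com or http://localhost:8081,
# never a Docker-internal name like http://backend:3000.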
There are a few benefits.
Firstly, building a Docker image means you are explicitly stating what your application's canonical run-time is - this version of nginx, with that SSL configuration, whatever. Changes to the run-time are in source control, so you can upgrade predictably and reversibly. You say you don't want "the latest version" - but what if that latest version patches a critical security vulnerability? Being able to upgrade predictably, on "disposable" containers means you upgrade when you want to.
Secondly, if the entire development team uses the same Docker image, you avoid the challenges with different configurations giving the "it works on my machine" response to bugs - in SPAs, different configurations of nginx can lead to different behaviour. New developers who join the team don't have to install or configure anything, and can use any device they want - they can be certain that what runs in Docker is the same as it is for all the other developers.
Thirdly, by having all your environments containerized (not just development, but test and production), you make it easy to move versions through the pipeline and only change the environment-specific values.
Now, for an SPA, these benefits are real, but may not outweigh the cost and effort of creating and maintaining Docker images - inevitably, the Docker image becomes a bottleneck and the first thing people blame. I'd only invest in it if you see lots of environment-specific pain (suggesting having a consistent run-time environment is necessary), or if you see lots of "it works on my machine" type of bug.
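As a small sketch of that first point, the run-time version and the nginx configuration both live in source control next to the app (the tag and paths are assumptions, and dist/ is assumed to be built before docker build):

# Run-time pinned explicitly; upgrades happen via a source-controlled change.
FROM nginx:1.25
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY dist/ /usr/share/nginx/html/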
