Should I add DB, API and FE in one docker-compose file? - docker

I have a project with a FE, a BE and a DB.
All the tutorials I have found put the three in one file.
Should the DB go in one docker-compose file and the BE and FE in another?
Or should it be one file per project, with the DB, FE and BE?
[UPDATE]
The stack I'm using is Spring Boot, Postgres and Angular.

Logically your application has two parts: the front-end runs in the browser and makes HTTP requests to the back-end. The database is an implementation detail of the back-end, not something you need to manage separately.
So I'd consider two possible Compose layouts:
This is "one application", and there is one docker-compose.yml that manages all of it.
The front- and back-end are managed separately, since they are two separate components with a network API. You have a frontend/docker-compose.yml that manages the front-end, and a backend/docker-compose.yml that manages the back-end and its associated database.
Typical container style is not to have a single shared database. Since it's easy to launch an isolated database in a container, you'd generally have a separate database per application (or per microservice if you're using that style). Of the options you suggest, a separate Compose file only launching a standalone database is the one I'd least consider.
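For the single-file layout, a minimal docker-compose.yml sketch for this Spring Boot / Postgres / Angular stack might look like the following (the build paths, ports, and credentials are placeholders, not anything your project defines):

version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: app              # placeholder database name and credentials
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    build: ./backend                # assumes a Dockerfile for the Spring Boot app
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/app
      SPRING_DATASOURCE_USERNAME: app
      SPRING_DATASOURCE_PASSWORD: secret
    depends_on:
      - db
    ports:
      - "8080:8080"
  frontend:
    build: ./frontend               # assumes a Dockerfile that builds and serves the Angular app
    ports:
      - "80:80"
volumes:
  pgdata:

In the split layout, the db and backend services move into backend/docker-compose.yml and the frontend service into frontend/docker-compose.yml, each run as its own Compose project.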
Another consideration is that you may not need a container for the front-end at all. For a browser application like yours (this applies to Angular as much as to React), the "classic" flow of compiling it to static files and publishing them via any convenient HTTP service still works with a Docker-based back-end. You get some advantages from this path, like the bundler's hashed build file names, so you can avoid disrupting running clients when you push out a new build. If you separate the front- and back-ends initially, you can later change the way the front-end is deployed without affecting the back-end at all.
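To illustrate that non-container path: once ng build has produced the static bundle, any HTTP server can publish it and forward API calls to the Dockerized back-end. A hypothetical Nginx snippet, where the paths and the /api prefix are assumptions:

server {
    listen 80;

    # Serve the compiled front-end bundle; hashed file names cache well.
    root /var/www/app/dist;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    # Hand API calls to the Dockerized back-end (assumed port).
    location /api/ {
        proxy_pass http://localhost:8080;
    }
}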

Related

One VPS, multiples services, different projects/domains

This is my first VPS, so I am pretty new to administrating my own box. I already have experience with a managed web server, registrars, DNS settings, etc. The basics. Now I'd like to take it a step further and manage my own VPS to run multiple services for different business and private projects.
So far I have got a VPS from Contabo, updated the system, set up a new user with sudo rights, secured the root user, configured UFW, installed Nginx with server blocks for two domains, and created SSL certificates for one domain using Certbot.
Before I go on setting up my VPS, I'd like to verify that my approach for hosting multiple services for multiple domains makes sense and is a good way to go.
My goal is to host the following services on my VPS. Some of them will be used by all projects, some only by a single one:
static website hosting
dynamic website hosting with a lightweight CMS
send and receive emails
Nextcloud/Owncloud
Ghost blog
My current approach is to run all services except Nginx and the mail server in Docker, using Nginx as a reverse proxy to the services encapsulated in containers.
Is this overkill, or a valid way forward to keep the system nice and clean? Since I am new to all of this, I am unsure whether I could also run all of the services without Docker and still serve the different projects on different domains without messing up the system.
Furthermore, I'd like to make sure that access to the services and the stored data is properly separated between the different tenants (projects). And of course, ideally, administration of the services should remain manageable.
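For illustration, the Nginx-in-front approach described above usually amounts to one server block per domain, each proxying to a container's published port. A rough sketch, with the domain and port as placeholders:

# /etc/nginx/sites-available/blog.example.com
server {
    listen 443 ssl;
    server_name blog.example.com;
    # ssl_certificate / ssl_certificate_key as issued by Certbot ...

    location / {
        # Ghost container published on a host port, e.g. docker run -p 2368:2368 ghost
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}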

Run Docker containers dynamically according to a DB?

I'm developing an app which live-streams video and/or audio from different entities. Those entities' IDs and configurations are stored as records in my DB. My app's current architecture is something such as the following:
a CRUD API endpoint for system-wide functionalities, such as logging in or editing an entity's configuration.
N other endpoints (where N is the number of entities, and each endpoint's route is defined by the specific entity's ID, like so: "/:id/api/") for each entity's specific functionalities. Each entity is loaded by the app on initialization. These endpoints act as both a REST API handler and a WebSocket server for live-streaming media received from the backend configured for that entity.
On top of that, there's an NGINX instance which acts as a proxy and hosts our client files.
Obviously, this isn't very scalable at the moment (a single server instance handles an ever-growing number of entities) and requires restarting my server's instance when adding or deleting an entity, which isn't ideal. I was thinking of splitting my app's server into micro-services: one for system-wide CRUD, and N others, one per entity defined in my DB. Ultimately, I'd like those micro-services to run as Docker containers. The problems (or questions to which I don't know the answers) I'm facing at the moment are:
How does one run Docker containers dynamically, according to a DB (or programmatically)? Is it even possible?
How does one update a running Docker container so that its entity can be reconfigured at runtime?
How would one even configure NGINX to proxy those dynamic micro-services? I'm guessing I'll have to use something like Consul?
I'm not very knowledgeable, so pardon me if I'm being naive in thinking I can achieve such an architecture. Also, if you can think of a better architecture, I'd love to hear your suggestions.
Thanks!
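For illustration of the first question: launching containers programmatically is possible through the Docker Engine API. A minimal, hypothetical sketch using the official Go SDK, with the image name and naming scheme as placeholders:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// startEntityService launches one per-entity microservice container.
// The image, env var, and naming scheme are illustrative placeholders.
func startEntityService(ctx context.Context, cli *client.Client, entityID string) error {
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{
			Image: "myorg/entity-service:latest",
			Env:   []string{"ENTITY_ID=" + entityID},
		},
		nil, nil, nil,      // default host/network config, any platform
		"entity-"+entityID, // container name derived from the DB record
	)
	if err != nil {
		return err
	}
	// Recent SDK versions use container.StartOptions (older ones: types.ContainerStartOptions).
	return cli.ContainerStart(ctx, resp.ID, container.StartOptions{})
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// In the real app the entity IDs would come from the database.
	if err := startEntityService(context.Background(), cli, "42"); err != nil {
		panic(err)
	}
	fmt.Println("container started")
}

As for the NGINX question, reverse proxies such as Traefik or nginx-proxy can watch the Docker API and reconfigure their routes automatically as containers come and go, which avoids hand-editing the proxy config per entity.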

How can I implement a sub-api gateway that can be replicated?

Preface
I am currently trying to learn how micro-services work and how to implement container replication and API gateways. I've hit a roadblock, though.
My Application
I have three main services for my application.
API Gateway
Crawler Manager
User
I will be focusing on the API Gateway and Crawler Manager services for this question.
API Gateway
This is a docker container running a Go server. The communication is all done with GraphQL.
I am using an API Gateway because I expect to have different services in my application, each having its own specialized API. The gateway is there to unify everything.
All it does is proxy requests to their appropriate service and return a response back to the client.
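For illustration, the proxying itself can be very thin. A hypothetical sketch using Go's standard library, with the target address as a placeholder:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder address for the Crawler Manager service.
	target, err := url.Parse("http://crawler-manager:8080")
	if err != nil {
		log.Fatal(err)
	}

	// Forward every request under /crawler to the Crawler Manager unchanged;
	// the gateway contributes routing and nothing else.
	http.Handle("/crawler", httputil.NewSingleHostReverseProxy(target))

	log.Fatal(http.ListenAndServe(":8080", nil))
}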
Crawler Manager
This is another docker container running a Go server. The communication is done with GraphQL.
More or less, this behaves similarly to another API gateway. Let me explain.
This service expects the client to send a request like this:
{
  # In production 'url' will be encoded in base64
  example(url: "https://apple.example/") {
    test
  }
}
The url can only link to one of these three sites:
https://apple.example/
https://peach.example/
https://mango.example/
Any other site is strictly prohibited.
Once the Crawler Manager service receives a request and the link is one of those three, it decides which other service should fulfill the request. In that way it behaves much like another API gateway, but a specialized one.
Each URL domain gets its own dedicated service to process it. Why? Each site varies quite a bit in markup and needs to be crawled for information; with a separate service per site, a change to one site can't take the whole Crawler Manager service down.
As far as the querying goes, each site will return a response formatted identically to the other sites.
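For illustration, that routing decision boils down to a hostname lookup. A hypothetical Go sketch, with the per-site service addresses as placeholders:

package main

import (
	"fmt"
	"net/url"
)

// routeForURL maps an allowed site to its dedicated crawler service.
// The addresses are placeholders for however those services are reached.
func routeForURL(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	services := map[string]string{
		"apple.example": "http://apple-crawler:8080",
		"peach.example": "http://peach-crawler:8080",
		"mango.example": "http://mango-crawler:8080",
	}
	svc, ok := services[u.Hostname()]
	if !ok {
		return "", fmt.Errorf("site %q is not allowed", u.Hostname())
	}
	return svc, nil
}

func main() {
	svc, err := routeForURL("https://apple.example/")
	if err != nil {
		panic(err)
	}
	fmt.Println(svc) // http://apple-crawler:8080
}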
Visual Outline
[architecture diagram not reproduced here]
Problem
Now that we have a bit of an idea of how my application works, I want to discuss my actual issues.
Is having a sort of secondary API gateway standard and good practice? Is there a better way?
How can I replicate this system and have multiple Crawler Manager service family instances?
I'm really confused about how I'd actually create this setup. I looked at clusters in Docker Swarm / Kubernetes, but with the way I have it set up, it seems like I'd need to make clusters of clusters. That makes me question my overall design. Maybe I shouldn't try to keep them so rigidly structured?
At a very generic level, if service A calls service B, which has multiple replicas B1, B2, B3, ..., then it needs to know how to reach them. The two basic options are to have some sort of service registry that can return all of the replicas, so the caller picks one, or to put a load balancer in front of the second service and reach that directly. Usually the load balancer is a little easier to set up: the service call stays a plain HTTP (GraphQL) call, and in a development environment you can omit the load balancer and have one service call the other directly.
                                 /-> service-1-a
Crawler Manager --> Service 1 LB --> service-1-b
                                 \-> service-1-c
If you're willing to commit to Kubernetes, it essentially has built-in support for this pattern. A Deployment is some number of replicas of identical pods (containers), so it would manage the service-1-a, -b, -c in my diagram. A Service provides the load balancer (its default ClusterIP type provides a load balancer accessible only within the cluster) and also a DNS name. You'd configure your crawler-manager pods with perhaps an environment variable SERVICE_1_URL=http://service-1.default.svc.cluster.local/graphql to connect everything together.
(In your original diagram, each "box" that has multiple replicas of some service would be a Deployment, and the point at the top of the box where inbound connections are received would be a Service.)
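A minimal sketch of that Deployment/Service pair (the names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-1
spec:
  replicas: 3                        # service-1-a, -b, -c from the diagram
  selector:
    matchLabels:
      app: service-1
  template:
    metadata:
      labels:
        app: service-1
    spec:
      containers:
        - name: service-1
          image: myorg/service-1:latest   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-1                    # yields service-1.default.svc.cluster.local
spec:
  selector:
    app: service-1
  ports:
    - port: 80
      targetPort: 80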
In plain Docker you'd have to do a bit more work to replicate this, including manually launching the replicas and load balancers.
Architecturally what you've shown seems fine. The big "if" to me is that you've designed it so that each site you're crawling potentially gets multiple independent crawling containers and a different code base. If that's really justified in your scenario, then splitting up the services this way makes sense, and having a "second routing service" isn't really a problem.

Deploying a Grails application on a large scale

I started looking into Grails and I felt quite comfortable writing a simple prototype web app with services, controllers, a RESTful interface and some simple views.
The usual way to deploy would be to package the web app into a .war (or .zip for plugins) and then deploy it to an application server, e.g. Tomcat.
Assuming I will integrate the frontend into a larger existing frontend/portal, I don't want to package it together with a potentially heavy backend and put all of this on the same application server, which was originally meant to host frontend/portal code only. The backend might also provide services to be used by other applications.
Thinking about flexibility in scaling: is there a possibility (or a need at all?) to separately deploy the frontend (views, maybe some controllers) and the backend (maybe the REST controllers, services, domains) to different hosts by packaging two separate modules? Does anyone have experience rolling out a Grails app at large scale?

How to share a data access layer (Services and Domain Classes) between multiple Grails apps

I would like to share the data access portion of my Grails app (domain classes and services) with another Grails app. One is a standard client-facing web app; the other (not yet written) will handle periodic background tasks such as reminder emails, using the Quartz plugin or similar, with a UI that is just statistics/control for internal users.
I do not want this all bundled in one Grails application because I want to be able to scale the two apps and run them on different machines. What is the proper way to do this? I have accomplished this in the past in more legacy Java web applications by bundling the shared data access classes into a .jar and including it where needed in multiple apps, but I'm not sure that is the right approach for Grails.
I've considered a full-blown service-oriented architecture where a third Grails application is responsible for all data access and the two described above do all their data access through REST calls to this service app, but that is out of scope for the short term since the client-facing web app is already written.
Usually this is done via plugins. Create the domain classes, services, controllers, and even default GSPs that you want to share among apps, and package them as a plugin. That way you can install them in any Grails app that requires that behavior.
I've done this with some generic accounting-type behavior that is fairly common among the apps I write, like receivables, payables, etc.
One great thing is that you can write the plugin and test it separately with a test data source; when you install it into a Grails app, it will use the app's data source. It will also provide default GSPs and controllers that give you a basic set of behavior you can override in the actual app.
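A sketch of wiring that up, assuming a Grails 2.x-era BuildConfig.groovy and placeholder names for the plugin:

// grails-app/conf/BuildConfig.groovy of the consuming app

// During development, point at the plugin "in place" in a sibling directory:
grails.plugin.location.'shared-data' = "../shared-data-plugin"

// Or, once the plugin is published to an internal repository:
grails.project.dependency.resolution = {
    plugins {
        compile ":shared-data:0.1"
    }
}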
