How to set up Quartz.NET for many customer databases

I'm sure someone has come across this scenario before. We're rolling out a product where each customer has a separate copy of a database, and each customer requires Quartz.NET jobs.
Are there any recommendations on how to configure Quartz to run against each copy of the database?

You'll have to run a separate scheduler instance for each database; each Quartz scheduler runs against a single database instance.
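As a rough illustration, each per-customer scheduler could get its own instance name and ADO.NET job store pointing at that customer's database. The property names below are standard Quartz.NET configuration keys, but the scheduler name, data source name, and connection string are purely illustrative:

```ini
# customer-a.quartz.config (illustrative values)
quartz.scheduler.instanceName = CustomerA-Scheduler
quartz.jobStore.type = Quartz.Impl.AdoJobStore.JobStoreTX, Quartz
quartz.jobStore.driverDelegateType = Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz
quartz.jobStore.tablePrefix = QRTZ_
quartz.jobStore.dataSource = customerA
quartz.dataSource.customerA.provider = SqlServer
quartz.dataSource.customerA.connectionString = Server=dbhost;Database=CustomerA;Integrated Security=SSPI
```

You would then build one scheduler per such configuration, each with a distinct `quartz.scheduler.instanceName`.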

Related

Prevent multiple cron running in nest.js on docker

In Docker we have used deploy: replicas: 3 for our microservice. We have some cron jobs, and the problem is that every cron job is getting called 3 times, which is not what we want; we want each to run only once. A sample cron in NestJS:
@Cron(CronExpression.EVERY_5_MINUTES)
async runBiEventProcessor() {
  const calculationDate = new Date();
  Logger.log(`Bi Event Processor started at ${calculationDate}`);
}
How can I run this cron only once without changing the replicas to 1?
This is quite a generic problem whenever a cron or background job is part of an application that has multiple instances running concurrently.
There are multiple ways to deal with this kind of scenario. The following are some workarounds if you don't have a concrete solution:
Create a separate service only for the background processing and ensure only one instance of it is running at a time.
Expose the cron job as an API and trigger the API to start background processing. In this scenario, the load balancer will hand the request to only one instance, which ensures that only one instance handles the job. You will still need an external entity to hit the API, which can be in-house or third-party.
Use the repeatable-jobs feature from Bull Queue, or any other tool or library that provides similar features.
Bull hands the job to a single active processor; that way, it ensures the job is processed only once by only one active processor.
NestJS has a wrapper for it. Read more about Bull queue repeatable jobs here.
Implement a custom locking mechanism.
It is not as difficult as it sounds. Many schedulers in other frameworks handle concurrency on similar principles.
If you are using an RDBMS, make use of transactions and locking: create cron records in the database and acquire the lock as soon as the first cron enters and starts processing. Other concurrent jobs will either fail or time out because they cannot acquire the lock. You will need to handle a few edge cases in this approach to make it bug-free.
If you are using MongoDB, or any similar database that supports TTL (time-to-live) and unique indexes: insert a document where one of its fields has a unique constraint, so another job cannot insert a second document, as the insert will fail due to the database-level unique constraint. Also put a TTL index on the document, so it is deleted after a configured time.
These are workarounds if you don't have any other concrete options.
There are quite a few options for solving this, but I would suggest creating a NestJS microservice (or a plain Node.js service) that runs only the cron job and stores the result in a shared database, for example Redis.
Your microservice that runs the cron job does not expose anything; it only starts your cron job:
const app = await NestFactory.create(WorkerModule);
await app.init();
Your WorkerModule imports and configures the scheduler. The result of the cron job can be written to a shared DB like Redis.
Now you can still use 3 replicas of your application while preventing cron jobs from being registered in all replicas.

PostgreSQL 10 logical replication - what is the best way to sync the replica's DB tables?

I set up two VMs, where the first VM is the master PostgreSQL server and the second is the slave.
I use PostgreSQL 10 with logical replication, so I created a publication and a subscription.
Initially, I created the necessary tables on the master, then took a backup and applied it to the slave, so all tables are synced and everything works well.
I am using a Rails app with migrations, and now I want to apply a migration to the master DB which will create a lot of new tables.
What is the best way to create the same tables and indexes on the replica?
A simple solution for me is to create a master DB dump again and apply it to the slave.
But maybe there are other ways to keep the database structure in sync?
You can use Continuous Archiving to push any changes that happen on the master to the slave.
https://www.postgresql.org/docs/12/continuous-archiving.html
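Since the question uses logical replication, another common pattern is to apply the same DDL on the subscriber and then refresh the subscription so the new tables start replicating. The subscription name `mysub` below is illustrative:

```sql
-- On the subscriber, after creating the new tables there
-- (e.g. by running the same Rails migration or a schema-only dump):
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
```

This is needed because logical replication does not replicate DDL, so new tables must exist on the subscriber and the subscription must be refreshed to pick them up.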

Temporarily stop mnesia replication

I have an Erlang application that uses Mnesia to store some basic state that defines the users and roles of our system. We have a new feature to roll out that requires extending the record schema stored in one of our Mnesia tables.
Our deployment plan was to take one node out of the cluster (just by removing it from the network), deploy the code, run a script to upgrade the record schema on that node, and bring it back into service. However, once I upgrade the records on this node, they replicate to the other nodes, and certain operations begin failing on those nodes because of the mismatched record schema. Obviously a BIG PROBLEM for zero-downtime deployments.
Is there a way to isolate my schema changes so that the schema upgrade can be run on each node as it is upgraded? Preferably for only the table being upgraded, allowing the other tables to keep replicating. However, I could live with shutting off replication between all nodes for the few minutes it takes for us to deploy to all nodes.
I had this exact problem. The only way I was able to solve it was to take all nodes out of the cluster and leave only one live, upgrade that "master" node's schema and code, which can hopefully be done while live, then for each remaining node, delete its database files, upgrade the code, and bring the node up (creating the tables with the correct new schema) and back into the cluster.
I used an escript I wrote that adds and removes nodes from a cluster to make this easier, and an Ansible playbook to orchestrate it. I really don't want to do that again any time soon.
The essential problem is that Mnesia doesn't have schema versioning, otherwise this could be done in a much better way.

Use one windows service to execute jobs and two web applications to schedule jobs

I have one SQL Server database as the job store, two web applications that both can schedule jobs, and a Quartz.NET windows service to execute jobs.
I want the two web applications to just schedule jobs, while the windows service just to execute jobs.
Here comes the problem:
If I create IScheduler instances in the two web applications and in the windows service, they will all execute jobs at the same time, and there can be conflicts.
If I do not create IScheduler instances in the two web applications, how can I schedule jobs to the windows service from web applications?
Is there a way to let the IScheduler to just schedule jobs without executing jobs? (I can deploy the IJob assemblies to all these three applications)
You probably don't want to instantiate an IScheduler instance in the websites, precisely because creating a local instance also executes jobs.
I've implemented something similar to what you're looking to do.
First, make sure that your service configuration file (app.config) configures the four keys quartz.scheduler.exporter.type, quartz.scheduler.exporter.port, quartz.scheduler.exporter.bindName, and quartz.scheduler.exporter.channelType.
Second, make sure that your web.config has the following four keys configured: quartz.scheduler.instanceName, quartz.scheduler.instanceId, quartz.scheduler.proxy, and quartz.scheduler.proxy.address.
Then when you create your StdSchedulerFactory() and use it to get a scheduler, you are not instantiating a new scheduler, but attaching to an existing scheduler. You can then do anything through the remote scheduler that you could do with a local one, but there is only a single instance that executes jobs.
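A rough sketch of what the two sides might look like; the key names are the standard Quartz.NET remoting keys mentioned above, but the port, bind name, instance names, and address are illustrative:

```ini
# Windows service (app.config) - exports the scheduler over remoting
quartz.scheduler.exporter.type = Quartz.Simpl.RemotingSchedulerExporter, Quartz
quartz.scheduler.exporter.port = 555
quartz.scheduler.exporter.bindName = QuartzScheduler
quartz.scheduler.exporter.channelType = tcp

# Web application (web.config) - attaches to the exported scheduler
quartz.scheduler.instanceName = ServerScheduler
quartz.scheduler.instanceId = RemoteClient
quartz.scheduler.proxy = true
quartz.scheduler.proxy.address = tcp://localhost:555/QuartzScheduler
```

Note that the instanceName on the web side must match the name of the scheduler exported by the service, or the proxy will not find it.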
In your config file, set the key "quartz.threadPool.type" to "Quartz.Simpl.ZeroSizeThreadPool, Quartz". A scheduler created this way can schedule and modify jobs, triggers, and calendars, but cannot run jobs.
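For example (the key and type name come from the answer above; exactly where it goes depends on how you load your Quartz properties):

```ini
# Web-side scheduler: can schedule jobs but has no worker threads to execute them
quartz.threadPool.type = Quartz.Simpl.ZeroSizeThreadPool, Quartz
```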

Grails start two in-memory databases for dev/test?

Is there a way to start up two in-memory databases with grails? Specifically, I'd like to integration test my ETL process and allow reporting to be runnable in both development and test environments.
You should be able to do this with the Datasources Plugin
