How to create leases to avoid duplicate cron-jobs when deploying an application across multiple instances? - docker

I have a Dockerized Django application which has a number of cron-jobs that need to be executed.
Right now I'm running it with the package Supercronic (which is recommended for running cron-jobs inside containers). This will be deployed on two servers for redundancy purposes, i.e. if one goes down the other one needs to take over and execute the cron-jobs.
However, the issue is that without any configuration this will result in duplicate cron-jobs being executed, one for each server. I've read that you can set up something called a "lease" for the cron-jobs to acquire, to avoid duplicates across servers, but I haven't found any instructions on how to set this up.
Can someone maybe point me in the right direction here?

If you are running Supercronic on two different instances, Supercronic doesn't know whether the job has already been triggered elsewhere; it's up to the application to handle the consistency.
You can do it in many ways: control the state with a file or DB entries, or any better mechanism that lets your Dockerized application check the status before it starts executing the actual process.
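One common way to implement such a lease is a small table with a uniqueness constraint: whichever server manages to insert the lease row for the current period runs the job, and the other skips it. A minimal sketch, assuming the shared database is PostgreSQL and psycopg2 is available (the table, job and connection names below are made up):

# lease.py - minimal sketch of a database-backed lease for a cron-job.
# Assumes PostgreSQL + psycopg2; table, job and DSN are hypothetical.
import datetime
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS cron_lease (
    job_name   TEXT NOT NULL,
    period_key TEXT NOT NULL,          -- e.g. '2024-05-26T03:00' for an hourly job
    owner      TEXT NOT NULL,
    PRIMARY KEY (job_name, period_key) -- only one lease per job per period
);
"""

def acquire_lease(conn, job_name, period_key, owner):
    """Return True if this instance won the lease for the given period."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO cron_lease (job_name, period_key, owner) "
            "VALUES (%s, %s, %s) ON CONFLICT DO NOTHING",
            (job_name, period_key, owner),
        )
        return cur.rowcount == 1   # 1 = we inserted the row, 0 = the other server did

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=app user=app host=db")   # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
    period = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:00")
    if acquire_lease(conn, "send_report", period, owner="server-1"):
        print("lease acquired, running job")
        # ... actual job logic here ...
    else:
        print("another instance already ran this period, skipping")

Because the lease row persists after the job finishes, the second server still skips the job even if its cron tick fires a little later than the first server's.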

Related

How to Always have just a single Instance of a Cloud Run container running

I have a NodeJS app hosted on Cloud Run.
I have set that just 1 and only 1 instance of the service should be running at any given point in time.
However, whenever I make a code change and deploy the new revision, it turns out that the previous revision keeps running for a while before it stops.
How can I make sure that, even though I am deploying new code changes, multiple instances never run? The existing running instance should stop as soon as I deploy new changes.
Multiple instances are causing duplicate items to be produced in my code and business logic.
Thank you.
Make sure that:
Minimum number of instances: 0
Maximum number of instances: 1
the 'Serve this revision immediately' checkbox is selected.
Based on that, 100% of the traffic will be migrated to the revision, overriding all existing traffic splits, if any.
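For reference, the same instance limits can also be applied from the command line with something like the following gcloud command (the service name and region here are placeholders):

gcloud run services update my-service \
    --region=us-central1 \
    --min-instances=0 \
    --max-instances=1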

How to have a logic (piece of code) that executes only once inside pods which are replicated in k8s?

I have a cronjob which is inside my Golang application code.
Now, this code is inside a container, which is inside a pod.
What happens:
Suppose I have a cronjob to send emails every Sunday.
The application starts to run and the cronjobs are created as soon as the application starts.
Now, if I have 3 such pods, the application starts once in each pod and each instance has its own cronjob, so the emails will be sent three times.
What I want:
The email should be sent only once, i.e. each cronjob should run only once, independent of how many replicas I create.
How can I achieve this?
Preferably: I would like to have the jobs inside the application because if I separate them out, I will have to call the API endpoint instead of the service directly.
TL;DR: Perhaps you need to rethink the value of co-locating the cronjobs with the function exposed via the API.
i.e. put the cronjobs in a separate pod with no replicas.
From the information available, that would seem to solve your problem most easily (a minimal sketch of such a standalone cronjob is given below).
The question then arises, what value was gained or problem solved (other than convenience) by co-locating the cronjobs in the first place?
If there was no other problem, or if that problem is more easily solved in other ways than by taking on the additional complexity required to fix the problem that co-location has created, then you have your answer.
Another test to apply would be which solution architecture would be easier for someone to understand (and in the future extend, modify or maintain):
separate cronjobs, running only once, and doing their work via an API
multiple cronjobs intentionally placed in a replica set, but with some complex coordination mechanism that contrives to ensure that only one of these jobs is actually effective and the others are rendered essentially inoperative
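For illustration, a standalone cronjob of that kind could be a native Kubernetes CronJob that simply calls the application's API endpoint on a schedule; a minimal sketch, with hypothetical names, schedule and URL:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-email
spec:
  schedule: "0 9 * * 0"          # every Sunday at 09:00
  concurrencyPolicy: Forbid      # never run two copies of the job at once
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: trigger
            image: curlimages/curl:8.5.0
            args: ["-sf", "-X", "POST",
                   "http://email-service.default.svc.cluster.local/api/send-weekly-emails"]

Because this CronJob is a single object rather than part of a ReplicaSet, only one Job is created per schedule tick, no matter how many replicas the main application runs.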

How to use a scheduler (cron) container to execute commands in other containers

I've spent a fair amount of time researching and I've not found a solution to my problem that I'm comfortable with. My app is working in a dockerized environment:
one container for the database;
one or more containers for the APP itself. Each container holds a specific version of the APP.
It's a multi-tenant application, so each client (or tenant) may be related to only one version at a time (migration should be handled per client, but that's not relevant).
The problem is I would like to have another container to handle scheduled jobs, like sending e-mails, processing some data, etc. The scheduler would then execute commands in the app's containers. Projects like Ofelia offer great promise, but I would have to know the container to execute the command in ahead of time. That's not possible, because I need to go to the database container to discover which version the client is on, to figure out which container the command should be executed in.
Is there a tool to help me here? Should I change the structure somehow? Any tips would be welcome.
Thanks.
So your question is that you want to get the app's version info from the database container before scheduling jobs, right?
I think this relates to the business logic, not the Dockerized environment. You have a few ways to solve the problem (a rough sketch follows this list):
Check the network; make sure the containers can connect to each other.
I think the database should support some form of RPC; you can use it to get the version data.
You can also use RPC-like tools, such as SSH.
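One concrete way to do this from the scheduler container, assuming it mounts the Docker socket and uses the Docker SDK for Python, and that app containers follow a predictable naming scheme (the database query, naming convention and command below are hypothetical):

# scheduler_job.py - sketch: look up the tenant's app version in the database,
# then exec the command inside the matching app container.
# Assumes /var/run/docker.sock is mounted into the scheduler container and the
# 'docker' and 'psycopg2' packages are installed; names and queries are made up.
import docker
import psycopg2

def app_container_for(tenant_id):
    conn = psycopg2.connect("dbname=app user=app host=db")   # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute("SELECT app_version FROM tenants WHERE id = %s", (tenant_id,))
        (version,) = cur.fetchone()
    return f"app-{version}"          # e.g. containers are named app-1.2, app-1.3, ...

def run_in_app(tenant_id, command):
    client = docker.from_env()                       # talks to the mounted Docker socket
    container = client.containers.get(app_container_for(tenant_id))
    exit_code, output = container.exec_run(command)  # equivalent to 'docker exec'
    print(exit_code, output.decode())

if __name__ == "__main__":
    run_in_app("tenant-42", ["python", "manage.py", "send_emails"])

The key point is that the target container is resolved at run time from the database, rather than being hardcoded in the scheduler's configuration.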

Debugging Amazon SQS consumers

I'm working with a PHP frontend which connects to a distributed back end, using Amazon SQS and a variety of message types and message consumers. I'm trying to come up with a way to safely debug those consumers, as we don't want message handlers with new, untested code consuming end-user messages, risking the messages being lost or incorrectly processed.
The actual message queue names are hardcoded as PHP constants in a class, so my first tactic was to create two different sets of queues, one for production and another for debugging, and to externalise the queue name constants into two different files. Depending on whether our debug condition is true or not, I wanted to include one or the other of those constant definitions and assign the constants in the included file to the class constants which currently have the names hardcoded.
This doesn't seem to work, though, because constants seem to act like class variables in PHP, whereas I am trying to assign the values like instance variables. The next tactic was to see if there was anything on Amazon's side that would allow us to debug our message consumers transparently without adding lots of hacks to our code, but I couldn't see anything there that facilitated this. I'd love to know if anyone else has experienced (and ideally, solved) this problem.
SQS doesn't provide a way to inspect the contents of messages in the queue, or for the sender to see if any consumers are failing to process messages.
A common approach to this problem would be to set up two sets of queues as you suggest and have the producer post the same message onto both queues. That way you can debug your code against a stream of production messages without affecting the actual production queue.
I'd recommend moving the decision of which queue to use out of your code and into config, and then deploying different config files to your development boxes vs your production boxes. The risk is always that a development box ends up talking to production systems, so having a single consistent approach to configuring those endpoints across all your code is much less risky than doing it on an ad-hoc basis each time you call out to a service.
I'd also recommend putting your production and development queues in different AWS accounts with different access credentials. That way you can give your production account permission to publish to the development account's queue, but you can guarantee that your development systems can't read from the production queue.
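A sketch of that config-driven fan-out on the producer side (shown in Python with boto3 rather than PHP just to illustrate the shape; the queue URLs and environment variable names are made up):

# producer.py - sketch: publish each message to the production queue and, when
# configured, also to a separate debug queue so test consumers see real traffic
# without being able to consume (and lose) the production copy.
import json
import os
import boto3

sqs = boto3.client("sqs")

PROD_QUEUE_URL = os.environ["PROD_QUEUE_URL"]
DEBUG_QUEUE_URL = os.environ.get("DEBUG_QUEUE_URL")   # unset in production-only deployments

def publish(message):
    body = json.dumps(message)
    sqs.send_message(QueueUrl=PROD_QUEUE_URL, MessageBody=body)
    if DEBUG_QUEUE_URL:
        sqs.send_message(QueueUrl=DEBUG_QUEUE_URL, MessageBody=body)

publish({"type": "user_signed_up", "user_id": 123})

Which queue URLs end up in the environment is then purely a deployment/config concern, which also fits naturally with keeping the debug queue in a separate AWS account as suggested below.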

Can I safely run multiple instances of the same Windows Service?

I have a windows service running on a server. It's a 'helper app' for a system we have and it does a very specific task (downloads html files based on the config) with a specific database configuration.
We're now developing a system that's very similar to the existing system (at least on the face of it, where this service has an impact). I need to have a different instance of the service running on the same server with a different database configuration, so it can do its task for the new system as well as the existing system.
Can somebody tell me if it's going to cause problems if I install a second instance of the same service on the same box?
I intend to install the service from a different directory from where the original is installed.
It turns out I couldn't. I needed to give each instance of the service a unique name. Not a big deal though.
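For example, registering the second copy under its own service name with sc.exe looks roughly like this (the service names and paths are hypothetical; note that sc.exe requires the space after binPath= and DisplayName=):

sc.exe create MyHelperServiceB binPath= "C:\Services\HelperB\Helper.exe" DisplayName= "My Helper (System B)"
sc.exe start MyHelperServiceB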
This won't be a problem at all as long as the program itself does not do things in a way that would cause the various instances to conflict - like trying to write to the same files at the same time, or the like. As long as each is configured/coded to keep to itself, it will be fine.
