I know we can use app.requestSingleInstanceLock to limit Electron to a single instance. But I have a use case where I need to run multiple Electron instances based on the command-line arguments, allowing a new instance for each unique argument. For example,
myelectron.exe http://host1:port1
myelectron.exe http://host2:port2
myelectron.exe http://host1:port1
The above three commands would start two instances, one for each unique command-line URL.
Questions:
How to save app-level user data in Electron across all instances?
I'd like to keep a list/map of the unique URLs of all running instances, and remove an entry when its instance is closed.
Is there a better way to achieve this use case?
Related
I want only one instance of my container ever running at a given time. For instance, say my ECS container is writing data to an EFS volume. The application I am running is built in such a way that multiple instances cannot write to this data at the same time. So is there a way to make sure that ECS never starts more than one instance? I was worried that while one container was being torn down or stood up, two containers might end up running simultaneously.
I was also thinking about falling back to EC2 so that I could meet this requirement, but I did want to try this out in ECS first.
I have tried setting the desired instances to 1, but I am worried that this will not work.
Just set the minimum, desired, and maximum number of tasks to 1.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-auto-scaling.html#:~:text=To%20configure%20basic%20Service%20Auto%20Scaling%20parameters
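On the overlap worry specifically: even with a desired count of 1, ECS can briefly run two copies of the task while a new task definition rolls out, unless the deployment configuration forbids going above 100% of the desired count. A hedged CloudFormation-style sketch (the cluster and task definition names are placeholders; the property names come from the AWS::ECS::Service schema):

```yaml
MySingletonService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: my-cluster            # placeholder
    TaskDefinition: my-task-def    # placeholder
    DesiredCount: 1
    DeploymentConfiguration:
      MinimumHealthyPercent: 0     # allow the old task to stop first...
      MaximumPercent: 100          # ...and never run old + new side by side
```

With MaximumPercent at 100, a deployment becomes stop-then-start rather than start-then-stop, at the cost of a short gap in availability, which is usually acceptable for a singleton writer.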
I have a Dockerized Django application which has a number of cron jobs that need to be executed.
Right now I'm running it with the Supercronic package (which is recommended for running cron jobs inside containers). This will be deployed on two servers for redundancy purposes, i.e. if one goes down, the other one needs to take over and execute the cron jobs.
However, the issue is that without any configuration this will result in duplicate cron jobs being executed, one per server. I've read that you can set up something called a "lease" for the cron jobs to acquire, to avoid duplicates across servers, but I haven't found any instructions on how to set this up.
Can someone maybe point me in the right direction here?
If you are running Supercronic on two different instances, Supercronic doesn't know whether the job has already been triggered elsewhere; it's up to the application to handle the consistency.
You can do it in many ways, either by controlling the state with a file or DB entries, or any better way in which your Docker application can check the status before it starts executing the actual process.
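A minimal sketch of the file-based variant, assuming the lease file sits on storage both servers can see (an NFS/EFS mount, say) — a DB row with a unique constraint works the same way. The path, TTL, and function names here are illustrative:

```python
import os
import time

def acquire_lease(path, ttl=300):
    """Try to take the lease; returns True if this process now owns it."""
    now = time.time()
    try:
        # O_EXCL makes creation atomic: only one server can succeed.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(now).encode())
        os.close(fd)
        return True
    except FileExistsError:
        # Lease already held -- steal it only if the holder looks dead.
        try:
            with open(path) as f:
                taken_at = float(f.read() or 0)
        except (OSError, ValueError):
            return False
        if now - taken_at > ttl:
            os.remove(path)
            return acquire_lease(path, ttl)
        return False

def release_lease(path):
    try:
        os.remove(path)
    except FileNotFoundError:
        pass
```

Each cron entry would then run something like "acquire, execute the job, release", and the loser simply exits. Note the stale-lease takeover has a small race window (two servers could both decide the lease is expired); if you need stronger guarantees, a database unique constraint or advisory lock is the more robust variant the answer alludes to.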
I have a Docker container that reads a variable which I provide to it at execution time. I thought that I would like to run many of those and pass a different value as the variable for each one. So I created a simple text file which contains all the values I want to pass in (there are about 20k different ones), and I am using GNU Parallel to spawn multiple Docker containers in parallel.
My question is how I could do something like that in a Kubernetes environment?
Sounds like what you want to do can be achieved using Kubernetes Jobs.
I would advise against using GNU Parallel on Kubernetes unless you can fit all the jobs on one node. If this is the case, I think it's OK; just set the CPU request in the Job template.
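As a sketch of the Jobs approach: an Indexed Job (batch/v1) gives each pod a JOB_COMPLETION_INDEX environment variable, which can be used to pick one line from the values file — the fan-out GNU Parallel was doing. How the file gets into the pod (baked into the image or mounted from a volume) is up to you; the image name, file path, and app command below are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: fanout
spec:
  completions: 20000         # one completion per input value
  parallelism: 50            # how many pods run at once
  completionMode: Indexed    # injects JOB_COMPLETION_INDEX into each pod
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: myrepo/myimage:latest     # placeholder
          command: ["sh", "-c"]
          # Each pod reads line index+1 of the values file and passes
          # it to the application as its variable.
          args:
            - 'VALUE=$(sed -n "$((JOB_COMPLETION_INDEX + 1))p" /data/values.txt); exec myapp "$VALUE"'
          resources:
            requests:
              cpu: "500m"
```

Unlike the GNU Parallel setup, the scheduler spreads these pods across all nodes in the cluster, and parallelism caps how many run concurrently.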
We have a Ruby on Rails application running on EC2 with the autoscaling feature enabled. We have been using whenever to manage cron. New instances, created automatically from an image of the main instance during spikes, are dropped when traffic is low. But this also copies the cron jobs to the newly created instances.
We have a specific requirement where we want to limit cron to a single instance.
I found a gem which looks like it handles this specific requirement, but I am skeptical about it because it is for Elastic Beanstalk and is no longer maintained.
As a workaround, you can have a condition within the cron job specifying that it should execute only on a single instance elected from your Auto Scaling group, e.g. have only the oldest instance run the cron, or only the instance with the "lowest" instance ID, or whatever you like as a condition.
You can achieve such a thing by having your instances call the AWS API.
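The election condition can be a tiny deterministic guard that every instance evaluates the same way. How you fetch the group's instance IDs — e.g. `aws autoscaling describe-auto-scaling-groups` or the aws-sdk gem — is up to you; this sketch only shows the "lowest instance ID wins" choice itself:

```ruby
# True only on the instance whose ID sorts first in the group, so at
# most one instance in the Auto Scaling group passes the guard.
def leader?(my_id, group_ids)
  return false if group_ids.empty?
  my_id == group_ids.min
end
```

In whenever's schedule.rb you would then wrap each job's command so it runs only when this check passes on the current instance. Keep in mind the elected instance can change whenever the group scales, so jobs should tolerate moving between hosts.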
As a more proper solution, you could maybe use a single cronified Lambda accessing your instances? This is now possible, as per this page.
Best is to set scale-in protection. It prevents your instance from being terminated during scaling events.
You can find more information here on AWS https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
I am writing a script (a Rails runner, if it matters) that will run periodically. It uses a gem to query a SQL database. Because that database does not update existing rows, but merely adds new ones to reflect changes to data, the script will run a query that only finds objects with an id greater than the highest id that was in the database the last time the script was run.
In what file should that id be stored? Is it bad practice to store it in the script and have the script write over itself, and if so, why?
Store the ID in a separate file. Not only would the script writing over itself be more difficult to do correctly, but that practice would also be likely to confuse users, and could result in a whole host of other problems, such as additional friction when trying to version control the script or update it to a new version.
Under most circumstances, data and code should be separate.
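A minimal sketch of the separate-file approach — the helper names and the idea of passing the state-file path in explicitly are illustrative, not from the question:

```ruby
# Keep the last-processed id in its own small state file, separate from
# the script. Missing file means "never run before", i.e. start from 0.
def last_id(path)
  File.exist?(path) ? File.read(path).to_i : 0
end

def save_last_id(path, id)
  File.write(path, id.to_s)
end
```

A typical run would query rows with id greater than last_id, process them, and then save the new maximum — e.g. (assuming an ActiveRecord model) something like `Record.where("id > ?", last_id(path)).maximum(:id)` before calling save_last_id. The script itself never changes, so it stays friendly to version control.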