I have a Windows service running on a server. It's a 'helper app' for a system we have: it does one very specific task (downloading HTML files based on its config) against a specific database configuration.
We're now developing a system that's very similar to the existing one (at least on the face of it, where this service has an impact). I need a second instance of the service to run on the same server with a different database configuration, so it can do its task for the new system as well as the existing one.
Can somebody tell me if it's going to cause problems if I install a second instance of the same service on the same box?
I intend to install the second service in a different directory than the one where the original is installed.
It turns out I couldn't just do that: I needed to give each instance of the service a unique name. Not a big deal, though.
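For reference, here is roughly what registering the two instances under unique names looks like with sc.exe, wrapped in Python purely for illustration; the service names and paths below are made up:

    # Hypothetical example: the same binary installed twice under unique
    # service names, each from its own directory with its own config.
    import subprocess

    instances = [
        ("HtmlDownloader.Legacy", r"C:\Services\Legacy\Downloader.exe"),
        ("HtmlDownloader.NewSys", r"C:\Services\NewSys\Downloader.exe"),
    ]

    for name, exe in instances:
        # sc.exe quirk: "binPath=" and its value must be separate tokens.
        subprocess.run(
            ["sc", "create", name, "binPath=", exe, "start=", "auto"],
            check=True,
        )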
This won't be a problem at all, as long as the program itself doesn't do anything that would cause the instances to conflict, like trying to write to the same files at the same time. As long as each is configured/coded to keep to itself, it will be fine.
I have a Dockerized Django application which has a number of cron jobs that need to be executed.
Right now I'm running it with the package Supercronic (which is recommended for running cron jobs inside containers). This will be deployed on two servers for redundancy purposes, i.e. if one goes down, the other one needs to take over and execute the cron jobs.
However, the issue is that without any configuration this will result in duplicate cron jobs being executed, one per server. I've read that you can set up something called a "lease" for the cron jobs to acquire, to avoid duplicates across servers, but I haven't found any instructions on how to set this up.
Can someone maybe point me in the right direction here?
If you are running Supercronic on two different instances, Supercronic doesn't know whether the job has already been triggered elsewhere; it's up to the application to handle that consistency.
You can do it in many ways: by tracking state with file or database entries, or any other mechanism that lets your Docker application check the job's status before it starts executing the actual process.
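To make that concrete, here is a minimal sketch of a database-backed lease in Django; CronLease and try_acquire_lease are hypothetical names, and the approach assumes both servers share the same database:

    from datetime import timedelta

    from django.db import IntegrityError, models
    from django.utils import timezone

    class CronLease(models.Model):
        """One row per (job, period); the unique constraint is the lock."""
        job_name = models.CharField(max_length=100)
        period_start = models.DateTimeField()

        class Meta:
            unique_together = ("job_name", "period_start")

    def try_acquire_lease(job_name, period_minutes=5):
        # Round "now" down to the start of the current period so both
        # servers compute the same key for the same scheduled run.
        now = timezone.now()
        period_start = now - timedelta(
            minutes=now.minute % period_minutes,
            seconds=now.second,
            microseconds=now.microsecond,
        )
        try:
            CronLease.objects.create(job_name=job_name,
                                     period_start=period_start)
            return True   # this server won; run the job
        except IntegrityError:
            return False  # the other server got there first; skip

    # In the management command Supercronic runs on both servers:
    # if try_acquire_lease("send_reminder_emails"):
    #     do_the_work()

Whichever server inserts the row first owns that run; the loser hits the unique constraint and skips the job.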
I've spent a fair amount of time researching and I haven't found a solution to my problem that I'm comfortable with. My app runs in a dockerized environment:
one container for the database;
one or more containers for the APP itself. Each container holds a specific version of the APP.
It's a multi-tenant application, so each client (or tenant) may be tied to only one version at a time (migration should be handled per client, but that's not relevant here).
The problem is that I would like another container to handle scheduled jobs, like sending e-mails, processing some data, etc. The scheduler would then execute commands in the app's containers. Projects like Ofelia look very promising, but I would have to know the target container ahead of time. That's not possible, because I first need to query the database to discover which version the client is on, and only then do I know which container the command should be executed in.
Is there a tool to help me here? Should I change the structure somehow? Any tips would be welcome.
Thanks.
So your question is that you want to get the APP's version info from the database before scheduling jobs, right?
I think this relates to the business logic, not to the dockerized environment. You have a few ways to solve the problem:
Check the network; make sure the containers can connect to each other.
The database should expose the version data through some RPC-style function, which you can use to get the version data.
You can use tools that support RPC-style calls, like SSH.
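As one hedged sketch of that idea (using the Docker SDK for Python rather than SSH), assuming a hypothetical naming scheme where each APP version runs in a container named app-<version>:

    # pip install docker -- talks to the local Docker daemon.
    import docker

    def get_tenant_version(tenant_id):
        # Placeholder: query your database for the version this tenant
        # is currently on (the lookup depends on your schema).
        raise NotImplementedError

    def run_job_for_tenant(tenant_id, command):
        version = get_tenant_version(tenant_id)
        client = docker.from_env()
        # Assumes containers are named "app-<version>", e.g. "app-2.3".
        container = client.containers.get(f"app-{version}")
        # Execute the scheduled command inside the matching APP container.
        exit_code, output = container.exec_run(command)
        return exit_code, output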
Due to a limitation of a 3rd-party library, I need to use a file with a static name. What happens in Rails if multiple users are trying to write to that file at the same time? An EACCES error?
Is there a way I could circumvent this?
At the Ruby level, what will happen if multiple processes try to write to the file depends on how the library uses the file: whether and how it locks the file before opening it and what mode it opens the file in. It might just work, it might raise an error, or (most likely, if the library does nothing to handle this situation) multiple writers might silently interleave writes with one another in a way that could corrupt the file, or the last writer might win.
At the Rails level, it depends on how you run Rails. If you run a single, normally configured Rails instance on a given server, you won't have any problems, since Rails itself is single-threaded by default. If you run multiple Rails instances (presumably controlled by an application server like Passenger or Unicorn), you might have problems.
Assuming the library doesn't handle multiple writers for you, you can work around it in a couple of ways:
Run only one instance of your Rails app on each server (or docker container or chrooted environment).
Fork the library and change it to include the process ID in the file name. That's what I'd do.
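If you do fork the library anyway, another option worth knowing about is advisory file locking, so concurrent writers queue up instead of interleaving. A minimal sketch, shown in Python for illustration (Ruby's File#flock works the same way); note these locks are advisory and the fcntl module is Unix-only, so every writer must cooperate:

    import fcntl

    def write_exclusively(path, data):
        with open(path, "a") as f:
            # Blocks until no other cooperating writer holds the lock.
            fcntl.flock(f, fcntl.LOCK_EX)
            try:
                f.write(data)
                f.flush()
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)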
I'm studying the best way to run multiple Redmine instances on the same server (basically, I need a database for each Redmine group).
So far I have two options:
Deploy a Redmine instance for each group
Deploy one Redmine instance with multiple databases
I really don't know what the best practice is in this situation; I've seen people doing this both ways.
I've tested deploying multiple Redmines (3 instances) with nginx and Passenger. It worked well, but I think it may not be feasible with a lot of instances: each app needs around 100 MB of RAM, and as requests increase, more processes get allocated to each app. This scenario looks bad if we had a lot of instances.
Option 2 seems reasonable; I think I can implement it with Rails environments. But I suspect there are security problems related to sessions (I think a user of site A would be allowed to perform actions on site B after authenticating on A).
Is there any good practice for this situation? What's the best approach to take?
Another related requirement: we must be able to create or shut down a Redmine instance without interrupting the others (e.g. we should avoid server restarts).
Thanks for any advice, and sorry for my English!
Edit:
My solution:
I used a Redmine instance for each group, with nginx + Unicorn to manage each instance independently (Passenger didn't allow me to manage the instances separately).
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.
In any case, you still need to run separate worker processes for each instance, as Redmine (and most Rails apps in general) doesn't support switching databases per request, and some environment-specific data is cached in-process.
Given that, there is not much incentive to share even the codebase, as it would require certain monkey patches and symlink magic to allow proper initialization with the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that, but it's (in my eyes) rather brittle and leads to a decidedly non-standard system.
But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
Running multiple instances from the same codebase is not officially supported by Redmine. However, the Debian/Ubuntu packages seem to support such an approach... See:
Multiple instances of redmine on Debian squeeze
So, generally:
If you use Debian/Ubuntu, go with option #2
Otherwise, go with option #1
Rolling forward a couple of years, you might now want to consider a third option: using Docker containers for each of your Redmine instances.
I've been using https://github.com/sameersbn/docker-redmine.git and have been quite happy with it, except that it doesn't yet support handling of incoming mail for creating and commenting on tickets.
I have written several services in Delphi now, but I want to add the ability to auto-update the service, either from a LAN UNC path or from an HTTP server. I have been pondering this and I am interested to hear people's ideas. I can create a thread that will check for the update periodically, but how do I go about stopping the service, then uninstalling and installing the new version automatically? My initial thought was to write a console app to do this and start it using CreateProcess, then let the service stop while the console app does the work, starting the new version of the service before it exits. Is this a good strategy, or should I consider something else? Thanks in advance.
I do as you suggest. A thread checks occasionally for an update. If it is present, it downloads it and puts it into an appropriate place. It then verifies that it is wholesome (don't want it to be broken!). Finally, the thread then launches another app with parameters to tell it what to do, specifically, the name of the service, the location of the file to replace, and the file to replace it with. Then the service just waits.
When the updater app starts, it pauses a moment to make sure that the service is all stable, and then it uses the service control API to stop the service. It then monitors it until it is gone. Finally, it pauses a little to ensure that Windows has really finished with the file. Then it starts the process of renaming the old file to move it out of the way (if still in use, it retries a few times), and then copying the new file into place. And finally, it starts the service up again. Then the updater quits.
This has worked quite reliably for my services, and also standalone apps too (with different parameters for the updater app to know which mode). And if you are careful, you can update the updater using the exact same system, which is nice to watch.
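To make the sequence concrete, here is a minimal sketch of that updater flow, in Python rather than Delphi purely for brevity (the answer above does this via the service control API); MyHelperService and both paths are hypothetical placeholders:

    import os
    import shutil
    import subprocess
    import time

    SERVICE = "MyHelperService"

    def update(installed_exe, downloaded_exe):
        subprocess.run(["sc", "stop", SERVICE], check=True)
        # Poll until the service reports STOPPED, then give Windows a
        # moment to release its handle on the executable.
        while b"STOPPED" not in subprocess.run(
                ["sc", "query", SERVICE], capture_output=True).stdout:
            time.sleep(1)
        time.sleep(2)
        # Rename the old binary out of the way, retrying if it's locked.
        for _ in range(5):
            try:
                os.replace(installed_exe, installed_exe + ".old")
                break
            except PermissionError:
                time.sleep(2)
        shutil.copy2(downloaded_exe, installed_exe)
        subprocess.run(["sc", "start", SERVICE], check=True)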
I would have the service be a shell that only updates another executable or DLL where the real code lives.
Have some communication method between the shell and the child process to force a shutdown, and then have the shell perform the upgrade and relaunch the child.
As a side note, this makes debugging the service much easier as well, since you'll be able to run the child process directly without the extra effort required to debug Windows services.
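For the shell-to-child communication channel mentioned above, one simple option is to own the child's stdin and send it a shutdown line when an update is ready. A minimal sketch, in Python for illustration; worker.exe is a hypothetical child whose main loop watches its stdin:

    import subprocess

    # Launch the child process that contains the real service logic.
    child = subprocess.Popen(["worker.exe"], stdin=subprocess.PIPE)

    def request_shutdown():
        # The child's main loop exits cleanly when it reads this line.
        child.stdin.write(b"shutdown\n")
        child.stdin.flush()
        child.wait(timeout=30)
        # ...now replace worker.exe and launch it again.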
Your idea seems very good to me; however, take this into consideration as well:
- Add a module (the main core) to the service that can be unloaded, and that will load the updated module (a *.dll file) when an update is available. During this time the service should put its "tasks" in a queue or something similar.
Additionally, you can use plugins and/or scripts, like Pascal Script or DWScript.
Note that newer versions of Windows (since Windows Vista, actually, with session 0 isolation) do not allow a service to start programs that interact with the user or the desktop. So you will need another program to run the update; it could be another service.
Windows Services cannot start additional applications because they are not running in the context of any particular user. Unlike regular Windows applications, services are now run in an isolated session and are prohibited from interacting with a user or the desktop.