How do I prevent another process from terminating my service? - delphi

I have a Windows service that performs various "jobs" for my application (send emails, create backups, check for my application updates, provide some services...)
Recently some customers reported problems when using certain Internet banking sites alongside my application.
In searching for solutions, I found reports about a plugin (ActiveX) installed by the Internet banking Web site.
This ActiveX control installs a bizarre service (GbPlugin, from GAS Tecnologia) that kills applications it deems suspicious based on some crude heuristic, and my service is one of its victims!
Now I'm trying to "immunize" my service.
Are there some ways to restrict the termination of my service to protect it?
I cannot just rely on the "auto restart" option in the service properties, because my service must not be killed in the first place!
Both services are running as LOCALSYSTEM.

Most likely that service runs as LOCALSYSTEM and so can kill anything it likes. So it's extremely unlikely that you can defend against it.
Indeed, a quick web search throws up some hits that indicate the service does indeed run as LOCALSYSTEM.
Your only tenable solution is going to involve the other software. Either compel your users to remove it, or work with its developers to find a way to white-list your program.

Assuming GbPlugin is going through normal SCM procedures to stop services and not just brute-force terminating them, then you have a couple of choices to prevent your service from stopping:
set your service's AllowStop property to False.
in the OnStop event, set the Stopped parameter to False.
Either approach will also prevent you from stopping your own service under normal conditions. To work around that, you could write a separate app that uses the Win32 API ControlService() function to send a custom command to your service. Inside your service, override the virtual DoCustomControl() method to look for that command. Have it either reset the AllowStop property back to True, or set a flag somewhere that the OnStop event can look at, then call Controller(SERVICE_CONTROL_STOP) to initiate a normal stop.
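For illustration only, here is roughly what the same idea looks like in .NET terms, since the underlying SCM mechanism (user-defined control codes 128-255) is identical; the Delphi side uses ControlService()/DoCustomControl as described above, and the command value 200, the flag, and the service name below are arbitrary:
// In the separate helper app: send a user-defined control code to the service.
using (var sc = new System.ServiceProcess.ServiceController("MyJobService"))
    sc.ExecuteCommand(200);   // custom control codes live in the 128-255 range

// In the service class: react to the custom command and allow a normal stop.
protected override void OnCustomCommand(int command)
{
    if (command == 200)
    {
        allowStop = true;   // hypothetical flag that OnStop checks before refusing to stop
        Stop();
    }
}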
Needless to say, this is a bit overkill. If possible, a better option is to simply contact GAS Tecnologia and ask why your service is being flagged by GbPlugin's heuristics and then change that condition in your service, or else ask them to fix GbPlugin to ignore your service.

Related

Automating Azure VIP Swap

I have an ASP.NET MVC 4 app hosted as an Azure web role. I want to do something that seems like it should be pretty standard: I want to create a function I can call that initiates a VIP swap and raises an event (or calls a callback) when the VIP swap operation is done.
Just to add some context to the situation: My website implements a workflow that takes about an hour (or less) to complete. If I want to release a new version of the website code, it's convenient (i.e. much less "backward compatibility" code to write) to first let all of the current users complete the workflow so that the new code doesn't need to deal with data created by the previous version of the code. So a management function in my website would first poke a value into the database that disables new workflows; it would then wait until all current workflows are done; it would then call the "VIP Swap" routine; finally, when the VIP Swap routine signals its completion, it would poke the database value to re-enable new workflows.
I found the Microsoft documentation for how to programmatically initiate a VIP swap here:
http://msdn.microsoft.com/en-us/library/ee460814.aspx
The procedure involves POSTing to a magic URL and including some headers in the POST, then periodically performing a GET to a magic URL and checking the response code.
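As far as I can tell, the pattern looks roughly like the sketch below (the subscription ID, service name, deployment names, and API version are placeholders; the exact URL format and XML body are in the linked documentation, and the certificate must be a management certificate uploaded to the subscription):
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Threading;

static class VipSwap
{
    public static void Swap(string subscriptionId, string serviceName, X509Certificate2 managementCert)
    {
        // POST the swap request to the Service Management endpoint.
        var url = string.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}",
                                subscriptionId, serviceName);
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.Headers.Add("x-ms-version", "2013-03-01");   // API version; check the docs
        request.ContentType = "application/xml";
        request.ClientCertificates.Add(managementCert);

        // Body names the production slot and the staging deployment to swap in (placeholders).
        var body = "<Swap xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                   "<Production>prod-deployment-name</Production>" +
                   "<SourceDeployment>staging-deployment-name</SourceDeployment></Swap>";
        using (var writer = new StreamWriter(request.GetRequestStream()))
            writer.Write(body);

        string requestId;
        using (var response = (HttpWebResponse)request.GetResponse())
            requestId = response.Headers["x-ms-request-id"];   // token used to poll the async operation

        // Poll the operation status until it reports Succeeded or Failed.
        var statusUrl = string.Format("https://management.core.windows.net/{0}/operations/{1}",
                                      subscriptionId, requestId);
        while (true)
        {
            var poll = (HttpWebRequest)WebRequest.Create(statusUrl);
            poll.Headers.Add("x-ms-version", "2013-03-01");
            poll.ClientCertificates.Add(managementCert);
            using (var response = (HttpWebResponse)poll.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                var xml = reader.ReadToEnd();
                if (xml.Contains("<Status>Succeeded</Status>")) return;
                if (xml.Contains("<Status>Failed</Status>")) throw new Exception("VIP swap failed");
            }
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}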
The more I think about this, the more non-trivial it seems. In addition to the basic complexities of wiring up a background timer and completion notification, I don't know what complexities, if any, I might run into trying to do this stuff in the IIS environment. Can I even perform HTTP operations on a background thread? For that matter, will I run into complications just trying to use any of the half dozen or so different "do things in the background" mechanisms baked into .NET?
Any help or guidance will be greatly appreciated. In particular, I'd be ecstatic if someone could point me at a ready-to-go implementation of this function!
I don't think you will find an easy solution to this, as the fabric controller is set up to do some very fancy things without your involvement. Running hour-long workflows in a cloud computing environment, where an instance can be pulled out from underneath you (with a maximum of 5 minutes from the OnStopping event being called in which to clean up), requires that you do other work anyway to make sure that all of your tasks complete.
The simple question is: what do you do if an instance goes down while workflows are still running? Do you restart them, or are they lost? If they get lost then you don't care anyway, so killing workflows for an upgrade is equally unimportant. If you restart them, then use that same mechanism to decide whether or not a node is due to be shut down, and distribute the jobs accordingly. This pattern is eerily similar to the Hadoop JobTracker. Don't just run the workflows on any ol' instance. Submit them to a (job tracker) service that decides what to do. The (job tracker) service can then use the Service Management API to scale up as many instances as you need running the version that you want, run workflows on the appropriate nodes, and shut them down when they are no longer needed or are outdated.
Unfortunately this may not be the simple solution that you are looking for, but something in your architecture needs to change, rather than trying to force PaaS to fit your current approach. Decompose your workloads, create loosely coupled services, design for failure, and consider the other cloud/distributed-computing practices that go with them. There is a reason why Hadoop is built the way it is, and it has a reputation for getting work done on a bunch of somewhat unreliable commodity hardware.

capture events from windows service

I built a Windows Service, let's call it Jobs. And I have a WinForm app, let's call it Viewer.
I want Viewer to receive events from Jobs when Jobs starts executing something so Viewer can display to the user that Job A started, did something, stopped, etc.
Is there a way to have Jobs throw events that Viewer can register to receive? The best solution I can think of is using MSMQ, however I'd like a more direct approach where I startup Viewer and it registers with the Jobs windows service and asks to receive notifications/events from it.
I can't seem to figure out how to get a Windows service to push something without using MSMQ, or to have other programs register with it to receive pushes. I definitely do NOT want to use some weird file and/or database system where Viewer sleeps for 5 seconds and then checks for changes. I want something streamlined where Viewer waits for events, but does not have to use MSMQ.
You can use one of the WCF duplex bindings for this, such as netTcpBinding or wsDualHttpBinding. There is a full example here which would seem to do exactly what you want.
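A minimal sketch of what the duplex contract might look like (the interface and method names here are hypothetical; the linked example is a complete implementation):
using System.ServiceModel;

// Callback contract the Viewer implements so the Jobs service can push events to it.
public interface IJobEvents
{
    [OperationContract(IsOneWay = true)]
    void JobStarted(string jobName);

    [OperationContract(IsOneWay = true)]
    void JobCompleted(string jobName);
}

// Contract the Jobs service exposes (over netTcpBinding, for example).
[ServiceContract(CallbackContract = typeof(IJobEvents))]
public interface IJobService
{
    [OperationContract]
    void Subscribe();   // Viewer calls this; the service stores the callback channel
}

// Inside the service implementation, when a job starts:
// var callback = OperationContext.Current.GetCallbackChannel<IJobEvents>();
// callback.JobStarted("Job A");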
The drawback is that duplex is complex to understand and may not be that reliable. I would always implement a solution using one-way communication (either via queueing or some other mechanism) if it were an option to do so. One-way is always simpler and more reliable than duplex.

Best way to run rails with long delays

I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
I basically see two options. Either run JRuby and use multithreading, or else run several regular Ruby instances and hope that not many people try to use the service at a time. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or to have out-of-the-box support at Heroku and EngineYard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeout, or you cannot control the deployment environment, you may want to process it in the background:
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
As far as hosting in JRuby, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you run config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
You can also run the web service call in a delayed background job so that it's not hogging a web server, and it can even be run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX then you can ping the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.

How to stop a Windows Service programmatically?

I'm writing a simple Windows service that sends out emails to all employees every month. My question is: how do I stop it when it's done? I'm a newbie in this field, so please help me out. Really appreciated.
It will be deployed on the server to be run monthly. I did not start this project; the code was given to me like that. It is written in VB.NET and I'm now asked to change a few things around it. I noticed that there is only a 'Sub OnStart' and wondered when the service would stop. After the main sub is done, what is the status of this service? Is it stopped or just hanging there? Sorry, as I said, I am really new to this....
If you have a task that recurs monthly, you may be better off writing a console app and then using Windows Task Scheduler to set it to run monthly. A service should be used for processes that need to run for a long time or constantly, with or without a user logged on.
As every other answer has noted, it sounds like this should be an executable or script that you run as a scheduled task.
However, if you are obligated for some reason to run as a Windows Service and you're working in .NET, you just have to call the Stop() method inherited from ServiceBase once your service completes its work. From the MSDN documentation for the method:
The Stop method sets the service state to indicate a stop is pending and calls the OnStop method. After the application is stopped, the service state is set to stopped. If the application is a hosted service, the application domain is unloaded.
There's one important caveat here: the user account under which the service is running must have permission to stop services (which is a topic for ServerFault).
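As a rough illustration (the service class and the SendMonthlyEmails method below are hypothetical), the pattern looks something like this:
using System.ServiceProcess;
using System.Threading;

public class MailerService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Do the work on a background thread so OnStart can return promptly,
        // then stop the service once the work is finished.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            SendMonthlyEmails();   // hypothetical work method
            Stop();                // sets the stop-pending state and calls OnStop
        });
    }

    private void SendMonthlyEmails() { /* ... */ }
}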
Once a service's OnStart method completes, it will continue running (doing nothing) until something tells it to stop in one of the following ways:
Programmatically, by calling Stop within the service itself, or from an external process using the method Colin Gravill describes in his answer.
Via the command line.
Through the Windows Computer Management console's "Services" panel.
If this is a Win32 service (i.e. written in C or C++), then you simply call SetServiceStatus(SERVICE_STOPPED) and return from ServiceMain.
On the other hand, if you're just sending emails once a month, why are you using a service at all? Use the Windows Task Scheduler and run a normal application or script.
net stop [service_name] ...on the command line will do it too.
But, I agree with everyone else; it seems that Windows Task Scheduler will meet your needs better.
It might be better to write this as a scheduled task, it would certainly be easier to develop initially. Then it would naturally terminate and wouldn't be consuming resources for the rest of the month.
To answer the original question, you can get a list of the currently running services in C#:
services = System.ServiceProcess.ServiceController.GetServices();
Then find the one you want and stop it:
locatedService.Stop();
locatedService.WaitForStatus(ServiceControllerStatus.Stopped);
Full example on MSDN.
Is there a reason it has to be a Windows service? If not, then follow #Macros' solution. However, if it does, then why stop the service? If you stop it, then it'll just have to be restarted when the emails need to be sent. Based on your description, it doesn't sound like it would require a lot of resources, so I'd suggest just installing it and letting it run, firing up once a month to send the emails.
Here's what I did in a similar situation.
The Windows service runs 24/7 and processes work units. It gets work units through a database view.
table Message
ProcessingStartTime
CompletionDTE
...
The database view only pulls records that are marked not-complete and have a ProcessingStartTime in the past. After the service confirms the transaction, it executes a stored procedure that updates the database record. For this system, end users upload Excel files to an ASP.NET WebForm that imports them into the database.

windows service vs scheduled task

What are the cons and pros of windows services vs scheduled tasks for running a program repeatedly (e.g. every two minutes)?
Update:
Nearly four years after my original answer, and this answer is very out of date. Since TopShelf came along, Windows Service development has become easy. Now you just need to figure out how to support failover...
Original Answer:
I'm really not a fan of Windows Scheduler. The user's password must be provided, as #moodforall points out above, which is fun when someone changes that user's password.
The other major annoyance with Windows Scheduler is that it runs interactively and not as a background process. When 15 MS-DOS windows pop up every 20 minutes during an RDP session, you'll kick yourself for not installing them as Windows Services instead.
Whatever you choose I certainly recommend you separate out your processing code into a different component from the console app or Windows Service. Then you have the choice, either to call the worker process from a console application and hook it into Windows Scheduler, or use a Windows Service.
You'll find that scheduling a Windows Service isn't fun. A fairly common scenario is that you have a long-running process that you want to run periodically. But if you are processing a queue, then you really don't want two instances of the same worker processing the same queue. So you need to manage the timer, to make sure that if your long-running process has run longer than the assigned timer interval, it doesn't kick off again until the existing process has finished.
After you have written all of that, you think: why didn't I just use Thread.Sleep? That lets the current thread keep running until it has finished, then the pause interval kicks in, the thread goes to sleep, and it kicks off again after the required time. Neat!
Then you read all the advice on the internet, with lots of experts telling you how it is really bad programming practice:
http://msmvps.com/blogs/peterritchie/archive/2007/04/26/thread-sleep-is-a-sign-of-a-poorly-designed-program.aspx
So you'll scratch your head and think to yourself, WTF, Undo Pending Checkouts -> Yes, I'm sure -> Undo all today's work..... damn, damn, damn....
However, I do like this pattern, even if everyone thinks it is crap:
OnStart method for the single-thread approach.
protected override void OnStart(string[] args) {
// Create worker thread; this will invoke the WorkerFunction
// when we start it.
// Since we use a separate worker thread, the main service
// thread will return quickly, telling Windows that service has started
ThreadStart st = new ThreadStart(WorkerFunction);
workerThread = new Thread(st);
// set flag to indicate worker thread is active
serviceStarted = true;
// start the thread
workerThread.Start();
}
The code instantiates a separate thread and attaches our worker
function to it. Then it starts the thread and lets the OnStart event
complete, so that Windows doesn't think the service is hung.
Worker method for the single-thread approach.
/// <summary>
/// This function will do all the work
/// Once it is done with its tasks, it will be suspended for some time;
/// it will continue to repeat this until the service is stopped
/// </summary>
private void WorkerFunction() {
// start an endless loop; loop will abort only when "serviceStarted"
// flag = false
while (serviceStarted) {
// do something
// exception handling omitted here for simplicity
EventLog.WriteEntry("Service working",
System.Diagnostics.EventLogEntryType.Information);
// yield
if (serviceStarted) {
Thread.Sleep(new TimeSpan(0, interval, 0));
}
}
// time to end the thread
Thread.CurrentThread.Abort();
}
OnStop method for the single-thread approach.
protected override void OnStop() {
// flag to tell the worker process to stop
serviceStarted = false;
// give it a little time to finish any pending work
workerThread.Join(new TimeSpan(0,2,0));
}
Source: http://tutorials.csharp-online.net/Creating_a_.NET_Windows_Service%E2%80%94Alternative_1%3a_Use_a_Separate_Thread (Dead Link)
I've been running lots of Windows Services like this for years and it works for me. I still haven't seen a recommended pattern that people agree on. Just do what works for you.
Some misinformation here. Windows Scheduler is perfectly capable of running tasks in the background without windows popping up and with no password required. Run it under the NT AUTHORITY\SYSTEM account. Use this schtasks switch:
/ru SYSTEM
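For example, a background task that runs every two minutes might be created from the command line like this (the task name and path are placeholders):
schtasks /create /tn "MyBackgroundJob" /tr "C:\Apps\MyJob.exe" /sc minute /mo 2 /ru SYSTEM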
But yes, for accessing network resources, the best practice is a service account with a separate non-expiring password policy.
EDIT
Depending on your OS and the requirements of the task itself, you may be able to use accounts less privileged than LocalSystem with the /ru option.
From the fine manual,
/RU username
A value that specifies the user context under which the task runs.
For the system account, valid values are "", "NT AUTHORITY\SYSTEM", or "SYSTEM".
For Task Scheduler 2.0 tasks, "NT AUTHORITY\LOCALSERVICE", and
"NT AUTHORITY\NETWORKSERVICE" are also valid values.
Task Scheduler 2.0 is available from Vista and Server 2008.
In XP and Server 2003, system is the only option.
In .NET development, I normally start off by developing a console application, which runs with all logging output going to the console window. However, it is only a console application when it is run with the command argument /console. When it is run without this parameter, it acts as a Windows service, which stays running on my own custom-coded scheduled timer.
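A rough sketch of that entry point (JobRunner and JobService are hypothetical names; the shared worker is the part you would reuse between the two modes):
using System;
using System.ServiceProcess;

static class Program
{
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0].Equals("/console", StringComparison.OrdinalIgnoreCase))
        {
            // Interactive mode: run the work directly and log to the console.
            new JobRunner().Run();
            Console.ReadLine();
        }
        else
        {
            // Service mode: hand control to the Service Control Manager.
            ServiceBase.Run(new JobService());
        }
    }
}

// Hypothetical worker shared by both modes (starts the timer, does the jobs).
class JobRunner
{
    public void Run() { /* ... */ }
}

// Hypothetical service wrapper that drives the same JobRunner.
class JobService : ServiceBase
{
    private readonly JobRunner runner = new JobRunner();
    protected override void OnStart(string[] args) { runner.Run(); }
}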
Windows Services, to my mind, are normally used to manage other applications rather than being long-running applications themselves. Or they are continuously running heavyweight applications like SQL Server, BizTalk, RPC connections, IIS (even though IIS technically offloads work to other processes).
Personally, I favour scheduled tasks over Windows Services for repetitive maintenance tasks and applications such as file copying/synchronisations, bulk email sending, deletion or archiving of files, and data correction (when other workarounds are not available).
For one project I was involved in, we developed 8 or 9 Windows Services, but these sit around in memory, idle, eating 20 MB or more of memory per instance. Scheduled tasks do their business and release the memory immediately.
What's the overhead of starting and quitting the app? Every two minutes is pretty often. A service would probably let the system run more smoothly than executing your application so frequently.
Both solutions can run the program when user isn't logged in, so no difference there. Writing a service is somewhat more involved than a regular desktop app, though - you may need a separate GUI client that will communicate with the service app via TCP/IP, named pipes, etc.
From a user's POV, I wonder which is easier to control. Both services and scheduled tasks are pretty much out of reach for most non-technical users, i.e. they won't even realize they exist and can be configured / stopped / rescheduled and so on.
The word 'serv'ice shares something in common with 'serv'er. It is expected to always be running, and 'serv'e. A task is a task.
Role play. If I'm another operating system, application, or device and I call a service, I expect it to be running and I expect a response. If I (os, app, dev) just need to execute an isolated task, then I will execute a task, but if I expect to communicate, possibly two way communication, I want a service. This has to do with the most effective way for two things to communicate, or a single thing that wants to execute a single task.
Then there's the scheduling aspect. If you want something to run at a specific time, schedule. If you don't know when you're going to need it, or need it "on the fly", service.
My response is more philosophical in nature, because this is very similar to how humans interact and work with one another. The more we understand the art of communication, and the more "entities" understand their role, the easier this decision becomes.
All philosophy aside, when you are "rapidly prototyping", as my IT dept often does, you do whatever you have to in order to make ends meet. Once the prototyping and proof-of-concept stuff is out of the way, usually in the early planning and discovery, you have to decide what's more reliable for long-term sustainability.
OK, so in conclusion, it's highly dependent on a lot of factors, but hopefully this has provided insight instead of confusion.
A Windows service doesn't need to have anyone logged in, and Windows has facilities for stopping, starting, and logging the service results.
A scheduled task doesn't require you to learn how to write a Windows service.
It's easier to set up and lock down Windows services with the correct permissions.
Services are more "visible", meaning that everyone (i.e. techs) knows where to look.
This is an old question, but I would like to share what I have faced.
Recently I was given a requirement to capture the screenshot of a radar (from a Meteorological website) and save it in the server every 10 minutes.
This required me to use WebBrowser.
I usually write Windows services, so I decided to make this one a service too, but it kept crashing.
This is what I saw in Event Viewer
Faulting module path: C:\Windows\system32\MSHTML.dll
Since the task was urgent and I had very little time to research and experiment, I decided to use a simple console application and triggered it as a task, and it executed smoothly.
I really liked the article by Jon Galloway recommended in the accepted answer by Mark Ransom.
Recently, passwords on the servers were changed without my being informed, and all the services failed to execute since they could not log on.
So people claim in the article comments that this is a problem. I think Windows services can face the same problem (please correct me if I am wrong, I am just a newbie).
Also, regarding the point that when using Task Scheduler a window or console window pops up:
I have never faced that. It may pop up, but it is at least nearly instantaneous.
Why not provide both?
In the past I've put the 'core' bits in a library and wrapped a call to Whatever.GoGoGo() in both a service and a console app.
With something you're firing off every two minutes the odds are decent it's not doing much (e.g. just a "ping" type function). The wrappers shouldn't have to contain much more than a single method call and some logging.
Generally, the core message is, and should be, that the code itself must be executable from each and every "trigger/client". So it should not be rocket science to switch from one approach to the other.
In the past we almost always used Windows Services, but since more and more of our customers are moving to Azure step by step, and the swap from a console app (deployed as a scheduled task) to a WebJob in Azure is much easier than from a Windows Service, we focus on scheduled tasks for now. If we run into limitations, we just ramp up the Windows Service project and call the same logic from there (as long as customers are working on-prem). :)
Windows services require more patience before they pay off: they are a bit harder to debug and install, and they are faceless.
If you need a task that must run every second, minute, or hour, you should choose a Windows service.
A scheduled task is quick to develop and has a face.
If you need a daily or weekly task, you can use a scheduled task.
