Threaded application sleeps with other application - pthreads

I have a weird problem with my threaded software.
I start 2 instances of the software. Each instance has 2 threads: one thread creates a socket to use, and the other uses the socket for communication.
When one of the threads in one instance calls sleep(3), the threads in the other instance sleep too. And the weirdest thing is that after I rebooted the computer it worked the first time, but on the second try it slept as described.
How is this possible? Is it using some shared resource?

Sleep suspends your process, not your thread. See sleep vs pthread_yield for more details.

Have you tried wrapping your sleep function in a prophylactic?

Related

Quartz.Net is Sometimes Running Overlapping Tasks

I am using Quartz.Net 3.0.7 to manage a scheduler. In my test environment I have two instances of the scheduler running. I have a test process that runs for exactly 2 hours before ending. Quartz is configured to start the process every 10 seconds and I am using the DisallowConcurrentExecution attribute to prevent multiple instances of the task from running at the same time. 80% of the time this is working as expected. Quartz will start up the process and prevent any other instances of the task from starting until after the initial one has completed. If I stop one of the two services hosting Quartz, then the other instance picks up the task at the next 10-second mark.
However, after keeping these two Quartz services running for 48 uninterrupted hours, I have discovered a couple of times where things went horribly wrong. At times host B will start up the task, even though the task is still in the middle of its 2 hour execution on host A. At one point I even found the process had started up 3 times on host B, all within a 10 minute period. So, for a two hour period, the one task had three instances running simultaneously. After all three finished, Quartz went back to the expected schedule of only having one instance running at a time.
If these overlapping tasks were happening 100% of the time, I would think there is something wrong on my end, but since it seems to happen only about 20% of the time, I am thinking it must be something in the Quartz implementation. Is this by design or is it a bug? If there is an event I can capture from Quartz.Net to tell me that another instance of a task has started up, I can listen for that and stop the existing task from running. I just need to make sure that DisallowConcurrentExecution is getting obeyed and prevent a task from running multiple instances concurrently. Thanks.
Edit:
I added logic that uses context.Scheduler.GetCurrentlyExecutingJobs to look for any jobs that have the same JobDetail.Key but a different FireInstanceId when my task starts up. If I find another currently executing job, I will prevent this instance from doing anything. I am finding that in the duplicate concurrent scenario, Quartz is reporting that there are no other jobs currently executing with the same JobDetail.Key. Should that be possible? Under what case would Quartz.Net start an IJob, lose track of it as an executing job after a few minutes, but allow it to continue executing without cancelling the CancellationToken?
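For reference, a minimal sketch of the startup guard described in this edit, assuming a Quartz.Net 3.x IJob; the class name is illustrative and the actual job body is omitted:

using System.Linq;
using System.Threading.Tasks;
using Quartz;

[DisallowConcurrentExecution]
public class GuardedJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        // Another execution of the same job has the same JobKey but a different FireInstanceId.
        var executing = await context.Scheduler.GetCurrentlyExecutingJobs();
        var duplicate = executing.Any(j =>
            j.JobDetail.Key.Equals(context.JobDetail.Key) &&
            j.FireInstanceId != context.FireInstanceId);

        if (duplicate)
        {
            return; // another instance is already running; do nothing for this fire
        }

        // ... the real long-running work goes here ...
    }
}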
Edit2:
I found an instance in my logs where Quartz started a task as expected. Then, one minute later, Quartz tried to start up 9 additional instances, each with a different FireInstanceId. My custom code blocked the 9 additional instances, because it can see that the original instance was still going, by calling GetCurrentlyExecutingJobs to get a list of running jobs. I double checked and the ConcurrentExecutionDisallowed flag is true on all of the tasks at runtime, so I would expect that Quartz would prevent the duplicate instances. This sounds like a bug. Am I expected to handle this manually or should I expect Quartz to get this right?
Edit3:
I am definitely looking at two different problems. In both cases Quartz.Net is launching my IJob instance with a new FireInstanceId while there is already another FireInstanceId running for the same JobKey. In one scenario I can see that both FireInstanceIds are active by calling GetCurrentlyExecutingJobs. In the second scenario calling GetCurrentlyExecutingJobs shows that the first FireInstanceId is no longer running, even though I can see from my logs that the original instance is still running. Both of these scenarios result in multiple instances of my IJob running at the same time, which is not acceptable. It is easy enough to tackle the first scenario by calling GetCurrentlyExecutingJobs when my IJob starts, but the second scenario is harder. I will have to ping GetCurrentlyExecutingJobs on an interval and stop the task if its FireInstanceId has disappeared from the active list. Has anyone else really not noticed this behavior?
I found that if I set this option, that I no longer have overlapping executing jobs. I still wish that Quartz would cancel the job’s cancellation token, though, if it lost track of the executing job.
QuartzProperties.Add("quartz.jobStore.clusterCheckinInterval", "60000");
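For context, a rough sketch of where that property sits in a clustered scheduler configuration; the property keys are the standard Quartz.Net ones, but the class name is illustrative and the ADO job-store/data-source settings that clustering also requires are omitted here:

using System.Collections.Specialized;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public static class SchedulerBootstrap
{
    public static async Task<IScheduler> StartAsync()
    {
        var props = new NameValueCollection
        {
            ["quartz.scheduler.instanceId"] = "AUTO",             // unique id per host
            ["quartz.jobStore.clustered"] = "true",               // both hosts share one job store
            ["quartz.jobStore.clusterCheckinInterval"] = "60000"  // check-in interval, in milliseconds
            // ... job-store type, data source and serializer settings omitted ...
        };

        var scheduler = await new StdSchedulerFactory(props).GetScheduler();
        await scheduler.Start();
        return scheduler;
    }
}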

Communication from rails with endless worker

I'm implementing a worker for Rails + Sneakers that will run for a long time (days, months). It should maintain a websocket connection to an external service.
The first concern: is it correct to run an endless process via Ruby + Sneakers? How can I monitor that there are no memory leaks?
Second question: What is the best way to send new commands to that worker?

Close applications safely from a service

I have 2 programs in Delphi - a service and some child processes that may run in any user session (these start when the service starts and should be closed when the service stops).
When the service stops I have to close the child applications safely, so that they receive their FormClose/FormDestroy events.
The service cannot use desktop communication, so it cannot send window messages like WM_CLOSE, etc., to those processes.
Calling TerminateProcess does not make the FormClose/FormDestroy events occur in my child processes ...
So, what method of child process termination may be used here?
Currently, the only idea we have is to run taskkill.exe /im process.exe in each user session; it somehow makes the killed process run FormClose/FormDestroy. How does that work? Just by sending WM_CLOSE?
The best solution is some simple IPC. In this case, all you really need is a global manual-reset event object, as IInspectable already suggested.
However, if you aren't allowed to do it the right way, you could instead launch another child process to send window messages to the application(s) you want to close.
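The "right way" can be illustrated in a few lines. This is a minimal C# sketch of a named, machine-wide manual-reset event (the Delphi equivalent would use the Win32 CreateEvent/OpenEvent and WaitForSingleObject calls); the event name and class names are illustrative, and in practice a cross-session "Global\" object may also need an explicit security descriptor:

using System;
using System.Threading;

// Service side: create the named event up front and signal it at shutdown.
class ServiceSide
{
    static readonly EventWaitHandle Shutdown =
        new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\MyApp.Shutdown");

    public static void SignalChildrenToClose() => Shutdown.Set();
}

// Child side: a background thread waits on the same event, then closes the
// main form normally so the FormClose/FormDestroy handlers run.
class ChildSide
{
    public static void WatchForShutdown(Action closeMainForm)
    {
        new Thread(() =>
        {
            var handle = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\MyApp.Shutdown");
            handle.WaitOne();     // blocks until the service sets the event
            closeMainForm();      // e.g. marshal a Close() call onto the UI thread
        })
        { IsBackground = true }.Start();
    }
}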

Restarting windows service within itself

I have a Windows service that invokes some heavy image processing whenever a user sends data to it. If there is more than one piece of data, the data is queued up and processed in order. However, sometimes processing the data goes for a toss and hangs forever; I'm not sure yet why that happens. When this happens I want the service to restart itself, so that after the restart the next item from the queue is picked up. My question is: is it a good idea for the service to restart itself? Can you even do that, or is there another way to do it?
Sapna
As Oded said in his comment, if the service has hung, it can't restart itself. It would be best if you could figure out why it hangs and stop it from hanging altogether, but let's assume that's not possible for some reason.
The two options I can think of: if the image processing is done in a thread, and it's only that thread that hangs, you might be able to have a separate "monitoring" thread that keeps checking whether the processing thread is still healthy and, if not, kills and restarts it. Or, if the whole service hangs, you could have a separate monitoring service that does the checking and restarting.
You have three tasks:
1. Detect that the service is stuck. This can be done in different ways; the simplest is to use timeouts.
2. Restart the service. This can be done by a separate monitoring service or by another thread of the same service (a minimal heartbeat-watchdog sketch follows this list).
3. Handle the task queue between service instances. Serialize your task queue to disk so that when the service is restarted it can continue handling the queue.
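A minimal sketch of the watchdog idea in C#, assuming the service's recovery options in the SCM are configured to restart it on failure; the class, names, and timeouts below are illustrative, not part of the original question:

using System;
using System.Threading;

class ProcessingWatchdog
{
    static long _lastHeartbeat = DateTime.UtcNow.Ticks;
    static readonly TimeSpan StuckTimeout = TimeSpan.FromMinutes(30);

    // Call this from the worker loop after each image is processed.
    public static void Heartbeat() =>
        Interlocked.Exchange(ref _lastHeartbeat, DateTime.UtcNow.Ticks);

    // Start a background monitor thread that exits the process when no progress
    // has been reported for too long; the SCM then restarts the service.
    public static void Start()
    {
        new Thread(() =>
        {
            while (true)
            {
                Thread.Sleep(TimeSpan.FromMinutes(1));
                var last = new DateTime(Interlocked.Read(ref _lastHeartbeat), DateTimeKind.Utc);
                if (DateTime.UtcNow - last > StuckTimeout)
                {
                    // Persist the remaining queue to disk here before dying,
                    // so the restarted service can pick up where it left off.
                    Environment.Exit(1);
                }
            }
        })
        { IsBackground = true }.Start();
    }
}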

windows service vs scheduled task

What are the cons and pros of windows services vs scheduled tasks for running a program repeatedly (e.g. every two minutes)?
Update:
Nearly four years after my original answer, this answer is very out of date. Since TopShelf came along, Windows Service development has become easy. Now you just need to figure out how to support failover...
Original Answer:
I'm really not a fan of Windows Scheduler. The user's password must be provided as #moodforall points out above, which is fun when someone changes that user's password.
The other major annoyance with Windows Scheduler is that it runs interactively and not as a background process. When 15 MS-DOS windows pop up every 20 minutes during an RDP session, you'll kick yourself for not installing them as Windows Services instead.
Whatever you choose, I certainly recommend you separate your processing code into a component distinct from the console app or Windows Service. Then you have the choice either to call the worker process from a console application hooked into Windows Scheduler, or to use a Windows Service.
You'll find that scheduling a Windows Service isn't fun. A fairly common scenario is that you have a long running process that you want to run periodically. But, if you are processing a queue, then you really don't want two instances of the same worker processing the same queue. So you need to manage the timer, to make sure if your long running process has run longer than the assigned timer interval, it doesn't kick off again until the existing process has finished.
After you have written all of that, you think, why didn't I just use Thread.Sleep? That allows me to let the current thread keep running until it has finished and then the pause interval kicks in, thread goes to sleep and kicks off again after the required time. Neat!
Then you read all the advice on the internet, with lots of experts telling you how it is really bad programming practice:
http://msmvps.com/blogs/peterritchie/archive/2007/04/26/thread-sleep-is-a-sign-of-a-poorly-designed-program.aspx
So you'll scratch your head and think to yourself, WTF, Undo Pending Checkouts -> Yes, I'm sure -> Undo all today's work..... damn, damn, damn....
However, I do like this pattern, even if everyone thinks it is crap:
OnStart method for the single-thread approach.
protected override void OnStart(string[] args) {
    // Create worker thread; this will invoke the WorkerFunction
    // when we start it.
    // Since we use a separate worker thread, the main service
    // thread will return quickly, telling Windows that the service has started
    ThreadStart st = new ThreadStart(WorkerFunction);
    workerThread = new Thread(st);
    // set flag to indicate worker thread is active
    serviceStarted = true;
    // start the thread
    workerThread.Start();
}
The code instantiates a separate thread and attaches our worker
function to it. Then it starts the thread and lets the OnStart event
complete, so that Windows doesn't think the service is hung.
Worker method for the single-thread approach.
/// <summary>
/// This function will do all the work
/// Once it is done with its tasks, it will be suspended for some time;
/// it will continue to repeat this until the service is stopped
/// </summary>
private void WorkerFunction() {
    // start an endless loop; loop will abort only when "serviceStarted"
    // flag = false
    while (serviceStarted) {
        // do something
        // exception handling omitted here for simplicity
        EventLog.WriteEntry("Service working",
            System.Diagnostics.EventLogEntryType.Information);
        // yield
        if (serviceStarted) {
            Thread.Sleep(new TimeSpan(0, interval, 0));
        }
    }
    // time to end the thread
    Thread.CurrentThread.Abort();
}
OnStop method for the single-thread approach.
protected override void OnStop() {
    // flag to tell the worker process to stop
    serviceStarted = false;
    // give it a little time to finish any pending work
    workerThread.Join(new TimeSpan(0, 2, 0));
}
Source: http://tutorials.csharp-online.net/Creating_a_.NET_Windows_Service%E2%80%94Alternative_1%3a_Use_a_Separate_Thread (Dead Link)
I've been running lots of Windows Services like this for years and it works for me. I still haven't seen a recommended pattern that people agree on. Just do what works for you.
Some misinformation here. Windows Scheduler is perfectly capable of running tasks in the background without windows popping up and with no password required. Run it under the NT AUTHORITY\SYSTEM account. Use this schtasks switch:
/ru SYSTEM
But yes, for accessing network resources, the best practice is a service account with a separate non-expiring password policy.
EDIT
Depending on your OS and the requirements of the task itself, you may be able to use accounts less privileged than LocalSystem with the /ru option.
From the fine manual,
/RU username
A value that specifies the user context under which the task runs.
For the system account, valid values are "", "NT AUTHORITY\SYSTEM", or "SYSTEM".
For Task Scheduler 2.0 tasks, "NT AUTHORITY\LOCALSERVICE", and
"NT AUTHORITY\NETWORKSERVICE" are also valid values.
Task Scheduler 2.0 is available from Vista and Server 2008.
In XP and Server 2003, system is the only option.
In .NET development, I normally start off by developing a Console Application, which runs with all logging output written to the console window. However, it is only a Console Application when it is run with the command argument /console. When it is run without this parameter, it acts as a Windows Service, which stays running on my own custom-coded scheduling timer.
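A rough sketch of that dual-mode entry point, assuming the classic .NET Framework System.ServiceProcess API; the /console switch handling and the Worker/MyService names below are illustrative:

using System;
using System.ServiceProcess;
using System.Threading;

// Shared processing code, used by both the console and the service wrapper.
public sealed class Worker
{
    private readonly ManualResetEvent _stop = new ManualResetEvent(false);
    private Thread _thread;

    public void Start()
    {
        _thread = new Thread(() =>
        {
            // Wait two minutes between runs until the stop event is signalled.
            while (!_stop.WaitOne(TimeSpan.FromMinutes(2)))
            {
                Console.WriteLine("Doing one unit of work...");
            }
        });
        _thread.Start();
    }

    public void Stop()
    {
        _stop.Set();
        _thread?.Join();
    }
}

public sealed class MyService : ServiceBase
{
    private readonly Worker _worker = new Worker();
    protected override void OnStart(string[] args) => _worker.Start();
    protected override void OnStop() => _worker.Stop();
}

internal static class Program
{
    private static void Main(string[] args)
    {
        if (Array.Exists(args, a => a.Equals("/console", StringComparison.OrdinalIgnoreCase)))
        {
            // Interactive mode: run the worker with console logging.
            var worker = new Worker();
            worker.Start();
            Console.WriteLine("Running in console mode; press Enter to stop.");
            Console.ReadLine();
            worker.Stop();
        }
        else
        {
            // Service mode: let the Service Control Manager drive OnStart/OnStop.
            ServiceBase.Run(new MyService());
        }
    }
}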
Windows Services, in my mind, are normally used to manage other applications rather than be long-running applications themselves. OR... they are continuously-running heavyweight applications like SQL Server, BizTalk, RPC connections, IIS (even though IIS technically offloads work to other processes).
Personally, I favour scheduled tasks over Windows Services for repetitive maintenance tasks and applications such as file copying/synchronisations, bulk email sending, deletion or archiving of files, and data correction (when other workarounds are not available).
For one project I was involved in the development of 8 or 9 Windows Services, but these sit around in memory, idle, eating 20 MB or more of memory per instance. Scheduled tasks do their business and release the memory immediately.
What's the overhead of starting and quitting the app? Every two minutes is pretty often. A service would probably let the system run more smoothly than executing your application so frequently.
Both solutions can run the program when user isn't logged in, so no difference there. Writing a service is somewhat more involved than a regular desktop app, though - you may need a separate GUI client that will communicate with the service app via TCP/IP, named pipes, etc.
From a user's POV, I wonder which is easier to control. Both services and scheduled tasks are pretty much out of reach for most non-technical users, i.e. they won't even realize they exist and can be configured / stopped / rescheduled and so on.
The word 'serv'ice shares something in common with 'serv'er. It is expected to always be running, and 'serv'e. A task is a task.
Role play. If I'm another operating system, application, or device and I call a service, I expect it to be running and I expect a response. If I (os, app, dev) just need to execute an isolated task, then I will execute a task, but if I expect to communicate, possibly two way communication, I want a service. This has to do with the most effective way for two things to communicate, or a single thing that wants to execute a single task.
Then there's the scheduling aspect. If you want something to run at a specific time, schedule. If you don't know when you're going to need it, or need it "on the fly", service.
My response is more philosophical in nature because this is very similar to how humans interact and work with another. The more we understand the art of communication, and "entities" understand their role, the easier this decision becomes.
All philosophy aside, when you are "rapidly prototyping", as my IT Dept often does, you do whatever you have to in order to make ends meet. Once the prototyping and proof-of-concept stuff is out of the way, usually in the early planning and discovery, you have to decide what's more reliable for long-term sustainability.
OK, so in conclusion, it's highly dependent on a lot of factors, but hopefully this has provided insight instead of confusion.
A Windows service doesn't need to have anyone logged in, and Windows has facilities for stopping, starting, and logging the service results.
A scheduled task doesn't require you to learn how to write a Windows service.
It's easier to set up and lock down Windows services with the correct permissions.
Services are more "visible" meaning that everyone (ie: techs) knows where to look.
This is an old question, but I would like to share what I have faced.
Recently I was given a requirement to capture a screenshot of a radar (from a meteorological website) and save it on the server every 10 minutes.
This required me to use WebBrowser.
I usually make Windows services, so I decided to make this one a service too, but it kept crashing.
This is what I saw in Event Viewer
Faulting module path: C:\Windows\system32\MSHTML.dll
Since the task was urgent and I had very little time to research and experiment, I decided to use a simple console application, triggered it as a scheduled task, and it executed smoothly.
I really liked the article by Jon Galloway recommended in the accepted answer by Mark Ransom.
Recently, passwords on the servers were changed without notifying me, and all the services failed to execute since they could not log on.
So people in the article comments claim that this is a problem. I think Windows services can face the same problem (please correct me if I am wrong, I am just a newbie).
Also, on the point mentioned above that, when using Task Scheduler, windows or the console window pop up:
I have never faced that. It may pop up, but it disappears almost instantly.
Why not provide both?
In the past I've put the 'core' bits in a library and wrapped a call to Whatever.GoGoGo() in both a service as well as a console app.
With something you're firing off every two minutes the odds are decent it's not doing much (e.g. just a "ping" type function). The wrappers shouldn't have to contain much more than a single method call and some logging.
Generally, the core message is, and should be, that the code itself must be executable from each and every "trigger/client". So it should not be rocket science to switch from one approach to the other.
In the past we almost always used Windows Services, but since more and more of our customers are switching to Azure step by step, and the swap from a Console App (deployed as a Scheduled Task) to an Azure WebJob is much easier than from a Windows Service, we focus on Scheduled Tasks for now. If we run into limitations, we just bring the Windows Service project back up and call the same logic from there (as long as customers are working on-prem..) :)
BR,
y
Windows services demand more patience before they're done. They are a bit harder to debug and install, and they are faceless.
If you need a task that must run every second, minute, or hour, you should choose a Windows Service.
A Scheduled Task is quickly developed and has a face.
If you need a daily or weekly task, you can use a Scheduled Task.
