Sleep performance - pthreads

I am developing a program in C++ and I have to implement a cron. This cron should be executed every hour and every 24 hours for different reasons. My first idea was to make an independent pthread and sleep it for one hour each time. Is this correct? I mean, is it really efficient to have a thread asleep more than awake? What are the drawbacks of having a sleeping thread?

I would tend to prefer to have such a task run via cron/scheduler, since it runs at pre-determined intervals rather than in response to some environmental event. The program should just 'do' whatever it needs to do and then be executed by the operating system as needed. This also makes it easy to change the frequency of execution - just change the scheduling, rather than rebuilding the app or exposing extra configurability.
That said, if you really, really wanted to do it that way, you probably would not sleep for the whole hour; you would sleep in multiples of some smaller time frame (perhaps five minutes, or whatever seems appropriate) and keep a variable with the 'last run' time so you know when to run again (see the sketch below).
Sleep() calls typically won't be exceptionally accurate about how long the thread actually sleeps; it depends on what other threads have tasks waiting, etc.
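A minimal sketch of that pattern, with hypothetical names (do_hourly_work() stands in for the real job):

#include <ctime>
#include <unistd.h>

void do_hourly_work();  // hypothetical: the real cron job body

void cron_loop() {
    time_t last_run = time(nullptr);
    for (;;) {
        sleep(300);  // sleep in 5-minute slices rather than a whole hour
        if (time(nullptr) - last_run >= 3600) {
            do_hourly_work();
            last_run = time(nullptr);
        }
    }
}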

There is no performance impact from having a thread sleep for a long time, aside from the fact that it occupies some memory without doing anything useful. Maybe if you did this with thousands of threads there would be some slowdown in the OS's management of threads, but it doesn't look like you are planning to do that.
A practical disadvantage of having a thread sleep for a long time is that you can't do anything with it. If, for example, you want to tell the thread to stop because the application is shutting down, the thread can only get that message once the sleep ends. So your application either needs a very long time to shut down, or the thread has to be stopped forcefully.
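To make the shutdown problem concrete, here is an illustrative sketch (my own, not from the answer) of an interruptible sleep built on a pthread condition variable: the thread still waits up to an hour, but a shutdown request wakes it immediately:

#include <pthread.h>
#include <ctime>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_wake = PTHREAD_COND_INITIALIZER;
static bool g_shutting_down = false;

static void* cron_thread(void*) {
    pthread_mutex_lock(&g_lock);
    while (!g_shutting_down) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 3600;  // wait up to one hour...
        // ...but return immediately if request_shutdown() signals us
        pthread_cond_timedwait(&g_wake, &g_lock, &deadline);
        if (g_shutting_down) break;
        // ... do the hourly work here ...
    }
    pthread_mutex_unlock(&g_lock);
    return nullptr;
}

void request_shutdown() {
    pthread_mutex_lock(&g_lock);
    g_shutting_down = true;
    pthread_cond_broadcast(&g_wake);
    pthread_mutex_unlock(&g_lock);
}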

My first idea was to make an independent pthread and sleep it for one hour each time.
I see no problem.
Is this correct? I mean, is it really efficient to have a thread asleep more than awake?
As long as a thread sleeps and doesn't wake up, the OS won't even bother with its existence.
On the other hand, if the thread sleeps for most of its life, why have a dedicated thread at all? Why can't another thread (e.g. the main thread) check the time and start a thread to do the cron job when it is due?
What are the drawbacks of having a sleeping thread?
None, except that a sleeping thread cannot easily be unblocked. That is a concern if you need a clean shutdown of the application, and it is why it is a better idea to have another (busy) thread check the time and start the cron job when needed (see the sketch below).
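A sketch of that arrangement (hypothetical names again): a watcher loop stays awake in short intervals, checks the clock, and launches a short-lived detached thread when the hourly job is due:

#include <pthread.h>
#include <ctime>
#include <unistd.h>

static void* run_cron_job(void*) { /* ... the hourly work ... */ return nullptr; }

void watcher_loop() {
    time_t last_run = 0;
    for (;;) {
        if (time(nullptr) - last_run >= 3600) {
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
            pthread_t t;
            pthread_create(&t, &attr, run_cron_job, nullptr);  // fire and forget
            pthread_attr_destroy(&attr);
            last_run = time(nullptr);
        }
        sleep(60);  // the watcher stays responsive; check shutdown flags here too
    }
}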

When designing your solution, keep these scenarios in mind:
At 07:03 the system time is reset to 06:59. What happens one minute later?
At 07:03 the system time is moved forward to 07:59. What happens one minute later?
At 07:59 the system time is moved forward to 08:01. Is the 08:00-job ever executed?
The answers to those questions will tell you a lot about how you should implement your solution.
The performance of the solution should not be an issue. A single sleeping thread will use minimal resources.
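One robust answer to those scenarios is to schedule against a monotonic clock rather than wall-clock time, since it is unaffected by the system time being reset. An illustrative POSIX sketch:

#include <ctime>

// Elapsed seconds since an arbitrary fixed point; unlike time() /
// CLOCK_REALTIME, it is unaffected by the user or NTP resetting the clock.
long monotonic_seconds() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec;
}

// Schedule the next run as monotonic_seconds() + 3600: the 07:59 -> 08:01
// jump above then neither skips the hourly job nor fires it twice.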

Related

How can I get breakpoints/logs/increased visibility when my Main Thread blocks?

In the never-ending quest for UI responsiveness, I would like to gain more insight into cases where the main thread performs blocking operations.
I'm looking for some sort of "debugging mode" or extra code, or hook, or whatever, whereby I can set a breakpoint/log/something that will get hit and allow me to inspect what's going on if my main thread "voluntarily" blocks for I/O (or whatever reason, really) other than going idle at the end of the runloop.
In the past, I've looked at runloop cycle wall-clock duration using a runloop observer, and that's valuable for seeing the problem, but by the time you can check, it's too late to get a good idea of what it was doing, because your code is already done running for that cycle of the runloop.
I realize that there are operations performed by UIKit/AppKit that are main-thread-only that will cause I/O and cause the main thread to block so, to a certain degree, it's hopeless (for instance, accessing the pasteboard appears to be a potentially-blocking, main-thread-only operation) but something would be better than nothing.
Anyone have any good ideas? Seems like something that would be useful. In the ideal case, you'd never want to block the main thread while your app's code is active on the runloop, and something like this would be very helpful in getting as close to that goal as possible.
So I set forth to answer my own question this weekend. For the record, this endeavor turned into something pretty complex, so as Kendall Helmstetter Glen suggested, most folks reading this question should probably just muddle through with Instruments. For the masochists in the crowd, read on!
It was easiest to start by restating the problem. Here's what I came up with:
I want to be alerted to long periods of time spent in syscalls/mach_msg_trap that are not legitimate idle time. "Legitimate idle time" is defined as time spent in mach_msg_trap waiting for the next event from the OS.
Also importantly, I didn't care about user code that takes a long time. That problem is quite easy to diagnose and understand using Instruments' Time Profiler tool. I wanted to know specifically about blocked time. While it's true that you can also diagnose blocked time with Time Profiler, I've found it considerably harder to use for that purpose. Likewise, the System Trace instrument is also useful for investigations like this, but is extremely fine-grained and complex. I wanted something simpler -- more targeted at this specific task.
It seemed evident from the get-go that the tool of choice here would be DTrace.
I started out by using a CFRunLoop observer that fired on kCFRunLoopAfterWaiting and kCFRunLoopBeforeWaiting. A call to my kCFRunLoopBeforeWaiting handler would indicate the beginning of a "legitimate idle time", and the kCFRunLoopAfterWaiting handler would be the signal to me that a legitimate wait had ended. I would use the DTrace pid provider to trap on calls to those functions as a way to sort legitimate idle from blocking idle.
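For reference, registering such an observer looks roughly like this (a sketch against the CoreFoundation C API; the handler bodies are placeholders):

#include <CoreFoundation/CoreFoundation.h>

// Fires around the run loop's wait: BeforeWaiting marks the start of
// "legitimate idle", AfterWaiting marks its end.
static void RunLoopActivity(CFRunLoopObserverRef, CFRunLoopActivity activity, void*) {
    if (activity == kCFRunLoopBeforeWaiting) {
        // legitimate idle begins: the loop is about to block in mach_msg
    } else if (activity == kCFRunLoopAfterWaiting) {
        // legitimate idle ends: an event arrived, user code runs again
    }
}

static void InstallObserver(void) {
    CFRunLoopObserverRef obs = CFRunLoopObserverCreate(
        kCFAllocatorDefault,
        kCFRunLoopBeforeWaiting | kCFRunLoopAfterWaiting,
        true /* repeats */, 0 /* order */,
        RunLoopActivity, NULL);
    CFRunLoopAddObserver(CFRunLoopGetMain(), obs, kCFRunLoopCommonModes);
    CFRelease(obs);
}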
This approach got me started, but in the end proved to be flawed. The biggest problem is that many AppKit operations are synchronous, in that they block event processing in the UI, but actually spin the RunLoop lower in the call stack. Those spins of the RunLoop are not "legitimate" idle time (for my purposes), because the user can't interact with the UI during that time. They're valuable, to be sure -- imagine a runloop on a background thread watching a bunch of RunLoop-oriented I/O -- but the UI is still blocked when this happens on the main thread. For example, I put the following code into an IBAction and triggered it from a button:
NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL: [NSURL URLWithString: @"http://www.google.com/"]
                                                   cachePolicy: NSURLRequestReloadIgnoringCacheData
                                               timeoutInterval: 60.0];
NSURLResponse *response = nil;
NSError *err = nil;
[NSURLConnection sendSynchronousRequest: req returningResponse: &response error: &err];
That code doesn't prevent the RunLoop from spinning -- AppKit spins it for you inside the sendSynchronousRequest:... call -- but it does prevent the user from interacting with the UI until it returns. This is not "legitimate idle" to my mind, so I needed a way to sort out which idles were which. (The CFRunLoopObserver approach was also flawed in that it required changes to the code, which my final solution does not.)
I decided that I would model my UI/main thread as a state machine. It was in one of three states at all times: LEGIT_IDLE, RUNNING or BLOCKED, and would transition back and forth between those states as the program executed. I needed to come up with Dtrace probes that would allow me to catch (and therefore measure) those transitions. The final state machine I implemented was quite a bit more complicated than just those three states, but that's the 20,000 ft view.
As described above, sorting out legitimate idle from bad idle was not straightforward, since both cases end up in mach_msg_trap() and __CFRunLoopRun. I couldn't find one simple artifact in the call stack that I could use to reliably tell the difference; it appears that a simple probe on one function is not going to help me. I ended up using the debugger to look at the state of the stack at various instances of legitimate idle vs. bad idle. I determined that during legitimate idle, I'd (seemingly reliably) see a call stack like this:
#0 in mach_msg
#1 in __CFRunLoopServiceMachPort
#2 in __CFRunLoopRun
#3 in CFRunLoopRunSpecific
#4 in RunCurrentEventLoopInMode
#5 in ReceiveNextEventCommon
#6 in BlockUntilNextEventMatchingListInMode
#7 in _DPSNextEvent
#8 in -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]
#9 in -[NSApplication run]
#10 in NSApplicationMain
#11 in main
So I endeavored to set up a bunch of nested/chained pid probes that would establish when I had arrived at, and subsequently left, this state. Unfortunately, for whatever reason DTrace's pid provider doesn't seem to be universally able to probe both entry and return on all arbitrary symbols. Specifically, I couldn't get probes on pid000:*:__CFRunLoopServiceMachPort:return or on pid000:*:_DPSNextEvent:return to work. The details aren't important, but by watching various other goings-on, and keeping track of certain state, I was able to establish (again, seemingly reliably) when I entered and left the legit idle state.
Then I had to determine probes for telling the difference between RUNNING and BLOCKED. That was a bit easier. In the end, I chose to consider BSD system calls (using DTrace's syscall probe) and calls to mach_msg_trap() (using the pid probe) not occurring in periods of legit idle to be BLOCKED. (I did look at DTrace's mach_trap probe, but it did not seem to do what I wanted, so I fell back to using the pid probe.)
Initially, I did some extra work with the DTrace sched provider to actually measure real blocked time (i.e. time when my thread had been suspended by the scheduler), but that added considerable complexity, and I ended up thinking to myself, "If I'm in the kernel, what do I care if the thread is actually asleep or not? It's all the same to the user: it's blocked." So the final approach just measures all time in (syscalls || mach_msg_trap()) && !legit_idle and calls that the blocked time.
At this point, catching single kernel calls of long duration (like a call to sleep(5), for instance) is rendered trivial. However, more often UI thread latency comes from many little latencies accumulating over multiple calls into the kernel (think of hundreds of calls to read() or select()), so I thought it would also be desirable to dump SOME call stack when the overall amount of syscall or mach_msg_trap time in a single pass of the event loop exceeded a certain threshold. I ended up setting up various timers and logging accumulated time spent in each state, scoped to various states in the state machine, and dumping alerts when we happened to be transitioning out of the BLOCKED state and had gone over some threshold. This method will obviously produce data that is subject to misinterpretation, or might be a total red herring (e.g. some random, relatively quick syscall that just happens to push us over the alert threshold), but I feel it's better than nothing.
In the end, the DTrace script keeps a state machine in D variables, uses the described probes to track transitions between the states, and gives me the opportunity to do things (like print alerts) when the state machine is transitioning state, based on certain conditions. I played around a bit with a contrived sample app that does a bunch of disk I/O, network I/O, and calls sleep(), and was able to catch all three of those cases, without distractions from data pertaining to legitimate waits. This was exactly what I was looking for.
This solution is obviously quite fragile, and thoroughly awful in almost every regard. :) It may or may not be useful to me, or anyone else, but it was a fun exercise, so I thought I'd share the story and the resulting DTrace script. Maybe someone else will find it useful. I must also confess to being a relative n00b with respect to writing DTrace scripts, so I'm sure I've done a million things wrong. Enjoy!
It was too big to post inline, so it's kindly being hosted by @Catfish_Man over here: MainThreadBlocking.d
Really, this is the kind of job for the Time Profiler instrument. I believe you can see where time is spent in code per thread, so you'd go see what code was taking a while to perform and get the answer as to what was potentially blocking the UI.

Ask Scheduler to schedule a certain thread

In Linux, with POSIX threads, is it possible to hint to the scheduler to schedule a particular thread? The scenario is that I have a process which is a replica of another process. For deterministic execution, the follower process needs to acquire its locks in the same order as the leader process.
So, for example, say that in the leader process mutex a is locked first by thread 2, then by thread 3, then by thread 4. The follower must execute in the same order. So if in the follower thread 3 encounters mutex a first, I want thread 3 to say to the scheduler: OK, I'm giving up my time slice, please schedule thread 2 instead. I know this can be achieved by modifying the scheduler, but I do not want that; I want to be able to control this from a user-space program.
In any system, Linux, Windows POSIX or not, if you have to ask this sort of question then I'm afraid that your app is heading for a dark place :(
Even if thread 3 were to yield, say with sleep(0), an interrupt straight after might well just schedule thread 3 back on again, preempting thread 2, or the OS might run thread 3 straightaway on another free core and it could get to the mutex first.
You have to make your app work correctly (maybe not optimally), independently of the OS scheduling/dispatching algorithms. Even if you get your design to work on a test box, you will end up having to test your system on every combination of OS/hardware to ensure that it still works without deadlocking or performing incorrectly.
Fiddling with scheduling algorithms, thread priorities etc. should only be done to improve the performance of your app, not to try and make it work correctly or to stop it locking up!
Rgds,
Martin
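In the spirit of that advice, the determinism the question asks for can be enforced in user space with synchronization rather than scheduler hints. An illustrative sketch (hypothetical names): each thread waits for its recorded 'turn' before taking the lock, so the acquisition order matches the leader's regardless of how the OS schedules the threads:

#include <pthread.h>

// Hypothetical "turn lock": a thread may only take the lock when the
// shared turn counter reaches its assigned slot, replayed from the
// leader's recorded acquisition order.
static pthread_mutex_t g_m    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_cv   = PTHREAD_COND_INITIALIZER;
static int             g_turn = 0;

void ordered_lock(int my_slot) {
    pthread_mutex_lock(&g_m);
    while (g_turn != my_slot)          // not our turn yet: block
        pthread_cond_wait(&g_cv, &g_m);
    // we now hold the lock, in the leader-defined order
}

void ordered_unlock() {
    ++g_turn;                          // hand over to the next slot
    pthread_cond_broadcast(&g_cv);
    pthread_mutex_unlock(&g_m);
}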

Queue Thread In Blackberry

I've looked at the BB API (5.0) and I can't find any way of serially executing a batch of threads. I know BB has a limit on the number of threads it will launch, so I don't want to launch 7 if the user clicks through things fast enough, but I cannot find anything like a thread pool.
Is there an easy fix for this or do I have to create a data structure?
If you just want to execute a bunch of tasks on a single thread serially and order isn't important, you could create a Timer object (which has its own thread) then add each task to it as a TimerTask. If you schedule it with a delay of 0 or 1, it will essentially run that task as soon as possible. And since a Timer only has a single thread, if you schedule multiple tasks concurrently, it will ensure that only one will run at a time.
Incidentally, I was talking to a RIM engineer at the BlackBerry Developer Conference this year and he said that as of OS 5.0 there are no longer limits to the number of threads -- so this is becoming less and less of a concern.
I've tested Jeff Heaton's Thread Pool example on 4.5 and it works. (http://www.informit.com/articles/article.aspx?p=30483&seqNum=1).

When to use pthread_cancel and not pthread_kill?

When does one use pthread_cancel and not pthread_kill?
I would use neither of those two but that's just personal preference.
Of the two, pthread_cancel is the safer way to terminate a thread, since the thread is only affected at well-defined cancellation points, and it can defer or disable cancellation around critical sections with pthread_setcancelstate().
In other words, it shouldn't disappear while holding resources in a way that might cause deadlock. The pthread_kill() call sends a signal to the specified thread, and is a way to affect a thread asynchronously for reasons other than cancelling it.
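To illustrate (a sketch of my own, not from the answer): with deferred cancellation the thread dies only at a cancellation point, and a cleanup handler keeps it from disappearing while holding a lock:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;

// Cleanup handler: runs if the thread is cancelled while holding the lock.
static void unlock_on_cancel(void* m) {
    pthread_mutex_unlock(static_cast<pthread_mutex_t*>(m));
}

static void* worker(void*) {
    pthread_mutex_lock(&g_mutex);
    pthread_cleanup_push(unlock_on_cancel, &g_mutex);
    for (;;) {
        sleep(1);  // a cancellation point: pthread_cancel() takes effect here
    }
    pthread_cleanup_pop(1);  // unreachable, but must lexically pair with push
    return nullptr;
}

void cancel_worker(pthread_t t) {
    pthread_cancel(t);       // the cleanup handler releases the mutex
    pthread_join(t, nullptr);
}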
Most of my threads tend to be in loops doing work anyway, periodically checking flags to see if they should exit. That's mostly because I was raised in a world where pthread_kill() was dangerous and pthread_cancel() didn't exist.
I subscribe to the theory that each thread should totally control its own resources, including its execution lifetime. I've always found that to be the best way to avoid deadlock. To that end, I simply use mutexes for communication between threads (I've rarely found a need for true asynchronous communication) and a flag variable for termination.
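A minimal sketch of that flag-based scheme (hypothetical names; a plain bool guarded by a mutex, which std::atomic<bool> could replace in modern C++):

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t g_flag_lock = PTHREAD_MUTEX_INITIALIZER;
static bool g_stop = false;

static bool should_stop() {
    pthread_mutex_lock(&g_flag_lock);
    bool s = g_stop;
    pthread_mutex_unlock(&g_flag_lock);
    return s;
}

static void* worker(void*) {
    while (!should_stop()) {
        // ... one bounded unit of work per pass ...
        usleep(100 * 1000);  // placeholder for the real work
    }
    // the thread frees its own resources here, on its own terms
    return nullptr;
}

void request_stop() {
    pthread_mutex_lock(&g_flag_lock);
    g_stop = true;  // the worker notices on its next check and exits
    pthread_mutex_unlock(&g_flag_lock);
}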
You cannot "kill" a thread with pthread_kill(). If you try to send SIGTERM or SIGKILL to a thread with pthread_kill(), it will terminate the entire process.
I subscribe to the theory that the PROGRAMMER, and not the THREAD (nor the API designers), should totally control the software in all aspects, including which threads cancel which.
I once worked at a firm where we developed a server that used a pool of worker threads and one special master thread that had the responsibility to create, suspend, resume and terminate the worker threads at any time it wanted. Of course the threads used some sort of synchronization, but it was of our own design and not some API-enforced dogma. The system worked very well and efficiently!
This was under Windows. Then I tried to port it to Linux, and I stumbled over pthreads' stupid "theories" about how wrong it is to suspend another thread, etc. So I had to abandon pthreads and use the native Linux system calls (clone()) directly to implement the threads for our server.

Safest way to idle delphi application to wait for timer?

I am writing a Delphi application that will run on my PC 24/7 in the background. It will check whether it has to do some actions or not, wait 30 minutes, check again, and so on.
How can I make sure the application will not overload the CPU or memory by running all the time?
Create a timer to run every 30 minutes, and call your checks/actions from there. Then your application can just sit idle when there is nothing to do.
Alternatively you could create a Scheduled Task that just runs periodically to do this.
The answers about timers are good solutions, and I would add this:
Make sure that the timer event, or the procedure it calls, checks for busy; i.e. when you wake up, make sure the last batch is done before starting a new one. This is easy to miss when things are flowing well, and then you get a situation where things are backed up and the whole system logjams at 8 in the morning because something bad happened at midnight and there are now 16 calls stacked up (or threads, processes, etc.).
So write your timer event like this:
procedure TMyForm.OnMyTimer(Sender: TObject); // hypothetical form/handler names
begin
  MyTimer.Enabled := False;
  try
    DoSomethingForALongTime; // usually runs in 45 seconds, but sometimes takes 45 minutes!
  finally
    MyTimer.Enabled := True; // and (SomeAbortRequest = False) and (SomeHorribleErrorCount = 0);
  end;
end;
The answers about timers are pretty much exactly what you're looking for. As for your question about not overloading the CPU or memory, take a look at your program in the Task Manager. When it's not doing anything, it should sit at a "steady state" of memory, not allocating any more, and using 1% or less of CPU time.
It's safe to let most programs idle for long periods. The VCL knows how to handle the idling without hogging CPU resources for you, and you just need to use a timer to make sure it wakes up and activates its event at the right time.
Most programming languages have a "sleep" function that you can call to make the program stop doing stuff.
You are in control of the memory usage; you can deallocate your memory before going to sleep and reallocate it when coming awake, but...
It might be better to just set up some recurring job. I don't know how to do this, but I suspect there's a way with Windows Scripting Host to just launch your application on whatever schedule you want.
If you want to enforce an absolute state of inactivity, I guess you could use the "sleep" function. Though I'm not sure how it would behave on a reboot. (I guess Windows would report the application as being unresponsive.)
If the application has no main form and just sits in the tray (or is totally invisible), it won't do much. The main message loop will handle all the messages it receives from the OS, but it shouldn't receive many, and the few it does receive it should process (shutdown messages, system parameter change notifications, etc.).
So I think you can just set up a timer and forget about writing code to force your program to stay idle.
If you really want to limit the process's activity to a minimum, you could set the thread priority when you enter/leave the timer's event: set the priority to Normal when you enter the event, and back to Low when leaving it.
You didn't say, but if your application uses more than one thread, that could add to the amount of CPU the OS spends on your process (read up on time slices and thread switches, for example).
The OS may also swap out some of your memory pages, so using less memory and/or reducing memory accesses during the wait time helps too.
So, if you use only one thread and have no additional message loops either, just calling Sleep() could be a good way, as that signals the OS that you don't need a time slice at all for a long while to come.
Avoid YieldThread()/SwitchToThread() and doing your own time-keeping (using Now(), for example), as that would mean lots of thread switching taking place to do... nothing!
Another method could be to use MsgWaitForMultipleObjects with a large timeout (plain WaitForMultipleObjects blocks message processing); the wait wakes when the timeout expires or a message arrives, so your application can still respond to messages.
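For illustration, the shape of that wait (Win32 API, shown from C++; the stop event is a hypothetical handle):

#include <windows.h>

// Wait up to 30 minutes, but wake early for a stop event or any queued
// message, so the app stays responsive. g_stop_event is hypothetical.
void idle_wait(HANDLE g_stop_event) {
    DWORD r = MsgWaitForMultipleObjects(1, &g_stop_event, FALSE,
                                        30 * 60 * 1000, QS_ALLINPUT);
    if (r == WAIT_OBJECT_0)          { /* stop requested */ }
    else if (r == WAIT_OBJECT_0 + 1) { /* pump the pending messages */ }
    else if (r == WAIT_TIMEOUT)      { /* 30 minutes elapsed: run the checks */ }
}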
I would suggest creating a service application (with the startup type set to automatic) and using CreateTimerQueueTimer as your timer.
Memory leaks can be mitigated by reserving your memory requirements up front and pooling classes.
