Problem I'm trying to solve: My program uses System.Win.ScktComp.TServerSocket to communicate with another local process over Ethernet. Between receiving a packet from the local process and sending the response, 100 ms elapse--which shouldn't take that long. I'm trying to step through my program with the debugger to see where that 100 ms is being spent.
The problem is that if I get the current time while I'm in the debugger, it will obviously include the time spent paused in the debugger. Another problem is that the relevant part of my app is TTimer- and event-driven, so when a routine returns I can't be sure which routine will be called next.
My attempt: I can forgo the debugger and print the current time everywhere, e.g. in all the OnTimer procedures and other event handlers.
Much better solution: Step through with the debugger, getting the CPU time (which isn't affected by the time spent paused in the debugger) here and there to pinpoint where that 100ms is being lost.
I don't believe that you are tackling your problem the correct way, and have made that point in comments. Leaving that aside, the function that you are asking for is GetProcessTimes.
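For reference, a minimal sketch of how it might be wrapped in Delphi (the helper names are mine; GetProcessTimes and GetCurrentProcess come from the Windows unit):

uses Windows;

function FileTimeToInt64(const FT: TFileTime): Int64;
begin
  Result := (Int64(FT.dwHighDateTime) shl 32) or FT.dwLowDateTime;
end;

function CurrentProcessCpuTimeMs: Int64;
var
  CreationTime, ExitTime, KernelTime, UserTime: TFileTime;
begin
  Result := 0;
  if GetProcessTimes(GetCurrentProcess, CreationTime, ExitTime,
    KernelTime, UserTime) then
    // FILETIME counts 100-nanosecond intervals; 10,000 of them make one millisecond.
    Result := (FileTimeToInt64(KernelTime) + FileTimeToInt64(UserTime)) div 10000;
end;

Call it before and after the suspect stretch of code and subtract: the difference is CPU time actually consumed, and it does not advance while the process sits paused at a breakpoint.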
I'm trying to ... see where that 100ms is being spent.
A debugger will not be able to tell you that very easily. You need to use a profiler instead, like AQTime or similar, and let it clock your code in real-time and report the results, such as how much time was spent in specific functions and class methods.
Related
I have a BLE value set, after which I need to wait 6.25 ms for the other device to write into its buffer.
So I have been using usleep(6250). As I understand it, usleep takes a value in microseconds, so I am treating 6250 microseconds as 6.25 ms. Is it the right API to use? There are various posts saying usleep should never be used on iOS, etc. I am not able to see the difference in wait time when debugging with a breakpoint, as I think the wait is too short to be noticeable, unlike with sleep(2). Please confirm whether this is the right API to use and whether I am passing the right value to it. If not, please suggest an alternative.
In general, you shouldn't sleep a thread ever. That blocks the thread and wastes system resources.
Instead, use dispatch_after() or a similar API.
As well, do you really need to wait at all? Or does the device send some kind of acknowledgement that the write was successful? I.e. is there some signal from the device that you can react to in order to know that the write happened?
I have an application written in Delphi 5, which runs fine on most (windows) computers.
However, occasionally the program begins to load (you can see it in task manager, uses about 2.5-3 MB of memory), but then stalls for a number of minutes, sometimes hours.
If you leave it long enough, the FormShow event will eventually occur and the application window will pop up, but it seems like some other application or Windows setting is preventing it from initially using all the memory it needs to run (approx. 35-40 MB).
Also, on some of my client's workstations, if they have MS Outlook running, they can close it and my application will pop up. Does anyone know what is going on here, and/or how to fix it?
Since nobody has given a better answer I'll take a stab at how to solve this:
There's something in your initialization that is locking it up somehow. Without seeing your code I do not know what it is so I'll only address how to go about finding it:
You need to log what you accomplish during startup. If you have any kind of screen showing, I find the window title useful for this, but it sounds like you don't--that means you need to write the log to a file. Let it hang, kill the task, and see how far it got.
Note that this means you need to cleanly write your data despite an abnormal program termination. How to go about this:
A) Append, write your line, close.
B) Write your line, then flush the file handle.
C) Initially write your file to consist of a large number of blanks--ensure this is larger than the actual log will be. Write your line. In case of abnormal termination it will retain the original larger file size.
I would write a timestamp on every log item so you can see if it's just processing something too slowly.
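A minimal sketch of option (A), with a timestamp on every line (the log path is a placeholder; FileExists, FormatDateTime and Now come from SysUtils):

procedure LogLine(const Msg: string);
const
  LogFileName = 'C:\Temp\startup.log'; // placeholder path
var
  F: TextFile;
begin
  AssignFile(F, LogFileName);
  if FileExists(LogFileName) then
    Append(F)   // reopen for appending so earlier lines survive
  else
    Rewrite(F);
  try
    WriteLn(F, FormatDateTime('hh:nn:ss.zzz', Now) + '  ' + Msg);
  finally
    CloseFile(F); // close immediately so the line is on disk even if we hang later
  end;
end;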
If examining the log shows you where the problem is, fine. If, as usually happens, it's not enough, put a bunch more logging between the last item that did get logged and the next one that didn't--I've been known to log every line when hunting a cryptic problem that only happened on someone else's system.
If finding the line isn't enough to pinpoint the problem also dump the value of relevant variables.
Finally, if such intense scrutiny makes the bug go away start looking for an uninitialized variable. (While a memory stomp is also an option I doubt it's the culprit here.)
Time Profiler seems to only show function calls sorted by their CPU time ranking.
However, sometimes I'd like to see the call sequence (across multiple threads) during a particular run.
Do I need a custom instrument to achieve that instead?
I played around with the checkboxes that Time Profiler provides in its UI, but nothing helped. I had to resort to good old logging, which is obviously inefficient.
It's a sampling profiler, so it only shows you the calls that were executing when it sampled; you can't see every call that happened over a period of time. That said, you can see the call stack that leads up to each sampled call if you show the 'Extended Detail Pane' over on the right, which may give you a good idea of what happened.
See the Apple documentation.
As an alternative solution, see How to log all methods used in iOS app
There is a tool that analyses instruments trace files and does what you're describing at:
http://timeanalyzer.excelsis.com
Currently, it only works on the main thread, and only for the first 30 seconds of profiling. It shows a function call stack waterfall: X axis is time, Y axis is the call stack.
This has happened to me on more than one occasion and has led to many lost hours chasing a ghost. As typical, when I am debugging some really difficult timing-related code I start adding tons of OutputDebugString() calls, so I can get a good picture of the sequence of related operations. The problem is, the Delphi 6 IDE seems to be able to only handle that situation for so long. I'll use a concrete example I just went through to avoid generalities (as much as possible).
I spent several days debugging my inter-thread semaphore locking code along with my DirectShow timestamp calculation code that was causing some deeply frustrating problems. After having eliminated every bug I could think of, I still was having a problem with Skype, which my application sends audio to.
After about 10 seconds, the delay between my talking and hearing my voice come out of Skype on the second PC I was using for testing (the far end of the call) started to grow. At around 20-30 seconds the delay started to grow exponentially, and at that point it triggered code I have that checks whether a critical section is being held too long.
Fortunately it wasn't too late at night, and having been through this before, I decided to stop relentlessly tracing and turned off the majority of the OutputDebugString() calls. Thankfully I had most of them wrapped in a conditional compiler define, so it was easy to do. The instant I did this the problems went away, and it turned out my code was working fine.
So it looks like the Delphi 6 IDE starts to really bog down when the amount of OutputDebugString() traffic is above some threshold. Perhaps it's just the task of adding strings to the Event Log debugger pane, which holds all the OutputDebugString() reports. I don't know, but I have seen similar problems in my applications when a TMemo or similar control starts to contain too many strings.
What have those of you out there done to prevent this? Is there a way of clearing the Event Log via some method call or at least a way of limiting its size? Also, what techniques do you use via conditional defines, IDE plug-ins, or whatever, to cope with this situation?
A similar problem happened to me before with Delphi 2007. Disable event viewing in the IDE and instead use DebugView from Sysinternals.
I hardly ever use OutputDebugString. I find it hard to analyze the output in the IDE and it takes extra effort to keep several sets of multiple runs.
I really prefer a good logging component suite (CodeSite, SmartInspect) and usually log to various files. Standard files for example are "General", "Debug" (standard debug info that I want to collect from a client installation as well), "Configuration", "Services", "Clients". These are all set up to "overflow" to a set of numbered files, which allows you to keep the logs of several runs by simply allowing more numbered files. Comparing log info from different runs becomes a whole lot easier that way.
In the situation you describe I would add debug statements that log to a separate logfile. For example "Trace". The code to make "Trace" available is between conditional defines. That makes turning it on pretty simple.
To avoid leaving in these extra debug statements, I tend to make the changes that turn on the "Trace" log without checking them out from source control. That way, the build server's compiler will throw "identifier not defined" errors on any statements unintentionally left in. If I want to keep these extra statements, I either change them to go to the "Debug" log or put them between conditional defines.
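A rough sketch of such a conditionally compiled trace helper (the TRACE_LOG symbol and the Trace routine are illustrative names, not from the original code; OutputDebugString needs the Windows unit, FormatDateTime/Now need SysUtils):

{$IFDEF TRACE_LOG}
procedure Trace(const Msg: string);
begin
  // Goes to the separate "Trace" destination; shown with OutputDebugString here,
  // but it could just as well write to the "Trace" log file.
  OutputDebugString(PChar(FormatDateTime('hh:nn:ss.zzz', Now) + ' ' + Msg));
end;
{$ENDIF}

// at the call sites:
{$IFDEF TRACE_LOG}
Trace('waiting on audio semaphore');
{$ENDIF}

With TRACE_LOG undefined, the helper and the wrapped calls compile away to nothing, and any unwrapped Trace(...) calls accidentally left behind fail the build--which is exactly the safety net described above.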
The first thing I would do is make certain that the problem is what you think it is. It has been a long time since I've used Delphi, so I'm not sure about the IDE limitations, but I'm a bit skeptical that the event log will start bogging down exponentially over time with the same number of debug strings being written in a period of 20-30 seconds. It seems more likely that the number of debug strings being written is increasing over time for some reason, which could indicate a bug in your application control flow that is just not as obvious with the logging disabled.
To be sure, I would try writing a simple application that just runs in a loop writing out debug strings in chunks of 100 or so, recording the time each chunk takes, and see if the time starts to increase significantly over a 20-30 second timespan.
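Something along these lines could serve as that test (a sketch; the chunk count and output file name are arbitrary):

uses Windows, SysUtils, Classes;

procedure MeasureOutputDebugStringThroughput;
var
  Times: TStringList;
  Chunk, I: Integer;
  Start: Cardinal;
begin
  Times := TStringList.Create;
  try
    for Chunk := 1 to 300 do
    begin
      Start := GetTickCount;
      for I := 1 to 100 do
        OutputDebugString(PChar(Format('chunk %d line %d', [Chunk, I])));
      Times.Add(Format('chunk %d: %d ms', [Chunk, GetTickCount - Start]));
    end;
    Times.SaveToFile('odstimes.txt');
  finally
    Times.Free;
  end;
end;

If the per-chunk times climb steadily while the debugger's Event Log is receiving the output, the IDE is likely the bottleneck; if they stay flat, look for growing log traffic in the application itself.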
If you do verify that this is the problem - or even if it's not - then I would recommend using some type of logging library instead. OutputDebugString really loses its effectiveness when you use it for massive log dumps like that. Even if you do find a way to reset or limit the output window, you'd be losing all of that logging data.
IDE Fix Pack has an optimisation to improve the performance of OutputDebugString:
The IDE’s Debug Log View also got an optimization. The debugger now updates the Log View only when the IDE is idle. This allows the IDE to stay responsive when hundreds of OutputDebugString messages or other debug messages are written to the Debug Log View.
Note that this only runs on Delphi 2007 and above.
I am writing a Delphi application that will run on my PC 24/7 in the background. It will check whether it has to perform some actions or not, wait 30 minutes, check again, and so on.
How can I make sure the application will not overload the CPU or memory as a result of running all the time?
Create a timer to run every 30 minutes, and call your checks/actions from there. Then your application can just sit idle when there is nothing to do.
Alternatively you could create a Scheduled Task that just runs periodically to do this.
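For the timer route, a minimal sketch (the timer and handler names are made up; TTimer.Interval is in milliseconds):

// e.g. in the form's OnCreate handler
CheckTimer.Interval := 30 * 60 * 1000;  // 30 minutes
CheckTimer.OnTimer  := CheckTimerTimer; // the handler that does the checks/actions
CheckTimer.Enabled  := True;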
The answers about timers are good solutions, and I add this:
Make sure that the timer event, or the procedure it calls, checks for busy; i.e. when you wake up, make sure that the last batch is done before starting a new one. This is easy to miss when things are flowing well, and then you end up in a situation where things are backed up and the whole system logjams at 8 in the morning because something bad happened at midnight and now there are 16 calls stacked up (or threads, processes, etc.).
So write your timer event like this:
procedure TMyForm.MyTimerTimer(Sender: TObject);
begin
  MyTimer.Enabled := False;  // prevent re-entry while the work is still running
  try
    DoSomethingForALongTime; // usually runs in 45 seconds, but sometimes takes 45 minutes!
  finally
    MyTimer.Enabled := True; // and (SomeAbortRequest = False) and (SomeHorribleErrorCount = 0);
  end;
end;
The answers about timers are pretty much exactly what you're looking for. As for your question about not overloading the CPU or memory, take a look at your program in the Task Manager. When it's not doing anything, it should sit at a "steady state" of memory, not allocating any more, and using 1% or less of CPU time.
It's safe to let most programs idle for long periods. The VCL knows how to handle the idling without hogging CPU resources for you, and you just need to use a timer to make sure it wakes up and activates its event at the right time.
Most programming languages have a "sleep" function that you can call to make the program stop doing stuff.
You are in control of the memory usage; you can deallocate your memory before going to sleep and reallocate it when coming awake, but...
It might be better to just set up some recurring job. I don't know how to do this, but I suspect there's a way with Windows Scripting Host to just launch your application on whatever schedule you want.
If you want to enforce an absolute state of inactivity, I guess you could use the "sleep" function. Though I'm not sure how it would behave on a reboot. (I guess Windows would report the application as being unresponsive.)
If the application has no main form and just sits in the tray (or is totally invisible), it won't do much. The main message loop will handle all the messages it receives from the OS, but it shouldn't receive many, and the few messages it does receive it should process (shutdown messages, system parameter change notifications, etc.).
So, I think you could just set up a timer and forget about writing code to force your program to stay idle.
If you really want to keep that process's activity to a minimum, you could change the thread priority when you enter/leave the timer's event: set the priority to "normal" when you enter the event, and set it back to "low" when leaving it.
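A sketch of that idea (the handler and DoPeriodicWork are placeholders; SetThreadPriority, GetCurrentThread and the priority constants come from the Windows unit):

procedure TMyForm.CheckTimerTimer(Sender: TObject);
begin
  // run the actual work at normal priority...
  SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_NORMAL);
  try
    DoPeriodicWork;
  finally
    // ...then drop back down while the application sits idle
    SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_LOWEST);
  end;
end;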
You didn't say, but if your application uses more than one thread, this could add to the amount of CPU the OS spends on your process (read up on time slices and thread switches, for example).
The OS may also swap out some of your memory pages, so using less memory and/or making fewer memory accesses during the "wait time" helps too.
So, if you use only one thread and have no additional message-loops either, just calling Sleep() could be a good way, as that will signal the OS that you don't need a time slice at all for a long while to come.
Avoid YieldThread()/SwitchToThread() and your own time-keeping (using Now() for example) as that would mean lots of thread-switching is taking place, to do .... nothing!
Another method could be to use MsgWaitForMultipleObjects with a large timeout; unlike a plain WaitForMultipleObjects, it also returns when a message arrives, so your application can still respond to messages.
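A rough sketch of that approach (the stop event, DoPeriodicWork and the 30-minute timeout are illustrative; Application.ProcessMessages needs the Forms unit):

var
  StopEvent: THandle;
  WaitResult: DWORD;
begin
  StopEvent := CreateEvent(nil, True, False, nil);
  try
    repeat
      WaitResult := MsgWaitForMultipleObjects(1, StopEvent, False,
        30 * 60 * 1000, QS_ALLINPUT);
      if WaitResult = WAIT_TIMEOUT then
        DoPeriodicWork                 // the 30 minutes elapsed
      else if WaitResult = WAIT_OBJECT_0 + 1 then
        Application.ProcessMessages;   // a message arrived; pump it
    until WaitResult = WAIT_OBJECT_0;  // the stop event was signalled
  finally
    CloseHandle(StopEvent);
  end;
end;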
I would suggest creating a service application (set startup type as automatic) and use CreateTimerQueueTimer as your timer.
Memory leaks can be mitigated by reserving memory requirements/pooling classes.
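For the timer-queue part, a rough sketch (the callback name and the 30-minute period are illustrative; the API declaration is spelled out in case your Delphi version's Windows unit doesn't already provide it):

type
  TWaitOrTimerCallback = procedure(Param: Pointer; TimerOrWaitFired: ByteBool); stdcall;

function CreateTimerQueueTimer(out hNewTimer: THandle; TimerQueue: THandle;
  Callback: TWaitOrTimerCallback; Parameter: Pointer;
  DueTime, Period: DWORD; Flags: ULONG): BOOL; stdcall;
  external kernel32 name 'CreateTimerQueueTimer';

procedure PeriodicCheck(Param: Pointer; TimerOrWaitFired: ByteBool); stdcall;
begin
  // do the periodic checks/actions here (runs on a thread-pool thread)
end;

var
  TimerHandle: THandle;
begin
  // TimerQueue = 0 selects the default process timer queue;
  // DueTime = 0 fires immediately, then every 30 minutes.
  CreateTimerQueueTimer(TimerHandle, 0, PeriodicCheck, nil, 0, 30 * 60 * 1000, 0);
end;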