How to make a thread sleep/block for nanoseconds - delphi

How can I block a thread for nanoseconds or microseconds? The Sleep() function is not an option because it accepts milliseconds, which is obviously not what I need.

Use the TStopwatch class, and write a while loop that spins until the desired number of ticks (100-nanosecond intervals) has elapsed.
property ElapsedTicks : Int64 read GetElapsedTicks;
This will not relinquish control to other threads; it will simply wait in the current thread for the desired period of time. There will be some degree of error; the amount of error will depend on how long it takes Delphi to execute each loop.
Further Reading
How to Accurately Measure Elapsed Time Using High-Resolution Performance Counter
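For illustration, here is a minimal sketch of the same spin-wait idea written in plain C against the Windows QueryPerformanceCounter API (the counter TStopwatch wraps); the helper name and the 100 ns tick unit are assumptions for the sketch, not part of the original answer.

#include <windows.h>

/* Sketch only: busy-wait for a given number of 100-nanosecond ticks by
   spinning on the high-resolution performance counter, the same idea the
   answer above describes with TStopwatch. */
static void SpinWait100nsTicks(LONGLONG ticks)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);            /* counter ticks per second */
    QueryPerformanceCounter(&start);

    /* Convert the requested 100 ns ticks into performance-counter counts. */
    LONGLONG target = (ticks * freq.QuadPart) / 10000000LL;

    do {
        QueryPerformanceCounter(&now);           /* spin, never yield */
    } while ((now.QuadPart - start.QuadPart) < target);
}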

Windows does not support nanosecond sleeps. The thread scheduling used by Windows is much coarser than that; the Windows thread quantum is orders of magnitude longer than a nanosecond.

Related

How much computing time does the kernel need?

I wrote a program for an LED display. The program allows the refresh rate to be set via a web configuration page. To meet the refresh rate I measure the processing time of a loop; at the end I calculate the remaining delay and wait until the next loop.
e.g. a refresh rate of 5 Hz -> 200 milliseconds per loop; 50 milliseconds of computing time results in a 150 millisecond delay.
The ratio of processing time (50 milliseconds) to total time (200 milliseconds) indicates the processor load of my program. But to find the optimal setting I need the actual total processor load, not only that of my program. Since I don't know the real processor load inside delay() (in which Wi-Fi handling etc. is done), I don't really know the overall processor load. In other words, I don't know how much time the system spends doing system tasks during the delay(150).
Is there a way to find out how much of a delay is actually used for system tasks before the processor truly waits?
In other words, I'm looking for a way to get the kernel time within a certain time frame.
Cheers Gabriel
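For reference, a minimal sketch of the pacing loop the question describes, assuming an Arduino-style environment where millis() and delay() are available; update_display() is a hypothetical placeholder for the actual work.

const unsigned long PERIOD_MS = 200;     /* 5 Hz refresh -> 200 ms per loop */

void update_display(void);               /* hypothetical: drive the LED display */

void loop(void)
{
    unsigned long start = millis();

    update_display();                    /* the measured processing step */

    unsigned long busy = millis() - start;       /* processing time this loop */
    if (busy < PERIOD_MS)
        delay(PERIOD_MS - busy);         /* wait out the rest of the period */
}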

Calculate power consumption when the CPU doesn't change to LPM mode in Contiki

I need to calculate the power consumption of the CPU according to this formula:
Power (mW) = cpu * 1.8 / time
where time is the sum of cpu + lpm.
I need to measure at the start and at the end of a certain process; however, the time that passes is too short and the CPU doesn't change to LPM mode, as seen in the following values taken with powertrace_print().
all_cpu all_lpm all_transmit all_listen
116443 1514881 148 1531616
17268 1514881 148 1532440
Calculating the power consumption of the CPU I get 1.8 mW (which is exactly the value of the current draw of the CPU in active mode).
My question is: how do I calculate power consumption in this case?
If the MCU does not go into LPM, then it spends all the time in active mode, so the result of 1.8 mW you get looks correct.
Perhaps you want to ask something different? If you want to measure the time required to execute a specific block of code, you can add RTIMER_NOW() calls at the start and end of the block.
The time resolution of RTIMER_NOW() may be too coarse for short operations. You can use a higher-frequency timer for that, depending on your platform, e.g. read the TBR register for timing if you're compiling for an msp430-based sensor node.
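A minimal sketch of the RTIMER_NOW() measurement suggested above; do_work() is a hypothetical placeholder for the block being timed.

#include <stdio.h>
#include "sys/rtimer.h"      /* Contiki rtimer API: RTIMER_NOW, RTIMER_SECOND */

void do_work(void);          /* hypothetical block of code to be measured */

void measure_block(void)
{
    rtimer_clock_t start = RTIMER_NOW();
    do_work();
    rtimer_clock_t ticks = RTIMER_NOW() - start;

    /* RTIMER_SECOND rtimer ticks correspond to one second, so the elapsed
       time in seconds is ticks / (double)RTIMER_SECOND. */
    printf("block took %u rtimer ticks\n", (unsigned)ticks);
}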

objective-c - NSTimer falling more and more behind

I have an NSTimer (running on the main thread) that is supposed to go off every 0.02 s. However, I notice that as memory usage starts going up (the app captures a frame every tick and stores it in an array), subsequent ticks begin to take more than 0.02 s.
How can I solve this issue? I'm starting to think NSTimer is not suited for high-frequency tasks like this.
As the docs state,
"A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer's firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds."
Since 100 milliseconds = 0.1 seconds and your timer is supposed to run every 0.02 seconds, your timer interval is far shorter than the timer's effective resolution, so your timer can easily get out of sync.

How to get a time-accurate timer in FireMonkey?

I have to display a timer in tenths of a second for a sports competition. I do this using the OnTimer event of a TTimer with the interval set to 100. My routine displays the current min:sec.tenth (e.g. 02:45.7) correctly, but my timer seems to lose about 4 seconds per minute compared to a normal clock.
Is there a better way to get a time-accurate timer in Delphi XE2 (or XE3)?
You can use a timer to display the current value of the clock, but use a different approach to calculate the elapsed time.
You have to know that Windows timers are not time accurate, and even if you set one to elapse every 100 milliseconds, it can take longer to fire the OnTimer event, and it can even miss some intervals if for some reason it elapses two or more times before your application processes it.
You can, for example, use the system high-resolution performance counter to track times with nanosecond accuracy.
You can also use the Delphi TStopwatch class, which encapsulates the system calls and falls back to another method (GetTickCount) if the high-resolution performance counter is not available on your machine.
Also take a look at the How to Accurately Measure Elapsed Time Using High-Resolution Performance Counter delphi.about.com article.
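For illustration, a minimal sketch of that approach in plain C on Windows (not FireMonkey code): the periodic timer only triggers a redraw, and the displayed value is derived from the high-resolution performance counter rather than from counting timer ticks. The function names are assumptions for the sketch.

#include <windows.h>
#include <stdio.h>

static LARGE_INTEGER g_freq, g_start;

void competition_start(void)
{
    QueryPerformanceFrequency(&g_freq);
    QueryPerformanceCounter(&g_start);
}

/* Called from the (inaccurate) 100 ms timer event: recompute the display
   from the elapsed high-resolution counter, so timer jitter never
   accumulates into the shown time. */
void on_timer_tick(void)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    double seconds = (double)(now.QuadPart - g_start.QuadPart) / (double)g_freq.QuadPart;
    int    minutes = (int)(seconds / 60.0);
    double rest    = seconds - minutes * 60.0;

    printf("%02d:%04.1f\n", minutes, rest);   /* e.g. 02:45.7 */
}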

Why does epoll_wait only provide a huge 1 ms timeout?

epoll_wait, select and poll all provide a timeout. However, with epoll it's at a coarse resolution of 1 ms; select and ppoll are the only ones providing sub-millisecond timeouts.
That would mean doing other things at 1 ms intervals at best. I could do a lot of other things within 1 ms on a modern CPU.
So to do other things more often than every 1 ms, I actually have to provide a timeout of zero (essentially disabling it), and I'd probably add my own usleep somewhere in the main loop to stop it chewing up too much CPU.
So the question is: why is the timeout in milliseconds, when there is clearly a case for a higher-resolution timeout?
Since you are on Linux, instead of providing a zero timeout value and manually calling usleep in the loop body, you could simply use the timerfd API. This essentially lets you create a timer (with a resolution finer than 1 ms) associated with a file descriptor, which you can add to the set of monitored descriptors.
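A minimal sketch of that timerfd approach, with a 250 µs period chosen purely for illustration: the timer is just another file descriptor in the epoll set, so epoll_wait can block indefinitely and still wake up with sub-millisecond regularity.

#include <stdint.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int epfd = epoll_create1(0);
    int tfd  = timerfd_create(CLOCK_MONOTONIC, 0);

    /* 250 microsecond periodic timer (illustrative value). */
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 250000 },   /* first expiry */
        .it_interval = { .tv_sec = 0, .tv_nsec = 250000 },   /* period       */
    };
    timerfd_settime(tfd, 0, &its, NULL);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = tfd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);

    for (;;) {
        struct epoll_event events[8];
        int n = epoll_wait(epfd, events, 8, -1);   /* block until something is ready */
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == tfd) {
                uint64_t expirations;
                read(tfd, &expirations, sizeof expirations);   /* acknowledge the timer */
                /* ... sub-millisecond periodic work goes here ... */
            }
        }
    }
}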
The epoll_wait interface just inherited a timeout measured in milliseconds from poll. While it doesn't make sense for poll to wait for less than a millisecond, because of the overhead of adding the calling thread to all the wait sets, it does make sense for epoll_wait: a call to epoll_wait never requires putting the calling thread onto more than one wait set, its calling overhead is very low, and on rare occasions it can make sense to block for less than a millisecond.
I'd recommend just using a timing thread. Most of what you would want to do can just be done in that timing thread, so you won't need to break out of epoll_wait. If you do need to make a thread return from epoll_wait, just send a byte to a pipe that thread is polling and the wait will terminate.
In Linux 5.11, an epoll_pwait2 API was added, which takes a struct timespec as its timeout. This means you can now wait with nanosecond precision.
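A minimal sketch of the same wait using epoll_pwait2 (requires kernel 5.11+ and a glibc that exposes the wrapper), assuming an already-populated epfd; the 250 µs value is again just illustrative.

#define _GNU_SOURCE
#include <sys/epoll.h>
#include <time.h>

int wait_250us(int epfd, struct epoll_event *events, int maxevents)
{
    /* Timeout expressed as a struct timespec: 0 s + 250,000 ns = 250 us. */
    struct timespec timeout = { .tv_sec = 0, .tv_nsec = 250000 };
    return epoll_pwait2(epfd, events, maxevents, &timeout, NULL);
}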
