On iOS there is the CACurrentMediaTime() function, which provides a timestamp with nanosecond precision.
This function is used by React Native's global.nativePerformanceNow function.
What I don't understand is what the returned value represents, i.e. from what point in time the timestamp is measured.
global.nativePerformanceNow() seems to call the native functions CACurrentMediaTime() on iOS, and clock_gettime() on Android.
It seems like global.nativePerformanceNow() returns a floating-point number representing the number of milliseconds since device boot on both platforms, but this isn't documented as far as I can find.
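For illustration, the Android side would presumably boil down to something like the following sketch, assuming clock_gettime() with CLOCK_MONOTONIC and a conversion to fractional milliseconds (the exact clock ID React Native passes is my assumption, not something I verified in its source):

// Sketch only: the kind of "milliseconds since an arbitrary monotonic start"
// value a clock_gettime()-based implementation would produce on Android/Linux.
// CLOCK_MONOTONIC is my assumption; React Native may pass a different clock ID.
#include <cstdio>
#include <time.h>

static double performance_now_ms() {
    struct timespec ts {};
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

int main() {
    double t0 = performance_now_ms();
    // ... the work being timed would go here ...
    double t1 = performance_now_ms();
    std::printf("elapsed: %.3f ms\n", t1 - t0);
    return 0;
}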
Related
It is hard to make sense of the CAN logs without milliseconds. Is there any function that prints milliseconds in the Write window? I've already tried the getLocalTimeString() function, but it only prints the time down to whole seconds.
Try using the function timeNowNS(), which returns the simulation time in nanoseconds as a float. Alternatively, use timeNowInt64(). Multiply the returned value by the appropriate factor to get seconds or milliseconds as you see fit.
timeNowNS() returns the simulation time in nanoseconds, whereas getLocalTime()/getLocalTimeString() returns the system time.
Combining the two makes no sense, as the result won't be accurate.
To get accurate time measurements on iOS, mach_absolute_time() should be used. Or CACurrentMediaTime(), which is based on mach_absolute_time(). This is documented in this Apple Q&A, and also explained in several StackOverflow answers (e.g. https://stackoverflow.com/a/17986909, https://stackoverflow.com/a/30363702).
When does the value returned by mach_absolute_time() wrap around? When does the value returned by CACurrentMediaTime() wrap around? Does this happen in any realistic timespan? The return value of mach_absolute_time() is of type uint64, but I'm unsure about how this maps to a real timespan.
The document you reference notes that mach_absolute_time is CPU dependent, so we can't say how much time must elapse before it wraps. On the simulator, mach_absolute_time is in nanoseconds, so if it wraps at UInt64.max, that translates to 585 years. On my iPhone 7+, it's 24,000,000 mach_absolute_time ticks per second, which translates to roughly 24 thousand years. Bottom line, the theoretical maximum amount of time captured by mach_absolute_time will vary based upon the CPU, but you won't ever encounter this in any practical application.
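For reference, the arithmetic behind those two figures is easy to sanity-check (a small sketch; the 24,000,000 ticks per second is just the value observed on my device, not a general constant):

// Back-of-the-envelope check of the wrap-around figures quoted above.
#include <cstdint>
#include <cstdio>

int main() {
    const double max_ticks = static_cast<double>(UINT64_MAX);   // 2^64 - 1
    const double seconds_per_year = 365.25 * 24 * 60 * 60;

    // Simulator: 1 tick = 1 ns, so wrap-around is roughly 585 years away.
    std::printf("1 ns ticks:   %.0f years\n", max_ticks / 1e9 / seconds_per_year);

    // iPhone 7+: ~24,000,000 ticks per second, so roughly 24 thousand years.
    std::printf("24 MHz ticks: %.0f years\n", max_ticks / 24e6 / seconds_per_year);
    return 0;
}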
For what it's worth, consistent with those various posts you found, the CFAbsoluteTimeGetCurrent documentation warns that:
Repeated calls to this function do not guarantee monotonically increasing results. The system time may decrease due to synchronization with external time references or due to an explicit user change of the clock.
So, you definitely don't want to use NSDate/Date or CFAbsoluteTimeGetCurrent if you want accurate elapsed times. Neither ensures monotonically increasing values.
In short, when I need that sort of behavior, I generally use CACurrentMediaTime, because it enjoys the benefits of mach_absolute_time but converts the value to seconds for me, which makes it very simple to use. And neither it nor mach_absolute_time is going to wrap around in any realistic time period.
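To make that concrete, here is a minimal sketch of what CACurrentMediaTime saves you from writing yourself: mach_absolute_time() plus the timebase conversion to nanoseconds. The commented placeholder in the middle stands for whatever code you want to time.

// Sketch: elapsed-time measurement with mach_absolute_time() on Apple platforms.
// CACurrentMediaTime() (QuartzCore) wraps the same clock and returns seconds as a double.
#include <mach/mach_time.h>
#include <cstdint>
#include <cstdio>

int main() {
    mach_timebase_info_data_t tb{};
    mach_timebase_info(&tb);                  // ticks -> nanoseconds conversion ratio

    uint64_t start = mach_absolute_time();
    // ... code being timed goes here ...
    uint64_t finish = mach_absolute_time();

    double ns = static_cast<double>(finish - start) * tb.numer / tb.denom;
    std::printf("elapsed: %.6f s\n", ns / 1e9);
    return 0;
}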
I am learning Maxima, but I am having a hard time finding how to obtain the CPU time used by a call to integrate when it is inside a loop construct.
The problem is that the function time(%o1) gives the CPU time used to compute line %o1.
But inside a loop, the whole loop is taken as one operation, so I can't use time() to time a single call.
Here is an example:
lst:[sin(x),cos(x)];
for i thru length(lst) do
(
result : integrate( lst[i],x)
);
I want to find the CPU time used for each call to integrate, not the CPU time used for the whole loop. Adding showtime: true$ does not really help. I need to obtain the CPU time used by each call and save the value to a variable.
Is there a way in Maxima to find CPU time used by each call to integrate in the above loop?
Using wxMaxima 15.04.0 on Windows 7.
Maxima version: 5.36.1
Lisp: SBCL 1.2.7
I was looking for something like Mathematica's AbsoluteTiming function.
Instead of elapsed real time, which on my GCL Maxima seems to return absolute real time in seconds, try the Lisp function
GET-INTERNAL-RUN-TIME
which you can call from the Maxima command line with
?get-internal-run-time();
This should return run time on any Common Lisp system. In GCL, it is in units of 100 per second.
Perhaps the function you need is elapsed_real_time.
EDIT: you would use it like this:
for i ...
do block ([t0, t1],
t0 : elapsed_real_time (),
integrate (...),
t1 : elapsed_real_time (),
time[i] : t1 - t0);
I'm a non-programmer trying to assess the time spent in (OpenCV) functions. We have an AD converter which comes with a counter that is able to count external signals (e.g. from a function generator) with a frequency of 1 MHz, i.e. 1 µs resolution. The current counter value can be queried with a function cbIn32(..., unsigned long *pointertovalue).
So my idea was to query the counter value before and after calling the function of interest and then calculate the difference. However, doubts came up when I calculated the difference without a function call in between, which revealed relatively high fluctuations (values between 80 and 400 µs or so). I wondered whether calculating the average time for calling cbIn32() (approx. 180 µs) and subtracting this from the putative time spent in the function of interest would be a valid solution.
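In code, my idea looks roughly like the following sketch. read_counter() and opencv_call() are just stand-ins so it compiles (the real read_counter() would wrap the cbIn32() call on the 1 MHz counter), and 180 µs is the average overhead I measured:

// Sketch of the "read counter before and after, subtract the read overhead" idea.
#include <cstdio>

static unsigned long fake_ticks = 0;
static unsigned long read_counter() { return fake_ticks += 200; }   // placeholder for cbIn32()
static void opencv_call() { fake_ticks += 5000; }                   // pretend ~5 ms of work

int main() {
    const unsigned long overhead_us = 180;   // measured average cost of one counter read

    unsigned long before = read_counter();
    opencv_call();
    unsigned long after = read_counter();

    // At 1 MHz each tick is 1 us; subtract the approximate cost of the read itself.
    unsigned long elapsed_us = (after - before) - overhead_us;
    std::printf("approx. elapsed: %lu us\n", elapsed_us);
    return 0;
}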
So my first two questions:
Is that approach generally feasible or useless?
Where do the fluctuations come from?
Alternatively, we tried using getTickCount(), which seemed to deliver reasonable values. But checking forums revealed that it has a low resolution of about 10 ms, which would be unsatisfactory (100 µs resolution would be appreciated). However, the values we got were in the sub-ms range.
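For reference, this is how I'm computing the elapsed time from those tick counts, assuming it is OpenCV's cv::getTickCount()/cv::getTickFrequency() pair I'm actually calling (some_opencv_work() below is just a hypothetical stand-in for the call being timed):

// Sketch: converting cv::getTickCount() differences into seconds/microseconds.
#include <opencv2/core/core.hpp>
#include <cstdint>
#include <cstdio>

static void some_opencv_work() { /* the OpenCV call being timed */ }

int main() {
    int64_t t0 = cv::getTickCount();
    some_opencv_work();
    int64_t t1 = cv::getTickCount();

    // getTickFrequency() returns ticks per second.
    double seconds = static_cast<double>(t1 - t0) / cv::getTickFrequency();
    std::printf("elapsed: %.1f us\n", seconds * 1e6);
    return 0;
}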
This brings me to the next questions:
How can the time assessed for a function with getTickCount() be in the microseconds range, when the resolution is around 10 ms?
Should I trust the obtained values or not?
I also tried it with gprof, but it gave me "no time accumulated", although I am sure that the time spent in a function containing OpenCV-related calls is at least a few milliseconds. I even tried rebuilding OpenCV with ENABLE_PROFILING=ON, but got the same result. I read somewhere that you need to build static OpenCV libraries to enable profiling, but I am not sure whether this would improve the situation. So the question here is:
What do I have to do so that gprof also "sees" opencv functions?
The next alternative would be the QueryPerformanceCounter() function of the Windows API. I don't know how to use it, but I would fight my way through if you recommend it. Questions about that approach:
Will it be problematic because of multiple cores?
If yes, is there an "easy" way to handle that problem?
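From what I can gather, the basic usage would look roughly like the following sketch (heavy_function() is just a hypothetical stand-in for the call I want to time; whether the multi-core issue needs extra handling is exactly what I'm asking):

// Sketch: a typical QueryPerformanceCounter measurement on Windows.
#include <windows.h>
#include <cstdio>

static void heavy_function() { /* code under test */ }

int main() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    QueryPerformanceCounter(&start);
    heavy_function();
    QueryPerformanceCounter(&end);

    double us = static_cast<double>(end.QuadPart - start.QuadPart) * 1e6 / freq.QuadPart;
    std::printf("elapsed: %.1f us\n", us);
    return 0;
}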
I also tried it with verysleepy, but it somehow exits too early (it worked fine with another .exe).
Newbie-friendly answers would be very much appreciated. My goal is to find the easiest approach with the highest precision. I'm working on Win7 64-bit, using Eclipse with MinGW.
Thx for your help...
The Erlang documentation says:
erlang:now()
[...] It is also guaranteed that subsequent calls to this BIF returns continuously increasing values. Hence, the return value from now() can be used to generate unique time-stamps, and if it is called in a tight loop on a fast machine the time of the node can become skewed. [...]
I find this a little strange (especially considering that the granularity is one microsecond). Why was it specced this way?
Because it can then be used to generate unique timestamps. The os module has a variant (os:timestamp/0) that does not make that guarantee.