Confused over the precision of Lua's os.clock

I thought Lua's os.clock() returned time in seconds. But the documentation here https://www.lua.org/pil/22.1.html has this example:
local x = os.clock()
local s = 0
for i=1,100000 do s = s + i end
print(string.format("elapsed time: %.2f\n", os.clock() - x))
which rounds the result to 2 decimal places. Does os.clock() return seconds.milliseconds?
Also, running this in Lua gives:
> print(os.clock())
0.024615
What are these decimal places?

os.clock and os.time do not measure the same sort of time.
os.time deals with "wall-clock" time, the sort of time humans use.
os.clock is a counter reporting CPU time. The decimal number you get from os.clock is the number of seconds of CPU time the current task has consumed; the fractional digits are simply sub-second CPU time, with whatever resolution the platform's underlying C clock() provides. CPU time has no correlation to wall-clock time other than using the same base unit (seconds).
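A minimal sketch of the difference (the loop bound is arbitrary): the busy loop below advances both clocks, whereas a program that sleeps or waits on input consumes wall-clock time but almost no CPU time.
local cpu0, wall0 = os.clock(), os.time()
local s = 0
for i = 1, 5e7 do s = s + i end -- busy work: advances both clocks
print(string.format("CPU time:  %.6f s", os.clock() - cpu0))
print(string.format("wall time: %d s", os.time() - wall0))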


Parse time string to hours, minutes and seconds in Lua

I am currently working on a plugin for grandMA2 lighting control using Lua. I need the current time. The only way to get the current time is the following function:
gma.show.getvar('TIME')
which always returns the current system time, which I then store in a variable. An example return value is "12h54m47.517s".
How can I separate the hours, minutes and seconds into 3 variables?
If os.date is available (and matches gma.show.getvar('TIME')), this is trivial:
If format starts with '!', then the date is formatted in Coordinated Universal Time. After this optional character, if format is the string "*t", then date returns a table with the following fields: year, month (1–12), day (1–31), hour (0–23), min (0–59), sec (0–61, due to leap seconds), wday (weekday, 1–7, Sunday is 1), yday (day of the year, 1–366), and isdst (daylight saving flag, a boolean). This last field may be absent if the information is not available.
local time = os.date('*t')
local hour, min, sec = time.hour, time.min, time.sec
This does not provide sub-second precision, though.
Otherwise, parsing the time string is a typical task for string.match and tonumber:
local hour, min, sec = gma.show.getvar('TIME'):match('^(%d+)h(%d+)m(%d*%.?%d*)s$')
-- This is usually not needed, as Lua will just coerce strings to numbers
-- as soon as you start doing arithmetic on them;
-- it is still good practice to convert the variables to the proper type, though
-- (and it becomes relevant when you compare them, use them as table keys,
-- or call strict functions that check their argument types).
hour, min, sec = tonumber(hour), tonumber(min), tonumber(sec)
Pattern explanation:
^ and $ pattern anchors: Match the full string (and not just part of it), making the match fail if the string does not have the right format.
(%d+)h: Capture hours: one or more digits followed by a literal h.
(%d+)m: Capture minutes: one or more digits followed by a literal m.
(%d*%.?%d*)s: Capture seconds: zero or more digits, followed by an optional dot, followed by again zero or more digits, finally ending with a literal s. I do not know the specifics of the format and whether something like .1s, 1.s or 1s is occasionally emitted, but Lua's tonumber supports all of these, so there should be no issue. Note that this is slightly too permissive: it will also match . (just a dot) and an s without any leading digits. You might want (%d+%.?%d+)s instead to force digits before and after the dot.
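Applied to the example value from the question, this gives:
local hour, min, sec = ("12h54m47.517s"):match('^(%d+)h(%d+)m(%d*%.?%d*)s$')
print(hour, min, sec) --> 12  54  47.517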
Let's do it with the string method gsub():
local ts = gma.show.getvar('TIME')
local hours = ts:gsub('h.*', '') -- drop everything from the 'h' onwards
local mins = ts:gsub('.*%f[^h]', ''):gsub('%f[m].*', '') -- keep what is between 'h' and 'm'
local secs = ts:gsub('.*%f[^m]', ''):gsub('%f[s].*', '') -- keep what is between 'm' and 's'
To build a time string, I suggest the string method format():
-- secs as a float
timestring = ('[%s:%s:%.3f]'):format(hours, mins, secs)
-- secs rounded to a whole number
timestring = ('[%s:%s:%.f]'):format(hours, mins, secs)
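For the example value "12h54m47.517s", these produce [12:54:47.517] and [12:54:48] respectively; %.f rounds to the nearest whole second.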

Logitech Lua script for high precision sleep

This is my attempt to make a Logitech high-precision delay, accurate to 1ms.
Why do you need a high-precision delay? Because starting with Win10 release 2004, Logitech's Sleep(1) actually sleeps for 15.6ms, so you might need a more precise Sleep() to preserve the original (Win10 1909) behavior of your old scripts.
function Sleep3(time)
    local a = GetRunningTime()
    while GetRunningTime() - a < time do
        -- busy-wait
    end
end
Is Sleep3() really accurate to 1ms?
Logitech's GetRunningTime() just invokes the WinAPI function GetTickCount.
As you can see from the doc,
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds
In other words, the values returned by GetRunningTime() are not sequential integers.
When you call GetRunningTime() in a loop, you will receive something like the following:
0,0,0,...,0,15,15,15,...,15,31,31,..,31,46,46,...
This means it is impossible to make a 1ms-precision delay using GetRunningTime().
The actual precision of Sleep3() is about 15ms, the same as the usual Sleep() has.
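You can observe the granularity yourself with a sketch like this (the mouse button number is arbitrary); it logs the gaps between distinct GetRunningTime() values, which typically come out around 15-16ms:
function OnEvent(event, arg)
    if event == "MOUSE_BUTTON_PRESSED" and arg == 5 then
        local last, seen = GetRunningTime(), 0
        while seen < 5 do
            local now = GetRunningTime()
            if now ~= last then
                OutputLogMessage("step: %d ms\n", now - last)
                last, seen = now, seen + 1
            end
        end
    end
end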
To verify, the following handler times 1000 calls to Sleep(1) and logs the average milliseconds per call:
function OnEvent(event, arg)
    if event == "MOUSE_BUTTON_PRESSED" and arg == 4 then
        a = GetRunningTime()
        for i = 1, 1000 do
            Sleep(1)
        end
        OutputLogMessage("%s\n", (GetRunningTime() - a) / 1000)
    end
end

Can I make my Lua program sleep for AROUND a day?

I want to make my mydicebot (by seuntje) Lua program sleep for AROUND A DAY after betting for a day, like:
function sleep(n)
    local t = os.clock()
    while os.clock() - t <= n do
        -- nothing (busy-wait: the CPU stays fully occupied)
    end
end
function playsleep()
    sec = math.random(80000, 90000)
    sleep(sec) -- around 86400 seconds
end
timestart = os.time()
dur = math.random(70000, 80000)
function dobet()
    if os.time() - timestart > dur then -- after betting for about a day
        playsleep()
        timestart = os.time() -- reset the time counter
    end
end
but when I call the playsleep function from the dobet function,
I cannot click anything in my program and cannot switch to another tab,
the CPU is not sleeping either; it actually gets busy,
and sometimes it stays stuck even after 90000 seconds.
-- THE QUESTIONS --
A. So can I make a function where the sleep is a real sleep?
B. Can it sleep for as long as 90000 seconds?
C. Or, what is the maximum number of seconds for the variable "sec" above?
Use the posix module to sleep:
posix = require("posix")
posix.sleep(86400)
But this will still block your program and you won't be able to click anything. You will need to provide more detail about your program in order to receive better advice.
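If the bot calls dobet once per bet, a non-blocking alternative is to record a wall-clock resume time and return early until it has passed. This is only a sketch: it assumes mydicebot keeps invoking dobet and tolerates it returning without placing a bet, which you would need to verify.
-- Sketch only: assumes dobet is called repeatedly and may return
-- without betting (check this against mydicebot's documentation).
local resume_at = nil
function dobet()
    if resume_at then
        if os.time() < resume_at then return end -- still "sleeping"
        resume_at = nil
        timestart = os.time() -- reset the time counter
    end
    if os.time() - timestart > dur then
        resume_at = os.time() + math.random(80000, 90000) -- around a day
        return
    end
    -- ... place the next bet as usual ...
end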
Also, the os library is there. Why not...
os.execute('sleep ' .. 24*3600)
...? This assumes a system with a sleep command, and note that os.execute also blocks the Lua program until the command returns.

When is vectorization favored in Julia?

I have 2 functions for determining pi numerically in Julia. The second function (which I think is vectorized) is slower than the first.
Why is vectorization slower? Are there rules when to vectorize and when not to?
function determine_pi(n)
    area = zeros(Float64, n)
    sum = 0
    for i = 1:n
        if (rand()^2 + rand()^2) <= 1
            sum = sum + 1
        end
        area[i] = sum * 1.0 / i
    end
    return area
end
and another function
function determine_pi_vec(n)
    res = cumsum(map(x -> x <= 1 ? 1 : 0, rand(n).^2 + rand(n).^2)) ./ [1:n]
    return res
end
When run for n=10^7, below are the execution times (after running a few times):
n = 10^7
@time returnArray = determine_pi(n)
# elapsed time: 0.183211324 seconds (80000128 bytes allocated)
@time returnArray2 = determine_pi_vec(n);
# elapsed time: 2.436501454 seconds (880001336 bytes allocated, 30.71% gc time)
Vectorization is good if:
It makes the code easier to read, and performance isn't critical.
It's a linear algebra operation: using a vectorized style can then be good because Julia can use BLAS and LAPACK to perform your operation with very specialized, high-performance code.
In general, I personally find it best to start with vectorized code, look for any speed problems, then devectorize any troublesome parts.
Your second version is slow not so much because it is vectorized, but because of the anonymous function: unfortunately, in Julia 0.3, these are normally quite a bit slower than regular functions. map in general doesn't perform very well, I believe because Julia can't infer the output type of the function (it's still "anonymous" from the perspective of the map function). I wrote a different vectorized version which avoids anonymous functions and is possibly a little easier to read:
function determine_pi_vec2(n)
    return cumsum((rand(n).^2 .+ rand(n).^2) .<= 1) ./ (1:n)
end
Benchmarking with
function bench(n, f)
    f(10) # run once so the timings exclude compilation
    srand(1000)
    @time f(n)
    srand(1000)
    @time f(n)
    srand(1000)
    @time f(n)
end
bench(10^8, determine_pi)
gc()
bench(10^8, determine_pi_vec)
gc()
bench(10^8, determine_pi_vec2)
gives me the results
elapsed time: 5.996090409 seconds (800000064 bytes allocated)
elapsed time: 6.028323688 seconds (800000064 bytes allocated)
elapsed time: 6.172004807 seconds (800000064 bytes allocated)
elapsed time: 14.09414031 seconds (8800005224 bytes allocated, 7.69% gc time)
elapsed time: 14.323797823 seconds (8800001272 bytes allocated, 8.61% gc time)
elapsed time: 14.048216404 seconds (8800001272 bytes allocated, 8.46% gc time)
elapsed time: 8.906563284 seconds (5612510776 bytes allocated, 3.21% gc time)
elapsed time: 8.939001114 seconds (5612506184 bytes allocated, 4.25% gc time)
elapsed time: 9.028656043 seconds (5612506184 bytes allocated, 4.23% gc time)
So vectorized code can definitely be about as good as devectorized code in some cases, even outside the linear-algebra case.

getTickCount time unit confusion

In the answer to the linked question on Stack Overflow, and in the linked book on page 52, I found that the usual getTickCount/getTickFrequency combination for measuring execution time gives the time in milliseconds. However, the OpenCV website says it is the time in seconds. I am confused. Please help...
There is no room for confusion; all the references you have given point to the same thing.
getTickCount gives you the number of clock cycles since a certain event, e.g., since the machine was switched on.
A = getTickCount() // A = no. of clock cycles from the beginning, say 100
process(image)     // do whatever processing you want
B = getTickCount() // B = no. of clock cycles from the beginning, say 150
C = B - A          // C = no. of clock cycles for processing, 150-100 = 50,
                   // it is obvious, right?
Now you want to know how many seconds these clock cycles take. For that, you need to know how many seconds a single clock cycle takes, i.e. the clock_time_period. Once you know that, simply multiply it by 50 to get the total time taken.
For that, OpenCV gives a second function, getTickFrequency(). It gives you the frequency, i.e. how many clock cycles occur per second. Take its reciprocal to get the time period of the clock:
time_period = 1 / frequency
Now that you have the time_period of one clock cycle, multiply it by 50 to get the total time taken in seconds.
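In code: elapsed_seconds = (B - A) / getTickFrequency(), which in the example above is 50 / frequency.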
Now read all those references you have given once again; you will get it.
Note that the Windows API also has a GetTickCount function (distinct from OpenCV's getTickCount) which returns milliseconds directly, as in this message-pumping delay loop:
dwStartTimer = GetTickCount();
dwEndTimer = GetTickCount();
while ((dwEndTimer - dwStartTimer) < wDelay) // delay is 5000 milliseconds
{
    Sleep(200);
    dwEndTimer = GetTickCount();
    if (PeekMessage(&uMsg, NULL, 0, 0, PM_REMOVE) > 0)
    {
        TranslateMessage(&uMsg);
        DispatchMessage(&uMsg);
    }
}
