Measure the performance of a Redis Lua script

Is there any way to measure the performance of a Redis Lua script?
I have a Lua script and I ended up with a slightly different implementation, and I am wondering if there is any way to measure which of the two implementations is faster.

You can call Redis' TIME command to perform in-script "benchmarking". Something like the following should work:
local start = redis.call('TIME')
-- your logic here
local finish = redis.call('TIME')
return finish[1]-start[1]   -- elapsed whole seconds only; see below for a microsecond version

I read in the comments that someone mentioned using finish[2]-start[2], which is not a good idea: according to https://redis.io/commands/TIME, [2] holds only "the amount of microseconds already elapsed in the current second" and not a full timestamp, so the calculation breaks whenever the script finishes in a different second than it started.
To get the elapsed time in microseconds, I would do:
local start = redis.call('TIME')
-- your logic here
local finish = redis.call('TIME')
return (finish[1]-start[1])*1000000+(finish[2]-start[2])
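To compare the two implementations head to head, you could factor that timing into a small helper and run each candidate through it. A sketch along the lines of the snippets above (bench, implementation_a and implementation_b are hypothetical names):

local function bench(fn)
    local start = redis.call('TIME')
    fn()
    local finish = redis.call('TIME')
    -- elapsed time in microseconds
    return (finish[1] - start[1]) * 1000000 + (finish[2] - start[2])
end

-- e.g. return {bench(implementation_a), bench(implementation_b)}

Run the script several times and compare the numbers, since a single EVAL can easily be skewed by whatever else the server happens to be doing.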

Related

How do I set a timeout with millisecond accuracy in Lua?

When I try to set a timeout in Lua like this:
function wait(seconds)
    local start = os.time()
    repeat until os.time() > start + seconds
end
it is too inconsistent. Is there a more precise way to set a timeout that will consistently wait for the amount of time requested?
Without using external libraries, in Lua there are basically only two ways to get higher accuracy.
use os.clock
run a loop and find out how many iterations take how much time
As Nicol pointed out, both will be inconsistent and unreliable on a non-real-time OS like Windows.
If your OS for whatever reason decides to clock your CPU down or to do other work in the background, you're doomed.
So think about your application and decide whether you should be doing this on a non-real-time OS at all.
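For the first option, a minimal busy-wait sketch might look like this (note that os.clock measures CPU time on some platforms and wall time on others, and the loop burns a full core while it waits):
function sleep(seconds)
    local deadline = os.clock() + seconds
    repeat until os.clock() >= deadline
end
sleep(0.25)  -- roughly a quarter of a second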
On a machine where you can compile your own extensions, there's a way to build sleep() and msleep() functions that don't eat up any CPU time...
Look: http://www.troubleshooters.com/codecorn/lua/lua_lua_calls_c.htm
...for: Make an msleep() Function

How to calculate CPU time in elixir when multiple actors/processes are involved?

Let's say I have a function which does some work by spawning multiple processes. I want to compare CPU time vs real time taken by this function.
def test do
  prev_real = System.monotonic_time(:millisecond)
  # Code to complete some task
  # Spawn different processes & give each process some task
  # Receive result
  # Finish task
  current_real = System.monotonic_time(:millisecond)
  diff_real = current_real - prev_real
  IO.puts "Real time " <> to_string(diff_real)
  IO.puts "CPU time ?????"
end
How to calculate CPU time required by the given function? I am interested in calculating CPU time/Real time ratio.
If you are just trying to profile your code rather than implement your own profiling framework, I would recommend using existing tools like:
fprof, which will give you information about time spent in functions (real and own)
percept, which will show you which processes in your system were working at any given time and on what
xprof, which is designed to help you find which calls to your function cause it to take more time (i.e. trigger an inefficient branch of code)
They take advantage of erlang:trace to figure out which function is being executed and for how long, and of erlang:system_profile with runnable_procs to determine which processes are currently running. You might start a function, hit a receive, or be preemptively rescheduled and wait without doing any actual work. Combining those two can be complicated, so I would recommend using the existing tools before trying to glue together your own.
You could also look into tools like erlgrind and eflame if you are looking for more visual representations of your calls.
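That said, if all you want is a rough CPU-time/real-time ratio rather than a full profile, the VM keeps an accumulated scheduler runtime counter you can sample before and after the task. A sketch (assuming nothing else of significance is running in the VM, since :erlang.statistics(:runtime) is VM-wide and sums the work of all schedulers):

def test do
  prev_real = System.monotonic_time(:millisecond)
  {prev_cpu, _} = :erlang.statistics(:runtime)

  # Code to complete some task
  # Spawn different processes & give each process some task
  # Receive result
  # Finish task

  current_real = System.monotonic_time(:millisecond)
  {current_cpu, _} = :erlang.statistics(:runtime)

  IO.puts "Real time " <> to_string(current_real - prev_real)
  IO.puts "CPU time " <> to_string(current_cpu - prev_cpu)
end

Because the runtime counter adds up all schedulers, the CPU-time delta can legitimately exceed the real-time delta on a multi-core machine, which is what you would hope to see when the spawned processes run in parallel.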

how to find CPU time used by each call when inside a loop?

I am learning Maxima, but I am having a hard time finding out how to obtain the CPU time used by a call to integrate when it is inside a loop construct.
The problem is that the function time(%o1) gives the CPU time used to compute line %o1.
But inside a loop, the whole loop is taken as one operation, so I can't use time() to time a single call.
Here is an example
lst : [sin(x), cos(x)];
for i thru length(lst) do
(
    result : integrate(lst[i], x)
);
I want to find the cpu time used for each call to integrate, not the cpu time used for the whole loop. Adding showtime: true$ does not really help. I need to obtain the CPU time used for each call, and save the value to a variable.
Is there a way in Maxima to find CPU time used by each call to integrate in the above loop?
Using wxMaxima 15.04.0, windows 7.
Maxima version: 5.36.1
Lisp: SBCL 1.2.7
I was looking for something like Mathematica's AbsoluteTiming function.
Instead of elapsed real time (which on my GCL Maxima seems to return absolute real time in seconds), try the Lisp function
GET-INTERNAL-RUN-TIME
which you can call from the Maxima command line by
?get-internal-run-time();
This should return run time on any Common Lisp system. In GCL, it is reported in ticks of 1/100 of a second.
Perhaps the function you need is elapsed_real_time.
EDIT: you would use it like this:
for i ...
    do block ([t0, t1],
        t0 : elapsed_real_time (),
        integrate (...),
        t1 : elapsed_real_time (),
        time[i] : t1 - t0);

Can I profile Lua scripts running in Redis?

I have a cluster app that uses a distributed Redis back-end, with dynamically generated Lua scripts dispatched to the redis instances. The Lua component scripts can get fairly complex and have a significant runtime, and I'd like to be able to profile them to find the hot spots.
SLOWLOG is useful for telling me that my scripts are slow, and exactly how slow they are, but that's not my problem. I know how slow they are, I'd like to figure out which parts of them are slow.
The redis EVAL docs are clear that redis does not export any timekeeping functions to lua, which makes it seem like this might be a lost cause.
So, short a custom fork of Redis, is there any way to tell which parts of my Lua script are slower than others?
EDIT
I took Doug's suggestion and used debug.sethook - here's the hook routine I inserted at the top of my script:
redis.call('del', 'line_sample_count')
local function profile()
    -- level 2 is the function that was running when the hook fired
    local line = debug.getinfo(2)['currentline']
    redis.call('zincrby', 'line_sample_count', 1, line)
end
-- sample the currently executing line every 100 VM instructions
debug.sethook(profile, '', 100)
Then, to see the hottest 10 lines of my script:
ZREVRANGE line_sample_count 0 9 WITHSCORES
If your scripts are processing bound (not I/O bound), then you may be able to use the debug.sethook function with a count hook:
The count hook: is called after the interpreter executes every count instructions. (This event only happens while Lua is executing a Lua function.)
You'll have to build a profiler based on the counts you receive in your callback.
The PepperfishProfiler would be a good place to start. It uses os.clock which you don't have, but you could just use hook counts for a very crude approximation.
This is also covered in PiL 23.3 – Profiles
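For instance, a very crude count-based sampler in that spirit (a hypothetical sketch; it attributes samples to functions rather than to lines as the hook in the EDIT above does):

local samples = {}
local function profile()
    -- level 2 is the function that was executing when the hook fired
    local info = debug.getinfo(2, 'nS')
    local key = (info.name or '?') .. ' @ ' .. info.short_src .. ':' .. info.linedefined
    samples[key] = (samples[key] or 0) + 1
end
debug.sethook(profile, '', 1000)  -- sample every 1000 VM instructions
-- ... run the code under test ...
debug.sethook()                   -- stop sampling, then inspect or store samples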
In standard Lua, you can't: there is no built-in function that returns the current time with sub-second resolution (os.time only returns whole seconds). So there are two options available: you either write your own Lua extension DLL to return the time in msec, or:
You can do a basic benchmark using a millisecond-resolution time from LuaSocket. Though this adds a dependency to your project, it's an effective way to do trivial benchmarking.
require "socket"
t = socket.gettime();

Lua task scheduling

I've been writing some scripts for a game, the scripts are written in Lua. One of the requirements the game has is that the Update method in your lua script (which is called every frame) may take no longer than about 2-3 milliseconds to run, if it does the game just hangs.
I solved this problem with coroutines: all I have to do is call Multitasking.RunTask(SomeFunction) and the task runs as a coroutine. I then have to scatter Multitasking.Yield() throughout my code, which checks how long the task has been running for, and if it's over 2 ms it pauses the task and resumes it next frame. This works, except that I have to scatter Multitasking.Yield() everywhere throughout my code, and it's a real mess.
Ideally, my code would automatically yield when it has been running too long. So, is it possible to take a Lua function as an argument and then execute it line by line (maybe by interpreting Lua inside Lua, which I know is possible, but I doubt it is if all you have is a function pointer)? That way I could automatically check the runtime and yield if necessary between every single line.
EDIT: To be clear, I'm modding a game, which means I only have access to Lua. No C++ tricks allowed.
Check lua_sethook in the Debug Interface.
I haven't actually tried this solution myself yet, so I don't know for sure how well it will work.
debug.sethook(coroutine.yield,"",10000);
I picked the number arbitrarily; it will have to be tweaked until it's roughly the time limit you need. Keep in mind that time spent in C functions etc will not increase the instruction count value, so a loop will reach this limit far faster than calls to long-running C functions. It may be viable to set a far lower value and instead provide a function that sees how much os.clock() or similar has increased.
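Putting the two ideas together, a RunTask that yields automatically might look something like this (an untested sketch with hypothetical names; note that some Lua versions refuse to yield from inside a hook, so whether it works depends on the interpreter the game embeds):

local function RunTask(fn)
    return coroutine.create(function (...)
        -- yield back to the scheduler every ~100000 VM instructions; tune the
        -- count until one slice stays comfortably under the 2-3 ms frame budget
        debug.sethook(coroutine.yield, '', 100000)
        fn(...)
        debug.sethook()  -- clear the hook once the task has finished
    end)
end

-- each frame, resume the task until it is done:
-- if coroutine.status(task) ~= 'dead' then coroutine.resume(task) end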
