Can I profile Lua scripts running in Redis?

I have a cluster app that uses a distributed Redis back-end, with dynamically generated Lua scripts dispatched to the redis instances. The Lua component scripts can get fairly complex and have a significant runtime, and I'd like to be able to profile them to find the hot spots.
SLOWLOG is useful for telling me that my scripts are slow, and exactly how slow they are, but that's not my problem. I know how slow they are, I'd like to figure out which parts of them are slow.
The Redis EVAL docs are clear that Redis does not export any timekeeping functions to Lua, which makes it seem like this might be a lost cause.
So, short of a custom fork of Redis, is there any way to tell which parts of my Lua script are slower than others?
EDIT
I took Doug's suggestion and used debug.sethook - here's the hook routine I inserted at the top of my script:
-- Clear the sample counts from any previous run
redis.call('del', 'line_sample_count')

local function profile()
    -- Level 2 is the function that was interrupted by the hook
    local line = debug.getinfo(2)['currentline']
    redis.call('zincrby', 'line_sample_count', 1, line)
end

-- Run the profile hook every 100 VM instructions
debug.sethook(profile, '', 100)
Then, to see the hottest 10 lines of my script:
ZREVRANGE line_sample_count 0 9 WITHSCORES

If your scripts are CPU-bound (not I/O-bound), then you may be able to use the debug.sethook function with a count hook:
The count hook is called after the interpreter executes every count instructions. (This event only happens while Lua is executing a Lua function.)
You'll have to build a profiler based on the counts you receive in your callback.
The PepperfishProfiler would be a good place to start. It uses os.clock, which you don't have in Redis, but you could use the hook counts alone for a very crude approximation.
This is also covered in PiL 23.3 – Profiles
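To illustrate the idea, here is a minimal sketch of a count-hook profiler in plain Lua; the sampling interval and the source:line key format are arbitrary choices, not part of the answer above:
local counts = {}

local function hook()
    -- "Sl" requests the source name and current line of the interrupted function
    local info = debug.getinfo(2, "Sl")
    if info then
        local key = info.short_src .. ":" .. info.currentline
        counts[key] = (counts[key] or 0) + 1
    end
end

debug.sethook(hook, "", 1000)  -- sample every 1000 VM instructions
-- ... run the code under test here ...
debug.sethook()                -- remove the hook

for key, n in pairs(counts) do
    print(n, key)
end
Lines that show up with the highest counts are where the interpreter is spending most of its instructions.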

In standard Lua, you can't: there is no built-in function for this, and os.time only returns whole seconds. So there are two options available: you can either write your own Lua extension DLL that returns the time in milliseconds, or do a basic benchmark using a millisecond-resolution clock from LuaSocket. Though this adds a dependency to your project, it's an effective way to do trivial benchmarking:
local socket = require("socket")
local t = socket.gettime()  -- current time in seconds as a float, millisecond resolution or better
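For example, a trivial benchmark harness built on socket.gettime might look like this (the empty loop is just a stand-in for the code being measured):
local socket = require("socket")

local t0 = socket.gettime()
for i = 1, 1000000 do end  -- code under test goes here
local elapsed_ms = (socket.gettime() - t0) * 1000
print(string.format("elapsed: %.3f ms", elapsed_ms))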

Related

How to run a Python script in the background on Azure

I have a uni project in which I have to run a number of machine learning algorithms like SVM, ME, Naive Bayes, etc., and perform a grid search on them to find the optimal sets of hyper-parameters. Running all of these would take an exceedingly long time (48-168 hours total, though run in batches), and considering my computer becomes more or less unusable while I run them, I was trying to find a solution that would let me run my code externally. The scripts I have to run are in Python, and my plan was to run them on Azure to make use of its "Azure for students" $100 credit.
My original plan was to use Azure's ML notebook section and then run the Python scripts in the terminal they provide. My problem with this route is that, as far as I can tell, when the browser closes, the computation stops, which is a problem. I looked into it and found some articles mentioning a combination of 'ctrl-z', 'bg', and 'disown' to disconnect the process from the shell, but I thought there should definitely be a better way to do it. (I also wasn't sure how this would work in my case, where there are 8 processes running at once via GridSearchCV's n_jobs=-1 feature.)
I then realized a better way to do this would be to use pipelines. My intent was to create a number of pipelines of the form:
(Import data in xlsx file) -> (python script to run ML) -> (export data to working directory)
And then run them until all the work is completed. In the first stage I set the parameters and got an error. [screenshots of the pipeline parameters and the error message omitted]
My intention was to have the Excel file pipe into the Python script as a data frame, but this implementation (and all the others I've tried) isn't working.
My first question is: how do I get the Excel data to pipe into the Python script properly?
My second question is: is there a better way to go about doing this? Would running it in the shell be an easier way? If so, how do I ensure it keeps running while my browser is closed? Are there other services that would be better? My main metrics are price (cheap) and time limit (the ability to run for a long time), but any suggestions would be greatly appreciated.
I also tried using google colab, this worked but it felt slower than running on my computer.
To run a grid search with AzureML, you would use a Sweep job. The simplest way to kick off a Sweep is via the CLI. See here for an example.
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  command: >-
    python hello-sweep.py
    --A ${{inputs.A}}
    --B ${{search_space.B}}
    --C ${{search_space.C}}
  code: src
  environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
inputs:
  A: 0.5
compute: azureml:cpu-cluster
sampling_algorithm: random
search_space:
  B:
    type: choice
    values: ["hello", "world", "hello_world"]
  C:
    type: uniform
    min_value: 0.1
    max_value: 1.0
objective:
  goal: minimize
  primary_metric: random_metric
limits:
  max_total_trials: 4
  max_concurrent_trials: 2
  timeout: 3600
display_name: hello-sweep-example
experiment_name: hello-sweep-example
description: Hello sweep job example.
You can start that job using the AzureML v2 CLI with the following command:
az ml job create -f hello-sweep.yml
That will create max_total_trials jobs for different parameter combinations, as defined in the search_space and governed by the sampling_algorithm, which can be random, grid, or bayesian.
The actual job that is started is defined under trial. You need a program or script of some sort that you can execute via a command line and that can take parameters via that command line. command is the command that is executed, code is a folder on the local machine that contains the script/program you want to run, and environment is a registered environment in your workspace. azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest is one that is predefined in AzureML, but you can also create your own.
If you prefer Python, here is the same thing done in Python.
See here for a blog post on How to do hyperparameter tuning using Azure ML.

How do I start calculating statistics about internal Lua performance?

How can I measure Lua's internal performance, i.e. gather statistics on table counts, reference counts, function call counts, and so on?
I suspect that my Lua script's performance issue lies in how it manipulates tables (it creates a lot of them, >= 1200), and that this is the performance problem in my scenario.
I would like to pinpoint the bottleneck without redesigning the script.
Have a look at the debug library; that should help you quite a bit already.
Using os.clock() to time the execution of blocks of code can also help you identify roughly where your bottlenecks are, as in the sketch below.
Looking at the instructions generated from your program using luac -p -l can also give you lots of insight into how things work and what might be causing your code to run slowly.
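For instance, a minimal os.clock() harness might look like this (the table-creating loop is just a placeholder for the code you suspect):
local t0 = os.clock()
local tables = {}
for i = 1, 1200 do
    tables[i] = { value = i }  -- suspected hot spot: heavy table creation
end
print(string.format("block took %.3f s of CPU time", os.clock() - t0))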
Here's a short example of how you can even output the instructions of a function at runtime:
local function map(f, a, ...) -- Just a simple example function :)
    if a then
        return f(a), map(f, ...)
    end
end

-- This is where the magic happens ;)
local f = io.popen('luac -l -p -', 'w')
f:write(string.dump(map))
f:close()

How to calculate CPU time in Elixir when multiple actors/processes are involved?

Let's say I have a function which does some work by spawning multiple processes. I want to compare CPU time vs real time taken by this function.
def test do
  prev_real = System.monotonic_time(:millisecond)
  # Code to complete some task
  # Spawn different processes & give each process some task
  # Receive result
  # Finish task
  current_real = System.monotonic_time(:millisecond)
  diff_real = current_real - prev_real
  IO.puts "Real time " <> to_string(diff_real)
  IO.puts "CPU time ?????"
end
How do I calculate the CPU time required by the given function? I am interested in the CPU time / real time ratio.
If you are just trying to profile your code rather than implement your own profiling framework, I would recommend using already existing tools like:
fprof, which will give you information about time spent in functions (real and own);
percept, which will provide information about which processes in your system were working at any given time and on what;
xprof, which is designed to help you find which calls to your function cause it to take more time (e.g., trigger an inefficient branch of code).
These take advantage of both erlang:trace, to figure out which function is being executed and for how long, and erlang:system_profile with runnable_procs, to determine which processes are currently running. You might start a function, hit a receive, or be preemptively rescheduled and wait without doing any actual work. Combining those two sources can be complicated, and I would recommend using the existing tools before trying to glue together your own.
You could also look into tools like erlgrind and eflame if you are looking for more visual representations of your calls.
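As a starting point, a rough fprof session from IEx might look like this (MyApp.test/0 stands in for the function above; this is a sketch of the basic workflow, not the tool's full feature set):
:fprof.apply(&MyApp.test/0, [])  # trace a single call to the function
:fprof.profile()                 # build profile data from the trace
:fprof.analyse()                 # print real and own time per function
The "own" column in the analysis approximates CPU time spent inside each function, which you can compare against the wall-clock time you measured.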

Get current date and time in Lua in Redis

How can I get the current date/time in Lua embedded in Redis?
I need it in the following format: YYYY-MM-DD, HH:MM:SS.
I tried os.date(), but Redis does not recognize it.
Redis' Lua sandbox has only a handful of libraries, and os isn't one of them.
You can call the Redis TIME command from Lua like so:
local t = redis.call('TIME')
However, you'll need to find a way to convert the epoch time to the desired format, and also note that calling TIME will stop your script from performing any writes (as it is a non-deterministic command).
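For reference, TIME returns a two-element array; a minimal sketch of unpacking it:
local t = redis.call('TIME')
local seconds = tonumber(t[1])  -- Unix time, whole seconds
local micros  = tonumber(t[2])  -- microseconds within the current second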
Update: as of Redis v3.2, there is a new replication mode for scripts that is effect-based (rather than code-based). When using this mode, you can actually call all the random, non-deterministic commands. More information is on EVAL's documentation page.
This was already discussed in the comments, but the correct solution deserves a proper answer:
The current time is non-deterministic, i.e. it returns different values on repeated calls, and this hurts replication. For this reason, the current time should be passed into your Lua script as a parameter, as sketched below.
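A minimal sketch of that approach (the key name and timestamp are illustrative; the client formats the time before invoking EVAL):
-- Caller (e.g. redis-cli):
--   EVAL "redis.call('SET', KEYS[1], ARGV[1])" 1 last_run "2015-03-24, 13:00:00"
-- Script body: the pre-formatted timestamp arrives deterministically in ARGV[1]
local now = ARGV[1]  -- "YYYY-MM-DD, HH:MM:SS", formatted by the client
redis.call('SET', KEYS[1], now)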

Lua task scheduling

I've been writing some scripts for a game; the scripts are written in Lua. One of the requirements the game has is that the Update method in your Lua script (which is called every frame) may take no longer than about 2-3 milliseconds to run; if it does, the game just hangs.
I solved this problem with coroutines: all I have to do is call Multitasking.RunTask(SomeFunction), and the task runs as a coroutine. I then have to scatter Multitasking.Yield() calls throughout my code, which check how long the task has been running and, if it's over 2 ms, pause the task and resume it next frame. This works, except that I have to scatter Multitasking.Yield() everywhere, and it's a real mess.
Ideally, my code would automatically yield when it's been running too long. So, is it possible to take a Lua function as an argument and then execute it line by line (maybe interpreting Lua inside Lua, which I know is possible, but I doubt it's possible if all you have is a function pointer)? That way I could automatically check the runtime and yield if necessary between every single line.
EDIT: To be clear, I'm modding a game, which means I only have access to Lua. No C++ tricks allowed.
Check lua_sethook in the Debug Interface.
I haven't actually tried this solution myself yet, so I don't know for sure how well it will work:
debug.sethook(coroutine.yield, "", 10000)
I picked the number arbitrarily; it will have to be tweaked until it's roughly the time limit you need. Keep in mind that time spent in C functions etc. will not increase the instruction count, so a loop will reach this limit far faster than calls to long-running C functions. It may be viable to set a far lower value and instead provide a hook function that checks how much os.clock() or similar has increased.
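Building on that one-liner, here is a sketch of an auto-yielding task runner; the names and instruction count are illustrative, and note that stock Lua 5.1 cannot yield from a hook (LuaJIT and Lua 5.2+ can, for count and line hooks):
-- Wrap a function in a coroutine that yields itself periodically
local function make_task(fn)
    return coroutine.create(function()
        debug.sethook(coroutine.yield, "", 10000)  -- auto-yield every ~10000 VM instructions
        fn()
        debug.sethook()  -- clear the hook once the task finishes
    end)
end

local task = make_task(function()
    for i = 1, 10000000 do end  -- stand-in for long-running work
end)

-- Call this once per frame; the task makes progress in small slices
local function update()
    if coroutine.status(task) ~= "dead" then
        assert(coroutine.resume(task))
    end
end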
