How would someone create a preemptive scheduler for the Lua VM? - lua

I've been looking at Lua and lvm.c. I'd very much like to implement an interface that allows me to control the VM interpreter state.
Cooperative multitasking from within Lua would not work for me, since the scripts are user-contributed code.
The debug hook gets me only about 50% of the way there: it gives me instruction execution limits, but it raises an exception which just crashes the running Lua code. I need to be able to tweak it even further.
I want to create a system where tens of thousands of user-contributed Lua scripts are running. Individual threads would not work, and execution limits would cause headaches for beginning developers (I'm going to control execution speeds too). But ultimately
while true do
end
will execute forever, and I really don't care that it does.
Any ideas, help or other implementations that I could look at?
EDIT: This is not about sandboxing; pretend I'm an expert in that field for this conversation.
EDIT: I do not want to use a coroutine-based controller that runs internally as Lua code.
EDIT: I want to run one thread and manage a large number of user-contributed Lua scripts; an external process-level control mechanism would not scale at all.

You can search for Lua Sandbox implementations; for example, this wiki page and SO question provide some pointers. Note that most of the effort in sandboxing is focused on not allowing you to execute bad code, but not necessarily on preventing infinite loops. For better control you may need to combine Lua sandboxing with something like LXC or cpulimit. (not relevant based on the comments)
If you are looking for something Lua-based, lightweight, but not necessarily 100% foolproof, then you can try running your client code in a separate coroutine and setting a debug hook on that coroutine that will be triggered every N-th line. In that hook you can check whether the process you are running has exceeded its quota. You also need to take care of new coroutines being started, as those need to have their own hooks set (you either need to disable coroutine.create/wrap or replace them with something that sets the debug hook you need).
The code in this case may look like:
local coro = coroutine.create(client_func)
debug.sethook(coro, debug_hook, "l", 1000) -- hook on every line and every 1000th VM instruction
It's not foolproof, because it may block on some IO operation and the debug hook will not help there.
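For illustration, here is a rough sketch of that quota idea, assuming client_func is the already-loaded user script; the line budget and the error message are arbitrary choices, and coroutine.create is wrapped so child coroutines get the same hook:

local LINE_BUDGET = 10000          -- arbitrary per-script budget
local lines_used = 0

local function debug_hook()
  lines_used = lines_used + 1
  if lines_used > LINE_BUDGET then
    error("script exceeded its execution quota", 2)
  end
end

local raw_create = coroutine.create
coroutine.create = function(f)     -- child coroutines need the hook too
  local co = raw_create(f)
  debug.sethook(co, debug_hook, "l")
  return co
end

local coro = raw_create(client_func)
debug.sethook(coro, debug_hook, "l")
print(coroutine.resume(coro))      -- returns false plus the error once the quota is hit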
[Edit based on updated question and comments]
Between "no lua code coroutine based controller" and "no external process control mechanism" I don't think you are left with much choice. It may be that your only option is to run one VM per user script and somehow give ticks to those VMs (there was a recent question on SO on this, but I can't find it). Before going this route, I would still try to do this with coroutines (which should scale to tens of thousands easily; Tir claims supporting 1M active users with coroutine-based architecture).
The mechanism would roughly look like this: you install the debug hook as shown above, and from that hook you yield back to your controller, which then decides which other coroutine (user script) to resume. I have this very mechanism working in the Lua debugger I've been developing (although it only does it for one client script). This doesn't protect you from IO calls that can block, and for that you may still need a watchdog at the VM level to see if it has been blocked for longer than needed.
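Here is a minimal sketch of such a controller, assuming the user scripts are already loaded into a table called scripts; the slice of 1000 VM instructions is an arbitrary choice, and note that yielding from a count hook requires Lua 5.2+ or LuaJIT (on plain Lua 5.1 the hook would have to raise an error or hand control over some other way):

local SLICE = 1000                      -- arbitrary instruction budget per turn

local function preempt()
  coroutine.yield()                     -- hand control back to the controller
end

local tasks = {}
for _, f in ipairs(scripts) do          -- scripts: table of loaded user functions (assumed)
  local co = coroutine.create(f)
  debug.sethook(co, preempt, "", SLICE) -- count hook fires every SLICE instructions
  tasks[#tasks + 1] = co
end

while #tasks > 0 do                     -- simple round-robin over all live scripts
  for i = #tasks, 1, -1 do
    local ok, err = coroutine.resume(tasks[i])
    if not ok or coroutine.status(tasks[i]) == "dead" then
      if not ok then print("script error:", err) end
      table.remove(tasks, i)
    end
  end
end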
If you need to serialize and deserialize running code fragments that preserve upvalues and such, then Pluto is probably your only option.

Look at implementing lua_lock and lua_unlock.
http://www.lua.org/source/5.1/llimits.h.html#lua_lock

Take a look at lulu. It is a Lua VM written in Lua. It's for Lua 5.1.
For newer versions you will need to do some work, but then you really can make a scheduler.

Take a look at this:
https://github.com/amilamad/preemptive-task-scheduler-for-lua
I maintain this project. It's a non-blocking preemptive scheduler for running Lua code, suitable for long-running game scripts.

Related

Way to write code "in the debugger" in Lua?

I just played around a bit with Lua and tried the Koneki Eclipse plugin, which is quite nice. The problem is that when I make changes in a function I'm currently debugging, the changes do not take effect when I save them, so I'm forced to restart the application. It would be so nice if I could make changes in the debugger and have them take effect on the fly, as in Smalltalk, or to some extent as with hot code replacement in Java. Does anybody have a clue whether this is possible?
It is possible to some degree with some limitations. I've been developing an IDE/debugger that provides this functionality. It gives you access to a remote console to execute commands in the context/environment of your running application. The IDE also supports live coding, which reloads modified code as you make changes to it; see demos here.
The main limitation is that you can't modify a currently running function (at least without changes to Lua VM). This means that the effect of your changes to the currently running function will only be seen after you exit and re-enter that function. It works well for environments that call the same function repeatedly (for example a game engine calling draw), but may not work in your case.
Another challenge is dealing with upvalues (values that are created outside of your function and are referenced inside it). There are methods to "read" current upvalues and re-create them when the (new) function is created, but it requires some code analysis to find which functions will be recreated, query them for upvalues, get the current values, and then create a new environment with those upvalues and assign the proper values to them. My current implementation doesn't do this, which means you need to use global variables as a workaround.
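For the upvalue part specifically, here is a rough sketch of copying upvalues by name from the old function to its reloaded replacement using the standard debug library; it assumes you already have both function values and skips the code analysis mentioned above:

local function copy_upvalues(old_fn, new_fn)
  -- collect the old function's upvalues by name
  local values = {}
  local i = 1
  while true do
    local name, value = debug.getupvalue(old_fn, i)
    if not name then break end
    values[name] = value
    i = i + 1
  end
  -- assign them to the matching upvalues of the new function
  i = 1
  while true do
    local name = debug.getupvalue(new_fn, i)
    if not name then break end
    if values[name] ~= nil then
      debug.setupvalue(new_fn, i, values[name])
    end
    i = i + 1
  end
  return new_fn
end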
There was also a relevant discussion just the other day on the Lua mailing list.

Clone a lua state

Recently I have encountered many difficulties when developing with C++ and Lua. My situation is: for some reason, there can be thousands of Lua states in my C++ program, but these states should all be identical just after initialization. Of course, I can do luaL_openlibs() and luaL_loadfile() for each state, but that is pretty heavy (in fact, it takes a rather long time even just to initialize one state). So I am wondering about the following scheme: what about keeping a separate Lua state (the only one that has to be initialized) which is then cloned for the other Lua states? Is that possible?
When I started with Lua, like you I once wrote a program with thousands of states, had the same problem and thoughts, until I realized I was doing it totally wrong :)
Lua has coroutines and threads, you need to use these features to do what you need. They can be a bit tricky at first but you should be able to understand them in a few days, it'll be well worth your time.
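As a sketch of that approach (assuming Lua 5.1, where setfenv is available): the expensive initialization happens once in a single state, and each user script then runs as a coroutine with its own environment table. The file names here are made up for illustration:

local files = { "script1.lua", "script2.lua" }  -- hypothetical user scripts

local coros = {}
for _, name in ipairs(files) do
  local chunk = assert(loadfile(name))
  -- private environment that still reads the shared globals
  local env = setmetatable({}, { __index = _G })
  setfenv(chunk, env)                           -- Lua 5.1; use load(..., env) on 5.2+
  coros[#coros + 1] = coroutine.create(chunk)
end

for _, co in ipairs(coros) do
  local ok, err = coroutine.resume(co)
  if not ok then print("script failed:", err) end
end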
Take a look at the following Lua API call; I think it is exactly what you need.
lua_State *lua_newthread (lua_State *L);
This creates a new thread, pushes it on the stack, and returns a pointer to a lua_State that represents this new thread. The new thread returned by this function shares with the original thread its global environment, but has an independent execution stack.
There is no explicit function to close or to destroy a thread. Threads are subject to garbage collection, like any Lua object.
Unfortunately, no.
You could try Pluto to serialize the whole state. It does work pretty well, but in most cases it costs roughly the same time as normal initialization.
I think it will be hard to do exactly what you're requesting here, given that the copied state would contain internal references as well as, potentially, pointers to external data. One would need to reconstruct those internal references in order not to end up with multiple states pointing into the clone source.
You could serialize out the state after one starts up and then load that into subsequent states. If initialization is really expensive, this might be worth it.
I think the closest thing to doing what you want that would be relatively easy would be to put the states in different processes by initializing one state and then forking, however your operating system supports it:
http://en.wikipedia.org/wiki/Fork_(operating_system)
If you want something available from within Lua, you could try something like this:
How do you construct a read-write pipe with lua?

using Kernel#fork for backgrounding processes, pros? cons?

I'd like some thoughts on whether using fork{} to 'background' a process from a rails app is such a good idea or not...
From what I gather fork{my_method; Process#setsid} does in fact do what it's supposed to do.
1) creates another processes with a different PID
2) doesn't interrupt the calling process (e.g. it continues w/o waiting for the fork to finish)
3) executes the child until it finishes
...which is cool, but is it a good idea? What exactly is fork doing? Does it create a duplicate instance of my entire Rails mongrel/passenger instance in memory? If so, that would be very bad. Or does it somehow do it without consuming a huge swath of memory?
My ultimate goal was to do away with my background daemon/queue system in favor of forking these processes (primarily for sending emails), but if this won't save memory then it's definitely a step in the wrong direction.
The fork does make a copy of your entire process, and, depending on exactly how you are hooked up to the application server, a copy of that as well. As noted in the other discussion this is done with copy-on-write so it's tolerable. Unix is built around fork(2), after all, so it has to manage it fairly fast. Note that any partially buffered I/O, open files, and lots of other stuff are also copied, as well as the state of the program that is spring-loaded to write them out, which would be incorrect.
I have a few thoughts:
Are you using Action Mailer? It seems like email would be easily done with AM or by Process.popen of something. (Popen will do a fork, but it is immediately followed by an exec.)
immediately get rid of all that state by executing Process.exec of another ruby interpreter plus your functionality. If there is too much state to transfer or you really need to use those duplicated file descriptors, you might do something like IO#popen instead so you can send the subprocess work to do. The system will share the pages containing the text of the Ruby interpreter of the subprocess with the parent automatically.
in addition to the above, you might want to consider the use of the daemons gem. While your rails process is already a daemon, using the gem might make it easier to keep one background task running as a batch job server, and make it easy to start, monitor, restart if it bombs, and shut down when you do...
if you do exit from a fork(2)ed subprocess, use exit! instead of exit
having a message queue and a daemon already set up, like you do, kinda sounds like a good solution to me :-)
Be aware that it will prevent you from using JRuby on Rails as fork() is not implemented (yet).
The semantics of fork is to copy the entire memory space of the process into a new process, but many (most?) systems will do that by just making a copy of the virtual memory tables and marking it copy-on-write. That means that (at first, at least) it doesn't use that much more physical memory, just enough to make the new tables and other per-process data structures.
That said, I'm not sure how well Ruby, RoR, etc. interacts with copy-on-write forking. In particular garbage collection could be problematic if it touches many memory pages (causing them to be copied).

Low-level Lua interpreter

Is there a way to run Lua code from a C/C++ program at a more fine-grained level than a standard "lua_pcall" function call? Ideally I'd like to be able to loop over a list of low-level bytecode instructions (assuming it has such things) and run them one by one, so that I could write my own scheduler which had more control of things than just running a complete Lua function from start to finish.
The reason I want to do this is because I wish to implement C functions which Lua code can call which would cause the program to wait until a certain (potentially long-winded) action had completed before continuing execution. There would be a high proportion of such function calls in a typical Lua script, so the idea of rewriting it to use callbacks once the action has completed isn't really practical.
Perhaps side-stepping the question, but you could use Lua coroutines rather than custom C stuff to wait until some event occurs.
For example, one coroutine could call a waitForEvent() function. In there, you can switch to another coro until that event occurs, then resume the first one. Take a look at the lua coro docs for more about that.
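A hedged sketch of what that might look like; the event name and the waiting/fireEvent helpers are invented for illustration:

local waiting = {}                  -- event name -> list of suspended coroutines

function waitForEvent(name)
  local co = coroutine.running()
  waiting[name] = waiting[name] or {}
  table.insert(waiting[name], co)
  return coroutine.yield()          -- suspend until the host fires the event
end

function fireEvent(name, ...)       -- called by the host when the long action completes
  local list = waiting[name] or {}
  waiting[name] = nil
  for _, co in ipairs(list) do
    coroutine.resume(co, ...)
  end
end

-- usage: the user script just appears to block on the long-running action
local co = coroutine.create(function()
  local result = waitForEvent("file_loaded")
  print("got", result)
end)
coroutine.resume(co)
fireEvent("file_loaded", "contents")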
Jder's suggestion to use coroutines will work very well if you can write those long waiting C routines using Lua's cooperative threading (explicit yield) feature. You'll still use lua_pcall() to enter Lua, but the entry point will be your coroutine manager function.
This only works though if the C routines don't do anything while they wait. If they are long running because they calculate something for example, then you need to run multiple OS threads. Lua is threadsafe -- just create multiple threads and run lua_open() in each thread.
From http://www.lua.org/pil/24.1.html
The Lua library defines no global variables at all. It keeps all its state in the dynamic structure lua_State and a pointer to this structure is passed as an argument to all functions inside Lua. This implementation makes Lua reentrant and ready to be used in multithreaded code.
You can also combine the two approaches. If you have a wrapper Lua function to start an OS thread, you can yield after you start the thread. The coroutine manager will keep track of threads and continue a coroutine when the thread it started has finished. This lets you use a single Lua interpreter with multiple worker threads running pure C code.
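As a rough sketch of that combination (startWorker, isDone and getResult are hypothetical C bindings, not real Lua API):

function longComputation(arg)
  local handle = startWorker(arg)        -- hypothetical: spawns an OS thread in C
  return coroutine.yield(handle)         -- give the handle to the manager, wait for the result
end

local function run_all(funcs)
  local pending = {}                     -- worker handle -> suspended coroutine
  for _, f in ipairs(funcs) do
    local co = coroutine.create(f)
    local ok, handle = coroutine.resume(co)
    if ok and coroutine.status(co) == "suspended" then
      pending[handle] = co
    end
  end
  while next(pending) do
    local finished = {}
    for handle in pairs(pending) do
      if isDone(handle) then             -- hypothetical: non-blocking completion check
        finished[#finished + 1] = handle
      end
    end
    for _, handle in ipairs(finished) do
      local co = pending[handle]
      pending[handle] = nil
      local ok, next_handle = coroutine.resume(co, getResult(handle))  -- hypothetical result fetch
      if ok and coroutine.status(co) == "suspended" then
        pending[next_handle] = co        -- the script kicked off another worker
      end
    end
  end
end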
If you go the OS threading way, please have a look at Lua Lanes. I would see it as the perfect solution to what you're trying to achieve (throw one add-on module into the mix and you'll be writing clear, understandable and simple code with multithreading seamlessly built in).
Please tell us how your issue got solved. :)
Does the debugging interface help?

How to implement a code coverage tool using Win32 Debugging API

I am trying to understand how to implement a Code Coverage tool using the Win32 Debugging API.
My thinking has been to use the Win32 Debugging API to launch a process in debug mode and track which CPU instructions have been executed. After having tracked all CPU instructions, I would then use the map file to map them to the source code lines that were executed.
As far as I understand, there would be two ways of knowing which CPU instructions have been executed.
One would be to launch the process in debug mode, set all threads to single-step mode, and let the debugging app note every instruction that has been executed.
The other would be a more intelligent approach where you would know a lot more about x86 instructions and basically replace the next branch instruction with a breakpoint, then keep track of the delta of instructions between the two breakpoints.
Update - new suggested approaches inspired by Michael's response:
Start with the map file, insert breakpoints at the beginning of each line, and let the debug framework be notified every time a breakpoint is hit.
Start with the map file and use binary instrumentation to insert a "hook" that gets called at the entry of each source line, avoiding the callback through the debugger framework.
Use VM technology, such as VMware, to find out which instructions in a particular process were executed - I don't fully understand this approach...
Could someone validate one of the approaches above or maybe suggest an alternative? Please note that the use case is line-by-line code coverage, not performance profiling, so we need to know whether each single source line is visited.
My primary goal (although no particular plan is in place...) would be to create a simple code coverage tool for Delphi primarily.
Thanks!
One approach is to hook all API calls and function calls and compare them with a table built from the source. That way you discover what is covered.
There are many APIs for hooking; one is Trappola API hooking.
This could work: each single-step event will raise an exception, and you could record the hit IP address in your map of executed code lines.
Unfortunately, I imagine this would be glacially slow. It'd be incredibly inefficient, as each single line of code results in thousands of times more work: an exception is generated, trapped, a message is sent to your debugger, and then there is a round trip back after you record the hit. It might be better to set breakpoints instead for each covered line and clear them after they are hit. That'd be faster, but most likely still very slow.
The core problem is you're trying to use the debugger as a code coverage tool which it is not intended for. A quick search shows several code coverage tools for Delphi on the Internet.
I would suggest that instead of hooking each line of code, you hook each block of code. It will be faster, and you can get the line counts from the block counts as well.
