How to get the current function call stack depth in Lua?

I want to get the current function call stack depth efficiently. Is there an API or something to do this?
I tried a method like this, but it's too slow:
local function GetStackDepth()
    local depth = 0
    -- Probe successive stack levels until debug.getinfo returns nil.
    while true do
        if not debug.getinfo(3 + depth) then
            break
        end
        depth = depth + 1
    end
    return depth
end
edit:
The real problem is that I'm writing a profiler tool and using debug.sethook to do work in the call and return events. But on Lua 5.1 or LuaJIT, when a tail return happens I get two call events but only one return event, like this:
call ------ the 1st call event
call
return
My solution to this problem is to get the call stack depth of the current event: on a return event, when the depth is less than the 1st call event's depth, I know it's a tail return, and I can handle it properly.
But I found that GetStackDepth() itself costs a lot of time (too slow), which distorts my profiler results.
I can't change the Lua version.

For tail calls, you get a different event (or can check istailcall in the activation record). I have to admit I am using the C API for the profiling code, but it should not be too tricky to translate that into native Lua; although if you have performance issues, note that Roberto Ierusalimschy writes in Programming in Lua, 3rd ed., chapter 24, “The Debug Library”:
For performance reasons, the official interface to these primitives is through the C API.
I have a main C-style debug hook that forwards calls to event handlers:
void debugHook(lua_State * const ls, lua_Debug * const ar)
{
    ASSERT(ls != nullptr);
    ASSERT(ar != nullptr);
    CallGraphTracker & dbg = getCallGraphTracker();
    switch(ar->event)
    {
    case LUA_HOOKCALL:
        dbg.handleEventCall(ls, ar);
        break;
    case LUA_HOOKTAILCALL:
        dbg.handleEventTailCall(ls, ar);
        break;
    case LUA_HOOKRET:
        dbg.handleEventReturn(ls, ar);
        break;
    default:
        SHOULD_NEVER_BE_REACHED();
    }
}
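For completeness, a minimal sketch of how such a hook might be installed (assuming Lua 5.2 or later, where tail calls are reported as LUA_HOOKTAILCALL under the call mask):
// Install the hook for call and return events; in Lua 5.2+ the
// LUA_MASKCALL mask also delivers LUA_HOOKTAILCALL events.
lua_sethook(ls, debugHook, LUA_MASKCALL | LUA_MASKRET, 0);
// ... run the code to be profiled ...
// Remove the hook again when done.
lua_sethook(ls, nullptr, 0, 0);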
The event handlers, in my case, build a call graph based on the Lua function pointers; the CallGraphTracker has Nodes (one root node, with further ones added as subnodes) and keeps track of the curNode. (Actually, I also build function infos, but I have removed that code to keep this simple. Obviously, the Nodes could store all kinds of additional info, too.)
The function Node::addCall just increments a counter if a subnode for the same function pointer (luaPtr), line (ar->currentline), and “tailcall-ness” (true or false) already exists; otherwise a new Node is created. In both cases, the relevant subnode is returned and used as the new curNode.
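A minimal sketch of what such a Node could look like (the member names beyond addCall, isTailCall, and getCallerNode are assumptions for illustration, not my actual code):
#include <memory>
#include <vector>

struct Node
{
    void const * luaPtr = nullptr; // Lua function identity
    int line = -1;                 // call site line
    bool tailCall = false;         // "tailcall-ness"
    long callCount = 0;
    Node * caller = nullptr;
    std::vector<std::unique_ptr<Node>> subNodes;

    Node * addCall(void const * ptr, int callLine, bool isTail)
    {
        // Reuse an existing subnode for the same (function, line, tailcall) triple...
        for(auto & sub : subNodes)
        {
            if(sub->luaPtr == ptr && sub->line == callLine && sub->tailCall == isTail)
            {
                ++sub->callCount;
                return sub.get();
            }
        }
        // ...otherwise create a new one.
        auto sub = std::make_unique<Node>();
        sub->luaPtr = ptr;
        sub->line = callLine;
        sub->tailCall = isTail;
        sub->callCount = 1;
        sub->caller = this;
        subNodes.push_back(std::move(sub));
        return subNodes.back().get();
    }

    bool isTailCall() const { return tailCall; }
    Node * getCallerNode() const { return caller; }
};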
void CallGraphTracker::handleEventCall(lua_State * ls, lua_Debug * ar)
{
    // Get function info and transform into simple function pointer.
    lua_getinfo(ls, "f", ar);
    ASSERT(lua_isfunction(ls, -1));
    void const * luaPtr = lua_topointer(ls, -1);
    ASSERT(luaPtr != nullptr);
    // Add call.
    lua_getstack(ls, 1, ar);
    lua_getinfo(ls, "l", ar);
    curNode = curNode->addCall(luaPtr, ar->currentline, false);
}

void CallGraphTracker::handleEventTailCall(lua_State * ls, lua_Debug * ar)
{
    // Get function info and transform into simple function pointer.
    lua_getinfo(ls, "nSf", ar);
    ASSERT(lua_isfunction(ls, -1));
    void const * luaPtr = lua_topointer(ls, -1);
    ASSERT(luaPtr != nullptr);
    // Add tail call.
    lua_getstack(ls, 1, ar);
    lua_getinfo(ls, "l", ar);
    curNode = curNode->addCall(luaPtr, ar->currentline, true);
}

void CallGraphTracker::handleEventReturn(lua_State *, lua_Debug *)
{
    while(curNode->isTailCall())
    {
        ASSERT(curNode->getCallerNode() != nullptr);
        curNode = curNode->getCallerNode();
    }
    ASSERT(curNode->getCallerNode() != nullptr);
    curNode = curNode->getCallerNode();
}
You'll notice that in handleEventReturn I traverse back over the tail calls first and then do the “proper” return. You can't (strictly speaking) assume a fixed number of tail calls, in case of tail-call recursion of the form:
function f(num)
    if num == 0 then
        return
    end
    -- Do something
    return f(num-1)
end
Unfortunately, while this handles the tail calls correctly, my current problem is that with error handling involved there can still be a mismatch between calls and returns, so I was hoping to be able to ask Lua directly for the depth of the call stack, rather than having to do the slow “trial and error”.
My current plan (I have not yet implemented it) is to keep track of my expected call depth, then get information on that level. If it is what I expected, great, no further tests necessary, moderate cost involved; otherwise I walk up (or is that down?) the stack until I get to a point where my stored call graph information matches the info returned from Lua for that stack depth.
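That plan might look roughly like this (a sketch only; matchesCallGraph() is a hypothetical helper that checks whether the function at a given stack level agrees with the stored call graph):
// Sketch: verify the expected depth first, fall back to searching.
// expectedDepth and matchesCallGraph() are assumptions for illustration.
int findActualDepth(lua_State * ls, int expectedDepth)
{
    lua_Debug ar;
    // Cheap happy path: does the expected level still match our records?
    if(lua_getstack(ls, expectedDepth, &ar) && matchesCallGraph(ls, &ar, expectedDepth))
    {
        return expectedDepth;
    }
    // Mismatch (e.g. after an error unwound some frames): walk the stack
    // until a level agrees with the stored call graph again.
    for(int level = expectedDepth - 1; level >= 0; --level)
    {
        if(lua_getstack(ls, level, &ar) && matchesCallGraph(ls, &ar, level))
        {
            return level;
        }
    }
    return 0; // back at the root
}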

Related

nvwgf2umx.dll CComPtr Crash Sometimes

I am running into a very strange bug right now. I'm currently writing a small project in DirectX 11 and making use of ATL CComPtrs for the COM components. In one instance, I'm wrapping an ID3D11Buffer in a CComPtr. In most of my application this has been fine, with no crashes; however, for some reason, in this very particular instance, I'm crashing occasionally.
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.ByteWidth = sizeof(MiscCBuffer);
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
bd.MiscFlags = 0;
hr = device->CreateBuffer(&bd, nullptr, &_gcVoxelBuffer);
if (FAILED(hr)) {
    throw std::exception("[E] Creating constant buffer in TerrainComponent onAwake.");
}
This is the code I'm using to create the constant buffer. The CPU buffer's values are set like this:
float dimX = _instanceDimensions.x;
float dimY = _instanceDimensions.y;
float dimZ = _instanceDimensions.z;
_cVoxelBuffer.misc.x = dimX;
_cVoxelBuffer.misc.y = dimY;
_cVoxelBuffer.misc.z = dimZ;
_cVoxelBuffer.misc.w = 0;
The MiscCBuffer struct only holds an XMFLOAT4. Finally, to update the constant buffer on the GPU with the CPU data, I use this code:
updateD11Buffer(_gcVoxelBuffer, _cVoxelBuffer, context);
template <class T>
void updateD11Buffer(const CComPtr<ID3D11Buffer>& gcBuffer, const T& cbuffer, const CComPtr<ID3D11DeviceContext>& ctx){
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));
    ctx->Map(gcBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    memcpy(mappedResource.pData, &cbuffer, sizeof(cbuffer));
    ctx->Unmap(gcBuffer, 0);
}
As for the error itself, it sometimes happens when the program first launches. It could successfully launch 10 times in a row, and then fail the next 3 times.
Exception thrown at 0x00007FFB003B273B (nvwgf2umx.dll) in ECS_Simulation.exe: 0xC0000005: Access violation reading location 0x000001BE69F9F000.
I have tried reading online, but a lot of posts regarding nvwgf2umx.dll crashing with an access violation come from shipped game titles; other posts regarding access violations are usually caused by NULL pointers. In my case, I have checked _gcVoxelBuffer and _gcVoxelBuffer.p, both of which are valid pointers.
In addition, the D3D Context object is pointing to a valid location, and the CPU side buffer object is also valid to the best of my knowledge.
I'm not sure if this is really the problem, but it's a problem.
Instead try:
template <class T>
void updateD11Buffer(const CComPtr<ID3D11Buffer>& gcBuffer, const T& cbuffer, const CComPtr<ID3D11DeviceContext>& ctx)
{
    D3D11_MAPPED_SUBRESOURCE mappedResource = {};
    if (SUCCEEDED(ctx->Map(gcBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
    {
        memcpy(mappedResource.pData, &cbuffer, sizeof(cbuffer));
        ctx->Unmap(gcBuffer, 0);
    }
}
ZeroMemory is an ancient Win32 pattern. With C++11 or later compilers, uniform initialization is much easier to use.
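Applied to the buffer-description setup from the question, that looks like this (same fields as above; MiscFlags and StructureByteStride are left at their zero defaults):
D3D11_BUFFER_DESC bd = {}; // zero-initializes every member, no ZeroMemory needed
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.ByteWidth = sizeof(MiscCBuffer);
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;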
Note that a more flexible design would be:
template <class T>
void updateD11Buffer(ID3D11Buffer* gcBuffer, const T& cbuffer, ID3D11DeviceContext* ctx)
{
    ...
}
// call using updateD11Buffer(_gcVoxelBuffer.Get(), _cVoxelBuffer, context.Get());
This version doesn't force the use of a particular smart-pointer, and is less of a "thick syntax forest".
PS: ATL's CComPtr is a bit dated and has a few quirks to it. For example, &_gcVoxelBuffer assumes that _gcVoxelBuffer is always null, so you can easily get resource leaks.
You should take a look at WRL's ComPtr, which is "ATL 2.0". See this article.
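A short sketch of the question's buffer creation with WRL's ComPtr (reusing device, bd, and context from the question, and the raw-pointer helper overload suggested above):
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Buffer> gcVoxelBuffer;

// ReleaseAndGetAddressOf() releases any previously held resource first,
// avoiding the CComPtr operator& assumption that the pointer is null.
HRESULT hr = device->CreateBuffer(&bd, nullptr, gcVoxelBuffer.ReleaseAndGetAddressOf());

// Hand out raw pointers with .Get() when calling helpers:
updateD11Buffer(gcVoxelBuffer.Get(), _cVoxelBuffer, context.Get());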

Saxon-C CentOS8 Compile

I am trying to evaluate Saxon-C 1.2.1 HE on CentOS 8, and installation seems to have gone OK. Trying out the samples via cd samples/cppTests && build64-linux.sh, though, leads to a myriad of compilation errors to the tune of the following:
../../Saxon.C.API/SaxonProcessor.h:599:32: error: division ‘sizeof (JNINativeMethod*) / sizeof (JNINativeMethod)’ does not compute the number of array elements [-Werror=sizeof-pointer-div]
gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
Before summarily and trustfully switching off -Werror=sizeof-pointer-div, I checked the source code, and what's going on there does seem dubious.
bool registerCPPFunction(char * libName, JNINativeMethod * gMethods=NULL){
    if(libName != NULL) {
        setConfigurationProperty("extc", libName);
    }
    if(gMethods == NULL && nativeMethodVect.size()==0) {
        return false;
    } else {
        if(gMethods == NULL) {
            //copy vector to gMethods
            gMethods = new JNINativeMethod[nativeMethodVect.size()];
        }
        return registerNativeMethods(sxn_environ->env, "com/saxonica/functions/>
            gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
    }
    return false;
}
More specifically, sizeof(gMethods) / sizeof(gMethods[0]) would not seem to calculate anything useful by any margin. The intention was probably to arrive at the same value as nativeMethodVect.size(), but seeing this project's source for the very first time, I might be mistaken and the division is in fact intentional?
I am inclined to guess the intention was in fact closer to b than to a in the following example:
#include <cstdio>

struct test
{
    int x, y, z;
};

int main()
{
    test *a = new test[32], b[32];
    printf("%zu %zu\n", sizeof(a)/sizeof(a[0]), sizeof(b)/sizeof(b[0]));
    return 0;
}
which outputs 0 32, which is expected, as sizeof(a) gives the size of a pointer, not the size of the array's memory region.
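If the intent was indeed the vector's element count, the presumed fix (a sketch; the JNI class name is elided here just as it is in the quoted source) would pass the size through explicitly rather than applying sizeof to a pointer:
// gMethods was allocated with nativeMethodVect.size() elements,
// so use that count directly; sizeof(gMethods) is just the pointer size.
gMethods = new JNINativeMethod[nativeMethodVect.size()];
return registerNativeMethods(sxn_environ->env, "com/saxonica/functions/...",
    gMethods, nativeMethodVect.size());
Note this only covers the branch where gMethods is allocated from the vector; if the caller passes its own array, the element count would have to be passed in alongside it.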
That bit of code is there to support the feature of user-defined extension functions in XSLT stylesheets and XQuery queries. If a user is not using these features then they don't need that bit of code. In fact, user-defined extension functions are only available in Saxon-PE/C and Saxon-EE/C, so it should not be in the Saxon-HE/C code base. I have created the following bug issue to investigate the error above: https://saxonica.plan.io/issues/4477
I would think the workaround would be to either remove the code in question, if the extension function feature is not used, or remove the compile flag -Werror=sizeof-pointer-div.
The intended code is as follows (here the sizeof division works, because cppMethods is an actual array, not a pointer):
jobject JNICALL cppNativeCall(jstring funcName, jobjectArray arguments, jobjectArray argTypes){
    //native call code here
}

JNINativeMethod cppMethods[] =
{
    {
        fname,
        funcParameters,
        (void *)&cppNativeCall
    }
};

bool nativeFound = processor->registerNativeMethods(env, "NativeCall",
    cppMethods, sizeof(cppMethods) / sizeof(cppMethods[0]));

MemFault in GC when calling back a closure from C

I am working with Keil MDK-ARM Pro 4.71 for a Cortex-M3 target (STM32F107).
I have compiled the Lua interpreter and a Lua "timer" module that interfaces the chip's timers. I'd like to call a Lua function when the timer elapses.
Here is a sample use:
t = timer.open()
t.event = function() print("Bing !") end
t:start()
So far everything works fine :-)! I see the "Bing !" message printed each time the timer elapses.
Now if I use a closure:
t = timer.open()
i = 0
t.event = function() i = i + 1; print(i); end
t:start()
I'm getting a bad memory access in the GC after some number of timer updates. Since it's an embedded context with very little memory, I may be running out of memory quite fast if there is a leak.
Here is the "t.event" setter (ELIB_TIMER is a C structure representing my timer):
static int ElibTimerSetEvent(lua_State* L)
{
    ELIB_TIMER* pTimer_X = ElibCheckTimer(L, 1, TRUE);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        luaL_unref(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        pTimer_X->LuaFuncKey_i = LUA_REFNIL;
    }
    if (!lua_isnil(L, 2))
    {
        pTimer_X->LuaFuncKey_i = luaL_ref(L, LUA_REGISTRYINDEX);
    }
    return 0;
}
And here is the native callback implementation:
static void ElibTimerEventHandler(SYSEVT_HANDLE Event_H)
{
    ELIB_TIMER* pTimer_X = (ELIB_TIMER*)SWLIB_SYSEVT_GetSideData(Event_H);
    lua_State* L = pTimer_X->L;
    int i = lua_gettop(L);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        lua_rawgeti(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        lua_call(L, 0, 0);
        lua_settop(L, i);
    }
}
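As an aside, since the edit below mentions switching to lua_pcall: a protected-call variant of this callback (a sketch; the error-handling policy is an assumption) avoids a longjmp out of the C callback if the Lua handler raises an error:
static void ElibTimerEventHandler(SYSEVT_HANDLE Event_H)
{
    ELIB_TIMER* pTimer_X = (ELIB_TIMER*)SWLIB_SYSEVT_GetSideData(Event_H);
    lua_State* L = pTimer_X->L;
    int top = lua_gettop(L);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        lua_rawgeti(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        if (lua_pcall(L, 0, 0, 0) != 0)
        {
            // On error, lua_pcall leaves the message on the stack;
            // report it here if desired. The lua_settop below discards it.
        }
        lua_settop(L, top);
    }
}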
This is synchronized externally, so this isn't a synchronization issue.
Am I doing something wrong?
EDIT
Here is the call stack (with a lua_pcall instead of lua_call, but it is the same). The first line is my hard fault handler.
I have found the problem! I ran out of stack space (native stack, not Lua) :p.
I guess this specific script was causing a particularly deep call stack. After increasing the memory allocated for my native stack, the problem is gone. Conversely, if I reduce it, I can't even initialize the interpreter.
Many thanks to those who tried to help here.
Found a bug in your C code: you broke the Lua stack in static int ElibTimerSetEvent(lua_State* L).
luaL_ref will pop the value on the top of the Lua stack: http://www.lua.org/manual/5.2/manual.html#luaL_ref
So you need to copy the value to be ref'd to the top of the stack before the call to luaL_ref:
lua_pushvalue(L, 2); // push the callback to the stack top; it will then be consumed by luaL_ref()
Please fix this and try again.
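Applied to the setter from the question, the corrected function would look like this (unchanged except for the added lua_pushvalue):
static int ElibTimerSetEvent(lua_State* L)
{
    ELIB_TIMER* pTimer_X = ElibCheckTimer(L, 1, TRUE);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        luaL_unref(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        pTimer_X->LuaFuncKey_i = LUA_REFNIL;
    }
    if (!lua_isnil(L, 2))
    {
        lua_pushvalue(L, 2); // copy the callback so luaL_ref pops the copy
        pTimer_X->LuaFuncKey_i = luaL_ref(L, LUA_REGISTRYINDEX);
    }
    return 0;
}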

Lua crashing for no apparent reason

We have Lua integrated into a project, but we've found an odd test case that crashes consistently on ARM:
data = {"A","B","C","D","E","F","G","H","I","J"};

function OnTick(_object)
    local params = {};
    return 1;
end
Here are the basics of how the function is being called from C++:
lua_getglobal(Lua, function_name->c_str()); // Push the function name we want to call onto the stack
if (lua_isnil(Lua, -1))
{
    // Error
    lua_pop(Lua, 1);
    return -1;
}
lua_pushlightuserdata(Lua, (void*)object); // Push the reference object onto the stack
if (lua_pcall(Lua, 1, 1, 0) != 0)
{
    // Error
    lua_pop(Lua, 1);
    return -1;
}
lua_pop(Lua, 1);
return 1;
OnTick crashes after being called around 5 times.
Lua appears to be crashing when the garbage collector tries to clean up. Has anyone else come across something like this and solved it?
Resolved this issue: the client code was corrupting the Lua state.

Bad file descriptor on pthread_detach

My pthread_detach calls fail with a "Bad file descriptor" error. The calls are in the destructor for my class and look like this:
if(pthread_detach(get_sensors) != 0)
    printf("\ndetach on get_sensors failed with error %m", errno);
if(pthread_detach(get_real_velocity) != 0)
    printf("\ndetach on get_real_velocity failed with error %m", errno);
I have only ever dealt with this error when using sockets. What could be causing this to happen in a pthread_detach call that I should look for? Or is it likely something in the thread callback that could be causing it? Just in case, the callbacks look like this:
void* Robot::get_real_velocity_thread(void* threadid) {
    Robot* r = (Robot*)threadid;
    r->get_real_velocity_thread_i();
}

inline void Robot::get_real_velocity_thread_i() {
    while(1) {
        usleep(14500);
        sensor_packet temp = get_sensor_value(REQUESTED_VELOCITY);
        real_velocity = temp.values[0];
        if(temp.values[1] != -1)
            real_velocity += temp.values[1];
    } //end while
}
/* Callback for get sensors thread */
void* Robot::get_sensors_thread(void* threadid) {
    Robot* r = (Robot*)threadid;
    r->get_sensors_thread_i();
} //END GETSENSORS_THREAD

inline void Robot::get_sensors_thread_i() {
    while(1) {
        usleep(14500);
        if(sensorsstreaming) {
            unsigned char receive;
            int read = 0;
            read = connection.PollComport(port, &receive, sizeof(unsigned char));
            if((int)receive == 19) {
                read = connection.PollComport(port, &receive, sizeof(unsigned char));
                unsigned char rest[54];
                read = connection.PollComport(port, rest, 54);
                /* ***SET SENSOR VALUES*** */
                //bump + wheel drop
                sensor_values[0] = (int)rest[1];
                sensor_values[1] = -1;
                //wall
                sensor_values[2] = (int)rest[2];
                sensor_values[3] = -1;
                ...
                ... lots more setting just like the two above
            } //end if header == 19
        } //end if sensors streaming
    } //end while
} //END GET_SENSORS_THREAD_I
Thank you for any help.
The pthread_* functions return an error code; they do not set errno. (Well, they may of course, but not in any way that is documented.)
Your code should save the value returned by pthread_detach and print that.
The Single Unix Specification documents two return values for this function: ESRCH (no thread with that ID was found) and EINVAL (the thread is not joinable).
Detaching threads in the destructor of an object seems silly. Firstly, if they are going to be detached eventually, why not just create them that way?
If there is any risk that the threads can use the object that is being destroyed, they need to be stopped, not detached. I.e. you somehow indicate to the threads that they should shut down, and then wait for them to reach some safe place after which they will not touch the object any more. pthread_join is useful for this.
Also, it is a little late to be doing that in the destructor. A destructor should only run when the thread executing it holds the only reference to that object. If other threads are still using the object, then you're destroying it out from under them.
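A minimal sketch of the stop-then-join pattern described above (the flag name and thread members are assumptions, not the asker's actual code):
#include <pthread.h>
#include <unistd.h>
#include <atomic>

class Robot {
public:
    ~Robot() {
        // Tell the loops to finish, then wait for them; only after
        // pthread_join returns is it safe to tear the object down.
        stop_requested = true;
        pthread_join(get_sensors, nullptr);
        pthread_join(get_real_velocity, nullptr);
    }

private:
    std::atomic<bool> stop_requested{false};
    pthread_t get_sensors{};
    pthread_t get_real_velocity{};

    void get_real_velocity_thread_i() {
        while (!stop_requested) { // check the flag instead of while(1)
            usleep(14500);
            // ... poll the sensor and update real_velocity as before ...
        }
    }
};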
