I am working with Keil MDK-ARM Pro 4.71 for a Cortex-M3 target (STM32F107).
I have compiled the Lua interpreter and a Lua "timer" module that interfaces with the chip's timers. I'd like to call a Lua function when the timer elapses.
Here is a sample use:
t = timer.open()
t.event = function() print("Bing !") end
t:start()
Up to this point, everything works fine :-)! I see the "Bing !" message printed each time the timer elapses.
Now, if I use a closure:
t = timer.open()
i = 0
t.event = function() i = i + 1; print(i); end
t:start()
I get a bad memory access in the GC after some number of timer updates. Since this is an embedded context with very little memory, I may be running out of memory quite fast if there is a leak.
Here is the "t.event" setter (ELIB_TIMER is a C structure representing my timer):
static int ElibTimerSetEvent(lua_State* L)
{
    ELIB_TIMER* pTimer_X = ElibCheckTimer(L, 1, TRUE);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        luaL_unref(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        pTimer_X->LuaFuncKey_i = LUA_REFNIL;
    }
    if (!lua_isnil(L, 2))
    {
        pTimer_X->LuaFuncKey_i = luaL_ref(L, LUA_REGISTRYINDEX);
    }
    return 0;
}
And here is the native callback implementation:
static void ElibTimerEventHandler(SYSEVT_HANDLE Event_H)
{
    ELIB_TIMER* pTimer_X = (ELIB_TIMER*)SWLIB_SYSEVT_GetSideData(Event_H);
    lua_State* L = pTimer_X->L;
    int i = lua_gettop(L);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        lua_rawgeti(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        lua_call(L, 0, 0);
        lua_settop(L, i);
    }
}
This is synchronized externally, so this isn't a synchronization issue.
Am I doing something wrong?
EDIT
Here is the call stack (captured with lua_pcall instead of lua_call, but it is the same); the first line is my hard fault handler. [Call-stack screenshot not reproduced here.]
I have found the problem! I ran out of stack (native stack, not Lua) space :p.
I guess this specific script was causing a particularly deep call stack. After increasing the memory allocated for my native stack, the problem is gone. Conversely, if I reduce it, I can't even initialize the interpreter.
Many thanks to those who tried to help here.
Found a bug in your C code: you broke the Lua stack in ElibTimerSetEvent.
luaL_ref pops the value on the top of the Lua stack: http://www.lua.org/manual/5.2/manual.html#luaL_ref
So you need to copy the value to be refed, before the call to luaL_ref:
lua_pushvalue(L, 2); // push the callback to stack top, and then it will be consumed by luaL_ref()
Please fix this and try again.
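For reference, this is how the setter would look with that one-line fix applied (the same code as above, plus the lua_pushvalue):
static int ElibTimerSetEvent(lua_State* L)
{
    ELIB_TIMER* pTimer_X = ElibCheckTimer(L, 1, TRUE);
    if (pTimer_X->LuaFuncKey_i != LUA_REFNIL)
    {
        luaL_unref(L, LUA_REGISTRYINDEX, pTimer_X->LuaFuncKey_i);
        pTimer_X->LuaFuncKey_i = LUA_REFNIL;
    }
    if (!lua_isnil(L, 2))
    {
        lua_pushvalue(L, 2); /* copy the callback to the top; luaL_ref pops this copy */
        pTimer_X->LuaFuncKey_i = luaL_ref(L, LUA_REGISTRYINDEX);
    }
    return 0;
}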
Related
I am trying to evaluate Saxon-C 1.2.1 HE on CentOS 8, and the installation seems to have gone OK. Trying out the samples via cd samples/cppTests && build64-linux.sh, though, leads to a myriad of compilation errors to the tune of the following:
../../Saxon.C.API/SaxonProcessor.h:599:32: error: division ‘sizeof (JNINativeMethod*) / sizeof (JNINativeMethod)’ does not compute the number of array elements [-Werror=sizeof-pointer-div]
gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
Before summarily and trustfully switching off -Werror=sizeof-pointer-div, I checked the source code, and what's going on there does seem dubious.
bool registerCPPFunction(char * libName, JNINativeMethod * gMethods=NULL){
    if(libName != NULL) {
        setConfigurationProperty("extc", libName);
    }
    if(gMethods == NULL && nativeMethodVect.size()==0) {
        return false;
    } else {
        if(gMethods == NULL) {
            //copy vector to gMethods
            gMethods = new JNINativeMethod[nativeMethodVect.size()];
        }
        return registerNativeMethods(sxn_environ->env, "com/saxonica/functions/>
            gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
    }
    return false;
}
More specifically, sizeof(gMethods) / sizeof(gMethods[0]) does not seem to calculate anything useful: since gMethods is a pointer, the division yields the pointer size divided by the element size, not the element count. The intention was probably to arrive at the same value as nativeMethodVect.size(), but since this is my first look at this project's source, I might be mistaken and the division may in fact be intentional?
I am inclined to guess the intention was in fact closer to b than to a in the following example:
#include <cstdio>

struct test
{
    int x, y, z;
};

int main()
{
    test *a = new test[32], b[32];
    // %zu for size_t; sizeof(a) is the pointer size, sizeof(b) the whole array
    printf("%zu %zu\n", sizeof(a)/sizeof(a[0]), sizeof(b)/sizeof(b[0]));
    delete[] a;
    return 0;
}
which outputs 0 32, as expected, since sizeof(a) gives the size of a pointer, not the size of the array's memory region.
That bit of code is there to support user-defined extension functions in XSLT stylesheets and XQuery queries. If a user is not using these features then they don't need that bit of code. In fact, user-defined extension functions are only available in Saxon-PE/C and Saxon-EE/C, so it should not be in the Saxon-HE/C code base. I have created the following bug issue to investigate the error above: https://saxonica.plan.io/issues/4477
I would think the workaround would be either to remove the code in question, if the extension-function feature is not used, or to remove the compile flag -Werror=sizeof-pointer-div.
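For illustration, the vector-copy branch could carry the element count explicitly instead of relying on sizeof. A minimal sketch, using the names from the snippet above and eliding the truncated class-path string with "...":
if (gMethods == NULL) {
    // Copy the vector into a heap array and keep the count alongside it;
    // sizeof on a pointer cannot recover this number.
    size_t count = nativeMethodVect.size();
    gMethods = new JNINativeMethod[count];
    for (size_t i = 0; i < count; ++i)
        gMethods[i] = nativeMethodVect[i];
    return registerNativeMethods(sxn_environ->env,
        "com/saxonica/functions/...", gMethods, count);
}
When gMethods is supplied by the caller, its length would have to be passed in as a separate parameter for the same reason.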
The intended code is as follows:
jobject JNICALL cppNativeCall(jstring funcName, jobjectArray arguments, jobjectArray argTypes){
    //native call code here
}

JNINativeMethod cppMethods[] =
{
    {
        fname,
        funcParameters,
        (void *)&cppNativeCall
    }
};

bool nativeFound = processor->registerNativeMethods(env, "NativeCall",
    cppMethods, sizeof(cppMethods) / sizeof(cppMethods[0]));
I want to get the current function call-stack depth efficiently. Is there an API or something to do this?
I tried a method like this, but it's too slow.
local function GetStackDepth()
    local depth = 0
    while true do
        if not debug.getinfo(3 + depth) then
            break
        end
        depth = depth + 1
    end
    return depth
end
EDIT:
The real problem is that I'm writing a profiler tool, using debug.sethook to do something on call and return events. But on Lua 5.1 or LuaJIT, when a tail return happens I get two call events and just one return event, like this:
call ------ the 1st call event
call
return
So my solution is to get the call-stack depth at each event: on a return event, if the depth is less than the depth recorded at the first call event, I know it's a tail return, and I can handle it properly.
But I found that GetStackDepth() itself costs a lot of time (it is too slow), which skews my profiler results.
I can't change the Lua version.
For tail calls, you get a different event (or can check istailcall in the activation record). I have to admit I am using the C API for the profiling code, but it should not be too tricky to translate that into native Lua; although if you have performance issues, note that Roberto Ierusalimschy writes in Programming in Lua, 3rd ed., chapter 24, "The Debug Library":
For performance reasons, the official interface to these primitives is through the C API.
I have a main C-style debug hook that forwards calls to event handlers:
void debugHook(lua_State * const ls, lua_Debug * const ar)
{
    ASSERT(ls != nullptr);
    ASSERT(ar != nullptr);
    CallGraphTracker & dbg = getCallGraphTracker();
    switch(ar->event)
    {
    case LUA_HOOKCALL:
        dbg.handleEventCall(ls, ar);
        break;
    case LUA_HOOKTAILCALL:
        dbg.handleEventTailCall(ls, ar);
        break;
    case LUA_HOOKRET:
        dbg.handleEventReturn(ls, ar);
        break;
    default:
        SHOULD_NEVER_BE_REACHED();
    }
}
The event handlers, in my case, build a call graph based on the Lua function pointers; the CallGraphTracker has Nodes (one root node, with further ones added as subNodes) and keeps track of the curNode. (Actually, I also build function infos, but I have removed that code to keep this simple. Obviously, the Nodes could store all kinds of additional info, too.)
The function Node::addCall just increments a counter if a subnode for the same function pointer (luaPtr), line (ar->currentline), and "tailcall-ness" (true or false) already exists; otherwise a new Node is created. In both cases, the relevant subnode is returned and used as the new currentNode.
void CallGraphTracker::handleEventCall(lua_State * ls, lua_Debug * ar)
{
    // Get function info and transform into simple function pointer.
    lua_getinfo(ls, "f", ar);
    ASSERT(lua_isfunction(ls, -1));
    void const * luaPtr = lua_topointer(ls, -1);
    ASSERT(luaPtr != nullptr);
    // Add call.
    lua_getstack(ls, 1, ar);
    lua_getinfo(ls, "l", ar);
    curNode = curNode->addCall(luaPtr, ar->currentline, false);
}

void CallGraphTracker::handleEventTailCall(lua_State * ls, lua_Debug * ar)
{
    // Get function info and transform into simple function pointer.
    lua_getinfo(ls, "nSf", ar);
    ASSERT(lua_isfunction(ls, -1));
    void const * luaPtr = lua_topointer(ls, -1);
    ASSERT(luaPtr != nullptr);
    // Add tail call.
    lua_getstack(ls, 1, ar);
    lua_getinfo(ls, "l", ar);
    curNode = curNode->addCall(luaPtr, ar->currentline, true);
}

void CallGraphTracker::handleEventReturn(lua_State *, lua_Debug *)
{
    while(curNode->isTailCall())
    {
        ASSERT(curNode->getCallerNode() != nullptr);
        curNode = curNode->getCallerNode();
    }
    ASSERT(curNode->getCallerNode() != nullptr);
    curNode = curNode->getCallerNode();
}
You'll notice that in handleEventReturn I traverse back the tailcalls first and then do the “proper” return. You can't assume a fixed number of tailcalls (strictly speaking) in case of tailcall-recursion of the form
function f(num)
    if num == 0 then
        return
    end
    -- Do something
    return f(num-1)
end
Unfortunately, while this handles the tail calls correctly, my current problem is that with error handling involved there can still be a mismatch between calls and returns, so I was hoping to be able to ask Lua directly for the depth of the call stack, rather than having to do the slow "trial and error".
My current plan (I have not yet implemented it) is to keep track of my expected call depth, then get information at that level. If it is what I expected, great, no further tests necessary, at moderate cost; otherwise I walk up (or is that down?) the stack until I reach a point where my stored call-graph information matches the info returned by Lua for that stack depth.
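If the depth does have to be probed, one way to avoid the linear walk is a doubling-then-binary search over lua_getstack, which costs O(log n) probes instead of O(n). A minimal C sketch (the function name is mine; lua_getstack exists in both Lua 5.1 and 5.2):
#include <lua.h>

/* Count stack frames with O(log n) lua_getstack probes instead of O(n). */
static int luaStackDepth(lua_State * ls)
{
    lua_Debug ar;
    int lo, hi;
    if (!lua_getstack(ls, 0, &ar))
        return 0;                /* no frame at level 0: empty stack */
    lo = 0;                      /* highest level known to be valid */
    hi = 1;
    while (lua_getstack(ls, hi, &ar))
    {                            /* double hi until it is past the top */
        lo = hi;
        hi *= 2;
    }
    while (lo + 1 < hi)
    {                            /* binary search for the boundary */
        int mid = lo + (hi - lo) / 2;
        if (lua_getstack(ls, mid, &ar))
            lo = mid;
        else
            hi = mid;
    }
    return lo + 1;               /* levels are 0-based, so depth = lo + 1 */
}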
We have Lua integrated into a project but we've found an odd test case that crashes consistently on ARM:
data = {"A","B","C","D","E","F","G","H","I","J"};
function OnTick(_object)
local params = {};
return 1;
end
Here are the basics of how the function is being called from C++:
lua_getglobal(Lua, function_name->c_str()); // Push the function we want to call onto the stack
if (lua_isnil(Lua, -1))
{
    // Error
    lua_pop(Lua, 1);
    return -1;
}
lua_pushlightuserdata(Lua, (void*)object); // Push the reference object onto the stack
if (lua_pcall(Lua, 1, 1, 0) != 0)
{
    // Error
    lua_pop(Lua, 1);
    return -1;
}
lua_pop(Lua, 1);
return 1;
OnTick crashes after being called around 5 times.
Lua appears to be crashing when the garbage collector tries to clean up. Has anyone else come across something like this and solved it?
Resolved this issue: the client code was corrupting the Lua state.
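For anyone hitting something similar: a cheap way to catch that kind of corruption early is to assert stack balance around every native entry into Lua (a generic sketch, not this project's code; assert is from <assert.h>):
int top = lua_gettop(Lua);
// ... the lua_getglobal / lua_pcall sequence shown above ...
assert(lua_gettop(Lua) == top); /* fires as soon as some code path leaks or over-pops */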
My pthread_detach calls fail with a "Bad file descriptor" error. The calls are in the destructor for my class and look like this:
if(pthread_detach(get_sensors) != 0)
    printf("\ndetach on get_sensors failed with error %m", errno);
if(pthread_detach(get_real_velocity) != 0)
    printf("\ndetach on get_real_velocity failed with error %m", errno);
I have only ever dealt with this error when using sockets. What could be causing this to happen in a pthread_detach call, and what should I look for? Or is it likely something in the thread callback that could be causing it? Just in case, the callbacks look like this:
void* Robot::get_real_velocity_thread(void* threadid) {
    Robot* r = (Robot*)threadid;
    r->get_real_velocity_thread_i();
}

inline void Robot::get_real_velocity_thread_i() {
    while(1) {
        usleep(14500);
        sensor_packet temp = get_sensor_value(REQUESTED_VELOCITY);
        real_velocity = temp.values[0];
        if(temp.values[1] != -1)
            real_velocity += temp.values[1];
    } //end while
}

/*Callback for get sensors thread*/
void* Robot::get_sensors_thread(void* threadid) {
    Robot* r = (Robot*)threadid;
    r->get_sensors_thread_i();
} //END GETSENSORS_THREAD

inline void Robot::get_sensors_thread_i() {
    while(1) {
        usleep(14500);
        if(sensorsstreaming) {
            unsigned char receive;
            int read = 0;
            read = connection.PollComport(port, &receive, sizeof(unsigned char));
            if((int)receive == 19) {
                read = connection.PollComport(port, &receive, sizeof(unsigned char));
                unsigned char rest[54];
                read = connection.PollComport(port, rest, 54);
                /* ***SET SENSOR VALUES*** */
                //bump + wheel drop
                sensor_values[0] = (int)rest[1];
                sensor_values[1] = -1;
                //wall
                sensor_values[2] = (int)rest[2];
                sensor_values[3] = -1;
                ...
                ...
                lots more setting just like the two above
            } //end if header == 19
        } //end if sensors streaming
    } //end while
} //END GET_SENSORS_THREAD_I
Thank you for any help.
The pthread_* functions return an error code; they do not set errno. (Well, they may of course, but not in any way that is documented.)
Your code should capture the value returned by pthread_detach and print that.
Single Unix Spec documents two return values for this function: ESRCH (no thread by that ID was found) and EINVAL (the thread is not joinable).
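So the reporting should use the return value, along these lines (a sketch of the corrected calls from the destructor above; strerror is from <string.h>):
int rc = pthread_detach(get_sensors);
if (rc != 0)
    printf("\ndetach on get_sensors failed: %s", strerror(rc));
rc = pthread_detach(get_real_velocity);
if (rc != 0)
    printf("\ndetach on get_real_velocity failed: %s", strerror(rc));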
Detaching threads in the destructor of an object seems silly. Firstly, if they are going to be detached eventually, why not just create them that way?
If there is any risk that the threads can use the object that is being destroyed, they need to be stopped, not detached. I.e. you somehow indicate to the threads that they should shut down, and then wait for them to reach some safe place after which they will not touch the object any more. pthread_join is useful for this.
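A minimal sketch of that stop-then-join pattern, assuming a flag that the thread loops poll (the flag name is mine):
// In the class: a flag the thread loops check.
volatile bool stop_requested = false; // or std::atomic<bool> in C++11

// In the loops, replace while(1) with:
while (!stop_requested) { /* ... existing loop body ... */ }

// At shutdown (ideally an explicit stop() called before destruction):
stop_requested = true;
pthread_join(get_sensors, NULL);
pthread_join(get_real_velocity, NULL);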
Also, it is a little late to be doing that from the destructor. A destructor should only be run when the thread executing it is the only thread with a reference to that object. If threads are still using the object, then you're destroying it from under them.
I'm attempting to write a bit of JS that will read a file and write it out to a stream. The deal is that the file is extremely large, and so I have to read it bit by bit. It seems that I shouldn't be running out of memory, but I do. Here's the code:
var size = fs.statSync("tmpfile.tmp").size;
var fp = fs.openSync("tmpfile.tmp", "r");
for(var pos = 0; pos < size; pos += 50000){
    var buf = new Buffer(50000),
        len = fs.readSync(fp, buf, 0, 50000, (function(){
            console.log(pos);
            return pos;
        })());
    data_output.write(buf.toString("utf8", 0, len));
    delete buf;
}
data_output.end();
For some reason it hits 264900000 and then throws FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory. I'd figure that the data_output.write() call would force it to write the data out to data_output, and then discard it from memory, but I could be wrong. Something is causing the data to stay in memory, and I've no idea what it would be. Any help would be greatly appreciated.
I had a very similar problem. I was reading in a very large csv file with 10M lines, and writing out its json equivalent. I saw in the windows task manager that my process was using > 2GB of memory. Eventually I figured out that the output stream was probably slower than the input stream, and that the outstream was buffering a huge amount of data. I was able to fix this by pausing the instream every 100 writes to the outstream, and waiting for the outstream to empty. This gives time for the outstream to catch up with the instream. I don't think it matters for the sake of this discussion, but I was using 'readline' to process the csv file one line at a time.
I also figured out along the way that if, instead of writing every line to the outstream, I concatenate 100 or so lines together, then write them together, this also improved the memory situation and made for faster operation.
In the end, I found that I could do the file transfer (csv -> json) using just 70M of memory.
Here's a code snippet for my write function:
var write_counter = 0;
var out_string = "";

function myWrite(inStream, outStream, string, finalWrite) {
    out_string += string;
    write_counter++;
    if ((write_counter === 100) || (finalWrite)) {
        // pause the instream until the outstream clears
        inStream.pause();
        outStream.write(out_string, function () {
            inStream.resume();
        });
        write_counter = 0;
        out_string = "";
    }
}
You should be using pipes, such as:
var fp = fs.createReadStream("tmpfile.tmp");
fp.pipe(data_output);
For more information, check out: http://nodejs.org/docs/v0.5.10/api/streams.html#stream.pipe
EDIT: the problem in your implementation, btw, is that by doing it in chunks like that, the write buffer isn't going to get flushed, and you're going to read in the entire file before writing much of it back out.
According to the documentation, data_output.write(...) will return true if the string has been flushed, and false if it has not (due to the kernel buffer being full). What kind of stream is this?
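If it is a standard writable stream, the usual pattern is to stop writing when write() returns false and continue on the 'drain' event; roughly like this (a sketch, where chunk and readNextChunk stand in for however the next piece is produced):
if (data_output.write(chunk)) {
    readNextChunk(); // the buffer was flushed; keep going
} else {
    // the stream's buffer is full: wait for it to drain before writing more
    data_output.once('drain', readNextChunk);
}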
Also, I'm (fairly) sure this isn't the problem, but: how come you allocate a new Buffer on each loop iteration? Wouldn't it make more sense to initialize buf before the loop?
I don't know how the synchronous file functions are implemented, but have you considered using the async ones? That would be more likely to allow garbage collection and I/O flushing to happen. So instead of a for loop, you would trigger the next read in the callback function of the previous read.
Something along these lines (note also that, per other comments, I'm reusing the Buffer):
var buf = new Buffer(50000);
var pos = 0, bytesRead;

function readNextChunk () {
    fs.read(fp, buf, 0, 50000, pos,
        function(err, bytesRead){
            if (err) {
                // handle error
            }
            else {
                data_output.write(buf.toString("utf8", 0, bytesRead));
                pos += bytesRead;
                if (pos < size)
                    readNextChunk();
            }
        });
}
readNextChunk();