I was doing some calculations to use as test cases for a question I'm going to post on PPCG Stack Exchange, and I noticed that in a piece of code like this:
for i = 0, 20 do
  io.write(i..": ")
  diff(i)
end
(where diff is a function that does some pretty heavy calculation and prints the result), the result of diff is calculated first, and only then are i: and the result of diff printed.
But why is this happening? Shouldn't I see i: before and during the calculation, and the result of the calculation only afterwards? Why does it wait for diff to finish before printing anything?
I first noticed this using LuaJIT, but it also happens on vanilla Lua, and even outside of a for loop.
Just as with many other output functions in many other languages, io.write output is buffered. The call is evaluated; your output simply sits in an intermediate buffer until that buffer is flushed or fills up. Add an io.flush() call if you need your data to go through right away.
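For example, a minimal sketch of that fix applied to the loop from the question (io.flush is the standard-library call mentioned above):

for i = 0, 20 do
  io.write(i..": ")
  io.flush()   -- push the buffered prefix to the terminal before the slow call
  diff(i)      -- heavy calculation that prints its own result when done
end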
I'm new to Lua and I have a question regarding memory management in Lua.
Question 1) When running a command using io.popen(), I saw that many Lua programmers write a close statement after using the popen() function. I wonder what the reason for that is. For example, look at this code:
handle = io.popen("ls -a")
output = handle:read("*all")
handle:close()
print(output)
handle = io.popen("date")
output = handle:read("*all")
handle:close()
print(output)
I heard Lua can manage memory itself, so do I really need to write handle:close() like above? What will happen to the memory if I just ignore the handle:close() statement and write it like this?
handle = io.popen("ls -a")
handle = io.popen("date")
output = handle:read("*all")
Question 2) For the code in question 1, in terms of memory usage, can we write the handle:close() statement only once at the end, instead of twice, like this?
handle = io.popen("ls -a")
output = handle:read("*all")
-- handle:close() -- dont close it yet do at the end
print(output)
handle = io.popen("date") -- this use the same variable `handle` previously
output = handle:read("*all")
handle:close() -- only one statement to close all above
print(output)
You can see that I didn't close the handle from the first io.popen call, but I do close it at the end. Will this make the program slow because I close things with only one close statement at the end?
Lua will close the file handle automatically when the garbage collector gets around to collecting it.
Lua Manual 5.4: file:close
Closes file. Note that files are automatically closed when their handles are garbage collected, but that takes an unpredictable amount of time to happen.
However, it is best practice to close handles yourself as soon as you are done with them, because it will take an unknown amount of time for the GC to do it for you.
This is not really an issue of memory but of a much more limited resource: open file handles. On a Windows machine the C runtime allows something like 512 simultaneously open stdio streams per process by default, a small pool that is easy to exhaust.
As for the second question, when you reassign a variable AND there are no other remaining references to the previous value, that value will eventually be collected by the GC.
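For illustration, a minimal sketch of that behaviour (the ls command assumes a POSIX-style shell; collectgarbage is only used here to force the point):

local handle = io.popen("ls -a")
handle = nil                -- the pipe object is now unreferenced, but still open
collectgarbage("collect")   -- a full GC cycle finalizes the handle, which closes it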
Question 1
In this case the close is not for memory reasons, but to close the file. When a file handle gets collected, it will be closed automatically, but if a program doesn't generate much garbage (which some programmers specifically optimize for), the GC might not run for quite a while after the program is done with the file handling, and the file would stay open.
Also, if the variable stays in scope, then the GC won't get to collect it at all until the scope ends, which might be a very long time.
Question 2
That wouldn't work. Methods get called on values, not on variables, so when you assign a new value to a variable, the old value isn't closed, it just becomes unreferenced. Calling a method on the new value won't affect any other value that used to be stored in the variable.
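A small sketch of what that means for the code in question 2 (h is just an illustrative name):

local h = io.popen("ls -a")   -- first pipe
h = io.popen("date")          -- h now refers to a second pipe; the first one is
                              -- merely unreferenced, not closed
h:close()                     -- closes only the "date" pipe; the "ls -a" pipe stays
                              -- open until the GC finalizes it

So each handle still needs its own close() call, which is why closing right after the read is the recommended pattern.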
I'm porting FFT code from Java to Lua, and I'm starting to worry a bit about the fact that in Lua the array part of a table starts indexing at 1 while in Java array indexing starts at 0.
For the input array this causes no problem, because the Java code is set up to handle the possibility that the data under consideration is not located at the start of the array. However, all of the working arrays internal to the code are assumed to start indexing at 0. I know that the code will work as written -- Lua tables are awesome like that -- but I have no sense at all of the performance hit I might incur by having the "0" element of the array go into the hash-table part of the underlying C structure (or indeed, whether that is what will happen).
My question: is this something worth worrying about? Should I be planning to profile and hand-optimize the code? (The code will eventually be used to transform many relatively small (> 100 time points) signals of varying lengths not known in advance.)
I made a small, probably not that reliable, test:
local arr = {}
for i = 0, 10000000 do
  arr[i] = i*2
end

for k, v in pairs(arr) do
  arr[k] = v*v
end
And a similar version with 1 as the first index. On my system:
$ time lua example0.lua
real 2.003s
$ time lua example1.lua
real 2.014s
I was also interested in how table.insert would perform:

for i = 1, 10000000 do
  table.insert(arr, 2*i)
...

and, surprisingly:
$ time lua example2.lua
real 6.012s
Results:
Of course, it depends on what system you're running it on, and probably also on which Lua version, but it seems that there is little to no difference between zero-based and one-based indexing. A bigger difference is caused by the way you insert things into the array.
I think the correct answer in this case is to change the algorithm so that everything is indexed from 1, and to consider that part of the conversion.
Your FFT will be less surprising to another Lua user (like me), given that all "array-like" tables are indexed by one.
It might not be as stressful as you think, given the way numeric loops are structured in Lua (where the "start" and the "end" are inclusive). You would be replacing this:

for i = 0, #array - 1 do
  ... (do stuff with i)
end

with this:

for i = 1, #array do
  ... (do stuff with i)
end
The non-numeric loops would remain unchanged (except that you will be able to use ipairs too, if you so desire).
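For instance, a tiny sketch of the generic ipairs loop that becomes available once everything starts at index 1 (array is just a placeholder name):

for i, v in ipairs(array) do
  -- do stuff with i and v; ipairs stops at the first missing index
end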
I'm running some tests for a chip via Verilog, and I've run into a little bit of a problem where I am scratching my head a little. I'm testing coverage on the code to make sure all states happen over randomized testing of all parameters, etc.
In evaluating two values of the following type:
case(state_vector)
  STATE1:
    ...
  STATE2:
    if(!var1 && var2)
      state_vector = STATE1;
    else
      state_vector = STATE2;
  STATE3:
    ...
Now the problem is that in doing coverage analysis the statement after the else statement is never reached, meaning that the if-statement always evaluates to true.
I originally assumed that the values of var1 and var2 were 0 and 1, respectively. Upon double checking before finishing my report I noticed that this assumption was incorrect, as a waveform analysis shows that var1 is always 1 and var2 is always 0 throughout the entire simulation.
Now I will test to make sure the values change the way I want them to, but I'm curious as to how this can happen in Verilog. Essentially I am slipping into a state because the if-statement evaluates to true for an infinitesimal unit of time.
Any ideas on how to better evaluate this problem? I'd like to check that another function isn't changing my state_vector at the same time I'm trying to check my current state.
A quick and dirty way is to sprinkle $display("%t %m got here",$time); statements around the code in question and make sure there are labels for the begin-end blocks, e.g.:
begin : meaningful_label
  $display("%t %m got here",$time);
  ... code ...
  $display("%t %m got here too",$time);
end
If the display statements are called, then state_vector is being assigned somewhere else. Otherwise, something is preventing the code from executing.
To further debug:
If no messages are displayed: add more display messages at higher levels.
If the messages are displayed: some waveform viewers have active-driver tracing. If your viewer does not have this feature, then add messages around all the other assigning statements and watch for the time stamp at which the condition should be true.
I am trying to run this code but it keeps crashing:
log10(x):=log(x)/log(10);
char(x):=floor(log10(x))+1;
mantissa(x):=x/10**char(x);
chop(x,d):=(10**char(x))*(floor(mantissa(x)*(10**d))/(10**d));
rnd(x,d):=chop(x+5*10**(char(x)-d-1),d);
d:5;
a:10;
Ibwd:[[30,rnd(integrate((x**60)/(1+10*x^2),x,0,1),d)]];
for n from 30 thru 1 step -1 do Ibwd:append([[n-1,rnd(1/(2*n-1)-a*last(first(Ibwd)),d)]],Ibwd);
Maxima crashes when it evaluates the last line. Any ideas why it may happen?
Thank you so much.
The problem is that the difference becomes negative and your rounding function dies horribly with a negative argument. To find this out, I changed your loop to:
for n from 30 thru 1 step -1 do
block([],
print (1/(2*n-1)-a*last(first(Ibwd))),
print (a*last(first(Ibwd))),
Ibwd: append([[n-1,rnd(1/(2*n-1)-a*last(first(Ibwd)),d)]],Ibwd),
print (Ibwd));
The last difference printed before everything fails miserably is -316539/6125000. So now try
rnd(-1,3)
and see the same problem. This all stems from the fact that you're taking the log of a negative number, which Maxima interprets as a complex number by analytic continuation. Maxima doesn't evaluate this until it absolutely has to and, somewhere in the evaluation code, something's dying horribly.
I don't know the "fix" for your specific example, since I'm not exactly sure what you're trying to do, but hopefully this gives you enough info to find it yourself.
If you want to deconstruct a floating point number, let's first make sure that it is a bigfloat.
say z: 34.1b0
You can access the parts of a bigfloat by using lisp, and you can also access the mantissa length in bits by ?fpprec.
Thus ?second(z)*2^(?third(z)-?fpprec) gives you:
4799148352916685/140737488355328
and bfloat(%) gives you:
3.41b1.
If you want the mantissa of z as an integer, look at ?second(z)
Now I am not sure what it is that you are trying to accomplish in base 10, but Maxima
does not do internal arithmetic in base 10.
If you want more bits or fewer, you can set fpprec,
which is linked to ?fpprec. fpprec is the "approximate base 10" precision.
Thus fpprec is initially 16, and ?fpprec is correspondingly 56.
You can easily change them both, e.g. fpprec:100
corresponds to ?fpprec of 335.
If you are diddling around with float representations, you might benefit from knowing that you can look at the underlying Lisp form of any object by typing, for example,
?print(z)
which prints the internal form using the Lisp print function.
You can also trace any function, your own or a system function, using trace.
For example, you could consider doing this:
trace(append,rnd,integrate);
If you want to use machine floats, I suggest you use, for the last line:

for n from 30 thru 1 step -1 do
    Ibwd: append([[n-1, rnd(1/(2.0*n - 1.0) - a*last(first(Ibwd)), d)]], Ibwd);
Note the decimal points. But even that is not quite enough, because integration
inserts exact structures like atan(10). Trying to round these things, or compute log
of them is probably not what you want to do. I suspect that Maxima is unhappy because log is given some messy expression that turns out to be negative, even though it initially thought otherwise. It hands the number to the lisp log program which is perfectly happy to return an appropriate common-lisp complex number object. Unfortunately, most of Maxima was written BEFORE LISP HAD COMPLEX NUMBERS.
Thus the result (log -0.5)= #C(-0.6931472 3.1415927) is entirely unexpected to the rest of Maxima. Maxima has its own form for complex numbers, e.g. 3+4*%i.
In particular, the Maxima display program predates the common lisp complex number format and does not know what to do with it.
The error (stack overflow !!!) is from the display program trying to display a common lisp complex number.
How to fix all this? Well, you could try changing your program so it computes what you really want, in which case it probably won't trigger this error. Maxima's display program should be fixed, too. Also, I suspect there is something unfortunate in simplification of logs of numbers that are negative but not obviously so.
This is probably waaay too much information for the original poster, but maybe the paragraph above will help out and also possibly improve Maxima in one or more places.
It appears that your program triggers an error in Maxima's simplification (algebraic identities) code. We are investigating and I hope we have a bug fix soon.
In the meantime, here is an idea. Looks like the bug is triggered by rnd(x, d) when x < 0. I guess rnd is supposed to round x to d digits. To handle x < 0, try this:
rnd(x, d) := if x < 0 then -rnd1(-x, d) else rnd1(x, d);
rnd1(x, d) := (... put the present definition of rnd here ...);
When I do that, the loop runs to completion and Ibwd is a list of values, but I don't know what values to expect.
I've been writing some scripts for a game; the scripts are written in Lua. One of the requirements the game has is that the Update method in your Lua script (which is called every frame) may take no longer than about 2-3 milliseconds to run; if it does, the game just hangs.
I solved this problem with coroutines: all I have to do is call Multitasking.RunTask(SomeFunction), and the task runs as a coroutine. I then have to scatter Multitasking.Yield() calls throughout my code; each one checks how long the task has been running, and if it's over 2 ms it pauses the task and resumes it next frame. This is OK, except that I have to scatter Multitasking.Yield() everywhere throughout my code, and it's a real mess.
Ideally, my code would automatically yield when it's been running too long. So, is it possible to take a Lua function as an argument and then execute it line by line (maybe by interpreting Lua inside Lua, which I know is possible, but I doubt it is if all you have is a function pointer)? That way I could automatically check the runtime and yield if necessary between every single line.
EDIT: To be clear, I'm modding a game, which means I only have access to Lua. No C++ tricks allowed.
Check lua_sethook in the Debug Interface.
I haven't actually tried this solution myself yet, so I don't know for sure how well it will work.
debug.sethook(coroutine.yield,"",10000);
I picked the number arbitrarily; it will have to be tweaked until it's roughly the time limit you need. Keep in mind that time spent in C functions etc will not increase the instruction count value, so a loop will reach this limit far faster than calls to long-running C functions. It may be viable to set a far lower value and instead provide a function that sees how much os.clock() or similar has increased.
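A minimal sketch of that last idea, assuming (as the one-liner above does) that the environment allows a hook to yield the running coroutine; TIME_BUDGET and the instruction count of 1000 are illustrative values to tune:

local TIME_BUDGET = 0.002            -- roughly 2 ms per frame
local frame_start = os.clock()

local function check_budget()
  if os.clock() - frame_start >= TIME_BUDGET then
    coroutine.yield()                -- pause here; the game resumes the task next frame
    frame_start = os.clock()         -- restart the budget after being resumed
  end
end

-- run the check every 1000 VM instructions instead of hand-placing Multitasking.Yield()
debug.sethook(check_budget, "", 1000)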