Lua - local required every time a local variable is assigned?

I've found a strange piece of code in the Lua documentation:
function trim8(s)
  local i1,i2 = find(s,'^%s*')
  if i2 >= i1 then s = sub(s,i2+1) end
  local i1,i2 = find(s,'%s*$')
  if i2 >= i1 then s = sub(s,1,i1-1) end
  return s
end
Why is local used once again with i1 and i2? Aren't they already declared as local variables? Do you have to repeat the local keyword every time you want to assign them?

No, it is not necessary to use local over and over. The variables i1 and i2 will be local in the scope of the function because of the first line itself.
While it should not be done, there is nothing wrong with declaring the same variable names again. Each new declaration simply gets its own stack slot and shadows the older variable.
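For instance, here is a minimal sketch (not from the original answer) showing that a second local really creates a distinct variable rather than reusing the first one:
local x = 10
local getFirst = function() return x end  -- closes over the first x
local x = 20                              -- a new variable that shadows the old one
print(x, getFirst())                      --> 20    10 (the first x still exists)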
The following is the instruction output for a simple function:
function t()
  local i = 2
  local i = 3
end
t()
function <temp.lua:1,4> (3 instructions, 12 bytes at 00658990)
0 params, 2 slots, 0 upvalues, 2 locals, 2 constants, 0 functions
1 [2] LOADK 0 -1 ; 2
2 [3] LOADK 1 -2 ; 3
3 [4] RETURN 0 1
and after changing the second declaration local i = 3 to just i = 3:
function t()
  local i = 2
  i = 3
end
t()
function <temp.lua:1,4> (3 instructions, 12 bytes at 00478990)
0 params, 2 slots, 0 upvalues, 1 local, 2 constants, 0 functions
1 [2] LOADK 0 -1 ; 2
2 [3] LOADK 0 -2 ; 3
3 [4] RETURN 0 1
Notice the difference at the second instruction.
Apart from that, the function is quite inefficient. You can instead use the following:
function Trim(sInput)
  return sInput:match "^%s*(.-)%s*$"
end
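A quick sanity check of that one-liner (the example string is chosen here purely for illustration):
print("[" .. Trim("   hello world   ") .. "]")   --> [hello world]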

Technically, using local in the second declaration and omitting it are not equivalent: a second local declares another, new variable. In your example code, however, they behave basically the same. Compare these simpler snippets:
local a = 0
local a = 1
and
local a = 0
a = 1
Running luac -p -l on them outputs the following:
0+ params, 2 slots, 0 upvalues, 2 locals, 2 constants, 0 functions
1 [1] LOADK 0 -1 ; 0
2 [2] LOADK 1 -2 ; 1
3 [2] RETURN 0 1
and
0+ params, 2 slots, 0 upvalues, 1 local, 2 constants, 0 functions
1 [1] LOADK 0 -1 ; 0
2 [2] LOADK 0 -2 ; 1
3 [2] RETURN 0 1

Related

Why does the ChunkSpy .function part have four parameters?

When using ChunkSpy, I found one thing that confuses me. Consider the following example:
>a = 1
; source chunk: (interactive mode)
; x86 standard (32-bit, little endian, doubles)
; function [0] definition (level 1)
; 0 upvalues, 0 params, 2 stacks
.function 0 0 2 2
.const "a" ; 0
.const 1 ; 1
[1] loadk 0 1 ; 1
[2] setglobal 0 0 ; a
[3] return 0 1
; end of function
Since the comment says 0 upvalues, 0 params, 2 stacks, why are there four parameters in .function 0 0 2 2?
In another example, we can see the following:
>local a; function b() a = 1 return a end
; source chunk: (interactive mode)
; x86 standard (32-bit, little endian, doubles)
; function [0] definition (level 1)
; 0 upvalues, 0 params, 2 stacks
.function 0 0 2 2
.local "a" ; 0
.const "b" ; 0
; function [0] definition (level 2)
; 1 upvalues, 0 params, 2 stacks
.function 1 0 0 2
.upvalue "a" ; 0
.const 1 ; 0
[1] loadk 0 0 ; 1
[2] setupval 0 0 ; a
[3] getupval 0 0 ; a
[4] return 0 2
[5] return 0 1
; end of function
[1] closure 1 0 ; 1 upvalues
[2] move 0 0
[3] setglobal 1 0 ; b
[4] return 0 1
; end of function
So I guess the first parameter is the number of upvalues, but what is the extra parameter for?
I got the answer with the help of Egor Skriptunoff in the comments.
The four parameters of the .function directive mean the following (see the sketch after this list):
the number of upvalues
the number of named parameters
the vararg flag: 1 = VARARG_HASARG, 2 = VARARG_ISVARARG, 4 = VARARG_NEEDSARG; it is always 0 for a normal (non-vararg) function and 2 for the main chunk
the maximum stack size (number of stack slots)
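As a cross-check (not part of the original answer, and requiring Lua 5.2 or later, whereas ChunkSpy above reads Lua 5.1 bytecode), debug.getinfo exposes three of these four values directly:
local function f(x, y, ...) return x + y end
local info = debug.getinfo(f, "u")              -- 'u' fills nups, nparams, isvararg
print(info.nups, info.nparams, info.isvararg)   --> 0   2   true
-- the fourth .function value (maximum stack size) is not exposed by debug.getinfo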

Why is t[1] equal to 1 for local t = {1, [1] = "a", [2] = "b"}?

Test Code:
local t = {1, [1] = "a", [2] = "b"}
print("t[1]: ", t[1])
for _, v in pairs(t) do
  print(v)
end
Output:
t[1]: 1
1
b
The order in which fields are set in a table constructor is not defined when there are duplicate keys.
Currently, the compiler batches list entries (50 list entries per batch).
The bytecode for your constructor can be seen by running luac -l on your script:
1 [1] NEWTABLE 0 1 2
2 [1] LOADK 1 -1 ; 1
3 [1] SETTABLE 0 -1 -2 ; 1 "a"
4 [1] SETTABLE 0 -3 -4 ; 2 "b"
5 [1] SETLIST 0 1 1 ; 1
Note the SETLIST at the end. For {10,20,30, [1] = "a", [2] = "b"}, the bytecode is:
1 [1] NEWTABLE 0 3 2
2 [1] LOADK 1 -1 ; 10
3 [1] LOADK 2 -2 ; 20
4 [1] LOADK 3 -3 ; 30
5 [1] SETTABLE 0 -4 -5 ; 1 "a"
6 [1] SETTABLE 0 -6 -7 ; 2 "b"
7 [1] SETLIST 0 3 1 ; 1
If the constructor began with, say, 60 list entries, the first 50 would be flushed by a SETLIST before the [1] = "a" assignment executes, so the final value of t[1] would be "a".
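A hedged way to check this yourself (assumes Lua 5.1-style loadstring; use load on 5.2+) is to generate a constructor with 60 list entries and inspect t[1]:
local src = "return {" .. string.rep("0,", 60) .. "[1] = 'a'}"
local t = loadstring(src)()   -- compiles a constructor with 60 list entries plus [1] = 'a'
print(t[1])                   -- expected: a, since the first 50 entries are flushed before [1] = 'a' runs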

Torch: Concatenating tensors of different dimensions

I have an x_at_i = torch.Tensor(1,i) that grows at every iteration, where i = 0 to n. I would like to concatenate all these tensors of different sizes into a matrix and fill the remaining cells with zeroes. What is the most idiomatic way to do this? For example:
x_at_1 = 1
x_at_2 = 1 2
x_at_3 = 1 2 3
x_at_4 = 1 2 3 4
X = torch.cat(x_at_1, x_at_2, x_at_3, x_at_4)
X = [ 1 0 0 0
1 2 0 0
1 2 3 0
1 2 3 4 ]
If you know n, and assuming you can easily access each x_at_i at every iteration, I would try something like:
X = torch.Tensor(n, n):zero()
for i = 1, n do
  X[i]:narrow(1, 1, i):copy(x_at[i])   -- x_at is assumed to be a table holding x_at_1 .. x_at_n
end
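A self-contained sketch of the same idea (the table name x_at and the use of torch.range are assumptions for illustration, not from the original answer):
require 'torch'
local n = 4
local x_at = {}
for i = 1, n do
  x_at[i] = torch.range(1, i)          -- stands in for x_at_i, e.g. x_at[3] = [1 2 3]
end
local X = torch.Tensor(n, n):zero()    -- pre-fill with zeroes
for i = 1, n do
  X[i]:narrow(1, 1, i):copy(x_at[i])   -- write the first i entries of row i
end
print(X)
--  1  0  0  0
--  1  2  0  0
--  1  2  3  0
--  1  2  3  4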

Batch processing in Torch with ClassNLLCriterion

I'm trying to implement a simple NN in Torch to learn more about it. I created a very simple dataset: binary numbers from 0 to 15, and my goal is to classify the numbers into two classes - class 1 are numbers 0-3 and 12-15, class 2 are the remaining ones. The following code is what I have now (I have removed only the data loading routine):
require 'torch'
require 'nn'
data = torch.Tensor( 16, 4 )
class = torch.Tensor( 16, 1 )
network = nn.Sequential()
network:add( nn.Linear( 4, 8 ) )
network:add( nn.ReLU() )
network:add( nn.Linear( 8, 2 ) )
network:add( nn.LogSoftMax() )
criterion = nn.ClassNLLCriterion()
for i = 1, 300 do
  prediction = network:forward( data )
  --print( "prediction: " .. tostring( prediction ) )
  --print( "class: " .. tostring( class ) )
  loss = criterion:forward( prediction, class )
  network:zeroGradParameters()
  grad = criterion:backward( prediction, class )
  network:backward( data, grad )
  network:updateParameters( 0.1 )
end
This is what the data and class Tensors look like:
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
[torch.DoubleTensor of size 16x4]
2
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
[torch.DoubleTensor of size 16x1]
This is what I expect it to be. However, when running this code, I get the following error on the line loss = criterion:forward( prediction, class ):
torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:69: attempt to
perform arithmetic on a nil value
When I modify the training routine like this (processing a single data point at a time instead of all 16 in a batch), it works and the network successfully learns to recognize the two classes:
for k = 1, 300 do
  for i = 1, 16 do
    prediction = network:forward( data[i] )
    --print( "prediction: " .. tostring( prediction ) )
    --print( "class: " .. tostring( class ) )
    loss = criterion:forward( prediction, class[i] )
    network:zeroGradParameters()
    grad = criterion:backward( prediction, class[i] )
    network:backward( data[i], grad )
    network:updateParameters( 0.1 )
  end
end
I'm not sure what might be wrong with the "batch processing" I'm trying to do. A brief look at ClassNLLCriterion didn't help; it seems I'm giving it the expected input (see below), but it still fails. The input it receives (the prediction and class Tensors) looks like this:
-0.9008 -0.5213
-0.8591 -0.5508
-0.9107 -0.5146
-0.8002 -0.5965
-0.9244 -0.5055
-0.8581 -0.5516
-0.9174 -0.5101
-0.8040 -0.5934
-0.9509 -0.4884
-0.8409 -0.5644
-0.8922 -0.5272
-0.7737 -0.6186
-0.9422 -0.4939
-0.8405 -0.5648
-0.9012 -0.5210
-0.7820 -0.6116
[torch.DoubleTensor of size 16x2]
2
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
[torch.DoubleTensor of size 16x1]
Can someone help me out here? Thanks.
Experience has shown that nn.ClassNLLCriterion expects the target to be a 1D tensor of size batch_size or a scalar. Your class is a 2D tensor (batch_size x 1), but class[i] is 1D, which is why your non-batch version works.
So, this will solve your problem:
class = class:view(-1)
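As a small illustration (shapes only; the fill value here is arbitrary), view(-1) flattens the 16x1 column into a 1D tensor over the same storage, without copying data:
local class = torch.Tensor(16, 1):fill(2)
print(class:dim(), class:size(1))   --> 2   16   (a 16x1 column)
local flat = class:view(-1)
print(flat:dim(), flat:size(1))     --> 1   16   (a 1D tensor of size 16)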
Alternatively, you can replace
network:add( nn.LogSoftMax() )
criterion = nn.ClassNLLCriterion()
with the equivalent:
criterion = nn.CrossEntropyCriterion()
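In other words, the LogSoftMax layer is dropped from the model and the criterion applies it internally; a sketch of the modified setup from the question (nn.CrossEntropyCriterion combines nn.LogSoftMax and nn.ClassNLLCriterion):
network = nn.Sequential()
network:add( nn.Linear( 4, 8 ) )
network:add( nn.ReLU() )
network:add( nn.Linear( 8, 2 ) )        -- no LogSoftMax layer at the end
criterion = nn.CrossEntropyCriterion()  -- applies LogSoftMax + NLL internally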
The interesting thing is that nn.CrossEntropyCriterion is also able to take a 2D tensor. Why is nn.ClassNLLCriterion not?

Torch tensors swapping dimensions

I came across these two lines (back-to-back) of code in a torch project:
im4[{1,{},{}}] = im3[{3,{},{}}]
im4[{3,{},{}}] = im3[{1,{},{}}]
What do these two lines do? I assumed they did some sort of swapping.
This is covered under indexing in the Torch Tensor documentation.
Indexing using the empty table {} is shorthand for all indices in that dimension. Below is a demo which uses {} to copy an entire row from one matrix to another:
> a = torch.Tensor(3, 3):fill(0)
> a
0 0 0
0 0 0
0 0 0
> b = torch.Tensor(3, 3)
> for i=1,3 do for j=1,3 do b[i][j] = (i - 1) * 3 + j end end
> b
1 2 3
4 5 6
7 8 9
> a[{1, {}}] = b[{3, {}}]
> a
7 8 9
0 0 0
0 0 0
This assignment is equivalent to: a[1] = b[3].
Your example is similar:
im4[{1,{},{}}] = im3[{3,{},{}}]
im4[{3,{},{}}] = im3[{1,{},{}}]
which is more clearly stated as:
im4[1] = im3[3]
im4[3] = im3[1]
The first line assigns the values from im3's third slice along the first dimension (a 2D sub-matrix) to im4's first slice, and the second line assigns the first slice of im3 to the third slice of im4.
Note that this is not a swap, as im3 is never written and im4 is never read from.
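If an actual swap were wanted, a temporary copy would be needed; a minimal sketch (using clone so the temporary does not alias im4's storage):
local tmp = im4[1]:clone()   -- detached copy of the first slice
im4[1] = im4[3]              -- tensor assignment copies values element-wise
im4[3] = tmp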
