Torch: Concatenating tensors of different dimensions - lua

I have an x_at_i = torch.Tensor(1, i) that grows at every iteration, where i = 0 to n. I would like to concatenate all of these tensors of different sizes into a matrix and fill the remaining cells with zeroes. What is the most idiomatic way to do this? For example:
x_at_1 = 1
x_at_2 = 1 2
x_at_3 = 1 2 3
x_at_4 = 1 2 3 4
X = torch.cat(x_at_1, x_at_2, x_at_3, x_at_4)
X = [ 1 0 0 0
1 2 0 0
1 2 3 0
1 2 3 4 ]

If you know n, and assuming you have easy access to each x_at_i at every iteration, I would try something like:
X = torch.Tensor(n, n):zero()
for i = 1, n do
   -- copy the length-i tensor into the first i cells of row i
   X[i]:narrow(1, 1, i):copy(x_at[i])
end
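A self-contained sketch of this idea, assuming the growing tensors are gathered in a Lua table named x_at (the table and the torch.range values are only stand-ins for your real data):
require 'torch'

n = 4
x_at = {}
for i = 1, n do
   x_at[i] = torch.range(1, i)  -- stands in for the real x_at_i of size i
end

X = torch.Tensor(n, n):zero()
for i = 1, n do
   X[i]:narrow(1, 1, i):copy(x_at[i])
end
print(X)  -- the zero-padded lower-triangular matrix from the question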

Related

How to stitch two images that have different absolute coordinates?

For example, stitch
first image
1 1 1
1 1 1
1 1 1
second image
2 2 2 2
2 2 2 2
2 2 2 2
and what I want is
0 0 0 2 2 2 2
1 1 1 2 2 2 2
1 1 1 2 2 2 2
1 1 1 0 0 0 0
or
1 1 1 0 0 0 0
1 1 1 2 2 2 2
1 1 1 2 2 2 2
0 0 0 2 2 2 2
In Python, that is easy to do, e.g.:
temp_panorama = np.zeros((1's height+abs(2's upper part length), 1's width+2's width))
temp_panorama[(2's upper part length) : 1's height, 0 : 1's width] = img1[:]
temp_panorama[0 : 2's height, 1's width +1 :] = img2[:, :]
but how can I implement the same thing with OpenCV in C++?
use subimages:
// ROIs where the first and second images will be placed
cv::Rect firstROI = cv::Rect(x1, y1, first.cols, first.rows);
cv::Rect secondROI = cv::Rect(x2, y2, second.cols, second.rows);
// create an image big enough to hold the result
cv::Mat canvas = cv::Mat::zeros(cv::Size(std::max(x1+first.cols, x2+second.cols), std::max(y1+first.rows, y2+second.rows)), first.type());
// use subimages:
first.copyTo(canvas(firstROI));
second.copyTo(canvas(secondROI));
In your example:
x1 = 0,
y1 = 1,
x2 = 3,
y2 = 0
first.cols == 3
first.rows == 3
second.cols == 4
second.rows == 3
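Since this page is otherwise about Torch, here is the same canvas-plus-ROI idea sketched in Lua with narrow, plugged with the example coordinates above (illustrative only, not part of the original answer):
require 'torch'

first  = torch.Tensor(3, 3):fill(1)
second = torch.Tensor(3, 4):fill(2)
x1, y1 = 0, 1   -- top-left corner of the first image (0-based, as in the answer)
x2, y2 = 3, 0   -- top-left corner of the second image
canvas = torch.Tensor(math.max(y1 + first:size(1), y2 + second:size(1)),
                      math.max(x1 + first:size(2), x2 + second:size(2))):zero()
canvas:narrow(1, y1 + 1, first:size(1)):narrow(2, x1 + 1, first:size(2)):copy(first)
canvas:narrow(1, y2 + 1, second:size(1)):narrow(2, x2 + 1, second:size(2)):copy(second)
print(canvas)  -- reproduces the first desired layout from the question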

Count the number of contiguous subarrays with maximum element as x

Given an array of numbers, print the number of subarrays whose maximum element is x. For example:
Input:
arr = [1, 2, 3, 3, 1]
x = [3, 2, 1, 4]
Output: 11, 2, 2, 0
Subarrays for x = 1:
1
1
Subarrays for x = 2:
2
1 2
Subarrays for x = 3:
1 2 3
1 2 3 3
1 2 3 3 1
2 3
2 3 3
2 3 3 1
3
3 3
3 3 1
3
3 1
There are no subarrays with maximum element 4, so for x = 4 we have to print 0.
My first attempt was to generate all subarrays and count them. The time complexity of this approach is very bad (O(n^3)).
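For reference, a minimal Lua sketch of that brute-force idea (every subarray is rescanned for its maximum, hence the cubic complexity; faster approaches are not shown here):
-- count subarrays of arr whose maximum equals x (brute force, O(n^3))
function countSubarraysWithMax(arr, x)
   local n, count = #arr, 0
   for i = 1, n do
      for j = i, n do
         local m = arr[i]
         for k = i + 1, j do
            if arr[k] > m then m = arr[k] end
         end
         if m == x then count = count + 1 end
      end
   end
   return count
end

print(countSubarraysWithMax({1, 2, 3, 3, 1}, 3))  -- 11, as in the example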

Is there a way to use machine learning to classify discrete, infinite-scale data?

The data looks like this:
x y
7773 0
9805 4
7145 0
7645 1
2529 1
4814 2
6027 2
7499 2
3367 1
8861 5
9776 2
8009 5
3844 2
1218 2
1120 1
4553 0
3017 1
2582 2
1691 2
5342 0
...
The real function f(x) is the following (it returns the "circle count" of a decimal integer):
# digit:   0  1  2  3  4  5  6  7  8  9
_f_map = [1, 0, 0, 0, 0, 0, 1, 0, 2, 1]

def f(x):
    x = int(x)
    assert x >= 0
    if x == 0:
        return 1
    r = 0
    while x:
        r += _f_map[x % 10]
        x /= 10  # Python 2 integer division
    return r
The training data and test data can be produced randomly:
import random

data = []
target = []
for i in xrange(3000):
    x = random.randint(0, 999999)  # hardcode a scale
    data.append([x])
    target.append(f(x))
The real function is discrete and has an infinite scale.
Is there a way, or a model, that can classify this data?
I tried an SVM (support vector machine) and got a 20% accuracy rate.
This looks like a typical use case for sequential models. You can easily train an LSTM or another recurrent neural network to do this by treating your numbers as sequences of digits fed to the network. At that point it only has to learn a sum operation and a simple mapping (your _f_map).
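To make that representation concrete, here is a small sketch in Lua/Torch (matching the rest of this page; the recurrent model itself is omitted) that expands a number into a fixed-length digit sequence, which is what would be fed to such a network:
require 'torch'

-- hypothetical preprocessing: represent each number as a length-6 digit sequence
-- (6 digits because the data above is drawn from 0..999999)
function digitsOf(x, len)
   local t = torch.Tensor(len):zero()
   for i = len, 1, -1 do
      t[i] = x % 10
      x = math.floor(x / 10)
   end
   return t
end

print(digitsOf(7773, 6))  -- 0 0 7 7 7 3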

Batch processing in Torch with ClassNLLCriterion

I'm trying to implement a simple NN in Torch to learn more about it. I created a very simple dataset: binary numbers from 0 to 15, and my goal is to classify the numbers into two classes: class 1 is numbers 0-3 and 12-15, class 2 is the remaining ones. The following code is what I have now (I have only removed the data loading routine):
require 'torch'
require 'nn'
data = torch.Tensor( 16, 4 )
class = torch.Tensor( 16, 1 )
network = nn.Sequential()
network:add( nn.Linear( 4, 8 ) )
network:add( nn.ReLU() )
network:add( nn.Linear( 8, 2 ) )
network:add( nn.LogSoftMax() )
criterion = nn.ClassNLLCriterion()
for i = 1, 300 do
   prediction = network:forward( data )
   --print( "prediction: " .. tostring( prediction ) )
   --print( "class: " .. tostring( class ) )
   loss = criterion:forward( prediction, class )
   network:zeroGradParameters()
   grad = criterion:backward( prediction, class )
   network:backward( data, grad )
   network:updateParameters( 0.1 )
end
This is what the data and class tensors look like:
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
[torch.DoubleTensor of size 16x4]
2
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
[torch.DoubleTensor of size 16x1]
Which is what I expect it to be. However, when running this code, I get the following error on the line loss = criterion:forward( prediction, class ):
torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:69: attempt to
perform arithmetic on a nil value
When I modify the training routine like this (processing a single data point at a time instead of all 16 in a batch), it works and the network successfully learns to recognize the two classes:
for k = 1, 300 do
   for i = 1, 16 do
      prediction = network:forward( data[i] )
      --print( "prediction: " .. tostring( prediction ) )
      --print( "class: " .. tostring( class ) )
      loss = criterion:forward( prediction, class[i] )
      network:zeroGradParameters()
      grad = criterion:backward( prediction, class[i] )
      network:backward( data[i], grad )
      network:updateParameters( 0.1 )
   end
end
I'm not sure what might be wrong with the "batch processing" I'm trying to do. A brief look at ClassNLLCriterion didn't help; it seems I'm giving it the expected input (see below), but it still fails. The input it receives (the prediction and class tensors) looks like this:
-0.9008 -0.5213
-0.8591 -0.5508
-0.9107 -0.5146
-0.8002 -0.5965
-0.9244 -0.5055
-0.8581 -0.5516
-0.9174 -0.5101
-0.8040 -0.5934
-0.9509 -0.4884
-0.8409 -0.5644
-0.8922 -0.5272
-0.7737 -0.6186
-0.9422 -0.4939
-0.8405 -0.5648
-0.9012 -0.5210
-0.7820 -0.6116
[torch.DoubleTensor of size 16x2]
2
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
[torch.DoubleTensor of size 16x1]
Can someone help me out here? Thanks.
Experience has shown that nn.ClassNLLCriterion expects the target to be a 1D tensor of size batch_size or a scalar. Your class is a 2D tensor (batch_size x 1), but class[i] is 1D, which is why your non-batch version works.
So, this will solve your problem:
class = class:view(-1)
Alternatively, you can replace
network:add( nn.LogSoftMax() )
criterion = nn.ClassNLLCriterion()
with the equivalent:
criterion = nn.CrossEntropyCriterion()
The interesting thing is that nn.CrossEntropyCriterion is also able to take a 2D target tensor, so why isn't nn.ClassNLLCriterion?
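A quick way to see the shape issue in isolation (a small sketch, not part of the original code; behaviour as observed with the nn version used in the question):
require 'nn'

criterion = nn.ClassNLLCriterion()
prediction = nn.LogSoftMax():forward(torch.randn(16, 2))  -- batch of 16 log-probabilities
class = torch.Tensor(16, 1):fill(1)                       -- 2D target, as in the question
-- criterion:forward(prediction, class)                   -- fails as described above
print(criterion:forward(prediction, class:view(-1)))      -- 1D target of size 16: works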

Torch tensors swapping dimensions

I came across these two lines of code (back to back) in a Torch project:
im4[{1,{},{}}] = im3[{3,{},{}}]
im4[{3,{},{}}] = im3[{1,{},{}}]
What do these two lines do? I assumed they did some sort of swapping.
This is covered under indexing in the Torch Tensor documentation.
Indexing using the empty table {} is shorthand for all indices in that dimension. Below is a demo which uses {} to copy an entire row from one matrix to another:
> a = torch.Tensor(3, 3):fill(0)
> a
0 0 0
0 0 0
0 0 0
> b = torch.Tensor(3, 3)
> for i=1,3 do for j=1,3 do b[i][j] = (i - 1) * 3 + j end end
> b
1 2 3
4 5 6
7 8 9
> a[{1, {}}] = b[{3, {}}]
> a
7 8 9
0 0 0
0 0 0
This assignment is equivalent to: a[1] = b[3].
Your example is similar:
im4[{1,{},{}}] = im3[{3,{},{}}]
im4[{3,{},{}}] = im3[{1,{},{}}]
which is more clearly stated as:
im4[1] = im3[3]
im4[3] = im3[1]
The first line assigns the values from im3's third slice along the first dimension (a 2D sub-matrix, e.g. an image channel) to im4's first slice, and the second line assigns im3's first slice to im4's third slice.
Note that this is not a swap, as im3 is never written and im4 is never read from.
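If an actual swap were the goal (a small sketch, assuming a single 3-channel tensor im rather than two separate tensors), the slice being overwritten first has to be cloned, because plain indexing only returns a view:
require 'torch'

im = torch.randn(3, 4, 4)      -- hypothetical 3-channel image
local tmp = im[1]:clone()      -- clone() is required: im[1] alone is only a view
im[1]:copy(im[3])
im[3]:copy(tmp)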

Resources