Error when saving images to an HDF5 file in Lua

When trying to save a bunch of images to an HDF5 file in Lua, I get the following error:
/home/ubuntu/torch/install/bin/luajit: /home/ubuntu/torch/install/share/lua/5.1/hdf5/group.lua:97: attempt to call method 'adjustForData' (a nil value)
stack traceback:
/home/ubuntu/torch/install/share/lua/5.1/hdf5/group.lua:97: in function '_writeData'
/home/ubuntu/torch/install/share/lua/5.1/hdf5/group.lua:307: in function '_write_or_append'
/home/ubuntu/torch/install/share/lua/5.1/hdf5/group.lua:270: in function </home/ubuntu/torch/install/share/lua/5.1/hdf5/group.lua:269>
/home/ubuntu/torch/install/share/lua/5.1/hdf5/file.lua:84: in function '_write_or_append'
/home/ubuntu/torch/install/share/lua/5.1/hdf5/file.lua:58: in function 'write'
This is where the error occurs:
for i = 1, #input_images_caffe do
    newFile:write('images', input_images_caffe[i], 'w')
end
The images inside input_images_caffe come from:
local input_size = math.ceil(params.input_scale * params.image_size)
local input_image_list = params.input_image:split(',')
local input_images_caffe = {}
local img_caffe
for _, img_path in ipairs(input_image_list) do
    local img = image.load(img_path, 3)
    img = image.scale(img, input_size, 'bilinear')
    img_caffe = preprocess(img):float()
    table.insert(input_images_caffe, img_caffe)
end
This function is used to preprocess the images:
function preprocess(img)
    -- Caffe-style preprocessing: swap RGB to BGR, scale by 256, subtract the mean pixel
    local mean_pixel = torch.DoubleTensor({103.939, 116.779, 123.68})
    local perm = torch.LongTensor{3, 2, 1}
    img = img:index(1, perm):mul(256.0)
    mean_pixel = mean_pixel:view(3, 1, 1):expandAs(img)
    img:add(-1, mean_pixel)
    return img
end
Some examples of what input_images_caffe could contain:
{
  1 : FloatTensor - size: 3x405x512
  2 : FloatTensor - size: 3x512x393
}
Or:
{
  1 : FloatTensor - size: 3x405x512
}
The HDF5 file is created with:
local newFile = hdf5.open(params.output_hdf5, 'w')
And I am using the torch-hdf5 library:
https://github.com/deepmind/torch-hdf5
I am not sure what I am doing wrong here.

newFile:write('images', input_images_caffe[i], 'w')
Try newFile:write('images', input_images_caffe[i]) instead. The third parameter should be an (optional) options object, but you are passing a string, which doesn't have an adjustForData method, hence the error you are getting.
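For completeness, a minimal sketch of the corrected loop. The per-index dataset names are my assumption here, since tensors of different sizes cannot all be written to the single path 'images':
local newFile = hdf5.open(params.output_hdf5, 'w')
for i = 1, #input_images_caffe do
    -- write each image under its own dataset name: images_1, images_2, ...
    newFile:write('images_' .. i, input_images_caffe[i])
end
newFile:close()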

Related

Lua function getting different datatype when given a number

I have this problem in a Lua class. Here is the code of my class:
local Temp = {}
function Temp:new(tmp)
    local self = {temp = -273.15}
    if tmp > self.temp then
        self.temp = tmp
    end
    local setC = function(usrTmp)
        if usrTmp < -273.15 then
            self.temp = -273.15
        else
            self.temp = usrTmp
        end
    end
    local getC = function()
        return self.temp
    end
    local getF = function()
        return self.temp * 1.8 + 32
    end
    local getK = function()
        return self.temp + 273.15
    end
    return {
        setC = setC,
        getC = getC,
        getF = getF,
        getK = getK
    }
end
return Temp
And here is my main method:
temp = require "tempClass"
io.write("Please enter the initial temperature: ")
usrTemp = io.read("*n")
myTemp = temp:new(usrTemp)
print("The current temperature in Celsius is: ".. myTemp:getC())
print("The current temperature in Fahrenheit is: " .. myTemp:getF())
print("The current temperature in Kelvin is: " .. myTemp:getK())
io.write("Please enter new temperature: ")
changeTemp = io.read("*n")
myTemp:setC(changeTemp)
print("The current temperature in Celsius is: " .. myTemp:getC())
print("The current temperature in Fahrenheit is: " .. myTemp:getF())
print("The current temperature in Kelvin is: " .. myTemp:getK())
io.write("Please enter new temperature: ")
My problem is the if usrTmp < -273.15 then line in the setC function. I'm getting this error message:
lua: ./tempClass.lua:10: attempt to compare table with number
stack traceback:
./tempClass.lua:10: in function 'setC'
[string "<eval>"]:14: in main chunk
I know, however, that usrTmp is a number. If I call type on the variable before the function, I get type number. In the function, the type is table. Why is usrTmp a table in the function? How can I fix this? Thanks!
You need to be explicit about the self parameter when defining functions that are meant to be used with it. The function setC should have an additional such parameter:
local setC = function(self, usrTmp)
    -- as before...
end
Recall that these two invocations are identical:
myTemp:setC(changeTemp)
myTemp.setC(myTemp, changeTemp)
That should explain the actual error message you received.
In addition, you need to turn Temp.new into an ordinary (not self-parameter-enhanced) function. It's not connected to an instance yet; it is supposed to return one. And finally, the state variable temp must be included in the table that Temp.new returns:
function Temp.new(tmp)
    -- ^ note the dot instead of the colon
    -- function body as before, but all functions now need the self parameter, e.g.:
    local getC = function(self)
        return self.temp
    end
    return {
        temp = self.temp,
        setC = setC,
        getC = getC,
        getF = getF,
        getK = getK
    }
end
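Putting both changes together, a complete corrected version of the class could look like this (a sketch that just applies the advice above to the code from the question):
local Temp = {}

function Temp.new(tmp)
    local self = {temp = -273.15}
    if tmp > self.temp then
        self.temp = tmp
    end
    local setC = function(self, usrTmp)
        if usrTmp < -273.15 then
            self.temp = -273.15
        else
            self.temp = usrTmp
        end
    end
    local getC = function(self)
        return self.temp
    end
    local getF = function(self)
        return self.temp * 1.8 + 32
    end
    local getK = function(self)
        return self.temp + 273.15
    end
    -- the state variable temp travels with the returned instance
    return {
        temp = self.temp,
        setC = setC,
        getC = getC,
        getF = getF,
        getK = getK
    }
end

return Temp
Note that the main chunk then has to construct instances with temp.new(usrTemp) (dot call) instead of temp:new(usrTemp); the colon calls such as myTemp:getC() keep working, because they pass the returned instance as self.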

Unable to do parameter sharing in Torch between [sub]networks

I am trying to share the parameters between the encoder/decoder sub-networks of one architecture and the encoder/decoder of a different architecture. This is necessary for my problem, since at test time it takes a lot of computation (and time) to do a forward pass on the original architecture and then extract the decoder results. However, I noticed that although I explicitly asked for parameter sharing when calling clone(), the parameters are not shared, and each architecture has its own parameters while training.
I show the difference between the results of the two architectures via some print() statements, by forward-propagating some random vectors through the decoders and encoders of both architectures (you can also compare their weights directly).
So I wonder, can anyone help me find out what I'm doing wrong when sharing the parameters?
Below I post a simplified version of my code:
require 'nn'
require 'nngraph'
require 'cutorch'
require 'cunn'
require 'optim'
input = nn.Identity()()
encoder = nn.Sequential():add(nn.Linear(100, 20)):add(nn.ReLU(true)):add(nn.Linear(20, 10))
decoder = nn.Sequential():add(nn.Linear(10, 20)):add(nn.ReLU(true)):add(nn.Linear(20, 100))
code = encoder(input)
reconstruction = decoder(code)
outsideCode = nn.Identity()()
decoderCloned= decoder:clone('weight', 'bias', 'gradWeight', 'gradBias')
outsideReconstruction = decoderCloned(nn.JoinTable(1)({code, outsideCode}))
dumbNet = nn.Sequential():add(nn.Linear(100, 10))
codeRecon = dumbNet(outsideReconstruction)
input2 = nn.Identity()()
encoderTestTime = encoder:clone('weight', 'bias', 'gradWeight', 'gradBias')
decoderTestTime = decoder:clone('weight', 'bias', 'gradWeight', 'gradBias')
codeTest = encoderTestTime(input2)
reconTest = decoderTestTime(codeTest)
gMod = nn.gModule({input, outsideCode}, {reconstruction, codeRecon})
gModTest = nn.gModule({input2}, {reconTest})
criterion1 = nn.BCECriterion()
criterion2 = nn.MSECriterion()
-- Okay, the module has been created. Now it's time to do some other stuff
params, gParams = gMod:getParameters()
numParams = params:nElement()
memReqForParams = numParams * 5 * 4 / 1024 / 1024 -- Convert to MBs
-- If enough memory on GPU, move stuff to the GPU
if memReqForParams <= 1000 then
    gMod = gMod:cuda()
    gModTest = gModTest:cuda()
    criterion1 = criterion1:cuda()
    criterion2 = criterion2:cuda()
    params, gParams = gMod:getParameters()
end
-- Data
Data = torch.rand(200, 100):cuda()
Data[Data:gt(0.5)] = 1
Data[Data:lt(0.5)] = 0
fakeCodes = torch.rand(400, 10):cuda()
config = {learningRate = 0.001}
state = {}
-- Start training
print ("\nEncoders before training: \n\tgMod's Encoder: " .. gMod:get(2):forward(torch.ones(1, 100):cuda()):sum() .. "\n\tgModTest's Encoder: " .. gModTest:get(2):forward(torch.ones(1, 100):cuda()):sum())
print ("\nDecoders before training: \n\tgMod's Decoder: " .. gMod:get(3):forward(torch.ones(1, 10):cuda()):sum() .. "\n\tgModTest's Decoder: " .. gModTest:get(3):forward(torch.ones(1, 10):cuda()):sum())
gMod:training()
for i = 1, Data:size(1) do
    local opfunc = function(x)
        if x ~= params then
            params:copy(x)
        end
        gMod:zeroGradParameters()
        recon, outsideRecon = unpack(gMod:forward({Data[{{i}}], fakeCodes[{{i}}]}))
        err = criterion1:forward(recon, Data[{{i}}])
        df_dw = criterion1:backward(recon, Data[{{i}}])
        errFake = criterion2:forward(outsideRecon, fakeCodes[{{i*2-1, i*2}}])
        df_dwFake = criterion2:backward(outsideRecon, fakeCodes[{{i*2-1, i*2}}])
        errorGrads = {df_dw, df_dwFake}
        gMod:backward({Data[{{i}}], fakeCodes[{{i*2-1, i*2}}]}, errorGrads)
        return err, gParams
    end
    x, reconError = optim.adam(opfunc, params, config, state)
end
print ("\n\nEncoders after training: \n\tgMod's Encoder: " .. gMod:get(2):forward(torch.ones(1, 100):cuda()):sum() .. "\n\tgModTest's Encoder: " .. gModTest:get(2):forward(torch.ones(1, 100):cuda()):sum())
print ("\nDecoders after training: \n\tgMod's Decoder: " .. gMod:get(3):forward(torch.ones(1, 10):cuda()):sum() .. "\n\tgModTest's Decoder: " .. gModTest:get(3):forward(torch.ones(1, 10):cuda()):sum())
I got the solution to the problem with the help of fmassa on a GitHub issue I had opened for this problem here. One can use nn.Container to resolve the parameter-sharing issue as follows:
container = nn.Container()
container:add(gMod)
container:add(gModTest)
params, gradParams = container:getParameters()
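The underlying reason (my understanding; the answer does not spell it out): getParameters() flattens all parameters into a single new contiguous storage, so calling it separately on two networks that share weights gives each network its own flattened copy and silently breaks the sharing established by clone('weight', 'bias', ...). Calling it exactly once, on a container holding both networks, keeps the shared storages intact. A minimal sketch of the pattern:
require 'nn'

local lin = nn.Linear(10, 10)
local net1 = nn.Sequential():add(lin)
local net2 = nn.Sequential():add(lin:clone('weight', 'bias', 'gradWeight', 'gradBias'))

-- breaks sharing: each call reallocates the parameters into its own flat tensor
-- local p1 = net1:getParameters()
-- local p2 = net2:getParameters()

-- keeps sharing: flatten once over a container that holds both networks
local container = nn.Container():add(net1):add(net2)
local params, gradParams = container:getParameters()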

Weird "attempt call field "drawers"(a table value)" error

I was working on this project about a year ago. I came back to it, and now it throws an error when I run it: attempt to call field 'drawers' (a table value).
This is where the drawers field is defined:
local Renderer = {}
local num_of_layers = 2
local insert = table.insert
local remove = table.remove
function Renderer:create()
    local render = {}
    render.drawers = {}
    for i = 0, num_of_layers do
        render.drawers[i] = {}
    end
    function render:addRenderer(obj, layer)
        local l = layer or 0
        insert(self.drawers(l), i, obj)
    end
    return render
end
return Renderer
This is where it is being called:
local tlm = {}
function tlm:load()
    renderer:addRenderer(self)
    gameloop:addLoop(self)
end
This is not correct:
insert(self.drawers(l), i, obj)
self.drawers is not a function but a table. Therefore a function call like self.drawers(l) results in an error.
If you want to insert an element into the table self.drawers at index l using Lua's standard functions, you should call:
table.insert(self.drawers, l, obj)
If you want to replace the value at index l, you can simply write self.drawers[l] = obj.
http://www.lua.org/manual/5.3/manual.html#pdf-table.insert
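Judging from how render.drawers is built (one sub-table per layer), the intent is probably to append obj to the chosen layer's own list. A corrected version might look like this (my reading of the code, not confirmed by the asker):
function render:addRenderer(obj, layer)
    local l = layer or 0
    -- index the layer's table with [], then append obj at its end
    insert(self.drawers[l], obj)
end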

Simple LZW Compression doesn't work

I wrote a simple class to compress data. Here it is:
LZWCompressor = {}
function LZWCompressor.new()
    local self = {}
    self.mDictionary = {}
    self.mDictionaryLen = 0
    -- ...
    self.Encode = function(sInput)
        self:InitDictionary(true)
        local s = ""
        local ch = ""
        local len = string.len(sInput)
        local result = {}
        local dic = self.mDictionary
        local temp = 0
        for i = 1, len do
            ch = string.sub(sInput, i, i)
            temp = s..ch
            if dic[temp] then
                s = temp
            else
                result[#result + 1] = dic[s]
                self.mDictionaryLen = self.mDictionaryLen + 1
                dic[temp] = self.mDictionaryLen
                s = ch
            end
        end
        result[#result + 1] = dic[s]
        return result
    end
    -- ...
    return self
end
And I run it with:
local compressor = LZWCompression.new()
local encodedData = compressor:Encode("I like LZW, but it doesnt want to compress this text.")
print("Input length:",string.len(originalString))
print("Output length:",#encodedData)
local decodedString = compressor:Decode(encodedData)
print(decodedString)
print(originalString == decodedString)
But when I finally run it with Lua, the interpreter complains that it expected a string, not a table. That seemed strange, because I pass an argument of type string. To check, I added this at the beginning of the function:
print(type(sInput))
I got the output "table", followed by Lua's error. So how do I fix this? Why does Lua say that the string I passed is a table? I use Lua 5.3.
The issue is in the definition of the method Encode(), and most likely Decode() has the same problem.
You create the Encode() method using dot syntax: self.Encode = function(sInput),
but then you call it with colon syntax: compressor:Encode(data)
When you call Encode() with colon syntax, its first implicit argument will be compressor itself (the table from your error), not the data.
To fix it, declare the Encode() method with colon syntax, function self:Encode(sInput), or add self explicitly as the first argument: self.Encode = function(self, sInput)
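A tiny self-contained illustration of the dot/colon mismatch (hypothetical names, not from the question):
local obj = {}
obj.greet = function(self, name)  -- explicit self as the first parameter
    return "hello " .. name
end
print(obj:greet("world"))  -- the colon call passes obj as self, prints "hello world"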
The code you provided should not run at all.
You define function LZWCompressor.new() but call LZWCompression.new().
Inside Encode you call self:InitDictionary(true) which has not been defined.
Maybe you did not paste all relevant code here.
The reason for the error you get, though, is that you call compressor:Encode(sInput), which is syntactic sugar for compressor.Encode(compressor, sInput). As function arguments are passed by position, not by name, sInput inside Encode is now compressor, not your string.
That first argument (which happens to be self, a table) is then passed to string.len, which expects a string.
So you actually call string.len(compressor), which of course results in an error.
Please make sure you know how to define and call functions, and how to use self properly!

Unusual behavior of image saving and loading in torch7

I noticed some unusual behavior with torch7. I only know a little about torch7, so I don't know how this behavior can be explained or corrected.
I am using the CIFAR-10 dataset. I simply fetched the data for one image from CIFAR-10 and saved it to my directory as an image file. When I loaded that saved image back, it was different.
Here is my code -
require 'image'
i1 = testData.data[2]     -- fetching data from CIFAR-10
image.save("1.png", i1)   -- saving the data as an image
i2 = image.load("1.png")  -- loading the saved image
if (i1 == i2) then        -- checking whether image1 (i1) and image2 (i2) are the same
    print("same")
end
Is this behavior expected? I thought PNG was supposed to be lossless. If so, how can this be corrected?
Code for loading the CIFAR-10 dataset:
-- load dataset
trainData = {
    data = torch.Tensor(50000, 3072),
    labels = torch.Tensor(50000),
    size = function() return trsize end
}
for i = 0, 4 do
    local subset = torch.load('cifar-10-batches-t7/data_batch_' .. (i+1) .. '.t7', 'ascii')
    trainData.data[{ {i*10000+1, (i+1)*10000} }] = subset.data:t()
    trainData.labels[{ {i*10000+1, (i+1)*10000} }] = subset.labels
end
trainData.labels = trainData.labels + 1
local subset = torch.load('cifar-10-batches-t7/test_batch.t7', 'ascii')
testData = {
    data = subset.data:t():double(),
    labels = subset.labels[1]:double(),
    size = function() return tesize end
}
testData.labels = testData.labels + 1
testData.data = testData.data:reshape(10000, 3, 32, 32)
The == operator compares the references of the two tensors, not their contents:
a = torch.Tensor(3, 5):fill(1)
b = torch.Tensor(3, 5):fill(1)
print(a == b)
> false
print(a:eq(b):all())
> true
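Applied to the check in the question, the content comparison would be (assuming i1 and i2 have the same size and element type):
if i1:eq(i2):all() then
    print("same")
end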
