tf.data.Dataset.from_tensor_slices for an RNN model

I have an image dataset of RGB images: img1.png, img2.png, ..., img250.png. I have extracted 100 small patches of size [64, 64, 3] from each image, so my dataset now looks like img1_1.png, img1_2.png, ..., img1_100.png, img2_1.png, img2_2.png, ..., img2_100.png, img3_1.png, ...
I want to create a data generator with tf.data.Dataset.from_tensor_slices that passes all patches of each image to an RNN model, so the generator should produce output of shape [batch_size, 100, 64, 64, 3].
How can I do that?
250: the number of images,
100: the number of patches extracted from each image,
64, 64, 3: the size of each patch.
For each iteration I want to select, for example, 32 of the 250 images at random, put all 100 patches of each selected image together, and produce a batch of shape (32, 100, 64, 64, 3).
Because of memory constraints, I cannot load all of the data into a variable. I only have the 25,000 patches on disk, named img1_1.png, img1_2.png, ..., img1_100.png, img2_1.png, img2_2.png, ..., img2_100.png, img3_1.png, ..., img250_1.png, img250_2.png, ..., img250_100.png.
I think it is best to use something like tf.data.Dataset.from_tensor_slices((patch_files, labels)), but I don't know how.
It is also important to note that the size of the label vector is (250, 1). If the batch size is 32, the generator should output label batches of size (32, 1).
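One possible approach, as a rough sketch rather than a tested solution: build the dataset from the per-image lists of patch filenames, map a decoding function over each element, and then batch. The names patch_files and labels and the decoding details below are assumptions about how the files are laid out.

import tensorflow as tf

n_images, n_patches = 250, 100

# patch_files[i] lists the 100 patch paths belonging to image i+1, in order.
patch_files = [[f"img{i + 1}_{j + 1}.png" for j in range(n_patches)]
               for i in range(n_images)]
labels = tf.zeros([n_images, 1])  # placeholder for the real (250, 1) label vector

def load_patches(paths, label):
    # Decode the 100 patches of one image -> [100, 64, 64, 3]
    def load_one(path):
        img = tf.io.decode_png(tf.io.read_file(path), channels=3)
        return tf.image.convert_image_dtype(img, tf.float32)
    patches = tf.map_fn(load_one, paths, fn_output_signature=tf.float32)
    patches.set_shape([n_patches, 64, 64, 3])
    return patches, label

dataset = (tf.data.Dataset.from_tensor_slices((patch_files, labels))
           .shuffle(n_images)   # pick images in random order
           .map(load_patches, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(32)           # -> ([32, 100, 64, 64, 3], [32, 1])
           .prefetch(tf.data.AUTOTUNE))

Each element yielded by this dataset should then be a ([32, 100, 64, 64, 3], [32, 1]) pair, with only 32 x 100 patches decoded per step rather than the whole collection.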

Related

How to perform image augmentation for a sequence of images representing a sample

I want to know how to perform image augmentation for sequence image data.
The shape of my input to the model looks as below.
(None,30,112,112,3)
where 30 is the number of images in one sample, 112×112 is the height and width, and 3 is the number of channels.
Currently I have 17 samples (17, 30, 112, 112, 3), which is not enough, so I want to do some sequence image augmentation so that I have at least 50 samples of shape (50, 30, 112, 112, 3).
(Note: my dataset is not video; it is a sequence of images captured every 3 seconds, so you can think of it as already extracted frames.)
The 17 samples, each containing 30 sequence images, are stored in separate folders in a directory:
folder_1
folder_2,
.
.
.
folder_17
Can you please show me how to perform this data augmentation?
Here is an illustration of using the imgaug library on a single image:
# Reading an image using OpenCV
import cv2
import numpy as np

img = cv2.imread('flower.jpg')

# Append the image 5 times to a list and convert it to an array
images_list = []
for i in range(0, 5):
    images_list.append(img)
images_array = np.array(images_list)
The array images_array has shape (5, 133, 200, 3) => (number of images, height, width, number of channels)
Now our input is set. Let's do some augmentation:
# Import 'imgaug' library
import imgaug as ia
import imgaug.augmenters as iaa
# preparing a sequence of functions for augmentation
seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Crop(percent=(0, 0.1)),
    iaa.LinearContrast((0.75, 1.5)),
    iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05*255), per_channel=0.5),
    iaa.Multiply((0.8, 1.2), per_channel=0.2)
], random_order=True)
Refer to this page for more functions
# passing the input to the Sequential function
images_aug = seq(images=images_array)
images_aug is an array that contains the augmented images
# Display all the augmented images
for img in images_aug:
    cv2.imshow('Augmented Image', img)
    cv2.waitKey()
Some augmented results:
You can extend the above for your own problem.
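To extend this to sequence samples of shape (30, 112, 112, 3), one common pattern (a rough sketch I'm adding here, not part of the answer above) is to freeze the sampled augmentation parameters per sample with to_deterministic(), so that every frame of one sequence receives the identical transformation. The frame loading and shapes below are assumptions based on the question.

import cv2
import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Crop(percent=(0, 0.1)),
    iaa.LinearContrast((0.75, 1.5)),
], random_order=True)

def augment_sample(frame_paths):
    # Load the 30 frames of one sample
    frames = [cv2.imread(p) for p in frame_paths]
    # Freeze the random parameters for this sample, then apply them frame by frame
    seq_det = seq.to_deterministic()
    return np.stack([seq_det.augment_image(f) for f in frames])  # (30, 112, 112, 3)

Calling augment_sample() a couple of times per folder (folder_1 ... folder_17) would generate the extra samples needed to reach roughly 50.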

PyTorch V-Net final softmax activation layer for segmentation: output channel dimensions differ from the labels. How do I get the prediction output?

I am trying to build a V-Net. When I pass the images to segment during training, the output has 2 channels after the softmax activation (as specified in the architecture in the attached image), but the label and input have 1. How do I convert this so that the output is the segmented image? Do I just take one of the channels as the final output during training (e.g. output = output[:, 0, :, :, :]) and treat the other channel as background?
outputs = network(inputs)
batch_size = 32
outputs.shape: [32, 2, 64, 128, 128]
inputs.shape: [32, 1, 64, 128, 128]
labels.shape: [32, 1, 64, 128, 128]
Here is my Vnet forward pass:
def forward(self, x):
    # Initial input transition
    out = self.in_tr(x)

    # Downward transitions
    out, residual_0 = self.down_depth0(out)
    out, residual_1 = self.down_depth1(out)
    out, residual_2 = self.down_depth2(out)
    out, residual_3 = self.down_depth3(out)

    # Bottom layer
    out = self.up_depth4(out)

    # Upward transitions
    out = self.up_depth3(out, residual_3)
    out = self.up_depth2(out, residual_2)
    out = self.up_depth1(out, residual_1)
    out = self.up_depth0(out, residual_0)

    # Final conv to 2 channels, then softmax over the channel dimension
    out = self.final_conv(out)
    out = F.softmax(out, dim=1)
    return out  # [batch_size, 2, 64, 128, 128]
V-Net architecture as described in the paper (https://arxiv.org/pdf/1606.04797.pdf)
That paper has two outputs as they predict two classes:
The network predictions, which consist of two volumes having the same resolution as the original input data, are processed through a soft-max layer which
outputs the probability of each voxel to belong to foreground and to background.
Therefore this is not an autoencoder, where your inputs are passed back through the model as outputs. They use a set of labels which distinguish between their pixels of interest (foreground) and other (background). You will need to change your data if you wish to use the V-net in this manner.
It won't be as simple as designating a channel as output because this will be a classification task rather than a regression task. You will need annotated labels to work with this model architecture.
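For completeness, here is a rough sketch (my addition, not part of the answer) of how a 2-channel output like this is typically turned into a prediction and trained against integer foreground/background labels. The channel ordering is an assumption, and note that torch.nn.functional.cross_entropy expects raw logits, so the final softmax is usually dropped from the forward pass during training.

import torch
import torch.nn.functional as F

logits = torch.randn(32, 2, 64, 128, 128)            # stand-in for network(inputs), no softmax
labels = torch.randint(0, 2, (32, 1, 64, 128, 128))  # 0 = background, 1 = foreground

# Prediction: most probable class per voxel -> [32, 64, 128, 128]
pred_mask = logits.argmax(dim=1)

# Training: cross_entropy takes [N, C, ...] logits and [N, ...] integer labels
loss = F.cross_entropy(logits, labels.squeeze(1).long())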

Error: "DimensionMismatch("matrix A has dimensions (1024,10), vector B has length 9")" using Flux in Julia

I'm still new to Julia and to machine learning in general, but I'm quite eager to learn. In the current project I'm working on I have a problem with a dimension mismatch and can't figure out what to do.
I have two arrays as follows:
x_array:
9-element Array{Array{Int64,N} where N,1}:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 72, 73]
[11, 12, 13, 14, 15, 16, 17, 72, 73]
[18, 12, 19, 20, 21, 22, 72, 74]
[23, 24, 12, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 72, 74]
[36, 37, 38, 39, 40, 38, 41, 42, 72, 73]
[43, 44, 45, 46, 47, 48, 72, 74]
[49, 50, 51, 52, 14, 53, 72, 74]
[54, 55, 41, 56, 57, 58, 59, 60, 61, 62, 63, 62, 64, 72, 74]
[65, 66, 67, 68, 32, 69, 70, 71, 72, 74]
y_array:
9-element Array{Int64,1}
75
76
77
78
79
80
81
82
83
and the following model using Flux:
model = Chain(
    LSTM(10, 256),
    LSTM(256, 128),
    LSTM(128, 128),
    Dense(128, 9),
    softmax
)
I zip both arrays, and then feed them into the model using Flux.train!
data = zip(x_array, y_array)
Flux.train!(loss, Flux.params(model), data, opt)
and it immediately throws the following error:
ERROR: DimensionMismatch("matrix A has dimensions (1024,10), vector B has length 9")
Now, I know that the first dimension of matrix A is the sum of the hidden layers (256 + 256 + 128 + 128 + 128 + 128) and the second dimension is the input layer, which is 10. The first thing I did was change the 10 to a 9, but then it just throws a different error:
ERROR: DimensionMismatch("dimensions must match")
Can someone explain to me what dimensions are the ones that mismatch, and how to make them match?
Introduction
First off, you should know that from an architectural standpoint, you are asking something very difficult from your network; softmax re-normalizes outputs to be between 0 and 1 (weighted like a probability distribution), which means that asking your network to output values like 77 to match y will be impossible. That's not what is causing the dimension mismatch, but it's something to be aware of. I'm going to drop the softmax() at the end to give the network a fighting chance, especially since it's not what's causing the problem.
Debugging shape mismatches
Let's walk through what actually happens inside of Flux.train!(). The definition is actually surprisingly simple. Ignoring everything that doesn't matter to us, we are left with:
for d in data
    gs = gradient(ps) do
        loss(d...)
    end
end
Therefore, let's start by pulling the first element out of your data, and splatting it into your loss function. You didn't specify your loss function or optimizer in the question. Although softmax usually means you should use crossentropy loss, your y values are very much not probabilities, and so if we drop the softmax we can just use the dead-simple mse() loss. For optimizer, we'll default to good old ADAM:
model = Chain(
    LSTM(10, 256),
    LSTM(256, 128),
    LSTM(128, 128),
    Dense(128, 9),
    #softmax,        # commented out for now
)

loss(x, y) = Flux.mse(model(x), y)
opt = ADAM(0.001)
data = zip(x_array, y_array)
Now, to simulate the first run of Flux.train!(), we take first(data) and splat that into loss():
loss(first(data)...)
This gives us the error message you've seen before; ERROR: DimensionMismatch("matrix A has dimensions (1024,10), vector B has length 12"). Looking at our data, we see that yes, indeed, the first element of our dataset has a length of 12. And so we will change our model to instead expect 12 values instead of 10:
model = Chain(
    LSTM(12, 256),
    LSTM(256, 128),
    LSTM(128, 128),
    Dense(128, 9),
)
And now we re-run:
julia> loss(first(data)...)
50595.52542674723 (tracked)
Huzzah! It worked! We can run this again:
julia> loss(first(data)...)
50578.01417593167 (tracked)
The value changes because the RNN holds memory within itself which gets updated each time we run the network, otherwise we would expect the network to give the same answer for the same inputs!
The problem comes, however, when we try to run the second training instance through our network:
julia> loss([d for d in data][2]...)
ERROR: DimensionMismatch("matrix A has dimensions (1024,12), vector B has length 9")
Understanding LSTMs
This is where we run into Machine Learning problems more than programming problems; the issue here is that we have promised to feed that first LSTM network a vector of length 10 (well, 12 now) and we are breaking that promise. This is a general rule of deep learning; you always have to obey the contracts you sign about the shape of the tensors that are flowing through your model.
Now, the reasons you're using LSTMs at all is probably because you want to feed in ragged data, chew it up, then do something with the result. Maybe you're processing sentences, which are all of variable length, and you want to do sentiment analysis, or somesuch. The beauty of recurrent architectures like LSTMs is that they are able to carry information from one execution to another, and they are therefore able to build up an internal representation of a sequence when applied upon one time point after another.
When building an LSTM layer in Flux, you are therefore declaring not the length of the sequence you will feed in, but rather the dimensionality of each time point; imagine if you had an accelerometer reading that was 1000 points long and gave you X, Y, Z values at each time point; to read that in, you would create an LSTM that takes in a dimensionality of 3, then feed it 1000 times.
Writing our own training loop
I find it very instructive to write our own training loop and model execution function so that we have full control over everything. When dealing with time series, it's often easy to get confused about how to call LSTMs and Dense layers and whatnot, so I offer these simple rules of thumb:
When mapping from one time series to another (E.g. constantly predict future motion from previous motion), you can use a single Chain and call it in a loop; for every input time point, you output another.
When mapping from a time series to a single "output" (E.g. reduce sentence to "happy sentiment" or "sad sentiment") you must first chomp all the data up and reduce it to a fixed size; you feed many things in, but at the end, only one comes out.
We're going to re-architect our model into two pieces; first the recurrent "pacman" section, where we chomp up a variable-length time sequence into an internal state vector of pre-determined length, then a feed-forward section that takes that internal state vector and reduces it down to a single output:
pacman = Chain(
    LSTM(1, 128),    # map from timepoint size 1 to 128
    LSTM(128, 256),  # blow it up even larger to 256
    LSTM(256, 128),  # bottleneck back down to 128
)

reducer = Chain(
    Dense(128, 9),
    #softmax,        # keep this commented out for now
)
The reason we split it up into two pieces like this is because the problem statement wants us to reduce a variable-length input series to a single number; we're in the second bullet point above. So our code naturally must take this into account: instead of calling model(x), our loss(x, y) function will do the pacman dance and then call the reducer on the output. Note that we also must reset!() the RNN state so that the internal state is cleared for each independent training example:
function loss(x, y)
    # Reset internal RNN state so that it doesn't "carry over" from
    # the previous invocation of `loss()`.
    Flux.reset!(pacman)

    # Iterate over every timepoint in `x`, keeping only the last output.
    # `local` declares y_hat in the function scope so it survives the loop.
    local y_hat
    for x_t in x
        y_hat = pacman(x_t)
    end

    # Take the very last output from the recurrent section and reduce it
    y_hat = reducer(y_hat)

    # Calculate the reduced output's difference against `y`
    return Flux.mse(y_hat, y)
end
Feeding this into Flux.train!() actually trains, albeit not very well. ;)
Final observations
Although your data is all Int64's, it's pretty typical to use floating point numbers with everything except embeddings (an embedding is a way to take non-numeric data such as characters or words and assign numbers to them, kind of like ASCII); if you're dealing with text, you're almost certainly going to be working with some kind of embedding, and that embedding will dictate what the dimensionality of your first LSTM is, whereupon your inputs will all be "one-hot" encoded.
softmax is used when you want to predict probabilities; it's going to ensure that for each input, the outputs are all between [0...1] and moreover that they sum to 1.0, like a good little probability distribution should. This is most useful when doing classification, when you want to wrangle your wild network output values of [-2, 5, 0.101] into something where you can say "we have 99.1% certainty that the second class is correct, and 0.7% certainty it's the third class."
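For reference, the arithmetic behind numbers like those (my addition): softmax(z)_i = exp(z_i) / sum_j exp(z_j), so for z = [-2, 5, 0.101] we get exp(z) ≈ [0.135, 148.4, 1.106], a sum of ≈ 149.7, and probabilities ≈ [0.0009, 0.9917, 0.0074].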
When training these networks, you're often going to want to batch multiple time series at once through your network for hardware efficiency reasons; this is both simple and complex, because on one hand it just means that instead of passing a single Sx1 vector through (where S is the size of your embedding) you're instead going to be passing through an SxN matrix, but it also means that the number of timesteps of everything within your batch must match (because the SxN must remain the same across all timesteps, so if one time series ends before any of the others in your batch you can't just drop it and thereby reduce N halfway through a batch). So what most people do is pad their timeseries all to the same length.
Good luck in your ML journey!

Transforming MPSNNImageNode using Metal Performance Shader

I am currently working on replicating YOLOv2 (not tiny) on iOS (Swift 4) using MPS.
A problem is that it is hard for me to implement the space_to_depth function (https://www.tensorflow.org/api_docs/python/tf/space_to_depth) and the concatenation of two convolution results (13x13x256 + 13x13x1024 -> 13x13x1280). Could you give me some advice on implementing these parts? My code is below.
...
let conv19 = MPSCNNConvolutionNode(source: conv18.resultImage,
                                   weights: DataSource("conv19", 3, 3, 1024, 1024))

let conv20 = MPSCNNConvolutionNode(source: conv19.resultImage,
                                   weights: DataSource("conv20", 3, 3, 1024, 1024))

let conv21 = MPSCNNConvolutionNode(source: conv13.resultImage,
                                   weights: DataSource("conv21", 1, 1, 512, 64))
/*****
1. space_to_depth with conv21
2. concatenate the result of conv20(13x13x1024) to the result of 1 (13x13x256)
I need your help to implement this part!
******/
I believe space_to_depth can be expressed in the form of a convolution:
For instance, for an input with dimensions [1, 2, 2, 1], use 4 convolution kernels that each output one number to one channel, i.e. [[1,0],[0,0]], [[0,1],[0,0]], [[0,0],[1,0]], [[0,0],[0,1]]; this should move all input numbers from the spatial dimensions to the depth dimension.
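As a quick sanity check of that equivalence (my own sketch, in TensorFlow rather than MPS), the four indicator kernels applied with stride 2 reproduce tf.nn.space_to_depth on a single-channel input:

import numpy as np
import tensorflow as tf

# 4x4 single-channel input, block size 2
x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

# Reference op
ref = tf.nn.space_to_depth(x, block_size=2)

# The same rearrangement as a strided convolution with 4 indicator kernels
filters = np.zeros([2, 2, 1, 4], dtype=np.float32)
for i in range(2):
    for j in range(2):
        filters[i, j, 0, i * 2 + j] = 1.0
conv = tf.nn.conv2d(x, filters, strides=[1, 2, 2, 1], padding="VALID")

print(np.allclose(ref.numpy(), conv.numpy()))  # True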
MPS actually has a concat node. See here: https://developer.apple.com/documentation/metalperformanceshaders/mpsnnconcatenationnode
You can use it like this:
concatNode = [[MPSNNConcatenationNode alloc] initWithSources:@[layerA.resultImage, layerB.resultImage]];
If you are working with the high level interface and the MPSNNGraph, you should just use a MPSNNConcatenationNode, as described by Tianyu Liu above.
If you are working with the low level interface, manhandling the MPSKernels around yourself, then this is done by:
1. Create a 1280-channel destination image to hold the result.
2. Run the first filter as normal to produce the first 256 channels of the result.
3. Run the second filter to produce the remaining channels, with the destinationFeatureChannelOffset set to 256.
That should be enough in all cases, except when the data is not the product of a MPSKernel. In that case, you'll need to copy it in yourself or use something like a linear neuron (a=1,b=0) to do it.

How do I create a dataset with multiple images the same format as CIFAR10?

I have images of size 1750×1750 and I would like to label them and put them into a file in the same format as CIFAR-10. I have seen a similar question whose answer gave the following:
from PIL import Image
import numpy as np

label = [3]
im = Image.open(img)
im = np.array(im)
print(im)
r = im[:,:,0].flatten()
g = im[:,:,1].flatten()
b = im[:,:,2].flatten()
array = np.array(list(label) + list(r) + list(g) + list(b), np.uint8)
array.tofile("info.bin")
but it doesn't include how to add multiple images in a single file. I have looked at CIFAR10 and tried to append the arrays in the same way, but all I got was the following error:
E tensorflow/core/client/tensor_c_api.cc:485] Read less bytes than requested
Note that I am using Tensorflow to do my computations, and I have been able to isolate the problem from the data.
The CIFAR-10 binary format represents each example as a fixed-length record with the following format:
1-byte label.
1 byte per pixel for the red channel of the image.
1 byte per pixel for the green channel of the image.
1 byte per pixel for the blue channel of the image.
Assuming you have a list of image filenames called images, and a list of integers (less than 256) called labels corresponding to their labels, the following code would write a single file containing these images in CIFAR-10 format:
with open(output_filename, "wb") as f:
    for label, img in zip(labels, images):
        label = np.array(label, dtype=np.uint8)
        f.write(label.tobytes())          # Write label.
        im = np.array(Image.open(img), dtype=np.uint8)
        f.write(im[:, :, 0].tobytes())    # Write red channel.
        f.write(im[:, :, 1].tobytes())    # Write green channel.
        f.write(im[:, :, 2].tobytes())    # Write blue channel.
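As a quick check (my addition, not part of the original answer), the resulting file can be read back with NumPy, assuming the images really are 1750x1750 RGB as in the question:

import numpy as np

height = width = 1750
record_bytes = 1 + 3 * height * width

with open(output_filename, "rb") as f:
    records = np.frombuffer(f.read(), dtype=np.uint8).reshape(-1, record_bytes)

labels_read = records[:, 0]
# Channels are stored planar (all red, then all green, then all blue), so
# reshape and transpose to get (num_images, height, width, 3).
images_read = records[:, 1:].reshape(-1, 3, height, width).transpose(0, 2, 3, 1)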
