Pixel-wise classification on a large image using U-Net and a .npy dataset

I have an image of roughly 5000*5000*8 (height * width * channels). I split this large image into smaller tiles (e.g. 400 images of 256*256*8, i.e. (x, y, channel)) and loaded them into a NumPy array. I now have one array of shape (400, 256, 256, 8) for my first class and another of shape (285, 256, 256, 8) for my second class, and I saved both arrays to .npy files. I want to classify the image pixel by pixel, and I have a label matrix with 2 classes. I want to do this classification with a customised U-Net, and I am using the Deep Cognition and Peltarion websites to configure my network and data. I need a method that helps me classify the image pixel-wise. Please help me.
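For reference, a minimal Keras sketch of pixel-wise two-class classification from .npy tiles like these could look as follows. The file names, the per-pixel label array, and the tiny encoder-decoder are assumptions for illustration only, not an actual U-Net configuration:

    import numpy as np
    from tensorflow.keras import layers, models

    # Assumed file names: adjust to the real .npy files.
    x1 = np.load('class1_tiles.npy')             # (400, 256, 256, 8)
    x2 = np.load('class2_tiles.npy')             # (285, 256, 256, 8)
    x = np.concatenate([x1, x2], axis=0).astype('float32')

    # Per-pixel labels: one (256, 256, 1) mask of 0/1 per tile,
    # built from the 2-class label matrix (assumed to exist).
    y = np.load('labels.npy').astype('float32')  # (685, 256, 256, 1)

    # Tiny encoder-decoder stand-in for a customised U-Net.
    inp = layers.Input(shape=(256, 256, 8))
    c1 = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding='same', activation='relu')(p1)
    u1 = layers.UpSampling2D()(c2)
    u1 = layers.concatenate([u1, c1])                      # skip connection
    out = layers.Conv2D(1, 1, activation='sigmoid')(u1)    # per-pixel class probability

    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.fit(x, y, batch_size=8, epochs=10, validation_split=0.1)

The key point is the final 1x1 convolution with a sigmoid, which gives one class probability per pixel; at prediction time the output mask can be thresholded at 0.5 and the tiles stitched back into the full 5000*5000 image.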

Related

Can a Keras CNN model built with single-channel (2D) 28x28 images predict real-world (RGB) images?

I'm building a CNN model with TensorFlow Keras, and the dataset available is in black and white.
I'm using the ImageDataGenerator available from the keras.preprocessing.image API to convert images to arrays. By default it converts every image to 3-channel input. So will my model be able to predict real-world images (colored images) if the training images are black and white rather than color?
Also, ImageDataGenerator has a parameter named "color_mode" that can take the value "grayscale" and gives us a 2D (single-channel) array to use in the model. If I go with this approach, do I need to convert real-world images into grayscale as well?
The color space of the images you train on should be the same as the color space of the images your application will see.
If luminance is the most important cue, e.g. for OCR, then training on grayscale images should produce a more efficient model. But if you need to recognize things that can appear in different colors, it may be worth using a color input.
If color is not important but you train on 3-channel images, e.g. RGB, you will have to provide examples in enough colors to avoid overfitting to color. For example, if you want to distinguish a car from a tree, you may end up with a model that maps any green object to a tree and everything else to a car.
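As a sketch of the grayscale route (the directory and file names below are placeholders): if you train with color_mode='grayscale', then yes, a real-world photo has to be converted to a single channel the same way before prediction.

    import numpy as np
    from tensorflow.keras.preprocessing.image import (
        ImageDataGenerator, load_img, img_to_array)

    # Training: the generator yields (28, 28, 1) arrays when color_mode='grayscale'.
    datagen = ImageDataGenerator(rescale=1.0 / 255)
    train_gen = datagen.flow_from_directory(
        'data/train',                  # placeholder path
        target_size=(28, 28),
        color_mode='grayscale',
        class_mode='categorical')

    # Prediction: a real-world RGB photo must go through the same conversion.
    img = load_img('photo.jpg', color_mode='grayscale', target_size=(28, 28))
    arr = img_to_array(img) / 255.0           # (28, 28, 1), values in [0, 1]
    batch = np.expand_dims(arr, axis=0)       # (1, 28, 28, 1)
    # prediction = model.predict(batch)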

How to prepare images for Haar Cascade? Positive vs Training samples

I am preparing to classify my own object using an OpenCV Haar Cascade. I understand that negative images are photos without your object, and positive images are those that include your object. The part that confuses me is how my positive images need to be set up. I have read numerous explanations, and it's still a bit confusing to me. I've read about 3 different methods of preparing samples.
1) Positive images contain only the object (it takes up the full size of the image) and are converted to a .vec file.
2) The object is part of a larger background image; the object's box dimensions are noted in a file, which is then converted to a .vec file.
3) A positive image is distorted and superimposed onto negative backgrounds.
Here are some links of articles I've read
https://www.academia.edu/9149928/A_complete_guide_to_train_a_cascade_classifier_filter
https://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html
http://note.sonots.com/SciSoftware/haartraining.html#w0a08ab4
Do I crop my positive images for training, or do I keep them as-is and include the rectangle dimensions of the object within the image?
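For the second method, the positives are not cropped; instead each image and its object bounding boxes are listed in an info file, which opencv_createsamples then packs into a .vec file. A rough Python sketch of writing such a file (the image names and box coordinates here are made-up examples):

    # Each line: <image path> <number of objects> <x y w h> per object.
    annotations = {
        'positives/img1.jpg': [(140, 100, 45, 45)],
        'positives/img2.jpg': [(100, 200, 50, 50), (60, 30, 25, 25)],
    }

    with open('positives.info', 'w') as f:
        for path, boxes in annotations.items():
            coords = ' '.join('%d %d %d %d' % box for box in boxes)
            f.write('%s %d %s\n' % (path, len(boxes), coords))

    # Then, outside Python, something along the lines of:
    # opencv_createsamples -info positives.info -vec positives.vec -num <N> -w 24 -h 24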

How to make/create a ground truth image file for training hyperspectral image data?

I need to train on hyperspectral data. How can I build a training dataset from scratch, and a ground truth array/image to be used with it? Spectral Python doesn't support training from PNG files.
We need to manually create masks for the particular classes on any 2D image and convert them into a single-band ENVI .raw file.
The ground truth image does not need to be a hypercube.
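A rough sketch with the spectral package (the spatial size, class IDs, regions and file names below are only examples): a single-band ground-truth image can be written out like this.

    import numpy as np
    import spectral.io.envi as envi

    rows, cols = 145, 145                         # example spatial size of the hypercube
    gt = np.zeros((rows, cols), dtype=np.uint8)   # 0 = unlabeled background

    # Example: mark hand-drawn regions with class IDs 1 and 2.
    gt[10:40, 20:60] = 1
    gt[80:120, 30:90] = 2

    # Writes gt.hdr plus the single-band raw data file next to it.
    envi.save_image('gt.hdr', gt, dtype=np.uint8, force=True)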

How can I feed an image into my neural network?

So far I have trained my neural network on the MNIST data set (from this tutorial). Now I want to test it by feeding my own images into it.
I've processed the image using OpenCV by making the dimensions 28x28 pixels, turning it into grayscale, and using adaptive thresholding. Where do I go from here?
An 'image' is a 28x28 array of values from 0-1... so not really an image. Just greyscaling your original image will not make it fit for input. You have to go through the following steps.
Load your image into your programming language, with 784 RGB values representing the pixels.
For each RGB value, take the average of R, G and B, then divide this value by 255. You will now have the greyscale value for that pixel, a number between 0 and 1.
Replace the RGB values with these greyscale values.
You will now have a 28x28 array of values between 0 and 1, just like the MNIST samples.
So you must do everything through your programming language. If you just greyscale an image with a photo editor, the pixels will still be stored as R,G,B.
You can use libraries like PIL or skimage that let you load the data into NumPy arrays in Python and also support many image operations such as grayscaling and scaling.
After you have processed the image and read the data into a NumPy array, you can then feed it to your network.
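A short sketch of those steps with OpenCV and NumPy (the file name and the trained model are placeholders; note that MNIST digits are white on black, so the image may also need to be inverted):

    import cv2
    import numpy as np

    img = cv2.imread('my_digit.png')               # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # collapse B,G,R into one luminance channel
    gray = cv2.resize(gray, (28, 28))
    gray = gray.astype('float32') / 255.0          # values now between 0 and 1

    # Match the shape the network expects: either a flat 784-vector ...
    flat = gray.reshape(1, 784)
    # ... or a (1, 28, 28, 1) tensor for a convolutional net.
    tensor = gray.reshape(1, 28, 28, 1)

    # prediction = model.predict(flat)   # 'model' is the trained MNIST network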

Use a trained Keras CNN to generate feature maps

I trained a very vanilla CNN using Keras/Theano that does a pretty good job of detecting whether a small (32x32) portion of an image contains a (relatively simple) object of type A or type B (or neither). The output is an array of three numbers [prob(neither class), prob(A), prob(B)]. Now I want to take a big image (512x680, methinks), sweep across it, and run the trained model on each 32x32 sub-image to generate a feature map of size 480x648 whose every point is a 3-vector of the aforementioned probabilities. Basically, I want to use my whole trained CNN as a (nonlinear) filter with three-dimensional output. At the moment, I am cutting each 32x32 patch out of the image one at a time, running the model on it, and dropping the resulting 3-vector into a big 3x480x648 array. However, this approach is very slow. Is there a faster/better way to do this?
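One thing that usually helps a lot without changing the network is to cut out all the windows first and run them through model.predict in large batches instead of calling it once per window. A sketch, assuming the image and the trained model are already loaded and the image has a channel axis:

    import numpy as np

    win = 32
    h, w = image.shape[:2]                     # e.g. 512 x 680
    out_h, out_w = h - win + 1, w - win + 1

    # Collect every window into one big batch (memory permitting;
    # otherwise process the rows in chunks).
    patches = np.empty((out_h * out_w, win, win, image.shape[2]), dtype=image.dtype)
    k = 0
    for i in range(out_h):
        for j in range(out_w):
            patches[k] = image[i:i + win, j:j + win]
            k += 1

    # One predict call over many patches is far faster than per-patch calls.
    probs = model.predict(patches, batch_size=1024)    # shape (out_h * out_w, 3)
    feature_map = probs.reshape(out_h, out_w, 3)

An even faster route, if the architecture allows it, is to make the network fully convolutional (replacing the dense layers with equivalent 1x1 convolutions) so the whole image can be pushed through in a single forward pass.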
