How to train on bigger image sizes in Google Colab? - image-processing

The attached image shows the U-Net model and its related attributes. The image size that I want to train on is 572x572x3, as in the original paper.
I am running it in Colab, but my Colab session keeps crashing.
I am using:
batch size = 4
epochs = 20
How can I train this model for that image size? The image size at the output is 560x560x3.
Thank you
Architecture
u-net
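One common way to keep a 572x572x3 U-Net within Colab's GPU memory is to drop the batch size and enable mixed precision. A minimal Keras sketch under those assumptions; build_unet(), load_and_preprocess(), and the file-path lists are hypothetical placeholders, not code from the question:

import tensorflow as tf

# Mixed precision roughly halves activation memory on a GPU runtime;
# keep the model's final layer in float32 when building it this way.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

model = build_unet(input_shape=(572, 572, 3))   # hypothetical U-Net constructor
model.compile(optimizer='adam', loss='binary_crossentropy')

# A smaller batch size is the simplest lever against out-of-memory crashes;
# prefetch keeps the GPU busy without holding many 572x572 batches in RAM.
train_ds = (tf.data.Dataset.from_tensor_slices((image_paths, mask_paths))   # assumed lists of file paths
            .map(load_and_preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # hypothetical loader
            .batch(1)
            .prefetch(tf.data.AUTOTUNE))

model.fit(train_ds, epochs=20)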

Related

Image size (324768098 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack

I was facing an issue with a classification model trained with fastai.
I am running images in batches through the classification model, but because of one image the entire batch fails. The model was trained on preprocessed images, with transformations applied at image size = 224. When this particular image is passed through, it shows this error and the entire batch fails:
"Image size (324768098) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack."
Why is this error showing, then? I had preprocessed the images when training the model by setting the image size to 224.
I'd like to respect the limit and not increase the limit (as described here: Pillow in Python won't let me open image ("exceeds limit"))
How can I load the image so that its resolution is lowered to just below the limit, and the lower-resolution image is what img refers to, without causing any error?
It'd be great to have a Python solution but if not, any other solution will work too.
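One hedged option that respects the Pillow limit is to read the oversized file with OpenCV (whose default pixel limit is much higher), downscale it to just under 178,956,970 pixels, and pass the smaller image on to the fastai pipeline. The file path and the re-saving step below are assumptions:

import math
import cv2

PIXEL_LIMIT = 178_956_970            # Pillow's default MAX_IMAGE_PIXELS

img = cv2.imread('huge_image.jpg')   # hypothetical path to the offending image
h, w = img.shape[:2]
if h * w > PIXEL_LIMIT:
    # Scale both sides so the total pixel count lands just below the limit.
    scale = math.sqrt(PIXEL_LIMIT / (h * w)) * 0.99
    img = cv2.resize(img, (int(w * scale), int(h * scale)), interpolation=cv2.INTER_AREA)

cv2.imwrite('huge_image_small.jpg', img)   # or convert the array in memory instead of re-saving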

Autoencoder during training has loss of about 0.003 and accuracy above 95%, but still yields a poor decoded image

I have implemented an autoencoder architecture in PyTorch, with 5 encoding and 5 decoding layers, to learn features of the human face. The model architecture is attached below. The image sizes passed to the model varied: for the eye it is 128x128, with a different window size for each face component. Still, despite a low loss, the reconstructed image is very blurry; an image is attached for reference. The model architecture was derived from the Deep Face Drawing paper.
The custom dataset we prepared contains human face portraits with a white background. Attached are example images with both black and white backgrounds: https://i.stack.imgur.com/2NlgA.jpg and https://i.stack.imgur.com/jET2F.png
I tried this autoencoder but could not pin down the problem. The image from the decoder is not reconstructed very well. It would be really helpful if somebody could address this.
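One thing worth checking in this situation is whether the reconstructions are visualised without undoing the input normalisation, which can make the output look far blurrier or darker than the loss suggests. A minimal PyTorch sketch, assuming a trained model that returns the reconstruction directly and a loader that yields batches normalised to [-1, 1] (all names are hypothetical):

import torch
import matplotlib.pyplot as plt

model.eval()
with torch.no_grad():
    batch, _ = next(iter(loader))    # assumed to yield (images, labels) with images in [-1, 1]
    recon = model(batch)             # assumed to return the reconstructed images

def to_img(t):
    # Map a CHW tensor from [-1, 1] back to [0, 1] HWC for plotting.
    return (t.clamp(-1, 1) * 0.5 + 0.5).permute(1, 2, 0).cpu().numpy().squeeze()

fig, axes = plt.subplots(2, 4, figsize=(12, 6))
for i in range(4):
    axes[0, i].imshow(to_img(batch[i])); axes[0, i].axis('off')   # inputs
    axes[1, i].imshow(to_img(recon[i])); axes[1, i].axis('off')   # reconstructions
plt.show()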

What is the optimal image resolution to train an image classifier with CreateML?

I want to create an image classifier model using CreateML. I have images available in very high resolution but that comes at a cost in terms of data traffic and processing time, so I prefer to use images as small as possible.
The docs say that:
The images (...) don’t have to be a particular size, nor do they need to be the same size as each other. However, it’s best to use images that are at least 299 x 299 pixels.
I trained a test model with images of various sizes > 299x299px and the model parameters in Xcode show the dimension 299x299px which I understand is the normalized image size:
This dimension seems to be determined by the CreateML Image Classifier algorithm and is not configurable.
Does it make any sense to train the model with images that are larger than 299x299px?
If the image dimensions are not a square (same height as width) will the training image be center cropped to 299x299px during the process of normalization, or will the parts of the image that are outside the square influence the model?
From reading and from experience training image classification models (but with no direct inside-Apple knowledge), it appears that Create ML scales incoming images to fit a 299 x 299 square. You would be wasting disk space and preprocessing time by providing larger images.
The best documentation I can find is to look at the mlmodel file created by CreateML for an image classifier template. The input is explicitly defined as color image 299 x 299. No option to change that setting in the stand-alone app.
Here is some documentation (it applies to the Image Classifier template, which uses ScenePrint by default):
https://developer.apple.com/documentation/createml/mlimageclassifier/featureextractortype/sceneprint_revision
There may be a Center/Crop option in the Playground workspace, but I never found it in the standalone app version of Create ML.
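As an illustration of the scale-versus-crop question (in Pillow, not Create ML itself; how Create ML normalises non-square images internally is not documented), the two options look like this:

from PIL import Image

TARGET = 299

def scale_to_fill(img):
    # Squashes the image to 299x299: the aspect ratio is distorted,
    # but every part of the original stays in view.
    return img.resize((TARGET, TARGET))

def center_crop(img):
    # Scales the short side to 299, then crops the long side around the centre,
    # discarding whatever falls outside the square.
    w, h = img.size
    scale = TARGET / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    w, h = img.size
    left, top = (w - TARGET) // 2, (h - TARGET) // 2
    return img.crop((left, top, left + TARGET, top + TARGET))

img = Image.open('example.jpg')          # hypothetical non-square training image
scale_to_fill(img).save('filled.jpg')
center_crop(img).save('cropped.jpg')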

keras - flow_from_directory function - target_size parameter

Keras has this function called flow_from_directory and one of the parameters is called target_size. Here is the explanation for it:
target_size: Tuple of integers (height, width), default: (256, 256).
The dimensions to which all images found will be resized.
What is unclear to me is whether it just crops the original image to a 256x256 matrix (in which case we do not take the entire image), or whether it just reduces the resolution of the image (while still showing us the entire image).
If it is - let's say - just reducing the resolution:
Assume that I have some X-ray images of size 1024x1024 each (for breast cancer detection). If I want to apply transfer learning to a pretrained convolutional neural network that only takes 224x224 input images, won't I be losing important data/information when I reduce the size (and resolution) of the image from 1024x1024 down to 224x224? Isn't there such a risk?
Thank you in advance!
Reducing the resolution (resizing).
Yes, you are losing data.
The best way for you is to rebuild your CNN to work with your original image size, i.e. 1024x1024.
It is reducing the resolution of the image (while still showing us the entire image).
It is true that you are losing data, but you can work with an image size a bit larger than 224x224, like 512x512, as it will keep most of the information and will train with comparatively less time and resources than the original image size (1024x1024).
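Concretely, a flow_from_directory sketch with a larger target_size (the directory path and batch size are placeholders); target_size resizes the whole image with the chosen interpolation, it does not crop:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255)

train_gen = datagen.flow_from_directory(
    'train_dir',                 # hypothetical path: train_dir/<class_name>/*.png
    target_size=(512, 512),      # every image is resized to 512x512, keeping the full view
    color_mode='grayscale',      # x-rays are typically single channel
    class_mode='binary',
    interpolation='bilinear',    # how the resizing is done
    batch_size=8,
)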

Training a HOG + linear SVM based on the TUD-Brussels dataset

Has anyone ever tried to train a HOG + linear SVM pedestrian detector based on the TUD-Brussels dataset (which is introduced on this website):
https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/people-detection-pose-estimation-and-tracking/multi-cue-onboard-pedestrian-detection/
I tried to implement it with OpenCV in Visual Studio 2012. I cropped positive samples from the original positive images based on their annotations (about 1777 samples in total). Negative samples were cropped from the original negative images at random, 20 samples per image (about 3840 samples in total).
I also applied two rounds of bootstrapping (mining hard examples and retraining) to improve its performance. However, the test result for this detector on TUD-Brussels was awful: about a 97% miss rate when FPPG (false positives per image) equals 1. I found another paper that achieved a reasonable result when training on TUD-Brussels with HOG (see Figure 3(a)):
https://www1.ethz.ch/igp/photogrammetry/publications/pdf_folder/walk10cvpr.pdf
Does anybody have any idea about training HOG + linear SVM on TUD-Brussels?
I had to deal with a similar situation recently. I developed an image classifier with HOG and a linear SVM in Python using PyCharm. The problem I faced was that it took a lot of time to train.
Solution:
Simple: I resized each image to 250x250. It really increased performance in my situation.
Resize each image
Convert it to grayscale
Find the PCA
Flatten that and append it to the training list
Append the labels to the training labels
# Legacy OpenCV 2.4 API, as used in the original answer. The imports, the two lists,
# path1/listing1 and svm_params were not shown in the post and are filled in here as assumptions.
import os
import cv2
import numpy as np

path1 = 'training_images/'           # hypothetical folder of training images
listing1 = os.listdir(path1)
training_set = []
training_labels = []

for file in listing1:
    img = cv2.imread(path1 + file)
    res = cv2.resize(img, (250, 250))
    gray_image = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    xarr = np.squeeze(np.array(gray_image).astype(np.float32))
    m, v = cv2.PCACompute(xarr)      # mean and eigenvectors of the 250x250 grayscale image
    arr = np.array(v)
    flat_arr = arr.ravel()           # flatten the eigenvectors into a single feature vector
    training_set.append(flat_arr)
    training_labels.append(1)        # label for this class

# Now the training
svm_params = dict(svm_type=cv2.SVM_C_SVC, kernel_type=cv2.SVM_LINEAR, C=1)  # assumed parameters
trainData = np.float32(training_set)
responses = np.float32(training_labels)
svm = cv2.SVM()
svm.train(trainData, responses, params=svm_params)
svm.save('svm_data.dat')
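Note that the snippet above builds PCA-based features rather than HOG descriptors. For the HOG + linear SVM setup the question asks about, a rough sketch with the current OpenCV cv2.ml API could look like the following; pos_crops and neg_crops are assumed to be lists of BGR pedestrian and background crops loaded elsewhere:

import cv2
import numpy as np

hog = cv2.HOGDescriptor()   # default 64x128 detection window with Dalal-Triggs parameters

def hog_features(crops):
    # Resize each crop to the 64x128 window, convert to grayscale,
    # and compute its 3780-dimensional HOG vector.
    feats = []
    for img in crops:
        patch = cv2.cvtColor(cv2.resize(img, (64, 128)), cv2.COLOR_BGR2GRAY)
        feats.append(hog.compute(patch).ravel())
    return feats

samples = np.float32(hog_features(pos_crops) + hog_features(neg_crops))
labels = np.int32([1] * len(pos_crops) + [-1] * len(neg_crops))

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setC(0.01)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
svm.save('hog_linear_svm.xml')

To use the trained weights with cv2.HOGDescriptor.setSVMDetector for multi-scale detection, the primal coefficients (support vectors and rho) still have to be extracted from the trained model.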
