Keras has this function called flow_from_directory and one of the parameters is called target_size. Here is the explanation for it:
target_size: Tuple of integers (height, width), default: (256, 256).
The dimensions to which all images found will be resized.
What is unclear to me is whether it is cropping the original image into a 256x256 matrix (in which case we do not take the entire image) or just reducing the resolution of the image (while still showing us the entire image)?
If it is, let's say, just reducing the resolution:
Assume that I have some x-ray images with the size 1024x1024 each (for breast cancer detection). If I want to apply transfer learning to a pretrained Convolutional Neural Network which only takes 224x224 input images, will I not be losing important data/information when I reduce the size (and resolution) of the image from 1024x1024 down to 224x224? Isn't there such a risk?
Thank you in advance!
Reducing the resolution (resizing).
Yes, you are losing data.
The best way for you is to rebuild your CNN to work with your original image size, i.e. 1024x1024.
It is reducing the resolution of the image (while still showing us the entire image)
It is true that you are losing data, but you can work with an image size a bit larger than 224x224, such as 512x512, as it will keep most of the information and will train in comparatively less time and with fewer resources than the original image size (1024x1024).
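For reference, a minimal sketch of where target_size fits in (the directory layout, grayscale colour mode, and binary labels are assumptions for the x-ray example): Keras resizes every image it finds to this size, it does not crop.

```python
from keras.preprocessing.image import ImageDataGenerator

# Hypothetical directory layout: data/train/<class_name>/*.png
datagen = ImageDataGenerator(rescale=1.0 / 255)

# Every image found under data/train is resized (not cropped) to 224x224
# before being yielded in batches, regardless of its original resolution.
train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),   # (height, width)
    color_mode="grayscale",   # assuming single-channel x-rays
    class_mode="binary",
    batch_size=32,
)
```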
Related
I have been playing around with some generative models, namely StyleGAN2 and diffusion models for image generation, specifically a homegrown dataset of 64 x 64 resolution. I have roughly 260 000 data samples.
These images are all grayscale, and every pixel intensity really matters, e.g. if the intensity (out of 255) of a pixel is 160 when it should be 155 for the image to look correct this can have a big impact on the quality of the image overall. This is within the context of the other pixels, of course, otherwise, we'd always be generating the same image.
The self-referential scores (FID and Kernel Inception Distance, KID) can be very low at the end of training - i.e. when the loss has converged - because the distributions of generated samples and real samples are very similar. You can also see this visually, where the images look roughly correct. But unfortunately, I really need the pixel values to be narrower.
One solution I was thinking of was to increase the resolution to 128x128 and essentially duplicate the pixel values in each dimension. This way the "noise" of individual pixels can be averaged out over 4 pixels during generation. Does this make sense, or does this sound naive? Thanks!
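If it helps to make that idea concrete, here is a small NumPy sketch (the array is just a stand-in for one 64x64 sample) of duplicating each pixel in both dimensions and then averaging the 2x2 blocks back down:

```python
import numpy as np

# Stand-in for one 64x64 grayscale sample.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

# Duplicate every pixel along both axes: 64x64 -> 128x128.
upscaled = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# Each original pixel now covers a 2x2 block, so averaging each block
# maps a generated 128x128 image back down to 64x64, smoothing the
# per-pixel noise over four generated values.
recovered = upscaled.reshape(64, 2, 64, 2).mean(axis=(1, 3))
```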
I'm working on a graduation project on image forgery detection using a CNN. Most of the papers I have read downscale the images before feeding the dataset to the network. I want to know: how does this process affect the image information?
Images are resized/rescaled to a specific size for a few reasons:
(1) It allows the user to set the input size to their network. When designing a CNN you need to know the shape (dimensions) of your data at each step; so, having a static input size is an easy way to make sure your network gets data of the shape it was designed to take (a minimal sketch of this follows below).
(2) Using a full resolution image as the input to the network is very inefficient (super slow to compute).
(3) For most cases the features you want to extract/learn from an image are still present when the image is downsampled. So in a way resizing an image to a smaller size will denoise the image, filtering out many of the unimportant features within the image for you.
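To illustrate point (1), a minimal sketch of how the input shape gets baked into a CNN definition (the layer sizes here are arbitrary, not from any particular paper):

```python
import tensorflow as tf

# The input shape is fixed at definition time, so every image fed to this
# network must first be resized to 224x224x3.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.summary()
```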
Well, you change the image's size. Of course that changes its information.
You cannot reduce image size without omitting information. Simple case: throw away every second pixel to scale the image to 50%.
Scaling up adds new pixels. In its simplest form you duplicate pixels, creating redundant information.
More complex solutions create new pixels (fewer or more) by averaging neighbouring pixels or interpolating between them.
Scaling up is reversible. It doesn't create or destroy information.
Scaling down divides the amount of information by the square of the downscaling factor*. Upscaling after downscaling results in a blurred image.
(*This is true in a first approximation. If the image doesn't have high frequencies, they are not lost, hence no loss of information.)
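A short Pillow sketch of the round trip described above (the file name is a placeholder): downscaling and then upscaling again gives back an image of the original size, but blurred, because the discarded detail cannot be recovered.

```python
from PIL import Image

# Placeholder file name; any reasonably large photo will do.
original = Image.open("photo.png")
w, h = original.size

# Downscale by 4x in each dimension: roughly 1/16 of the pixels remain.
small = original.resize((w // 4, h // 4), resample=Image.BILINEAR)

# Upscaling creates new pixels by interpolation, but the lost
# high-frequency detail is gone, so the result is blurred.
restored = small.resize((w, h), resample=Image.BILINEAR)
restored.save("photo_blurred.png")
```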
I wish to use transfer learning to process images, and my images have different sizes.
I think in general convolutional layers can take variable input size, but fully connected layers can only take input of specific size.
However, the Keras implementation of VGG-16 or ResNet50 can take any image size larger than 32x32, although they do have fully connected layers. I wonder how a fixed fully connected layer size is obtained for different image dimensions?
Thanks very much!
What you are saying is misleading: you can build a VGG/ResNet Keras model with any input image size larger than 32x32, but once the model is built, you can't change the input size, and that is usually the problem. So the model cannot really take variable-sized images.
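As a concrete sketch of what "any size larger than 32x32, but fixed once built" means (assuming the tensorflow.keras.applications weights are available; include_top=False drops the original fully connected head, which is what makes the custom input size possible):

```python
from tensorflow.keras.applications import VGG16

# The input size is chosen when the model is built. With include_top=False
# the 224x224-specific fully connected layers are dropped, so the
# convolutional base accepts any size of at least 32x32.
base = VGG16(weights="imagenet", include_top=False, input_shape=(512, 512, 3))

# ...but it is fixed from then on: this model only accepts 512x512x3 inputs.
print(base.input_shape)  # (None, 512, 512, 3)
```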
I need an image dataset, so I downloaded images. For training purposes I resized the images twice: from random sizes to (200, 300), and then, using those resized images, I resized them again to (64, 64). Is there any possibility that I will face problems while training? Does a picture lose its data when it is resized again and again?
Can you please explain this to me in detail? Thanks in advance.
Images fundamentally lose data when downsampled. If a pixel is the fundamental piece of data in an image and you remove pixels, then you have removed data. Different downsampling methods lose different amounts of data. For instance, a bilinear or bicubic downsampling method will use multiple pixels in the larger image to generate a single pixel in the smaller image, whereas nearest-neighbor downsampling uses a single pixel in the larger image to generate a single pixel in the smaller image, thereby losing more data.
Whether the down sampling will affect your training depends on more information than you have provided.
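To make the difference between the two methods concrete, a small Pillow sketch (the file name and 64x64 target are placeholders):

```python
from PIL import Image

# Placeholder source image at some arbitrary original size.
img = Image.open("downloaded_image.jpg")

# Nearest neighbor: each output pixel copies a single input pixel,
# so the surrounding pixels are simply discarded.
nearest = img.resize((64, 64), resample=Image.NEAREST)

# Bilinear: each output pixel is a weighted average of neighbouring input
# pixels, so more of the original information influences the result.
bilinear = img.resize((64, 64), resample=Image.BILINEAR)

nearest.save("nearest_64.png")
bilinear.save("bilinear_64.png")
```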
How can the size of an image affect training the model for this task?
My current training set holds images that are 2880 X 1800, but I am worried this may be too large to train. In total my sample size will be about 200-500 images.
Would this just mean that I need more resources (GPU, RAM, distribution) when training my model?
If this is too large, how should I go about resizing? -- I want to mimic real-world photo resolutions as best as possible for better accuracy.
Edit:
I would also be using TFRecord format for the image files
Your memory and processing requirements will be proportional to the pixel size of your image. Whether this is too large for you to process efficiently will depend on your hardware constraints and the time you have available.
With regard to resizing the images there is no one answer; you have to consider how best to preserve the information your algorithm will need to learn from your data while removing information that won't be useful. Reducing the size of your input images won't necessarily be a negative for accuracy. Consider two cases:
Handwritten digits
Here the images could be reduced considerably in size and maintain all the structural information necessary to be correctly identified. Have a look at the MNIST data set: these images are distributed at 28x28 resolution and identifiable to 99.7%+ accuracy.
Identifying Tree Species
Imagine a set of images of trees where individual leaves could help identify species. Here you might find that reducing the image size reduces small scale detail on leaf shape in a way that's detrimental to the model, but you might find that you get a similar result with a tight crop (which preserves individual leaves) rather than an image resize. If this is the case you may find that creating multiple crops from the same image gives you an augmented data set for training that considerably improves results (which is something to consider, if possible, given your training set is very small).
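One possible way to generate such crops, as a rough sketch (the crop size and count are arbitrary choices, and tf.image is just one option):

```python
import tensorflow as tf

def random_crops(image, num_crops=5, crop_size=(512, 512)):
    """Return several random fixed-size crops of one large image tensor."""
    crops = []
    for _ in range(num_crops):
        crops.append(tf.image.random_crop(image, size=(*crop_size, 3)))
    return tf.stack(crops)

# Stand-in for one 2880x1800 RGB training image (height x width x channels).
image = tf.zeros((1800, 2880, 3), dtype=tf.float32)
augmented = random_crops(image)   # shape: (5, 512, 512, 3)
```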
Deep learning models are achieving results around human level in many image classification tasks: if you struggle to identify your own images then it's less likely you'll train an algorithm to. This is often a useful starting point when considering the level of scaling that might be appropriate.
If you are using GPUs to train, this will definitely affect your training time. Tensorflow does most of the GPU allocation, so you don't have to worry about that. But with big photos you will experience long training times even though your dataset is small. You should consider data augmentation.
You could complement your resizing with data augmentation: resize to equal dimensions and then perform reflections and translations (as in geometric movement).
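A minimal sketch of that combination in Keras (the specific shift ranges, target size, and directory layout are just illustrative):

```python
from keras.preprocessing.image import ImageDataGenerator

# Reflections (flips) and small geometric translations (shifts),
# applied on top of the resize done by target_size below.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    horizontal_flip=True,
    vertical_flip=True,
    width_shift_range=0.1,   # translate by up to 10% horizontally
    height_shift_range=0.1,  # translate by up to 10% vertically
)

# Hypothetical directory layout: data/train/<class_name>/*.jpg
train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(512, 512),  # resize to equal dimensions
    batch_size=16,
)
```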
If your images are too big, your GPU might run out of memory before it can start training because it has to store the convolution outputs on its memory. If that happens, you can do some of the following things to reduce memory consumption:
resize the image
reduce batch size
reduce model complexity
To resize your image, there are many scripts just one Google search away, but I will add that in your case 1440 by 900 is probably a sweet spot.
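For what it's worth, a tiny version of such a script using Pillow (the folder names are placeholders):

```python
import os
from PIL import Image

SRC_DIR = "images_full"      # placeholder: folder with the 2880x1800 images
DST_DIR = "images_resized"   # placeholder: output folder
TARGET = (1440, 900)         # half the original resolution, same aspect ratio

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    with Image.open(os.path.join(SRC_DIR, name)) as img:
        resized = img.resize(TARGET, resample=Image.BILINEAR)
        resized.save(os.path.join(DST_DIR, name))
```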
Higher resolution images will result in a higher training time and an increased memory consumption (mainly GPU memory).
Depending on your concrete task, you might want to reduce the image size in order to fit a reasonable batch size of, let's say, 32 or 64 on the GPU, for stable learning.
Your accuracy is probably affected more by the size of your training set. So instead of going for image size, you might want to go for 500-1000 sample images. Recent publications like SSD - Single Shot MultiBox Detector - achieve high accuracy values, such as an mAP of 72% on the PascalVOC dataset, while "only" using 300x300 image resolution.
Resizing and augmentation: SSD, for instance, just scales every input image down to 300x300, independent of the aspect ratio, and that does not seem to hurt. You could also augment your data by mirroring, translating, etc. (I assume there are built-in methods in Tensorflow for that).
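If it helps, a rough sketch of that kind of preprocessing with TensorFlow's built-in image ops (300x300 as in SSD; a random flip as one example of mirroring):

```python
import tensorflow as tf

def preprocess(image):
    """Scale to 300x300 regardless of aspect ratio, then randomly mirror."""
    image = tf.image.resize(image, (300, 300))       # ignores aspect ratio
    image = tf.image.random_flip_left_right(image)   # mirroring augmentation
    return image

# Stand-in for one high-resolution RGB input, e.g. 2880x1800.
example = tf.zeros((1800, 2880, 3), dtype=tf.float32)
small = preprocess(example)   # shape: (300, 300, 3)
```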