Caffe ResNet-50: I don't want resizing to 256 and cropping to 224

I have training data already aligned to 224x224. It would be wrong to resize to 256x256 and then take a random crop. Is there a way to skip this transformation and use the images as they are?

Yes, you only need to edit the transformation parameters (transform_param) of the input data layer you are using accordingly.
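For example, with pycaffe's NetSpec the data layer could look like the sketch below. This assumes an LMDB built from the 224x224 images without any offline resizing; the source path and mean values are placeholders. A crop_size of 224 on 224x224 inputs is effectively a no-op, so no pixels are discarded.

```python
import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
# crop_size equals the stored image size, so nothing is resized or cropped.
n.data, n.label = L.Data(
    source='train_lmdb',                  # placeholder path
    backend=P.Data.LMDB,
    batch_size=32,
    transform_param=dict(
        crop_size=224,                    # 224 crop of a 224x224 image: no-op
        mirror=True,                      # keep/drop augmentations as needed
        mean_value=[104, 117, 123]),      # example BGR means
    ntop=2)
```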

Related

Use first MLModel MLMultiArray output as second MLModel MLMultiArray Input

I have two Core ML MLModels (converted from .pb files).
The first model outputs a Float32 3 × 512 × 512 MLMultiArray, which basically describes an image.
The second model input is a Float32 1 × 360 × 640 × 3 MLMultiArray, which is also an image but with a different size.
I know that in theory I can change the second model's input to an image, convert the first model's output to an image (post-prediction), resize it, and feed it to the second model, but that feels inefficient, and there is already significant latency from the models themselves, so I'm trying to improve performance.
Is it possible to resize/reshape/transpose the first model's output to match the second model's input? I'm using https://github.com/hollance/CoreMLHelpers (by the amazing Matthijs Hollemans), but I don't really understand how to do this without damaging the data while keeping it as efficient as possible.
Thanks!
You don't have to turn them into images. Some options for using MLMultiArrays instead of images:
(1) Take the 512x512 output from the first model, chop off a portion to make it 360x512, and then pad the other dimension to make it 360x640. That's probably not what you want, but if it is, you'll have to write the code for it yourself.
(2) Resize the 512x512 output to 360x640 by hand. To do this you'll need to implement a suitable resizing method yourself (probably bilinear interpolation), or convert the data so you can use OpenCV or the vImage framework; see the sketch after this list.
(3) Let the model do the above. Add a ResizeBilinearLayer to the model, followed by a PermuteLayer or TransposeLayer to change the order of the dimensions. The image is then resized to 360x640 pixels, and the first model's output becomes 1x360x640x3. This is easiest if you add these operations to the original model and then let coremltools convert them to the appropriate Core ML layers.
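As an illustration of option (2), here is what the transformation looks like in NumPy/OpenCV terms. This is only a sketch: on-device you would implement the same steps with vImage or the CoreMLHelpers conversion utilities, and first_out stands for the first model's output already converted to an array.

```python
import numpy as np
import cv2

def adapt_output(first_out):
    """Turn the first model's (3, 512, 512) CHW output into the
    (1, 360, 640, 3) NHWC array the second model expects."""
    chw = np.asarray(first_out, dtype=np.float32)   # (3, 512, 512)
    hwc = np.transpose(chw, (1, 2, 0))              # (512, 512, 3)
    # cv2.resize takes (width, height); INTER_LINEAR is bilinear.
    resized = cv2.resize(hwc, (640, 360), interpolation=cv2.INTER_LINEAR)
    return resized[np.newaxis, ...]                 # (1, 360, 640, 3)
```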

Does scaling images up or down affect image information?

I'm working on a graduation project on image forgery detection using a CNN. Most of the papers I've read downscale the images before feeding the dataset to the network. I want to know: how does this process affect the image information?
Images are resized/rescaled to a specific size for a few reasons:
(1) It allows the user to set the input size of their network. When designing a CNN you need to know the shape (dimensions) of your data at each step, so having a static input size is an easy way to make sure your network gets data of the shape it was designed to take.
(2) Using a full resolution image as the input to the network is very inefficient (super slow to compute).
(3) In most cases, the features you want to extract/learn from an image are still present after downsampling. So in a way, resizing an image to a smaller size denoises it, filtering out many of the unimportant features for you.
Well, you change the image's size; of course that changes its information.
You cannot reduce image size without omitting information. Simple case: throw away every second pixel to scale the image to 50%.
Scaling up adds new pixels. In its simplest form you duplicate pixels, creating redundant information.
More complex methods create the new pixels (fewer or more) by averaging neighbouring pixels or interpolating between them.
Scaling up is reversible: it neither creates nor destroys information.
Scaling down divides the amount of information by the square of the downscaling factor*. Upscaling after downscaling results in a blurred image.
(*This is true in a first approximation. If the image doesn't have high frequencies, they are not lost, hence no loss of information.)
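A quick way to see this asymmetry is to downscale an image, upscale it back, and compare with the original (a sketch; photo.png is a placeholder):

```python
import numpy as np
import cv2

img = cv2.imread('photo.png').astype(np.float32)   # placeholder file
h, w = img.shape[:2]

small = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

# Nonzero error: the high-frequency detail removed by downscaling
# cannot be recovered by upscaling, hence the blur.
print('mean abs error:', np.abs(img - restored).mean())
```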

What happens if we resize images repeatedly?

I need an image dataset, so I downloaded images and resized them twice for training: first from their random original sizes to (200, 300), and then from those resized images to (64, 64). Is there any possibility that I'll face problems while training? Does a picture lose data when it is resized again and again?
Can you please explain in detail? Thanks in advance.
Images fundamentally lose data when downsampled. If a pixel is the fundamental piece of data in an image and you remove pixels, then you have removed data. Different downsampling methods lose different amounts of data. For instance, a bilinear or bicubic downsample uses multiple pixels of the larger image to generate each pixel of the smaller image, whereas nearest-neighbor downsampling uses a single pixel of the larger image for each pixel of the smaller one, thereby losing more data.
Whether the down sampling will affect your training depends on more information than you have provided.
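For intuition, you can compare the two-step resize against resizing directly to the final size (a sketch; photo.png is a placeholder, and note that cv2.resize takes (width, height)):

```python
import numpy as np
import cv2

img = cv2.imread('photo.png')                      # placeholder file

two_step = cv2.resize(cv2.resize(img, (300, 200)), (64, 64))
one_step = cv2.resize(img, (64, 64))

# The intermediate (300, 200) image keeps far more detail than (64, 64)
# can hold, so the extra step usually changes the final result very little.
diff = np.abs(two_step.astype(np.float32) - one_step.astype(np.float32))
print('mean abs difference:', diff.mean())
```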

keras - flow_from_directory function - target_size parameter

Keras has this function called flow_from_directory and one of the parameters is called target_size. Here is the explanation for it:
target_size: Tuple of integers (height, width), default: (256, 256).
The dimensions to which all images found will be resized.
What is unclear to me is whether it crops the original image to a 256x256 matrix (in which case we don't keep the entire image) or just reduces the resolution of the image (while still showing us the entire image)?
If it is - let's say - just reducing the resolution:
Assume that I have X-ray images of size 1024x1024 each (for breast cancer detection). If I want to apply transfer learning with a pretrained convolutional neural network that only takes 224x224 input images, won't I be losing important data/information when I reduce the size (and resolution) of the image from 1024x1024 down to 224x224? Isn't there such a risk?
Thank you in advance!
Reducing the resolution (resizing).
Yes, you are losing data.
The best way for you is to rebuild your CNN to work with your original image size, i.e. 1024x1024.
It is reducing the resolution of the image (while still showing us the entire image).
It is true that you are losing data, but you can work with an image size a bit larger than 224x224, such as 512x512, as that will keep most of the information and will train in comparatively less time and with fewer resources than the original 1024x1024 images.
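For example, a generator that resizes everything to 512x512 could look like this (a sketch; data/train is a placeholder directory, and target_size resizes the whole image rather than cropping it):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255)

# Every image found is resized (not cropped) to target_size.
train_gen = datagen.flow_from_directory(
    'data/train',                # placeholder directory
    target_size=(512, 512),      # (height, width)
    batch_size=16,
    class_mode='binary')
```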

What information is necessary to restore an image from a scaled down version?

I have an image and a version that is scaled down to exactly half the width and height. The Lanczos filter (with a = 3) has been used to scale the image. Color spaces can be ignored, all colors are in a linear space.
Since the small image contains one pixel for each 2x2 pixel block of the original, I'm thinking it should be possible to restore the original image from the small one with just 3 additional color values per 2x2 pixel block. However, I do not know how to calculate those 3 color values.
The original image has four times as much information as the scaled version. Using the original image I want to calculate the 3/4 of information that is missing in the scaled version such that I can use the scaled version and the calculated missing information to reconstruct the original image.
Consider the following use-case: Over a network you send the scaled image to a user as a thumbnail. Now the user wants to see the image at full size. How can we avoid repeating information that is already in the thumbnail? As far as I can tell progressive image compression algorithms do not manage to do this with more complex filtering.
For the box filter the problem is trivial, but since the kernels of the Lanczos filter overlap each other, I do not know how to solve it. Given that this is just a linear system of equations, I believe it is solvable. I would also rather avoid deconvolution in frequency space.
How can I calculate the information that is missing in the down-scaled version and use it to restore the original image?
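To make the linear-system view concrete, here is a 1-D toy version (a sketch under simplifying assumptions: my own kernel-placement convention, and row normalization so constant signals are preserved). Downscaling is y = A·x, and the missing information corresponds exactly to the null space of A:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=np.float64)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def downscale_matrix(n, a=3):
    """Matrix A mapping a length-n signal to length n//2 (2x Lanczos downscale)."""
    m = n // 2
    A = np.zeros((m, n))
    taps = np.arange(n)
    for i in range(m):
        center = 2 * i + 0.5                  # source position of output sample i
        A[i] = lanczos((taps - center) / 2.0, a)
    A /= A.sum(axis=1, keepdims=True)         # preserve constant signals
    return A

A = downscale_matrix(16)
# rank(A) == 8, so the null space has dimension 8: in 1-D exactly half the
# values are missing, matching the 3 missing values per 2x2 block in 2-D.
print(np.linalg.matrix_rank(A), A.shape)
```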
