How to interpolate a very large image (10+ gigabytes)? - image-processing

I have a monochrome image whose size is greater than 10 gigabytes (50000x50000). This image has a lot of "holes": pixels with a NULL value.
Normally I know how to use Python's griddata function to read in the whole image and fill the NULL pixels with different interpolation methods. The problem is that I can't process the whole image at once because of its size; doing so gives me an out-of-memory error.
So my idea is to divide the image into 2500 windows (a 50x50 grid) and run the interpolation on each window. The obvious problem is that, within each window, a NULL pixel is interpolated only from neighboring pixels in the same window, which goes against the nature of an image: a pixel on the edge of a window cannot be interpolated from pixels in the neighboring windows. Overlapping the windows may be a way around this, but it is the only solution I can think of (see the sketch below). Does anyone know an efficient and seamless method to interpolate a very large image?
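A minimal sketch of that overlapping-window idea, assuming the image is accessible as a 2D NumPy array (for example through np.memmap) and that NULL pixels are stored as NaN; the tile and overlap sizes are arbitrary placeholders:

import numpy as np
from scipy.interpolate import griddata

def fill_tile(tile):
    # fill NaN pixels from the valid pixels inside this (padded) tile;
    # with method="linear", pixels outside the convex hull of valid points stay NaN
    yy, xx = np.mgrid[0:tile.shape[0], 0:tile.shape[1]]
    hole = np.isnan(tile)
    if not hole.any():
        return tile
    filled = tile.copy()
    filled[hole] = griddata((yy[~hole], xx[~hole]), tile[~hole],
                            (yy[hole], xx[hole]), method="linear")
    return filled

def fill_image(img, tile=1000, pad=64):
    # process overlapping windows but keep only the central region of each,
    # so edge pixels still see `pad` pixels of context from neighbouring tiles
    out = img.copy()
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            y0, x0 = max(y - pad, 0), max(x - pad, 0)
            y1 = min(y + tile + pad, img.shape[0])
            x1 = min(x + tile + pad, img.shape[1])
            win = fill_tile(img[y0:y1, x0:x1])
            out[y:y + tile, x:x + tile] = win[y - y0:y - y0 + tile,
                                              x - x0:x - x0 + tile]
    return out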

You'd need to give a lot more information I think, but if the holes are small, one simple solution is just to replace null pixels with values from a median filter.
For example, using pyvips:
#!/usr/bin/python3
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
image = (image == 0).ifthenelse(image.median(5), image)
image.write_to_file(sys.argv[2])
That will open the image in sequential mode (we only need to make one pass over the image, so we don't need random access to pixels). A 5x5 median filter can fill holes up to perhaps three pixels across. You can use a larger window to fill larger holes, but of course it'll get slower.
It should be pretty quick, and it'll work on images of any size using only a little memory.
You'd need to consider something more complex if you need to fill large areas.

Related

What information is necessary to restore an image from a scaled down version?

I have an image and a version that is scaled down to exactly half the width and height. The Lanczos filter (with a = 3) has been used to scale the image. Color spaces can be ignored, all colors are in a linear space.
Since the small image contains one pixel for each 2x2 pixel block of the original I'm thinking it should be possible to restore the original image from the small one with just 3 additional color values per 2x2 pixel block. However, I do not know how to calculate those 3 color values.
The original image has four times as much information as the scaled version. Using the original image I want to calculate the 3/4 of information that is missing in the scaled version such that I can use the scaled version and the calculated missing information to reconstruct the original image.
Consider the following use-case: Over a network you send the scaled image to a user as a thumbnail. Now the user wants to see the image at full size. How can we avoid repeating information that is already in the thumbnail? As far as I can tell progressive image compression algorithms do not manage to do this with more complex filtering.
For the box filter the problem is trivial. But since the kernels of the Lanczos filter overlap each other I do not know how to solve it. Given that this is just a linear system of equations I believe it is solvable. Additionally I would rather avoid deconvolution in frequency space.
How can I calculate the information that is missing in the down-scaled version and use it to restore the original image?
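To make the box-filter case above concrete, here is a tiny numerical sketch of why it is trivial: with a box filter each thumbnail pixel is the mean of its 2x2 block, so transmitting any three of the four original values lets you recover the fourth from the mean.

import numpy as np

block = np.array([10.0, 14.0, 6.0, 2.0])           # one 2x2 block, flattened
thumb = block.mean()                                # box-filtered thumbnail pixel = 8.0
extra = block[:3]                                   # the 3 extra values sent alongside the thumbnail
restored = np.append(extra, 4 * thumb - extra.sum())
print(np.allclose(restored, block))                 # True

For Lanczos the analogous system couples neighbouring blocks, because each thumbnail pixel is a weighted sum over roughly a 12x12 input neighbourhood (a = 3 at a factor-of-2 downscale), so the missing values would have to be solved for jointly as one large banded linear system rather than block by block.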

How can I write a histogram-like kernel filter for CoreImage?

In the docs for Kernel Routine Rules, it says 'A kernel routine computes an output pixel by using an inverse mapping back to the corresponding pixels of the input images. Although you can express most pixel computations this way—some more naturally than others—there are some image processing operations for which this is difficult, if not impossible. For example, computing a histogram is difficult to describe as an inverse mapping to the source image.'
However, Apple is obviously doing it somehow, because they do have a CIAreaHistogram Core Image filter that does just that.
I can see one theoretical way to do it with the given limitations:
Lets say you wanted a 256 element red-channel histogram...
You have a 256x1 pixel output image. The kernel function gets called for each of those 256 pixels. The kernel function would have to read EVERY PIXEL IN THE ENTIRE IMAGE each time it's called, checking whether that pixel's red value matches that bucket and incrementing a counter. When it has processed every pixel in the entire image for that output pixel, it divides by the total number of pixels and sets the output pixel to that calculated value. The problem is that, assuming it actually works, this is horribly inefficient, since every input pixel is accessed 256 times, although every output pixel is written only once.
What would be optimal would be a way for the kernel to iterate over every INPUT pixel and let us update any of the output pixels based on that value. Then each input pixel would be read only once, and the output pixels would be read and written a total of (input width) x (input height) times altogether.
Does anyone know of any way to get this kind of filter working? Obviously there's a filter available from Apple for doing a histogram, but I need a more limited form of histogram. (For example, a blue histogram limited to samples that have a red value in a given range.)
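To make the gather-versus-scatter distinction concrete, here is a rough NumPy sketch (NumPy rather than a Core Image kernel, purely as an illustration); the red-range restriction matches the limited histogram described above:

import numpy as np

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # stand-in RGB image
red, blue = img[..., 0].ravel(), img[..., 2].ravel()

# "Gather" formulation (what a kernel routine forces on you): each of the 256
# output values scans the whole input, so the input is read 256 times.
gather_hist = np.array([(red == bucket).sum() for bucket in range(256)])

# "Scatter" formulation: one pass over the input, bumping whichever bin each
# pixel falls into. The conditional variant is then easy: a blue histogram
# restricted to pixels whose red value lies in a given range.
mask = (red >= 100) & (red <= 200)
scatter_hist = np.bincount(blue[mask], minlength=256)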
The issue with this is that custom kernel code in Core Image works like a function that remaps the image pixel by pixel. You don't actually have much information to go on except the pixel you are currently computing. A custom Core Image filter works roughly like this:
for i in 1 ... image.width
    for j in 1 ... image.height
        New_Image[i][j] = CustomKernel(Current_Image[i][j])
    end
end
So it's not really feasible to build your own histogram via custom kernels, because you have no control over the new image other than through that CustomKernel function. This is actually one of the reasons CIImageProcessor was introduced in iOS 10; you would probably have an easier time making a histogram with it (and producing other interesting effects via image processing), and I suggest checking out the WWDC 2016 video on it (the "Raw images and live images" session).
IIRC, if you really want to make a histogram, it is still possible, but you will have to work with the UIImage version and then convert the resulting image to an RGB image, over which you can do the counting and store the values in bins. I would recommend Simon Gladman's book on this, as he has a chapter devoted to histograms, but there is a lot more that goes into the Core Image default version, because they have much more control over the image than we do using the framework.

warpPerspective and warpAffine is not working for large images, height>32k

I am trying to warp 16-bit satellite images. I have panchromatic images. My reference image is 8192x81920 pixels, and the red channel image is 4096x40960 pixels. When I use warpAffine or warpPerspective, pixels with row values greater than 32767 are not warped correctly. Is this a simple bug? Can I correct it by changing variable types?
I have checked warpPerspectiveInvoker function but could not see an easy fix.
Currently I divide the image into 32k-sized tiles and warp each tile individually.
At the moment the results seem reasonable for my data.
This does appear to be an open bug in OpenCV. cv::warpPerspective() uses short internally to generate the distortion maps. 32767 is the maximum representable value in a short, so any values larger than this will cause problems.
You could try hacking the warpPerspectiveInvoker and replace short instances with something larger, like int, but I cannot guarantee this would work. It might be worth a try, though.
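As a workaround along the lines of the asker's tiling approach, here is a rough Python sketch (an illustration only, not a tested fix); it assumes H maps source coordinates to destination coordinates, and it crops the matching source region for each destination tile so that neither coordinate space gets near the 16-bit limit:

import cv2
import numpy as np

def warp_tiled(src, H, out_size, tile=16000, margin=8):
    # sketch only: no handling of empty source crops or non-invertible H
    out_w, out_h = out_size
    Hinv = np.linalg.inv(H)
    dst = np.zeros((out_h, out_w), dtype=src.dtype)
    for y0 in range(0, out_h, tile):
        for x0 in range(0, out_w, tile):
            w = min(tile, out_w - x0)
            h = min(tile, out_h - y0)
            # find the source region that feeds this destination tile
            corners = np.array([[x0, y0], [x0 + w, y0],
                                [x0, y0 + h], [x0 + w, y0 + h]], dtype=np.float64)
            src_pts = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), Hinv).reshape(-1, 2)
            sx0 = int(max(np.floor(src_pts[:, 0].min()) - margin, 0))
            sy0 = int(max(np.floor(src_pts[:, 1].min()) - margin, 0))
            sx1 = int(min(np.ceil(src_pts[:, 0].max()) + margin, src.shape[1]))
            sy1 = int(min(np.ceil(src_pts[:, 1].max()) + margin, src.shape[0]))
            # shift both coordinate systems so the tile and its source crop start at (0, 0)
            Td = np.array([[1, 0, -x0], [0, 1, -y0], [0, 0, 1]], dtype=np.float64)
            Ts = np.array([[1, 0, sx0], [0, 1, sy0], [0, 0, 1]], dtype=np.float64)
            dst[y0:y0 + h, x0:x0 + w] = cv2.warpPerspective(
                src[sy0:sy1, sx0:sx1], Td @ H @ Ts, (w, h))
    return dst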

Increase image size, without messing up clarity

Are there libraries, scripts or other techniques to increase an image's height and width?
Or do you need a very high-resolution image to begin with?
Bicubic interpolation is pretty much the best you're going to get when it comes to increasing image size while maintaining as much of the original detail as possible. It's not yet possible to work the actual magic that your question would require.
The Wikipedia link above is a pretty solid reference, but there was a question asked about how it works here on Stack Overflow: How does bicubic interpolation work?
This is the highest quality resampling algorithm that Photoshop (and other graphic software) offers. Generally, it's recommended that you use bicubic smoothing when you're increasing image size, and bicubic sharpening when you're reducing image size. Sharpening can produce an over-sharpened image when you are enlarging an image, so you need to be careful.
As far as libraries or scripts, it's difficult to recommend anything without knowing what language you're intending to do this in. But I can guarantee that there's an image processing library including this algorithm already around for any of the popular languages—I wouldn't advise reimplementing it yourself.
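As a quick illustration in Python (file names are placeholders; Pillow is just one of many libraries that implement bicubic resampling):

from PIL import Image

img = Image.open("photo.jpg")                        # hypothetical input file
big = img.resize((img.width * 2, img.height * 2),    # 2x enlargement
                 resample=Image.BICUBIC)
big.save("photo_2x.jpg")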
Increasing height & width of an image means one of two things:
i) You are increasing the physical size of the image (i.e. cm or inches), without touching its content.
ii) You are trying to increase the image pixel content (ie its resolution)
So:
(i) has to do with rendering. As the image's physical size goes up, you are drawing larger pixels (the DPI goes down). Good if you want to look at the image from far away (say, on a really large screen). If you look at it from up close, you are going to see mostly large dots.
(ii) is just plainly impossible. Say your image is 100x100 pixels and you want to make it 200x200. This means you start with 10,000 pixels and end up with 40,000... what are you going to put in the 30,000 new pixels? Whatever your answer, you end up with 30,000 invented pixels, and the resulting image is going to be fuzzier, or faker, and usually both. All the techniques that increase an image's size use some sort of average among neighboring pixel values, which amounts to "fuzzier".
Cheers.

Image Comparison

What is an efficient way to compare two images in Visual C?
Also, in which format do the images have to be stored (bmp, gif, jpeg, ...)?
Please provide some suggestions.
If the images you are trying to compare have distinctive characteristics that you are trying to differentiate, then PCA is an excellent way to go. The question of file format is really irrelevant; you need to load the image into the program as an array of numbers and do the analysis.
Your question opens a can of worms in terms of complexity.
If you want to compare two images to check whether they are the same, then you can compute an MD5 hash of each file (after removing any metadata that could distort the result).
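For that exact-file case, a minimal sketch in Python (file names are placeholders; the metadata-stripping caveat above is not handled here):

import hashlib

def file_md5(path):
    # byte-for-byte hash of the file contents
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

same_file = file_md5("a.jpg") == file_md5("b.jpg")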
If you want to compare whether they look the same, then it's a completely different story. "Look the same" is meant in a very loose sense (e.g. they are exactly the same image but stored in two different file formats). For this, you need advanced algorithms which give you a probability that two images are the same. Not being an expert in the field, I would try the following algorithm, invented off the top of my head:
take an arbitrary set of pixel points from the image.
for each pixel, "grow" a polygon out of the surrounding pixels that are near in color (according to HSV colorspace)
do the same for the other image
for each polygon of one image, check the geometric similarity with all the polygons in the other image and pick the highest value. Divide this value by the area of the polygon (to normalize).
create a vector out of the highest values obtained
the higher the norm of this vector, the higher the chance that the two images are the same.
This algorithm should be insensitive to color drift and image rotation. Maybe also scaling (you normalize against the area). But I restate: not an expert, there's probably much better, and it could make kittens cry.
I did something similar to detect movement from a MJPEG stream and record images only when movement occurs.
For each decoded image, I compared it to the previous one using the following method:
Resize the image to effectively thumbnail size (I resized fairly hi-res images down by a factor of ten)
Compare the brightness of each pixel to the previous image and flag if it is much lighter or darker (threshold value 1)
Once you've done that for each pixel, you can use the count of different pixels to determine whether the image is the same or different (threshold value 2)
Then it was just a matter of tuning the two threshold values.
I did the comparisons using System.Drawing.Bitmap, but as my source images were JPEGs, there was some artifacting.
It's a nice simple way to compare images for differences if you're going to roll it yourself.
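A rough sketch of that two-threshold scheme in Python, using Pillow/NumPy instead of System.Drawing (the threshold and scale values here are arbitrary starting points, not the ones used in the answer):

import numpy as np
from PIL import Image

def changed(prev_path, curr_path, pixel_thresh=30, count_thresh=50, scale=10):
    def load(path):
        img = Image.open(path).convert("L")                      # grayscale
        small = img.resize((img.width // scale, img.height // scale))
        return np.asarray(small, dtype=np.int16)
    prev, curr = load(prev_path), load(curr_path)
    diff = np.abs(curr - prev)                                   # per-pixel brightness change
    return (diff > pixel_thresh).sum() > count_thresh            # threshold 1, then threshold 2

# e.g. record the frame only when changed("frame_0001.jpg", "frame_0002.jpg") is True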
If you want to determine if 2 images are the same perceptually, I believe the best way to do it is using an Image Hashing algorithm. You'd compute the hash of both images and you'd be able to use the hashes to get a confidence rating of how much they match.
One that I've had some success with is pHash, though I don't know how easy it would be to use with Visual C. Searching for "Geometric Hashing" or "Image Hashing" might be helpful.
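The question is about Visual C, but as a rough illustration of the idea in Python, the third-party imagehash package (an assumption here, used purely for illustration) exposes pHash-style perceptual hashes:

import imagehash                      # third-party package, assumed available
from PIL import Image

h1 = imagehash.phash(Image.open("a.jpg"))
h2 = imagehash.phash(Image.open("b.jpg"))
distance = h1 - h2                    # Hamming distance between the two hashes
similar = distance <= 10              # arbitrary similarity threshold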
Testing for strict identity is simple: Just compare every pixel in source image A to the corresponding pixel value in image B. If all pixels are identical, the images are identical.
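In Python, that strict test is just a shape and element-wise comparison (file names are placeholders):

import numpy as np
from PIL import Image

a = np.asarray(Image.open("a.png"))
b = np.asarray(Image.open("b.png"))
identical = a.shape == b.shape and np.array_equal(a, b)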
But I guess you don't want this kind of strict identity. You probably want images to be considered "identical" even if certain transformations have been applied to image B. Examples of such transformations might be:
changing image brightness globally (for every pixel)
changing image brightness locally (for every pixel in a certain area)
changing image saturation globally or locally
gamma correction
applying some kind of filter to the image (e.g. blurring, sharpening)
changing the size of the image
rotation
Printing an image and scanning it again, for example, would probably involve all of the above.
In a nutshell, you have to decide which transformations you want to treat as "identical" and then find image measures that are invariant to those transformations. (Alternatively, you could try to revert the transformations, but that's not possible if the transformation removes information from the image, such as blurring or clipping.)
