Why is supersampling not widely used for image scaling? - image-processing

I was looking for an appropriate image scaling algorithm and wondered why supersampling is not as popular as bicubic, bilinear, or even Lanczos.
By supersampling I mean a method that divides the source image into equal rectangles, each rectangle corresponding to a pixel in the destination image. In my opinion, this is the most natural and accurate method. It takes into account all pixels of the source image, while bilinear might skip some pixels. As far as I can see, the quality is also very high, comparable with Lanczos.
Why do popular image libraries (such as GraphicsMagick, GD or PIL) not implement this algorithm? I found implementations only in the Intel IPP and AMD Framewave projects. I know of at least one disadvantage: it can only be used for downscaling, but am I missing something else?
For comparison, this is an image scaled down by 4.26x. From left to right: GraphicsMagick Sinc filter (910 ms), Framewave Super method (350 ms), GraphicsMagick Triangle filter (320 ms):

Now I know the answer: because a pixel is not a little square. That is why supersampled resizing gives an aliased result, which can be seen on the thin water jets in the sample image. This is not fatal, though: supersampling can be used to scale down by 2x, 3x and so on, dramatically reducing the picture size before resizing to the exact dimensions with another method. This technique is what jpeglib uses to open images at a smaller size.
Of course, we can still think of pixels as squares, and the GD library actually does: its imagecopyresampled is true supersampling.
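For illustration, here is a minimal Pillow sketch of that two-step idea (the file names and the 4.26x factor are placeholders; Image.reduce(), which needs a reasonably recent Pillow, plays the role of the box/supersampling pre-shrink):

```python
from PIL import Image

img = Image.open("input.jpg")
target = (round(img.width / 4.26), round(img.height / 4.26))

# Step 1: supersample (box/area averaging) down by an integer factor.
# Image.reduce(2) averages every 2x2 block of source pixels.
img = img.reduce(2)

# Step 2: finish at the exact target size with a higher-quality filter.
img = img.resize(target, Image.LANCZOS)
img.save("output.jpg")
```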

You are a bit mistaken when saying that bilinear rescaling misses pixels. Assuming you are rescaling the image by at most a factor of 2, bilinear interpolation takes into account all the pixels of the source image. If you smooth the image a bit and then use bilinear interpolation, you get high-quality results; for most practical cases even bicubic interpolation is not needed.
Since bilinear interpolation is extremely fast (it can easily be executed in fixed-point arithmetic), it is by far the best image rescaling algorithm when dealing with real-time processing.
If you intend to shrink the image by more than a factor of 2, then bilinear interpolation is mathematically wrong, and with larger factors even bicubic starts to make mistakes. That is why image processing software (like Photoshop) uses better, though much more CPU-demanding, algorithms.
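One common workaround for larger shrink factors, in the spirit of "smooth the image a bit" above, is to low-pass filter before the bilinear resize. A hedged OpenCV sketch, where the sigma heuristic is an assumption rather than a fixed rule:

```python
import cv2

def shrink(img, factor):
    # Blur roughly in proportion to the shrink factor so that frequencies
    # the smaller grid cannot represent are suppressed before sampling.
    sigma = 0.6 * factor
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    new_size = (img.shape[1] // factor, img.shape[0] // factor)
    return cv2.resize(blurred, new_size, interpolation=cv2.INTER_LINEAR)

small = shrink(cv2.imread("photo.jpg"), 4)  # "photo.jpg" is a placeholder
```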
The answer to your question comes down to speed considerations.
Given the speed of your CPU/GPU, the image size and the desired frame rate, you can easily compute how many operations you can afford per pixel. For example, with a 2 GHz CPU and a 1 Gpix image, you can only do a few calculations per pixel every second.
Given that budget of allowed calculations, you select the best algorithm that fits. So the decision is usually driven not by image quality but by speed.
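As a back-of-the-envelope check of the numbers above (purely illustrative):

```python
clock_hz = 2e9           # 2 GHz CPU
pixels = 1e9             # 1 Gpix image
frames_per_second = 1
cycles_per_pixel = clock_hz / (pixels * frames_per_second)
print(cycles_per_pixel)  # 2.0 -- only a couple of cycles per pixel per second
```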
Another point about supersampling: sometimes it works much better if you do it in the frequency domain. This is called frequency interpolation, but you would not want to compute an FFT just to rescale an image.
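For reference, a rough NumPy sketch of frequency interpolation (zero-padding the spectrum) for a 2x enlargement of a grayscale array; it is illustrative only, not something you would use for routine rescaling:

```python
import numpy as np

def fft_upscale_2x(img):
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    # Embed the centred spectrum in a larger array of zeros:
    # the new, higher frequencies carry no energy.
    padded = np.zeros((2 * h, 2 * w), dtype=complex)
    padded[h // 2:h // 2 + h, w // 2:w // 2 + w] = spec
    # Scale by 4 to compensate for the larger inverse-transform grid.
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded))) * 4
```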
Moreover, I don't know if you are familiar with back projection. It is a way to interpolate the image from destination to source instead of from source to destination. Using back projection you can enlarge the image by a factor of 10, use bilinear interpolation, and still be mathematically correct.
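A sketch of that destination-to-source (backward) mapping with bilinear interpolation, in plain NumPy for a grayscale array; the half-pixel coordinate convention used here is one common choice, not the only one:

```python
import numpy as np

def resize_backward(src, new_h, new_w):
    h, w = src.shape
    # For every destination pixel, find its position back in the source grid.
    ys = np.clip((np.arange(new_h) + 0.5) * h / new_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(new_w) + 0.5) * w / new_w - 0.5, 0, w - 1)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    # Gather the four neighbours and blend them bilinearly.
    a = src[y0][:, x0]
    b = src[y0][:, x0 + 1]
    c = src[y0 + 1][:, x0]
    d = src[y0 + 1][:, x0 + 1]
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)
```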

Computational burden and increased memory demand are most likely the answer you are looking for. That is why adaptive supersampling was introduced; it compromises between burden/memory demand and effectiveness.
I guess supersampling is still too heavy even for today's hardware.

Short answer: They are super-sampling. I think the problem is terminology.
In your example, you are scaling down. This means decimating, not interpolating. Decimation will produce aliasing if no super-sampling is used. I don't see aliasing in the images you posted.
A sinc filter involves super-sampling. It is especially good for decimation because it specifically cuts off frequencies above those that can be seen in the final image. Judging from the name, I suspect the triangle filter is also a form of super-sampling. The second image you show is blurry, but I see no aliasing, so my guess is that it too uses some form of super-sampling.
Personally, I have always been confused by Adobe Photoshop, which asks me if I want "bicubic" or "bilinear" when I am scaling. But Bilinear, Bicubic, and Lanczos are interpolation methods, not decimation methods.
I can also tell you that modern video games use super-sampling. Mipmapping is a commonly used shortcut for real-time decimation: individual images are pre-decimated by powers of two.
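As a toy illustration of that pre-decimation, a mip chain can be built by repeatedly averaging 2x2 blocks; a NumPy sketch for a grayscale array (the trimming of odd dimensions is my own simplification):

```python
import numpy as np

def mip_chain(img, levels):
    chain = [img.astype(float)]
    for _ in range(levels):
        h, w = chain[-1].shape
        # Trim to even dimensions, then average each 2x2 block: one mip level.
        blocks = chain[-1][:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        chain.append(blocks.mean(axis=(1, 3)))
    return chain
```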

Related

Is it possible to take a low-resolution image from a street camera, enlarge it and see image details?

I would like to know if it is possible to take a low-resolution image from a street camera, enlarge it, and see image details (for example a face, or a car's plate number). Is there any software that is able to do this?
Thank you.
example of image: http://imgur.com/9Jv7Wid
Possible? Yes. In existence? Not to my knowledge.
What you are referring to is called super-resolution. The way it works, in theory, is that you take multiple low-resolution images and combine them to create a high-resolution image.
The way this works is that you essentially map each image onto all the others to form a stack, where the target portion of the image is the same throughout. This gets extremely complicated extremely fast, as any distortion (e.g. movement of the target) will cause the images to differ dramatically at the pixel level.
But let's say you have the images stacked and have removed the non-relevant pixels from the stack. You are hopefully left with a movie/stack of images that all show exactly the same scene, but with sub-pixel distortions. A sub-pixel distortion simply means that the target has moved somewhere inside the pixel, or has moved partially into a neighboring pixel.
You can't measure whether the target has moved within a pixel, but you can detect whether it has moved partially into a neighboring pixel. You can do this by knowing that the target is going to give off X amount of photons, so if you see 1/4 of the photons in one pixel and 3/4 of the photons in the neighboring pixel, you know its approximate location: 3/4 in one pixel and 1/4 in the other. You then construct an image at the resolution of these sub-pixels and place them in their proper locations.
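To make the idea concrete, here is a heavily simplified "shift and add" sketch in NumPy. Real super-resolution also needs registration, deblurring and regularization; the known shifts and the 2x factor here are assumptions for illustration only.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """frames: list of equally sized low-res arrays;
    shifts: their known (dy, dx) sub-pixel offsets, in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each low-res sample at its sub-pixel position on the fine grid.
        ys = (np.arange(h) * factor + int(round(dy * factor))) % (h * factor)
        xs = (np.arange(w) * factor + int(round(dx * factor))) % (w * factor)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1
    return acc / np.maximum(weight, 1)  # average where samples landed
```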
All of this gets very computationally intensive, and sometimes the images are just too low-resolution and have too much distortion from image to image to even create a meaningful stack. I did read a paper about a university lab being able to create high-resolution images from low-resolution ones, but it was a very tightly controlled experiment: they moved the target precisely X amount from image to image and used a very precise camera (probably scientific grade, which is far more sensitive than any commercial-grade security camera).
In essence, to do this reliably in the real world you need to set up the cameras in a very precise way, and they need to be accurate in a particular way, which is going to be expensive; you are better off just putting in a better camera than relying on this very imprecise technique.
Actually, it is possible to do super-resolution (SR) from even a single low-resolution (LR) image! So you don't have to go through the hassle of taking many LR images with sub-pixel shifts. The intuition behind such techniques is that natural scenes are full of repetitive patterns that can be used to enhance the frequency content of similar patches (e.g. you can use dictionary learning in your SR reconstruction to generate the high-resolution version). Sure, the enhancement may not be as good as using many LR images, but such a technique is simpler and more practical.
Photoshop would be your best bet. But know that you cannot reliably increase the size of an image without making the quality even worse.

EMGU OpenCV disparity only on certain pixels

I'm using the EMGU OpenCV wrapper for C#. I've got a disparity map being created nicely. However, for my specific application I only need the disparity values of very few pixels, and I need them in real time. The calculation is taking about 100 ms now; I imagine that by getting disparity for hundreds of pixel values rather than thousands, things would speed up considerably. I don't know much about what's going on "under the hood" of the stereo solver code; is there a way to speed things up by only calculating the disparity for the pixels that I need?
First of all, you fail to mention what you are really trying to accomplish, and moreover, what algorithm you are using. For example, StereoGC is really slow (i.e. not real-time) but usually far more accurate than both StereoSGBM and StereoBM. Those last two can be used in real time, provided a few conditions are met:
The size of the input images is reasonably small;
You are not using an extravagant set of parameters (for instance, a larger value for numberOfDisparities will increase computation time).
Don't expect miracles when it comes to accuracy though.
Apart from that, there is the issue of "just a few pixels". As far as I understand, the algorithms implemented in OpenCV usually rely on information from more than one pixel to determine the disparity value. E.g. they need a neighborhood to detect which pixel from image A maps to which pixel in image B. As a result, in general it is not possible to just discard every other pixel of the image (by the way, if you already know the locations in both images, you would not need the stereo methods at all). So unless you can discard a large border of your input images, for which you know you will never find your pixels of interest there, I'd say the answer to this part of your question is "no".
If you happen to know that your pixels of interest will always be within a certain rectangle of the input images, you can set the input images' ROIs (regions of interest) to this rectangle. Assuming OpenCV does not contain a bug here, this should speed up the computation a little.
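For what it's worth, here is what the ROI idea looks like in plain OpenCV/Python (Emgu CV exposes the same StereoSGBM parameters); the file names, rectangle and parameter values are placeholders:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

x, y, w, h = 200, 100, 320, 240   # rectangle known to contain the pixels of interest
num_disp = 64
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp, blockSize=9)

# Crop both rectified views to the same rows, keeping a horizontal margin of
# num_disp so the matcher can still search its full disparity range.
x0 = max(x - num_disp, 0)
left_roi = left[y:y + h, x0:x + w]
right_roi = right[y:y + h, x0:x + w]

# StereoSGBM returns 16x fixed-point disparities.
disparity = stereo.compute(left_roi, right_roi).astype(float) / 16.0
```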
With a bit of googling you can find real-time examples of stereo correspondence with EmguCV (or plain OpenCV) using the GPU on YouTube. Maybe that could help you.
Disclaimer: this may have been a more complete answer if your question contained more detail.

How to manage large 2D FFTs in CUDA

I have successfully written some CUDA FFT code that does a 2D convolution of an image, as well as some other calculations.
How do I go about figuring out what the largest FFTs I can run are? It seems that a plan for a 2D R2C convolution takes 2x the image size, and another 2x the image size for the C2R. This seems like a lot of overhead!
Also, it seems like most of the benchmarks and such are for relatively small FFTs. Why is this? It seems like for large images I am going to quickly run out of memory. How is this typically handled? Can you perform an FFT convolution on tiles of an image and combine the results, and expect it to be the same as if I had run a 2D FFT on the entire image?
Thanks for answering these questions
CUFFT plans a different algorithm depending on your image size. If the data doesn't fit in shared memory and the size is not a power of 2, CUFFT plans an out-of-place transform, while smaller images of the right size are more amenable to the library.
If you're set on FFTing the whole image and need to see what your GPU can handle, my best answer would be to guess and check with different image sizes, as the CUFFT planning is complicated.
See the documentation: http://developer.download.nvidia.com/compute/cuda/1_1/CUFFT_Library_1.1.pdf
I agree with Mark and say that tiling the image is the way to go for convolution. Since convolution amounts to just computing many independent integrals you can simply decompose the domain into its constituent parts, compute those independently, and stitch them back together. The FFT convolution trick simply reduces the complexity of the integrals you need to compute.
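On the CPU you can sanity-check that claim with SciPy: overlap-add convolution processes the image in blocks and stitches the results, matching a single large FFT convolution up to floating-point error. The array sizes below are arbitrary, and the same decomposition carries over to a CUFFT implementation.

```python
import numpy as np
from scipy.signal import fftconvolve, oaconvolve

image = np.random.rand(1024, 1024)
kernel = np.random.rand(31, 31)

full = fftconvolve(image, kernel, mode="same")   # one large FFT convolution
tiled = oaconvolve(image, kernel, mode="same")   # block-wise overlap-add
print(np.allclose(full, tiled))                  # True (up to rounding)
```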
I expect that your GPU code should outperform MATLAB by a large factor in all situations unless you do something weird.
It's not usually practical to run an FFT on an entire image. Not only does it take a lot of memory, but for good performance the width and height generally want to be powers of 2, which places an unreasonable constraint on your input.
Cutting the image into tiles is perfectly reasonable. The size of the tiles will determine the frequency resolution you're able to achieve. You may want to overlap the tiles as well.

Fast, reliable focus score for camera frames

I'm doing real-time frame-by-frame analysis of a video stream in iOS.
I need to assign a score to each frame for how in focus it is. The method must be very fast to calculate on a mobile device and should be fairly reliable.
I've tried simple things like summing after using an edge detector, but haven't been impressed by the results. I've also tried using the focus scores provided in the frame's metadata dictionary, but they're significantly affected by the brightness of the image, and much more device-specific.
What are good ways to calculate a fast, reliable focus score?
Poor focus means that edges are not very sharp, and small details are lost. High JPEG compression gives very similar distortions.
Compress a copy of your image heavily, decompress it, and calculate the difference from the original. A large difference, even at a few spots, should mean that the source image had sharp details that were lost in compression. If the difference is relatively small everywhere, the source was already fuzzy.
The method can easily be tried in an image editor. (No, I have not tried it yet.) Hopefully the iPhone already has an optimized JPEG compressor.
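A quick sketch of the recipe with Pillow and NumPy; the quality setting and the mean-absolute-difference score are arbitrary choices, and on iOS you would use the platform's JPEG codec instead:

```python
import io
import numpy as np
from PIL import Image

def focus_score(img):
    gray = img.convert("L")
    buf = io.BytesIO()
    gray.save(buf, format="JPEG", quality=10)   # crush a copy in memory
    buf.seek(0)
    crushed = Image.open(buf)
    diff = np.abs(np.asarray(gray, dtype=float) - np.asarray(crushed, dtype=float))
    return diff.mean()   # larger means more detail was destroyed, i.e. a sharper source

print(focus_score(Image.open("frame.png")))  # "frame.png" is a placeholder
```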
A simple approach, which the human visual system probably uses, is to implement focusing on top of edge tracking. If a set of edges can be tracked across a visual sequence, one can work with the intensity profile of those edges alone to determine when it is steepest.
From a theoretical point of view, blur manifests as a loss of high-frequency content. Thus, you can just do an FFT and check the relative frequency distribution. The iPhone uses ARM Cortex chips, which have NEON instructions that can be used for an efficient FFT implementation.
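A minimal NumPy version of that measure, taking the fraction of spectral energy outside a low-frequency disc (the radius threshold is an arbitrary choice):

```python
import numpy as np

def high_freq_ratio(gray, radius_frac=0.1):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    low = r2 <= (radius_frac * min(h, w)) ** 2
    return spec[~low].sum() / spec.sum()   # closer to 1 means more fine detail
```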
#9000's suggestion of heavy JPEG compression, which has the effect of keeping only a very small number of the largest coefficients, will usually result in what is in essence a low-pass filter.
Consider different kinds of edges, e.g. peaks versus step edges. The latter will still be present regardless of focus. To isolate the former, use non-maximum suppression in the direction of the gradient. As a focus score, use the ratio of suppressed edges at two different resolutions.

Increase image size without messing up clarity

Are there libraries, scripts, or any techniques to increase image size in height and width, or do you need a super good resolution image to begin with?
Bicubic interpolation is pretty much the best you're going to get when it comes to increasing image size while maintaining as much of the original detail as possible. It's not yet possible to work the actual magic that your question would require.
The Wikipedia link above is a pretty solid reference, but there was a question asked about how it works here on Stack Overflow: How does bicubic interpolation work?
This is the highest-quality resampling algorithm that Photoshop (and other graphics software) offers. Generally, it's recommended that you use bicubic smoothing when you're increasing image size, and bicubic sharpening when you're reducing image size. Sharpening can produce an over-sharpened image when you are enlarging, so you need to be careful.
As far as libraries or scripts go, it's difficult to recommend anything without knowing what language you intend to do this in. But I can guarantee that there's an image processing library with this algorithm already available for any of the popular languages; I wouldn't advise reimplementing it yourself.
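For example, in Python with Pillow a bicubic enlargement is a single call (the 2x factor and file names are placeholders):

```python
from PIL import Image

img = Image.open("small.png")
big = img.resize((img.width * 2, img.height * 2), Image.BICUBIC)
big.save("big.png")
```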
Increasing height & width of an image means one of two things:
i) You are increasing the physical size of the image (i.e. cm or inches), without touching its content.
ii) You are trying to increase the image pixel content (i.e. its resolution).
So:
(i) has to do with rendering. As the image's physical size goes up, you are drawing larger pixels (the DPI goes down). That is fine if you want to look at the image from far away (say, on a really large screen). If you look at it from up close, you are going to see mostly large dots.
(ii) is just plainly impossible. Say your image is 100x100 pixels and you want to make it 200x200. You start from 10,000 pixels and end up with 40,000... what are you going to put in the 30,000 new pixels? Whatever your answer, you end up with 30,000 invented pixels, and the resulting image is going to be either fuzzier or faker, and usually both. All the techniques that increase an image's size use some sort of average among neighboring pixel values, which amounts to "fuzzier".
Cheers.
