Human eye image compression - image-processing

I want to find a computationally efficient way to compress a picture the way the human eye works. Given a normal high-resolution picture, we keep the original high resolution at the focus center. This could simply be the center of the original picture, and the greater the L2 distance from the center, the lower the pixel resolution used. So we have maximal resolution at the center and minimal at the borders. We are not just blurring the image toward the border; we are skipping pixels, maybe using some standard interpolation, so the image will be significantly smaller than the original. I can imagine how to develop such an algorithm using a pixel-by-pixel for loop, but how do I do it in a computationally efficient way? Is there some ready-to-use approach that I can use? Any ideas are welcome.
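One possible vectorized route (not something the question specifies, just a sketch): precompute a radially varying sampling grid once and apply it with cv2.remap, so the per-pixel work happens inside OpenCV rather than in a Python loop. OpenCV's log-polar transform (cv2.warpPolar with the WARP_POLAR_LOG flag) is a related ready-made option. The function name, gamma, and output scale below are illustrative choices, not established parameters.

```python
import cv2
import numpy as np

def foveate(img, out_scale=0.5, gamma=2.0):
    """Radially varying downsampling: full detail at the image centre,
    progressively coarser sampling toward the borders.

    out_scale : output size relative to the input (0.5 -> half width/height)
    gamma     : >1; larger values concentrate more detail at the centre
    """
    h, w = img.shape[:2]
    out_h, out_w = int(h * out_scale), int(w * out_scale)

    # Output pixel grid, centred
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(np.float32)
    cx_out, cy_out = (out_w - 1) / 2.0, (out_h - 1) / 2.0
    dx, dy = xs - cx_out, ys - cy_out

    # Normalised radius in the output image (0 at centre, 1 at the corners)
    r = np.sqrt(dx**2 + dy**2)
    r_max = np.sqrt(cx_out**2 + cy_out**2)
    u = r / r_max

    # Radial mapping: input_radius = r_max_in * u**gamma.
    # Near the centre the derivative is small (dense sampling = high detail),
    # near the border it is large (sparse sampling = low detail).
    cx_in, cy_in = (w - 1) / 2.0, (h - 1) / 2.0
    r_max_in = np.sqrt(cx_in**2 + cy_in**2)
    scale = np.where(r > 0, (r_max_in * u**gamma) / np.maximum(r, 1e-6), 1.0)

    map_x = (cx_in + dx * scale).astype(np.float32)
    map_y = (cy_in + dy * scale).astype(np.float32)

    # Single vectorised resampling pass, no per-pixel Python loop
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

small = foveate(cv2.imread("input.jpg"), out_scale=0.5, gamma=2.0)
cv2.imwrite("foveated.jpg", small)
```

With gamma = 1 this degenerates to uniform downsampling; larger gamma trades border detail for center detail at the same output size.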

Related

How does the checkerboard size affect an accurate camera calibration?

Right now, I am calibrating a monocular camera so afterwards I can calculate distance of planar objects in the image (Z=0).
However, I'd like to know how important it is to know the structure size. The board's square sizes change a lot in the image depending on how far away you are, so I am not sure whether it is a robust parameter, since you would always have a relative scale?
Moreover, for my camera, which will be mounted on a ceiling (around 10 meters high), how could I estimate suitable checkerboard and square sizes for accurate calibration?
Basic rule of thumb is that the checkerboard should be in focus, approximately fill the field of view in some of the images, and be significantly slanted in some other images. It helps if you have a finite volume of space you work with, and you span most of it with measurements. See this other answer for more.
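For reference, a minimal OpenCV calibration sketch; the pattern size, square size, and image folder are placeholders. Note that the physical square size only scales the 3D object points, so it sets the metric scale of the extrinsics (and hence of distances on your Z=0 plane), while the pixel-unit intrinsics are unaffected by it.

```python
import glob
import cv2
import numpy as np

# Inner-corner count and physical square size of the board (placeholders)
PATTERN = (9, 6)          # inner corners per row, per column
SQUARE_SIZE = 0.025       # metres; only sets the metric scale of the extrinsics

# 3D board points on the Z=0 plane, scaled by the real square size
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):   # hypothetical folder of calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```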

Does edge detection depend on image features?

Hi, I am trying to figure out whether edge detection depends on image conditions (features).
I know there is a huge mathematical basis for any edge detection operator. I also know that edge detection is sensitive to noise in a picture.
What about brightness and contrast? The point is, I am looking for a way to estimate the quality of an image. Is image quality important for edge detection?
Edges are detected where there is a change in pixel value in either the x or y direction of an image. The maths behind this is simple differentiation. Any variation or noise that changes the pixel values can reduce the chances of detecting an edge, but morphological operations can help.
For example, blur is one operation that reduces image quality by changing the pixel values. Figure 1 shows an image and its edges. As already mentioned, edges are detected where the pixel value changes in one direction; you can see the white lines as edges corresponding to these changes in pixel value.
Figure 2 is a blurred version of the input image, and far fewer edges are detected in it than in the original.
That is just one example: noise introduced while capturing the image, the illumination of the object, or a dark object can give different edges. Depending on how the noise affects the image, the number of detected edges can increase or decrease.
There are some basic methods of detecting edges; I have used Canny edge detection. You can refer to a review of classic edge detectors to understand them further.
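To see the blur effect described above for yourself, a quick Canny experiment along these lines (the file name and thresholds are placeholder choices):

```python
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Edges of the original image (thresholds are typical starting values)
edges = cv2.Canny(img, 100, 200)

# Blur degrades the gradients, so Canny finds noticeably fewer edges
blurred = cv2.GaussianBlur(img, (15, 15), 0)
edges_blurred = cv2.Canny(blurred, 100, 200)

print("edge pixels, original:", int((edges > 0).sum()))
print("edge pixels, blurred :", int((edges_blurred > 0).sum()))
```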

Is it possible to take a low-resolution image from a street camera, enlarge it, and see image details?

I would like to know if it is possible to take a low-resolution image from a street camera, enlarge it,
and see image details (for example a face or a car plate number). Is there any software that is able to do this?
Thank you.
Example image: http://imgur.com/9Jv7Wid
Possible? Yes. In existence? Not to my knowledge.
What you are referring to is called super-resolution. The way it works, in theory, is that you take multiple low-resolution images and combine them to create a high-resolution image.
In practice you essentially map each image onto all the others to form a stack in which the target portion of the image is the same throughout. This gets extremely complicated extremely fast, as any distortion (e.g. movement of the target) will cause the images to differ dramatically at the pixel level.
But let's say you have the images stacked and have removed the non-relevant pixels from the stack. You are left, hopefully, with a movie/stack of images that all show the exact same scene, but with sub-pixel distortions. A sub-pixel distortion simply means that the target has moved somewhere inside a pixel, or has moved partially into a neighboring pixel.
You can't measure whether the target has moved within a pixel, but you can detect whether it has moved partially into a neighboring pixel. You can do this by knowing that the target is going to give off X amount of photons, so if you see 1/4 of the photons in one pixel and 3/4 of the photons in the neighboring pixel you know its approximate location: 3/4 in one pixel and 1/4 in the other. You then construct an image with the resolution of these sub-pixels and place them in their proper positions.
All of this gets very computationally intensive, and sometimes the images are just too low-resolution and have too much distortion from image to image to create a meaningful stack at all. I did read a paper about a university lab being able to create high-resolution images from low-resolution ones, but it was a very tightly controlled experiment, where they moved the target precisely X amount from image to image and used a very precise camera (probably scientific grade, which is far more sensitive than any commercial-grade security camera).
In essence, to do this reliably in the real world you need to set up cameras in a very precise way, which is going to be expensive, so you are better off just putting in a better camera rather than relying on this very imprecise technique.
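To make the stacking idea concrete, here is a toy shift-and-add sketch; it assumes the sub-pixel shift of every low-resolution frame is already known, which in reality is the hard part (registration), and the function name and parameters are illustrative:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Toy multi-frame super-resolution by shift-and-add.

    frames : list of low-resolution 2D arrays (same size)
    shifts : list of known sub-pixel (dy, dx) shifts per frame, in LR pixels
    factor : integer upscaling factor
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor), dtype=np.float64)
    weight = np.zeros_like(acc)

    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Where each LR sample lands on the high-resolution grid
        hr_y = np.rint((ys + dy) * factor).astype(int)
        hr_x = np.rint((xs + dx) * factor).astype(int)
        valid = (hr_y >= 0) & (hr_y < h * factor) & (hr_x >= 0) & (hr_x < w * factor)
        np.add.at(acc, (hr_y[valid], hr_x[valid]), frame[valid])
        np.add.at(weight, (hr_y[valid], hr_x[valid]), 1.0)

    # Average where we have samples; holes would need interpolation in a real pipeline
    return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)
```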
Actually, it is possible to do super-resolution (SR) from even a single low-resolution (LR) image! So you don't have to go through the hassle of taking many LR images with sub-pixel shifts. The intuition behind such techniques is that natural scenes are full of repetitive patterns that can be used to enhance the frequency content of similar patches (e.g. you can use dictionary learning in your SR reconstruction to generate the high-resolution version). Sure, the enhancement may not be as good as using many LR images, but such a technique is simpler and more practical.
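That answer describes dictionary learning, but if you just want a ready-made single-image SR route, OpenCV's contrib dnn_superres module with a pretrained network is one option; the model file has to be downloaded separately and the file names below are placeholders:

```python
import cv2

# Requires opencv-contrib-python and a pretrained model file such as FSRCNN_x4.pb
# (downloaded separately); the file names here are placeholders.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("FSRCNN_x4.pb")
sr.setModel("fsrcnn", 4)        # model name and upscaling factor must match the file

img = cv2.imread("low_res.png")
result = sr.upsample(img)
cv2.imwrite("upscaled.png", result)
```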
Photoshop would be your best bet. But know that you cannot reliably increase the size of an image without making the quality even worse.

What does the Fourier transform image of an image tell us?

First time studying image processing...
I just don't understand what the Fourier transform of an image describes.
For example, consider the following pictures: the first one is the image, and the second one is the Fourier transform of the image.
Now my question is:
Given the Fourier transform image, what am I supposed to comprehend from it?
I would really appreciate any help here; I just cannot proceed with my studies without understanding this.
Thanks in advance.
The central part contains the low-frequency components. A bright pixel in the frequency image (right) indicates that the corresponding frequency component is strong in the spatial image (left). Note that there is no one-to-one mapping between pixels in the frequency domain and the spatial domain: one frequency pixel gathers contributions from all the spatial pixels that contain that frequency.
The low-frequency part corresponds to the smooth areas in the image (such as the skin, the hat, the background, etc.). The high-frequency part, shown away from the central region, comes from sharp edges and structures whose intensity changes dramatically along one orientation (such as the hair or the boundary of the hat). But since the energy of those components is low compared with that of the smooth structures, the corresponding regions appear dark in the right image.
Note that the camera lens is focused on Lenna, so the background is blurred. If the focus were on the background instead, the vertical lines behind Lenna would be sharp, their edges would contribute to the high-frequency magnitude, and the regions away from the center of the right image would be brighter.
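If you want to reproduce an image like the right-hand one and experiment with it yourself, a minimal sketch (the file name is a placeholder):

```python
import cv2
import numpy as np

img = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# 2D FFT, shifted so the zero-frequency (DC) component sits at the centre
F = np.fft.fftshift(np.fft.fft2(img))

# Log scale: the DC/low-frequency terms dominate so strongly that the
# high frequencies would otherwise be invisible
magnitude = 20 * np.log(np.abs(F) + 1)

# Normalise for display and save the spectrum image
spectrum = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("spectrum.png", spectrum)
```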

Image Segmentation/Background Subtraction

My current project is to calculate the surface area of the cylinder covered by the paste.
Refer to the images below. They are cropped from the original images taken with a phone camera.
I am thinking of techniques like segmentation, but due to light reflections and shadows a simple segmentation won't work out.
Can anyone tell me how to find the surface area covered by paste on the cylinder?
First I'd simplify the problem by rectifying the perspective effect (you may need to upscale the image to not lose precision here).
Then I'd scan vertical lines across the image.
Further, you can simplify the problem by segmenting two classes of pixels, base and painted. Do some statistical analysis to find the color range of the larger region, consisting of base pixels; the median of all pixels will probably be useful here.
Then expand the color space around this representative pixel until you find the largest color-distance gap, and repeat the procedure to retrieve the painted pixels. There are other image processing routines you may have to apply, such as smoothing out noise, removing outliers and the background, etc.
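A rough sketch of that pipeline; the corner coordinates, rectified size, and the use of Otsu's threshold as a stand-in for the "largest color-distance gap" search are all my own placeholder choices and would need tuning on the real photos:

```python
import cv2
import numpy as np

img = cv2.imread("cylinder.jpg")   # placeholder file name

# 1. Rectify the perspective: the four source points outline the visible
#    cylinder face in the photo (placeholder coordinates, picked by hand)
src = np.float32([[120, 80], [520, 95], [530, 640], [110, 620]])
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])
H = cv2.getPerspectiveTransform(src, dst)
rect = cv2.warpPerspective(img, H, (400, 600))

# 2. Distance of every pixel from the median color, which should represent
#    the larger "base" class
median = np.median(rect.reshape(-1, 3), axis=0)
dist = np.linalg.norm(rect.astype(np.float32) - median, axis=2)

# 3. Split base vs. painted; Otsu's threshold on the distance image is a
#    simple stand-in for searching the largest color-distance gap
dist8 = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(dist8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Clean up noise and count the painted pixels; convert to real area using
#    the known physical size represented by the rectified image
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
paste_fraction = (mask > 0).mean()
print("fraction of the rectified face covered by paste:", paste_fraction)
```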