Highlight reduction in images - image-processing

I am trying to reduce the highlights in an image caused by a high-intensity light source.
I tried various software packages and found that "Highlight Reduction" works for me,
but I am not able to understand the actual processing behind a highlight reduction.
Can anybody please help me with that?

Perhaps http://answers.opencv.org/question/92887/specular-highlights-detection/ may give some hints. At least 3 papers are cited there:
Searching around the literature I found the following 3 interesting papers: Real-time
highlight removal using intensity ratio, Efficient and Robust Specular Highlight Removal,
and Fast and High Quality Highlight Removal from A Single Image. The first two, which
also provide code, did not give very good results.
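While those papers describe more sophisticated algorithms, a minimal baseline (my own sketch, not the papers' method) is to detect specular highlights as bright, low-saturation pixels and inpaint them; the file names and thresholds below are illustrative guesses:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Specular highlights tend to be very bright (high V) and washed out
# (low S); these bounds are assumptions to tune per image.
mask = cv2.inRange(hsv, (0, 0, 220), (180, 60, 255))

# Grow the mask slightly so inpainting also covers highlight fringes.
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)

# Fill the masked regions from their surroundings (radius 5, Telea method).
result = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("highlights_reduced.jpg", result)
```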

Related

Identify if an image is blurred (preferred without OpenCV)

I just finished building a camera using AVCaptureSession for scanning documents on iPhone, and I am looking for a way to determine whether the captured image is of good quality and not blurred.
I have seen many solutions using OpenCV and I am looking for other options.
Any help would be appreciated.
Thanks
First of all, interesting question; it made me do some research to figure things out myself. In general, Analysis of focus measure operators for shape-from-focus is a great research paper, discussing a number of methods (36, to be precise) for measuring the blurriness of an image, from simple and straightforward ones to more complex ones.
I have done some basic Laplacian filtering myself on one channel of the image (essentially the 2nd derivative of the pixels) to measure blurriness, and it worked quite well for me. Once you convolve the channel with the Laplacian operator, the variance of the resulting Laplacian image is a good estimate of the blurriness. The assumption is that a high variance means a wide spread of responses, both edge-like and non-edge-like, which is representative of a normal, in-focus image; a very low variance means a tiny spread of responses, indicating that the image contains very few edges. As we know, the more an image is blurred, the fewer edges it has. The trick is to find an apt threshold separating high from low variance, which you can determine by running the measure on your dataset.
Courtesy: Blog
PS: Although the blog I reference here mentions OpenCV, the methods can be implemented however you like once you understand the concept, which is why I started the answer with the research paper.
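For reference, a sketch of that variance-of-Laplacian measure (OpenCV is used here only for brevity; the same convolution works in any image library, and the 100.0 threshold is just an illustrative starting point):

```python
import cv2

def blurriness_score(path):
    """Return the variance of the Laplacian; low values suggest blur."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Laplacian = 2nd-derivative filter; CV_64F keeps negative responses.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Illustrative threshold -- tune it on your own dataset.
if blurriness_score("photo.jpg") < 100.0:
    print("Image looks blurred")
```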

Microscopic image analysis in Matlab

I am trying to analyze microscopic images (segmentation and feature extraction). I will also try to describe mathematically how cell patterns change over time (source material: tissue section images from three time points). I couldn't find much research work on this. Any ideas or suggestions?

How to write a simple image recognition

I have a problem very similar to, but much simpler than, this one.
To begin with, I have a small image:
Then I take a screenshot and I want to detect if my small house is in the screenshot.
The problem is that my house can be different in size and slightly different in color.
So far I have found the OpenCV library, but it seems quite oversized for my needs.
Do you know any simpler library to achieve this task?
Thanks
Edit: I've found this about the SURF algorithm.
Judging by your question, there will be no shear or skew in your image since it will be on screen, whereas the problem you referenced is a much more difficult situation. Your image will not experience any distortion, only an increase or decrease in size.
To match regardless of color, I recommend computing the gradient image (using Sobel kernels) for both your template image and your screenshot. Then you are matching based on visible edges, and color is taken out of the mix.
To match regardless of size, create multiple versions of your template at different scales (the more versions you make, the more precise the match, but the longer the processing) and slide each one across the image until you find an acceptable match.
OpenCV is a beast with a steep learning curve. If my assumptions are correct, then you are right that OpenCV is oversized when simple image-processing techniques can be applied :).
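A sketch of both steps together; ironically OpenCV makes this shortest to write, but the same operations exist in lighter libraries. The file names, the scale range, and the 0.6 acceptance threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def gradient_magnitude(gray):
    # Sobel responses in x and y, combined into an edge-strength image.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return cv2.magnitude(gx, gy)

screen = gradient_magnitude(cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE))
house = cv2.imread("house.png", cv2.IMREAD_GRAYSCALE)

best_score, best_loc, best_scale = -1.0, None, None
# Try several template sizes; finer steps are more precise but slower.
for scale in np.linspace(0.5, 1.5, 11):
    tmpl = gradient_magnitude(cv2.resize(house, None, fx=scale, fy=scale))
    if tmpl.shape[0] > screen.shape[0] or tmpl.shape[1] > screen.shape[1]:
        continue
    res = cv2.matchTemplate(screen, tmpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score > best_score:
        best_score, best_loc, best_scale = score, loc, scale

if best_score > 0.6:  # illustrative acceptance threshold
    print(f"Found at {best_loc}, scale {best_scale:.2f}, score {best_score:.2f}")
```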

Row, column detection in OpenCV (OCR preprocessing)

First, my final goal is to process the following image with tesseract:
http://ubuntuone.com/72m0ujsL9RhgfMIlugRDWP
(I wiped out the second and the third column...)
However, tesseract has problems with the dotted background, so my idea is to pre-process the image with OpenCV. It would be best if I could somehow detect each line, because I need to remove the dotted background by applying a different threshold to those lines than to the even lines. Is there any solution to my problem? So far I have found the Hough transform and maybe segmentation, but the results weren't very good (maybe because of wrong parameters). I'm not sure whether these are feasible approaches and where my time is best invested.
Column detection would be fine, too, because the second column contains only numbers and the third only characters. Passing this "knowledge" to tesseract could improve its detection rate even more.
I would be really thankful if somebody could give me some hints on how to solve this issue and which OpenCV functions are best used, with which parameters. Some snippets giving me a fair idea of the different steps would be helpful, too.
Thanks in advance!
Kind regards.
I would suggest you use something like erosion, as the dots seem to be rather small compared to the width of the letters.
Alternatively, I would use Canny edge detection with proper thresholds, so that the rather short and thin edges of the dots are discarded.
Hope this helps, have fun!
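As a sketch of the erosion idea: a morphological opening (an erosion followed by a dilation) with a kernel slightly larger than the dots removes them while mostly preserving letter strokes. The file names and the 3x3 kernel size are assumptions to tune:

```python
import cv2
import numpy as np

# Binarize so that ink (text and dots) becomes white foreground.
gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Opening (erosion then dilation) removes specks smaller than the kernel;
# pick the kernel just larger than the dots but smaller than the strokes.
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Re-invert to black text on white for tesseract.
cv2.imwrite("cleaned.png", cv2.bitwise_not(cleaned))
```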

How to do the illumination correction when images are taken in various illumination conditions?

For my final-year project I will be taking photographs with a mobile phone and then running image-processing steps on them. I will be taking the images under various illumination conditions (natural light, poor lighting conditions, and so on). Does anyone know an algorithm that I can use to correct for this?
Thanks a lot
Good white balancing is still an active field of research, I guess. From your question, it is hard to tell how "advanced" the sought solution is supposed to be and what exactly you need.
In some other context, I recently encountered this paper. They have a quite complicated approach to white balancing and produce good results:
Hsu, Mertens, Paris, Avidan, Durand. "Light mixture estimation for spatially varying white balance". In: ACM Transactions on Graphics, 2008.
Check the related work section for more hints, as usual.
If you are less interested in white balancing but rather need to process the images further (it sounds a bit like that in your comment), you should probably aim for techniques that are invariant to illumination, or at least robust against changes in it. E.g., transforming your image into any colorspace that separates out brightness/luminance (e.g. YUV, HSV) might help, depending on your actual problem. From my experience and intuition, I would suggest that in most cases it is better to make your recognition algorithm robust against changes in illumination than to correct the illumination first.
One very simple method is to take the mean pixel value of an image, adjust the exposure, take another picture, and compute the mean again, continuing until the mean reaches some target value.
Try the simplest method: histogram equalization first.
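A sketch combining the two suggestions above: move to a colorspace that separates luminance (YCrCb here) and equalize only the luminance channel, so colors are not shifted. CLAHE is shown as a gentler, locally adaptive alternative; its settings are illustrative assumptions:

```python
import cv2

img = cv2.imread("photo.jpg")

# Separate luminance from color so equalization doesn't distort hues.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Plain global histogram equalization of the luminance channel...
y_eq = cv2.equalizeHist(y)

# ...or CLAHE, which adapts locally and avoids over-amplifying noise
# (clipLimit and tile size are illustrative values to tune).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
y_eq = clahe.apply(y)

corrected = cv2.cvtColor(cv2.merge([y_eq, cr, cb]), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("corrected.jpg", corrected)
```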
