Removing specular reflection from multiple images by 'merging' images

The system I'm working on uses a mobile phone app to take images. The white speckles in the images are reflective particles that need to be activated by the flash on the mobile phone for cataloging in an image processing pipeline. The downside is that we get unwanted specular reflection from the plastic in which the reflective particles are embedded. So the idea is that by taking multiple images and somehow 'stitching' them together the speckles could be preserved and the unwanted specular reflection removed to create one final 'clean' image.
I haven't been able to find any existing image processing techniques in the literature that use this approach, but it seems like it might work. Any pointers on this approach would be much appreciated, be they papers, pseudo-code, or open source projects.

I'm not aware of specific work on the subject, but it seems this can be solved using standard approaches.
From the images, it looks like the specularity can be detected simply from gray levels (a large bright blob), at least in some cases.
To be fused, the images need to be registered. You could initialize / sanity-check the registration using the phone's odometry if available, then refine it by estimating a homography with RANSAC (assuming you're dealing with approximately planar scenes, as in your example).
Getting data association with sufficient inliers can be a challenge, but perhaps odometry will help here. You will also probably need to fiddle with the images to get good features. A rough sketch of the registration-and-fusion pipeline follows.
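A minimal sketch of that pipeline in Python with OpenCV, assuming ORB features are trackable on your plastic surface and that the retroreflective speckles fire in every shot while the specular highlights move between shots; file names are placeholders:

```python
import cv2
import numpy as np

def register_to_reference(ref_gray, img, img_gray):
    """Warp img onto the reference using an ORB + RANSAC homography."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(img_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_gray.shape
    return cv2.warpPerspective(img, H, (w, h))

paths = ["shot1.jpg", "shot2.jpg", "shot3.jpg"]  # placeholder file names
ref = cv2.imread(paths[0])
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
warped = [ref]
for p in paths[1:]:
    img = cv2.imread(p)
    warped.append(register_to_reference(
        ref_gray, img, cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)))

# Per-pixel minimum keeps the darker (non-specular) observation at each
# pixel, so a highlight present in only one shot is removed. Caveat: this
# also dims speckles that don't fire in every shot; a median across more
# images is a softer alternative.
clean = np.minimum.reduce(warped)
cv2.imwrite("clean.jpg", clean)
```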

Related

How to recognize or match two images?

I have one image stored in my bundle or in the application.
Now I want to scan images with the camera and compare them with my locally stored image. When the image is matched I want to play a video, and if the user moves the camera away from that particular image I want to stop that video.
For that I have tried the Wikitude SDK for iOS, but it is not working properly: it crashes at arbitrary times because of memory issues or other reasons.
Other things that came to mind are Core ML and ARKit, but Core ML detects an image's properties (name, type, colors, etc.) and I want to match the image. ARKit does not support all devices and iOS versions, and I don't know whether image matching as per my requirement is even possible with it.
If anybody has an idea of how to achieve this requirement, please share it. Every bit of help will be appreciated. Thanks :)
The easiest way is ARKit's image detection. You know the limitations of the devices it supports, but the results it gives are solid and it is really easy to implement. Here is an example.
Next is Core ML, which is the hardest way. You need to understand machine learning, even if only briefly. Then comes the tough part: training with your dataset. The biggest drawback is that you have a single image, so I would discard this method.
Finally, the middle-ground solution is to use OpenCV. It might be harder, but it suits your need. You can find different feature matching methods to locate your image in the camera feed (example here). You can use Objective-C++ to write the C++ for iOS; a rough sketch of the idea follows.
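For illustration, a minimal sketch of such feature matching in Python with OpenCV (the equivalent C++ calls are what you would wrap in Objective-C++); file names and the match-count threshold are placeholders:

```python
import cv2

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)    # stored image
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # camera frame

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Lowe's ratio test over 2-nearest-neighbour matches.
matcher = cv2.BFMatcher()
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Crude decision rule; 15 is a made-up starting threshold to tune.
print("match" if len(good) > 15 else "no match")
```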
Your task is image similarity; you can do it simply and with more reliable results using machine learning. Since your task involves camera scanning, the better option is Core ML. You can refer to this link by Apple on Image Similarity, and you can improve your results by training with your own datasets. If any more clarification is needed, comment.
Another approach is to use a so-called "siamese network", which really means that you run a model such as Inception-v3 or MobileNet on both images and compare their outputs.
However, these models usually give a classification output, i.e. "this is a cat". But if you remove that classification layer from the model, it gives an output that is just a bunch of numbers that describe what sort of things are in the image but in a very abstract sense.
If these numbers for two images are very similar -- if the "distance" between them is very small -- then the two images are very similar too.
So you can take an existing Core ML model, remove the classification layer, run it twice (once on each image), which gives you two sets of numbers, and then compute the distance between these numbers. If this distance is lower than some kind of threshold, then the images are similar enough.
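A hedged sketch of that recipe using PyTorch/torchvision rather than Core ML (the principle carries over; Apple's Image Similarity sample covers the Core ML side). The model choice, file paths, and the 0.8 cutoff are assumptions to tune:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained MobileNetV2 and drop its classification layer, leaving
# the 1280-dimensional pooled feature vector as the output.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier = torch.nn.Identity()
model.eval()

prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def embed(path):
    """Return the abstract 'bunch of numbers' for one image."""
    with torch.no_grad():
        return model(prep(Image.open(path).convert("RGB")).unsqueeze(0))

a, b = embed("img_a.jpg"), embed("img_b.jpg")  # placeholder paths
# Cosine similarity near 1.0 means "very similar"; 0.8 is a guessed cutoff.
sim = torch.nn.functional.cosine_similarity(a, b).item()
print("similar" if sim > 0.8 else "different")
```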

Is there an OCR API that can count objects?

Is there an OCR API that could be used for recognizing and counting objects in an image? Or can this be done with another image processing technique?
For example, if I take a close-up photo of three boxes, the API would just return the number 3 as a result.
You can look into OpenCV, which is popular for programmers learning about image processing and vision. You'll find an endless number of posts here on StackOverflow about OpenCV.
http://opencv.org/
Some freeware GUIs and free starter versions of commercial image processing packages will allow you to test image processing techniques without having to write the code. ImageJ is old but still worth checking out:
http://rsbweb.nih.gov/ij/
I don't want to show favoritism towards any of my sisters and brothers in the image processing world, but if you google for "machine vision free" or "computer vision free" and add words such as "GUI" you should be able to quickly find some free software that will allow you to test different image processing techniques just by using your mouse.
Along with your OCR algorithm, you'll need a segmentation method to count objects.
One such technique is the connected components algorithm:
http://en.wikipedia.org/wiki/Connected-component_labeling
The typical connected components algorithm relies on some preprocessing:
Find a binarization threshold.
Apply the binarization threshold to generate an image of black (0) and white (1) values.
Run the connected components algorithm and label all components (objects).
Filter the results by size and other parameters. For example, you probably don't want to include foreground objects that are only a few pixels in size.
Check the size of the list of filtered components. (A sketch of these steps follows below.)
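A minimal sketch of those steps in Python with OpenCV; the file name and minimum-area cutoff are placeholders you would tune for your images:

```python
import cv2

img = cv2.imread("boxes.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Steps 1-2: binarize (Otsu picks the threshold automatically).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Step 3: label connected components; stats holds bounding box and area
# for every label.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Steps 4-5: drop tiny blobs (label 0 is the background) and count the rest.
MIN_AREA = 500  # made-up cutoff
objects = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= MIN_AREA]
print(len(objects))
```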
This is a simple, low-level method, but it's useful in many situations. Even if you think you need a more complicated technique, I would strongly recommend that you first become familiar with connected components before moving on. Until one grasps the subtleties of lighting, binarization, and component labeling, it's unlikely one can learn much that is useful about more complicated algorithms. There really are no shortcuts.
There are other, more complicated methods, but before suggesting which might be appropriate you would have to be more specific about what kind of objects you want to find.
With any image processing question, always include one or more sample images. It's generally not useful to talk about image processing algorithms without first understanding the image set with which you are working. What may be obvious to you will not be obvious to others, especially those who have spent years working on OCR applications and who have had to deal with a wide variety of backgrounds, scripts, and specifications.

Identify pattern in image

What is the best approach to identify a pattern (it could be text, a signature, or a logo; NOT faces, objects, people, etc.) in an image, given that all images are taken from the same angle? This means the pattern to identify will ALWAYS be visible at the same angle, but not at the same position, size, quality, brightness, etc.
Assuming I have the logo, I would like to run a test on 1000 images of different sizes and quality and get those images that have this pattern embedded, or at least a high probability of having it embedded.
Thanks,
Perhaps you can show a couple of images, but template matching (perhaps with a distance transform) seems like an ideal candidate for your problem; a sketch that also handles scale changes follows.
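For illustration, a minimal multi-scale template matching sketch in Python with OpenCV, since the question says size varies; file names, the scale range, and the 0.7 score threshold are all assumptions:

```python
import cv2
import numpy as np

scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)

best = (0.0, None)
for scale in np.linspace(0.3, 1.5, 13):  # guessed scale sweep
    t = cv2.resize(logo, None, fx=scale, fy=scale)
    if t.shape[0] > scene.shape[0] or t.shape[1] > scene.shape[1]:
        continue
    # Normalized cross-correlation; higher score = better match.
    res = cv2.matchTemplate(scene, t, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score > best[0]:
        best = (score, loc)

print("logo present" if best[0] > 0.7 else "probably absent", best)
```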
Perl? I'd have suggested using OpenCV with Python or C, since you're on the Linux platform.
You could check out SURF and SIFT (the links explain how to do this with OpenCV and C++, with code attached), which can do decent matching of logos and similar patterns.
Text detection is a different kettle of fish. I'd suggest the paper "Robust Text Detection in Natural Images with Edge-Enhanced Maximally Stable Extremal Regions", which is the latest I've seen that does robust text detection in natural scenes without becoming overly intricate.
Training a neural network with the expected patterns seems to be the best way all-round, though the training process will take a long time. Actual identification, however, is almost real-time.
Here's a discussion on MSER implementation in two libraries: a) OpenCV, b) VLfeat
Have you checked aforgenet.com? It has great libraries for blob processing. It's in .NET.

image segmentation techniques

I am working on a computer vision application and I am stuck at a conceptual roadblock. I need to recognize a set of logos in a video, and so far I have been using feature matching methods like SIFT (and ASIFT by Yu and Morel), SURF, and FERNS -- basically everything in the "Common Interfaces of Generic Descriptor Matchers" section of the OpenCV documentation. But recently I have been researching methods used in OCR/Random Trees classifiers (I was playing with this dataset: http://archive.ics.uci.edu/ml/datasets/Letter+Recognition) and thinking that this might be a better way to go about finding the logos. The problem is that I can't find a reliable way to automatically segment an arbitrary image.
My questions:
Should I bother looking into methods other than descriptor/keypoint matching, or is this the best way to recognize a typical logo (stylized, few colors, sharp edges)?
How can I segment an arbitrary image (or a video frame, in my case) so that I can properly match against a sample database?
It would seem that Haar cascades work in a similar way (databases of samples), but I can't figure out how the processes are related. Is there segmentation going on there?
Sorry if these questions are too broad. I'm trying to wrap my head around this stuff with little help. Thanks!
It seems like segmentation is not what you want; this has more to do with object detection and recognition. You want to detect the presence of a certain set of logos in a certain set of images. That isn't really segmentation, which is about labeling surfaces or areas of common color, texture, shape, etc., although examining segmentation-based methods may be useful.
I would definitely encourage you to look at the problem and examine all possible methods that can be applied, not only the fashionable ones (such as SIFT, GLOH, SURF, etc.). I would recommend you look at older, simpler methods like plain template matching, chamfering, etc.
Haar cascades became popular after the 2001 Viola-Jones paper that used them for face detection (similar to what you see in modern point-and-shoot cameras), which sounds a bit similar to the problem you are interested in. You should perhaps also examine this part of the problem, but try not to focus too much on the learning part; a minimal detection sketch follows.
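For flavour, a minimal Viola-Jones detection sketch in Python using the frontal-face cascade that ships with opencv-python; the parameters are the common defaults, not tuned values:

```python
import cv2

# Load the stock frontal-face cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 2)
cv2.imwrite("detections.jpg", gray)
```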

GUI version of OpenCV for feature-detection (SIFT etc.) prototyping before actual project development?

I had an idea for which I need to be able to recognize certain objects or models in a rendered three-dimensional digital movie.
After limited research, I know now that what I need is called feature detection in the field of Computer Vision.
So, what I want to do is:
create a few screenshots of a certain character in the movie (e.g. front/back/left side/right side)
play the movie
while playing the movie, continuously create new screenshots of the movie
for each screenshot, perform feature detection (SIFT? with OpenCV?) to see if any of our character's appearances are there (they must still be recognized if the character is further away and thus appears smaller, or if the character is, e.g., lying down).
give a notice whenever the character is found
This would be possible with OpenCV, right?
The "issue" is that I would have to learn c++ or python to develop this application. This is not a problem if my movie and screenshots are applicable for what I want to do.
So, I would like to first test my screenshots of the movie. Is there a GUI version of OpenCV that I can input my test data and then execute it's feature detection algorithms manually as a means of prototyping?
Any feedback is appreciated. Thanks.
There is no GUI of OpenCV able to do what you want. You will be able to use OpenCV for some aspects of your problem, but there is no ready-made solution waiting there for you.
While it's definitely possible to solve your problem, the learning curve for this problem is quite long. If you're a professional, then an alternative to learning about it yourself would be to hire an expert to do it for you. It would cost money, but save you time.
EDIT
As far as template matching goes, you wouldn't normally use it to solve such a problem because the thing you're looking for is changing appearance and shape. There aren't really any "dynamic parameters to set". The closest thing you could try is have a massive template collection that would try to cover the expected forms that your target may take. But it would hardly be an elegant solution. Plus it wouldn't scale.
Next, to your point about face recognition. This is kind of related, but most facial recognition applications deal with a controlled environment: lighting, distance, pose, angle, etc. Outside of that controlled environment face detection effectiveness drops significantly. If you're detecting objects in a movie, then your environment isn't really controlled.
You may want to first try a simpler problem: accurately detecting where the characters are, without determining who they are (essentially video surveillance). While it may sound simple, you'll find that it's actually non-trivial for arbitrary scenes, and the result of solving it may be useful in identifying the characters. A rough sketch follows.
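A hedged sketch of that simpler sub-problem using OpenCV's stock HOG pedestrian detector in Python; it's a starting point for "where are the people", not a movie-grade character detector, and the file name is a placeholder:

```python
import cv2

# HOG descriptor with the pretrained pedestrian SVM that ships with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("movie_frame.jpg")  # placeholder path
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("people.jpg", frame)
```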
There is Find-Object by Mathieu Labbé. It was very helpful for me to start getting an understanding of the descriptors since you can change them while your video is running to see what happens.
This is probably too late, but might help someone else looking for a solution.
Well, using OpenCV you would be capable of taking a frame of a video file and doing any computation on it.
You can use several different methods to detect a character in that image, but it's not easy to make it flexible enough to find that person if they are, for example, lying on the floor when you only provided reference images of the character standing.
Basically, you could try extracting all the important features from your set of reference pictures and have a (in your case supervised) learning algorithm compute a good feature vector of that character for classification.
You would then write code that plays the video, takes a video frame every 500 ms (or whatever interval you desire), segments out the object you think might be that character, and compares it with the reference values you got from your learning algorithm. If there's a match, your code can yell "Yehaaawww!" or do other things... (a sketch of the frame-sampling loop follows).
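A minimal sketch of that frame-sampling loop in Python with OpenCV; match_character() is a hypothetical stand-in for whatever matcher or classifier you end up with, and the video path is a placeholder:

```python
import cv2

def match_character(frame):
    """Hypothetical placeholder: return True if the character is found."""
    return False

cap = cv2.VideoCapture("movie.mp4")  # placeholder path
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
step = max(1, int(fps * 0.5))  # roughly one sampled frame per 500 ms

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0 and match_character(frame):
        print(f"Yehaaawww! Found at ~{idx / fps:.1f}s")
    idx += 1
cap.release()
```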
But all this depends on how flexible you want it to be. You could also try template matching or cross-correlation, which basically shifts the reference image(s) over the frame and checks how similar the two parts are. Unfortunately, this is very sensitive to rotation, deformation, and other noise, so you wouldn't find the person if they were, say, lying down. And I doubt you could get all those calculations done in real time...
Basically: yes, OpenCV is good for your image processing / computer vision tasks. But it offers a lot of methods and approaches, and you'd need to find one that works for your images... it's not a trivial task.
Hope that helps...
Have you tried looking at some of the work of the Oxford visual geometry group?
Their Video Google system describes to a large extent what you want: instance detection.
Their work into Naming People in TV shows is also pretty relevant. A face detection and facial feature pipeline is included that can be run from Matlab. Are you familiar with Matlab?
Have you tried computer vision frameworks like Cassandra? There you can do exactly that with just a few mouse clicks.
