Avoiding strips after capturing photo from LCD display - opencv

I have run into the following problem: when I capture a photo of an LCD display, there are annoying rainbow stripes.
Is there any way to remove them from the image with some computer vision technique?
What keywords should I google, or are there any useful links/papers on the topic?
My goal is to run OCR on the image afterwards.
With a low threshold, WolfJolion binarization finds a lot of connected components, which makes recognition slow and inaccurate.
With a higher threshold, some characters vanish from the image.
Source photo:
First binarization:
Second binarization:
P.S. The photos were taken of a MacBook Pro Retina display with an iPhone 6 camera.

What you see is called the Moiré effect. It is caused by your camera's pixels subsampling the screen's pixels.
Simply change the angle and/or distance slightly to avoid it; then you don't need any image processing.
Besides that, these stripes should not bother any decent OCR engine.
If you insist on image processing, then a global threshold should do the trick.
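For reference, a minimal global-threshold sketch in Python/OpenCV (the file names and the light pre-blur are my assumptions, not part of the original question):

```python
import cv2

# Placeholder file name for the captured photo.
img = cv2.imread("screen_photo.jpg", cv2.IMREAD_GRAYSCALE)

# Optional light blur to soften the Moire banding before thresholding.
img = cv2.GaussianBlur(img, (3, 3), 0)

# Global Otsu threshold: one threshold for the whole image, which tends
# to ignore the low-contrast rainbow stripes better than a local method.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("binary_for_ocr.png", binary)
```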

Related

Stains on images captured with AVFoundation

When capturing a photo using AVFoundation classes stains appear on certain areas of the image.
Happens on iOS 14.4, iPhone 12 Pro.
I managed to reproduce it using different custom ISO and exposure time settings and using the default auto setting.
Both for single photo and bracket captures.
Both with maxPhotoQualityPrioritization set to quality and to balanced.
Both with ultra wide angle and wide angle cameras.
It's not deterministic. It seems most prominent in images with high light contrast and different light sources (where natural light mixes with artificial light and some areas are more lit than others). It is also more prominent when capturing multiple images using bracket settings with both negative and positive exposure biases. example image
Does anybody know any fix or a workaround for this?
What you describe as "stains" look like areas that are "blown out": areas where one or more color channels is at maximum and all detail is lost, known as "blown highlights" in photography. This creates blobs of a solid color with no detail.
In your case, it looks like the "stains" are completely blown in all 3 color channels.
If you use Photoshop you can display a histogram of the image, as well as a mode where areas that are oversaturated are shown in red.
See this link, for example, for a description of how to do that.
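If you want to flag such frames programmatically rather than in Photoshop, a rough sketch (the clipping cut-off of 250 and the file name are assumptions) could count how many pixels are clipped in all three channels:

```python
import cv2
import numpy as np

# Placeholder path; load the suspect capture.
img = cv2.imread("capture.jpg")  # BGR, uint8

# A pixel is "blown" in a channel when that channel is at (or very near) 255.
# Here we flag pixels where all three channels are clipped.
clipped = np.all(img >= 250, axis=2)
fraction_blown = clipped.mean()
print(f"Fully blown pixels: {fraction_blown:.2%}")

# Per-channel histograms: a large spike in the last bin also indicates clipping.
for i, name in enumerate("BGR"):
    hist = cv2.calcHist([img], [i], None, [256], [0, 256]).ravel()
    print(name, "pixels at 255:", int(hist[255]))
```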

Quantifying differences in an image sequence to measure activity

I'm looking for a program that will enable me to quantify the difference between images in an image sequence over time.
We are hoping to use timelapse images to measure the activity of tadpoles by comparing how the images change over time. Tracking the movement of individuals isn’t necessary. The tadpoles are dark and the background of the aquarium is light, however the background isn’t uniform and some of the decor items like dark rocks and foliage make it so that all the tadpoles aren’t visible at all times.
Basically, I need a program that will allow me to quantify the differences/motion detected in an image sequence (i.e. 209 images) and produce data that can be exported...
Any and all suggestions appreciated!!
Your question is rather vague and you don't supply any images or real indication of what you expect as results, so my answer will not be as thorough as it might otherwise be.
You don't mention any tools you are familiar with, but my recommendation would be Python and OpenCV. Alternatives are probably scikit-image, Python Wand.
In general, when trying to detect movement across a series of images, you would:
try and work out what the background is
look for movement by subtracting, or differencing, frames from the background
clean up the difference image
identify objects - maybe by shape or size or colour
maybe track objects
produce statistics
As regards working out the background, I did an example here by finding the median pixel across all images at each location in the images. There is also an OpenCV tutorial here.
As regards cleaning up images, you can probably remove noise in the background subtraction with a small median filter, say 3x3 or 5x5 depending on the resolution of your images.
As regards detecting tadpoles, you will probably want to use OpenCV findContours() and filter by size, or colour, or circularity. There are some fairly decent tutorials on PyImageSearch. There is also an ImageMagick "Connected Components" analysis I did here to find a tennis player.
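Putting those steps together, here is a rough Python/OpenCV sketch of the whole pipeline. The glob pattern, the threshold of 30 and the blob-size limits are placeholders you would tune to your images, and it assumes the OpenCV 4 return signature of findContours:

```python
import csv
import glob
import cv2
import numpy as np

# Placeholder glob pattern for the timelapse sequence.
paths = sorted(glob.glob("frames/*.jpg"))
frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]

# Estimate the static background as the per-pixel median across all frames.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

rows = []
prev_mask = None
for path, frame in zip(paths, frames):
    # Difference from background, then clean up with a small median filter.
    diff = cv2.absdiff(frame, background)
    diff = cv2.medianBlur(diff, 5)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Count foreground blobs of plausible tadpole size.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if 20 < cv2.contourArea(c) < 2000]

    # Simple activity measure: how many foreground pixels changed since the last frame.
    activity = int(cv2.absdiff(mask, prev_mask).sum() // 255) if prev_mask is not None else 0
    prev_mask = mask
    rows.append([path, len(blobs), activity])

# Export per-frame statistics for further analysis.
with open("activity.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "blob_count", "changed_pixels"])
    writer.writerows(rows)
```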

OpenCV feature-matching algorithm suggestion for boxes on a conveyor

Overview
I am attempting to build a prototype of a vision system that would apply pattern matching to figure out the orientation of boxes (e.g. soap boxes).
Image sample
Below are real-time captured images of soap boxes in the actual environment, showing two of the four possible orientations (Front_Straight and Back_Inverted).
The real-time images will be very similar to these (300x200 pixels per image approx.)
The template images will be provided to the system in advance, and it has to determine the orientation of boxes moving on a conveyor. The boxes on the conveyor are guided so that they can take only one of 4 possible orientations: Front_Straight, Front_Inverted, Back_Straight and Back_Inverted, i.e. the boxes cannot be at an angle. The camera and the conveyor are fixed, so the image size of the real-time boxes is a constant 300px by 200px. (I have used a monochrome camera; if needed, a colour camera can be used too.)
Some properties of the vision system prototype:
Fixed constant lighting.
The real-time image of the box will be quite low-res, as attached (300x200 per box).
Minimal motion blur or imaging artefacts.
OpenCV C++ based coding environment.
Intel Core i5 CPU based PC will be used.
Problem Statement
I am looking for a lightweight yet robust algorithm that can reliably match the template images with real-time images of boxes on the conveyor to extract the face and orientation. I am new to feature matching, so please guide me as to which feature detector and matcher will be most suitable for this particular case. Also, please let me know if it is possible to attain 97%+ accuracy using low-res real-time images like the one attached.
You have a very fortunate case: your images have very little variation. Any feature detector should perform very well in this scenario. Since the interface in OpenCV is common to all of them, they are very easy to compare against each other. In my experience, ORB tends to be quite fast with good results, but I expect SIFT/SURF to work in your case too.
I wouldn't expect the resolution to be a problem.
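The question mentions an OpenCV C++ environment, but as an illustration in Python (the equivalent calls exist in C++), an ORB-based sketch could score the live image against one stored template per orientation and pick the best. The file names, the 500-feature budget and the distance cut-off of 50 are assumptions to tune:

```python
import cv2

# Placeholder file names: one stored template per orientation, plus a live frame.
templates = {
    "Front_Straight": cv2.imread("front_straight.png", cv2.IMREAD_GRAYSCALE),
    "Front_Inverted": cv2.imread("front_inverted.png", cv2.IMREAD_GRAYSCALE),
    "Back_Straight": cv2.imread("back_straight.png", cv2.IMREAD_GRAYSCALE),
    "Back_Inverted": cv2.imread("back_inverted.png", cv2.IMREAD_GRAYSCALE),
}
live = cv2.imread("live_box.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

kp_live, des_live = orb.detectAndCompute(live, None)

scores = {}
for name, tmpl in templates.items():
    kp_t, des_t = orb.detectAndCompute(tmpl, None)
    if des_t is None or des_live is None:
        scores[name] = 0
        continue
    matches = matcher.match(des_t, des_live)
    # Count only reasonably close matches as "good" evidence for this template.
    scores[name] = sum(1 for m in matches if m.distance < 50)

best = max(scores, key=scores.get)
print("Estimated orientation:", best, scores)
```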

People detect using Hog not finding anyone

I have a video of a soccer match in which the players are relatively far away from the camera and thus occupy small portions of the image. I'm using background subtraction to detect the players and the results are fine, but I have been asked to try detecting them using HOG.
I tried detectMultiScale with the default descriptors provided by OpenCV, but I can't get any detections. I don't really understand how to make it work in this case, because on other sequences where the people are closer to the camera, the detector works fine.
Here is a sample image link
Thanks.
The descriptor you use with HOG determines the minimum size of person you can detect: with the DefaultPeopleDetector the detection window is 128 pixels high x 64 wide, so you can detect people around 90px high. With the Daimler descriptor the size you can detect is a bit smaller.
Your pedestrians are still too small for this, so you may need to magnify the whole image, or just the parts which show up as foreground using background segmentation.
Have a look at the function definition for detectMultiscale http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#cascadeclassifier-detectmultiscale
It might be that you need to reduce the value of minSize so as to detect smaller people, or the people might just be too far away.
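As a sketch of the magnification idea with the default people detector (the 2x scale factor and the file names are assumptions to tune for your footage):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("soccer_frame.jpg")  # placeholder path

# The default detection window is 64x128, so players much smaller than
# ~90 px tall will be missed. Upscale first so they reach that size.
scale = 2.0
big = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)

rects, weights = hog.detectMultiScale(big, winStride=(8, 8), padding=(8, 8), scale=1.05)

# Map detections back to the original frame coordinates and draw them.
for (x, y, w, h) in rects:
    x, y, w, h = (int(v / scale) for v in (x, y, w, h))
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", frame)
```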

Is it possible to detect blur, exposure, orientation of an image programmatically?

I need to sort a huge number of photos, remove the blurry images (due to camera shake) and the over/under-exposed ones, and detect whether each image was shot in landscape or portrait orientation. Can these things be done on an image using an image processing library, or are they still beyond the realm of an algorithmic solution?
Let's look at your question as three separate questions.
Can I find blurry images?
There are some methods for finding blurry images, for example:
Sharpening an image and comparing it to the original
Using wavelets to detect blurring ( Link1 )
Hough Transform ( Link )
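A minimal sketch of the first idea, comparing the image against a re-blurred copy (the kernel size and the cut-off of 3.0 are arbitrary placeholders):

```python
import cv2

def blur_score(path):
    """Lower score => image is likely blurry."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (9, 9), 0)
    # A sharp image loses a lot of detail when blurred, so the difference is large;
    # an already-blurry image barely changes.
    return float(cv2.absdiff(img, blurred).mean())

score = blur_score("photo.jpg")  # placeholder path
print("blurry" if score < 3.0 else "sharp", score)
```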
Can I find images that are under or over exposed?
The only way I can think of is to check whether the overall brightness is either really high or really low. The problem is that you would have to know whether the picture was taken at night or during the day. You could create a histogram of your image and see if it is heavily skewed one way or the other; that might be some indication of over/under exposure.
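As a rough illustration of that histogram check (the bin ranges and the 50% cut-offs are arbitrary, and as noted above a night shot will look "under-exposed" to it):

```python
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
hist /= hist.sum()

# Fraction of pixels crowded into the darkest / brightest bins.
dark = hist[:16].sum()
bright = hist[240:].sum()

if dark > 0.5:
    print("probably under-exposed")
elif bright > 0.5:
    print("probably over-exposed")
else:
    print("exposure looks plausible")
```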
Can I determine the orientation of the image?
There are techniques that have been used such as SVM, Color Moments, Edge Direction Histograms, Bayesian Framework using cues.
Can I find images that are under or over exposed?
Here, too, histograms are recommended.

Resources