Pattern Recognition using OpenCV

I am trying to detect a pattern on an object on a green field. The pattern is made up of three colored markers (two pink ones to the sides and a blue one in the middle) arranged like a traffic light.
At first I tried converting the images from the webcam to HSV color space and isolating the colors with cvInRangeS, but that became problematic: as the light in the room changes during the day, I either get false positives or lose track of the object.
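A minimal sketch of that approach in Python (cv2.inRange is the modern equivalent of cvInRangeS; the HSV bounds below are placeholders, not the actual values):

    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")               # one grabbed webcam frame
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # BGR -> HSV

    # placeholder bounds; in practice they drift as the room lighting
    # changes during the day, which is exactly the problem described
    pink = cv2.inRange(hsv, np.array([150, 80, 80]), np.array([175, 255, 255]))
    blue = cv2.inRange(hsv, np.array([100, 80, 80]), np.array([130, 255, 255]))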
Then I tried SURF by modifying find_obj.cpp. The problem with that was that OpenCV could only detect 2 SURF points on my marker, which is not enough to locate it; from the code it seems I need at least 4. I tried playing with the SURF parameters, but that did not change anything.
Also, while googling, I came across this:
http://wiki.elphel.com/index.php?title=OpenCV_Tennis_balls_recognizing_tutorial&redirect=no
which says I can also use machine learning to pick the color range I am interested in, but I could not find any info on how to do that.
My question is, is there anything in OpenCV that would allow me to detect the marker?
EDIT: Another question, about Haar training: my background will always be the same color and the same surface, and I will use the same marker on the object. Can I train a classifier with, say, 20 positive and 20 negative images, or do I still need thousands of images to get it to recognize the marker?

I'd suggest you check out Shervin's tutorial on blob detection using colors:
http://www.shervinemami.info/blobs.html
EDIT
You might try Retinex to help improve the results:
http://www.ipol.im/pub/algo/lmps_retinex_poisson_equation/

Related

Quantifying differences in an image sequence to measure activity

I'm looking for a program that will enable me to quantify the difference between images in an image sequence over time.
We are hoping to use time-lapse images to measure the activity of tadpoles by comparing how the images change over time. Tracking the movement of individuals isn't necessary. The tadpoles are dark and the background of the aquarium is light; however, the background isn't uniform, and some of the decor items like dark rocks and foliage mean that not all the tadpoles are visible at all times.
Basically, I need a program that will allow me to quantify the differences/motion detected in an image sequence (i.e. 209 images) and produce data that can be exported...
Any and all suggestions appreciated!!
Your question is rather vague and you don't supply any images or real indication of what you expect as results, so my answer will not be as thorough as it might otherwise be.
You don't mention any tools you are familiar with, but my recommendation would be Python and OpenCV. Alternatives would be scikit-image or Python Wand.
In general, when trying to detect movement across a series of images, you would:
try and work out what the background is
look for movement by subtracting, or differencing, frames from the background
clean up the difference image
identify objects - maybe by shape or size or colour
maybe track objects
produce statistics
As regards working out the background, I did an example here by finding the median pixel across all images at each location in the images. There is also an OpenCV tutorial here.
As regards cleaning up images, you can probably remove noise in the background subtraction with a small median filter, say 3x3 or 5x5 depending on the resolution of your images.
As regards detecting tadpoles, you will probably want to use OpenCV findContours() and filter by size, or colour, or circularity. There are some fairly decent tutorials on PyImageSearch. There is also an ImageMagick "Connected Component" analysis to find a tennis player that I did here.
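A minimal sketch of that pipeline, assuming the sequence is a folder of same-sized grayscale frames (the file pattern, threshold and size bounds are guesses to be tuned, not tested values):

    import cv2
    import glob
    import numpy as np

    # load the sequence as grayscale (file pattern is an assumption)
    frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE)
              for f in sorted(glob.glob("frames/*.png"))]

    # background = per-pixel median across all frames
    background = np.median(np.stack(frames), axis=0).astype(np.uint8)

    for i, frame in enumerate(frames):
        diff = cv2.absdiff(frame, background)    # difference from background
        diff = cv2.medianBlur(diff, 5)           # small median filter to cut noise
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # filter by size so specks don't count as tadpoles
        blobs = [c for c in contours if 20 < cv2.contourArea(c) < 2000]
        # per-frame statistics you could export as CSV
        print(i, len(blobs), int(mask.sum() / 255))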

Detecting common colors for groups of objects

I have an image with a collection of objects in K given perceived colors. Providing I extract those objects, how could I cluster them by their perceived color?
Let me give you an example. I am trying to cluster two football teams, so there will be two teams, referees and a keeper (or two, but that's a rare situation) in the image: 3, 4 or 5 clusters.
To the human eye it's an easy situation. In the picture above we have white players, red players and a referee in black. But it turns out not to be so easy for automatic processing.
What I have tried so far:
1) I started working in the BGR color space, then tried HSV, and now I am exploring CIE Luv, as I read that it has uniform distances describing the perceived differences between colors.
2) [BGR and HSV] Taking the most common color from the contour (not the bounding box). This didn't work at all because of the noise (the green field getting in the way), the quality of the image, the position of the player, etc. The colors were pretty much random.
3) [CIE Luv] Resizing all players' boxes to a common size and taking a small portion of the image from the middle (as marked by the black rectangle in the example below).
Taking the mean value of all pixels in each player's window and adding it to a list (so there is one pixel with the mean value per player), then running K-means (with a defined number of clusters) on that list. This has proven somewhat successful; for the image above I get reddish, white and blackish centres in the clusters.
Unfortunately, the assignment of players back to these clusters is pretty much random. I do that by calculating the mean color for each player as described above and then measuring its distance to each cluster centre. A player might be assigned to the white cluster in one frame and to the red one in the next. Part of the problem might be that the window in the middle of the player's box will sometimes catch a number, grass or shorts instead of the jersey.
I have already spent a considerable amount of time trying to figure this out and would be grateful for any help.
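For concreteness, a minimal sketch of approach 3 as described (the window size, crop fraction and cluster count are illustrative; player_crops stands for the boxes coming out of the detector):

    import cv2
    import numpy as np

    def mean_luv(player_crop, size=(32, 64)):
        # resize a player's box to a common size and average a central window
        resized = cv2.resize(player_crop, size)
        luv = cv2.cvtColor(resized, cv2.COLOR_BGR2Luv)
        h, w = luv.shape[:2]
        window = luv[h // 3: 2 * h // 3, w // 3: 2 * w // 3]  # central patch
        return window.reshape(-1, 3).mean(axis=0)

    # one mean-Luv "pixel" per player (player_crops is assumed)
    samples = np.float32([mean_luv(p) for p in player_crops])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, 3, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)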
I may be overcomplicating the problem, since you have just 3 classes, but try training an SVM classifier based on HOG descriptors; maybe try LDA to improve speed.
Some references:
1) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.627.6465&rep=rep1&type=pdf (skip to the recognition part)
2) https://rodrigob.github.io/documents/2013_ijcnn_traffic_signs.pdf (skip to the recognition part)
3) https://www.learnopencv.com/handwritten-digits-classification-an-opencv-c-python-tutorial/ (if you want to jump into the code right away)
This will work as long as your detection is good, and it can also help to identify different players based on their shirt numbers (and maybe more) if you train it right.
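A minimal sketch of the HOG + SVM idea using OpenCV's built-ins (the window size and labels are placeholders; crops and labels are assumed to come from your detector and annotations):

    import cv2
    import numpy as np

    # HOG over fixed-size player crops (64x128 window, standard block/cell layout)
    hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)

    # crops: list of 64x128 BGR images; labels: 0 = team A, 1 = team B, 2 = referee
    features = np.float32([hog.compute(cv2.cvtColor(c, cv2.COLOR_BGR2GRAY)).flatten()
                           for c in crops])

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(features, cv2.ml.ROW_SAMPLE, np.int32(labels))

    _, predictions = svm.predict(features)  # one class per crop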
EDIT: Okay, I have another idea, based on colour segmentation, since that was your original approach and it requires less work (maybe not? colour segmentation is a pain! also LIGHTING! LIGHTING! LIGHTING!).
Create a green mask and set a threshold so that you detect as little grass as possible when doing your K-means. Then, instead of the mean, try the median: that will get you closer to the true red, because outlier pixels (white numbers, skin, grass remnants) drag the mean around drastically while the median doesn't budge. It will be much more robust, and you should be able to sort players better (hair and skin color shouldn't affect it too much).
EDIT 2: Just noticed: if you use the black rectangle you'll mostly capture the shirt number (which is white), which is going to mess up your classifier; use the original box with the green masked out.
EDIT 3: Also, you can just create 3 thresholds for your required colors and split them up; you don't really need K-means here. Basically, you just need your detected boxes to give out a value inside one of those thresholds. Try the median method I mentioned above; it should improve things. You might also need some minor tweaks here and there (blur, morphology, etc.) to improve detection.
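A minimal sketch of the green-mask-plus-median idea (the green HSV bounds are typical pitch values, not calibrated ones):

    import cv2
    import numpy as np

    def jersey_color(player_box):
        # median color of the non-grass pixels in a player's box
        hsv = cv2.cvtColor(player_box, cv2.COLOR_BGR2HSV)
        grass = cv2.inRange(hsv, np.array([35, 40, 40]),
                            np.array([85, 255, 255]))   # assumed pitch green
        pixels = player_box[grass == 0]   # keep everything that is not grass
        if len(pixels) == 0:
            return None
        # median is robust to white numbers, skin and hair in the box
        return np.median(pixels, axis=0)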

OpenCV detect square with difficult background

I am working on an Android app that will recognize a GO board and create an SGF file from it.
I need to detect the whole board in order to warp it and to be able to find the correct lines and stones, like below.
(source: eightytwo.axc.nl)
Right now I use an OpenCV RGB Mat and do the following (a Python sketch of the same pipeline follows the list):
separate the channels
canny the separate channels
Imgproc.Canny(channel, temp_canny, 30, 100);
combine (bitwise OR) all channels.
Core.bitwise_or(temp_canny, canny, canny);
find the board contour
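For reference, the same pipeline as a compact Python sketch (the snippets above use the Java bindings; the Canny thresholds are copied from above, the rest is assumed):

    import cv2
    import numpy as np

    img = cv2.imread("board.jpg")   # photo of the GO board

    # Canny each channel separately, then OR the edge maps together
    edges = np.zeros(img.shape[:2], dtype=np.uint8)
    for channel in cv2.split(img):
        edges = cv2.bitwise_or(edges, cv2.Canny(channel, 30, 100))

    # take the largest external contour as the board candidate
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    board = max(contours, key=cv2.contourArea)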
Still, I am not able to detect the board consistently, as some lines tend to disappear. As you can see in the picture below, the black lines on the board and the stones are clearly visible, but the board edge is missing in some places.
(source: eightytwo.axc.nl)
How can I improve this detection? Or should I implement multiple ways of detecting it and switch between them when one fails?
* Important to keep in mind *
go boards vary in color
go boards can be empty or completely filled with stones
this implies I can't rely on detecting the outer black line on the board
backgrounds are not always plain white
this is a small collection of pictures with GO boards I would like to detect
* Update * 23-05-2016
I kinda ran out of inspiration for solving this with OpenCV, so new inspiration is much appreciated!!!
In the meantime I started using machine learning; the first results are nice and I'll keep you posted, but I still have high hopes of creating an OpenCV implementation.
I'm working on the same problem!
My approach is to assume two things:
The camera is steady
The board is steady
That allows me to deduce the warping parameters from a single frame taken while the board is still empty (before playing), and then use these parameters to warp every frame, no matter how many stones are occluding the board edges.
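A minimal sketch of that idea, assuming the four board corners have been located once in the empty-board frame (the coordinates here are placeholders):

    import cv2
    import numpy as np

    # board corners found once in the empty-board frame (TL, TR, BR, BL);
    # these specific coordinates are placeholders
    corners = np.float32([[120, 80], [520, 95], [540, 470], [100, 450]])
    side = 480
    target = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

    # compute the warp once, then apply it to every subsequent frame
    M = cv2.getPerspectiveTransform(corners, target)
    warped = cv2.warpPerspective(frame, M, (side, side))  # frame: current image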

Circle/Blob detection basics

I need to be able to detect spots on sample tiles. The spots themselves are applied to the sample tiles by hand and as such are not particularly regular in shape or position.
I've been playing around with AForge.NET's blob detection and have had some limited success using the ChannelFiltering and ColorFiltering filters. However, to do this I need to manually define a range of RGB values that match the spot colors in the example image. This simply won't work in the real world, where factors such as lighting conditions and problems with the fabrication of the sample tiles come into play.
I'm guessing I should initially convert the image to grayscale and then apply a series of filters or other operations before I run the blob detection algorithm, but I'm at a loss as to what these should be.
Are there some best practices for circle/blob detection?
EDIT: fixed links.
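One common starting point (shown here with OpenCV in Python, though the same ideas carry over to AForge.NET) is to run a blob detector on the grayscale image and filter by area and circularity instead of exact RGB values; the parameter values below are guesses to tune:

    import cv2

    img = cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True         # reject specks and huge regions
    params.minArea = 50                # guesses; tune to your spot size
    params.maxArea = 5000
    params.filterByCircularity = True
    params.minCircularity = 0.6        # hand-applied spots are only roughly round

    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(img)   # detects dark blobs by default
    print(len(keypoints), "spots found")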

Background removal using Kinect: noise suppression around body shape

The objective is to display the person on a different background (aka background removal).
I'm using the Kinect with Microsoft's beta Kinect SDK to do so. With the help of the depth data, the background is filtered out and we get only the image of the person.
This is pretty simple to do, and code that does it can be found everywhere on the Internet. However, the depth signal is noisy, and pixels that do not belong to the person get displayed as well.
I applied an edge detector to see if it was useful, and I currently get this:
Here's another without edge detection:
My question is: Which way can I get rid of these noisy white pixels around the person?
I tried morphological operations, but some parts of the body get erased while white pixels are still left behind.
The algorithm doesn't need to be real-time, I can just apply it when I press a 'Save image' button.
Edit 1:
I just tried doing background subtraction between the closest frames on the shape border. The single pixels you see are flickering, which means they are noise, so I can easily get rid of them.
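A minimal sketch of that temporal idea: vote across the last few masks so one-frame flickers drop out (the frame count and threshold are guesses):

    import numpy as np

    # masks: list of recent binary person masks (0/255), newest last (assumed)
    recent = np.stack(masks[-5:]).astype(np.float32) / 255.0
    votes = recent.mean(axis=0)

    # keep only pixels classified as "person" in most of the recent frames;
    # flickering noise pixels fail this test
    stable_mask = (votes > 0.6).astype(np.uint8) * 255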
Edit 2:
The project is now over, and here's what we did: manual calibration of the Kinect using the OpenNI driver, which provides the infrared image directly. The result is really good, but each calibration is specific to each Kinect.
Then, we applied a little transparency on the borders, and the result looks really nice! I can't provide pictures, however.
Your problem isn't just the noisy white pixels. You're missing significant parts of the person as well, e.g. part of his right hand. I'd recommend being more conservative with your thresholding of the depth data (allow more false positives). This would give you more noisy pixels, but at least you'd have the person in their entirety.
To get rid of the noisy pixels, I can think of a couple of things:
Feather the outer pixels (reduce them in intensity / increase their transparency if you're using an alpha channel); see the sketch at the end of this answer.
Smooth the image, perform the edge detection on the smoothed image, then use these edges with your original sharp image.
Do some skin region detection to mark parts that definitely belong to a person. See skin detection in the YUV color space? and Skin Color Detection
For clothes, work with the hue and saturation image. If you know the color of the t-shirt (or that at least that it's not a neutral color), then this will stand out easily. If you don't know this information, then it may be worth building up a model of the person using the other frames (if there's a big gray blob that's moving around in your video, chances are that your subject is wearing a gray shirt)
The approaches aren't mutually exclusive so it may be worth trying to do them in combination. If I think of anything else, I'll post back here.
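A minimal sketch of the feathering idea from the first bullet, assuming a binary person mask and a replacement background of the same size (both names are placeholders):

    import cv2
    import numpy as np

    # person: BGR image; mask: 0/255 binary person mask from the depth data
    alpha = cv2.GaussianBlur(mask, (15, 15), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]          # broadcast the soft mask over the channels
    composite = (person * alpha +
                 new_background * (1.0 - alpha)).astype(np.uint8)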
If there is no other way of resolving the jitter on the edges, you could always try anti-aliasing as a post-process.
