Headcount in OpenCV

I am new to OpenCV. I have to implement a headcount.
My idea is:
Identification of circular objects
We will start with edge detection to find the border line of each shape.
scan through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
record the value of the darkest pixel and the lightest pixel
if (darkest_pixel_value - lightest_pixel_value) > threshold
then rewrite that pixel as 1;
else rewrite that pixel as 0;
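For the edge-detection part, I imagine something roughly like this (untested; the file name and threshold are placeholders). Morphological dilate/erode give the local maximum/minimum of each neighbourhood without explicit pixel loops:

```python
import cv2
import numpy as np

img = cv2.imread("crowd.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Dilation takes the local max, erosion the local min, over a 3x3 window
# (note: this window includes the centre pixel, unlike the pure 8-neighbourhood).
kernel = np.ones((3, 3), np.uint8)
local_max = cv2.dilate(img, kernel)
local_min = cv2.erode(img, kernel)

threshold = 40  # tune for your images
edges = (local_max.astype(np.int16) - local_min > threshold).astype(np.uint8)
```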
Now we detect shapes
count the number of continuous edges
a sharp change in line direction signifies a different line
do this by determining the average vector between adjacent pixels
if there is only one line, then it's a circle
by measuring the angles between lines, more information can be deduced (rhomboid, equilateral triangle, etc.)
Face detection
This part combines two common approaches, based on features and on color. The basic idea of the algorithm is to find objects resembling an eye, then, on the basis of geometric face characteristics, try to join two of the objects into an eye pair.
Steps:
Unimportant colors are eliminated from the image and replaced with white.
The image is then converted to grayscale.
The image is filtered with a median filter (unimportant white regions are blurred away).
White regions are segmented using a region-growing algorithm.
A Hough transform is applied to find circles.
For each region the best possible circle is found.
Using geometric face characteristics, the pair of eyes is found.
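For the Hough step, I imagine the OpenCV call would look roughly like this (all parameters are guesses that would need tuning):

```python
import cv2
import numpy as np

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
gray = cv2.medianBlur(gray, 5)  # the median filtering from the step above

# Hough transform for circles; param2 controls how strict the detection is
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=40)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (x, y), r, 255, 1)  # draw each candidate circle
```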
Is this the right way to proceed, or is there an easier way?
I want to count (estimate) the number of people in a crowd (meetings, gatherings).
Can you help me with the code, please?
Thank you

You can use OpenCV's built-in face detection. See http://opencv.willowgarage.com/wiki/FaceDetection for detailed instructions.
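The linked page uses the old C interface, but in the Python API it boils down to a few lines. A minimal sketch (the input file name is a placeholder, and the parameters will need tuning for crowd shots):

```python
import cv2

# The cascade file ships with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("crowd.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("estimated headcount:", len(faces))
```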

I had a similar project.
You need to get the best image possible, so concentrate on fixing saturation, contrast, and intensity.
If you're planning to use color, for example for skin color detection, then you need to fix the white balance.
Don't think of headcount, instead think of people count.
You need good background segmentation; use a Gaussian Mixture Model combined with other background modeling algorithms.
If this is an outdoor application you need shadow detection.
Get foreground blobs and then determine where people are in those blobs.
If you're counting heads, you need to detect the omega shape formed by the head and shoulders.
You will need tracking for occlusions and people crossing.
You can also use human body classification; OpenCV ships haarcascade_fullbody.xml.
These are just some ideas...
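To make two of those ideas concrete, a rough sketch combining MOG2 background subtraction with the full-body cascade (the file name and parameters are placeholders, and real people counting would still need the tracking mentioned above):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # GMM background model
body_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

cap = cv2.VideoCapture("meeting.mp4")  # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)  # foreground mask; shadows are marked 127
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    people = body_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    print(len(people), "body detections in this frame")
cap.release()
```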

Related

Detect missing squares in color calibration chart opencv python

I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success using thresholding and then extracting square contours. Sadly though, colors that are too close to each other either mix together or do not get detected.
The code in its current form: https://pastebin.com/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where lines are incomplete due to too-small differences in color. I have tried using dilation to mitigate these issues, and it works to a degree, but not enough to detect all squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
Hough lines; sadly, no lines were detected here.
Centroids of contours; but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
Corner detection; corners were found, but I was unsuccessful in finding a real way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10 pixel sample near the center of each physical square's image will give you 100 color samples, which is plenty for any reasonable calibration procedure.
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
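A minimal k-means sketch, assuming the chart fills the frame (the file name is a placeholder; 24 clusters for the classic Macbeth chart):

```python
import cv2
import numpy as np

img = cv2.imread("chart.jpg")  # placeholder file name
pixels = img.reshape(-1, 3).astype(np.float32)  # one BGR sample per pixel

K = 24  # the classic Macbeth chart has 24 patches
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria,
                                attempts=5, flags=cv2.KMEANS_PP_CENTERS)
print(centers)  # one average BGR colour per patch
```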
If you insist on using geometry, I'd do template matching in scale+angle space. It is reasonable to assume that the chart will be mostly facing the camera and only slightly rotated, so you only need to estimate the scale and a small rotation about the axis orthogonal to the chart.

Which feature descriptors to use and why?

I would like to compute the position and orientation of a camera in a civil aircraft cockpit.
I use LEDs as fixed points. My plan is to save the X, Y, Z position associated with each LED.
How can I detect and identify my LEDs on my images? Which feature descriptor and feature point extractor should I use?
How should I modify my image prior to feature detection?
I would like the solution to be efficient.
Now, after having found a solution to my problem, I realize the question might have been too generic.
Anyway, to support other people googling this, I am going to describe my answer.
With combinations of OpenCV's functions I create masks in which the areas where the LEDs could be are white and the rest of the image is black. These functions are, for example, Core.inRange, Imgproc.dilate, and Imgproc.erode. With Imgproc.findContours I filter out contours that are too large or too small. Core.bitwise_and and Core.bitwise_not are also used to combine masks.
The masks are computed from an image in the HSV color space as input.
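In Python the equivalent mask pipeline would look roughly like this (the HSV ranges and area limits here are illustrative, not the exact values I used):

```python
import cv2
import numpy as np

img = cv2.imread("cockpit.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only bright, saturated pixels as LED candidates
mask = cv2.inRange(hsv, (0, 80, 180), (180, 255, 255))
kernel = np.ones((3, 3), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)  # remove speckle noise

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if 10 < cv2.contourArea(c) < 500]  # size filter
```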
Having these masks with potential LED areas, I compute color histograms of the intensity-normalized RGB colors (hue did not work well enough for me). These histograms are trained and normalized using a set of annotated input images and represent my descriptor.
I match the trained descriptors against the ones computed in the application using histogram intersection.
This gives me distance measures. Using a threshold on these measures, the measures themselves, and knowledge of the geometric positions of the real-life LEDs, I translate the patches into a graph, which helps me find the longest chain of potential LEDs.
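A sketch of the descriptor matching (the helper, bin count, and patch positions are illustrative; in practice the patches are cut around the candidate contours):

```python
import cv2
import numpy as np

def rg_histogram(patch, bins=16):
    # Intensity-normalised colour: each channel divided by R+G+B
    f = patch.astype(np.float32) + 1e-6
    rg = (f / f.sum(axis=2, keepdims=True))[:, :, 1:3]  # g and r planes (BGR input)
    hist = cv2.calcHist([rg], [0, 1], None, [bins, bins], [0, 1, 0, 1])
    return cv2.normalize(hist, hist).flatten()

img = cv2.imread("cockpit.jpg")  # placeholder file name
a = img[10:40, 10:40]            # placeholder patches
b = img[50:80, 50:80]
score = cv2.compareHist(rg_histogram(a), rg_histogram(b),
                        cv2.HISTCMP_INTERSECT)  # higher = more similar
```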

Color-based image segmentation method for detecting square or triangular shapes

Could you suggest an approach for color-based segmentation of square or triangular shapes? I'm working on an iOS app for recognizing road signs and have implemented it for round signs, but that approach doesn't seem to work with other shapes. For the circles we do the following:
Detect the colors we need, e.g. red and white, through HSV/B.
Detect circle through the method called Fast Circle Detection Using Gradient Pair Vectors based on analysis of gradient direction vectors (description and code: http://rnd.azoft.com/applied-use-of-m2m-tchnology-in-ios-apps/)
Triangles and squares demand a different approach, and we're a bit stuck.
Assuming you're looking for red lines...
Threshold just the red component of the image
Compute Hough lines and look for line segments of an estimated length (if you know the length of the sides of the triangle/square you're looking for).
Once you have this list, find combinations of lines that form triangles and squares.
Verify each candidate triangle/square by checking that their areas are within expected ranges.
If you follow this method, it is likely that you will find multiple shapes in close proximity to each other, i.e. the same triangle/square in the real world will be found multiple times by the algorithm depending on the thickness of the lines. In this case, cluster them by distance and only retain one shape per cluster.
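A sketch of the first two steps (the BGR threshold and the Hough parameters below are rough guesses; an HSV range usually works better for red):

```python
import cv2
import numpy as np

img = cv2.imread("sign.jpg")  # placeholder file name
red_mask = cv2.inRange(img, (0, 0, 120), (80, 80, 255))  # BGR lower/upper bounds

lines = cv2.HoughLinesP(red_mask, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)  # compare against the expected side length
```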
Another option is
Threshold the red component of the image.
Find contours.
Check for closed contours.
For every closed contour, check if the shape resembles an equilateral triangle or a square by plotting a histogram of the slopes at individual points on the contour. The histogram for a square will have two highly populated bins, while that of a triangle will have three highly populated bins.
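A sketch of that slope histogram (the bin count and dominance threshold are guesses). Folding angles modulo 180° makes opposite sides of a square fall into the same bin, so a square gives two dominant bins and a triangle three:

```python
import numpy as np

def dominant_slope_bins(contour, bins=18, min_fraction=0.15):
    # contour: one entry from cv2.findContours, shape (N, 1, 2)
    pts = contour.reshape(-1, 2).astype(np.float32)
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)             # closed contour: wrap around
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 180.0  # fold opposite directions
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 180.0))
    return int((hist > min_fraction * len(angles)).sum())      # 2 -> square, 3 -> triangle
```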
I worked on a school project for road sign detection, and for the segmentation part we really benefited from this paper:
http://vc.cs.nthu.edu.tw/home/paper/codfiles/cmwang/201201100409/110104%20Goal%20evaluation%20of%20segmentation%20algorithms%20for%20traffic%20sign%20recognition.pdf
It compares the performance of many color-based segmentation techniques and some non-color-based approaches, tested on different signs.
Unlike some survey papers in this area, it gives the threshold values for the different methods.
Good luck.

OpenCV floor detection by segmentation

I'm working on a way to detect the floor in an image. I'm trying to accomplish this by reducing the image to areas of color and then assuming that the largest area is the floor. (We get to make some pretty extensive assumptions about the environment the robot will operate in)
What I'm looking for is some recommendations on algorithms that would be suited to this problem. Any help would be greatly appreciated.
Edit: specifically I am looking for an image segmentation algorithm that can reliably extract one area. Everything I've tried (mainly PyrSegmentation) seems to work by reducing the image to N colors. This is causing false positives when the camera is looking at an empty area.
Since floor detection is the main aim, I'd say that instead of segmenting by color, you could try separating by texture.
The Eigen transform paper describes a single-value descriptor of texture "roughness" using the average of eigenvalues over a grayscale window in the image/video frame. On pg. 78 of the paper they apply the mean-shift segmentation on the eigen-transform output image, effectively separating it into different textures.
Since your images are from a video feed, there can be a lot of variations in lighting so color segmentation might pose a few problems (unless you're working with HSV and other color spaces as mentioned above). The calculation of the eigenvalues is very simple and fast in OpenCV with the cvSVD() function.
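The cvSVD() call belongs to the old C API; in Python the same idea is a few lines of NumPy. A rough sketch, assuming the descriptor averages the smaller singular values of each window (the window size and the number of discarded values are guesses):

```python
import numpy as np

def eigen_transform(gray, win=16, skip=4):
    # Texture roughness per window: mean of the singular values left after
    # discarding the `skip` largest ones (which carry the low-frequency content)
    h, w = gray.shape
    out = np.zeros((h // win, w // win), np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = gray[i * win:(i + 1) * win, j * win:(j + 1) * win]
            s = np.linalg.svd(block.astype(np.float32), compute_uv=False)
            out[i, j] = s[skip:].mean()
    return out  # smooth floor scores low, textured clutter scores high
```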
If you can make the colour-constancy assumption, your main issue is going to be changes in lighting that will throw off your colour detection.
To that end, convert your input image to HSV, HSL, CIE-Lab, YUV or some other luminance-separated colourspace and segment your image based on just the colour part (leave out the luminance channel: V, L, L and Y respectively in the examples above). This will help you overcome the obstacles of shadows and variations in lighting.
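A minimal sketch of that idea (the hue/saturation range below is a placeholder for whatever the floor colour turns out to be):

```python
import cv2

img = cv2.imread("frame.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Segment on hue and saturation only; V spans the full 0-255 range,
# so shadows and lighting changes do not break the mask.
floor_mask = cv2.inRange(hsv, (10, 40, 0), (30, 255, 255))
```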

OpenCV: Detect a black to white gradient in an area

I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (in the example outlined through the scanlines) that should be around 5x5. If that area contains a gradient from black to white, I want OpenCV to save the position of that area, so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles separated through black and white lines.
Is something like that possible? I googled a lot, but I only found edge detectors, and that's not what I want; I really need to detect the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histograms.
You can use cvCalcHist for the task; then you can establish some threshold to determine whether the black-white pixel percentage corresponds to that of a gradient. This will not solve the task, but it will help you reduce complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it is possible to find connected components (using cvFindContours) that will separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and a piece of a black zone simultaneously.
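A rough sketch of the histogram check (using the modern cv2.calcHist rather than the old cvCalcHist; all thresholds are guesses):

```python
import cv2
import numpy as np

def looks_like_gradient(patch, dark=60, light=200, min_fraction=0.2):
    # patch: small grayscale area (e.g. 5x5); require that a noticeable
    # share of its pixels is dark AND a noticeable share is light
    hist = cv2.calcHist([patch], [0], None, [256], [0, 256]).ravel()
    n = patch.size
    return (hist[:dark].sum() > min_fraction * n and
            hist[light:].sum() > min_fraction * n)
```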
Hope it helps.
Thanks for your answer, dnul, but it didn't really help me work this out. I thought about using a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, iterated through the border of each 5px area. I checked each pixel and saved the ones which are darker than a certain threshold to a store.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for white-pixel neighbors in all 3 channels. I then marked each of those pixels and saved them to another store.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which meet each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the edges of the marker.
If you want to reproduce this, you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works. It looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also an adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e. 'marks' them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
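The tutorial is in C++; in Python the call is a one-liner (the file name is a placeholder and both thresholds need tuning):

```python
import cv2

img = cv2.imread("scan.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
edges = cv2.Canny(img, threshold1=50, threshold2=150)  # tune both thresholds
```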
