Good Method for Processing some Similar Images - image-processing

I need to process some images in a real-time situation. I am receiving the images from a camera using OpenCV. The language I use is C++. An example of the images is attached. After applying some threshold filters I have an image like this. Of course there may be some pixel noise here and there, but not that much.
I need to detect the center and rotation of the squares, and the centers of the white circles. I'm totally clueless about how to do it, as it needs to be really fast. The number of squares can be predefined. Any help would be great, thanks in advance.

Is the following straightforward approach too slow?
Binarize the image, so that the originally green background is black and the rest (black squares and white dots) is white.
Use cv::findContours.
Get the centers.
Binarize the image, so that everything except the white dots is black.
Use cv::findContours.
Get the centers.
Assign every dot contour to the square contour for which it is an inlier (i.e. the square that contains it).
Calculate each square's rotation from the angle of the line between its center and the center of its dot (a rough sketch of these steps follows below).
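If both binarizations are cheap threshold operations, the rest is only a handful of OpenCV calls, so it should be fast enough for real time at moderate resolutions. Here is a rough, untested C++ sketch of the idea above; it assumes you already have the two binary masks, and all names are illustrative:

```cpp
// Sketch only: assumes squaresMask/dotsMask are 8-bit binary images where the
// regions of interest are white, as produced by the two binarization steps.
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

struct SquarePose { cv::Point2f center; double angleDeg; };

static cv::Point2f contourCenter(const std::vector<cv::Point>& c) {
    cv::Moments m = cv::moments(c);
    if (m.m00 == 0) return cv::Point2f(0.f, 0.f);           // degenerate contour
    return cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
}

std::vector<SquarePose> detectSquares(const cv::Mat& squaresMask,
                                      const cv::Mat& dotsMask) {
    std::vector<std::vector<cv::Point>> squareContours, dotContours;
    cv::findContours(squaresMask.clone(), squareContours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::findContours(dotsMask.clone(), dotContours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<SquarePose> poses;
    for (const auto& sq : squareContours) {
        cv::Point2f sc = contourCenter(sq);
        for (const auto& dot : dotContours) {
            cv::Point2f dc = contourCenter(dot);
            // Assign the dot to this square if its center lies inside the square contour.
            if (cv::pointPolygonTest(sq, dc, false) > 0) {
                double angle = std::atan2(dc.y - sc.y, dc.x - sc.x) * 180.0 / CV_PI;
                poses.push_back({sc, angle});
                break;
            }
        }
    }
    return poses;
}
```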

Related

Any way to get strongest edge local to a contour line using cv2 or scikit-image?

I am working on accurately segmenting objects from an image.
I have found contour lines by using a simple rectangular prism in HSV space as a color filter (followed by some morphological operations on the resulting mask to clear up noise). I found this approach to be better than applying Canny edge detection to the whole image, as that just picked up a lot of other edges I don't care about.
Is there a way to go about refining the contour line I have extracted such that it clips to the strongest local edge kind of like Adobe Photoshop's smart cropping utility?
Here's an image of what I mean
You can see a boundary between the sky blue and the grey. The dark blue is a drawn-on contour. I'd like to somehow clip this to the nearby edge. It also looks like there are other lines in the grey region, so I think the algorithm should do some sort of more global optimisation to ensure that the "clipping" action doesn't jump randomly between my boundary of interest and the nearby lines.
Here are some ideas to try:
Morphological snakes: https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_morphsnakes.html
Active contours: https://scikit-image.org/docs/dev/auto_examples/edges/plot_active_contours.html
Whatever livewire is doing under the hood: https://github.com/PyIFT/livewire-gui
Based on this comment, the last one is the most useful.
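As a rough illustration of the idea those three approaches share (pulling each contour point towards the strongest nearby image gradient), here is a hedged C++/OpenCV sketch. It only does the purely local version, so it can still jump between nearby edges; snakes and livewire add exactly the kind of global smoothness constraint the question asks for. All names and radii are assumptions:

```cpp
// Sketch: snap each contour point to the strongest gradient response along
// its local normal, within a small search radius.
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

std::vector<cv::Point> snapToEdges(const cv::Mat& gray,
                                   const std::vector<cv::Point>& contour,
                                   int searchRadius = 10) {
    // Gradient magnitude of the image; strong edges give large values.
    cv::Mat gx, gy, mag;
    cv::Sobel(gray, gx, CV_32F, 1, 0);
    cv::Sobel(gray, gy, CV_32F, 0, 1);
    cv::magnitude(gx, gy, mag);

    const int n = int(contour.size());
    std::vector<cv::Point> refined(contour.size());
    for (int i = 0; i < n; ++i) {
        // Approximate the local normal from the neighbouring contour points.
        cv::Point2f prev = contour[(i + n - 1) % n];
        cv::Point2f next = contour[(i + 1) % n];
        cv::Point2f tangent = next - prev;
        cv::Point2f normal(-tangent.y, tangent.x);
        float len = std::hypot(normal.x, normal.y);
        if (len < 1e-3f) { refined[i] = contour[i]; continue; }
        normal *= 1.0f / len;

        // Search along the normal for the strongest gradient response.
        cv::Point best = contour[i];
        float bestMag = -1.0f;
        for (int s = -searchRadius; s <= searchRadius; ++s) {
            cv::Point p(cvRound(contour[i].x + s * normal.x),
                        cvRound(contour[i].y + s * normal.y));
            if (p.x < 0 || p.y < 0 || p.x >= mag.cols || p.y >= mag.rows) continue;
            float g = mag.at<float>(p);
            if (g > bestMag) { bestMag = g; best = p; }
        }
        refined[i] = best;
    }
    return refined;
}
```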

Detecting a triangle mesh fence in the foreground of an image

I'm interested in detecting a triangular mesh fence present in the foreground of a sequence of images. I've included an example image below. Ideally I'd like to output the grid of intersection points; this would then provide me with the distance to and orientation of the mesh (since the dimensions of the mesh are known and fixed).
As in the example image, the mesh can be obscured (by the thick black bars going horizontally and vertically) or can be confused with the background (see the black lines of the structure in the top-left of the image). But the mesh will always completely cover the image, i.e. the edges or the outside of the mesh are never in view.
Any ideas on how one might begin to tackle a vision problem like this?
Find edge pixels
Hough transform to find lines in the image
Use RANSAC to find a model that describes the homography of the lines to the triangle grid.
Without more examples, it's hard to tell how difficult it would be to do.
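A minimal C++/OpenCV sketch of the first two steps (edge pixels plus probabilistic Hough lines); the RANSAC fit of a grid/homography model would sit on top of this and is not shown. The thresholds are placeholder values that would need tuning:

```cpp
// Sketch: detect edge pixels with Canny, then extract line segments with the
// probabilistic Hough transform.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> findFenceLines(const cv::Mat& gray) {
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                 // hysteresis thresholds: guesses, tune per image
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines,
                    1, CV_PI / 180.0,                // 1 px and 1 degree resolution
                    80,                              // accumulator threshold
                    40,                              // minimum line length
                    5);                              // maximum gap to join segments
    return lines;                                    // each Vec4i is (x1, y1, x2, y2)
}
```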

Image shape detection in JavaScript

I'm looking to write a script to look over a series of images that are essentially a white canvas with a few black rectangles on them.
My question is this: what's the best modus operandi that would identify each of the black rectangles in turn?
Obviously I'd scan the image pixel by pixel and work out whether its colour is black or white. So far so good. Identifying and isolating each rectangle - now that's the tricky part :) A pointer in the right direction would be a great help, thank you.
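Since the question targets JavaScript, take the following only as an illustration of the underlying idea: binarize, label connected components of the black pixels, and read off each component's bounding box. It is sketched in C++/OpenCV for brevity; the same flood-fill/labelling logic can be written by hand over a canvas ImageData buffer in JavaScript.

```cpp
// Sketch: find the bounding boxes of black blobs on a white canvas via
// connected-component labelling.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> findBlackRectangles(const cv::Mat& gray) {
    cv::Mat black;
    cv::threshold(gray, black, 128, 255, cv::THRESH_BINARY_INV);  // black pixels become white
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(black, labels, stats, centroids);
    std::vector<cv::Rect> rects;
    for (int i = 1; i < n; ++i) {                                  // label 0 is the background
        rects.emplace_back(stats.at<int>(i, cv::CC_STAT_LEFT),
                           stats.at<int>(i, cv::CC_STAT_TOP),
                           stats.at<int>(i, cv::CC_STAT_WIDTH),
                           stats.at<int>(i, cv::CC_STAT_HEIGHT));
    }
    return rects;
}
```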

Blending colors iOS

I want to crop a rectangular region containing the eye from one face and paste it onto another face, so that in the resulting image the skin color around the eye blends nicely with the face color of the person we are pasting the eye onto. I am able to crop and paste, but I am having problems with blending. Currently, the boundaries of the rectangular cropped eye are very visible after pasting. I want to reduce this effect, so that the eyes blend nicely with the face and the resulting image won't look fake.
My suggestion is to do the blending in code. First, you need to create two bitmap contexts so you have the bits of your face and the bits of your new eye.
In the overlap area only, you need to determine the outermost "skin" area by evaluating the colors of the two areas, and create a mapping of the areas in both that are "skin". You would work from the outermost areas towards the center.
For color evaluation, you should turn colors into HSV (or HCL) and look at hue and saturation.
You will need to figure out some criteria for determining what is skin and what is eye.
Once you have defined the outer area - the one NOT an eye, but skin - you will blend. The blend will use more of the original based on its distance from the center of the eye (or distance to the ellipse defining the eye). Thus, initially, the outer color will be say 5% new, 95% original.
As you get closer to the eye, you will use more of the eye overlay's skin color.
This should produce a really nice image. The biggest problem of course will be getting a good algorithm for separating eye from skin.
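The answer above is about iOS bitmap contexts; purely to illustrate the distance-based weighting it describes (and leaving out the HSV skin test), here is a rough C++/OpenCV sketch with assumed names and percentages:

```cpp
// Sketch: blend an eye patch onto a same-sized face ROI, using mostly the
// original face near the patch border and mostly the patch near its center.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

void blendEyePatch(cv::Mat& faceRoi, const cv::Mat& eyePatch) {
    CV_Assert(faceRoi.size() == eyePatch.size() && faceRoi.type() == CV_8UC3);
    cv::Point2f center(eyePatch.cols / 2.0f, eyePatch.rows / 2.0f);
    float maxDist = std::hypot(center.x, center.y);
    for (int y = 0; y < faceRoi.rows; ++y) {
        for (int x = 0; x < faceRoi.cols; ++x) {
            float d = float(std::hypot(x - center.x, y - center.y)) / maxDist; // 0 at center, ~1 at corners
            float wEye = std::max(0.05f, 1.0f - d);                            // e.g. only ~5% eye at the border
            cv::Vec3b f = faceRoi.at<cv::Vec3b>(y, x);
            cv::Vec3b e = eyePatch.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; ++c)
                faceRoi.at<cv::Vec3b>(y, x)[c] =
                    cv::saturate_cast<uchar>(wEye * e[c] + (1.0f - wEye) * f[c]);
        }
    }
}
```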

OpenCV: Detect a black to white gradient in an area

I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (in the example outlined through the scanlines) that should be around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area, so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles, separated by black and white lines.
Is something like that possible? I googled a lot but only found edge detectors, and that's not what I want; I really need to detect the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histogram.
You can use cvCalcHist for the task, then you can establish some threshold to determine if the black-white pixels percentage corresponds to that of a gradient. This will not solve the task but it will help you in reducing complexity.
Then, you can erode the image to merge all the white areas. After applying threshold, it would be possible to find connected components (using cvFindContours) that will separate images in black zones or white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and black zone simultaneously.
Hope it helps.
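As a loose illustration of that suggestion (using per-area pixel counts rather than a literal cvCalcHist call), one could scan the image in 5x5 cells and keep only the cells that contain both dark and bright pixels; the thresholds below are guesses:

```cpp
// Sketch: flag 5x5 areas that contain both "black" and "white" pixels, as
// candidates for a black-to-white gradient.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> findGradientCells(const cv::Mat& gray, int cell = 5,
                                         int blackMax = 60, int whiteMin = 200) {
    std::vector<cv::Point> hits;                              // top-left corners of candidate cells
    for (int y = 0; y + cell <= gray.rows; y += cell) {
        for (int x = 0; x + cell <= gray.cols; x += cell) {
            cv::Mat roi = gray(cv::Rect(x, y, cell, cell));
            int nBlack = cv::countNonZero(roi < blackMax);    // dark pixels in the cell
            int nWhite = cv::countNonZero(roi > whiteMin);    // bright pixels in the cell
            if (nBlack > 0 && nWhite > 0)
                hits.push_back(cv::Point(x, y));
        }
    }
    return hits;
}
```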
Thanks for your answer dnul, but it didn't really help me work this out. I thought about a histogram approach to the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, through the border of each 5px area. I checked each pixel and saved the ones darker than a certain threshold to a storage.
After the iteration I had a rough idea of how many black pixels there are, so I checked each of them for neighbors with white pixels in all 3 channels. I then marked each of those pixels and saved them to another storage.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which meet each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the corners of the marker.
If you want to reproduce this, you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, save after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
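For anyone reproducing this, here is a rough C++/OpenCV sketch of the transition-pixel step described above, simplified to a grayscale image (the original works on all 3 channels; thresholds and names are assumptions). The returned points are the ones that would then be fed to the RANSAC line fitting:

```cpp
// Sketch: collect dark pixels that have at least one bright neighbour.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> findDarkToWhiteTransitions(const cv::Mat& gray,
                                                  uchar darkMax = 60,
                                                  uchar whiteMin = 200) {
    std::vector<cv::Point> transitions;
    for (int y = 1; y < gray.rows - 1; ++y) {
        for (int x = 1; x < gray.cols - 1; ++x) {
            if (gray.at<uchar>(y, x) > darkMax) continue;       // only consider dark pixels
            bool hasWhiteNeighbour = false;
            for (int dy = -1; dy <= 1 && !hasWhiteNeighbour; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (gray.at<uchar>(y + dy, x + dx) > whiteMin) {
                        hasWhiteNeighbour = true;
                        break;
                    }
            if (hasWhiteNeighbour)
                transitions.push_back(cv::Point(x, y));
        }
    }
    return transitions;
}
```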
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works: it looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also an adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e. 'marks' them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
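For completeness, a minimal C++/OpenCV illustration of the calls being described there (the blur kernel and thresholds are just example values):

```cpp
// Sketch: smooth the image, then mark edge pixels with Canny.
#include <opencv2/opencv.hpp>

cv::Mat cannyEdges(const cv::Mat& gray) {
    cv::Mat blurred, edges;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);  // reduce noise before edge detection
    cv::Canny(blurred, edges, 50, 150);                    // low/high hysteresis thresholds
    return edges;                                          // edge pixels are white, the rest black
}
```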
