I have images such as the one below (I am pasting only a sketch here) where I want to calculate the center of symmetry and the displacement between the two marked zones in the image (marked in red and blue). Could anyone suggest a simple algorithm for this? (Please note the signal is symmetric under 180-degree rotation.)
The idea is to calculate the center of symmetry between the red and blue zones.
Here is my algorithm approach:
Distinguish the red and blue pixels in the image by thresholding: let's say set the blue pixels to white (255), set the red pixels to black (0), and set the rest of the image to gray (125) in a single-channel image.
Check all of the pixels along the middle row of the image.
While scanning, if you hit a red pixel, start searching for a blue one. If you hit another red pixel first, restart the search for blue from there.
If, after a red pixel, you hit a blue pixel, you can easily calculate the midpoint between them. Then restart the process.
Here is a demonstration of search line:
Note: since the zones are dashed lines, a single scan line may unluckily pass through their gaps. To fix this, you can try a couple of different random rows.
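A minimal sketch of that scan in OpenCV C++ (assuming the thresholded single-channel image from step 1, with red = 0, blue = 255, background = 125; the function name findCenters is hypothetical):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Scan one row and return the midpoint of every red->blue pair found.
std::vector<Point> findCenters(const Mat& gray, int row) {
    std::vector<Point> centers;
    int lastRed = -1;
    for (int x = 0; x < gray.cols; ++x) {
        uchar v = gray.at<uchar>(row, x);
        if (v == 0) {
            lastRed = x;  // hit red: (re)start the search for blue
        } else if (v == 255 && lastRed >= 0) {
            centers.push_back(Point((lastRed + x) / 2, row)); // midpoint of the pair
            lastRed = -1; // restart the process
        }
    }
    return centers;
}

Calling this on a few random rows guards against the dashed-line gaps mentioned above.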
Related
I'm trying to blindly detect signals in a spectra.
One way that came to my mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all the horizontal rectangles in an image? (The heights of the rectangles don't matter to me.)
An example image will be uploaded. (Note: I know that all rectangles are horizontal.)
I would appreciate it if you give me any other suggestion for this purpose.
E.g., I want the algorithm to give me 9 centers and 9 widths for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this is not the case with unaligned rectangles, since they are not clearly separated). The idea is to first compute the average color of each row and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance, and apply a threshold. You can remove some artifacts using a median filter or blur beforehand.
Then you can just scan the resulting 1D array of binary values to locate where each rectangle starts/stops. The center of each rectangle is ((x_start+x_end)/2, (y_start+y_end)/2).
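A sketch of that pipeline for the horizontal axis in OpenCV C++ (assumptions: the background color is taken from the first column of the profile, and the distance to the background color is used in place of a luminance threshold; the threshold of 30 is illustrative):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Average each column, subtract the background, threshold, then scan the
// resulting 1D profile for the start/stop of each rectangle.
std::vector<std::pair<int, int>> findSpans(const Mat& src) {
    Mat profile;
    reduce(src, profile, 0, REDUCE_AVG, CV_32F); // 1 x cols, 3 channels
    Vec3f bg = profile.at<Vec3f>(0, 0);          // assume column 0 is background
    std::vector<std::pair<int, int>> spans;
    int start = -1;
    for (int x = 0; x < profile.cols; ++x) {
        bool fg = norm(profile.at<Vec3f>(0, x) - bg) > 30.0; // illustrative threshold
        if (fg && start < 0) start = x;                      // rectangle starts
        if (!fg && start >= 0) { spans.emplace_back(start, x - 1); start = -1; }
    }
    if (start >= 0) spans.emplace_back(start, profile.cols - 1);
    return spans; // center_x = (start + end) / 2
}

The same scan over the row averages (reduce with dim = 1) gives the vertical extents; both reductions are linear in the pixel count, which fits a 0.1 s budget comfortably.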
I have a project in which I need to follow the points like in the picture. I made a picture like this as a representation. I get the coordinates of the red dots as pixel coordinates once per second. Is there a way I can predict the green dots when I don't take measurements afterwards? When I use a 2D Kalman filter (x, y, x', y' states), it keeps guessing in the direction of the blue arrow. I want to make correct predictions at the green dots, not in that direction. How can I do that?
Thanks.
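For reference, a minimal sketch of the constant-velocity filter described in the question (OpenCV C++; the noise covariances are illustrative). Because the transition matrix only propagates position along a fixed velocity, predictions made without new measurements continue in a straight line, which is exactly the blue-arrow behavior; following a curve would require a richer motion model in the state.

#include <opencv2/video/tracking.hpp>
using namespace cv;

// State = (x, y, x', y'), measurement = (x, y), dt = 1 s between fixes.
KalmanFilter kf(4, 2, 0);
kf.transitionMatrix = (Mat_<float>(4, 4) <<
    1, 0, 1, 0,   // x  += x'
    0, 1, 0, 1,   // y  += y'
    0, 0, 1, 0,   // x' stays constant
    0, 0, 0, 1);  // y' stays constant
setIdentity(kf.measurementMatrix);
setIdentity(kf.processNoiseCov, Scalar::all(1e-2));
setIdentity(kf.measurementNoiseCov, Scalar::all(1e-1));

// Once per second: predict, then correct with the measured red dot.
float px = 0.f, py = 0.f; // measured pixel coordinates (placeholder values)
Mat prediction = kf.predict();            // extrapolates along the velocity vector
Mat measurement = (Mat_<float>(2, 1) << px, py);
kf.correct(measurement);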
I have an image with a gentle gradient background and sharp foreground features that I want to detect. I wish to find green arcs in the image. However, the green arcs are only green relative to the background (they are semitransparent). The green arcs are also only one or two pixels wide.
As a result, for each pixel of the image, I want to subtract the average color of the surrounding pixels. The surrounding 5x5 or 10x10 pixels would be sufficient. This should leave the foreground features relatively untouched but remove the soft background, and I can then select the green channel for further processing.
Is there any way to do this efficiently? It doesn't have to be particularly accurate, but I want to run it at 30 Hz on a 1080p image.
You can achieve this by doing a simple box blur on the image and then subtracting that from the original image:
Mat blurred, diff;
blur(img, blurred, Size(15, 15)); // box blur estimates the smooth background
diff = img - blurred;             // subtracting it leaves only the sharp features
I tried this on a 1920x1080 image; the blur took 25 ms and the subtraction took 15 ms on a decent-spec iMac.
If your background is not changing fast, you could calculate the blur over the space of a few frames in a second thread and keep re-using it until you recalculate it a few frames later. Then you would only have the 15 ms subtraction to do for each of your 30 fps frames, rather than 45 ms of processing.
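A sketch of that two-thread scheme (assumptions: a hypothetical grabFrame() capture function, and refreshing the background every 30 frames, i.e. roughly once per second):

#include <opencv2/opencv.hpp>
#include <mutex>
#include <thread>

cv::Mat grabFrame(); // hypothetical capture function (assumed to exist)

std::mutex mtx;
cv::Mat background;  // shared blur estimate, refreshed by the worker thread

void refreshBackground(cv::Mat frame) {
    cv::Mat blurred;
    cv::blur(frame, blurred, cv::Size(15, 15)); // the expensive ~25 ms step
    std::lock_guard<std::mutex> lock(mtx);
    background = blurred;
}

int main() {
    refreshBackground(grabFrame()); // initial estimate
    for (int i = 1; ; ++i) {
        cv::Mat frame = grabFrame();
        if (i % 30 == 0)            // re-estimate about once per second at 30 fps
            std::thread(refreshBackground, frame.clone()).detach();
        cv::Mat diff;
        {
            std::lock_guard<std::mutex> lock(mtx);
            diff = frame - background; // only the cheap ~15 ms subtraction per frame
        }
        // ... process diff ...
    }
}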
Depending on how smooth the background gradient is, edge detection followed by dilation might work well for you.
Edge detection will output the 3 lines that you've shown in the sample and discard the background gradient (again, depending on its smoothness). Dilating the image will then thicken the edges so that your line contours are more significant.
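One possible way to produce such an edge map (a sketch; the Canny thresholds and the dilation iterations are illustrative and need tuning for your image):

Mat gray, edges; // src: the input BGR image (assumed)
cvtColor(src, gray, COLOR_BGR2GRAY);
Canny(gray, edges, 50, 150);                   // illustrative thresholds
dilate(edges, edges, Mat(), Point(-1, -1), 2); // thicken the 1-2 px contours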
You can superimpose this edge map on your original image as:
Mat src, edges;
// perform edge detection on src and output the single-channel edge map in edges
Mat result;
cvtColor(edges, edges, COLOR_GRAY2BGR); // change the mask to a 3-channel image
subtract(edges, src, result);           // edge pixels become 255 - src; the rest saturate to 0
subtract(edges, result, result);        // edge pixels recover src; the rest stay 0
The result should contain only the 3 lines with 3 colours. From here on you can apply color filtering to select the green one.
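For instance, a sketch of that color filtering step in HSV (the hue bounds for green are illustrative):

Mat hsv, greenMask, greenOnly;
cvtColor(result, hsv, COLOR_BGR2HSV);
inRange(hsv, Scalar(40, 40, 40), Scalar(80, 255, 255), greenMask); // green-ish hues
result.copyTo(greenOnly, greenMask); // keep the green line, zero out the rest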
I know that the mean-shift algorithm calculates the mean of a pixel density and checks whether the center of the ROI coincides with that point. If not, it moves the ROI center to the mean and checks again, like in this picture:
For a density it is clear how to find the mean point. But the algorithm can't simply calculate the mean of a histogram and derive the new position from that point. How can this algorithm work using a color histogram?
The feature space in your image is 2D.
Say you have an intensity image, so the feature space is 1D: you would just have a line (e.g. from 0 to 255) on which the points are located. The circles shown above would just be line segments on that [0, 255] line. Depending on their means, these line segments would then shift, just like the circles do in 2D.
You talked about color histograms, so I assume you are talking about RGB.
In that case your feature space is 3D, so you have a sphere instead of a line segment or circle. Your axes are R, G, and B, and the pixels of your image are points in that 3D feature space. You then still compute the mean of the points inside the sphere and shift its center towards that mean.
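As a sketch, one mean-shift iteration in that 3D RGB feature space (the sphere radius is illustrative, and meanShiftStep is a hypothetical helper):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Move the sphere center to the mean of the feature points inside it.
Vec3f meanShiftStep(const std::vector<Vec3f>& points, Vec3f center, float radius) {
    Vec3f sum(0, 0, 0);
    int count = 0;
    for (const Vec3f& p : points)
        if (norm(p - center) <= radius) { sum += p; ++count; } // inside the sphere
    return count > 0 ? sum * (1.0f / count) : center;          // shift to the mean
}

Iterating this until the shift becomes negligible converges to a mode of the color distribution, just as the circles converge in the 2D picture.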
Let's say I take a picture of two hammers side by side (they may be aligned differently, but always one on the right and one on the left), where each might look like this, and I want to calculate the ratio of the lengths of the handles of the hammers.
For example, the output from an input image would be the length of the red part of the one on the left (its handle) divided by the length of the handle of the one on the right.
How would I go about doing this?
If you know the handle color, it doesn't sound hard. Just select those pixels and take the longer side of a minimum-area oriented bounding box.
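A sketch of that idea in OpenCV C++ (assuming a BGR image src where the handles are reddish; the inRange bounds are illustrative):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>
using namespace cv;

Mat mask;
inRange(src, Scalar(0, 0, 100), Scalar(80, 80, 255), mask); // reddish pixels (BGR)
std::vector<Point> pts;
findNonZero(mask, pts);
RotatedRect box = minAreaRect(pts);
float handleLength = std::max(box.size.width, box.size.height); // longer side

Doing this separately for each hammer's mask and dividing the two lengths gives the requested ratio.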
Here are a couple of hints:
Make sure that the bounding boxes of the hammers don't overlap. If you can guarantee this, try this approach (a rough code sketch follows after these steps):
Scale the image to width = 10%, height = 10 px. Find the largest run of background-colored pixels near the middle of the image. That allows you to separate the two hammers into individual images. Multiply the positions by 10 to transform them back into coordinates of the original image.
Create two images (one for each hammer)
Crop the border
Scale the image to width = 10 px, height = 10%. Count all reddish pixels (save the image and examine the pixel values for the red and non-red parts to get an idea of what to look for).
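A rough sketch of these hints (assumptions: BGR input, a light background so the brightest downscaled column marks the gap between the hammers, and an illustrative reddish threshold; handleRatio is a hypothetical helper):

#include <opencv2/opencv.hpp>
using namespace cv;

double handleRatio(const Mat& src) {
    // Downscale so each column summarizes a vertical strip of the original.
    Mat tiny;
    resize(src, tiny, Size(src.cols / 10, 10));

    // Find the most background-like (here: brightest) column near the middle.
    int split = tiny.cols / 2;
    double best = -1;
    for (int x = tiny.cols / 3; x < 2 * tiny.cols / 3; ++x) {
        Scalar colSum = sum(tiny.col(x));
        double s = colSum[0] + colSum[1] + colSum[2];
        if (s > best) { best = s; split = x; }
    }
    split *= 10; // back to original image coordinates

    // Count reddish pixels in each half as a proxy for handle length.
    auto countRed = [](const Mat& img) {
        Mat mask;
        inRange(img, Scalar(0, 0, 100), Scalar(80, 80, 255), mask); // illustrative
        return countNonZero(mask);
    };
    Mat left = src(Rect(0, 0, split, src.rows));
    Mat right = src(Rect(split, 0, src.cols - split, src.rows));
    return (double)countRed(left) / countRed(right);
}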