How to detect and track random shapes in images - image-processing

I have been working on image processing for a while, using C# with EmguCV.
What I need is to detect the line shown in the picture and track it across each frame captured from a web cam.
I want to detect the edges of the figure and track them in every frame taken with the web cam.
What is the best way to do it?
What I am thinking is to do edge detection first with a Canny filter.
But then I am a bit confused about how to track the figure.
I thought about the Hough line transform, but I am not sure it is a suitable method for curved lines.
Please help.

Related

How to do a perspective transformation of an image which is missing corners using opencv java

I am trying to build a document scanner using OpenCV and to auto-crop an uploaded image. I have a few use cases where there is a gap in the border because the document is partly out of frame in the captured image.
Example image
Below is the Canny edge detection of the given image.
The borders are missing here, so findContours does not return proper results.
How can I handle such images?
Neither automatic Canny edge detection nor dilation works in such cases, because dilation can only join small gaps between edges.
Also, a few documents might have only 2 or 3 sides captured by the camera; how can we crop away the other areas that are not required?
Example Image:
Is there a specific technique for handling such documents?
Please suggest a few ideas.
Your problem is unusual. One way to solve it that comes to my mind is to:
Add white borders around the image.
https://docs.opencv.org/3.4/dc/da3/tutorial_copyMakeBorder.html
Find lines in the edges:
http://www.robindavid.fr/opencv-tutorial/chapter5-line-edge-and-contours-detection.html
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
Use the probabilistic Hough lines transform.
Crop the image along these lines. This will certainly work for images like the first one.
For images like the second one you can use perpendicular and parallel lines.
Your algorithm will have to be fairly complex to work well. The easiest way is to take a picture of the whole document, if that is possible.
Good luck!

Detecting drawn lines and dots on a notebook paper

I have a picture of a notebook page (with squares), and lines and dots are drawn on it as in the description. The output should be a data structure containing information about the boundaries and dots. How can one accomplish that? If possible, the program should process this dynamically (given a video).
Yes, this can be accomplished by various image processing techniques.
One famous technique that can help is the Canny edge detector. It can detect all the well-defined edges within an image. Various Python and C# image processing libraries, such as OpenCV, make this extremely easy.
For detecting dots in the middle of the edges, that would be up to you to come up with, unless anyone knows of a library that makes that easy as well. I suggest looking at each square detected by the Canny edge detector and checking whether there are any dark color values around its middle.
For the data structure, that is also up to you.
Remember that a video is just a sequence of images: just apply the same technique to every frame.

How to detect this image using OpenCV? (In real time on an iOS device)

I've been trying for a while to detect an image that looks like this:
Unfortunately I haven't been very lucky.
This image has to be detected so that I can "crop" a scene to whatever is below it.
I've been trying different feature detectors, such as ORB, FAST and BRISK. Although I'm getting some keypoints that look promising, when I try to find the homography and apply the perspective transform to calculate my "scene corners", the results make absolutely no sense.
I suspect the issue might be how "simple" the marker is: the corner points of the image are technically almost identical, and the points within the small triangle in the middle are also very alike.
I'm looking for advice or suggestions on how to approach this problem.
Edit:
My object to detect is an image, but the scene it has to be detected in is the video feed from an iOS camera.
Edit 2:
I've replaced the top image with something more complex, hoping the increase in keypoints would finally allow me to detect the object, but still no luck.
Here is the new top image:
And this is a sample of a frame: (taken as a screenshot of the iphone screen)
Keypoints detected:

How to detect perspective distortion from single image in OpenCV?

I am making a program that recognizes horizontal and vertical straight lines in an image file and creates line data for other purposes.
However, when I take pictures from diagonally sideways (or up/downwards), the picture won't contain horizontally/vertically straight lines, so I cannot use it.
So I have to build an image pre-processing method that inverts the perspective warping. To do so, I must first find the current projection of the image.
Unfortunately I couldn't find a way to do this with OpenCV, short of adding a camera-calibration step before taking the picture.
I assume that most lines in the input images should be horizontally/vertically straight. Are there any methods in OpenCV to solve my problem?
For example:
This image is perspectively warped. I want to make it look like this:

Determine movement/motion (in pixels) between two frames

First of all I'm a total newbie in image processing, so please don't be too harsh on me.
That being said, I'm developing an application to analyse changes in blood flow in extremities using thermal images obtained by a camera. The user is able to define a region of interest by placing a shape (circle, rectangle, etc.) on the current image. The user should then be able to see how the average temperature changes from frame to frame inside the specified ROI.
The problem is that some of the images are not steady, due to (small) movement by the test subject. My question is how can I determine the movement between the frames, so that I can relocate the ROI accordingly?
I'm using the Emgu OpenCV .Net wrapper for image processing.
What I've tried so far is calculating the center of gravity using GetMoments() on the biggest contour found, and computing the direction vector between this and the previous center of gravity. The ROI is then translated by this vector, but the results are not that promising yet.
Is this the right way to do it or am I totally barking up the wrong tree?
------Edit------
Here are two sample images showing slight movement downwards to the right:
http://postimg.org/image/wznf2r27n/
Comparison between the contours:
http://postimg.org/image/4ldez2di1/
As you can see the shape of the contour is pretty much the same, although there are some small differences near the toes.
It seems I was finally able to find a solution to my problem using optical flow based on the Lucas-Kanade method.
Just in case anyone else is wondering how to implement it in Emgu/C#, here's the link to an Emgu examples project that uses the Lucas-Kanade and Farneback algorithms:
http://sourceforge.net/projects/emguexample/files/Image/BuildBackgroundImage.zip/download
You may need to adapt a few things, e.g. the parameters for the corner detection (the frame.GoodFeaturesToTrack(..) method), but it's definitely something to start with.
Thanks for all the ideas!
