I am really new to image processing. Currently I am using OpenCV to process my video stream.
I am trying to detect whether something was added to the frame, and if it was, whether there is a way to keep track of it. I already tried YOLO, but my case is not limited to a fixed set of objects; any random object might come into the frame.
Secondly, I tried the background subtraction method, but there are some objects in my scene that keep moving.
Thirdly, I tried to use contours, but they are not accurate enough.
Please guide me. I have already invested a month in this task and have no clue what to do.
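For context, a background-subtraction attempt of the kind described above might look roughly like this with OpenCV's MOG2 subtractor (a minimal sketch; the video path and the area threshold are placeholder values, not the asker's):

    import cv2

    cap = cv2.VideoCapture("stream.mp4")  # placeholder source
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels (marked 127 by MOG2), keep only strong foreground.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:  # ignore tiny blobs; tune for the scene
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("frame", frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
            break

Note that an adaptive subtractor like MOG2 slowly absorbs a newly placed static object into the background model, which matches the symptom above: moving objects keep triggering while added-but-stationary objects fade out.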
Not sure where to ask this. Please redirect me if SO is not the place.
I want to make a web app that accurately tracks pose in a stationary video of someone pedaling a stationary bike. The joints can be marked with stickers to make the process easier and more accurate. Basically, I want to do what this app does.
First I tried markerless tracking using pose estimation models such as MediaPipe's BlazePose and Google's MoveNet. However, these are not accurate enough. I would also like to track some additional landmarks (ball of the foot, ...).
Then I tried OpenCV.js's Lucas-Kanade optical flow method, but the algorithm lost the tracked point quickly, even when I placed colored tape on the part of the body that I wanted to track.
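For reference, the Lucas-Kanade setup being described looks roughly like this (shown in Python rather than OpenCV.js; the video path, the initial marker coordinate, and the parameters are placeholders, not the exact ones used):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("pedaling.mp4")  # placeholder path
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # One hand-picked point per marker, shape (N, 1, 2), float32.
    points = np.array([[[320.0, 240.0]]], dtype=np.float32)

    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_points, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, points, None, **lk_params)
        for p, s in zip(new_points, status):
            if s:  # this point was found in the new frame
                x, y = p.ravel()
                cv2.circle(frame, (int(x), int(y)), 5, (0, 0, 255), -1)
        prev_gray, points = gray, new_points
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break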
I also tried template matching a single marked point in OpenCV, but it was not very robust, and it would probably not work well when using more markers.
What other methods can I try? Since the app I sent the video to requires stickers to be placed, I thought it was using something like Lucas-Kanade, but as I said, when I tried that, it wasn't able to track the marked point. Because the app is iOS-only, I thought it may be using this API. However, this is only my speculation.
Edit: added example video: https://www.youtube.com/watch?v=eCNyyABfWSE
I tried shooting in slow motion to get more fps, but the quality suffered as a result. Also, I didn't have blue or green tape, so I had to use yellow, which is not very visible on the sweater or on my wrist. But the markers on the pants should be trackable, right?
I can use scikit-learn to train a model and recognize objects, but I also need to be able to tell where in my test images the object is. Is there some way I could get the coordinates of the part of a test image that contains the object I'm trying to recognize?
If not, please refer me to some other library that will help me achieve this task.
Thank you.
I assume that you are talking about a computer vision application. Usually, the way a box is drawn around an identified object is by using a sliding window: you run your classifier on each window as it steps across the image, keep track of which windows come back with positive results, and use those windows as your bounds. You may wish to use windows of various sizes if the object scale changes from image to image; in that case, you would likely want to prefer the smaller of two overlapping windows.
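A minimal sketch of that sliding-window idea, assuming a scikit-learn-style classifier that was trained on flattened windows (the window size, step, and feature handling are placeholders to adapt):

    import numpy as np

    def sliding_windows(image, win=(64, 64), step=16):
        # Yield (x, y, window) for each window position across the image.
        h, w = image.shape[:2]
        for y in range(0, h - win[1] + 1, step):
            for x in range(0, w - win[0] + 1, step):
                yield x, y, image[y:y + win[1], x:x + win[0]]

    def detect(image, classifier, win=(64, 64), step=16):
        # Collect bounding boxes (x, y, w, h) for windows the classifier accepts.
        # Swap the reshape for whatever feature extraction the model was trained on.
        boxes = []
        for x, y, window in sliding_windows(image, win, step):
            if classifier.predict(window.reshape(1, -1))[0] == 1:
                boxes.append((x, y, win[0], win[1]))
        return boxes

To handle scale changes, run the same loop over several resized copies of the image and map the resulting boxes back to the original coordinates, preferring the smaller of two overlapping windows as described above.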
I'm planning to build a program that watches a wall and, if it detects objects on the wall, plays specific music.
Because I'm new to image processing, I thought about and searched for some methods:
1. Just capture every frame, and if certain conditions are met, perform an action (in this case, play music).
2. Save a default image, and keep comparing it with every frame.
I don't know the names of the methods I described above.
Anyway, I think it is more reasonable to use method 2, because it is less expensive in terms of memory.
Would you give me some advice on this problem?
If there is a better method to solve it, could you suggest it?
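A rough sketch of what method 2 could look like with OpenCV (the camera index and the changed-pixel threshold are guesses to tune, not values from the question):

    import cv2

    cap = cv2.VideoCapture(0)
    ok, default = cap.read()  # the saved "default" image of the bare wall
    default_gray = cv2.cvtColor(default, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(default_gray, gray)
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 5000:  # enough pixels changed on the wall
            print("object detected -> play music here")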
I am trying to subtract two images using the function cvAbsDiff(img1, img2, dest);
It works, but sometimes when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture... the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e. when the background is even, like a wall.
Please check out my image so that you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution or hint, please help me.
There's nothing wrong with your code. Background subtraction is not a preferred way to do motion detection or silhouette detection because it is not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground into the background. You might try:
- optical flow for motion detection
- if your task is just detecting a silhouette or a hand, training a HOG classifier on it (see the sketch below)
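A rough sketch of the HOG route, assuming you already have cropped 64x128 grayscale training patches (the zero-filled arrays below are placeholders standing in for real positive/negative crops):

    import cv2
    import numpy as np
    from sklearn.svm import LinearSVC

    hog = cv2.HOGDescriptor()  # default 64x128 detection window

    # Placeholders: replace with real cropped patches of hands / non-hands.
    positives = [np.zeros((128, 64), dtype=np.uint8) for _ in range(10)]
    negatives = [np.zeros((128, 64), dtype=np.uint8) for _ in range(10)]

    def features(patches):
        return np.array([hog.compute(p).ravel() for p in patches])

    X = np.vstack([features(positives), features(negatives)])
    y = np.hstack([np.ones(len(positives)), np.zeros(len(negatives))])

    clf = LinearSVC().fit(X, y)
    # At detection time, compute hog.compute() on each candidate window
    # (e.g. from a sliding window) and call clf.predict() on the result.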
If you do not want to try a new approach, you may try playing with the threshold value (in your case 30): when you subtract two regions of similar colour, their difference is less than 30, so thresholding at 30 blacks them out, and a lower threshold may keep more of the foreground. You may also try HSV or some other colour space.
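For example (a sketch with placeholder file names; img1/img2 stand for the two grayscale frames being subtracted):

    import cv2

    img1 = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    diff = cv2.absdiff(img1, img2)
    # Option 1: lower the fixed threshold so small skin-vs-skin differences survive.
    _, low = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    # Option 2: let Otsu's method pick the threshold automatically.
    _, auto = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # A different colour space may also separate the regions better: convert both
    # frames with cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) and difference the
    # channels separately.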
Putting in the relevant code would help, as would knowing what you're actually trying to achieve.
Which two images are you subtracting? I've subtracted subsequent images (images taken a fraction of a second apart), and that generally yields the edges of moving objects, for example the edges of a hand, not the entire silhouette. I'm guessing you're taking the difference of the current frame and a static startup frame; it's possible that parts aren't different enough (skin on skin).
I've got some computer problems tonight; I'll test it out tomorrow (please post at least the steps you actually carry out, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture-recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that comes from the subjects having similar colors.
I reproduced your image by having my face in the static comparison image, then moving it. If I started with only the background, it was already much more robust and in any case didn't display any "overlaying".
The quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest is comparing frame_n with frame_n-1. This will generally give you just the moving edges, though, so if you want the entire silhouette you can either:
1) Use a different segmenting algorithm (what I recommend: background subtraction is fast, and you can use it to determine a much smaller ROI in which to search, then use a different algorithm for more robust segmentation).
2) Try a compromise between the static and dynamic comparison image, for example an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement and worth a try (see the sketch below).
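A sketch of option 2, using cv2.accumulateWeighted as the running average (the alpha of 0.1 is a placeholder; it weights roughly the last ten frames):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    background = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        cv2.accumulateWeighted(gray, background, 0.1)  # update the comparison image
        cv2.imshow("motion", mask)
        if cv2.waitKey(30) & 0xFF == 27:
            break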
Also, try CV_THRESH_OTSU instead of 30 for your threshold value and see if you like that better.
Also, I noticed the output often flares (regions which haven't changed switch from black to white). Checking with the live stream, I'm quite certain it's because the webcam is autofocusing/adjusting white balance, etc. If you're getting that too, turning off the autofocus etc. should help (which, by the way, isn't done through OpenCV but depends on the camera; possibly check this: How to programmatically disable the auto-focus of a webcam?)
I am working on a project where I take a video with a camera and convert the video to frames (this part of the project is done).
What I am facing now is how to detect moving objects in these frames and differentiate them from the background so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; otherwise I recommend a smoothing kernel first) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be pretty static for this to work. If you always take the absolute difference between the current frame and the nth previous frame, you'll get a grayscale image containing the object that moved. The object has to differ from the background color, or it will disappear...
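A minimal sketch of that, with a Gaussian blur as the smoothing kernel mentioned (the video path and kernel size are placeholders):

    import cv2

    cap = cv2.VideoCapture("frames.mp4")  # placeholder path
    ok, prev = cap.read()
    prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
        moved = cv2.absdiff(prev, gray)  # grayscale image of what changed
        cv2.imshow("moved", moved)
        prev = gray
        if cv2.waitKey(30) & 0xFF == 27:
            break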