How to draw in OpenCV with an active camera stream?

I am trying to make something similar to this: https://www.youtube.com/watch?v=D2Kb3ryfGNc
I succeeded in detecting the laser position, but now I can't figure out how to paint where the laser has been.
Do I need to draw lines where the laser has been in one frame and add them to each camera-stream frame to make sure the lines stay painted?

Here's the thing: when you stream continuous video into an OpenCV Mat object, it holds one frame after another, so the information of the nth frame is lost when the (n+1)th frame arrives.
What you need are two Mat objects: one to stream the camera (say Mat_cam) and one to draw the laser trajectory (Mat_traj). Mat_cam is used to track the laser position frame by frame, using any standard colour-thresholding algorithm. The video even says that the laser should be bright, so jimez86 might be using a white colour threshold followed by largest-blob localization.
When a new laser position arrives in the nth frame, draw a corresponding circle on Mat_traj. When the next frame is received, Mat_cam is overwritten and holds the new laser position, but Mat_traj stays the same: it is never cleared or refreshed between 'for'-loop iterations, so it accumulates the whole trajectory. Blending Mat_traj and Mat_cam with weighted addition (addWeighted) gives the desired result. Follow the sketch below:
#include <opencv2/opencv.hpp>
using namespace cv;

Point getLaserCentre(const Mat& frame); // you'll be defining this function

int main()
{
    // Mat takes (rows, cols, ...); make the trajectory image 3-channel so its
    // type matches the colour camera frames, as addWeighted() requires.
    Mat Mat_traj(480, 640, CV_8UC3, Scalar(0, 0, 0)), Mat_cam, Mat_res;
    VideoCapture cam(0);
    for (;;)
    {
        cam >> Mat_cam;
        Point laserCentre = getLaserCentre(Mat_cam);
        circle(Mat_traj, laserCentre, 2, Scalar(0, 0, 255), -1); // add the new position to the trajectory
        addWeighted(Mat_cam, 0.7, Mat_traj, 0.3, 0.0, Mat_res);  // blend camera frame and trajectory
        imshow("out", Mat_res);
        waitKey(10);
    }
}

Related

Detecting contours of predefined shape with OpenCV

I'm working on a project which locates the Machine Readable Zone (MRZ) on ID cards.
For this I need to do some pre-processing to extract the ID card from a scanned image; the cards are typically placed at random on a white page. I'm able to locate the majority of the cards by using histogram equalization with CLAHE before contour detection. But in some cases the border around the MRZ is totally invisible (white on white), as shown in the attached image.
I'd like to detect rectangles of a predefined shape, as I know the shape of the ID card will always be the same, but so far I wasn't able to find a way to do something like this with OpenCV.
Basically what I need is to find two rectangles of a fixed aspect ratio that best match the two cards on the scan.
I'm wondering if I need to try OpenCV matchers or if there is a simpler way to accomplish this kind of detection.
The solution to your problem is likely going to be matrix transformations. The idea is to pinpoint 4 coordinates on the card that can be easily detected with OpenCV, such as the rectangle coloured in blue & cyan.
Store the coordinates of the card of the predefined shape in an array, with one corner of the card at (0, 0). Also store the coordinates of the blue & cyan rectangle in an array. With the two arrays you can find the perspective transform between them using the cv2.getPerspectiveTransform method.
Using the perspective transform found, you can compute the coordinates of the whole card every time you detect the coordinates of the blue & cyan rectangle.
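For illustration, here is a minimal C++ sketch of that idea (the answer mentions cv2, but the same calls exist in the C++ API; all coordinates, the input filename, and the detectBlueCyanRect() helper are made-up placeholders):

#include <opencv2/opencv.hpp>
using namespace cv;
using std::vector;

// Hypothetical detector: however you locate the 4 corners of the blue & cyan
// rectangle in the scan (e.g. colour threshold plus contour fitting).
vector<Point2f> detectBlueCyanRect(const Mat& scan);

int main()
{
    Mat scan = imread("scan.png"); // assumed input file

    // Reference geometry with a card corner at (0, 0); all numbers are placeholders.
    vector<Point2f> refCard = { {0, 0}, {856, 0}, {856, 540}, {0, 540} };
    vector<Point2f> refBlue = { {40, 60}, {300, 60}, {300, 180}, {40, 180} };

    // The 4 corners of the blue & cyan rectangle found in the scan.
    vector<Point2f> scanBlue = detectBlueCyanRect(scan);

    // Reference -> scan mapping from the rectangle's 4 corner correspondences.
    Mat H = getPerspectiveTransform(refBlue, scanBlue);

    // Project the full card outline into the scan with the same transform.
    vector<Point2f> scanCard;
    perspectiveTransform(refCard, scanCard, H);
    // scanCard now holds the 4 card corners in the scanned image.
}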

Frame Difference for a non-static camera

I am trying to detect motion in a video taken by a non-static camera, in this case a UAV.
What I planned to do is remove the camera-motion effect by aligning the frames as far as they overlap, then do simple differencing. Here is what I did (a sketch of this pipeline follows the linked video below):
- I used SURF to get matching points between frames
- I supplied those points to findHomography to get the matrix H
- I warped the new frame using H
* all done using OpenCV
* to save computation power and time, I used a mask with SURF; the mask is 4 squares, one at each corner
The concept works great for a static image, but in the video the warped frame gives strange results, sometimes good, sometimes bad:
https://www.youtube.com/watch?v=WKVoUR_-DFw #00:34
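For reference, the alignment pipeline described above might look roughly like this in C++ (ORB stands in for SURF below, since SURF lives in the non-free xfeatures2d contrib module; the four corner masks are omitted for brevity):

#include <opencv2/opencv.hpp>
using namespace cv;
using std::vector;

// Warp 'curr' into 'prev''s frame, then difference them.
Mat alignedDiff(const Mat& prev, const Mat& curr)
{
    Ptr<ORB> orb = ORB::create();
    vector<KeyPoint> kp1, kp2;
    Mat d1, d2;
    orb->detectAndCompute(prev, noArray(), kp1, d1);
    orb->detectAndCompute(curr, noArray(), kp2, d2);

    BFMatcher matcher(NORM_HAMMING, /*crossCheck=*/true);
    vector<DMatch> matches;
    matcher.match(d1, d2, matches);

    vector<Point2f> p1, p2; // needs at least 4 matches for a homography
    for (const DMatch& m : matches) {
        p1.push_back(kp1[m.queryIdx].pt);
        p2.push_back(kp2[m.trainIdx].pt);
    }

    // RANSAC discards outlier matches that would otherwise corrupt H.
    Mat H = findHomography(p2, p1, RANSAC, 3.0);

    Mat warped, diff;
    warpPerspective(curr, warped, H, prev.size()); // align curr to prev
    absdiff(prev, warped, diff);                   // residual = candidate motion
    return diff;
}

The RANSAC flag is likely the important part here: a handful of bad matches is the usual cause of a warp that is "sometimes good, sometimes bad", and a robust fit discards them.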

glReadPixels on separate layers

I'll get straight to the point :)
From the above 480 x 320 diagram, I am thinking I can detect collisions at the pixel level, like in a worm game.
What I want to know is how to sample pixels on separate layers. As you can see in the diagram, as the worm is falling, I want to sample only the black pixels with glReadPixels() to see if the worm is standing on (colliding with) any terrain, but when I last tried it, glReadPixels() sampled all pixels on screen, with no notion of "layers".
The white pixels are the background and should not be part of the sampling.
Am I perhaps supposed to keep a black-and-white copy of my terrain in a separate buffer and call glReadPixels() on that buffer, so that the background image (white pixels) won't get sampled?
Before, I was drawing my terrain in the same buffer/context where I draw my background image.
Any ideas?
What glReadPixels does is read back the currently bound buffer. Since that buffer is the output of all your compositing, it will contain everything you drew; it knows nothing about your logical arrangement into layers. You can try drawing your terrain into the stencil buffer and reading back only that plane, passing GL_STENCIL_INDEX (or GL_DEPTH_STENCIL, to get depth as well) as the format parameter.
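A rough sketch of that stencil route (drawBackground() and drawTerrain() stand for your existing draw calls, the worm rectangle values are assumptions, and the GL context must have been created with stencil bits):

// Per frame: clear colour and stencil, draw the background with the stencil
// test off, then tag every terrain fragment with stencil value 1.
glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
drawBackground();                           // leaves stencil at 0

glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // write 1 wherever terrain is drawn
drawTerrain();
glDisable(GL_STENCIL_TEST);

// Read back only the stencil plane under the worm's bounding box
// (window coordinates, origin bottom-left; values below are assumptions).
int wormX = 100, wormY = 50, wormW = 16, wormH = 16;
std::vector<GLubyte> stencil(wormW * wormH);
glReadPixels(wormX, wormY, wormW, wormH,
             GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, stencil.data());
// Any nonzero byte means the worm overlaps terrain at that pixel.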

OpenCV continuous speed measurement using a camera

I am new to OpenCV, so bear with me if there are simple things that I am missing here.
I am trying to work out a camera based system that can continuously output the speed of a vehicle with the following assumptions:
1. The camera is placed horizontally and the vehicle passes within 3 to 5 feet of the camera lens.
2. The speed will not be more than 30 km/h.
I was hoping to start from the concept of an optical mouse, which detects displacement from the surface pattern. However, I am unclear how to handle the background when the vehicle starts to enter the frame.
There are two methods I was interested in experimenting with, but I am looking for further input.
Detect the vehicle as it enters the frame and separate it from the background.
Use cvGoodFeaturesToTrack to find points on the vehicle.
Track the points into the next frame and calculate the horizontal velocity using the pyramidal Lucas-Kanade optical-flow function.
Repeat
Please suggest corrections and amendments.
I would also ask more experienced members to help me code this procedure efficiently, since I don't know which functions are best suited here.
Thanks in advance.
Hopefully you are using a simple camera at 20 to 30 fps, placed perpendicular to the road but away from it. Your objects, i.e. the cars, have a maximum velocity of about 8 m/s (30 km/h); calculate the corresponding speed in the image plane with the help of the lens you are using:
(speed in object plane / distance of camera from road) = (speed in image plane / focal length)
That gives the image-plane speed in physical units; you get pixels per second once you know how much each pixel measures.
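For example (all numbers assumed): with a 4 mm lens and a car 1.5 m away moving at 8 m/s, the image-plane speed is 8 x 0.004 / 1.5 ≈ 0.021 m/s on the sensor; with 5 µm pixels that is roughly 4,300 pixels per second.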
Steps:
You can use frame differencing: subtract the current frame from the previous one and take the absolute difference, then threshold it. This segments your moving car from the background. Remember that it segments all moving objects, so if you want a car and not a moving person, use a shape characteristic such as the height-to-width ratio. Fit a rectangle to the segmented part and repeat the same steps in each frame, keeping a record of the coordinate of the leading edge of the bounding box in every frame. That way, from the moment a car enters the view until it passes out of it, you know how long it has persisted; use the number of frames, the frame rate, and the coordinates of the leading edge of the bounding box to calculate the speed.
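A bare-bones C++ sketch of that differencing loop (the threshold of 30, the 500 px size gate, and the width > height shape test are all assumed placeholders):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cam(0);
    Mat frame, prevGray, gray, diff, mask;
    cam >> frame;
    cvtColor(frame, prevGray, COLOR_BGR2GRAY);

    for (;;) {
        cam >> frame;
        cvtColor(frame, gray, COLOR_BGR2GRAY);
        absdiff(gray, prevGray, diff);                  // frame difference
        threshold(diff, mask, 30, 255, THRESH_BINARY);  // segment moving pixels

        std::vector<std::vector<Point>> contours;
        findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

        int leadingEdge = -1; // x-coordinate of the car's front edge, -1 = no car
        for (const auto& c : contours) {
            Rect box = boundingRect(c);
            // Keep car-shaped blobs only: wider than tall, and big enough.
            if (box.width > box.height && box.area() > 500)
                leadingEdge = box.x + box.width;
        }
        // Track leadingEdge across frames:
        // speed = (edge displacement) * fps / (pixels per metre).

        gray.copyTo(prevGray);
    }
}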
You can use goodFeaturesToTrack and OpenCV's optical flow; that way you can distinguish between fast-moving and slow-moving objects. But keep refreshing the points that goodFeaturesToTrack gives you, or else any new car coming into the camera view will not be picked up. Record the displacement of the set of points picked by goodFeaturesToTrack in each frame; that is the displacement of the moving object, and you calculate the speed in the same way. The basic idea is to record the number of frames for which the object persists in the camera's field of view: if your camera is fixed, so is your field of view, hence what matters is in how many frames you are able to catch the object.
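And a corresponding sketch of the goodFeaturesToTrack plus pyramidal Lucas-Kanade route (all parameter values are assumptions):

#include <opencv2/opencv.hpp>
using namespace cv;
using std::vector;

int main()
{
    VideoCapture cam(0);
    Mat frame, prevGray, gray;
    vector<Point2f> prevPts, nextPts;

    cam >> frame;
    cvtColor(frame, prevGray, COLOR_BGR2GRAY);
    goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10); // up to 200 corners (assumed)

    for (;;) {
        cam >> frame;
        cvtColor(frame, gray, COLOR_BGR2GRAY);

        if (!prevPts.empty()) {
            vector<uchar> status;
            vector<float> err;
            calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

            // Mean horizontal displacement of successfully tracked points,
            // in pixels per frame; multiply by fps for pixels per second.
            double dx = 0; int n = 0;
            for (size_t i = 0; i < status.size(); ++i)
                if (status[i]) { dx += nextPts[i].x - prevPts[i].x; ++n; }
            if (n) dx /= n;
        }

        gray.copyTo(prevGray);
        // Refresh the features every frame so newly entering cars get points too.
        goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);
    }
}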
Remember: OpenCV's optical flow is for tracking slow-moving objects; more precisely, the displacement of a feature point (found by goodFeaturesToTrack) must be small between two consecutive frames for the algorithm to work. Large displacements will produce erroneous predictions, which is why the speed in the image plane is important; you should have at least a qualitative idea of it.
NOTE: both methods are for single-object tracking; for multiple-object tracking you need some modifications. However, you can start with either method; I think it will work.

Transform position of a point from one perspective into another

I'm trying to convert the position of a point which was filmed with a freely moving camera (local space) into the position in an image of the same scene (global space). The position of the point is given in local space and I need to calculate it in global space. I have markers distributed all over the scene so that I have corresponding points in both global and local space with which to calculate the perspective transform.
I tried to calculate the perspective transform matrix by comparing the points of corresponding markers in global and local space with the help of JavaCV (cvGetPerspectiveTransform(localMarker, globalMarker, mmat)). Then I transform the position of the point in local space with the help of the perspective transform matrix (cvPerspectiveTransform(localFieldPoints, globalFieldPoints, mmat)).
I thought that would be enough to solve my problem, but it doesn't quite work. I also noticed that when I calculate the perspective transform matrix from different markers in one specific image of the video, I get different perspective transform matrices. If I understood everything correctly, this shouldn't happen: the perspective is always the same here, so I should always get the same perspective transform matrix, shouldn't I?
Because I'm quite new to all of this and this was my first attempt, I just wanted to know if the method I used is generally right, or whether it should be done differently. Maybe I just missed something?
EDIT:
Again, I have one image of the complete scene I am looking at and a video from a camera which moves freely in the scene. Now I take every image of the video and compare it with the image of the complete scene. (I used different cameras for the image and the video, so the camera intrinsics actually aren't the same. Could that be the problem?)
Perspective Transform Screenshot.
On the right side I have the image of the scene; on the left, one image of the video. The red circle in the left (video) image is the given point. The red square in the right image is the calculated point, obtained via the perspective transform. As you can see, the calculated point isn't at the right position.
What I meant by "I get different perspective transform matrices" is that when I calculate a perspective transform matrix with the help of marker "0E3E", I get a different matrix than when using marker "0272".
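One thing worth trying, as a hedged sketch rather than a guaranteed fix: with noisy marker detections, every exact 4-point solve from a single marker gives a slightly different matrix, which matches what you observed. Feeding the corners of all visible markers into one robust fit (findHomography with RANSAC in the C++ API; JavaCV wraps the same call) averages that noise out:

#include <opencv2/opencv.hpp>
using namespace cv;
using std::vector;

// localPts/globalPts: corresponding marker corners gathered from ALL markers
// visible in the current video frame (local) and in the overview image (global).
Mat mapLocalToGlobal(const vector<Point2f>& localPts,
                     const vector<Point2f>& globalPts,
                     const vector<Point2f>& queryPts,   // e.g. the field point
                     vector<Point2f>& outPts)
{
    // Robust least-squares fit over many correspondences instead of an
    // exact 4-point cvGetPerspectiveTransform solve from one marker.
    Mat H = findHomography(localPts, globalPts, RANSAC, 3.0);
    perspectiveTransform(queryPts, outPts, H);
    return H;
}

Note that a single homography is only exact if all markers lie in one plane; if they don't, differing per-marker matrices are expected, and a full camera-pose estimate would be needed instead.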
