What is the range of optical flow in an image? - opencv

I use OpenCV to calculate the optical flow between two images (frame_t and frame_t+1). Then I want to use that optical flow to warp frame_t and obtain warped_frame_t+1. The warping function is F.grid_sample (PyTorch). Since the grid passed to F.grid_sample must lie in (-1, 1), I need to normalize the optical flow. But how should I do the normalization? What is the range of optical flow in an image? Is it (-w+1, w-1) horizontally and (-h+1, h-1) vertically?

np.meshgrid(np.linspace(-1,1,W), np.linspace(-1,1,H))
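For reference, here is a minimal sketch of one way to turn a pixel-space flow field into a grid_sample grid, assuming the flow comes from OpenCV in pixel units (so each component is bounded by the image size) and frame_t is an (H, W, 3) image; the function and variable names are illustrative, not from the question. Under the align_corners=True convention that matches the meshgrid above, a displacement of one pixel corresponds to 2/(W-1) horizontally and 2/(H-1) vertically.

import numpy as np
import torch
import torch.nn.functional as F

def warp_with_flow(frame_t, flow):
    # flow: (H, W, 2) array of per-pixel displacements in pixel units.
    H, W = flow.shape[:2]

    # Base sampling grid in normalized coordinates, as in the meshgrid above.
    gx, gy = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))

    # Scale the pixel-space flow into grid_sample's (-1, 1) range and add it
    # to the base grid: one pixel equals 2/(W-1) (or 2/(H-1)) in grid units.
    nx = gx + 2.0 * flow[..., 0] / (W - 1)
    ny = gy + 2.0 * flow[..., 1] / (H - 1)
    grid = torch.from_numpy(np.stack([nx, ny], axis=-1)).float().unsqueeze(0)

    img = torch.from_numpy(frame_t).permute(2, 0, 1).float().unsqueeze(0)
    # grid[y, x] tells grid_sample where in img to read the output pixel from.
    return F.grid_sample(img, grid, mode='bilinear', align_corners=True)

Note that grid_sample does a backward lookup: to synthesize warped_frame_t+1 from frame_t, the flow plugged in here should map t+1 coordinates back to t; with a forward flow you would either invert it or warp in the other direction.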

Related

What exactly is the output when we run the dense optical flow (farnnback)?

I have been running the Python implementation of dense optical flow given in the official documentation. At one particular line of the code, they use
mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1]).
When I print the values of mag, I get the output shown in the attached image, and I have no idea how to make sense of it.
My end objective is to use optical flow to get a resultant or an average motion value for every frame.
Quoting the same OpenCV tutorial you are using:
We get a 2-channel array with optical flow vectors, (u,v).
That is the output of dense optical flow. It tells you, as a vector, how each point moved. (u,v) is just the Cartesian representation of that vector, and it can be converted to polar coordinates, i.e. an angle and a magnitude.
The angle is the direction in which the pixel moved, and the magnitude is the distance it moved.
In many algorithms you can use the magnitude to decide whether a pixel moved at all (for example, a magnitude of less than 1 means no movement). Or, if you are tracking an object whose initial position (meaning its pixel positions) you know, you can find where the majority of those pixels are moving to and use that information to determine the new position.
By the way, cartToPolar returns the angles in radians unless specified otherwise. Here is an extract from the documentation:
cv2.cartToPolar(x, y[, magnitude[, angle[, angleInDegrees]]]) → magnitude, angle
angleInDegrees must be True if you need it in degrees.
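Since the stated goal is an average motion value per frame, here is a minimal sketch of how the magnitude map can be reduced to a single number, assuming prev_gray and next_gray are consecutive grayscale frames and using the Farneback parameters from the tutorial; the function name and the movement threshold are my own choices, not from the question.

import cv2
import numpy as np

def average_motion(prev_gray, next_gray):
    # Dense flow: an (H, W, 2) array with (u, v) per pixel, in pixel units.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Mean magnitude over pixels that actually moved (magnitude >= 1),
    # falling back to 0 when nothing moved.
    moving = mag >= 1.0
    return mag[moving].mean() if moving.any() else 0.0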

Calculate the unit gradient vector

I have a problem calculating the unit gradient vector. I have a formula, but I don't understand it. If possible, could you explain this formula in more detail? I have to implement this on an image for eye center localization. Thank you for your interest.
[image: the unit gradient vectors formula]
Gradient vector calculation gives you the magnitude and orientation of the gradient at each pixel of the image. This means you need to calculate the derivatives along the x-axis and y-axis separately, then combine them to compute the magnitude and direction of the vectors. If you are using OpenCV or MATLAB, you will find functions that calculate the gradient magnitude and direction of the pixels in an image; for example, in MATLAB see the imgradient and imgradientxy functions.
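To make that concrete, here is a minimal OpenCV/Python sketch: the gradient is computed with Sobel derivatives and each vector is divided by its magnitude to obtain the unit gradient vectors. The function name and the small epsilon are illustrative choices, not part of the original formula.

import cv2
import numpy as np

def unit_gradient_vectors(gray, eps=1e-8):
    # Derivatives along the x-axis and y-axis, computed separately.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

    # Gradient magnitude; eps avoids division by zero in flat regions.
    mag = np.sqrt(gx * gx + gy * gy) + eps

    # Dividing each gradient vector by its magnitude yields unit vectors.
    return gx / mag, gy / mag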

Optical flow and focus of expansion in OpenCV

I am currently working on a project that requires me to find the focus of expansion using optical flow.
I currently have the optical flow and am using the formula from pages 13-14 of this paper:
http://www.dgp.toronto.edu/~donovan/stabilization/opticalflow.pdf
I take two frames from a video, build pyramids from both using buildOpticalFlowPyramid, and then find keypoints using goodFeaturesToTrack. With these I calculate the sparse optical flow using calcOpticalFlowPyrLK. All three of these methods are provided by OpenCV.
The problem I have hit is that I need both the pixel position and the flow vector of each keypoint to fill the A and b matrices. Would the pixel position just be the location of the keypoint in the original image? And is the flow vector then the difference between that initial location and the new point?
Yes, that is precisely so. Using the terms/variables as per the paper and the following link,
http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#calcopticalflowpyrlk
p_i = (x,y) are the prevPts (points in the original image),
v = (u,v) are the flow vectors obtained by subtracting points in prevPts from those in nextPts.
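To make the A and b construction concrete, here is a minimal Python sketch of the usual least-squares formulation: each flow vector (u, v) at p_i = (x, y) should point away from the focus of expansion (x0, y0), which gives one equation v*x0 - u*y0 = v*x - u*y per keypoint. Treat this as an illustration under that assumption rather than the paper's exact matrix layout; the pyramids are built internally by calcOpticalFlowPyrLK here, and the detector parameters are arbitrary.

import cv2
import numpy as np

def focus_of_expansion(prev_gray, next_gray):
    # Keypoints in the first frame (prevPts in the OpenCV documentation).
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)

    # Sparse flow: nextPts - prevPts gives the flow vector v_i per keypoint.
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                     prev_pts, None)
    good = status.ravel() == 1
    p = prev_pts.reshape(-1, 2)[good]
    q = next_pts.reshape(-1, 2)[good]
    u, v = (q - p)[:, 0], (q - p)[:, 1]
    x, y = p[:, 0], p[:, 1]

    # Each flow vector should point away from the FOE (x0, y0):
    # (x - x0) * v - (y - y0) * u = 0  ->  [v, -u] . [x0, y0] = v*x - u*y
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x0, y0) in pixel coordinates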

Centroid of MSER ellipses in OpenCV

I am working on an image registration method and I would like to work with region-based feature detectors. As a representative detector, and because it is already implemented in OpenCV, I thought of MSER.
I know how to detect the MSER regions. The MSER detector returns each region as a vector of points, a contour. I would like to retrieve the centroid of these contours. I could fit an ellipse to them, but then I don't know how to retrieve the centroid of those ellipses either.
Does someone know if there is an already implemented function that could take care of this task, or do I have to develop an algorithm?
The reason is that I would like to perform the point correspondence using these centroids as interest points.
Thanks
Iván
The centroid of the region can be computed by calculating the mean of all the x values and the mean of all the y values. The resulting (meanX, meanY) point is the region's centroid.
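A minimal Python sketch of that, assuming the cv2.MSER_create interface (the return signature of detectRegions has varied across OpenCV versions); the function name is illustrative.

import cv2
import numpy as np

def mser_centroids(gray):
    mser = cv2.MSER_create()
    # Each region comes back as an array of (x, y) points; recent OpenCV
    # versions return (regions, bounding_boxes).
    regions, _ = mser.detectRegions(gray)

    # Centroid = mean of the x values and mean of the y values, as described above.
    return [pts.reshape(-1, 2).mean(axis=0) for pts in regions]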

How to extract velocity vectors from dense optical flow?

Problem: I'm trying to align two frames of a moving video.
I'm currently trying to use the function cvCalcOpticalFlowLK, which outputs the x and y velocity vectors in the form of a CvArr.
I obtained the result, but I'm not sure how to use these vector arrays.
My question is this: how do I know the velocity of each pixel? Is it just the value stored at that pixel location?
Note: I would have used the other optical flow functions such as cvCalcOpticalFlowPyrLK(), as they are much easier, but I want dense optical flow.
Apparently my original assumption was true. The "velx" and "vely" outputs from the optical flow function are the actual velocities for each pixel. To extract them, I accessed the pixels from the raw data and pulled the values. There are two ways to do this.
cvGet2D() -- this way is slower, but it is fine if you only need to access a single pixel.
or
((float*)(image->imageData + y*image->widthStep))[x]
(image is an IplImage; since velx and vely are single-channel 32-bit float images, the row pointer is cast to float*, with y the row and x the column of the pixel you want)
If you need the motion vectors for each pixel, then you need to compute what is called dense optical flow. Starting from OpenCV 2.1, there is a function that does exactly that: calcOpticalFlowFarneback.
See the link below:
http://opencv.itseez.com/modules/video/doc/motion_analysis_and_object_tracking.html?highlight=calcopticalflowfarneback#cv2.calcOpticalFlowFarneback
velx and vely are the optical flow, not the actual velocity.
The method you used is obsolete; use calcOpticalFlowFarneback() instead.
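For the original goal of aligning the two frames, here is a minimal Python sketch using calcOpticalFlowFarneback: flow[y, x] holds the per-pixel (u, v) displacement (the modern counterpart of reading velx and vely), and cv2.remap resamples the second frame at the flowed coordinates. The function name and the Farneback parameters (the OpenCV tutorial defaults) are assumptions, not code from the question.

import cv2
import numpy as np

def align_with_dense_flow(prev_gray, next_gray):
    # Dense flow: flow[y, x] = (u, v), the displacement of pixel (x, y) in pixels.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Sample the second frame at (x + u, y + v) to bring it back into the
    # first frame's coordinates.
    h, w = prev_gray.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(next_gray, map_x, map_y, cv2.INTER_LINEAR)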
