Animate a 1-parameter family of parametric surfaces in Manim

I know how to plot a parametric surface using Manim. How can I plot (or rather animate) a whole 1-parameter family of parametric surfaces? For instance, the deformation of a sphere into an ellipsoid
(t, u, v) -> [(1 + t) sin(u) cos(v), (1 + 2t) sin(u) sin(v), (1 + 3t) cos(u)]
as t ranges from, say, 0 to 1.
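One way to animate this (a minimal sketch, assuming the Manim Community Edition API with Surface, ValueTracker and always_redraw) is to drive the family parameter t with a ValueTracker and rebuild the surface on every frame:

from manim import *
import numpy as np

class DeformSphere(ThreeDScene):
    def construct(self):
        t = ValueTracker(0)  # the family parameter

        def family_member():
            tv = t.get_value()
            return Surface(
                lambda u, v: np.array([
                    (1 + tv) * np.sin(u) * np.cos(v),
                    (1 + 2 * tv) * np.sin(u) * np.sin(v),
                    (1 + 3 * tv) * np.cos(u),
                ]),
                u_range=[0, PI],
                v_range=[0, TAU],
            )

        surface = always_redraw(family_member)  # rebuilt every frame from t
        self.set_camera_orientation(phi=70 * DEGREES, theta=30 * DEGREES)
        self.add(surface)
        self.play(t.animate.set_value(1), run_time=3)  # t goes from 0 to 1
        self.wait()

Rebuilding the Surface each frame is not the cheapest option, but it keeps the code a direct transcription of the formula above.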

Related

Plot Camera Trajectory

Given a set of 4x4 pose matrices, one can derive the camera's position in the Euclidean world coordinate system as
C = -Rᵀ · t
where R is the 3x3 rotation matrix and t is the translation vector of the pose, as per this question.
When the set of poses is treated in a sequential manner, such as when each refers to a camera's pose at some time step, the rotation and translation components can be accumulated by composing the successive 4x4 transforms, and the accumulated R and t can be plugged into the equation above to yield the camera's relative position at a given time step.
My question is how to plot such points using OpenCV or a similar tool. For a camera moving around an object in a circular motion, the output plot should be circular, with the origin at the starting point of the trajectory.
An example is shown below:
Though my question is not explicitly about plotting the axes as shown above, it would be a bonus.
TL;DR: Given a set of poses, how can we generate a plot like the one above with common tools such as OpenCV, VTK, Matplotlib, MATLAB etc.
obtain axes vectors X, Y, Z and position O for each plot point
Simply extract them from the matrix; see Understanding 4x4 homogenous transform matrices. I do not know whether your matrices are already inverted or not, so if they represent the camera coordinate system (not inverted), extract the needed info directly; if not, invert the matrix first and then extract.
If you have a homogeneous transform matrix, you can do a pseudo-inverse by exploiting the transpose operation. For more info see full pseudo inverse matrix.
Render each plot point
so first plot the axes as lines:
red_line(O,O+a*X);
green_line(O,O+a*Y);
blue_line(O,O+a*Z);
where a is the axis line length. After this, plot a dot for the position:
black_circle(O,r);
where r is some radius. You can use any gfx lib/engine for the plot; I would go for GDI or OpenGL, but that depends solely on what you are familiar with.
BTW, to improve awareness of the timeline you can modulate the color intensity (start with dark and end with bright colors so you can see where the motion starts and ends).
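A minimal sketch of both steps in Python with matplotlib (same idea as GDI/OpenGL above; it assumes the poses are 4x4 camera-to-world matrices, i.e. not inverted, and the function names are mine):

import numpy as np
import matplotlib.pyplot as plt

def invert_rigid(T):
    # inverse of a rigid transform [R|t] is [R^T | -R^T t]
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

def plot_trajectory(poses, a=0.1):
    # poses: list of 4x4 camera-to-world matrices (run invert_rigid on them first if needed)
    ax = plt.figure().add_subplot(projection="3d")
    n = len(poses)
    for i, T in enumerate(poses):
        O = T[:3, 3]                            # position
        X, Y, Z = T[:3, 0], T[:3, 1], T[:3, 2]  # axes vectors
        s = 0.3 + 0.7 * i / max(n - 1, 1)       # dark at start, bright at end
        ax.plot(*zip(O, O + a * X), color=(s, 0, 0))   # red X axis
        ax.plot(*zip(O, O + a * Y), color=(0, s, 0))   # green Y axis
        ax.plot(*zip(O, O + a * Z), color=(0, 0, s))   # blue Z axis
        ax.scatter(*O, color="k", s=4)                 # dot for the position
    plt.show()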

Measure distance to object with a single camera in a static scene

Let's say I am placing a small object on a flat floor inside a room.
First step: Take a picture of the room floor from a known, static position in the world coordinate system.
Second step: Detect the bottom edge of the object in the image and map the pixel coordinate to the object position in the world coordinate system.
Third step: Measure the real distance to the object with a measuring tape.
I could move the small object, repeat these three steps for every pixel coordinate, and create a lookup table (key: pixel coordinate; value: distance). This procedure is accurate enough for my use case. I know that it is problematic if there are multiple objects (one object could cover another).
My question: Is there an easier way to create this lookup table? Accidentally changing the camera angle by a few degrees destroys the hard work. ;)
Maybe it is possible to execute the three steps for a few specific pixel coordinates or positions in the world coordinate system and perform some "calibration" to calculate the distances with the computed parameters?
If the floor is flat, its equation is that of a plane, let
a.x + b.y + c.z = 1
in the camera coordinates (the origin is the optical center of the camera, XY forms the focal plane and Z the viewing direction).
Then a ray from the camera center to a point on the image at pixel coordinates (u, v) is given by
(u, v, f).t
where f is the focal length.
The ray hits the plane when
(a.u + b.v + c.f) t = 1,
i.e. at the point
(u, v, f) / (a.u + b.v + c.f)
Finally, the distance from the camera to the point is
p = √(u² + v² + f²) / (a.u + b.v + c.f)
This is the function that you need to tabulate. Assuming that f is known, you can determine the unknown coefficients a, b, c by taking three non-aligned points, measuring the image coordinates (u, v) and the distances, and solving a 3x3 system of linear equations.
From the last equation, you can then estimate the distance for any point of the image.
The focal distance can be measured (in pixels) by looking at a target of known size, at a known distance. By proportionality, the ratio of the distance over the size is f over the length in the image.
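A minimal sketch of that calibration in Python/NumPy (the function names are mine): each calibration point gives one linear equation a·u + b·v + c·f = √(u² + v² + f²) / p, so three of them determine (a, b, c), after which the last formula tabulates the distance for any pixel.

import numpy as np

def calibrate_plane(samples, f):
    # samples: three ((u, v), distance) pairs for non-aligned points; f: focal length in pixels
    A = np.array([[u, v, f] for (u, v), _ in samples])
    b = np.array([np.sqrt(u**2 + v**2 + f**2) / d for (u, v), d in samples])
    return np.linalg.solve(A, b)   # -> (a, b, c)

def distance(u, v, f, abc):
    a, b, c = abc
    return np.sqrt(u**2 + v**2 + f**2) / (a * u + b * v + c * f)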
Most vision libraries (including OpenCV) have built-in functions that will take a few points from the camera reference frame and the related points from a Cartesian plane and generate the warp matrix (affine transformation) for you. (Some are fancy enough to include non-linear mappings given enough input points, but that brings you back to your calibration-time issue.)
A final note: most vision libraries use some type of grid to calibrate against, e.g. a checkerboard pattern. If you wrote your calibration to work off such a sheet, then you would only need to measure the distance to one target object, as the transformations would be calculated from the sheet and the target would just provide the world offsets.
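For instance, in OpenCV three such correspondences are enough to build an affine warp (a sketch; all coordinates are made up):

import numpy as np
import cv2

src = np.float32([[120, 400], [510, 395], [320, 150]])   # pixel locations
dst = np.float32([[0, 0], [100, 0], [50, 200]])          # matching floor coordinates, e.g. in cm
M = cv2.getAffineTransform(src, dst)                     # 2x3 affine warp matrix

pt = np.float32([[[300, 280]]])                          # a new pixel, shape (1, 1, 2)
print(cv2.transform(pt, M))                              # its floor coordinates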
I believe what you are after is called a Projective Transformation. The link below should guide you through exactly what you need.
Demonstration of calculating a projective transformation with proper math typesetting on the Math SE.
Although you can solve this by hand and write that into your code, I strongly recommend using a matrix math library (or writing your own matrix math functions) rather than hand-calculating the equations, as you would have to solve them symbolically to turn them into code, which gets very long and is prone to miscalculation.
Here are just a few tips that may help you with clarification (applying it to your problem):
-Your A matrix (source) is built from the 4 xy points in your camera image (pixel locations).
-Your B matrix (destination) is built from your measurements in the real world.
-For fast recalibration, I suggest marking points on the ground to be able to quickly place the cube at the 4 locations (and subsequently get the altered pixel locations in the camera) without having to remeasure.
-You will only have to do steps 1-5 once, during calibration; after that, whenever you want to know the position of something, just get its coordinates in your image and run them through step 6 and step 7.
-You will want your calibration points to be as far away from each other as possible (within reason, as at extreme distances in a vanishing-point situation you start rapidly losing pixel density and therefore source-image accuracy). Make sure that no 3 points are collinear (simply put, make your 4 points approximately square, spanning almost the full camera FOV in the real world).
ps I apologize for not writing this out here, but they have fancy math editing and it looks way cleaner!
Final steps to applying this method to this situation:
In order to perform this calibration, you will have to set a global home position (probably easiest to pick an arbitrary point on the floor and measure your camera position relative to it). From this position you will need to measure your object's distance in both x and y coordinates on the floor. Although a more tightly packed calibration set will give you more error, the easiest solution may simply be to use a sheet with known dimensions (a piece of printer paper, a large board, or something similar). The reason this is easier is that it has built-in axes (i.e. the two sides are orthogonal), so you just use the four corners of the sheet and canned distances in your calibration. For example, for a piece of printer paper your points would be (0,0), (0,8.5), (11,8.5), (11,0).
Using those points and the pixels you get will create your transform matrix, but that still just gives you a global x,y position on axes that may be hard to measure on (they may be skewed depending on how you measured/calibrated). So you will need to calculate your camera offset:
object in real world coords (from steps above): x1, y1
camera coords (Xc, Yc)
dist = sqrt( pow(x1-Xc,2) + pow(y1-Yc,2) )
If it is too cumbersome to try to measure the position of the camera from global origin by hand, you can instead measure the distance to 2 different points and feed those values into the above equation to calculate your camera offset, which you will then store and use anytime you want to get final distance.
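Putting those final steps together with OpenCV (a sketch; cv2.getPerspectiveTransform wants exactly four correspondences, and all numbers here are illustrative):

import numpy as np
import cv2

paper_world = np.float32([[0, 0], [0, 8.5], [11, 8.5], [11, 0]])             # sheet corners, inches
paper_pixels = np.float32([[410, 602], [640, 598], [655, 455], [425, 460]])  # same corners in the image
M = cv2.getPerspectiveTransform(paper_pixels, paper_world)

obj_px = np.float32([[[530, 520]]])                 # object's bottom edge in the image
x1, y1 = cv2.perspectiveTransform(obj_px, M)[0, 0]  # steps 6 and 7: world coords on the floor

Xc, Yc = -3.0, 20.0                                 # measured camera offset in the same coords
dist = np.sqrt((x1 - Xc) ** 2 + (y1 - Yc) ** 2)     # final distance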
As already mentioned in the previous answers, you'll need a projective transformation, or simply a homography. However, I'll consider it from a more practical point of view and try to keep the summary short and simple.
So, given the proper homography you can warp your picture of a plane such that it looks like you took it from above (like here). Even simpler, you can transform a pixel coordinate of your image to world coordinates on the plane (the same is done during the warping for each pixel).
A homography is basically a 3x3 matrix, and you transform a coordinate by multiplying it with the matrix. You may now think: wait, a 3x3 matrix and 2D coordinates? You'll need to use homogeneous coordinates.
However, most frameworks and libraries will do this handling for you. What you need to do is find (at least) four points (x/y-coordinates) on your world plane/floor (preferably the corners of a rectangle, aligned with your desired world coordinate system), take a picture of them, measure the pixel coordinates, and pass both sets to the "find-homography" function of your desired computer vision or math library.
In OpenCV that would be findHomography, here an example (the method perspectiveTransform then performs the actual transformation).
In Matlab you can use something from here. Make sure you are using a projective transformation as transform type. The result is a projective tform, which can be used in combination with this method, in order to transform your points from one coordinate system to another.
In order to transform into the other direction you just have to invert your homography and use the result instead.
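A short sketch of that OpenCV route (illustrative coordinates):

import numpy as np
import cv2

img_pts = np.float32([[102, 480], [590, 470], [560, 220], [130, 230]])  # pixels
world_pts = np.float32([[0, 0], [200, 0], [200, 150], [0, 150]])        # floor coords, e.g. cm
H, _ = cv2.findHomography(img_pts, world_pts)

px = np.float32([[[300, 350]]])
print(cv2.perspectiveTransform(px, H))      # image -> world

H_inv = np.linalg.inv(H)                    # world -> image: use the inverted homography
print(cv2.perspectiveTransform(np.float32([[[100.0, 75.0]]]), H_inv))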

Sphere detection with 3D points

Given a set of 3D points (a list of the Cartesian coordinates of each point) and a known number of spheres, how can I detect and reconstruct the spheres from this set?
I would like to find the basic information about each sphere, for example the location of the center, the radius, and the degree of fit of the points to the constructed sphere.
Is there any function in OpenCV that I can apply?
Or what kind of algorithm should I use?
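One common approach, sketched below, is an algebraic least-squares fit (this is not a built-in OpenCV function; the helper name is mine). Expanding (x−a)² + (y−b)² + (z−c)² = r² gives 2ax + 2by + 2cz + (r² − a² − b² − c²) = x² + y² + z², which is linear in the unknowns, and the RMS residual gives a degree-of-fit measure:

import numpy as np

def fit_sphere(points):
    P = np.asarray(points, dtype=float)    # N x 3 array of xyz coordinates
    A = np.c_[2 * P, np.ones(len(P))]      # unknowns: (a, b, c, r^2 - a^2 - b^2 - c^2)
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    rms = np.sqrt(np.mean((np.linalg.norm(P - center, axis=1) - radius) ** 2))
    return center, radius, rms

With more than one sphere the points would first have to be segmented (e.g. by clustering or RANSAC), since this fit assumes all points belong to a single sphere.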

OpenCV circle distortion detection

OpenCV has capabilities to compensate for distortion in patterns, such as this board, for example:
Every example I ever saw for this process does it with grids or squares. I would like to know if something similar exists for a single circle. My practical case is that I detect an ellipse, and I need to calculate the angle between the plane of this ellipse and the projection plane where the ellipse is projected as a circle. I managed to achieve that in my own code, but I would like to know if there is something built into the library to that purpose.
Use the ellipse axes to your advantage
I don't know of any "circular projection" as you name it, but I'm thinking that you can rephrase your problem so that you already have the solution.
Images make any answer SO cool.
Forget the ellipse, take the axes
A circle can be thought of as 2 vectors with unit norm defining a plane.
The projected circle's axes that you estimate are the projection of the unit reference frame onto the 3D plane.
Projecting back and forth is then just a matter of applying the transformation described by the estimated axis vectors.
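A small sketch of that idea with OpenCV (it assumes a near-orthographic view, so the tilt angle follows from the axis ratio alone; the function name is mine):

import numpy as np
import cv2

def tilt_angle_from_contour(contour):
    (cx, cy), (d1, d2), rot = cv2.fitEllipse(contour)  # center, full axis lengths, rotation
    minor, major = sorted((d1, d2))
    # a circle seen at tilt angle theta projects to an ellipse with minor/major = cos(theta)
    return np.degrees(np.arccos(minor / major))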

How can I estimate the probability of a partial state from a Kalman filter?

I have a Kalman filter tracking a point, with a state vector (x, y, dx/dt, dy/dt).
At a given update, I have a set of candidate points which may correspond to the tracked point. I would like to iterate through these candidates and choose the one most likely to correspond to the tracked point, but only if the probability of that point corresponding to the tracked point is greater than a threshold (e.g. p > 0.5).
Therefore I need to use the covariance and state matrices of the filter to estimate this probability. How can I do this?
Additionally, note that my state vector is four dimensions, but the measurements are in two dimensions (x, y).
When you predict the measurements with y = Hx you also compute the covariance of y as H·P·Hᵀ; adding the measurement noise R gives the innovation covariance S = H·P·Hᵀ + R. This property is why we use variance in the Kalman filter.
The geometrical way to understand how far a given point is from your predicted point is an error ellipse or confidence region. A 95% confidence region is the ellipse scaled to 2*sigma (if that isn't intuitive, you should go read about normal distributions, because that is what the KF thinks it is working on). If the covariance is diagonal, the error ellipse will be axis-aligned. If there are co-varying terms (which there may not be if you have not introduced them anywhere via Q or R) then the ellipse will be tilted.
The mathematical way is with the Mahalanobis distance, which just directly formulates the geometrical representation above as a distance. The distance scale is standard deviations, so your P=0.5 corresponds to a distance of 0.67 (again, see normal distributions if this is surprising).
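A small sketch of that gating computation with NumPy/SciPy (function and variable names are illustrative):

import numpy as np
from scipy.stats import chi2

def gate_probability(z, x, P, H, R):
    # z: candidate measurement (2,), x: predicted state (4,), P: state covariance,
    # H: 2x4 measurement matrix, R: 2x2 measurement noise
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    d2 = float(y.T @ np.linalg.solve(S, y))    # squared Mahalanobis distance
    return 1.0 - chi2.cdf(d2, df=2)            # probability of a distance at least this large

You would pick the candidate with the smallest d2 and accept it only if it passes your probability (or chi-square) gate.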
The most probable point (I suppose from detections) will be the point nearest to the filter prediction.
