Compute a Bounding Box with defined orientation

I have some objects (given as segments in 3D Slicer, and also as a 3D npy image).
I want to use VTK to compute an oriented bounding box (OBB). But the objects have a rather irregular shape, and if I use the vtkOBBTree class to compute an OBB, the OBB does not match the intended orientation of my object.
I was able to define an orientation for one axis, and this orientation is supposed to define one axis of the OBB.
Is there a way to compute a bounding box (with VTK?) with a predefined orientation?
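One way to get a box with a predefined axis (vtkOBBTree does not take an orientation constraint, so this is only a sketch): complete your chosen axis to an orthonormal basis, project all surface points onto the three basis vectors, and take the min/max extents. The code below assumes the object is available as vtkPolyData and that the axis is a unit vector; the function name is illustrative, and the two remaining axes are just arbitrary perpendicular directions, not optimized for minimal volume.

#include <algorithm>
#include <limits>
#include <vtkMath.h>
#include <vtkPolyData.h>

// Sketch: bounding box whose first axis is a user-chosen unit direction.
// Returns one corner, the three (unit) axes, and the box extents along them.
void ComputeConstrainedBox(vtkPolyData* surface, const double axis[3],
                           double corner[3], double axes[3][3], double extents[3])
{
  // Complete the chosen axis to an orthonormal basis.
  double u[3], v[3];
  vtkMath::Perpendiculars(axis, u, v, 0.0);
  double basis[3][3] = { { axis[0], axis[1], axis[2] },
                         { u[0], u[1], u[2] },
                         { v[0], v[1], v[2] } };

  double minProj[3], maxProj[3];
  for (int k = 0; k < 3; ++k)
  {
    minProj[k] = std::numeric_limits<double>::max();
    maxProj[k] = std::numeric_limits<double>::lowest();
  }

  // Project every surface point onto the three axes and track the extents.
  double p[3];
  for (vtkIdType i = 0; i < surface->GetNumberOfPoints(); ++i)
  {
    surface->GetPoint(i, p);
    for (int k = 0; k < 3; ++k)
    {
      double d = vtkMath::Dot(p, basis[k]);
      minProj[k] = std::min(minProj[k], d);
      maxProj[k] = std::max(maxProj[k], d);
    }
  }

  // Assemble the result in the same spirit as vtkOBBTree::ComputeOBB
  // (corner + axes + sizes along the axes).
  for (int c = 0; c < 3; ++c)
    corner[c] = minProj[0] * basis[0][c] + minProj[1] * basis[1][c] + minProj[2] * basis[2][c];
  for (int k = 0; k < 3; ++k)
  {
    extents[k] = maxProj[k] - minProj[k];
    for (int c = 0; c < 3; ++c)
      axes[k][c] = basis[k][c];
  }
}

If you additionally need the smallest such box, you could rotate the (u, v) pair within the plane perpendicular to your axis and keep the rotation that minimizes the cross-sectional area (a 2D rotating-calipers style search).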

Related

Does the surface area divided by bounding box feature have a name?

I'm writing a connected component system and one of the descriptors I can easily compute is the surface area along with the component's rectangular bounding box.
What is surface area divided by bounding-box area called (or any mixture of these two parameters)?
For example, if my object were a rectangle, this parameter would be 1.0.
Extent or rectangularity, apparently:
Extent of an image object is defined as area of the image object
divided by the area of its bounding rectangle.
Source: Question text in https://dsp.stackexchange.com/questions/49026/what-is-the-application-difference-between-extent-and-solidity-in-image-processi
Rectangularity is the ratio of the area of the object to the area of the minimum bounding rectangle.
Source: Page 45 in http://www.cyto.purdue.edu/cdroms/micro2/content/education/wirth10.pdf
But the definitions I've run across do not always fully specify the rectangle. The ambiguity is related to the concept of a "Feret box": a Feret box's edges do not have to be parallel to the image axes the way a good old axis-aligned bounding box's do. So depending on which box you choose, your "extent" value might change.
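For reference, both quantities are easy to compute with OpenCV from a contour. The snippet below is just a sketch (the contour is assumed to come from cv::findContours), using the axis-aligned box for "extent" and the minimum-area rotated box for "rectangularity".

#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch: "contour" is assumed to be one component outline from cv::findContours.
double extent(const std::vector<cv::Point>& contour)
{
    double area = cv::contourArea(contour);
    cv::Rect aabb = cv::boundingRect(contour);      // axis-aligned bounding box
    return area / static_cast<double>(aabb.area());
}

double rectangularity(const std::vector<cv::Point>& contour)
{
    double area = cv::contourArea(contour);
    cv::RotatedRect obb = cv::minAreaRect(contour); // minimum-area (rotated) box
    return area / static_cast<double>(obb.size.area());
}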

In opencv's solvePnP, what should I pass for objectPoints?

OpenCV docs for solvePnP
In an augmented reality app, I detect the image in the scene, so I know imagePoints, but the object I'm looking for (objectPoints) is a virtual marker just stored in memory to search for in the scene, so I don't know where it is in space. The book I'm reading (Mastering OpenCV with Practical Computer Vision Projects) passes it as if the marker is a 1x1 matrix and it works fine. How? Doesn't solvePnP need to know the size of the object and its projection so we know how much scale is applied?
Assuming you're looking for a physical object, you should pass the 3D coordinates of the points on the model which are mapped (by projection) to the 2D points in the image. You can use any reference frame, and the result of solvePnP will give you the position and orientation of the camera in that reference frame.
If you want the object's position/orientation in camera space, you can then transform both by the inverse of the transform you got from solvePnP, so that the camera is moved to the origin.
For example, for a cube object of size 2x2x2, the visible corners may be something like: {-1,-1,-1},{1,-1,-1},{1,1,-1}.....
You have to pass the 3D coordinates of the real-world object that you want to map with the image. The scaling and rotation values will depend on the coordinate system that you use.
This is not as difficult as it sounds. See this blog post on head pose estimation for more details, with code.
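To make the "virtual marker" case concrete, here is a hedged sketch: the marker simply gets its own coordinate frame (here a square of side 1 in the z = 0 plane), those four 3D corners are the objectPoints, and the imagePoints are the four detected corners in the same order. The camera matrix and distortion coefficients are assumed to come from calibration; the function name is illustrative.

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Sketch: pose of a square marker of side 1, defined in its own coordinate frame.
void estimateMarkerPose(const std::vector<cv::Point2f>& imagePoints, // 4 detected corners, in order
                        const cv::Mat& cameraMatrix,                 // from calibration
                        const cv::Mat& distCoeffs,                   // from calibration
                        cv::Mat& rvec, cv::Mat& tvec)
{
    // The "virtual" marker is given its own coordinate system: a 1x1 square in
    // the z = 0 plane, centred at the origin. Corner order must match imagePoints.
    std::vector<cv::Point3f> objectPoints = {
        { -0.5f, -0.5f, 0.f }, { 0.5f, -0.5f, 0.f },
        {  0.5f,  0.5f, 0.f }, { -0.5f,  0.5f, 0.f } };

    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
    // rvec/tvec map marker coordinates into camera coordinates, i.e. they give the
    // marker's pose relative to the camera. The absolute scale of the translation is
    // fixed by the side length chosen for objectPoints (1 unit here).
}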

Understanding Distance Transform in OpenCV

What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look as if they are divided in the middle; is it meant to find the center of one image so that the other can be overlapped halfway? I have looked into the OpenCV documentation, but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is largest, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, that moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is the stable width of a stroke. The distance transform run on segmented text can confirm this; the corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape to rotate it about, but you could rotate it about any other point as well; the difference is just a translation, which is irrelevant if you run matchTemplate to match the shapes in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or by fitting an ellipse (returned as a rotated rectangle):
// Fit an ellipse to the 2D points; the result comes back as a rotated rectangle.
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle; // dominant orientation, in degrees
The distance transform is an operation that works on a single binary image and fundamentally measures, for every pixel, a distance to the nearest boundary pixel. Conventions differ over which pixels count as the boundary; OpenCV's cv::distanceTransform computes, for each non-zero pixel, the distance to the nearest zero pixel.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
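As a hedged sketch of the OpenCV calls (the input image and function name here are only illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: "binary" is assumed to be a single-channel 8-bit image in which the
// pixels you want to measure distance *to* are zero.
void runDistanceTransform(const cv::Mat& binary, cv::Mat& dist, cv::Mat& labels)
{
    // Plain distance image (32-bit float), Euclidean metric, 5x5 mask.
    cv::distanceTransform(binary, dist, cv::DIST_L2, cv::DIST_MASK_5);

    // Variant that also returns a label image: each pixel gets the label of the
    // nearest zero-pixel connected component, i.e. a discrete Voronoi diagram.
    cv::distanceTransform(binary, dist, labels, cv::DIST_L2, cv::DIST_MASK_5,
                          cv::DIST_LABEL_CCOMP);

    // Other metrics: cv::DIST_L1 (Manhattan), cv::DIST_C (chessboard);
    // cv::DIST_MASK_PRECISE gives the exact Euclidean result (no-labels variant only).
}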
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are trying to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.

Anchor Points in Image Processing

I want to use the filter functions in Intel IPP or OpenCV. They take an argument for the position of an anchor point, and I don't know what it is or how I should use it.
What is an anchor point, and what is it for?
Say you use a filter with a kernel (or mask, or something similar). The anchor point defines how the kernel is positioned with respect to the pixel currently being processed during the filter operation.
The anchor point is an IppiPoint, i.e. a struct with members x and y. It is the coordinate, within the kernel, of the currently processed pixel. Typically it is set to the center of the kernel, i.e. (kernelWidth/2, kernelHeight/2).
Originally, I claimed that the anchor point is a linear index. Sorry, that was wrong.
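As a small illustration with OpenCV rather than IPP (a sketch, not the only way to use it): cv::filter2D takes the anchor as a cv::Point, and the default (-1, -1) means "use the kernel centre".

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: the same 3x3 box filter applied with two different anchors.
void anchorDemo(const cv::Mat& src, cv::Mat& dstCentred, cv::Mat& dstShifted)
{
    cv::Mat kernel = cv::Mat::ones(3, 3, CV_32F) / 9.0f;

    // Default anchor (-1,-1): the kernel centre (element (1,1) of a 3x3 kernel)
    // sits over the pixel currently being computed.
    cv::filter2D(src, dstCentred, -1, kernel, cv::Point(-1, -1));

    // Anchor (0,0): the kernel's top-left element sits over the current pixel,
    // so the output is shifted relative to the centred version.
    cv::filter2D(src, dstShifted, -1, kernel, cv::Point(0, 0));
}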

Stretch an image to fit in any quadrangle

The application PhotoFiltre has an option to stretch part of an image. You select a rectangular region and can then grab its vertices and move them somewhere else to form an arbitrary quadrangle; the part of the image you selected stretches along with it.
Is there a general algorithm which can handle this? I would like to obtain the same effect on HTML5 canvas - given an image and the resulting corner points, I would like to be able to draw the stretched image in such a way that it fills the new quadrangle neatly.
A while ago I asked something similar, where the solution was to divide the image into triangles and stretch each triangle so that its three points correspond to the three matching points on the original image. This technique turned out to be rather expensive, and I would like to know whether there is a more general method of accomplishing this.
I would like to use this in a 3D renderer, but I would like to work with a (2D) quadrangle.
I don't know whether PhotoFiltre internally also uses triangles, or whether it uses another (cheaper) algorithm to stretch an image like this.
Does someone perhaps know if there is a cheaper or more general method/algorithm to stretch a rectangular image, so that it fills a quadrangle given four points?
The normal method is to start with the destination: pick an appropriate grid size and then, for each point in the new shape, calculate the corresponding point in the source image (possibly with interpolation, depending on the quality you need).
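A minimal sketch of that destination-to-source idea with OpenCV, using nearest-neighbour sampling for brevity. The destination image dst is assumed to be preallocated with the same 8-bit, 3-channel type as src, and dstQuad holds the four target corners in order; this version is clear but slow (see the one-call version further below).

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch: fill the destination by walking its pixels and looking up the matching
// source pixel through the inverse of the rectangle -> quadrangle mapping.
void stretchToQuad(const cv::Mat& src, cv::Mat& dst,
                   const std::vector<cv::Point2f>& dstQuad) // 4 target corners, in order
{
    std::vector<cv::Point2f> srcQuad = {
        { 0.f, 0.f }, { (float)src.cols - 1.f, 0.f },
        { (float)src.cols - 1.f, (float)src.rows - 1.f }, { 0.f, (float)src.rows - 1.f } };

    // Mapping from destination coordinates back to source coordinates.
    cv::Mat dstToSrc = cv::getPerspectiveTransform(dstQuad, srcQuad);

    for (int y = 0; y < dst.rows; ++y)
        for (int x = 0; x < dst.cols; ++x)
        {
            std::vector<cv::Point2f> in = { cv::Point2f((float)x, (float)y) };
            std::vector<cv::Point2f> out;
            cv::perspectiveTransform(in, out, dstToSrc);
            int sx = cvRound(out[0].x), sy = cvRound(out[0].y); // nearest neighbour
            if (sx >= 0 && sx < src.cols && sy >= 0 && sy < src.rows)
                dst.at<cv::Vec3b>(y, x) = src.at<cv::Vec3b>(sy, sx);
        }
}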
Affine or, more generally, perspective transform.
Given four points for the "stretched" figure and four points for the figure it should match (e.g. a rectangle), a point-wise spatial transform provides the mapping you need. Note that an affine transform is determined by only three point pairs and preserves parallelism, so it can map a rectangle onto any parallelogram but not onto an arbitrary quadrangle; for the general case you need a perspective (projective) transform, which is determined by four point pairs. Either way, for each point (x1, y1) in the original image there is a corresponding point (x2, y2) in the second, "stretched" image.
For each integer-valued pixel (x2, y2) in the stretched image, use the inverse transform to find the corresponding real-valued point (x1, y1) in the original image and apply its color to (x2, y2).
http://demonstrations.wolfram.com/AffineTransform/
You'll find sample code for Java and other languages online. .NET has the Matrix class.
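With OpenCV the whole pattern is two calls: cv::getPerspectiveTransform computes the 3x3 perspective mapping from the four source corners to the four destination corners, and cv::warpPerspective does the destination-to-source resampling (with interpolation) for you. A sketch, assuming the same conventions as above:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch: warp the whole source rectangle onto the quadrangle dstQuad, drawn
// into an output canvas of the given size.
void warpToQuad(const cv::Mat& src, cv::Mat& dst, cv::Size canvasSize,
                const std::vector<cv::Point2f>& dstQuad) // 4 target corners, in order
{
    std::vector<cv::Point2f> srcQuad = {
        { 0.f, 0.f }, { (float)src.cols - 1.f, 0.f },
        { (float)src.cols - 1.f, (float)src.rows - 1.f }, { 0.f, (float)src.rows - 1.f } };

    cv::Mat H = cv::getPerspectiveTransform(srcQuad, dstQuad); // 3x3 perspective matrix
    cv::warpPerspective(src, dst, H, canvasSize, cv::INTER_LINEAR);
}

Internally, warpPerspective performs exactly the per-destination-pixel inverse mapping described in the first answer.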

Resources