Octree raycasting/raytracing - best ray/leaf intersection without recursion - XNA

Could anyone provide a short & sweet explanation (or suggest a good tutorial) on how to cast a ray against a voxel octree without recursion?
I have a complex model baked into an octree, and I need to find the best/closest leaf that intersects a ray. A standard drill-down iterative tree walk:
Grab the root node
Check for intersection
No? Exit
Yes? Find the child that intersects the ray and is closest to the ray's origin
Loop until I reach a leaf or exit the tree
Always returns a leaf, but in instances where the tree stores, say, terrain, the closest node to the ray's origin doesn't necessarily contain the leaf that's the best match. This isn't surprising - taller objects in farther nodes won't get tested using this approach.
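In code, the walk I'm describing looks roughly like this (a minimal sketch; OctreeNode, Ray and the Intersects helper are placeholders for my actual types):
// Sketch of the naive drill-down walk described above.
// Intersects() is a placeholder ray/box test returning the entry distance t.
public OctreeNode DrillDown(OctreeNode root, Ray ray)
{
    float t;
    if (root == null || !Intersects(root, ray, out t))
        return null;
    var node = root;
    while (!node.Leaf)
    {
        OctreeNode closest = null;
        var closestT = float.MaxValue;
        foreach (var child in node.Children)
        {
            // keep the intersecting child whose entry point is nearest the ray's origin
            if (child != null && Intersects(child, ray, out t) && t < closestT)
            {
                closest = child;
                closestT = t;
            }
        }
        if (closest == null)
            return null; // no child intersects: exit the tree
        node = closest;
    }
    return node;
}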
I can do this recursively by finding all of the intersecting leaves in the tree, sorting by distance and picking the closest one to the ray's position. However, this is slow and requires recursion.
I've read a little about using the Bresenham line algorithm to walk the tree, which seems to require that each node contain pointers to adjacent neighbors, but I'm unclear on how to implement this in a useful way.
Any suggestions? I can fake a stack in HLSL using a fixed-length array or a struct with an element for each potential stack entry, but the memory requirements for that can become crippling with a sufficiently large tree.
Help.

I've managed to get this mostly working using a 3D DDA algorithm and neighbor node pointers.
I'm still working out a few bugs, but here's a C# version that appears to work. This one stops when it reaches the first leaf, but that's not entirely necessary.
/// <summary>
/// Traverse the octree along the given ray and return the first leaf hit, or null.
/// </summary>
/// <param name="ray"></param>
public OctreeNode DDATraverse(Ray ray)
{
float tmin;
float tmax;
/// make sure the ray hits the bounding box of the root octree node
if (!RayCasting.HitsBox(ray, root.BoundingBox.Min, root.BoundingBox.Max, out tmin, out tmax))
return null;
/// move the ray position to the point of intersection with the bounding volume.
ray.Position += ray.Direction * MathHelper.Min(tmin, tmax);
/// get integer cell coordinates for the given position
/// leafSize is a Vector3 containing the dimensions of a leaf node in world-space coordinates
/// cellCount is a Vector3 containing the number of cells in each direction, or the size of the tree root divided by leafSize.
var cell = Vector3.Min(((ray.Position - boundingBox.Min) / leafSize).Truncate(), cellCount - Vector3.One);
/// get the position of the intersection point relative to the tree root.
var pos = ray.Position - boundingBox.Min;
/// get the bounds of the starting cell - leaf size offset by "pos"
var cellBounds = GetCellBounds(cell);
/// calculate initial t values for each axis based on the sign of the ray.
/// See any good 3D DDA tutorial for an explanation of t, but it basically tells us the
/// distance we have to move from ray.Position along ray.Direction to reach the next cell boundary
/// This calculates t values for both positive and negative ray directions.
var tMaxNeg = (cellBounds.Min - ray.Position) / ray.Direction;
var tMaxPos = (cellBounds.Max - ray.Position) / ray.Direction;
/// pick, per axis, the t value for the boundary the ray is heading toward
/// within the current cell. (Choosing between the negative- and positive-
/// direction values like this is standard DDA initialization.)
var tMax = new Vector3(
ray.Direction.X < 0 ? tMaxNeg.X : tMaxPos.X,
ray.Direction.Y < 0 ? tMaxNeg.Y : tMaxPos.Y,
ray.Direction.Z < 0 ? tMaxNeg.Z : tMaxPos.Z);
/// get cell coordinate step directions
/// .Sign() is an extension method that returns a Vector3 with each component set to +/- 1
var step = ray.Direction.Sign();
/// calculate distance along the ray direction to move to advance from one cell boundary
/// to the next on each axis. Assumes ray.Direction is normalized.
/// Takes the absolute value of each component, since t is an unsigned
/// distance along the ray direction.
var tDelta = (leafSize / ray.Direction).Abs();
/// neighbor node indices to use when exiting cells
/// GridDirection.East = Vector3.Right
/// GridDirection.West = Vector3.Left
/// GridDirection.North = Vector3.Forward
/// GridDirection.South = Vector3.Back
/// GridDirection.Up = Vector3.Up
/// GridDirection.Down = Vector3.Down
var neighborDirections = new[] {
(step.X < 0) ? GridDirection.West : GridDirection.East,
(step.Y < 0) ? GridDirection.Down : GridDirection.Up,
(step.Z < 0) ? GridDirection.North : GridDirection.South
};
OctreeNode node = root;
/// walk the volume cell by cell. When we step out of the current node we
/// follow its neighbor pointers; when we step out of the tree entirely,
/// node becomes null and the loop ends.
while (node != null)
{
/// if the current node isn't a leaf, find one.
/// this version exits when it encounters the first leaf.
if (!node.Leaf)
for (var i = 0; i < OctreeNode.ChildCount; i++)
{
var child = node.Children[i];
if (child != null && child.Contains(cell))
{
//SetNode(ref node, child, visitedNodes);
node = child;
i = -1; // restart the child scan from the new node
if (node.Leaf)
return node;
}
}
/// index into neighborDirections (0 = X, 1 = Y, 2 = Z) for the axis we step along
int dir = 0;
/// This is off-the-shelf DDA.
if (tMax.X < tMax.Y)
{
if (tMax.X < tMax.Z)
{
tMax.X += tDelta.X;
cell.X += step.X;
dir = 0;
}
else
{
tMax.Z += tDelta.Z;
cell.Z += step.Z;
dir = 2;
}
}
else
{
if (tMax.Y < tMax.Z)
{
tMax.Y += tDelta.Y;
cell.Y += step.Y;
dir = 1;
}
else
{
tMax.Z += tDelta.Z;
cell.Z += step.Z;
dir = 2;
}
}
/// see if the new cell coordinates fall within the current node.
/// this is important when moving from a leaf into empty space within
/// the tree.
if (!node.Contains(cell))
{
/// if we stepped out of this node, grab the appropriate neighbor.
var neighborDir = neighborDirections[dir];
node = node.GetNeighbor(neighborDir);
}
else if (node.Leaf && stopAtFirstLeaf) // stopAtFirstLeaf: a field on the tree (not shown in this snippet)
return node;
}
return null;
}
Feel free to point out any bugs. I'll post the HLSL version if there's any demand.
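In case it helps, a typical call site looks something like this (a sketch only; GetPickRay and DrawBoundingBox are placeholders for your own picking and debug-draw code, and octree is the tree instance):
// Hypothetical usage: cast a pick ray into the tree and act on the first leaf hit.
Ray pickRay = GetPickRay(); // placeholder: build a ray from the camera/mouse
OctreeNode hit = octree.DDATraverse(pickRay);
if (hit != null)
{
    // e.g. highlight the leaf we hit
    DrawBoundingBox(hit.BoundingBox, Color.Yellow); // placeholder debug draw
}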
Here's another version that just steps through the tree in leaf-sized steps without intersection checking. This is useful as a 3D DDA demonstration:
/// <summary>
/// draw a 3D DDA "line" in units of leaf size where the ray intersects the
/// tree's bounding volume.
/// </summary>
/// <param name="ray"></param>
public IEnumerable<Vector3> DDA(Ray ray)
{
float tmin;
float tmax;
if (!RayCasting.HitsBox(ray, root.BoundingBox.Min, root.BoundingBox.Max, out tmin, out tmax))
yield break;
/// move the ray position to the point of intersection with the bounding volume.
ray.Position += ray.Direction * tmin;
/// get integer cell coordinates for the given position
var cell = Vector3.Min(((ray.Position - boundingBox.Min) / leafSize).Truncate(), cellCount - Vector3.One);
/// get the bounds of the starting cell.
var cellBounds = GetCellBounds(cell);
/// calculate initial t values for each axis based on the sign of the ray.
var tMaxNeg = (cellBounds.Min - ray.Position) / ray.Direction;
var tMaxPos = (cellBounds.Max - ray.Position) / ray.Direction;
/// calculate t values within the cell along the ray direction.
var tMax = new Vector3(
ray.Direction.X < 0 ? tMaxNeg.X : tMaxPos.X,
ray.Direction.Y < 0 ? tMaxNeg.Y : tMaxPos.Y,
ray.Direction.Z < 0 ? tMaxNeg.Z : tMaxPos.Z);
/// get cell coordinate step directions
var step = ray.Direction.Sign();
/// calculate distance along the ray direction to move to advance from one cell boundary
/// to the next on each axis. Assumes ray.Direction is normalized.
var tDelta = (leafSize / ray.Direction).Abs();
/// step across the bounding volume, yielding the position of each cell
/// that we touch. Extension methods GreaterThanOrEqual and LessThan
/// ensure that we stay within the bounding volume.
while (cell.GreaterThanOrEqual(Vector3.Zero) && cell.LessThan(cellCount))
{
/// yield the world-space origin of the current cell; the caller can use
/// this to create a marker cube and add it to a draw list.
yield return boundingBox.Min + cell * leafSize;
if (tMax.X < tMax.Y)
{
if (tMax.X < tMax.Z)
{
tMax.X += tDelta.X;
cell.X += step.X;
}
else
{
tMax.Z += tDelta.Z;
cell.Z += step.Z;
}
}
else
{
if (tMax.Y < tMax.Z)
{
tMax.Y += tDelta.Y;
cell.Y += step.Y;
}
else
{
tMax.Z += tDelta.Z;
cell.Z += step.Z;
}
}
}
}
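Calling it is straightforward; for example, to drop a debug marker in every leaf-sized cell the ray passes through (CreateMarker is a placeholder for whatever debug visualization you use):
foreach (Vector3 cellOrigin in octree.DDA(ray))
{
    CreateMarker(cellOrigin, leafSize); // placeholder: add a marker cube at this cell
}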
And an HLSL version that just stores the tree in a Texture3D, without neighbors or any "sparseness" to the tree.
This is still buggy. The first test with hitbox() works correctly, but the ray winds up getting refracted within the tree. This looks very cool, but isn't correct.
[screenshot: what it looks like when I stop at the root bounds, without using the DDA to traverse the tree]
/*
find which leaf, if any, the ray intersects.
Returns transparency (Color(0,0,0,0)) if no intersection was found.
TestValue is a shader constant parameter passed from the caller which is used to dynamically adjust the number of loops the shader code will execute. Useful for debugging.
intrinsics:
step(y,x) : (x >= y) ? 1 : 0
*/
float4 DDATraverse(Ray ray)
{
float3 bounds_min = OctreeCenter-OctreeObjectSize/2;
float3 bounds_max = OctreeCenter+OctreeObjectSize/2;
float4 cellsPerSide = float4(trunc((bounds_max-bounds_min)/CellSize),1);
float tmin;
float tmax;
if(hitbox(ray,bounds_min,bounds_max,tmin,tmax))
{
ray.Position+=ray.Direction*tmin;
float4 cell = float4((ray.Position-bounds_min)/CellSize,1);
float3 tMaxNeg = (bounds_min-ray.Position)/ray.Direction;
float3 tMaxPos = (bounds_max-ray.Position)/ray.Direction;
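// NOTE: two differences from the C# version above that may explain the drift:
// (1) cell is never truncated to integer cell coordinates, and
// (2) these t values use the whole volume's bounds rather than the starting cell's.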
float3 tmax = float3(
ray.Direction.x < 0 ? tMaxNeg.x : tMaxPos.x,
ray.Direction.y < 0 ? tMaxNeg.y : tMaxPos.y,
ray.Direction.z < 0 ? tMaxNeg.z : tMaxPos.z);
float3 tstep = sign(ray.Direction);
float3 dt = abs(CellSize/ray.Direction);
float4 texel;
float4 color = float4(0,0,0,0); // initialized: color.a is read below before the first write
for(int i=0;i<TestValue;i++)
{
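// NOTE: smoothstep applies a Hermite curve, not a linear remap, so the
// sample coordinates get warped; a plain normalization (cell / cellsPerSide)
// may be what's intended here and could explain the "refraction" artifact.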
texel=smoothstep(float4(0,0,0,0),cellsPerSide,cell);
if (color.a < 0.9)
color = tex3Dlod(octreeSampler,texel);
if (tmax.x < tmax.y)
{
if (tmax.x < tmax.z)
{
tmax.x+=dt.x;
cell.x+=tstep.x;
}
else
{
tmax.z+=dt.z;
cell.z+=tstep.z;
}
}
else
{
if (tmax.y < tmax.z)
{
tmax.y+=dt.y;
cell.y+=tstep.y;
}
else
{
tmax.z+=dt.z;
cell.z+=tstep.z;
}
}
}
return color;
}
else
return float4(1,0,0,1);
}
Update: I found a very good volume rendering tutorial:
http://graphicsrunner.blogspot.com/search?updated-max=2009-08-27T02%3A45%3A00-04%3A00&max-results=10

Related

Detect Image Problems

I really don't know what it is called (distortion or something else),
but I would like to detect camera lens problems in some different types of images using Emgu CV (or OpenCV).
Any ideas about which algorithms to use would be appreciated.
The second image seems to have high noise, but is there any way to detect high noise via OpenCV?
This is very difficult to achieve generically without reference data or a homogeneity sample. However, I have developed a recommendation: analyze the average SNR (signal-to-noise) ratio of the image. The algorithm divides the input image into a specified number of "sub-images" based on a specified kernel size, in order to evaluate each independently for local SNR. The computed SNRs for each sub-image are then averaged to provide an indicator of the global SNR of the image.
You will need to test this approach exhaustively; however, it shows promise on the following three images, producing these AvgSNR values:
Image #1 - AvgSNR = 0.9
Image #2 - AvgSNR = 7.0
Image #3 - AvgSNR = 0.6
NOTE: See how the "clean" control image produces a much higher AvgSNR.
The only variable to consider is the kernel size. I would recommend keeping this at a size that will suit even the smallest of your potential input images. 30 pixels square is likely appropriate for many images.
I enclose my test code with annotation:
class Program
{
static void Main(string[] args)
{
// List of file names to load.
List<string> fileNames = new List<string>()
{
"IifXZ.png",
"o1z7p.jpg",
"NdQtj.jpg"
};
// For each image
foreach (string fileName in fileNames)
{
// Determine local file path
string path = Path.Combine(Environment.CurrentDirectory, @"TestImages\", fileName);
// Load the image
Image<Bgr, byte> inputImage = new Image<Bgr, byte>(path);
// Compute the AvgSNR with a kernel of 30x30
Console.WriteLine(ComputeAverageSNR(30, inputImage.Convert<Gray, byte>()));
// Display the image
CvInvoke.NamedWindow("Test");
CvInvoke.Imshow("Test", inputImage);
while (CvInvoke.WaitKey() != 27) { }
}
// Pause for evaluation
Console.ReadKey();
}
static double ComputeAverageSNR(int kernelSize, Image<Gray, byte> image)
{
// Calculate the number of sub-divisions given the kernel size
int widthSubDivisions, heightSubDivisions;
widthSubDivisions = (int)Math.Floor((double)image.Width / kernelSize);
heightSubDivisions = (int)Math.Floor((double)image.Height / kernelSize);
int totalNumberSubDivisions = widthSubDivisions * heightSubDivisions;
Rectangle ROI = new Rectangle(0, 0, kernelSize, kernelSize);
double avgSNR = 0;
// For each sub-division, calculate the SNR and add it to avgSNR
for (int v = 0; v < heightSubDivisions; v++)
{
for (int u = 0; u < widthSubDivisions; u++)
{
// Iterate the sub-division position
ROI.Location = new Point(u * kernelSize, v * kernelSize);
// Calculate the SNR of this sub-division
avgSNR += ComputeSNR(image.GetSubRect(ROI));
}
}
avgSNR /= totalNumberSubDivisions;
return avgSNR;
}
static double ComputeSNR(Image<Gray, byte> image)
{
// Local variables
double mean, sigma, snr;
// Calculate the mean pixel value for the sub-division
int population = image.Width * image.Height;
mean = CvInvoke.Sum(image).V0 / population;
// Calculate the Sigma of the sub-division population
double sumDeltaSqu = 0;
for (int v = 0; v < image.Height; v++)
{
for (int u = 0; u < image.Width; u++)
{
sumDeltaSqu += Math.Pow(image.Data[v, u, 0] - mean, 2);
}
}
sumDeltaSqu /= population;
sigma = Math.Pow(sumDeltaSqu, 0.5);
// Calculate and return the SNR value
snr = sigma == 0 ? mean : mean / sigma;
return snr;
}
}
NOTE: Without a reference, it is not possible to differentiate between natural variance/fidelity and "noise". For example, a highly textured background, or a scene with few homogeneous regions, will drive sigma up and therefore yield a low AvgSNR, reading as "noisy". This approach will perform best when the evaluated scene consists mostly of plain, mono-color surfaces, such as the server room or shop front. Grass, for example, contains a large amount of texture and therefore "noise".
An alternative method is to evaluate your images in the frequency domain following a Fourier transform. Principally, the noise examples you have provided are images containing unwanted, high-frequency content. Conduct an FFT and flag images violating a threshold for high frequencies. Here you will find an example of FFT with Emgu: FFT with Emgu
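To make that concrete, here is a rough, self-contained C# sketch of the idea (a naive direct DFT, O(N^4), for illustration only; in practice you would use a real FFT such as OpenCV's DFT, and the cutoff is an assumption to tune against your own data):
using System;

static class FrequencyNoiseCheck
{
    // Fraction of spectral energy above a radial frequency cutoff (0 .. ~0.7).
    // Naive direct DFT so the snippet stays dependency-free - illustration only.
    public static double HighFrequencyEnergyRatio(double[,] image, double cutoff)
    {
        int h = image.GetLength(0), w = image.GetLength(1);
        double total = 0, high = 0;
        for (int v = 0; v < h; v++)
            for (int u = 0; u < w; u++)
            {
                // direct DFT coefficient F(u, v)
                double re = 0, im = 0;
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                    {
                        double angle = -2.0 * Math.PI * ((double)u * x / w + (double)v * y / h);
                        re += image[y, x] * Math.Cos(angle);
                        im += image[y, x] * Math.Sin(angle);
                    }
                double energy = re * re + im * im;
                // fold the spectrum so (0,0) is DC and distance measures frequency
                double fu = Math.Min(u, w - u) / (double)w;
                double fv = Math.Min(v, h - v) / (double)h;
                total += energy;
                if (Math.Sqrt(fu * fu + fv * fv) > cutoff)
                    high += energy;
            }
        return total > 0 ? high / total : 0;
    }
}
A noisy image should push the ratio up relative to a clean control; as with the AvgSNR approach, the threshold needs testing against your own image set.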

Draw line instead of rendering Anchor in ARCore

I am new to AR. I am working on an app using ARCore, based on this project: AR-REMOTE-SUPPORT
When I draw on the screen it creates the default Android anchor; I want a line instead of the default Android anchor.
How can I achieve this?
Here is the function which places anchors on the screen:
public void onDrawFrame(GL10 gl) {
// Clear screen to notify driver it should not load any pixels from previous frame.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
if (mSession == null) {
return;
}
// Notify ARCore session that the view size changed so that the perspective matrix and
// the video background can be properly adjusted.
mDisplayRotationHelper.updateSessionIfNeeded(mSession);
try {
// Obtain the current frame from ARSession. When the configuration is set to
// UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
// camera framerate.
Frame frame = mSession.update();
Camera camera = frame.getCamera();
// Handle taps. Handling only one tap per frame, as taps are usually low frequency
// compared to frame rate.
MotionEvent tap = queuedSingleTaps.poll();
if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
for (HitResult hit : frame.hitTest(tap)) {
// Check if any plane was hit, and if it was hit inside the plane polygon
Trackable trackable = hit.getTrackable();
// Creates an anchor if a plane or an oriented point was hit.
if ((trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hit.getHitPose()))
|| (trackable instanceof Point
&& ((Point) trackable).getOrientationMode()
== Point.OrientationMode.ESTIMATED_SURFACE_NORMAL)) {
// Hits are sorted by depth. Consider only closest hit on a plane or oriented point.
// Cap the number of objects created. This avoids overloading both the
// rendering system and ARCore.
if (anchors.size() >= 250) {
anchors.get(0).detach();
anchors.remove(0);
}
// Adding an Anchor tells ARCore that it should track this position in
// space. This anchor is created on the Plane to place the 3D model
// in the correct position relative both to the world and to the plane.
anchors.add(hit.createAnchor());
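// (This is where each tap becomes an anchor; the answer below draws
// a line between two such anchors instead of rendering a model at each.)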
break;
}
}
}
// Draw background.
mBackgroundRenderer.draw(frame);
// If not tracking, don't draw 3d objects.
if (camera.getTrackingState() == TrackingState.PAUSED) {
return;
}
// Get projection matrix.
float[] projmtx = new float[16];
camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);
// Get camera matrix and draw.
float[] viewmtx = new float[16];
camera.getViewMatrix(viewmtx, 0);
// Compute lighting from average intensity of the image.
final float lightIntensity = frame.getLightEstimate().getPixelIntensity();
if (isShowPointCloud()) {
// Visualize tracked points.
PointCloud pointCloud = frame.acquirePointCloud();
mPointCloud.update(pointCloud);
mPointCloud.draw(viewmtx, projmtx);
// Application is responsible for releasing the point cloud resources after
// using it.
pointCloud.release();
}
// Check if we detected at least one plane. If so, hide the loading message.
if (mMessageSnackbar != null) {
for (Plane plane : mSession.getAllTrackables(Plane.class)) {
if (plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING
&& plane.getTrackingState() == TrackingState.TRACKING) {
hideLoadingMessage();
break;
}
}
}
if (isShowPlane()) {
// Visualize planes.
mPlaneRenderer.drawPlanes(
mSession.getAllTrackables(Plane.class), camera.getDisplayOrientedPose(), projmtx);
}
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (Anchor anchor : anchors) {
if (anchor.getTrackingState() != TrackingState.TRACKING) {
continue;
}
// Get the current pose of an Anchor in world space. The Anchor pose is updated
// during calls to session.update() as ARCore refines its estimate of the world.
anchor.getPose().toMatrix(mAnchorMatrix, 0);
// Update and draw the model and its shadow.
mVirtualObject.updateModelMatrix(mAnchorMatrix, mScaleFactor);
//mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
}
sendARViewMessage();
} catch (Throwable t) {
// Avoid crashing the application due to unhandled exceptions.
Log.e(TAG, "Exception on the OpenGL thread", t);
}
}
Any help would be appreciated
TIA
One simple way to draw a line in ARCore is to create it between two anchor points.
The line itself is generally a 3D object also.
Here is a tested working example, based on the nice approach in this answer: https://stackoverflow.com/a/52816504/334402
private void drawLine(AnchorNode node1, AnchorNode node2) {
//Draw a line between two AnchorNodes (adapted from https://stackoverflow.com/a/52816504/334402)
Log.d(TAG,"drawLine");
Vector3 point1, point2;
point1 = node1.getWorldPosition();
point2 = node2.getWorldPosition();
//First, find the vector extending between the two points and define a look rotation
//in terms of this Vector.
final Vector3 difference = Vector3.subtract(point1, point2);
final Vector3 directionFromTopToBottom = difference.normalized();
final Quaternion rotationFromAToB =
Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
MaterialFactory.makeOpaqueWithColor(getApplicationContext(), new Color(0, 255, 244))
.thenAccept(
material -> {
/* Then, create a rectangular prism, using ShapeFactory.makeCube() and use the difference vector
to extend to the necessary length. */
Log.d(TAG,"drawLine insie .thenAccept");
ModelRenderable model = ShapeFactory.makeCube(
new Vector3(.01f, .01f, difference.length()),
Vector3.zero(), material);
/* Last, set the world rotation of the node to the rotation calculated earlier and set the world position to
the midpoint between the given points . */
nodeForLine = new Node();
nodeForLine.setParent(node1);
nodeForLine.setRenderable(model);
nodeForLine.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
nodeForLine.setWorldRotation(rotationFromAToB);
}
);
}
You can see the full source here: https://github.com/mickod/LineView

SpriteKit stop spinning wheel in a defined angle

I have a spinning wheel rotating at an angular speed ω, no acceleration involved, implemented with SpriteKit.
When the user pushes a button I need to slowly decelerate the wheel from the current angle ∂0 and end up at a specified angle (let's call it ∂f).
I gave the wheel an associated mass of 2.
I already tried angularDamping and SKAction.rotate(toAngle: duration:), but they do not fit my needs because:
With angularDamping I cannot easily specify the angle ∂f where I want to end up.
With SKAction.rotate(toAngle: duration:) I cannot start slowing down from the current rotation speed, and it doesn't behave naturally.
The only remaining approach I tried is using SKAction.applyTorque(duration:).
This sounds interesting but I have problems calculating the formula to obtain the correct torque to apply and especially for the inertia and radius of the wheel.
Here is my approach:
I'm taking the starting angular velocity ω as:
wheelNode.physicsBody?.angularVelocity.
I'm taking the mass from wheelNode.physicsBody?.mass
The time t is a constant of 10 (meaning that I want the wheel to decelerate to the final angle ∂f within 10 seconds).
The deceleration that I calculated as:
let a = -1 * ω / t
The inertia should be: let I = 1/2 * mass * pow(r, 2) (see the notes regarding the radius, please).
Then, finally, I calculated the final torque to apply as: let torque = I * a (taking care that it is opposite to the current angular speed of the wheel).
NOTE:
Since I'm not clear on how to get the radius of the wheel, I tried to grab it in two ways:
from wheelNode.physicsBody?.area, as let r = sqrt((wheelNode.physicsBody?.area ?? 0) / .pi)
by converting from pixels to meters as the area documentation says. Then I have let r = self.wheelNode.radius / 150.
Funny: I obtain 2 different values :(
Unfortunately, something in this approach is not working: I have no idea how to end up at the specified angle, and the wheel doesn't stop as it should anyway (either the torque is too much and it spins in the other direction, or it is not enough). So the applied torque also seems to be wrong.
Do you know a better way to achieve the result I need? Is that the correct approach? If yes, what's wrong with my calculations?
Kinematics makes my head hurt, but here you go. I made it so you can input the number of rotations, and the wheel will rotate that many times as it slows down to the angle you specify. The other function and extension are there to keep the code relatively clean/readable, so if you just want one giant mess of a function, go ahead and modify it.
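For reference, the constant-deceleration kinematics the code below relies on (final angular velocity zero after sweeping a net angle Δθ):
ω_f² = ω_i² + 2·α·Δθ, with ω_f = 0 ⟹ α = −ω_i² / (2·Δθ)
τ = I·α, t = (ω_f − ω_i) / α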
• Make sure the node's angularDamping = 0.0
• Make sure the node has a circular physicsbody
// Stops a spinning SpriteNode at a specified angle within a certain amount of rotations
//NOTE: Node must have a circular physicsbody
// Damping should be from 0.0 to 1.0
func decelerate(node: SKSpriteNode, toAngle: CGFloat, rotations: Int) {
if node.physicsBody == nil { print("Node doesn't have a physicsbody"); return } // Avoid a crash in case the node's physicsbody is nil
var cw:CGFloat { if node.physicsBody!.angularVelocity < CGFloat(0.0) { return -1.0 } else { return 1.0 } } // Clockwise sign (+1/-1), used as a multiplier to reduce if statements
let m = node.physicsBody!.mass // Mass
let r = (node.physicsBody!.area / CGFloat.pi).squareRoot() // Radius
let i = 0.5 * m * r.squared // Inertia
let wi = node.physicsBody!.angularVelocity // Initial Angular Velocity
let wf:CGFloat = 0 // Final Angular Velocity
let ti = CGFloat.unitCircle(node.zRotation) // Initial Theta
var tf = CGFloat.unitCircle(toAngle) // Final Theta
//Correction constant based on rate of rotation, since there seems to be a delay between when the action is calculated and when it is run
//Without the correction the node stops a little off from its desired stop angle
tf -= 0.00773889 * wi //Might need to change this constant
let dt = deltaTheta(ti, tf, Int(cw), rotations)
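// Kinematics: with final angular velocity 0, wf² = wi² + 2·a·Δθ gives a = -wi² / (2·Δθ), computed below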
let a = -cw * 0.5 * wi.squared / abs(dt) // Angular Acceleration - cw used to determine direction
print("A:\(a)")
let time:Double = Double(abs((wf-wi) / a)) // Time needed to stop
let torque:CGFloat = i * a // Torque needed to stop
node.run(SKAction.applyTorque(torque, duration: time))
}
func deltaTheta(_ ti:CGFloat, _ tf:CGFloat, _ clockwise: Int, _ rotations: Int) -> CGFloat {
let extra = CGFloat(rotations)*2*CGFloat.pi
if clockwise == -1 {
if tf>ti { return tf-ti-2*CGFloat.pi-extra }else{ return tf-ti-extra }
}else{
if tf>ti { return tf-ti+extra }else{ return tf+2*CGFloat.pi+extra-ti }
}
}
extension CGFloat {
public var squared:CGFloat { return self * self }
public static func unitCircle(_ value: CGFloat) -> CGFloat {
if value < 0 { return 2 * CGFloat.pi + value }
else{ return value }
}
}

Set the minimum grid resolution in AChartEngine?

I am using the AChartEngine library to plot measurements from a sensor. The values are on the order of 1E-6.
When I try to plot the values they are shown correctly, but as I zoom the plot, the maximum resolution I can see in the x labels is on the order of 1E-4. I am using the following code to change the number of labels:
mRenderer.setXLabels(20);
mRenderer.setYLabels(20);
I am also changing the range of the y axis, but the resolution remains unchanged. Has anyone run into this problem before?
EDIT
I do not have enough reputation to post images, but the following link shows the chartview that I am getting.
https://dl.dropboxusercontent.com/u/49921111/measurement1.png
What I want is to have more grid lines between 3.0E-5 and 4.0E-5. Unfortunately I have not found how to do that. I also tried changing the renderer pan, the initial range of the plot, and the zoom limits, all without success. I was thinking the only option left would be to override some of the draw methods, but I have no clue how to do that.
I have dug into the source code of AChartEngine and found the problem it has when small numbers are to be plotted. It is in a static function used to compute labels for every chart:
private static double[] computeLabels(final double start, final double end,
final int approxNumLabels) {
// The problem is right here in this condition.
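// A fix: replace the hard-coded 0.000001f epsilon with a smaller value
// (or a configurable field) so that very small ranges still get labels.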
if (Math.abs(start - end) < 0.000001f) {
return new double[] { start, start, 0 };
}
double s = start;
double e = end;
boolean switched = false;
if (s > e) {
switched = true;
double tmp = s;
s = e;
e = tmp;
}
double xStep = roundUp(Math.abs(s - e) / approxNumLabels);
// Compute x starting point so it is a multiple of xStep.
double xStart = xStep * Math.ceil(s / xStep);
double xEnd = xStep * Math.floor(e / xStep);
if (switched) {
return new double[] { xEnd, xStart, -1.0 * xStep };
}
return new double[] { xStart, xEnd, xStep };
}
So this function basically takes the start (minimum) and end (maximum) values of the plot and the approximate number of labels. Then it rounds the values and computes the step of the grid (xStep). If the difference between start and end is too small (0.000001f), then start and end are treated as the same and the step is 0. That is why it's not showing any labels between these small values, nor any grid lines! So I just need to replace the 0.000001f with a smaller number, or with a variable, in order to control the resolution of the grid. I hope this can help someone.

Finding the Oriented Bounding Box of a Convex Hull in XNA Using Rotating Calipers

Perhaps this is more of a math question than a programming question, but I've been trying to implement the rotating calipers algorithm in XNA.
I've deduced a convex hull from my point set using a monotone chain as detailed on Wikipedia.
Now I'm trying to model my algorithm to find the OBB after the one found here:
http://www.cs.purdue.edu/research/technical_reports/1983/TR%2083-463.pdf
However, I don't understand what the DOTPR and CROSSPR methods it mentions on the final page are supposed to return.
I understand how to get the dot product of two points and the cross product of two points, but it seems these functions are supposed to return the dot and cross products of two edges / line segments. My knowledge of mathematics is admittedly limited, but this is my best guess as to what the algorithm is looking for:
public static float PolygonCross(List<Vector2> polygon, int indexA, int indexB)
{
var segmentA1 = NextVertice(indexA, polygon) - polygon[indexA];
var segmentB1 = NextVertice(indexB, polygon) - polygon[indexB];
float crossProduct1 = CrossProduct(segmentA1, segmentB1);
return crossProduct1;
}
public static float CrossProduct(Vector2 v1, Vector2 v2)
{
return (v1.X * v2.Y - v1.Y * v2.X);
}
public static float PolygonDot(List<Vector2> polygon, int indexA, int indexB)
{
var segmentA1 = NextVertice(indexA, polygon) - polygon[indexA];
var segmentB1 = NextVertice(indexB, polygon) - polygon[indexB];
float dotProduct = Vector2.Dot(segmentA1, segmentB1);
return dotProduct;
}
However, when I use those methods as directed in this portion of my code...
while (PolygonDot(polygon, i, j) > 0)
{
j = NextIndex(j, polygon);
}
if (i == 0)
{
k = j;
}
while (PolygonCross(polygon, i, k) > 0)
{
k = NextIndex(k, polygon);
}
if (i == 0)
{
m = k;
}
while (PolygonDot(polygon, i, m) < 0)
{
m = NextIndex(m, polygon);
}
...it returns the same index for j and k when I give it a test set of points:
List<Vector2> polygon = new List<Vector2>()
{
new Vector2(0, 138),
new Vector2(1, 138),
new Vector2(150, 110),
new Vector2(199, 68),
new Vector2(204, 63),
new Vector2(131, 0),
new Vector2(129, 0),
new Vector2(115, 14),
new Vector2(0, 138),
};
Note that I call polygon.Reverse() to place these points in counter-clockwise order, as indicated in the technical document from purdue.edu. My algorithm for finding the convex hull of a point set generates a list of points in counter-clockwise order, but does so assuming y < 0 is higher than y > 0, because when drawing to the screen (0,0) is the top-left corner. Reversing the list seems sufficient. I also remove the duplicate point at the end.
After this process, the data becomes:
Vector2(115, 14)
Vector2(129, 0)
Vector2(131, 0)
Vector2(204, 63)
Vector2(199, 68)
Vector2(150, 110)
Vector2(1, 138)
Vector2(0, 138)
This test fails on the first loop when i equals 0 and j equals 3. It finds that the cross product of the line (115,14) to (204,63) and the line (204,63) to (199,68) is 0. It then finds that the dot product of the same lines is also 0, so j and k share the same index.
In contrast, when given this test set:
http://www.wolframalpha.com/input/?i=polygon+%282%2C1%29%2C%281%2C2%29%2C%281%2C3%29%2C%282%2C4%29%2C%284%2C4%29%2C%285%2C3%29%2C%283%2C1%29
My code successfully returns this OBB:
http://www.wolframalpha.com/input/?i=polygon+%282.5%2C0.5%29%2C%280.5%2C2.5%29%2C%283%2C5%29%2C%285%2C3%29
I've read over the C++ algorithm found at http://www.geometrictools.com/LibMathematics/Containment/Wm5ContMinBox2.cpp but I'm too dense to follow it completely. It also appears to be very different from the one detailed in the paper above.
Does anyone know what step I'm skipping or see some error in my code for finding the dot product and cross product of two line segments? Has anyone successfully implemented this code before in C# and have an example?
Points and vectors as data structures are essentially the same thing; both consist of two floats (or three if you're working in three dimensions). So, when asked to take the dot product of the edges, I suppose it means taking the dot product of the vectors that the edges define. The code you provided does exactly this.
Your implementation of CrossProduct seems correct (see Wolfram MathWorld). However, in PolygonCross and PolygonDot I think you shouldn't normalize the segments. It will affect the magnitude of the return values of PolygonDot and PolygonCross. By removing the superfluous calls to Vector2.Normalize you can speed up your code and reduce the amount of noise in your floating point values. However, normalization is not relevant to the correctness of the code that you have pasted as it only compares the results with zero.
Note that the paper you refer to assumes that the polygon vertices are listed in counterclockwise order (page 5, first paragraph after "Beginning of comments") but your example polygon is defined in clockwise order. That's why PolygonCross(polygon, 0, 1) is negative and you get the same value for j and k.
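If it helps, the usual way to verify (and fix) the winding before running the calipers is a signed-area test; a small sketch in the same style as the code above:
/// Returns true if the polygon's vertices are in counter-clockwise order,
/// using the signed area (shoelace formula). Note that with a y-down screen
/// coordinate system the visual sense of "counter-clockwise" flips, as the
/// question discusses.
public static bool IsCounterClockwise(List<Vector2> polygon)
{
    float signedArea = 0f;
    for (int i = 0; i < polygon.Count; i++)
    {
        Vector2 current = polygon[i];
        Vector2 next = polygon[(i + 1) % polygon.Count];
        signedArea += current.X * next.Y - next.X * current.Y;
    }
    return signedArea > 0;
}

// Usage: if (!IsCounterClockwise(polygon)) polygon.Reverse();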
I assume DOTPR is a normal vector dot product and CROSSPR is a cross product. A dot product returns a scalar; a cross product returns a vector perpendicular to the two given vectors (basic vector math; check Wikipedia).
They are actually defined in the paper: DOTPR(i, j) returns the dot product of the vectors from vertex i to i+1 and from vertex j to j+1. The same goes for CROSSPR, but with the cross product.
