Make a line thicker in 3D? - iOS

In reference to this question:
Drawing a line between two points using SceneKit
I'm drawing a line in 3D and want to make it thicker using this code:
func renderer(aRenderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: NSTimeInterval) {
    // Makes the lines thicker
    glLineWidth(20)
}
but it doesn't work on iOS 8.2.
Is there another way?
Update
From the docs
https://developer.apple.com/library/prerelease/ios/documentation/SceneKit/Reference/SCNSceneRendererDelegate_Protocol/index.html#//apple_ref/occ/intfm/SCNSceneRendererDelegate/renderer:updateAtTime:
I did add SCNSceneRendererDelegate and a valid line width but still could not get the line width to increase.

You cannot pass an arbitrary value to glLineWidth(); implementations only support a limited range of widths.
You can check the range of values supported by glLineWidth() with:
glGetFloatv(GL_LINE_WIDTH_RANGE, sizes);
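For example, a minimal sketch in Swift (assuming an OpenGL ES context; on ES the corresponding token is GL_ALIASED_LINE_WIDTH_RANGE):

import OpenGLES

// Query the supported line width range before calling glLineWidth.
// sizes[0] is the minimum supported width, sizes[1] the maximum.
var sizes = [GLfloat](repeating: 0, count: 2)
glGetFloatv(GLenum(GL_ALIASED_LINE_WIDTH_RANGE), &sizes)
print("Supported line widths: \(sizes[0]) ... \(sizes[1])")

Note that implementations are only required to support width 1, so the queried maximum may be small.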
One crazy idea is to use a cylinder for drawing lines ;). I use it when I want nice, controllable lines, but I am not aware of a handy OpenGL function to do this.

@G Alexander: here you go, my implementation of a cylinder. It is a bit tedious, but it is what I have at the moment.
If you have points p0 and p1, Vector normal = (p1 - p0).normalize() is the axis of the cylinder.
Pick a point p2 that is not on the axis, then:
q = (p2 - p0).normalize();
v0 = normal.crossproduct(q);
v1 = normal.crossproduct(v0);
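Concretely, a small sketch of that basis construction (in Swift for illustration; the minimal Vector3 type is an assumption of this sketch):

struct Vector3 {
    var x, y, z: Double

    static func - (a: Vector3, b: Vector3) -> Vector3 {
        return Vector3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
    func normalized() -> Vector3 {
        let m = (x * x + y * y + z * z).squareRoot()
        return Vector3(x: x / m, y: y / m, z: z / m)
    }
    func cross(_ o: Vector3) -> Vector3 {
        return Vector3(x: y * o.z - z * o.y,
                       y: z * o.x - x * o.z,
                       z: x * o.y - y * o.x)
    }
}

// v0 and v1 span the plane of each circle; both are perpendicular
// to the cylinder axis (p1 - p0) and to each other.
func circleBasis(p0: Vector3, p1: Vector3, p2: Vector3) -> (v0: Vector3, v1: Vector3) {
    let normal = (p1 - p0).normalized()   // the cylinder axis
    let q = (p2 - p0).normalized()        // any direction off the axis
    let v0 = normal.cross(q).normalized()
    let v1 = normal.cross(v0).normalized()
    return (v0, v1)
}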
Having these two vectors, you can stack circles of any radius along the axis of the cylinder using the following function (a cylinder is a stack of circles):
public Circle make_circle(Point center, Vector v0, Vector v1, double radius)
{
    Circle c = new Circle();
    // Sample the circle in small angular steps; v0 and v1 span its plane.
    for (double i = 0; i < 2 * Math.PI; i += 0.05)
    {
        Point p = new Point(center + radius * Math.Cos(i) * v0 + radius * Math.Sin(i) * v1);
        c.Add(p);
    }
    return c;
}
You only need to make circles with this function along the axis of the cylinder:
List<Circle> cylinder = new List<Circle>();
for (double i = 0; i < 1; i += 0.1)
{
    cylinder.Add(make_circle(p0 + i * normal, v0, v1, radius));
}
Now you should take two consecutive circles and connect them with quads by sampling uniformly (see the sketch below).
I have implemented it this way since I had circles implemented already.
A simpler implementation would be to build the circle along the x axis, then rotate and translate it to p0 so it aligns with normal, or to use gluCylinder if you are a fan of GLU.
Hopefully it works for you.
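For the quad step, a minimal sketch reusing the Vector3 type from the sketch above (quadsBetween is a hypothetical helper, not part of the original answer):

// Connect two consecutive circles (rings of sampled points) with quads.
// Quad i joins points i and i+1 on ring a to the matching points on
// ring b, wrapping around at the end so the tube closes.
func quadsBetween(_ a: [Vector3], _ b: [Vector3]) -> [(Vector3, Vector3, Vector3, Vector3)] {
    var quads: [(Vector3, Vector3, Vector3, Vector3)] = []
    let n = min(a.count, b.count)
    for i in 0..<n {
        let j = (i + 1) % n
        quads.append((a[i], a[j], b[j], b[i]))
    }
    return quads
}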

Related

JSXGraph 0.99.7 union of curves

Because of the Moodle-STACK environment I am currently limited to JSXGraph 0.99.7. Is there a way to get the union of two curves given by coordinate vectors (polygons) in that version?
In 1.2.1 I do this using Clip.union(), which works fine in jsfiddle (not exactly a minimal working example) but not in STACK.
this.b = board.create('curve', JXG.Math.Clip.union(bneu, this.b, board),
    {opacity: true, fillcolor: 'lightgray', strokeWidth: normalStyle.strokeWidth,
     strokeColor: normalStyle.strokeColor});
In 0.99.7 you have to do the union by hand. As long as the shapes are not overlapping, this should be doable without too much work. Define a curve and set its updateDataArray method:
c = board.create('curve', [[], []]);
c.updateDataArray = function() {
    this.dataX = [];
    this.dataY = [];
    // Copy the coordinates of the polygons / curves into
    // these arrays.
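    //
    // For example, a sketch for two non-overlapping polygons pol1 and
    // pol2 (placeholder names, using the accessors described below):
    // [pol1, pol2].forEach(function(pol) {
    //     for (var i = 0; i < pol.vertices.length; i++) {
    //         this.dataX.push(pol.vertices[i].X());
    //         this.dataY.push(pol.vertices[i].Y());
    //     }
    //     this.dataX.push(NaN);  // interrupt the path between shapes
    //     this.dataY.push(NaN);
    // }, this);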
};
board.update();
You can access the coordinates of the polygon vertices by
polygon.vertices[i].X();
polygon.vertices[i].Y();
Attention: the last vertex is a copy of the first vertex, to make the polygon a closed curve.
The coordinates of a curve can be accessed by
curve.points[i].usrCoords[1]; // x-coordinate
curve.points[i].usrCoords[2]; // y-coordinate
The curve path can be interrupted by adding NaNs:
this.dataX.push(NaN);
this.dataY.push(NaN);
Hope that helps a little bit.

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    try {
        // FileInputStream's constructor can also throw, so it sits inside the try.
        FileInputStream fileStream = new FileInputStream(
                depthData.xyzParcelFileDescriptor.getFileDescriptor());
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet. There is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10000+ each callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster. They are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective from OpenGL and multiply out the locations of each point, then figure out their screen-space location (as OpenGL would during rendering) and sort the points by their xy screen position, keeping the Z value from the original buffer rather than the calculated screen-space depth (sketched below)
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
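A rough sketch of that second option (in Swift with placeholder types; DepthPoint and the project callback are assumptions standing in for the Tango point buffer and the model/view/perspective multiply):

struct DepthPoint { var x, y, z: Float }

// Bucket unordered depth points into a width x height raster, keeping
// the nearest (smallest) z per cell. Cells no point lands in stay nil.
func rasterize(points: [DepthPoint], width: Int, height: Int,
               project: (DepthPoint) -> (u: Int, v: Int)) -> [[Float?]] {
    var grid: [[Float?]] = Array(repeating: Array(repeating: nil as Float?, count: width),
                                 count: height)
    for p in points {
        let (u, v) = project(p)
        guard u >= 0, u < width, v >= 0, v < height else { continue }
        if grid[v][u] == nil || p.z < grid[v][u]! {
            grid[v][u] = p.z   // keep the original z, not screen-space depth
        }
    }
    return grid
}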
I too wish to use Tango for object avoidance for robotics. I've had some success by simplifying the use case to be only interested in the distance of any object located at the center view of the Tango device.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
// The buffer holds 3 floats (x, y, z) per point, so scan xyzCount * 3 floats.
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
            Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}
Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Said simply, this code takes the average distance over a small square centered at the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains about 50 points in ideal conditions, and fewer when the device is held close to the floor.
I've tested this using version 2 of my tango-caminada application, and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated I moved the device towards a trash can in the path until 0.5 meters separation and then rotated left until the distance was more than 0.5 meters and proceeded forward. An oversimplified simulation, but the basis for object avoidance using Tango depth perception.
You can do this by using the camera intrinsics to convert XY coordinates to normalized values -- see this post - Google Tango: Aligning Depth and Color Frames - it's talking about texture coordinates, but it's exactly the same problem.
Once normalized, move to screen space (e.g. 1280x720); the Z coordinate can then be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color pixels that don't correspond to depth points, and advisedly so, before you use the depth information to further colorize pixels.
The main thing to remember is that the raw coordinates returned already use the basis vectors you want, i.e. you do not need the pose attitude or location.
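As a rough illustration of that conversion, a pinhole-projection sketch (in Swift; fx, fy, cx, cy stand for the camera intrinsics and, like the function itself, are assumptions of this sketch):

// Project a depth point (x, y, z) in camera space to pixel coordinates:
// normalize by depth, then scale and offset by the intrinsics.
func projectToPixel(x: Double, y: Double, z: Double,
                    fx: Double, fy: Double, cx: Double, cy: Double) -> (u: Int, v: Int) {
    let u = Int((fx * x / z + cx).rounded())
    let v = Int((fy * y / z + cy).rounded())
    return (u, v)
}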

Passing UIBezierPath to view class for drawing

I am making a level-based game with many different objects, all different. In each level there will be different amounts of each type of object. Thus, I have been trying to make the drawing part as generic as possible, so that all I have to do is pass in the coordinates and the object draws automatically. To do this, I have made a protocol that requires each object class to implement the method getBP(), which returns the UIBezierPath to draw for it. Then the view class just has to say
Object.getBP().fill()
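(The setup described presumably looks something like this sketch; Drawable and gameObjects are hypothetical names, not from the question:)

import UIKit

// Each object type supplies its own path to draw.
protocol Drawable {
    func getBP() -> UIBezierPath
}

// In the view's draw method, every object renders its own path:
// for object in gameObjects { object.getBP().fill() }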
However, this has been leading to some strange problems. The object does not draw at the correct coordinates: the y coordinate is correct, but the x coordinate always puts it at the left of the screen. I think it may be because the Bezier path is not being created in the view class. Here is my code in Surface.swift (this is meant to draw a surface in the game):
func getBP() -> UIBezierPath {
    var rect: CGRect
    var length: Double = getSurfaceVector().getMagnitude()   // length of the surface
    var cx = points.1.x + (points.0.x - points.1.x)          // center coords of the surface
    var cy = points.1.y + (points.0.y - points.1.y)
    var bp = UIBezierPath(roundedRect: CGRectMake(CGFloat(cx - length / 2), CGFloat(cy - RECT_HEIGHT / 2), CGFloat(length), CGFloat(RECT_HEIGHT)), cornerRadius: CGFloat(5))
    let transform: CGAffineTransform = CGAffineTransformMakeRotation(CGFloat(Double(angle) * (Double(M_PI) / Double(180))))
    bp.applyTransform(transform)
    return bp
}
points is just a tuple with the start and end points of the surface. RECT_HEIGHT is the height of the rectangle that is drawn to represent the surface. angle is the angle from horizontal of the surface.
Creating the surface in View.swift, I do this:
Surface(fixed: true, points: (Vector(x: 50, y: 100), Vector(x: Double(UIScreen.mainScreen().bounds.width), y: 100)))
I add that surface to the array of objects in the game. I draw it in the View.swift file by saying
surface.stroke()
The surface draws on the screen with a y value of 100, but it is centered at x = 0 so that it is half on and half off of the screen. Also, it doesn't draw at the angle - it is always horizontal. Is there some better way of doing this? What is happening?
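One likely culprit, offered as a guess rather than taken from the original thread: CGAffineTransformMakeRotation rotates the path about the origin (0, 0), not the rectangle's center, which would both pull the shape toward the left edge and hide the rotation. A sketch of rotating about the rect's center instead, reusing cx, cy, angle, and bp from the question's code:

// Compose translate -> rotate -> translate-back so the rotation
// pivots on the rectangle's center (cx, cy) instead of the origin.
var transform = CGAffineTransformMakeTranslation(CGFloat(cx), CGFloat(cy))
transform = CGAffineTransformRotate(transform, CGFloat(Double(angle) * M_PI / 180))
transform = CGAffineTransformTranslate(transform, CGFloat(-cx), CGFloat(-cy))
bp.applyTransform(transform)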

Calculating point coordinates from user tap with constraints

I am trying to calculate the coordinates along a circle corresponding to the tap location. The coordinates should be on the border of the circle, at the point nearest to the tap. To facilitate this, I only detect taps that are at least 80% of the radius away from the circle's center.
Input:
P (GPPoint) - center of the circle
P1 (GPPoint) - current position of the displayed image
r (float) - radius of the circle
P3 (CGPoint) - user tap coordinate
Desired output:
P2 (CGPoint) - new coordinates for the image, corresponding to P3 but on the circle. Sorry for the bad explanation; I'll try to explain it in other words: once the user taps the screen, I would like to move the image to P2. P2 should be derived by moving P3 to the border of the circle. It should be possible to do so using the radius information.
The idea is to create from the P3 coordinates a new coordinate called P2, as described above - the key is that P2's distance from the centre should correspond exactly to the radius, and its ANGLE should be the same as the tap point's.
Would anyone be able to suggest a formula to calculate the corresponding coordinate given a tap? I simply need to calculate P2 using the input I have.
Code so far:
- (void)tapInImageView:(UITapGestureRecognizer *)tap
{
    CGPoint tapPoint = [tap locationInView:tap.view];
    if ([self isInOuternCircle:tapPoint]) {
        // then create from tapPoint coordinates a new coordinate P2 as
        // described above - but I have no idea how.. the key is that P2's
        // distance from the centre should correspond exactly to the radius
        // and the ANGLE should be the same as tapPoint's.
    }
}

- (BOOL)isInOuternCircle:(CGPoint)point
{
    double distanceToCenter = sqrt((point.x - _timerView.center.x) * (point.x - _timerView.center.x) +
                                   (point.y - _timerView.center.y) * (point.y - _timerView.center.y));
    if (distanceToCenter < _innerCircleRadius) {
        return false;
    }
    return true;
}
I've done this once before, but the math usually depends on how you've set up your coordinate system, so I'll just outline what I did. You'll need a bit of geometry, and a few formulae to determine the new coordinate along the circle.
Calculate the formula of a line passing through the center (P) and your tap point (P3) using this: http://en.wikipedia.org/wiki/Linear_equation#Two-point_form
Determine the equation for your circle: http://en.wikipedia.org/wiki/Circle#Equations
Using the above two equations, you'll have a system of a linear and a quadratic equation: http://www.mathsisfun.com/algebra/systems-linear-quadratic-equations.html
Once you have the equation above, you need to solve it. The result will yield two possible points (the line intersects the circle in two places), and the point you are looking for is the one closer to the tap point. In this case, just compare the distances to P3 between the two solutions; the shorter distance marks the required solution, P2.
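Since the nearer intersection always lies on the ray from P through P3, you can also skip the algebra and just rescale that direction vector to the radius. A minimal sketch (in Swift; pointOnCircle is a hypothetical helper):

import CoreGraphics

// Move the tap point onto the circle: same angle from the center P,
// distance exactly equal to the radius.
func pointOnCircle(center: CGPoint, radius: CGFloat, tap: CGPoint) -> CGPoint {
    let dx = tap.x - center.x
    let dy = tap.y - center.y
    let distance = (dx * dx + dy * dy).squareRoot()
    if distance == 0 {
        // The tap hit the exact center; the direction is undefined,
        // so return an arbitrary point on the circle.
        return CGPoint(x: center.x + radius, y: center.y)
    }
    return CGPoint(x: center.x + radius * dx / distance,
                   y: center.y + radius * dy / distance)
}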

XNA isometric tiles rendering issue

I'm currently working on an XNA game prototype. I'm trying to achieve an isometric view of the game world (or is it orthographic? I'm not sure which is the right term for this projection - see pictures).
The world should be a tile-based world made of cubic tiles (e.g. similar to Minecraft's world), and I'm trying to render it in 2D using sprites.
So I have a sprite sheet with the top face of the cube, the front face, and the (visible) side face. I draw each tile using three separate calls to drawSprite - one for the top, one for the side, one for the front - using a source rectangle to pick the face I want to draw and a destination rectangle to set the position on the screen, according to a formula that converts 3D world coordinates to isometric (orthographic?) screen coordinates.
(sample sprite image omitted)
This works well as long as I only draw the faces, but if I try to draw the fine edges of each block (as a tile grid), I get a seemingly random rendering pattern in which some lines are overwritten by the face itself and some are not.
Please note that for my world representation, X is left to right, Y is inside screen to outside screen, and Z is up to down.
In this example I'm working only with the top face edges. Here is what I get (picture omitted):
I don't understand why some of the lines are shown and some are not.
The rendering code I use is (note in this example I'm only drawing the topmost layers in each dimension):
/// <summary>
/// Draws the world
/// </summary>
/// <param name="spriteBatch"></param>
public void draw(SpriteBatch spriteBatch)
{
    Texture2D tex = null;
    // DRAW TILES
    for (int z = numBlocks - 1; z >= 0; z--)
    {
        for (int y = 0; y < numBlocks; y++)
        {
            for (int x = numBlocks - 1; x >= 0; x--)
            {
                myTextures.TryGetValue(myBlockManager.getBlockAt(x, y, z), out tex);
                if (tex != null)
                {
                    // TOP FACE
                    if (z == 0)
                    {
                        drawTop(spriteBatch, x, y, z, tex);
                        drawTop(spriteBatch, x, y, z, outlineTexture);
                    }
                    // FRONT FACE
                    if (y == numBlocks - 1)
                        drawFront(spriteBatch, x, y, z, tex);
                    // SIDE FACE
                    if (x == 0)
                        drawSide(spriteBatch, x, y, z, tex);
                }
            }
        }
    }
}

private void drawTop(SpriteBatch spriteBatch, int x, int y, int z, Texture2D tex)
{
    int pX = OffsetX + (int)(x * TEXTURE_TOP_X_OFFRIGHT + y * TEXTURE_SIDE_X);
    int pY = OffsetY + (int)(y * TEXTURE_TOP_Y + z * TEXTURE_FRONT_Y);
    topDestRect.X = pX;
    topDestRect.Y = pY;
    spriteBatch.Draw(tex, topDestRect, TEXTURE_TOP_RECT, Color.White);
}
I tried a different approach, creating a second 3-tier nested for loop after the first one, keeping the top face drawing in the first loop and the edge highlighting in the second (I know this is inefficient, and I should probably also avoid a method call per tile, but I'm just trying to get it working for now).
The results are somewhat better but still not as expected: top rows are missing (picture omitted).
Any idea why I'm having this problem? With the first approach it might be a sort of z-fighting, but I'm drawing the sprites in a precise order, so shouldn't they overwrite what's already there?
Thanks everyone
Whoa, sorry guys, I'm an idiot :) I started the batch with SpriteBatch.Begin(SpriteSortMode.BackToFront), but I didn't set any z-value in the Draw calls.
I should have used SpriteSortMode.Deferred! It's now working fine. Thanks everyone!
Try tweaking the sizes of your source and destination rectangles by 1 or 2 pixels. I have a sneaking suspicion this has something to do with the way these rectangles are handled as 'outlines' of the area to be rendered, and a sort of off-by-one problem. This is not expert advice, just a fellow coder's intuition.
Looks like a sub-pixel precision or scaling issue. Also try to ensure your texture/tile width/height is a power of 2 (32, 64, 128, etc.), as that could make the effect less pronounced. It's really hard to tell just from those pictures.
I don't know how/if you scale everything, but you should try to avoid rounding wherever possible (especially inside your drawTop() method). Every time you round a position/coordinate, chances are good you increase the error/random offsets. Try to use double (or better: float) coordinates instead of integers.
