Get real coordinates of Point (or any other constructed object) - GeoGebra

In GeoGebra, you can easily construct scenes with the GUI and the tools available in the Graphics view. I have two functions and have created some objects around them using those tools (their intersection point, a circle tangent to both, etc.). The whole construction depends on 5 parameters I defined as sliders for testing.
Now I want to know the coordinates of the intersection point. It is defined as Intersect[l, h], which doesn't help me. I can read its numeric coordinates, (0.8, 3.98), but I want to know how to calculate them in terms of the parameters. (I'd expect something like (3a, 7+b-2a).) I know GeoGebra can do this, because it must have done it internally to be able to draw the whole image. But I don't know how to access this information.

If you want to get the current position of a point P you can use the x() and y() commands, e.g. x(P) and y(P). These update whenever the position of P changes, so you don't have to recalculate where the point should be by hand. Note that they return numeric values; to get the coordinates as symbolic expressions in your slider parameters you would generally need to redo the construction in the CAS view, which can compute symbolically.

Related

ARKit + Core location - points are not fixed on the same places

I'm working on an iOS AR application using ARKit + Core Location. The points which are displayed on the map using coordinates drift from place to place as I walk, but I need them to stay in the same place.
Here you can see the example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me with this issue? How can I keep points fixed at given coordinates? Any ideas?
Thanks.
It looks like you attach objects to planes. However, as you move, ARKit extends the existing planes. As a result, if you put points at, for example, the center of a plane, that center keeps updating, and you would need to recalculate the point's coordinates and reposition your objects accordingly.
The alternative is not to attach objects to planes (or position them relative to planes). If you need to "put" an object on a plane, the best way is to wait until the plane's orientation has stabilized (it will no longer change significantly as you move), then pick the point on the plane where you want the object, convert that point's coordinates to world coordinates (so that if the plane later changes size, your coordinates are unaffected), and finally add the object to the root node (or another node that is not tied to the plane).
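The conversion step at the heart of that advice can be illustrated with a small numpy sketch (the 4x4 homogeneous matrix stands in for ARKit's anchor.transform, and local_to_world is a hypothetical helper, not an ARKit API):

```python
import numpy as np

def local_to_world(anchor_transform, local_point):
    """Convert a point in an anchor's local space to world space.

    anchor_transform: 4x4 homogeneous transform (the equivalent of
    ARKit's anchor.transform).
    local_point: (x, y, z) in the anchor's coordinate system.
    """
    p = np.append(np.asarray(local_point, dtype=float), 1.0)  # homogeneous
    return (anchor_transform @ p)[:3]

# A plane anchor 2 m in front of the origin, shifted 1 m on x:
T = np.eye(4)
T[:3, 3] = [1.0, 0.0, -2.0]
world = local_to_world(T, (0.5, 0.0, 0.0))
```

Once the point is expressed in world coordinates, it stays fixed no matter how the plane later grows or re-centers.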

Is it possible to get UIBezierPath current control points after path transformation?

I am working on creating a way for a user to move polygons on screen and change their shape by dragging their corner vertices. I then need to be able to re-draw those modified polygons on other devices - so I need to be able to get final positions of all vertices for all polygons on screen.
This is what I am currently doing:
I draw the polygons using a UIBezierPath for each shape in the drawRect override. I then allow the user to drag them around the screen by applying CGAffineTransformMakeTranslation with the x and y coordinate deltas in the touchesMoved override. I have the initial control points from which the polygons are drawn (as described here). But once a path instance is moved on screen, those values don't change, so I am only able to get the initial values.
Is there something built-in in the Core Graphics framework that will let me grab the set of current control points of a UIBezierPath instance? I am trying to avoid keeping track of those points manually. I will consider other ways to draw if they offer:
a built-in way to detect whether a point lies within a polygon (such as UIBezierPath's containsPoint: method),
a way to easily introduce constraints so the user can't move a polygon out of the bounds of the superview (I need the whole polygon to be visible),
a way to grab all the points easily when the user is done,
and everything running at 60fps on an iPhone 5.
Thanks for your time!
As you're only applying the transform to the view/layer, to get the transformed control points you can simply apply that same transform to a copy of the path and then fetch the control points. Something like:
UIBezierPath* copiedPath = [myPath copy];
[copiedPath applyTransform:[view transform]];
[self yourExistingMethodToFetchPointsFromPath:copiedPath];
The way you're currently pulling points out of a path is unfortunately the only API available for re-fetching points from a UIBezierPath. However, you might be interested in a library I wrote to make working with Bézier paths much simpler: PerformanceBezier. This library makes it significantly easier to get the points of a path:
for (NSInteger i = 0; i < [path elementCount]; i++) {
    CGPathElement element = [path elementAtIndex:i];
    // now you know element.type and element.points
}
In addition to functionality that makes paths easier to work with, it also adds a caching layer on top of the existing API that makes the performance hit of working with paths much smaller. Depending on how much CPU time you're spending in UIBezierPath methods, this library can make a significant improvement; I saw between 2x and 10x, depending on the operations I was using.
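The underlying math — keeping the original control points and applying the view's current affine transform whenever on-screen positions are needed — can be sketched independently of UIKit (numpy stand-in; apply_affine is a hypothetical helper mirroring CGAffineTransform's (a, b, c, d, tx, ty) layout):

```python
import numpy as np

def apply_affine(points, a, b, c, d, tx, ty):
    """Apply a CGAffineTransform-style matrix to 2D points.

    CGAffineTransform maps (x, y) -> (a*x + c*y + tx, b*x + d*y + ty).
    """
    pts = np.asarray(points, dtype=float)
    m = np.array([[a, c], [b, d]])
    return pts @ m.T + np.array([tx, ty])

# Initial polygon control points, then a drag by (10, 5)
# (the equivalent of CGAffineTransformMakeTranslation(10, 5)):
polygon = [(0, 0), (4, 0), (4, 3)]
moved = apply_affine(polygon, 1, 0, 0, 1, 10, 5)
```

Applying the same transform that was set on the view to the stored points yields exactly the vertices the user sees, which is what the copy-and-applyTransform: snippet above does natively.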

OpenCV: Searching for pixels along single-pixel branches

I'm currently trying to find a neat way of storing separate "branches" in a binary image. This little animation explains it:
As I walk along a branch I need to collect the indices of the pixels that make up that single-pixel-wide branch. When I hit a junction point, the walk should split up and store the new branches.
One way of going about it might be to take a 3x3 subregion, find out whether there are white pixels inside it, move it accordingly, and create a junction point if there are more than two. Always storing the previous subregion would make sure we don't move back into regions we have already scanned.
It's a bit tricky to figure out how I would go about it though.
I basically need to reorder the pixels into a line/curve hierarchy. Another part of the application then redraws the figures, which internally works by creating lines between points, hence the need to have them ordered.
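The scan-and-split idea described in the question can be sketched in Python with numpy only (a minimal sketch; a real skeleton tracer would need extra care around diagonal neighbours and multiple connected components):

```python
import numpy as np

def trace_branches(img):
    """Trace single-pixel-wide branches in a binary image.

    Walks from an endpoint, collecting pixel indices; whenever more
    than one unvisited neighbour appears (a junction), the current
    branch is closed and each continuation is traced as a new branch.
    """
    on = {(r, c) for r, c in zip(*np.nonzero(img))}

    def neighbours(p):
        r, c = p
        return [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and (r + dr, c + dc) in on]

    # Endpoints have exactly one neighbour; fall back to any pixel (loops).
    starts = [p for p in on if len(neighbours(p)) == 1] or list(on)
    visited, branches = set(), []
    stack = [(starts[0], None)] if starts else []
    while stack:
        p, branch = stack.pop()
        if p in visited:
            continue
        visited.add(p)
        if branch is None:          # start of a new branch
            branch = []
            branches.append(branch)
        branch.append(p)
        nxt = [q for q in neighbours(p) if q not in visited]
        if len(nxt) == 1:           # continue along the branch
            stack.append((nxt[0], branch))
        else:                       # junction or dead end
            for q in nxt:
                stack.append((q, None))
    return branches

# A horizontal 5-pixel line yields a single ordered branch:
line = np.zeros((5, 5), dtype=np.uint8)
line[2, 0:5] = 1
branches = trace_branches(line)
```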
I don't know if you can apply it in your case, but you should take a look at cv::findContours:
you will get an ordered vector of points for each contour.
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html

Viewpoint for X3D Centering

Can anyone help me calculate the center of rotation and position of an X3D object?
I've noticed that aopt tool by InstantReality adds something like:
<Viewpoint DEF='AOPT_CAM' centerOfRotation='x y z' position='x y z'/>
The result is nice: the object is properly zoomed and centered, and the center of rotation is perfectly "inside" the object (the x, y, z center).
I must avoid using aopt. How can I obtain the same result (e.g. via JavaScript), perhaps by looping through the XML Coordinate points and doing some calculations?
I'm using X3DOM to render the object.
Many thanks.
"AOPT_CAM" is just the name (DEF) of the Viewpoint. The centerOfRotation and position values are computed automatically by the browser (InstantReality in your case).
To compute these values yourself you need to know your object's size (its bounding box) and do some math to work out where the Viewpoint should be placed (the 'position' attribute) in your local coordinate system. You also need to know the object's displacement in that coordinate system; if not specified, it is (0, 0, 0).
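As a sketch of that math (numpy; assumes a perspective camera looking down -z with X3D's default vertical fieldOfView of π/4; viewpoint_for_points is a hypothetical helper, and a real implementation would also account for aspect ratio and near/far planes):

```python
import math
import numpy as np

def viewpoint_for_points(points, fov=math.pi / 4):
    """Compute a centerOfRotation and a +z camera position that fit
    the bounding box of `points` into a camera with vertical `fov`."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    center = (lo + hi) / 2.0                 # centerOfRotation
    radius = np.linalg.norm(hi - lo) / 2.0   # bounding-sphere radius
    distance = radius / math.tan(fov / 2.0)  # back off until it fits
    position = center + np.array([0.0, 0.0, distance])
    return center, position

# Example: a unit cube given by two opposite corners
center, position = viewpoint_for_points([(0, 0, 0), (1, 1, 1)])
```

In X3DOM you would gather the points from the Coordinate nodes' point attributes and write the two results into the Viewpoint's centerOfRotation and position fields.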

Retrieving path segments from ID2D1PathGeometry

I'm currently trying to write a little game engine in C# using SlimDX just for fun.
I want my world to be destructible, so I have to be able to modify my map. My map is currently vector based, represented by an ID2D1PathGeometry (PathGeometry in SlimDX) object. I modify this object using the CombineWithGeometry method of ID2D1Geometry (Geometry in SlimDX).
For reasonable collision detection, I need knowledge of the exact shape of my ID2D1PathGeometry object, for instance for calculating the angles of balls bouncing off the walls.
So, is it possible to access all, or specific (by location), segments/lines/points of my ID2D1PathGeometry object? Or are there other, better ways to accomplish my goals, e.g. additionally storing all lines and shapes in another data structure?
Please note that bitmap-based maps are not the way to go here, since I don't want memory to be a constraint on the map size.
with best regards, Emi
There is no way to retrieve just the geometry segments that are close to a certain point, but there is a way to retrieve all geometry segments:
Implement a class that inherits from ID2D1SimplifiedGeometrySink.
Create an instance of that class and pass it to ID2D1Geometry::Simplify.
More info and an example are here: How to Retrieve Geometry Data by Extending ID2D1SimplifiedGeometrySink
If you are interested in retrieving only the portion close to a certain point, you could:
Create a rectangular geometry surrounding the point of interest.
Intersect it with your geometry via ID2D1Geometry::CombineWithGeometry using D2D1_COMBINE_MODE_INTERSECT.
Feed the intersected geometry through the sink described above.
More info: ID2D1Geometry::CombineWithGeometry method
