Retrieving path segments from ID2D1PathGeometry - DirectX

I'm currently trying to write a little game engine in C# using SlimDX just for fun.
I want my world to be destructible, so I have to be able to modify my map. My map is currently vector based, represented by an ID2D1PathGeometry (PathGeometry in SlimDX) object. This object is modified using the CombineWithGeometry method of ID2D1Geometry (Geometry in SlimDX).
For reasonable collision detection, I need to know the exact shape of my ID2D1PathGeometry object, for instance to calculate the angles of balls bouncing off the walls.
So, is it possible to access all or specific (per location) segments/lines/points of my ID2D1PathGeometry object? Or are there other, better ways to accomplish my goals, e.g. storing all lines and shapes additionally in another data structure?
Please note that bitmap based maps are not the way to go here, since I don't want to have memory as a constraint on the map size.
with best regards, Emi

There is no way to retrieve just the geometry segments that are close to a certain point, but there is a way to retrieve all geometry segments.
Implement a class that inherits from ID2D1SimplifiedGeometrySink.
Create an instance of that class and pass it to ID2D1Geometry::Simplify.
More info and an example are here: How to Retrieve Geometry Data by Extending ID2D1SimplifiedGeometrySink
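For illustration, here is a rough C# sketch of such a sink, assuming SlimDX exposes SimplifiedGeometrySink as an implementable interface that mirrors the native ID2D1SimplifiedGeometrySink (the member names, enum names and point type here are assumptions and may differ between SlimDX versions):
using System.Collections.Generic;
using System.Drawing;      // PointF
using SlimDX.Direct2D;

// Collects every line segment that Direct2D reports while simplifying a geometry.
class SegmentCollector : SimplifiedGeometrySink
{
    public readonly List<PointF[]> Figures = new List<PointF[]>();
    private List<PointF> current;

    public void BeginFigure(PointF startPoint, FigureBegin figureBegin)
    {
        current = new List<PointF> { startPoint };
    }

    public void AddLines(PointF[] points)
    {
        current.AddRange(points);
    }

    public void AddBeziers(BezierSegment[] beziers)
    {
        // Not expected: simplifying with GeometrySimplificationType.Lines
        // flattens curves to line segments before they reach the sink.
    }

    public void EndFigure(FigureEnd figureEnd)
    {
        Figures.Add(current.ToArray());
    }

    public void SetFillMode(FillMode fillMode) { }
    public void SetSegmentFlags(PathSegment flags) { }
    public void Close() { }
}
Then, somewhere in your collision code, you would call something like mapGeometry.Simplify(GeometrySimplificationType.Lines, collector) and read collector.Figures afterwards; depending on your SlimDX build, Simplify may also expect a transform and a flattening tolerance.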
If you are interested in retrieving only the portion close to a certain point, you could instead:
Create a rectangular geometry surrounding the point of interest.
Intersect it with your geometry via ID2D1Geometry::CombineWithGeometry, with the combine mode set to D2D1_COMBINE_MODE_INTERSECT.
Run the intersected geometry through the same simplified-geometry sink as described above.
More info: ID2D1Geometry::CombineWithGeometry method
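A hedged sketch of that second approach, reusing the SegmentCollector idea from above. RectangleGeometry, CombineMode.Intersect and the CombineWithGeometry argument order shown here are assumptions about the SlimDX wrapper, and factory, point and radius are placeholders for your Direct2D factory, the point of interest and the window size:
// Inside whatever method runs your collision query:
// clip the map to a small window around the point of interest,
// then read back only the segments of the clipped result.
var window = new RectangleGeometry(factory,
    new RectangleF(point.X - radius, point.Y - radius, 2 * radius, 2 * radius));

var clipped = new PathGeometry(factory);
using (var sink = clipped.Open())
{
    mapGeometry.CombineWithGeometry(window, CombineMode.Intersect, sink);
    sink.Close();
}

var nearby = new SegmentCollector();
clipped.Simplify(GeometrySimplificationType.Lines, nearby);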

Related

Getting the current visible entities in RealityKit

Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit there is a method for exactly that: nodesInsideFrustum(pointOfView).
Our internal solution is to create a big fake bounding box in front of the camera. We then check for intersections between that "frustum" bounding box and each entity's bounding box. That is, of course, a bit cumbersome and inaccurate, so I wonder if someone has a better solution they are willing to share.
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know if it's visible on screen
But that alone isn't enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to get the entity's position in camera space
the sign of localPosition.z then tells you whether it is in front of or behind the camera
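Put together, a minimal Swift sketch of that combination might look like this. It only tests the entity's origin, not its whole bounding box, and assumes cameraAnchor is an AnchorEntity(.camera) you have already added to the scene:
import RealityKit
import UIKit

extension ARView {
    func isOnScreen(_ entity: Entity, cameraAnchor: AnchorEntity) -> Bool {
        // 1. Behind-the-camera test: the camera looks down -Z,
        //    so anything visible has a negative z in camera space.
        let localPosition = entity.position(relativeTo: cameraAnchor)
        guard localPosition.z < 0 else { return false }

        // 2. Screen-space test: project the world position and check the view bounds.
        let worldPosition = entity.position(relativeTo: nil)
        guard let screenPoint = project(worldPosition) else { return false }
        return bounds.contains(screenPoint)
    }
}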

ARKit + Core location - points are not fixed on the same places

I'm working on an iOS AR application using ARKit + Core Location. The points, which are placed on the map using geographic coordinates, drift from place to place as I walk, but I need them to stay in the same place.
Here you can see the example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me handle this issue? How can I keep points that are placed by coordinates fixed in place? Any ideas?
Thanks.
It looks like you attach objects to planes. However, as you move, ARKit extends the existing planes. As a result, if you put a point at, say, the center of a plane, that center keeps being updated. You need to recalculate the point's coordinates and re-place the objects accordingly.
The alternative is not to attach objects to planes (or position them relative to planes) at all. If you need to "put" an object on a plane, the best way is to wait until the plane has stabilized (its orientation no longer changes significantly as you move), pick the point on the plane where you want to put your object, convert that point to world coordinates (so the coordinate no longer changes when the plane later grows), and finally add the object to the root (or to another node that is not related to the plane).
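As a rough Swift sketch of that idea, assuming an ARSCNView-based setup (the question doesn't say which renderer is used): take the world transform from a raycast against the plane once, and parent the node to the scene's root rather than to the plane's node.
import ARKit
import SceneKit

// point is the screen location chosen by the user, sceneView is the ARSCNView.
func placePin(at point: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .any),
          let result = sceneView.session.raycast(query).first else { return }

    let pin = SCNNode(geometry: SCNSphere(radius: 0.01))
    // The world transform is captured once, so the pin no longer moves
    // when ARKit later extends or re-centers the plane.
    pin.simdTransform = result.worldTransform
    sceneView.scene.rootNode.addChildNode(pin)
}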

Get real coordinates of Point (or any other constructed object)

In GeoGebra, you can easily construct scenes with the GUI and the tools available in the Graphics view. I have two functions and created some objects around them using those tools (their intersection point, a circle tangent to both, etc.). The whole construction depends on 5 parameters I defined as sliders for testing.
Now I want to know the coordinates of the intersection point. It is defined as Intersect[l, h], which doesn't help me. I can also read off its numeric coordinates (0.8, 3.98), but I want to know how to calculate them depending on the parameters. (I'd expect something like (3a, 7+b-2a).) I know GeoGebra can do this, because it must have done it internally to be able to draw the picture, but I don't know how to access this information.
If you want to get the current position of a point P, you can use the x and y commands, e.g. x(P) and y(P), or define Q = (x(P), y(P)). These update whenever the position of P changes, so you don't have to recalculate where the point should be by hand.

How can I implement the custom drawing search tool used in the Realtor iPad app?

The Realtor iPad app has done a very good job of implementing a custom drawing tool on top of MapKit, which they use to query an area for homes. I am familiar with MapKit and its associated classes, but I don't know how I could do custom drawing with my finger and have it translate into a geospatial query. How can this be done?
I'm not sure how far along you've made it with this, but your basic algorithm should look like this:
Draw a polygon on top of your map, then translate the coordinates of that polygon into map coordinates. To do that you would probably need to listen for gestures on a view other than the MKMapView instance. With my limited knowledge of MapKit's touch event handling, you might have to overlay a separate transparent view on the map when you want to draw, so touch events won't go through to the map view. You normally use your finger to pinch, zoom and pan, and you won't want that functionality while you're trying to draw. In that view, you draw the shape tracing the user's finger, then translate the points drawn into map points.
The docs indicate that you can translate screen points to map points using the convertPoint:toCoordinateFromView: method on MKMapView.
Check this link for information on that: Trouble converting MapKit user coordinates to screen coordinates
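A minimal Swift sketch of that overlay-plus-conversion idea (the class name and callback are made up for illustration; convert(_:toCoordinateFrom:) is the Swift spelling of convertPoint:toCoordinateFromView:):
import MapKit
import UIKit

// A transparent view laid over the MKMapView. While it is visible it swallows
// touches, records the finger's path, and hands back geographic coordinates.
final class DrawOverlayView: UIView {
    weak var mapView: MKMapView?
    var onPolygonDrawn: (([CLLocationCoordinate2D]) -> Void)?
    private var screenPoints: [CGPoint] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        screenPoints.removeAll()
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let point = touches.first?.location(in: self) {
            screenPoints.append(point)
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let mapView = mapView else { return }
        // Screen points -> map coordinates for the geospatial query.
        let polygon = screenPoints.map { mapView.convert($0, toCoordinateFrom: self) }
        onPolygonDrawn?(polygon)
    }
}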
This post provides a link that might help you with drawing the polygon:
To draw polygon on google map with MapKit framework
After you've drawn your polygon you'll want to query your data spatially. You could do that in several ways: locally on the device or through a web service are two options. If your data is local to the device, you'll have to do the cartographic math on the device. You'll also need to ensure that your point data (the X,Y pairs) is in the same projection and coordinate space as your polygon. Polygon intersection math is relatively straightforward to do when your projections and coordinate systems line up.
Here's a link that can help you with the math.
https://math.stackexchange.com/questions/237/how-do-you-determine-if-a-point-sits-inside-a-polygon
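For the local, on-device case, here is a Swift sketch of the standard ray-casting (even-odd) point-in-polygon test described at links like that one. It treats longitude/latitude as plain x/y, which is a reasonable approximation for neighbourhood-sized areas:
import CoreLocation

func polygon(_ vertices: [CLLocationCoordinate2D],
             contains point: CLLocationCoordinate2D) -> Bool {
    var inside = false
    var j = vertices.count - 1
    for i in 0..<vertices.count {
        let a = vertices[i], b = vertices[j]
        // Count how often a horizontal ray from the point crosses a polygon edge.
        if (a.latitude > point.latitude) != (b.latitude > point.latitude) {
            let crossing = (b.longitude - a.longitude)
                * (point.latitude - a.latitude) / (b.latitude - a.latitude)
                + a.longitude
            if point.longitude < crossing { inside.toggle() }
        }
        j = i
    }
    return inside
}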
Alternatively you could set up some web service that takes your polygon data and performs the same cartographic math on a server and returns the results to the device. Either way the same math needs to be performed. You'll take that polygon data and determine which records in your data intersect with that polygon.
This is pretty high-level, I know, but it should be all you need to do.
Another consideration is whether your data is spatially enabled, e.g. with SpatiaLite compiled for SQLite on your device or SQL Server Spatial on your server. In that case you should be able to query the data using that polygon directly, though you would have to format the query properly.
Lastly, I would encourage you to look into the ESRI SDK for iOS. ESRI provides drawing and sketching tools out of the box. It's not too difficult to use, but one downside is that you would have to learn a new API:
http://resources.arcgis.com/en/communities/runtime-ios-sdk/

XNA project - who is in charge of drawing?

I am just playing around with XNA, and I have several different models I need to draw in each frame.
At the moment, the Game object holds references to all my models and draws them one after the other, each with its own way of drawing: one has two separate textures, another might be mirrored to the other side, etc.
I was wondering if it is acceptable to just add a
public void Draw(SpriteBatch spriteBatch)
method to all my models (declared on the BaseModel, of course) and have each class be in charge of drawing itself, or whether I should stick to letting the classes set their data in the Update method according to input events (KeyboardState) and keep all graphics logic in the Game class.
Is there a preferred way to do this?
Generally, I have a base class that contains a BaseModel, texture data, rotation and scale data, etc. For each type of actor in the game, I create a derived class. The base class provides a Draw method that, by default, draws the model with the texture, rotation, and scale data given in the class. Derived classes can override it to draw the actor however they like.
Then, I have a DrawableGameComponent that acts as my scene graph. It contains a list of all active actor objects. In the component's Draw and Update methods, I iterate through the list of actors and call their Draw and Update methods.
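A condensed C# sketch of that layout (class and member names here are illustrative, not from a particular engine):
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Base actor: owns its model and transform data and knows how to draw itself.
// Derived actor types override Draw when they need something special
// (extra textures, mirroring, and so on).
public class Actor
{
    public Model Model;
    public Vector3 Position;
    public float Rotation;
    public float Scale = 1f;

    public virtual void Update(GameTime gameTime) { }

    public virtual void Draw(Matrix view, Matrix projection)
    {
        Matrix world = Matrix.CreateScale(Scale)
                     * Matrix.CreateRotationY(Rotation)
                     * Matrix.CreateTranslation(Position);
        Model.Draw(world, view, projection);   // XNA 4.0 convenience overload
    }
}

// The "scene graph": a component that owns the active actors and forwards
// Update/Draw to each of them every frame.
public class Scene : DrawableGameComponent
{
    public readonly List<Actor> Actors = new List<Actor>();
    public Matrix View;
    public Matrix Projection;

    public Scene(Game game) : base(game) { }

    public override void Update(GameTime gameTime)
    {
        foreach (var actor in Actors) actor.Update(gameTime);
    }

    public override void Draw(GameTime gameTime)
    {
        foreach (var actor in Actors) actor.Draw(View, Projection);
    }
}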
That's one way of approaching it ... for the sake of completeness in this post, I'll highlight the other approach. Basically, the opposing view states that no one entity should need (or have) custom knowledge of how to render itself. An entity is merely a collection of state ... and the renderer can simply look at that state, and draw it in the correct way.
An example: say you have a number of ships. Some go fast, some shoot rockets, some have a satellite orbiting around them that also shoots. Your "Entity" class can have the following properties
Model VisualRepresentation
Matrix Position
Entity[] AttachedEntities
Your renderer can then iterate over your generic "List<Entity>", and
Draw the visual representation (ie. Model) of the entity using the position
Loop over the AttachedEntities and draw them (recursively).
It's obviously a simplified example, but this way the drawing logic is completely contained in the rendering code and only needs to concern itself with as little information as possible, while the ship class can focus on the game logic itself (i.e. how fast do I fly, what weapon am I using, how much energy do I have in my shields, etc.).
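Under the same hedging (illustrative names, not a specific API), the data-driven variant might look like this:
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// The entity is pure state: no drawing code at all.
public class Entity
{
    public Model VisualRepresentation;
    public Matrix Position;                      // world transform, kept simple here
    public List<Entity> AttachedEntities = new List<Entity>();
}

// The renderer owns all drawing knowledge and walks the entity list recursively.
public class Renderer
{
    public Matrix View;
    public Matrix Projection;

    public void Draw(IEnumerable<Entity> entities)
    {
        foreach (var entity in entities)
        {
            entity.VisualRepresentation.Draw(entity.Position, View, Projection);
            Draw(entity.AttachedEntities);       // e.g. the orbiting satellite
        }
    }
}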
As far as which one is preferred, really the answer lies within your project's requirements, and what you feel comfortable with. Don't try to make a game engine before making a game ... just do whatever it takes to make your game, and then maybe you can extract the components that worked after you ship the game :-P
