I see some ways to do it:
1) Draw using OpenGL programmatically.
2) Draw using QuartzCore and CoreAnimation programmatically.
3) Draw the map in AutoCAD and then somehow connect it to iOS.
4) Draw map using SVG.
Requirements are supporting pathfinding and GPS navigation.
For the first two options, I think redrawing all elements on every scale change would be expensive in terms of performance, and I don't see how they would support GPS navigation.
With maps drawn in AutoCAD, it's hard for me to see how to connect them with graphs/paths for pathfinding.
My colleagues will develop the web version of this app using SVG. I found https://github.com/SVGKit/SVGKit , but I still have no idea how it would support pathfinding and navigation.
I would appreciate any help.
Generally there are two types of map applications:
A) Applications that display a map (with or without a user position), without needing to calculate a path the way a navigation system does (see point B).
B) Applications that use the vectors of a map to calculate something, e.g. the best path or the shortest connection; a navigation system, for example.
Applications of type A) are usually less complex than those of type B), because the vectors can be somewhat inaccurate, have no connections, have small gaps, have no logic between the edges, etc.
1) To only display a building map, you would only need a list of edges (an edge is a pair of coordinates (x1,y1) - (x2,y2)), however you obtain it, e.g. from the MapInfo Professional mif/mid format.
You could even display a PDF that contains the map of the building, right in the built-in PDF view (also possible with SVG, but more difficult).
Things get much more complicated if it is not just a relative map, but a map positioned in a reference coordinate system, like latitude/longitude (WGS84).
In that case you would use a tool (e.g. MapInfo Professional) to import the AutoCAD DXF files, apply 3 GPS-measured reference points at the corners of the house, and convert everything to the lat/long WGS84 coordinate system.
With iOS you cannot measure those 3 points yourself, because you cannot average a position: iOS stops delivering location updates when you are standing still at one corner of the house.
You could try to extract the positions from a Google Earth satellite photo if you live in a region where the satellite photos have high resolution (but this might violate the license conditions of the satellite photo provider; topic: derived data).
At this point you have a list of edges in the lat/long coordinate system.
For displaying, I personally would go with either 1) OpenGL or 2) Quartz 2D.
Now the pathfinding part.
You probably need a second "map" that defines the possible paths inside the building.
This structure must be a connected graph (points with connected neighbours).
Computer games do it that way (some even allow you to display that path in developer mode).
The path can be drawn in a different layer of the floor plan, but this path has stricter requirements: no gaps are allowed, and everything must be perfectly connected.
Call that layer "Path" and export it as its own plan.
Now import only this path layer and create a graph of nodes with connected neighbours.
Use Dijkstra's algorithm to search for the shortest path.
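To make that last step concrete, here is a minimal Python sketch of Dijkstra's algorithm on such a graph. The node names, edge weights and graph layout are invented for illustration; in practice the graph would be built from the waypoints exported in the "Path" layer.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path in a graph given as {node: [(neighbour, distance), ...]}."""
    queue = [(0.0, start, [start])]          # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, dist in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
    return float("inf"), []

# Toy graph: nodes are waypoints from the "Path" layer, weights are distances in metres.
graph = {
    "entrance": [("hall", 5.0), ("stairs", 12.0)],
    "hall":     [("entrance", 5.0), ("room_101", 7.0), ("stairs", 4.0)],
    "stairs":   [("entrance", 12.0), ("hall", 4.0), ("room_101", 9.0)],
    "room_101": [("hall", 7.0), ("stairs", 9.0)],
}
print(dijkstra(graph, "entrance", "room_101"))   # (12.0, ['entrance', 'hall', 'room_101'])
```

The graph is an adjacency list of (neighbour, distance) pairs, so building it from the imported path edges is just a matter of adding each edge in both directions.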
I would like to extract significant weather over an area from a weather map.
This is what the map looks like:
The black "clouds like" lines define several areas in which different types of weather phenomena will occur. The blue dashed area labeled "1" is the same type of area but refers to the legend of the map, on the right top side.
I already managed to extract all the contours (in green). That's pretty much all I can do for now because I just discovered OpenCV. (Please be kind if I missed anything.)
I assume the next step is to look in which of the areas the place I am interested in is located, and then try to retrieve all the significant weather symbols located inside. But I definitely do not know how to do that for the moment.
The constants in this project are:
The map will always be centered and zoomed in the exact same way.
The region I am interested in will always be the same (the red circle that I drew with paint).
The variables of this project are:
Between two different maps, the shapes of the significant weather areas will completely change.
Significant weather symbols in the areas will, of course, also change.
Those maps are named TEMSI. If someone knows a different way to use computer vision to do the job I want to do, please let me know.
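I don't know of a standard recipe for TEMSI charts, but one plausible next step with OpenCV is a point-in-polygon test: since the map is always centered and zoomed the same way, the pixel coordinates of the region of interest are fixed, and cv2.pointPolygonTest can tell which extracted contour contains that point; masking that contour then isolates the symbols inside it for later recognition. The file name and the coordinates below are made-up placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("temsi_map.png")                     # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Fixed pixel coordinates of the region of interest (the red circle);
# placeholder values, valid only because the map is always framed the same way.
poi = (820.0, 540.0)

# Keep the contours that contain the point, then take the smallest (innermost) one.
containing = [c for c in contours if cv2.pointPolygonTest(c, poi, False) >= 0]
area_contour = min(containing, key=cv2.contourArea) if containing else None

if area_contour is not None:
    # Mask everything outside that weather area, leaving only the symbols
    # inside it for later steps (template matching, OCR, ...).
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [area_contour], -1, 255, thickness=cv2.FILLED)
    area_only = cv2.bitwise_and(img, img, mask=mask)
    cv2.imwrite("area_of_interest.png", area_only)
```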
I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So, if I overlay the two together, the dose will not be in the correct position based on what the Elekta Gamma Plan software exported it as.
I've played around with dicompyler and 3DSlicer, and they obviously are able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data when using these programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or an equivalent; my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying 3D points and locations for all the pixel data in patient coordinates. This uses a number of attributes in the DICOM files, in the Image Plane Module set of tags. See here for a good overview.
The SimpleITK library fully understands and uses the full 3D Image Plane tags to identify and locate any images in patient coordinates by default, irrespective of things like the specific pixel spacing, slice thickness, etc.
So - in your case - if you use SITK to open your studies, then you should be able to overlay them correctly "out of the box", because SITK will do all the work to parse the Image Plane Module tags and locate the data in patient coordinates - just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note that I use both pydicom and SITK. This isn't something bad about pydicom, but more a question of the right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D work, SITK is the easier toolkit to use.
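As a rough illustration of that "out of the box" behaviour, here is a minimal SimpleITK sketch that reads a CT series and an RTDOSE file and resamples the dose onto the CT grid; the file paths are placeholders, and the dose scaling is only noted in a comment.

```python
import SimpleITK as sitk

# Read the CT series (one file per slice) as a single 3D volume.
reader = sitk.ImageSeriesReader()
ct_files = reader.GetGDCMSeriesFileNames("path/to/ct_series")  # placeholder path
reader.SetFileNames(ct_files)
ct = reader.Execute()

# Read the RTDOSE file; its grid usually differs from the CT grid.
dose = sitk.ReadImage("path/to/rtdose.dcm")  # placeholder path

# Both images carry origin/spacing/direction in patient coordinates,
# so resampling the dose onto the CT grid lines the two up spatially.
dose_on_ct = sitk.Resample(
    dose,              # image to resample
    ct,                # reference grid (size, spacing, origin, direction)
    sitk.Transform(),  # identity transform: both are already in patient coordinates
    sitk.sitkLinear,   # interpolation
    0.0,               # default value outside the dose grid
    dose.GetPixelID(),
)

ct_array = sitk.GetArrayFromImage(ct)            # shape (slices, rows, cols)
dose_array = sitk.GetArrayFromImage(dose_on_ct)  # same shape, aligned with the CT
# Remember to multiply dose_array by the RTDOSE DoseGridScaling attribute
# (readable with pydicom) to get values in Gy.
```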
I'm trying to find the best strategy to align a SCNScene to a physical table. Just like the ARKit app WWWFreeRivers.
Currently I'm just testing this by mapping a simple plane model with the same dimensions as the table. If I draw out the plane that ARKit detects, I can see that the plane is not very accurate with respect to the table's edges. It always extends beyond them (image below).
So I can't really rely on that plane, to just place the model in the center of this. The model is not rotated correctly either (image below).
I had another idea to use the ARReferenceImage technique, to take a picture of the table top texture, and let ARKit find and match this "image" of the table. But even with wood grain texture, it wasn't enough data for ARKit to recognize it. And ARKit just fails if you have these errors. It doesn't even try to do a bad match.
How can I go about doing this?
Ideas I've had so far:
Take image of table and use ARImageReference feature to match it. This didn't work. Maybe if I add some more interesting feature points to the table, like some sort of QR codes in the corners.
Detect the plane, and then tap the four corners on the table to map out a square, and use this.
Do as the WWW app does: just place the object randomly on the plane, and then let the user scale, move and rotate the model to give it correct placement.
Any more ideas? What do you think will be the best approach to this?
Two options I can think of that you could use:
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARImageReference: walk around the area while creating a map that subsequent ARKit sessions will remember. You can experiment a bit with how to fit your models within the four corners of the table (this is slightly tedious without much help from the SceneView editor). However, when you load the saved ARWorldMap and localize against it (just like the ARImageReference), your model should fit within the four corners of the table every time.
If you use something like Unity (and its ARKit plugin), it has much more powerful Editor tools (3D viewer/designer). There are some tools that can help you save the map just like ARWorldMap but then bring in details of the map into the editor so you can line things up right really easily. Placenote's Spatial Capture toolkit can help here. Placenote (iOS11+) creates its own "World Map" but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against (Example). The map is also stored on a managed cloud from the get-go to make sharing across phones much easier.
P.S.: Both these options require you to keep the environment generally static (no large lighting changes, etc.), though this is a similar constraint to using ARImageReference.
My goal is to place an object on an ARCore plane in a room, then save the plane and the object's data to a file. After the app exits and starts again, the saved object can be loaded from the file and displayed at the same position as last time.
To persist virtual objects, we could probably use VPS (visual positioning service, not released yet) to localize the device within a room.
However there's no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want, then go back to one (or more) Augmented Images and save the positions of the corners of your Augmented Images into a text or binary file on your device.
Then, in the next session, let's say you used one Augmented Image and 4 points (the corners of the image): you load these positions and calculate a Transformation Matrix between the two sessions using these two groups of 4 points, which are common to each session. You need this because ARCore's coordinate system changes in every session depending on the device's initial position and rotation.
At the end, you can calculate the positions and rotations of the anchors in the new session using this Transformation Matrix. They will be placed at the same physical location, with an error margin caused by the accuracy of Augmented Image tracking. If you use more points, this error margin will be relatively lower.
I have tested this with 4 points in each group and it is quite accurate, considering my anchors were placed at arbitrary locations not attached to any Trackable.
In order to calculate the Transformation Matrix you can refer to this
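One standard way to compute such a transformation from two groups of corresponding points is the Kabsch algorithm (a least-squares rigid rotation plus translation, no scale). The sketch below is a generic NumPy version, not ARCore-specific code, and the variable names are placeholders.

```python
import numpy as np

def rigid_transform(points_old, points_new):
    """Estimate rotation R and translation t such that
    points_new ≈ R @ p + t for each point p (least squares, Kabsch algorithm)."""
    p = np.asarray(points_old, dtype=float)   # shape (N, 3), e.g. 4 image corners
    q = np.asarray(points_new, dtype=float)   # same corners in the new session

    # Center both point sets on their centroids.
    p_centroid = p.mean(axis=0)
    q_centroid = q.mean(axis=0)
    H = (p - p_centroid).T @ (q - q_centroid)

    # SVD gives the optimal rotation; fix a possible reflection.
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = q_centroid - R @ p_centroid
    return R, t

# Usage: map an anchor saved in the old session into the new session's frame.
# corners_old / corners_new are the 4 Augmented Image corners from each session.
# R, t = rigid_transform(corners_old, corners_new)
# anchor_new = R @ anchor_old + t
```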
I have a data set with over 50,000 geocoded points (lat-long). Each point has a set of data associated with it -- things like quality, status, etc.
I'd like to make a set of density maps showing the distribution of data by those metrics. For example, one map would show the density of all items with a quality of "good".
With a smaller set of points, I'd use Google Maps and custom markers. Here, however, different segments have tens of thousands of points.
Are there any APIs or libraries that could help me do this?
The solution I will be going with (a rough sketch of the binning and PNG steps follows the list):
1) Break the area to be mapped into a grid.
2) Count the number of entries falling inside each square.
3) For each square, generate a PNG with transparency relative to the number of entries.
4) Populate a Google Map with this set of PNGs as markers.
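A rough Python sketch of steps 1-3, assuming the point coordinates are already loaded into arrays; the bounding box, grid size and tile colour below are placeholders.

```python
import numpy as np
from PIL import Image

# Placeholder inputs: longitudes/latitudes of the points in one segment
# (e.g. only the points whose quality == "good"), plus the bounding box to map.
lons = np.random.uniform(-74.3, -73.7, 50_000)
lats = np.random.uniform(40.5, 40.9, 50_000)
grid_size = 50                                    # 50 x 50 squares

# Steps 1-2: break the area into a grid and count entries per square.
counts, lon_edges, lat_edges = np.histogram2d(
    lons, lats, bins=grid_size,
    range=[[-74.3, -73.7], [40.5, 40.9]],
)

# Step 3: one small PNG per square, with alpha proportional to the count.
max_count = counts.max() or 1
for i in range(grid_size):
    for j in range(grid_size):
        alpha = int(255 * counts[i, j] / max_count)
        tile = Image.new("RGBA", (16, 16), (255, 0, 0, alpha))  # red tile
        tile.save(f"tile_{i}_{j}.png")

# Step 4 (placing each PNG at the corresponding lat/lon on the Google Map)
# happens on the JavaScript side and is not shown here.
```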
Google Fusion Tables does a nice job at this: http://www.google.com/fusiontables/Home/
It's good that your data are already geocoded, because, from my recent experience, Google's location resolver does not let you deal with ambiguous locations.
One solution could be to create bitmaps of your density maps and add them (only one at a time) as an overlay on your Google Map (with GGroundOverlay).
You may have a look at this post, which gives an example of a density map with Google Maps. It uses the HeatMapAPI. Unfortunately, this API is not free if you use it with a large number of points...
But building your own density bitmap may not be so complicated...
One other solution is to reduce the number of markers you use. This could be done with the MarkerClusterer library. It is not exactly a density map, but... it can maybe be useful.
http://heatmap.codeplex.com/