My goal is to place an object on an ARCore plane in a room and then save the plane and the object's data to a file. After the app exits and starts again, the saved object can be loaded from the file and displayed at the same position as last time.
To persist virtual objects, we could probably use VPS (Visual Positioning Service, not released yet) to localize the device within a room.
However, there's no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want, go back to one or more Augmented Images, and save the positions of the corners of your Augmented Images to a text or binary file on your device.
Then, in the next session (let's say you used one Augmented Image and its 4 corner points), you load those positions and calculate a transformation matrix between the two sessions using the two groups of 4 points that are common to each session. You need this because ARCore's coordinate system changes every session, depending on the device's initial position and rotation.
Finally, you can calculate the positions and rotations of your anchors in the new session using this transformation matrix. They will be placed at the same physical location, with an error margin caused by the accuracy of Augmented Image tracking. If you use more points, this error margin will be lower.
I have tested this with 4 points in each group, and it is quite accurate considering my anchors were placed at arbitrary locations, not attached to any Trackable.
To calculate the transformation matrix, you can refer to this.
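As a rough illustration of the math (a sketch of the general idea, not necessarily the linked approach), assume the four image corners are available in both sessions in the same order; it is written with Swift's simd types here for brevity, but the same construction works with ARCore's pose/matrix math on Android:

```swift
import simd

// Build a rigid frame (rotation + translation) from three corners of the
// augmented image: origin at corner 0, X axis toward corner 1, plane
// completed by corner 3.
func frame(from p0: SIMD3<Float>, _ p1: SIMD3<Float>, _ p3: SIMD3<Float>) -> simd_float4x4 {
    let x = simd_normalize(p1 - p0)
    let z = simd_normalize(simd_cross(x, p3 - p0))
    let y = simd_cross(z, x)
    return simd_float4x4(columns: (
        SIMD4<Float>(x.x, x.y, x.z, 0),
        SIMD4<Float>(y.x, y.y, y.z, 0),
        SIMD4<Float>(z.x, z.y, z.z, 0),
        SIMD4<Float>(p0.x, p0.y, p0.z, 1)))
}

// cornersOld: corners saved in the previous session,
// cornersNew: the same physical corners as tracked in the current session.
func sessionTransform(cornersOld: [SIMD3<Float>], cornersNew: [SIMD3<Float>]) -> simd_float4x4 {
    let oldFrame = frame(from: cornersOld[0], cornersOld[1], cornersOld[3])
    let newFrame = frame(from: cornersNew[0], cornersNew[1], cornersNew[3])
    // Maps a point expressed in the old session's world coordinates
    // into the new session's world coordinates.
    return newFrame * simd_inverse(oldFrame)
}

// Re-anchoring a saved pose then becomes:
// let newPose = sessionTransform(cornersOld: saved, cornersNew: tracked) * savedAnchorPose
```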
Related
I'm working on an ARKit application in which I place several anchors around a room, save the world map data to a server, and then restore that data and the anchors on a different device. Obviously, localization and drift of the anchors are something the app has to cope with.
My question is: can placing an image anchor in the room and scanning that image help with re-localizing after re-loading the world map data? Would ARKit use the pose of a scanned image as feedback into its re-localization process? For the app I'm working on, it is possible to have an image marker (such as a QR code) placed at a consistent location within a room, such that the app can be sure the image has not physically moved. Would scanning such an image and placing an image anchor at its location help with re-localizing when the world map is later loaded on a different device?
I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So, if I overlay the two, the dose will not be in the correct position relative to what the Elekta Gamma Plan software exported.
I've played around with dicompyler and 3DSlicer, and they obviously are able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data when using those programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or equivalent; my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying 3D points and locations for all the pixel data in patient coordinates. This uses a set of attributes in the DICOM files known as the Image Plane Module tags. See here for a good overview.
The SimpleITK library fully understands and uses the full 3D Image Plane tags to identify and locate any images in patient coordinates by default, irrespective of things such as the specific pixel spacing, slice thickness, etc.
So, in your case, if you use SITK to open your studies, you should be able to overlay them correctly "out of the box", because SITK will do all the work of parsing the Image Plane Module tags and locating the data in patient coordinates, just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note that I use both pydicom and SITK. This isn't a knock on pydicom; it's more a question of the right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D work, SITK is the easier toolkit to use.
I'm trying to find the best strategy to align an SCNScene to a physical table, just like the ARKit app WWF Free Rivers does.
Currently I'm just testing by mapping a simple plane model with the same dimensions as the table. If I draw out the plane that ARKit detects, I can see that the plane is not very accurate at the edges; it always extends beyond them (image below).
So I can't really rely on that plane to simply place the model at its center. The model is not rotated correctly either (image below).
I had another idea: use the ARReferenceImage technique, take a picture of the table-top texture, and let ARKit find and match this "image" of the table. But even with the wood-grain texture, there wasn't enough data for ARKit to recognize it. And ARKit just fails in that case; it doesn't even attempt a poor match.
How can I go about doing this?
Ideas I've had so far:
Take an image of the table and use the ARReferenceImage feature to match it. This didn't work. Maybe it would if I added some more distinctive feature points to the table, like QR codes in the corners.
Detect the plane, then tap the four corners of the table to map out a square, and use that.
Do as the WWF app does: just place the object randomly on the plane, then let the user scale, move, and rotate the model into the correct placement.
Any more ideas? What do you think will be the best approach to this?
Two options come to mind that you could use.
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARReferenceImage: walk around the area while creating a map that subsequent ARKit sessions will remember. You can experiment a bit with how to fit your models within the four corners of the table (this is slightly tedious without much help from the SceneView editor). However, when you load the saved ARWorldMap and localize against it (just like with the ARReferenceImage), your model should fit within the four corners of the table every time.
If you use something like Unity (and its ARKit plugin), it has much more powerful editor tools (a 3D viewer/designer). There are tools that can help you save a map just like ARWorldMap and then bring the details of the map into the editor, so you can line things up really easily. Placenote's Spatial Capture toolkit can help here. Placenote (iOS 11+) creates its own "world map", but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against (example). The map is also stored on a managed cloud from the get-go, which makes sharing across phones much easier.
P.S.: Both of these options require you to keep the environment generally static (no large lighting changes, etc.), though this is a similar constraint to using ARReferenceImage.
I am working on a project using ARKit. I need to save an object's position and see it at the next application launch, wherever it was. For example, in my office I attach some text to a door, go back home, and the next day I want to see that text in the same place it was. Is this possible in ARKit?
In iOS 12: Yes!
"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.
Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.
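A minimal sketch of the save/load flow (assuming an ARSCNView called sceneView and a file URL mapURL of your choosing; error handling trimmed):

```swift
import ARKit

// Saving: capture the current world map and archive it to disk.
func saveWorldMap(to mapURL: URL, from sceneView: ARSCNView) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            try data.write(to: mapURL, options: .atomic)
        } catch {
            print("Saving failed: \(error)")
        }
    }
}

// Loading: unarchive the map and hand it to a new session configuration.
func restoreWorldMap(from mapURL: URL, into sceneView: ARSCNView) throws {
    let data = try Data(contentsOf: mapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    // After relocalization succeeds, the saved ARAnchors are delivered again via
    // session(_:didAdd:); use anchor.name to decide which virtual content to re-create.
}
```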
Otherwise, probably not.
Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.
With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.
But maybe...
One of the additions from “ARKit 1.5” in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.
Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR Session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
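A minimal sketch of that, assuming the known image has been added to the configuration's detectionImages and the class below is set as the session delegate:

```swift
import ARKit

final class OriginResetter: NSObject, ARSessionDelegate {
    private var didSetOrigin = false

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            // When the fixed, known image is detected, move the world origin onto it
            // so every session that sees this image shares the same coordinate system.
            if let imageAnchor = anchor as? ARImageAnchor, !didSetOrigin {
                session.setWorldOrigin(relativeTransform: imageAnchor.transform)
                didSetOrigin = true
            }
        }
    }
}
```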
I see several ways to draw an indoor building map in an iOS app:
1) Draw using OpenGL programmatically.
2) Draw using QuartzCore and CoreAnimation programmatically.
3) Draw the map in AutoCAD and then somehow connect it to iOS.
4) Draw the map using SVG.
The requirements are support for pathfinding and GPS navigation.
For the first two options, I think it would be expensive in terms of performance to redraw all elements when scaling, and I don't think this approach can support GPS navigation.
With AutoCAD-drawn maps, it's hard for me to understand how to connect them with graphs/paths for pathfinding.
My colleagues will develop the web version of this app using SVG. I found https://github.com/SVGKit/SVGKit, but I still have no idea how it would support pathfinding and navigation.
I would appreciate any help.
Generally, there are two types of map applications:
A) Applications that display a map (with or without the user's position), without needing to calculate a path the way a navigation system does (see point B).
B) Applications that use the vectors of a map and calculate something from them, e.g. finding the best path or the shortest connection, as a navigation system does.
Applications of type A) are usually less complex than those of type B), because the vectors can be somewhat inaccurate, have no connections, have small gaps, have no logic between the edges, etc.
1) To only display a building map, you would only need a list of edges (an edge being a pair of coordinates (x1,y1) - (x2,y2)), however you obtain them, e.g. from the MapInfo Professional format (mif/mid).
Or you could even display a PDF that contains the map of the building, right in the built-in PDF view (also possible with SVG, but that is more difficult).
Things get much more complicated if it is not just a relative map, but a map positioned in a reference coordinate system, such as latitude/longitude (WGS84).
In that case, you would use a tool (e.g. MapInfo Professional) to import the AutoCAD DXF files, apply 3 GPS-measured reference points at the corners of the building, and convert everything to the lat/long WGS84 coordinate system.
With iOS you cannot measure those 3 points yourself, because you cannot average a position; iOS stops delivering location updates when you are standing still at one corner of the building.
You could try to extract the positions from a Google Earth satellite photo if you live in a region where the satellite photos have high resolution (but this might violate the license conditions of the satellite photo provider; see the topic of derived data).
Finally, you now have a list of edges in the lat/long coordinate system.
For displaying, I personally would go with either 1) OpenGL or 2) Quartz2D.
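A minimal Quartz2D sketch of the display part, assuming the edges have already been converted to local plan coordinates (the Edge struct and the naive scaling are just placeholders):

```swift
import UIKit

struct Edge {
    let a: CGPoint
    let b: CGPoint
}

final class FloorPlanView: UIView {
    var edges: [Edge] = [] {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(), !edges.isEmpty else { return }

        // Fit the bounding box of all edges into the view.
        let xs = edges.flatMap { [$0.a.x, $0.b.x] }
        let ys = edges.flatMap { [$0.a.y, $0.b.y] }
        let minX = xs.min()!, minY = ys.min()!
        let scale = min(bounds.width  / max(xs.max()! - minX, 1),
                        bounds.height / max(ys.max()! - minY, 1))

        ctx.setStrokeColor(UIColor.black.cgColor)
        ctx.setLineWidth(1)
        for edge in edges {
            ctx.move(to:    CGPoint(x: (edge.a.x - minX) * scale, y: (edge.a.y - minY) * scale))
            ctx.addLine(to: CGPoint(x: (edge.b.x - minX) * scale, y: (edge.b.y - minY) * scale))
        }
        ctx.strokePath()
    }
}
```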
Now, the pathfinding part.
Probably you need a second "map" that defines the possible paths inside the building.
This structure must be a connected graph (points with connected neighbours).
Computer games do it this way (some even allow you to display that path graph in developer mode).
The paths can be drawn in a different layer of the floor plan, but this layer has stricter requirements: no gaps are allowed, and everything must be perfectly connected.
Call that layer "Path" and export it as its own plan.
Now use only this path layer, import it, and create a graph of nodes with connected neighbours.
Use Dijkstra's algorithm to search for the shortest path.
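A minimal sketch of Dijkstra over an adjacency list built from that path layer (node IDs and weights are placeholders; a production version would use a priority queue):

```swift
// Graph as adjacency list: node -> [(neighbour, distance)].
typealias Graph = [Int: [(node: Int, weight: Double)]]

// Returns shortest distances from `start` plus a predecessor map to rebuild paths.
func dijkstra(graph: Graph, start: Int) -> (distance: [Int: Double], previous: [Int: Int]) {
    var distance: [Int: Double] = [start: 0]
    var previous: [Int: Int] = [:]
    var unvisited = Set(graph.keys)

    while let current = unvisited.min(by: { (distance[$0] ?? .infinity) < (distance[$1] ?? .infinity) }) {
        unvisited.remove(current)
        guard let currentDistance = distance[current] else { break } // remaining nodes are unreachable
        for (neighbour, weight) in graph[current] ?? [] {
            let candidate = currentDistance + weight
            if candidate < (distance[neighbour] ?? .infinity) {
                distance[neighbour] = candidate
                previous[neighbour] = current
            }
        }
    }
    return (distance, previous)
}

// Example: corridor nodes 0-1-2-3 plus a long direct connection 0-3.
let corridors: Graph = [0: [(node: 1, weight: 2), (node: 3, weight: 10)],
                        1: [(node: 0, weight: 2), (node: 2, weight: 2)],
                        2: [(node: 1, weight: 2), (node: 3, weight: 2)],
                        3: [(node: 2, weight: 2), (node: 0, weight: 10)]]
let result = dijkstra(graph: corridors, start: 0)
// result.distance[3] == 6, via 0 -> 1 -> 2 -> 3 rather than the direct 0 -> 3 edge.
```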