Create an ARReferenceObject manually from a 3D model? - ios

I want to recognize 3D objects from models I provide.
There is no way I can scan the objects because they are mounted on and inside aircraft engines...
But we do have the 3D models in FBX, OBJ, and other formats.
Would it be possible to somehow convert these to ARReferenceObjects so we don't have to use Unity (it's working there...) but can use iOS natively?

If I understand your question correctly, you want the ability to scan the real world for a 3-dimensional object recognized from a 3D model. Apple's ARKit on iOS has the ability to recognize objects in the real world based on 3D objects, though it does need the .arobject file that is created from a scan. It is also best suited for small objects that fit on a table, and objects with strong textures are easier to recognize than mono-coloured parts. So this might not be the solution yet. There is, however, also the Vuforia plugin, whose overview doesn't mention the need for pre-scanning your object:
https://library.vuforia.com/content/vuforia-library/en/features/overview.html
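For reference, once you do obtain scanned .arobject files, wiring up ARKit's object detection looks roughly like the sketch below; the resource group name "Engines" is just a placeholder, not something from the question.

import ARKit

// Minimal sketch (assumptions: an AR Resource Group named "Engines" containing
// .arobject files, and an ARSCNView driving the session).
func runObjectDetection(on sceneView: ARSCNView) {
    guard let referenceObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "Engines", bundle: nil) else {
        fatalError("Missing AR resource group")
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = referenceObjects
    sceneView.session.run(configuration)
}

// A successful match is reported to the ARSCNViewDelegate as an ARObjectAnchor:
// func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//     if let objectAnchor = anchor as? ARObjectAnchor {
//         print("Detected \(objectAnchor.referenceObject.name ?? "object")")
//     }
// }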

Related

Identify objects by their position with OpenCV

I'm very new to image processing libraries. I've been looking into OpenCV. But I have a question.
What sort of algorithms could I use if I want to identify a few similar objects in a room?
Let's say 3 similar tables.
With a camera I assign an identity to each of those tables. After I move the camera
to a position where the objects are out of sight, when pointing the camera back
at them, the system can properly identify those objects with the initial ID and trigger actions based on each ID.
I read about ArUco markers, but I would like to try the idea without having to attach markers.
There are plenty of methods to choose from. You could use image features, color matching, shape matching, pattern matching ... and so on. It really depends on the specific use case and the environment. In any case you need something unique to distinguish the tables from each other. Using markers would be one way to artificially create uniqueness.
Maybe you want to start reading here to get a feeling for how one method works:
https://docs.opencv.org/3.4.1/dc/dc3/tutorial_py_matcher.html
Could you provide an example set of images of the scenario?
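Since the rest of this thread is iOS-centric, here is a hedged Swift sketch of the same general idea using Apple's Vision framework feature prints (iOS 13+) instead of OpenCV; the dictionary of registered tables and the helper names are assumptions for illustration only.

import Vision
import CoreGraphics

// Compute an appearance "fingerprint" for an image crop of a table.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Re-identify a table by comparing its fingerprint against previously registered ones.
func bestMatch(for query: CGImage,
               among registered: [String: VNFeaturePrintObservation]) throws -> String? {
    guard let queryPrint = try featurePrint(for: query) else { return nil }
    var best: (id: String, distance: Float)?
    for (id, candidate) in registered {
        var distance: Float = 0
        try queryPrint.computeDistance(&distance, to: candidate)  // smaller = more similar
        if best == nil || distance < best!.distance {
            best = (id, distance)
        }
    }
    return best?.id
}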

User defined target image and user defined model at run time using vuforia

How can I download a 3D model from the web and then augment it on a user-defined target image in Vuforia at run time?
Using a UserDefinedTarget image in Vuforia we can only augment a predefined 3D model.
But what I want to do is augment any desired 3D model that can be downloaded from the web and then augmented on a user-defined target image at run time.
Your question is not really related to Vuforia or AR specifically - UserDefinedTarget only tells you where to draw what you want. This is a pure OpenGL question.
Basically, this should not be a big deal - download the 3D model file, then load and parse it at runtime (rather than loading the model from a hard-coded file) to get all the model data.
Here you can find an example using the Rajawali library (Android), but you should be able to find other ways quite easily:
how to add 3d models dynamically
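On iOS, a comparable flow would be to download the model file with URLSession and load it through SceneKit, which can read .usdz, .dae and .obj files. The sketch below is an assumption-laden analogue of the Android example above, not the answer's own code; the remote URL is a placeholder.

import SceneKit

// Download a model file at runtime and hand back a SceneKit node for it.
func downloadAndLoadModel(from remoteURL: URL,
                          completion: @escaping (SCNNode?) -> Void) {
    URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { completion(nil); return }
        // SceneKit picks its loader from the file extension, so preserve it.
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(remoteURL.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        try? FileManager.default.moveItem(at: tempURL, to: localURL)
        let scene = try? SCNScene(url: localURL, options: nil)
        DispatchQueue.main.async {
            completion(scene?.rootNode.clone())
        }
    }.resume()
}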

ARKit with multiplayer experience to share same planes [duplicate]

What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
===Edit1, Things I've tried===
1) Feature points
I've played around with the exposed raw feature points and I'm now convinced that in their current state they are a dead end:
they are not truly raw feature points: they only expose positions, but none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
it often happens that the reported feature points change significantly even when the camera input is barely changing, with many appearing or disappearing.
So overall I think it's unreasonable to try to use them in any meaningful way, since it's not possible to do any kind of good point matching within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more replacing ARKit than leveraging it.
2) QR code
As #Rickster suggested, I've also tried identifying an easily identifiable object like a QR code and getting the relative referential change from that fixed point (see this question). It's a bit difficult and required me to use some OpenCV to estimate the camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
(BTW, the second and third points above are where you get the "2 to 6" figure from #andy's answer — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8 peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
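A hedged sketch of the save-and-send flow described in these points, assuming an already-connected MCSession and a running ARSession; the function names are made up for illustration:

import ARKit
import MultipeerConnectivity

// Capture the current world map and send it to all connected peers.
func shareWorldMap(session: ARSession, mcSession: MCSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
        } catch {
            print("Sending world map failed: \(error)")
        }
    }
}

// On the receiving device (e.g. in the MCSessionDelegate's didReceive callback),
// relocalize the local session into the shared map.
func handleReceivedWorldMap(data: Data, arSession: ARSession) {
    guard let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    arSession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}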
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
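As a minimal illustration of that alignment mode (my own snippet, not part of the original answer):

import ARKit

// With .gravityAndHeading, every device's session axes are aligned to gravity and
// compass heading, so only the origin position differs between devices.
let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading
// sceneView.session.run(configuration)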
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2–6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new augmented reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you can create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)
        return
    }
    // Relocalize (or share) using the captured map — note this must happen
    // inside the completion handler, where `worldMap` is in scope.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration)
}
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it on the next launch of the app in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
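A rough persistence sketch of that idea: the file name and storage location are placeholders, and error handling is left to the caller.

import ARKit

// Location where the archived map is stored between launches (placeholder name).
let mapFileURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldMap.arexperience")

// Archive the captured world map to disk.
func saveWorldMap(_ worldMap: ARWorldMap) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    try data.write(to: mapFileURL, options: .atomic)
}

// On the next launch, restore the map and relocalize in the same physical environment.
func restoreSession(into session: ARSession) throws {
    let data = try Data(contentsOf: mapFileURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}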
These aren't bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and let them start the session there, there you go. Imagine the inside edges of an L-square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, (re)init the session.
Medium Save the players from aligning their phones manually; instead, detect a real-world marker with image analysis, just like #Rickster described (see the image-detection sketch after this list).
Involved Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off ML to save power. Note: make sure your model is cover-proof. :)
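For the "Medium" option, ARKit's built-in image detection already handles the marker part. The sketch below is a hypothetical setup where the marker image name and its physical width are assumptions.

import ARKit

// Detect a known printed marker and use its anchor as the shared origin.
func runMarkerDetection(on sceneView: ARSCNView, markerImage: CGImage) {
    let marker = ARReferenceImage(markerImage, orientation: .up, physicalWidth: 0.10) // 10 cm wide (assumed)
    marker.name = "shared-marker"
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = [marker]
    sceneView.session.run(configuration)
}

// In the ARSCNViewDelegate, the detected marker arrives as an ARImageAnchor whose
// transform every device can treat as a common reference point:
// func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//     guard let imageAnchor = anchor as? ARImageAnchor else { return }
//     let sharedOrigin = imageAnchor.transform
//     // ... express shared content relative to sharedOrigin on every device
// }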
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference, and specifically I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would remain theoretically in sync on all devices. Still testing. The obvious potential problem is latency or disruption in the network and the sync falls apart, and it would be difficult to recover except by restarting the game. The approach and framework may work for some types of games fairly well - for example, straightforward arcade-style games, but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.

What is a sensible strategy for evaluating WebGL-based APIs?

Since there are a lot of high-level APIs, libraries, and frameworks available for WebGL for developing 3D web applications, I want to select the best (sorry, this is a bit blunt) to implement a particular model (which isn't game oriented) on the web. I'm unsure how to approach this. The criteria I want to use for evaluation are:
pickable objects, easily defined geometry and corresponding textures, multi-camera rendering, the possibility to incorporate GLSL implementations, and the types of buffers available.
I can't experiment with and judge each framework by developing a demo application in every one of them, due to time constraints. Is there any particular way to read the documentation for the available APIs that covers all of these? Moreover, the problem is that every framework claims to be good at some part; how do I get past this to justify a single framework among all those available out there?
A suggestion would help my research...
If you have Maya at hand, then www.inka3d.com is easy in terms of defining geometry and textures (because you do that with Maya and your favorite image editor), and you get pickable objects. For shaders you can't use GLSL but have to use Maya's node-based shader editor.

Creating 3D model using set of 2D images on Windows

I want to create a 3D model using a set of 2D images on Windows, which can be sent through a web service to an iPhone to display on it.
I know it can be done with OpenGL, but I don't know how to start. Also, if I succeed in creating it, will it be compatible with the iPhone, since the iPhone uses OpenGL ES?
Thanks in advance.
What kind of transformation do you have in mind to create the 3D models? I once worked on an application using such a concept to create a model from three images of an object. It didn't really work well. The models that could be created were very limited.
OpenGL does not have built-in functionality to do this kind of thing. Is there any reason why you do not want to use a real 3D model? It sounds as if you are looking for a quick solution to your problem, but I'm afraid that if you do not have any OpenGL experience, you should prepare for a lot of learning.
If you want to create 3D models automatically from 2D photos, you're going to have a fair bit of work to do. AFAIK, this is not something where you can get a cheap pre-packaged solution. Autodesk charge a small fortune for ImageModeler.
MeshLab may be a good starting point, but even that can't automatically convert photos to a 3D model AFAIK.
Take a look at David Lowe's site. I found the "Distinctive image features from scale-invariant keypoints" paper quite interesting, though I haven't re-read it in a while. If nothing else, this should give you some idea of why this is far from a trivial problem.
