Is there any proof of concept of how to implement multiple AR markers w/ A-Frame?
Ex. Something like this: https://www.youtube.com/watch?v=Y8WEGGbLWlA
The first video in this post from Alexandra Etienne is the effect I’m aiming for (multiple distinct AR "markers" with distinct content): https://medium.com/arjs/area-learning-with-multi-markers-in-ar-js-1ff03a2f9fbe
I’m a bit unclear on whether, when using multiple markers, they need to be close to each other or be in the same camera view.
This example from the AR.js repo uses multiple markers, but they're all of different types (i.e. one is a Hiro marker, one is a Kanji marker, etc.): https://github.com/jeromeetienne/AR.js/blob/master/aframe/examples/multiple-independent-markers.html
TL;DR: working Glitch project here. Learn the area (the image is in the assets), click the accept button, and toggle the marker helpers.
Now, a bit of detail:
1) Loading saved area data
Upon initialisation, when AR.js detects that you want to use the marker-area preset, it tries to grab a localStorage entry:
localStorage.getItem("ARjsMultiMarkerFile")
The most important data there is an array of pairs {markerPreset, URL of the .patt file}, which will be used to create the area.
Note: by default it's just the Hiro marker.
2) Creating an area data file
When you have debugUIEnabled set to true:
<a-scene embedded arjs='sourceType: webcam; debugUIEnabled: true'>
A button "Learn-new-marker-area" shows up.
At first glance, it just redirects you to a screen where you can learn and save the area file.
There is one catch though: by default, the learner page that gets loaded is on another domain.
Strictly speaking: ARjs.Context.baseURL = 'https://jeromeetienne.github.io/AR.js/three.js/'
Any data saved there won't be loaded on our website, because localStorage is isolated per origin.
To save and use the marker area, you have to create your own learner.html. It can be identical to the original; just keep in mind it has to be served from the same domain as your site.
To make the debug UI button redirect the user to your learner HTML file, you need to set
ARjs.AnchorDebugUI.MarkersAreaLearnerURL = "myLearnerUrl.html"
before the <a-marker>s are initialized. Just do it in the <head>.
Once on the learner site, make sure the camera sees all the markers, and approve the learning.
Once approved, you will be redirected back to your website, the area file will be loaded, and the data will be used.
As @mnutsch stated, AR.js does what you want.
You can display two different models on two different markers. If the camera doesn't see one of the markers, its model vanishes (or stays where it was last, depending on your implementation).
The camera doesn't need to see both.
Screenshot:
https://www.dropbox.com/s/i21xt76ijrsv1jh/Screenshot%202018-08-20%2011.25.22.png?dl=0
Project:
https://curious-electric.com/w/experiments/aframe/ar-generic/
Also, unlike Vuforia, there is no 'extended tracking': once the marker is out of sight, you can't track anymore.
I'm trying to find the best strategy to align an SCNScene to a physical table, just like the ARKit app WWF Free Rivers does.
Currently I'm just testing with a simple plane model that has the same dimensions as the table. If I draw out the plane that ARKit detects, I can see that it is not very accurate at the edges; it always extends past the table's edges (image below).
So I can't really rely on that plane to place the model at the table's center, and the model is not rotated correctly either (image below).
I had another idea: use the ARReferenceImage technique, take a picture of the table-top texture, and let ARKit find and match this "image" of the table. But even with the wood-grain texture there wasn't enough data for ARKit to recognize it, and ARKit simply fails in that case; it doesn't even try to make a rough match.
How can I go about doing this?
Ideas I've had so far:
Take an image of the table and use the ARReferenceImage feature to match it. This didn't work. Maybe it would if I added some more distinctive feature points to the table, like QR codes in the corners.
Detect the plane, then tap the four corners of the table to map out a rectangle, and use that.
Do as the WWF app does: just place the object somewhere on the plane, and then let the user scale, move and rotate the model into the correct placement.
Any more ideas? What do you think will be the best approach to this?
Two options I can think of that you could use:
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARReferenceImage: walk around the area while creating a map that subsequent ARKit sessions will remember. You can experiment a little with how to fit your model within the four corners of the table (this is slightly tedious without much help from a scene editor). However, when you load the saved ARWorldMap and localize against it (just like with an ARReferenceImage), your model should fit within the four corners of the table every time.
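For the "fit the model within the four corners" part, once you have the four corner positions in world space (however you obtain them: taps hit-tested against the detected plane, or positions re-measured after relocalizing against the world map), the placement is simple vector math. A minimal Swift sketch; the corner ordering, the tappedCorners array and the tableNode name are my own assumptions, not part of any ARKit API:

import Foundation
import simd

// Given the four table corners in world space (ordered around the table top),
// build a transform that puts a node at the table's center, rotated to line up
// with the edge between the first two corners. Pure math, no ARKit calls.
func tableTransform(from corners: [simd_float3]) -> simd_float4x4 {
    precondition(corners.count == 4, "expected exactly four corners")
    // Center of the table top = average of the four corners.
    let center = corners.reduce(simd_float3(repeating: 0), +) / 4
    // Yaw angle of one table edge, measured in the horizontal plane.
    let edge = corners[1] - corners[0]
    let yaw = atan2f(edge.z, edge.x)
    // Rotate around the world up-axis, then translate to the center.
    var transform = simd_float4x4(simd_quatf(angle: -yaw, axis: simd_float3(0, 1, 0)))
    transform.columns.3 = simd_float4(center, 1)
    return transform
}

// Usage (names are placeholders):
// tableNode.simdWorldTransform = tableTransform(from: tappedCorners)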
If you use something like Unity (and its ARKit plugin), you get much more powerful editor tools (a 3D viewer/designer). There are tools that can help you save a map just like ARWorldMap, but then bring the details of that map into the editor so you can line things up easily. Placenote's Spatial Capture toolkit can help here: Placenote (iOS 11+) creates its own "world map", but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against (Example). The map is also stored on a managed cloud from the get-go, which makes sharing across phones much easier.
P.S.: Both of these options require you to keep the environment generally static (no large lighting changes, etc.), though the same constraint applies when using ARReferenceImage.
I am working on a project using ARKit. I need to save an object's position and see it in the same place on the next application launch, wherever it was. For example, in my office I attach some text to a door, go home, and the next day I want to see that text in the place where it was. Is this possible in ARKit?
In iOS 12: Yes!
"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.
Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
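To make that concrete, here is a minimal Swift sketch of the save/load round trip; the file name and the bare-bones error handling are my own choices, not Apple's sample code:

import ARKit

// Where the serialized map lives between launches; the file name is arbitrary.
let worldMapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("office.worldmap")

// Save: capture the current world map (including your named ARAnchors) and archive it.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap, requiringSecureCoding: true) {
            try? data.write(to: worldMapURL)
        }
    }
}

// Load: hand the saved map to a new session. Once ARKit relocalizes, the old
// anchors reappear and session(_:didAdd:) fires again, so you can check each
// anchor's name to decide which virtual content to re-create.
func restoreWorldMap(on session: ARSession) {
    guard let data = try? Data(contentsOf: worldMapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}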
For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.
Otherwise, probably not.
Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.
With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.
But maybe...
One of the additions from “ARKit 1.5” in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.
Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR Session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
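A rough Swift sketch of that workaround follows; the "AR Resources" asset-catalog group name and the delegate wiring are the usual setup, but treat the details as assumptions rather than Apple's sample code:

import ARKit

class OriginFromImageDelegate: NSObject, ARSessionDelegate {

    // Start world tracking with image detection enabled. The reference images
    // are assumed to live in an "AR Resources" group in the asset catalog.
    func run(on session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) ?? []
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }
            // Redefine the session's world coordinate system around the detected
            // image, so coordinates line up across sessions that see the same image.
            session.setWorldOrigin(relativeTransform: imageAnchor.transform)
        }
    }
}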
I studied the multi-marker documentation of ARToolKit for iOS and I'm having trouble achieving a sort of QR-code-like setup.
What I want, for example:
A set of 6 markers positioned differently on a picture, and when and only when ALL of them are present, some sort of video is displayed at their common origin (I want to use corner markers, like in a QR-code system).
How can I do this? From what I've seen, with multi-markers the object is displayed even if only 1 of the 6 markers is present.
From looking into the ARToolKit code you can see that a multi-marker is internally handled as one single marker consisting of several patterns:
https://github.com/artoolkit/artoolkit5/blob/master/lib/SRC/ARWrapper/ARMarker.cpp#L344
https://github.com/artoolkit/artoolkit5/blob/master/lib/SRC/ARWrapper/ARMarkerMulti.cpp#L75
That is why ARToolKit will always return true whenever at least one of the markers configured in the multi-marker configuration is visible.
Taking that into account, multi-markers are not the way to go for what you would like to achieve.
What you can do, however, is configure each marker separately and add them as single markers. Then you can query whether all of these single markers are visible.
If so, you can calculate the origin of all these single markers and render your object there.
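The linked ARToolKit code is C++/Java, but the origin calculation itself is framework-independent. Here is a sketch of just that math in Swift; how you query each single marker's visibility and pose depends on the ARWrapper API, so treat markerPoses as coming from wherever your wrapper exposes it:

import simd

// Sketch of the "all markers visible" check plus the common-origin calculation.
// Each entry is the 4x4 pose of one single marker for the current frame, or nil
// if that marker is not currently tracked.
func commonOrigin(of markerPoses: [simd_float4x4?]) -> simd_float4x4? {
    // Only proceed when ALL markers are visible, per the requirement above.
    let visible = markerPoses.compactMap { $0 }
    guard visible.count == markerPoses.count, let first = visible.first else { return nil }

    // Centroid of the marker positions (translation column of each pose).
    let sum = visible.reduce(simd_float4(repeating: 0)) { $0 + $1.columns.3 }
    let center = sum / Float(visible.count)

    // Keep the orientation of the first marker, moved to the centroid.
    var origin = first
    origin.columns.3 = simd_float4(center.x, center.y, center.z, 1)
    return origin
}

// If commonOrigin(...) returns nil for a frame, simply skip rendering the video.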
You can get an idea on how to configure several ‘Single-Markers’ if you take a look here:
http://augmentmy.world/moving-cars-augmented-reality
Also, take a look at this example on how to put two markers into the same coordinate system (and calculate the distance between them); you can use it as a starting point for calculating the origin between several markers:
https://github.com/artoolkit/artoolkit5/tree/master/AndroidStudioProjects/ARMarkerDistanceProj
I know that these are not iOS examples, but I have only done Android so far. The ARWrapper interface should be the same on Android and iOS, though, so there should not be much difference between the two.
I hope that helps.
I am using Objective-C for the first time, and instead of choosing to do a nice simple app I'm trying to make a tour application that shows the route on a map with the tourist points marked. I eventually want to add audio to the points, but for now I just want the route mapped and working!
Yes, I know I'm silly for picking such a hard task, but can anyone help? I'm just stuck on how to get into this. I don't want anything fancy; I really just want to get the map up with the points and route displayed.
You can store the specific points in a plist file (or a plain text file).
Once you have those points, add a map view to your app (just drag one in), then read the points and add an annotation to the map for each one.
You can also draw lines between those points, so you can display the route.
Apple's map view and GPS APIs have a way of telling you when the phone has moved. When you detect that, you can check whether the phone is near one of the audio points and then start playing the sound (there is a function that gives you the distance between two points on the map).
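The question mentions Objective-C, but the same MapKit / Core Location calls exist there; here is a compact Swift sketch of the flow above (the coordinates, stop titles, plist handling and the 30 m trigger radius are placeholders I made up):

import UIKit
import MapKit
import CoreLocation

class TourViewController: UIViewController, CLLocationManagerDelegate {

    @IBOutlet var mapView: MKMapView!          // the map view dragged into the storyboard
    let locationManager = CLLocationManager()

    // In practice these would be read from the plist; hard-coded here for brevity.
    let tourPoints = [
        CLLocationCoordinate2D(latitude: 51.5007, longitude: -0.1246),
        CLLocationCoordinate2D(latitude: 51.5014, longitude: -0.1419),
    ]

    override func viewDidLoad() {
        super.viewDidLoad()

        // One pin per tour point.
        for (index, coordinate) in tourPoints.enumerated() {
            let pin = MKPointAnnotation()
            pin.coordinate = coordinate
            pin.title = "Stop \(index + 1)"
            mapView.addAnnotation(pin)
        }

        // A polyline through the points as the route.
        mapView.addOverlay(MKPolyline(coordinates: tourPoints, count: tourPoints.count))

        // Get told when the phone moves.
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        for (index, coordinate) in tourPoints.enumerated() {
            let point = CLLocation(latitude: coordinate.latitude, longitude: coordinate.longitude)
            if current.distance(from: point) < 30 {    // 30 m radius is an arbitrary choice
                print("Near stop \(index + 1): start its audio here")
            }
        }
    }
}

To actually draw the polyline you also need to set yourself as the map view's delegate and return an MKPolylineRenderer from mapView(_:rendererFor:), and the location updates require the usual location-permission entries in Info.plist.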
I am asking this because I couldn't find the answer anywhere, at least using the keywords I could think of.
The most relevant question/answer I've found is (Create interactive videos in iPad - An app for product demo). The user Jano replied:
The easiest way to create interactive videos for iOS is to use Apple's HTTP Live Streaming technology. You have to create a video, embed metadata, play it using MPMoviePlayerController or AVPlayerItem, and then display clickable areas in response to metadata notifications.
Metadata should contain coordinates for the element you are tracking, e.g. a dress, and an identifier for the product. You overlay this info with a clickable subview that reveals more information about the product. There are several applications of this kind in iTunes; here is one.
Once you get a working product and weeks' worth of video, the most difficult part is to perform the motion tracking with the least possible human interaction. One approach is to use Adobe After Effects, another is to code your own solution based on OpenCV.
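As far as I can tell, the AVPlayerItem part of that flow would look roughly like the sketch below, using AVPlayerItemMetadataOutput to receive the timed-metadata callbacks. The stream URL and the way coordinates are packed into the metadata items are just my assumptions:

import UIKit
import AVFoundation

class InteractiveVideoViewController: UIViewController, AVPlayerItemMetadataOutputPushDelegate {

    var player: AVPlayer!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Placeholder URL for an HLS stream with embedded timed metadata.
        let item = AVPlayerItem(url: URL(string: "https://example.com/demo/stream.m3u8")!)

        // Ask AVFoundation to push timed metadata groups to us as playback reaches them.
        let metadataOutput = AVPlayerItemMetadataOutput(identifiers: nil)
        metadataOutput.setDelegate(self, queue: DispatchQueue.main)
        item.add(metadataOutput)

        player = AVPlayer(playerItem: item)
        player.play()
    }

    func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                        didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                        from track: AVPlayerItemTrack?) {
        for group in groups {
            for item in group.items {
                // Decode the tracked element's coordinates / product id from the item,
                // then place a clickable overlay view, or pause and wait for a tap.
                print(item.identifier?.rawValue ?? "unknown identifier", String(describing: item.value))
            }
        }
    }
}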
The example I've found concerning this technology (http://vimeo.com/16455248) showed NSButtons being added automatically when the video reaches the embedded meta-tags. My client wants an interactive human-body video that pauses at a specific time (maybe using the meta-tags) and reacts to the user tapping an element in the video (e.g. imagine a pill inside the stomach; tapping this pill triggers another pre-rendered video, in a way that is not apparent to the user). I have thought about animations using Cocos2D or OpenGL ES, but I lack people who master these technologies.
I didn't quite understand the "motion tracking" reference in the quote above. Jano mentions Adobe After Effects and OpenCV. Is this motion tracking something like a UIGestureRecognizer? Does it track parts of the video itself, or motions initiated by the user, such as taps?
I hope I've presented the question as clearly as possible. Thank you in advance.
This question is a year old, but I can give you insight into the After Effects question. AE has a feature where you can define an area in a video frame and the software will track that area across the timeline, logging the coordinates at specific intervals. For example, in a video of a person riding a mountain bike, you could select an area around their helmet and AE will log coordinates of the helmet throughout the timeline.
Since Flash was the most likely target for interactive video, the typical workflow would encode this coordinate data into a Flash video as cue point events (this is the only method I have personally experienced). According to some googling, the data is stored in key frames and can be extracted using scripts.
More info: http://helpx.adobe.com/after-effects/using/tracking-stabilizing-motion-cs5.html
Here's a manual method for extracting the data:
In the timeline panel, select the footage and press the U key; all track point keyframes will show up. Here's the magic: select the Feature Center property of each track point and copy it (Cmd+C for Mac or Ctrl+C for PC).
Now open any text editor such as TextMate or Notepad and paste the data (Cmd+V for Mac or Ctrl+V for PC).