I have a requirement in Augmented Reality: I am supposed to detect a live object (a pen/marker) and then play some interactive content.
I need your advice on which SDK is appropriate for developing this app.
I have used Vuforia for normal/simple AR apps, but this one involves real-time object detection.
Friends, kindly suggest an SDK that meets this requirement.
-Murali Krishnan
You can do this with a cylinder target. You would need to create a separate target for each pen style you want as a trackable.
https://developer.vuforia.com/resources/dev-guide/creating-cylinder-target
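If you go the native route, here is a minimal sketch of reacting to a detected cylinder target, written against the legacy Vuforia/QCAR C++ API (class and method names as of roughly SDK 2.x; verify them against your SDK version):

    #include <QCAR/Renderer.h>
    #include <QCAR/State.h>
    #include <QCAR/TrackableResult.h>
    #include <QCAR/CylinderTarget.h>
    #include <QCAR/Tool.h>

    // Called once per camera frame, as in the Vuforia sample apps.
    void renderFrame() {
        QCAR::State state = QCAR::Renderer::getInstance().begin();
        for (int i = 0; i < state.getNumTrackableResults(); ++i) {
            const QCAR::TrackableResult* result = state.getTrackableResult(i);
            // React only to cylinder targets: one per pen style in the dataset
            if (result->getTrackable().isOfType(QCAR::CylinderTarget::getClassType())) {
                QCAR::Matrix44F modelView =
                    QCAR::Tool::convertPose2GLMatrix(result->getPose());
                // ... draw the interactive content for this pen with modelView ...
            }
        }
        QCAR::Renderer::getInstance().end();
    }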
As mentioned by ashatte, you should have a look at Metaio's SDK.
You are looking for 3D markerless tracking, and that SDK is very good at it.
Has anyone worked on developing a custom filter for Augmented Reality in iOS apps in Swift? I want to create a very specific filter look for an iOS app that blends AR on top of the existing surroundings.
e.g. a Winter Wonderland theme (snowing, with snow on the ground and on the buildings around the user)
What's the best way to approach this?
To do this you have to use a computer vision technique called SLAM (Simultaneous Localization and Mapping). There are multiple SDKs that offer this for iOS in Swift, such as KudanCV (https://www.kudan.eu/download-kudan-cv-sdk/) and ARToolKit (https://artoolkit.org/download-artoolkit-sdk).
However, if you want to develop your own SLAM algorithm, I'd recommend looking into LSD-SLAM (link in comment) or ORB-SLAM (link in comment).
There is also an iOS port of ORB-SLAM (link in comment).
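Very roughly, every SLAM pipeline alternates between localizing the camera against the map built so far and extending that map. A minimal conceptual sketch in C++ (all types and helpers are placeholders invented for illustration, not any SDK's API):

    #include <vector>

    struct Frame { /* camera image plus extracted features would live here */ };
    struct Pose  { float t[3] = {0, 0, 0}; float q[4] = {0, 0, 0, 1}; }; // translation + quaternion
    struct Map   { std::vector<Pose> keyframes; /* plus triangulated 3D landmarks */ };

    // Placeholder: match the frame against the map and refine the predicted pose.
    Pose trackFrame(const Frame&, const Map&, const Pose& prediction) { return prediction; }

    // Placeholder: has the camera moved far enough to warrant a new keyframe?
    bool needNewKeyframe(const Pose&, const Map& m) { return m.keyframes.empty(); }

    int main() {
        Map map;
        Pose pose; // the world frame is defined by the first keyframe
        for (int i = 0; i < 100; ++i) {
            Frame frame;                                // 1. grab a camera frame
            Pose prediction = pose;                     // 2. predict (e.g. constant-velocity model)
            pose = trackFrame(frame, map, prediction);  // 3. localization: refine pose against map
            if (needNewKeyframe(pose, map))             // 4. mapping: extend the map when needed
                map.keyframes.push_back(pose);
            // 5. (background thread) loop closure corrects accumulated drift
        }
    }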
I hope that helped.
I'm researching AR frameworks in order to select the best option for developing a conference call/meeting application for ODG glasses.
I have only a few guidelines for selecting a framework:
Performance of video streaming (capturing and encoding) must be watched closely to avoid overheating and excessive power consumption,
Should support extended tracking and
Video capturing should not be frame by frame.
I have no experience with the AR field in general, and I would really appreciate it if you could share your opinion or give me some guidance on how to choose the best-fitting framework.
For ODG, you should use Vuforia, according to the device's software details:
Qualcomm Technologies Inc.'s Vuforia™ SDK for Digital Eyewear
Vuforia supports extended tracking (a sketch of enabling it natively follows below). Going by what you are asking, though, you'll need more than just an AR SDK. You'll need to identify exactly what you want. Do you want an application that lets the user see who they are talking to, or do you want some holographic content? Depending on that, maybe smartglasses aren't what you need, and at this point you should try to learn more about the different SDKs out there. I suggest you look at this and that.
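On the extended tracking point: in native code it is switched on per trackable after the dataset is loaded. A hedged sketch against the legacy QCAR C++ API (startExtendedTracking appeared around Vuforia 2.8, and the names may differ in your version; in Unity it is just a checkbox on the target's behaviour):

    #include <QCAR/DataSet.h>
    #include <QCAR/Trackable.h>

    // Turn on extended tracking for every trackable in an already-loaded dataset.
    void enableExtendedTracking(QCAR::DataSet* dataSet) {
        for (int i = 0; i < dataSet->getNumTrackables(); ++i) {
            QCAR::Trackable* trackable = dataSet->getTrackable(i);
            if (!trackable->startExtendedTracking()) {
                // Not every target type supports it; handle the failure here
            }
        }
    }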
These days I want to do some research on augmented reality technology. In particular, I would like to match a 2D image to a 3D model, so that scanning the 2D image displays the 3D model. I know that there are a lot of SDKs (like Metaio and Wikitude) and software that can achieve this in a mobile app. However, what I want to do is achieve it on a website. I hope that users won't need to download a particular mobile app, but can just open a website and scan a picture.
So I would like to know, as the title asks: can AR be realized on a website? If yes, how can I do it, or is there software like Metaio Creator for this? If not, why not?
Thank you to anyone who is willing to answer my naive question.
May I recommend our completely web-based AR & VR tool holobuilder.com by bitstars.com?
It supports 360-degree photospheres that can be enhanced with custom 3D models and then embedded directly into your website as an iframe. It has native support for a stereoscopic view mode and much more.
For your use case, you could have a look at the lower part of this blog post, where you will find information and an embedded example presentation with photosphere imagery containing 3D elements:
http://heyholo.com/google-pushes-vr-great-for-tools-like-holobuilder/
If you want to start creating, I recommend the beginner's guide:
https://medium.com/@maxspeicher/the-definite-guide-to-holobuilder-3b62a54d303e
The CV feature tracking you asked for cannot yet be realized in the browser without an app. But what you can do is display 3D elements perspective-correctly over the camera image and move them with the device sensors (sketched below). That should be as performant as in the player app.
We hope this helps push your research forward, and we would love to read your feedback. If you have any questions, please do not hesitate to ask, here or on any other contact channel!
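For illustration, the core of the sensor-driven approach is turning the device-orientation quaternion that the platform sensor APIs deliver into a rotation matrix for the virtual camera, so 3D elements stay anchored to the world while the phone rotates. The math below is standard; the surrounding names are invented for this sketch:

    #include <cstdio>

    struct Quat { float w, x, y, z; }; // assumed unit length

    // Standard unit quaternion -> 3x3 rotation matrix (row-major).
    void quatToMatrix(const Quat& q, float m[9]) {
        m[0] = 1 - 2*(q.y*q.y + q.z*q.z); m[1] = 2*(q.x*q.y - q.z*q.w);     m[2] = 2*(q.x*q.z + q.y*q.w);
        m[3] = 2*(q.x*q.y + q.z*q.w);     m[4] = 1 - 2*(q.x*q.x + q.z*q.z); m[5] = 2*(q.y*q.z - q.x*q.w);
        m[6] = 2*(q.x*q.z - q.y*q.w);     m[7] = 2*(q.y*q.z + q.x*q.w);     m[8] = 1 - 2*(q.x*q.x + q.y*q.y);
    }

    int main() {
        float m[9];
        quatToMatrix({1, 0, 0, 0}, m); // identity orientation -> identity matrix
        std::printf("diagonal: %.0f %.0f %.0f\n", m[0], m[4], m[8]);
        // For rendering, use the transpose (inverse) of the device rotation
        // as the camera's view rotation.
    }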
I am a student, and my major project is about augmented reality. I have a good background in programming, and I plan to build a very large augmented reality project.
I have downloaded the Vuforia SDK and built some samples using Unity.
My question is: does the Vuforia SDK support 3D tracking?
I have seen the "Sesame Street Augmented Reality Dolls" video on YouTube, but I couldn't find which feature it was made with.
Please tell me how to start doing this.
Here is the video: http://www.youtube.com/watch?v=U2jSzmvm_WA/
According to the moderators on the Vuforia forum:
You cannot detect arbitrary 3D objects, but you can detect 3D objects made up of planar image targets (e.g. a cereal box). Look for the MultiImageTargets section of the Developer Guide and the AR Extension for Unity 3 documentation (https://ar.qualcomm.com/qdevnet/sdk/unity/ar). You can create simple cube objects using the My Trackables system, or you can edit the config.xml file by hand to arrange image targets into the desired configuration.
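To illustrate the hand-edited config.xml route, here is a hypothetical fragment in the style of the old QCAR MultiTargets sample (all names and dimensions are invented, and the parts must reference image targets defined in the same dataset; check the format against your SDK's samples):

    <QCARConfig>
      <Tracking>
        <!-- Image targets for two faces of a box (sizes in scene units) -->
        <ImageTarget name="BoxFront" size="60 60"/>
        <ImageTarget name="BoxBack"  size="60 60"/>
        <!-- Arrange the faces into one 3D trackable; rotation uses the
             "AD: x y z angle" axis-plus-degrees notation of the old samples -->
        <MultiTarget name="PenBox">
          <Part name="BoxFront" translation="0 0 30"  rotation="AD: 1 0 0 0"/>
          <Part name="BoxBack"  translation="0 0 -30" rotation="AD: 0 1 0 180"/>
        </MultiTarget>
      </Tracking>
    </QCARConfig>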
A recent update allows Vuforia to track 3D cylinder targets.
The Sesame Street example was experimental and research is ongoing. There are no official plans to release it as a standard component of Vuforia yet.
3D object tracking has now been officially added to the Vuforia 4.0 SDK:
https://developer.vuforia.com/library/articles/training/object-recognition
Note that it only works with small objects and the objects must be scanned using an Android app.
I am working on an augmented reality app. I have augmented a 3D model using OpenGL ES 2.0. Now, my problem is that when I move the device, the 3D model should move according to the device's movement, just like this app does: https://itunes.apple.com/us/app/augment/id506463171?l=en&ls=1&mt=8. I have tried to achieve this with UIAccelerometer, but I am not able to make it work.
Should I use UIAccelerometer to achieve it, or some other framework?
This needs a complicated algorithm rather than just the accelerometer. You'd be better off using a third-party framework such as Vuforia or Metaio; that would save a lot of time.
Download and check a few sample apps. That is exactly what you want.
https://developer.vuforia.com/resources/sample-apps
You could use Unity3D to load your 3D model and export an Xcode project, or you could use OpenGL ES.
From your comment, am I to understand that you want to have the model anchored at a real-world location? If so, the easiest way to do it is by giving your model a GPS location and reading the device's GPS location. There is actually a lot of research going into the subject of positional tracking, but for now GPS is your best (and likely only) option without going into advanced positional tracking solutions.
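To make that concrete, here is a small hedged sketch (plain C++, all names invented; on iOS you would feed in CLLocationManager readings) that converts the GPS difference between the device and the model's anchor into local east/north metres using an equirectangular approximation, which is fine over short distances:

    #include <cmath>
    #include <cstdio>

    struct Geo { double latDeg, lonDeg; };

    // East/north offset in metres from `device` to `anchor`
    // (equirectangular approximation; good over a few hundred metres).
    void geoOffsetMetres(const Geo& device, const Geo& anchor, double& east, double& north) {
        const double degToRad = 3.14159265358979 / 180.0;
        const double earthRadius = 6371000.0; // mean Earth radius, metres
        east  = earthRadius * (anchor.lonDeg - device.lonDeg) * degToRad
                * std::cos(device.latDeg * degToRad);
        north = earthRadius * (anchor.latDeg - device.latDeg) * degToRad;
    }

    int main() {
        double east, north;
        geoOffsetMetres({48.8584, 2.2945}, {48.8589, 2.2950}, east, north);
        std::printf("model sits %.1f m east, %.1f m north of the device\n", east, north);
    }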
Since I can't add comments because my account is too new, I'll also add a warning here: do not try to position the device using the accelerometer data. You'll get far too much error due to the double integration of acceleration to position (see "Indoor Positioning System based on Gyroscope and Accelerometer").
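To see why, here is a tiny self-contained illustration: even a small constant accelerometer bias, integrated twice, produces a position error that grows with the square of time. The numbers are made up but typical of MEMS sensors:

    #include <cstdio>

    int main() {
        const double bias = 0.05; // m/s^2, constant accelerometer error
        const double dt = 0.01;   // 100 Hz sampling
        double v = 0.0, x = 0.0;  // accumulated velocity and position error
        for (int step = 1; step <= 6000; ++step) { // simulate 60 seconds
            v += bias * dt;       // first integration: error grows linearly
            x += v * dt;          // second integration: error grows quadratically
            if (step % 1000 == 0)
                std::printf("t=%2ds  position error = %.1f m\n", step / 100, x);
        }
    }

Under these assumptions the error already reaches roughly 90 m after a single minute, which is why GPS (or a vision-based SDK) is the practical choice.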
I would definitely use Vuforia for this task.
Regarding your comment:
I am using Vuforia framework to augment 3d model in native iOS. It's okay. But, I want to
move 3d model when I move device. It is not provided in any sample code.
Well, it's not provided in any sample code, but that doesn't necessarily mean it's impossible or too difficult.
I would do it like this (I work on Android in C++, but it must be very similar on iOS anyway):
locate your renderFrame function
simply apply your translation before the actual glDrawElements call:
QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewProjectionScaled.data[0]);
Here the movement data (xMOV, yMOV, zMOV) would be prepared by a function that reads time and acceleration from the accelerometer...
What I actually find challenging is finding just the right calibration to properly adjust the output of the sensor API, which is a completely different question, unrelated to AR/Vuforia. Here I guess you've got a huge advantage over Android devs, given the much smaller variety of iOS devices...
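For context, a slightly fuller sketch of where that call sits inside renderFrame in the sample apps (a fragment, not self-contained: state, projectionMatrix and kObjectScale come from the surrounding sample code, the helper class is named SampleUtils in some sample versions, and deriving xMOV/yMOV/zMOV from the accelerometer is your own code):

    // Inside renderFrame, after QCAR::Renderer::getInstance().begin():
    const QCAR::TrackableResult* result = state.getTrackableResult(0);
    QCAR::Matrix44F modelViewMatrix =
        QCAR::Tool::convertPose2GLMatrix(result->getPose());

    // Apply the device-motion offsets before the usual scale step
    QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewMatrix.data[0]);
    QCARUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                               &modelViewMatrix.data[0]);

    QCAR::Matrix44F modelViewProjection;
    QCARUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0],
                              &modelViewProjection.data[0]);
    // ... then glUniformMatrix4fv(...) and glDrawElements(...) as in the samples ...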