I am not sure whether this question has been asked before, but I want to know which frameworks I need to explore in order to do augmented reality with image recognition on iOS.
Basically, I want to build something like this: http://www.youtube.com/watch?v=GbplSdh0lGU
I am using Wikitude's SDK, which can also be used from PhoneGap. Wikitude uses Vuforia's SDK for image recognition. Compare Wikitude and Vuforia feature by feature to see which fits your needs.
Here is a good SDK for augmented reality that I know of; you will find tutorials and demos there.
Note: integrating the Vuforia SDK into your app is harder than integrating the metaio SDK, but modifications (such as changing the target image) are easier in Vuforia.
http://www.metaio.com/sdk/
https://developer.vuforia.com/resources/sample-apps
I really want to make use of PencilKit in my React Native application. Can it be done, and if so, how?
TL;DR: As of 2/6/2021 no ready-made package exists, so you will need to write bridge code between React Native (JS) and native code (Swift/ObjC). There are significant performance limitations with this approach; I recommend building a native Swift app for your project.
I was also curious if this is available.
For those willing to use Swift, here's the sample shown during the demo.
For those who want to use the PencilKit / Core ML native libraries from React Native, you need to write bridge code between JavaScript (please use TypeScript) and the native code.
Here's more information on bridging and a guide.
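To make that concrete, here is a minimal sketch of what the native side of such a bridge can look like: a Swift view manager that exposes a PKCanvasView to React Native. The names PencilCanvasManager / PencilCanvas are illustrative (not from an existing package), and it assumes an iOS 14+ deployment target:

```swift
import PencilKit
import React   // or a bridging header that imports <React/RCTViewManager.h>

// Swift side of a hypothetical "PencilCanvas" native component.
// Registration additionally needs a small Objective-C file containing:
//   #import <React/RCTViewManager.h>
//   @interface RCT_EXTERN_MODULE(PencilCanvasManager, RCTViewManager)
//   @end
@objc(PencilCanvasManager)
class PencilCanvasManager: RCTViewManager {

  // UIKit views must be created on the main thread.
  override static func requiresMainQueueSetup() -> Bool { true }

  override func view() -> UIView! {
    let canvas = PKCanvasView()
    canvas.drawingPolicy = .anyInput   // iOS 14+: allow finger input, not just Apple Pencil
    canvas.tool = PKInkingTool(.pen, color: .black, width: 5)
    return canvas
  }
}

// JS side, for reference:
//   const PencilCanvas = requireNativeComponent('PencilCanvas');
//   <PencilCanvas style={{ flex: 1 }} />
```

Note that the drawing itself stays entirely on the native side in this design; the performance problems start when you stream stroke or drawing data across the bridge on every change.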
In my case, I will be building a note-taking app that needs to be performant. Despite being a React / React Native developer, I will be choosing Swift for this project because of those performance concerns.
One last point: you can mix React Native and fully native code in one app, but this is more of a headache than an enabler; Airbnb used this approach for some time before moving away from it.
For anyone new to React Native: it's a great platform, and I personally like it for simple (not graphically intensive) applications. You can also use it with the Expo tooling, which speeds up prototyping, but be warned that some functionality is unavailable; Bluetooth is one example.
Yes, I did it.
I created a native view for iOS PencilKit in React Native with Swift.
You can check basic examples of the native module in my repo.
I want to detect the whole human body in Unity3D. Is there any way to do that? I think there is an easy way to do it in OpenCV, but I'm pretty new to Unity and don't know how to use OpenCV in Unity3D. Can I even use OpenCV in Unity3D?
I don't see an easier way to do it if you are new (even if you were a pro, I think you would still use OpenCV).
You can use OpenCV in Unity: there is an asset on the Asset Store that should be easy to integrate, and with any luck it comes with a ready-made example for detecting a human body. Sorry to say, the asset is paid.
Of course, you can always write your own OpenCV integration for Unity instead :)
I'm not sure what you mean by "I want to detect the whole human body in Unity3D", but you can certainly use OpenCV in Unity. You need to write a wrapper for it, and fortunately there is a paid package available on the Asset Store:
OpenCV for Unity by Enox Software
If you have time and want to avoid the expensive plugin, you can write your own integration for Unity using these resources:
Using OpenCV with Unity blog
OpenCV + Unity3D integration
OpenCV (EmguCV) integration in Unity
I have been working on an augmented reality demo using the Vuforia 7 SDK in my iOS app. I want to use a VuMark as the target, and I have an SVG file with my VuMark design.
When I upload this SVG to the Target Manager, it shows "processing" for a while and then ends with a "failed" status.
I assume there might be something wrong with the width parameter, or I might be doing something else wrong. Please help if you know anything about this. Thanks.
I am currently working on an augmented reality project. I would like to place some virtual objects on a human body, so I created an iOS face-tracking app (OpenCV, C++) that I want to use as a plugin for Unity. Is there a way to build a framework from an existing iOS app, or do I have to create a new Xcode project with a Cocoa Touch framework target and copy the code from the app into it? I am a little confused here. Will the framework have camera access?
My idea is to track the position of a face and send that position to Unity so I can place some objects on it, but I don't know how to do that. Can anybody help?
Kind regards.
As far as I know, you need to build your Unity project and use assets like OpenCV for Unity, but that won't let you track the human body without markers.
As for building a framework out of an existing iOS app, that's the first time I've heard of that!
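To add to this: for the "send the position to Unity" part, the usual pattern is a native plugin rather than turning the whole app into a framework. You expose a C-callable function from the iOS side and call it from Unity C# via [DllImport("__Internal")]. Here is a minimal, hypothetical sketch in Swift; it uses Apple's Vision framework for the face detection purely to keep the sketch self-contained (you would call your existing OpenCV C++ tracker there instead), and @_cdecl is an unofficial but widely used Swift attribute:

```swift
import Vision
import CoreVideo
import CoreGraphics

// Hypothetical Unity plugin glue. On the C# side you would declare:
//   [DllImport("__Internal")] private static extern void _GetFacePosition(float[] outXY);
// and call it once per frame.

private var lastFaceCenter = CGPoint.zero   // normalized [0, 1] image coordinates

// @_cdecl exports a plain C symbol that Unity's IL2CPP build can link against.
@_cdecl("_GetFacePosition")
public func _GetFacePosition(_ outXY: UnsafeMutablePointer<Float>) {
    outXY[0] = Float(lastFaceCenter.x)
    outXY[1] = Float(lastFaceCenter.y)
}

// Feed camera frames in from your capture pipeline. Vision stands in for your
// OpenCV code here; plug your cv:: face tracker into this function instead.
func detectFace(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let face = (request.results as? [VNFaceObservation])?.first else { return }
        // boundingBox is normalized with the origin at the bottom-left.
        lastFaceCenter = CGPoint(x: face.boundingBox.midX, y: face.boundingBox.midY)
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

On the camera-access question: permissions belong to the host app, not the framework. As long as the Unity app's Info.plist contains NSCameraUsageDescription and the user grants access, code inside your plugin can use the camera.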
I would like to do augmented reality with ARKit, but so far I have only seen that it works with plane surfaces.
Here: ARKit example
But I can't find a way to trigger it from a simple image, for example as in these examples:
example1
and:
example2
I would like to use a specific image so that when my future app "scans" it, it launches some augmented reality content.
Do you have a GitHub project or tutorial on that subject? It would be very useful.
Thank you in advance.
As someone mentioned before deleting their message, this will be possible in the next iOS release, iOS 11.3 (currently in beta):
https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience
This is now available in ARKit 2.0. There are a few steps to it, but it's fairly straightforward.
Here are a few available tutorials:
https://www.appcoda.com/arkit-image-recognition/
https://hackernoon.com/arkit-tutorial-image-recognition-and-virtual-content-transform-91484ceaf5d5
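For reference, here is roughly the shape of what those tutorials walk through: a minimal sketch assuming your reference image lives in an AR resource group named "AR Resources" in the asset catalog, with its real-world width filled in (ARKit uses that size to estimate distance):

```swift
import UIKit
import ARKit
import SceneKit

// Minimal ARKit image-detection sketch (iOS 11.3+).
class ImageDetectionViewController: UIViewController, ARSCNViewDelegate {

    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // Load reference images from the asset catalog group (assumed name).
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: .main) ?? []
        sceneView.session.run(configuration)
    }

    // ARKit adds an ARImageAnchor when it recognizes one of the reference images.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        // Highlight the detected image with a translucent plane; this is where
        // you would attach your own virtual content instead.
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.5)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2   // lay the plane flat over the detected image
        node.addChildNode(planeNode)
    }
}
```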