Effort for building an Augmented Reality SDK with OpenCV

Our company is planning to start building some AR apps for Android and iOS. As a first step we need to decide whether to use an open-source SDK like ARToolKit, go for a commercial product like Vuforia, Wikitude, CraftAR, KudanAR, etc., or start writing our own AR SDK based on libraries like OpenCV/OpenGL.
I have read many articles and comparisons about the different SDKs available and have a good idea of what each of them can do and how much they cost, e.g.
http://socialcompare.com/en/comparison/augmented-reality-sdks
https://www.linkedin.com/pulse/dozens-more-augmented-reality-sdks-than-you-think-here-offermann
In the past we have used Vuforia, and it is at the top of our list, but the main issue is the pricing.
So I would like to know if any of you have written or tried to build your own AR SDK based on OpenCV, and what kind of effort that turned out to be. It would need to support features like image and 3D object tracking, augmenting the scene with 2D and 3D objects, and run on both iOS and Android devices.
This Augmented Reality SDK with OpenCV guide has some basic guidelines on how to start.
Mainly what I would like to know is: if a software engineer with about 5+ years of good programming skills tries to do this, how much effort will it be? Will it be about 1 month of work, or 6 months, or will it be difficult to get close to what the Vuforia SDK can do even with 12 months of work?
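For a sense of scale, below is a minimal, illustrative sketch (all values are placeholders) of just the pose-estimation core such an SDK would need, using OpenCV's solvePnP. Everything around this call, robust detection under motion blur, frame-to-frame tracking, sensor fusion, per-device calibration and the OpenGL rendering layer, is where the real effort would go.

    // Minimal sketch of the core pose-estimation step an AR SDK has to solve:
    // given the 2D image positions of a detected marker and its known 3D
    // corner coordinates, recover the camera pose for the renderer.
    // Marker detection itself (ArUco, feature matching, ...) is omitted.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    int main() {
        // Known 3D corners of a 10 cm square marker, in marker coordinates.
        std::vector<cv::Point3f> objectPoints = {
            {-0.05f,  0.05f, 0.0f}, { 0.05f,  0.05f, 0.0f},
            { 0.05f, -0.05f, 0.0f}, {-0.05f, -0.05f, 0.0f}};

        // 2D corners as found in the camera frame (placeholder values).
        std::vector<cv::Point2f> imagePoints = {
            {310.f, 220.f}, {410.f, 225.f}, {405.f, 330.f}, {305.f, 325.f}};

        // Intrinsics from a prior camera calibration (placeholder values).
        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
            800, 0, 320,
            0, 800, 240,
            0, 0, 1);
        cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

        // Recover rotation and translation of the marker relative to the camera.
        cv::Mat rvec, tvec;
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        // rvec/tvec would then be converted to a model-view matrix for OpenGL
        // so 2D or 3D content can be drawn on top of the marker.
        return 0;
    }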

Related

Mixed reality on an Android device using Unity (w/ OpenCV?)

I'm a fourth-year student currently doing my thesis project (I'm a noob at programming). What I want to ask is: is it possible to create a mobile app in Unity that has the following specifications:
- Mixed Reality
- Hand Tracking
- has an A.I. (Image comparison)
I've done some research, but everything I've seen is only about AR. If it is possible, what course of action should I take?
Yes, it's possible.
I suggest you use:
AR stuff:
Vuforia, EasyAR or ARCore/ARKit (depending on your mobile target).
Hand tracking:
One of the best SDKs I have tried for hand tracking with AR is Manomotion (https://www.manomotion.com/).
Image comparison:
As you suggest, OpenCV is a good choice; there are some "bridges" for using OpenCV available on the Unity Asset Store (see the sketch below).
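For illustration only, here is a rough native OpenCV sketch of that kind of image comparison using ORB feature matching (the file names and match threshold are made-up placeholders); the Unity bridges wrap roughly the same API from C#.

    // Compare two images by counting good ORB feature matches.
    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        cv::Mat reference = cv::imread("reference.jpg", cv::IMREAD_GRAYSCALE);
        cv::Mat candidate = cv::imread("candidate.jpg", cv::IMREAD_GRAYSCALE);
        if (reference.empty() || candidate.empty()) return 1;

        // Detect keypoints and compute ORB descriptors for both images.
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(reference, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(candidate, cv::noArray(), kp2, desc2);

        // Brute-force match the binary descriptors with cross-checking.
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);

        // Count matches below a rough distance threshold.
        int good = 0;
        for (const auto& m : matches)
            if (m.distance < 40) ++good;   // threshold is a rough guess

        std::cout << "good matches: " << good << std::endl;
        // A simple decision rule: treat the images as showing the same object
        // if the number of good matches exceeds some tuned threshold.
        return 0;
    }

In practice you would tune the threshold per use case, or additionally estimate a homography with RANSAC to reject false matches.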

How to start working with Augmented Reality?

Is there any way to start with Augmented Reality? Is there any innovation team from which I can learn and to which I can contribute? Is it restricted to apps only in IT, or can we also implement other things with the help of Augmented Reality within IT?
The links below will give you some idea of the available SDKs for developing applications using Augmented Reality.
http://socialcompare.com/en/comparison/augmented-reality-sdks
https://creator.zoho.com/reitmayr/augmented-reality-sdks/view-embed/AR_SDKs
The Wikitude and Vuforia SDKs are the most common ones for Augmented Reality applications.
Try the link below to get started with Augmented Reality!
https://www.sitepoint.com/how-to-build-an-ar-android-app-with-vuforia-and-unity/
Your question is very broad. I think you can try and start by using ARToolKit. http://artoolkit.org/
ARToolKit has a great community from which you can learn and also it is open source so you can have a look at the source and contribute back as well.
I don't get your last question about I.T., but you can do a lot with AR.

Xcode (Swift) vs Unity for isometric 2D mobile apps - Performance, Package Size

Let's assume I want to develop an isometric 2D mobile game such as Clash of Clans.
My main target would be iOS but of course Android would be nice, too (but not a must-have).
Now I have to decide either to program with Apple's Xcode (and therefore Swift as a language, which I am already pretty familiar with), or to develop my game with Unity3D (and therefore C# as a language, which I am also pretty familiar with).
Personally, I don't prefer one over the other.
So much for the set-up.
As I don't have any preferences, I'd like to choose the one that offers me the most benefits for my 2.5D game.
The questions:
Is there a difference in getting approval for the App Store if you program in Swift or use Unity/C#?
How big is the difference in the published package size of the app between Unity and Xcode?
Does my Unity-written app run as smoothly as my Xcode-written app?
I hope you could help me with that.
If I missed some points there, feel free to criticize me and give me your opinions on it.
Greetings
Chriz
Is there a difference in getting approval for the App Store if you program in Swift or use Unity/C#?
No; given this general comparison, there should be nothing here favoring or disallowing one over the other.
How big is the difference in the published package size of the app between Unity and Xcode?
That is very hard to say. There will be added libraries for the Unity inclusion, whereas Apple already ships its shared libraries as part of the OS, used by every app. Think shared libraries here; only Apple is permitted to do this. Not to be confused with the soon-to-be-released iOS 9 'App Thinning'.
The larger weight will be media/images/bitmaps.
Does my Unity-written app run as smoothly as my Xcode-written app?
Since they both end up using OpenGL, the end result should be the same or very similar. Obviously, as the OS and devices mature, if Unity doesn't leverage the improvements, it could end up giving up performance advantages.
But the flip side of being so tightly coupled to Swift/iOS/Apple is that you abandon the Android market. Based on what you shared, if there is even a remote possibility you want to deploy to Android, desktops, or *TV devices in the future, I'd suggest Unity.

How to start with Augmented Reality to create my own framework (not an AR app)

I have been working with Augmented Reality for quite a few months. I have used third-party tools like Unity/Vuforia to create augmented reality applications for Android.
I would like to create my own framework with which I will build my own AR apps. Can someone point me to the right tutorials/links to achieve my goal? At a higher level, my plan is to create an application which can recognize multiple markers and match them against cloud-stored models.
That seems like a massive undertaking: model recognition is not an easy task. I recommend looking at OpenCV (which has some standard algorithms you can use as a starting point) and then looking at a good computer vision book (e.g., Richard Szeliski's book or Hartley and Zisserman).
But you are going to run into a host of practical problems. Consider that systems like Vuforia provide camera calibration data for most Android devices, and it's hard to do computer vision without it. Then, of course, there's efficiently managing the whole pipeline, which (again) companies like Qualcomm and Metaio invest huge amounts of $$ in.
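To illustrate the calibration point, here is a rough sketch of recovering those intrinsics yourself with OpenCV's chessboard calibration (board dimensions and file names are placeholders); this is the data that later feeds pose estimation.

    // Estimate camera intrinsics and distortion from chessboard photos.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <string>
    #include <vector>

    int main() {
        const cv::Size boardSize(9, 6);      // inner corners of the chessboard
        const float squareSize = 0.025f;     // 25 mm squares

        // One set of 3D board corners, reused for every calibration image.
        std::vector<cv::Point3f> boardCorners;
        for (int y = 0; y < boardSize.height; ++y)
            for (int x = 0; x < boardSize.width; ++x)
                boardCorners.emplace_back(x * squareSize, y * squareSize, 0.0f);

        std::vector<std::vector<cv::Point3f>> objectPoints;
        std::vector<std::vector<cv::Point2f>> imagePoints;
        cv::Size imageSize;

        // Collect chessboard corners from a handful of calibration photos.
        for (int i = 0; i < 10; ++i) {
            cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".jpg",
                                     cv::IMREAD_GRAYSCALE);
            if (img.empty()) continue;
            imageSize = img.size();

            std::vector<cv::Point2f> corners;
            if (cv::findChessboardCorners(img, boardSize, corners)) {
                objectPoints.push_back(boardCorners);
                imagePoints.push_back(corners);
            }
        }

        // Solve for the intrinsic matrix and distortion coefficients.
        cv::Mat cameraMatrix, distCoeffs;
        std::vector<cv::Mat> rvecs, tvecs;
        cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                            cameraMatrix, distCoeffs, rvecs, tvecs);
        return 0;
    }

Doing this per device model, or asking users to calibrate themselves, is part of what makes shipping a general-purpose SDK expensive.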
I'm working on a project that does framemarker tracking and I've started exporting bits of it out to a project I'm calling OpenAR. Right now I'm in the process of pulling out unpublishable pieces and making Vuforia and the OpenCV versions of marker tracking interchangeable. You're certainly welcome to check out the work as it progresses. You can see videos of some of the early work on my YouTube channel.
The hard work is improving performance to be as good as Vuforia.

2D games on iOS

This is something I've pondered/struggled with and would love to hear some opinions on. I have a good deal of familiarity with the iOS SDK, but not so much with the OpenGL-related aspects, and not really any with the various SDKs, especially game SDKs built to work on iOS.
If I want to create 2D games for iPhone/iPad, is it easier/better/more practical to use a simple collection of iOS SDK objects such as UIImageViews etc. to build a plethora of sprites interacting on the screen, or much better to go with an SDK for that? I'm assuming that going with GL is overkill for 2D requirements, but please voice any dissent if I'm wrong there.
I'm mainly interested in the quickest route to getting things done, combined with the smallest amount of ramp-up on new technologies. Obviously, if it is well worth using an SDK simply because it is cross-platform with other OSs, that is reasonable to mention.
Using a framework on top of OpenGL can greatly increase productivity and maintainability and reduce programming errors.
Personally I work with cocos2d-for-iphone. It's written in Objective-C and built on top of OpenGL. It was created specifically for making 2D games, so unlike UIKit or QuartzCore it's designed for that purpose. It provides a lot of convenience APIs to manage scenes and sprites, create animations, etc., and even libraries for sound, for example.
There is a very good article describing some open-source game engines available on iPhone here. It could help you in your search.