Vuforia: Is UserDefinedTargets better than an ImageTargets database?

I'm experimenting with Vuforia. It's going pretty well so far.
Previously I've had the ImageTarget demo working with my own targets, so I know I can get this to work for my own purposes. I also realise targets should have a good "star rating" so that Vuforia can successfully track them.
However, the following experiment is confusing me:
I create my own target database using the Target Manager, with one target, which shows up with a ZERO star rating. I know Vuforia likes high star ratings, but bear with me. As I expected, the ImageTargets app does not seem to recognize my target image; no surprises there, really, given the ZERO star rating.
However, if I instead run the UserDefinedTargets demo and take a "live" image of the same target, Vuforia is perfectly able to track it!
Can anyone explain why this might be the case and how I can fix the problem?
Ideally, I would like to use ImageTargets as this allows me to load in databases as I please.
Alternatively, I would like to be able to store a database captured within the UserDefinedTargets app which I can reuse at a later stage.
Overall, I'd like to know why using the Target Manager doesn't work, but using the UserDefinedTarget app does work, and how I might be able to fix the problem.

Rather than add this to the question, which is already quite lengthy, I thought it better to put it as an answer, although I'm open to other comments and answers!
I think the UserDefinedTargets app may recognize the images "better" because, directly after the user-defined target image is taken, the camera (i.e. mobile phone) is already in the correct position. This does not, however, explain the excellent "re-recognition" rate: if the camera is moved away from the target and then brought back over it, the UserDefinedTargets app recognizes the target instantly every time.
Hmmm...

Related

I want to build an AR tool to place and store text files in a virtual space

It's called a memory palace (read 'Moonwalking with Einstein'): an ancient tool used to memorize things, in my case coding concepts and Spanish and Indonesian phrases.
I'm learning Python now, but I'm not really sure what direction to move in and what stack should be used to build a project like this. It wouldn't be too complex; I just want to store and save "text files" in a virtual space like my bedroom or on my favorite hikes.
If anyone has insights or suggestions, it'd be much appreciated.
The two most common AR frameworks on mobile devices at the moment are probably ARKit for iOS devices and ARCore for Android devices.
I am sure you can find comparisons of the strengths and weaknesses of each one, but it is likely your choice will be determined by the type of device you have.
In either case, it sounds like you want to have 'places' you can return to over time and see your stored content. For this you could build on some common techniques:
Link the AR object to some sort of image in the real world, and when this image is recognised by the AR app, launch your AR object, in your case a text file (sketched below).
Use 'Cloud Anchors': these are essentially anchors for AR objects that can persist over time (when you close the app and come back to it later) and can even be shared between users on different devices.
You can find more information on Cloud Anchors at the link below, including information on using them on iOS and on Android:
https://developers.google.com/ar/develop/java/cloud-anchors/overview-android
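To make the first technique concrete, here is a minimal ARKit sketch in Swift. The resource group name and note text are placeholders, and this assumes you've added a reference image to an asset-catalog AR resource group; on Android, ARCore's Augmented Images API plays the same role:

```swift
import UIKit
import ARKit
import SceneKit

// Sketch of technique 1: anchor stored content to a real-world image.
// Assumes a reference image in an asset-catalog AR resource group
// named "AR Resources" (a placeholder name).
class MemoryPalaceViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil)
        sceneView.session.run(configuration)
    }

    // ARKit calls this when one of the reference images is recognised;
    // attach the stored note to the node ARKit places on the image.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        let text = SCNText(string: "My stored note", extrusionDepth: 0.5)
        let textNode = SCNNode(geometry: text)
        textNode.scale = SCNVector3(0.002, 0.002, 0.002) // SCNText units are huge
        node.addChildNode(textNode)
    }
}
```

In a real app you would look the note text up from storage keyed by which image was recognised, rather than hard-coding it as here.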

A/B testing (show a new feature to only 50% of users)

I'm creating a new feature for my iOS app. After I publish the app, I want to show the new feature to only 50% of the users, so I can do some testing to see which version generates more orders. I have no idea how to do it without using a third party like Optimizely.
Also, is it possible to do this using Google Tag Manager (GTM)?
Can someone please help me figure this out?
Thank you very much for your time. :)
It's hard to do on your own, though not impossible, of course: the Optimizelys of the world are just programs. You'll need to solve these problems:
Targeting: Some algorithm that will assign each user session to either control or (one of) the treatment(s). This has to be random, of course, or you may as well stop there.
Routing: Send sessions to the targeted experience.
Logging: You’ll need to intelligently log events from sessions as they traverse their targeted experience. These may be many, so be careful not to add latency to your app path. Your statistical analysis will be based on these.
Experience stability: How do you ensure (if you do) that a returning user sees the same experience they've already seen? (One stable assignment scheme is sketched after this list.)
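One way to get targeting and experience stability at the same time is deterministic bucketing: hash a stable user ID together with the experiment name and map the result onto 0-99. This is only a sketch of the general idea, not any particular library's API; the function and experiment names are made up:

```swift
import Foundation
import CryptoKit

/// Deterministically assigns a user to a variant. Hashing a stable
/// user ID (rather than rolling a random number per launch) covers
/// both targeting and experience stability: the same user always
/// lands in the same bucket, on every launch.
func variant(forUserID userID: String,
             experiment: String,
             treatmentPercent: Int = 50) -> String {
    // Salt with the experiment name so different experiments
    // get independent splits over the same user population.
    let digest = SHA256.hash(data: Data("\(experiment):\(userID)".utf8))
    // Fold the first four digest bytes into a near-uniform bucket in 0..<100.
    let value = digest.prefix(4).reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    let bucket = Int(value % 100)
    return bucket < treatmentPercent ? "treatment" : "control"
}

// The assignment is stable: this always prints the same variant
// for the same user and experiment.
print(variant(forUserID: "user-1234", experiment: "new-checkout-flow"))
```

If you don't have a server-side user ID, a UUID generated once and stored in UserDefaults can serve as the stable ID; logging the assigned variant with every event then ties your statistical analysis back to the split.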
Note as well that Optimizely will only help you if all your changes are on the device and not on the server. If you need to instrument server-side changes as well, you'll have to look into Sitespect or Variant.
I finally figured out how to do the A/B testing with Google Tag Manager (GTM).
In GTM you can create a variable called 'Google Analytics Content Experiment'. With this variable you can select what percentage of users will see each variation (your experiments). You can create up to 10 variations for a single experiment.
GTM is so cool and powerful. It contains many features that can save a lot of time, and I totally recommend it to anyone who is going to do A/B testing.

How to restrict recognition of a particular target on each iOS device?

We are working with cloud recognition. We have to restrict recognition of a particular image target to no more than 2 recognitions on each device.
We know we have to use the VWS API for that. But our question is how we can restrict recognition of an image target on one particular device while it is still recognized on other devices that have not exceeded 2 recognitions.
How can we achieve this?
I thought this was impossible, but after updating to Vuforia 4 I noticed that their prefab scripts use RequireComponent, and this has a lot of interesting applications.
Vuforia basically uses it to make sure the device has a camera, so in their prefab scripts you can see RequireComponent(typeof(Camera)).
With respect to your problem, you could do something like RequireComponent(iPhone), because while playing with it I noticed that was an option it offered for the brackets.
Check it out and let us all know. I haven't been able to try it out, so I can't confirm it works.
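For the per-device cap itself, one option (my assumption, not something Vuforia provides out of the box) is to count recognitions locally on the device and ignore further detections of that target once the cap is reached. A rough Swift sketch, where shouldHandleRecognition would be called from your (hypothetical) cloud-recognition callback:

```swift
import Foundation

/// Hypothetical device-side guard: act on a given cloud target at
/// most `limit` times on this device. Counts persist in UserDefaults,
/// so they survive app restarts (but not reinstalls).
struct RecognitionLimiter {
    let limit = 2

    func shouldHandleRecognition(ofTarget targetID: String) -> Bool {
        let key = "recognitions.\(targetID)"
        let count = UserDefaults.standard.integer(forKey: key)
        guard count < limit else { return false } // cap reached: ignore
        UserDefaults.standard.set(count + 1, forKey: key)
        return true
    }
}

// In your (hypothetical) cloud-recognition callback:
// if RecognitionLimiter().shouldHandleRecognition(ofTarget: targetID) {
//     ... show the augmentation ...
// }
```

Since VWS only knows about targets, not devices, this keeps the per-device logic on the client while other devices remain unaffected.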

iOS: Best way to create your own level editor

I've created the core mechanic of my game and want to create a level editor for it. My game is not a tile-based one, so my needs are quite specific. The game is written using Swift and Cocos2d-Swift, but I don't think I can figure something out with SpriteBuilder.
What can you advise me? Can I, for example, create a level editor with C# and then use its output from Swift code?
And what data structure is best?
I mean, is it possible to serialize classes in a desktop Swift application and then just load them from a file on iOS, or will I need to use JSON/XML?
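For example, would something like this Codable round trip work, with the same Level type compiled into both the desktop editor and the iOS game? (Level and Obstacle are just placeholder types.)

```swift
import Foundation

// Placeholder level model, shared between the desktop editor and the
// iOS game. Codable gives free JSON encoding/decoding on both platforms.
struct Level: Codable {
    var name: String
    var gravity: Double
    var obstacles: [Obstacle]
}

struct Obstacle: Codable {
    var x: Double
    var y: Double
    var rotation: Double
}

// Desktop editor side: save a level to disk as JSON.
func save(_ level: Level, to url: URL) throws {
    let data = try JSONEncoder().encode(level)
    try data.write(to: url)
}

// iOS game side: load the same file back into the same types.
func loadLevel(from url: URL) throws -> Level {
    let data = try Data(contentsOf: url)
    return try JSONDecoder().decode(Level.self, from: data)
}
```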
It might be an old question, but I ended up using a .NET-powered solution. I chose it as it has all the controls that you need for creating a rich user interface, and it has a lot of built-in and third-party solutions for serializing levels in any way you want. Also, the C# syntax is very similar to the Swift one.
The only problem is that you might need to run Windows to work with it.
My game in development also needed a non-tile level editor. A few months ago I took some time to make my choice.
Since my project still uses cocos2d 2.x, I don't use the whole new 3.x and SB system. After some investigation I found out that it would be too time-consuming to adjust my whole project to the new system and adjust SB to my needs, mainly because my game engine has been in development for quite some time and is close to finished. Furthermore, I couldn't find the right information to actually make it work for my game (it needed some odd level architecture, I guess).
Finally, I didn't find any other good alternative, so I decided to create my own level editor. This way I had full control and I knew exactly how everything worked, which was a huge advantage for me.
Right now my level editor has been finished for some time and works like a charm. I still think I made the right choice in my situation, partly because I learned a lot building everything from the ground up. Having said that, for my next game I will probably go with the mainstream and use SB from the start. I also advise you to still check out SB and take some time with it before making an alternative choice...
I'll explain how I did mine. Disclaimer: it has some oddities which only worked in my situation, but hopefully it helps a bit with choosing your own way to go; that's the goal I'm aiming at...
I used:
max/msp
Although it's developed for making music and audio-based software, I used max/msp because it's very easy and fast for creating visual and interface-based software as well.
More important: I happened to be very experienced in it, which shortened the development time tremendously.
javascript
Inside the max/msp patch runs a JavaScript file. This file is like a bridge between the interface, the visual representation of the level being edited, and the database in which the level is saved. I think 70% of the editor development went into this file.
sqlite
All the data is written to an sqlite database. Again, this was mainly the choice because it saved a great amount of development time in my case. I could have used XML files, for instance, but my game was already using an sqlite database, and because of this I felt comfortable using it; I had no experience with XML. Also, a big part of the code was already in place, which sped up the whole process a lot.
I'm very happy with the end result. It does everything I need, it's easy to use, and since I made everything myself from the ground up I know exactly how everything works.
Good luck with your choices.

iPad and openFrameworks

Can someone point me in the right direction for learning how to use openFrameworks to develop an iPad app? Perhaps some good tutorials; I can't seem to find any good documentation.
The openFrameworks docs are quite outdated, but you can discover OF through the examples. Just download the iPhone package here: http://www.openframeworks.cc/download and follow the instructions in the included readme. I think a good start is to get the examples running on your device and then begin modifying them. If you have any further questions, the people here --> http://forum.openframeworks.cc/ will be happy to help you out.
For a more in-depth discovery of openFrameworks, look at the unofficial doxygen docs here --> http://ofxfenster.undef.ch/doc/
Getting OF running on iPad is actually pretty much the same thing as running on iPhone.
Have you got it running before?
If you haven't, the first thing to know is that you need to pay Apple $99 if you want to run it on a real device; otherwise it's free to try in the simulator.
There are some instructions on the OF site for the first run; just go through them, as these complicated steps only need to be done once:
http://www.openframeworks.cc/setup/iphone/
(The guide is not updated at all, but it's pretty much the same process with minor UI differences.)
Any iOS OF example should run on iPad the same way it does on iPhone,
but to get native iPad resolution you'll have to change it manually:
in the app target's General settings, under Deployment Info, change the Devices dropdown to iPad.
Try it with any of the iOS examples.
And if you want to bring over any code from the Mac version,
just make a copy of any iOS example and hand-paste the code into the appropriate function;
they are pretty much the same except for mouse events vs. touch events,
which are a bit different in logic, but just play around with it; it's not too hard to get used to.
Basically, touch events give you touch.x/touch.y instead of mouseX and mouseY.
(And the touch arguments are local to each handler, so you might need other variables to pass them somewhere else.)
I don't have a forum link but there was an openframeworks forum question on this just last week and folks posted a number of sites that have good examples/tutorials. Here's one on doing pixel operations for graphic effects:
http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10
