How to restrict recognition of a particular target on each iOS device? - ios

We are working with cloud recognition. We need to restrict recognition of a particular image target to no more than 2 recognitions per device.
We know we have to use the VWS API for that. Our question is how we can restrict recognition of an image target on one particular device while still allowing it to be recognized on other devices that have not yet exceeded 2 recognitions.
How can we achieve this?

I thought this was impossible, but after updating to Vuforia 4 I noticed that their prefab scripts use the RequireComponent attribute, which has a lot of interesting applications.
Vuforia basically uses it to make sure the device has a camera; in their prefab scripts you can see RequireComponent(typeof(Camera)).
With respect to your problem, you could try something like RequireComponent(iPhone), because while playing with it I noticed that was one of the options offered inside the brackets.
Check it out and let us all know. I haven't been able to try it out, so I can't confirm that it works.
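Not something from the Vuforia docs, just a sketch of another angle: as far as I know the VWS / Cloud Recognition API has no built-in per-device limit, so one practical option is to enforce the limit on the client by counting recognitions per target on each device (e.g. in UserDefaults on iOS) and ignoring a target after its second hit. The key prefix and helper name below are made up; the real hook would be wherever your cloud-recognition callback hands you the target name.

import Foundation

// Hypothetical helper: tracks how many times each cloud target
// has been recognized on this device and caps it at a limit.
struct RecognitionLimiter {
    private let limit = 2
    private let keyPrefix = "recoCount."   // hypothetical key prefix

    // Returns true if the target may still be handled on this device.
    func shouldHandle(targetName: String) -> Bool {
        let key = keyPrefix + targetName
        let count = UserDefaults.standard.integer(forKey: key)
        guard count < limit else { return false }
        UserDefaults.standard.set(count + 1, forKey: key)
        return true
    }
}

// Usage, e.g. inside your cloud-recognition "new search result" callback:
// if RecognitionLimiter().shouldHandle(targetName: result.targetName) {
//     // build the trackable / show AR content
// } else {
//     // skip: this device already recognized the target twice
// }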

Related

A-Frame: FOSS Options for widely supported, markerless AR?

A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library. Example: this demo from Zapworks using their A-Frame SDK. https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame compatible solution that uses only free/open-source components for doing this, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, but even with gravity filtered out of the readings I wasn't able to get a high enough level of accuracy. (It happened that this particular project did not need it.)
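For intuition on why getting position from the accelerometer is so hard: even a small constant bias b in the measured acceleration, integrated twice, gives a position error that grows with the square of time,

\Delta x(t) = \int_0^t \int_0^{\tau} b \, ds \, d\tau = \tfrac{1}{2} b t^2 ,

so a bias of just 0.05 m/s² already amounts to roughly 0.6 m of drift after 5 seconds, which matches the accuracy problems described above.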

How can I turn my idea into code?

I have a problem when I drive 🚘: I forget to slow down where there are speed cameras 🎥 on highways. So I thought about making an iOS app to fix this. I explained my idea in an image, but I can't convert it to code. The steps:
1 - When I am within 100 meters of a speed camera 🎥, the app alerts me ("There is a speed camera ahead, please slow down.").
2 - Please just post code; I have basic programming knowledge in Swift.
I'm not sure if I got you right, so please correct me if I'm wrong.
You want an iOS app that tells you if there is a speed camera on the road you're driving, right?
So you have several possibilities to achieve that:
You can have a look at the App Store. There are lots of such apps (e.g. TomTom). (Easiest way.)
If you want to build your own app, you can make use of the navigation SDK provided by Mapbox: https://www.mapbox.com/help/ios-navigation-sdk/ (some programming skills needed).
Build your own app from scratch (much work and advanced programming skills).
If you want to build your app with Mapbox or on your own, you'll need the GPS locations of speed cameras, like those provided here: https://www.scdb.info/
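As a rough sketch of the core proximity check (not a complete app, and the coordinates below are placeholders rather than real camera positions): with CoreLocation you can watch the device's position and raise an alert once any known camera is within 100 meters.

import CoreLocation

final class SpeedCameraAlerter: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder camera positions; real data would come from e.g. scdb.info
    private let cameras = [
        CLLocation(latitude: 52.5200, longitude: 13.4050),
        CLLocation(latitude: 48.1351, longitude: 11.5820)
    ]

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        // Alert when any camera is within 100 meters of the current position
        if cameras.contains(where: { $0.distance(from: current) < 100 }) {
            print("There is a speed camera ahead, please slow down.")
        }
    }
}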

Unity3D - OCR Number Recognition

Our initial use case called for writing an application in Unity3D (written solely in C# and deployed to both iOS and Android simultaneously) that allowed a mobile phone user to hold their camera up to the title of a magazine article, use OCR to read the title, and then we would process that title on the backend to get related stories. Vuforia was far and away the best for this use case because of its fast native character recognition.
After the initial application was demoed a bit, more potential uses came up. Any use case that only needed A-Z characters recognized was easy in Vuforia, but the second it called for number recognition we had to look elsewhere, because Vuforia does not support number recognition (now or anywhere in the near future).
Attempted Workarounds:
Google Cloud Vision - works great, but not native, and camera images are sometimes quite large, so it's not nearly as fast as we require. We even thought about using the OpenCV Unity asset to identify the numbers and then send multiple much smaller API calls, but that is still not native and adds an extra step.
Following instructions from SO to use a .NET wrapper for Tesseract - would probably work great, but after building and trying to bring the external DLLs into Unity I receive the error ".Net Assembly Not Found" (most likely an issue with the version of .NET the DLLs were compiled against).
Install Tesseract from source on a server and then create our own API - honestly unclear why we tried this when Google's works so well and is actively maintained.
Has anyone run into this same problem in Unity and ultimately found a good solution?
Vuforia by itself doesn't provide any system to detect numbers, just letters. To solve this problem I followed the strategy below (only for numbers near a known image):
Recognize the image.
Capture a Screenshot just after the target image is recognized (this screenshot must contain the numbers).
Send the Screenshot to an OCR web-service and get the response.
Extract the numbers from the response.
Use these numbers to do whatever you need and show AR info.
This approach solves the problem, but it doesn't work like a charm. Its success depends on the quality of the screenshot and the OCR service; a rough sketch of steps 3 and 4 follows below.
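A minimal sketch of steps 3 and 4, shown here in Swift rather than Unity C#, with a made-up OCR endpoint and a plain-text response assumed (any real service will differ): upload the screenshot, then pull the digits out of whatever text comes back.

import Foundation

// Hypothetical endpoint; replace with your actual OCR web service.
let ocrURL = URL(string: "https://example.com/ocr")!

func recognizeNumbers(in screenshot: Data, completion: @escaping ([Int]) -> Void) {
    var request = URLRequest(url: ocrURL)
    request.httpMethod = "POST"
    request.setValue("image/png", forHTTPHeaderField: "Content-Type")
    request.httpBody = screenshot

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Assume the service returns plain text; extract the numbers from it.
        let text = data.flatMap { String(data: $0, encoding: .utf8) } ?? ""
        let numbers = text
            .components(separatedBy: CharacterSet.decimalDigits.inverted)
            .compactMap { Int($0) }
        completion(numbers)
    }.resume()
}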

Vuforia: UserDefinedTargets is better than ImageTargets database?

I'm experimenting with Vuforia. It's going pretty well so far.
Previously I've had the ImageTarget demo working with my own targets, so I know I can get this to work for my own purposes. I also realise targets should have a good "star rating" so that Vuforia can successfully track them.
However, the following experiment is confusing me:
I create my own target database using the Target Manager, with one target, which shows up with a ZERO star rating. I know Vuforia likes high star ratings, but bear with me. As I expected, the ImageTargets app does not seem to recognize my target image. No surprises there, really, given the ZERO star rating.
However, if instead I run the UserDefinedTargets demo and take a "live" image of the same target, Vuforia is perfectly able to track the target!
Can anyone explain why this might be the case and how I can fix the problem?
Ideally, I would like to use ImageTargets as this allows me to load in databases as I please.
Alternatively, I would like to be able to store a database captured within the UserDefinedTargets app which I can reuse at a later stage.
Overall, I'd like to know why using the Target Manager doesn't work, but using the UserDefinedTarget app does work, and how I might be able to fix the problem.
Rather than add this to the question, which is already quite lengthy, I thought it better to put it as an answer, although I'm open to other comments and answers!
I think the UserDefinedTargets app may recognize the images "better" because, directly after the user-defined target image is taken, the camera (i.e. the mobile phone) is already in the correct position. This does not, however, explain the excellent "re-recognition" rate, i.e. if the camera is moved away from the target and then brought back over the target, the UserDefinedTargets app recognizes the target instantly every time.
Hmmm...

Augmented Reality: move a 3D model relative to device movement

I am working on an augmented reality app. I have augmented a 3D model using OpenGL ES 2.0. My problem is that when I move the device, the 3D model should move according to the device's movement, just like this app does: https://itunes.apple.com/us/app/augment/id506463171?l=en&ls=1&mt=8. I have tried using UIAccelerometer to achieve this, but I have not been able to make it work.
Should I use UIAccelerometer to achieve this, or some other framework?
This needs a complicated algorithm rather than just the accelerometer. You'd be better off using a third-party framework such as Vuforia or Metaio; that would save a lot of time.
Download and check a few sample apps. That is exactly what you want.
https://developer.vuforia.com/resources/sample-apps
You could use Unity3D to load your 3D model and export an Xcode project. Or you could use OpenGL ES.
From your comment, am I to understand that you want to have the model anchored at a real-world location? If so, then the easiest way to do it is by giving your model a GPS location and reading the device's GPS location. There is actually a lot of research going into the subject of positional tracking, but for now GPS is your best (and likely only) option without going into advanced positional-tracking solutions.
Seeing as I can't add comments due to my account being too new, I'll also add a warning not to try to position the device using accelerometer data. You'll get far too much error due to the double integration of acceleration to position (see "Indoor Positioning System based on Gyroscope and Accelerometer").
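A small sketch of the GPS-anchoring idea (the anchor coordinates are placeholders, and the flat-earth conversion is only reasonable over short distances): convert the difference between the model's location and the device's location into east/north meters, then use that offset to place the model in the scene.

import Foundation
import CoreLocation

// Placeholder anchor: the real-world location where the model should sit.
let modelLocation = CLLocation(latitude: 37.3318, longitude: -122.0312)

// Approximate east/north offset in meters from the device to the model,
// using a simple equirectangular approximation (fine for short distances).
func offsetMeters(from device: CLLocation, to target: CLLocation) -> (east: Double, north: Double) {
    let metersPerDegreeLat = 111_320.0
    let dLat = target.coordinate.latitude - device.coordinate.latitude
    let dLon = target.coordinate.longitude - device.coordinate.longitude
    let north = dLat * metersPerDegreeLat
    let east = dLon * metersPerDegreeLat * cos(device.coordinate.latitude * .pi / 180)
    return (east, north)
}

// Each time the GPS updates, recompute the offset and move the model:
// let (x, z) = offsetMeters(from: currentDeviceLocation, to: modelLocation)
// then feed x/z (plus the device heading) into your OpenGL/Unity model transform.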
I would definitely use Vuforia for this task.
Regarding your comment:
I am using the Vuforia framework to augment a 3D model in native iOS. It's okay. But I want to
move the 3D model when I move the device. That is not provided in any sample code.
Well, it's not provided in any sample code, but that doesn't necessarily mean it's impossible or too difficult.
I would do it like this (I work on Android in C++, but it must be very similar on iOS anyway):
locate your renderFrame function
simply apply your translation before the actual glDrawElements call:
QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewProjectionScaled.data[0]);
Here the movement data (xMOV, yMOV, zMOV) would be prepared by a function that reads time and acceleration from the accelerometer...
What I actually find challenging is finding the right calibration to properly adjust the output from the sensor's API, which is a completely different question, unrelated to AR/Vuforia. Here I guess you've got a huge advantage over Android devs, given the smaller variety of devices...
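On iOS, the part that prepares the movement data could look roughly like the sketch below, using CMMotionManager, which delivers user acceleration with gravity already removed. This is only an illustration of the idea; as the earlier answer warns, naive double integration like this drifts quickly, so heavy filtering or a proper tracking solution is needed in practice.

import CoreMotion

let motion = CMMotionManager()
// Naive velocity/position integration; expect drift without filtering.
var velocity = (x: 0.0, y: 0.0, z: 0.0)
var position = (x: 0.0, y: 0.0, z: 0.0)   // would feed xMOV/yMOV/zMOV

func startTracking() {
    guard motion.isDeviceMotionAvailable else { return }
    motion.deviceMotionUpdateInterval = 1.0 / 60.0
    motion.startDeviceMotionUpdates(to: .main) { data, _ in
        guard let a = data?.userAcceleration else { return }   // in g, gravity removed
        let dt = motion.deviceMotionUpdateInterval
        let g = 9.81
        velocity.x += a.x * g * dt
        velocity.y += a.y * g * dt
        velocity.z += a.z * g * dt
        position.x += velocity.x * dt
        position.y += velocity.y * dt
        position.z += velocity.z * dt
        // position.x/y/z would then be passed as xMOV/yMOV/zMOV
        // into translatePoseMatrix before drawing.
    }
}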

Resources