I have been working on an Augmented Reality demo using the Vuforia 7 SDK in my iOS app. I want to use a VuMark as the target, and I have an SVG file with my VuMark design.
When I upload this SVG to the Target Manager, it shows "processing" for a while and then ends with a "failed" status.
I assume there might be something wrong with the width parameter, or I might be doing something else wrong. Please help if you know anything about this. Thanks.
Related
I am currently working on an augmented reality project. I would like to place some virtual objects on a human body. To that end, I created an iOS face-tracking app (with OpenCV, in C++) which I want to use as a plugin for Unity. Is there a way to build a framework from an existing iOS app? Or do I have to create a new Xcode project, create a Cocoa Touch framework, and copy the code from the app into that framework? I am a little confused here. Will the framework have camera access?
My idea was to track the position of a face and send that position to Unity, so that I can place objects on it. But I do not know how to do that. Can anybody help?
Best regards.
As far as I know, you need to build your Unity project and use assets like OpenCV, but that doesn't let you track the human body (without markers).
As for building a framework from an existing iOS app, that's the first time I've heard of that!
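For what it's worth, the usual approach is not to convert the whole app into a framework, but to expose just the tracking code to Unity as a native plugin: C++ functions with C linkage that Unity's C# scripts call via `[DllImport("__Internal")]` on iOS. Below is a minimal sketch of that bridging pattern; the function names are hypothetical and the OpenCV tracker itself is stubbed out (a real plugin would feed `FaceBridge_Update` from its detection loop):

```cpp
// face_bridge.cpp -- hypothetical Unity native-plugin bridge (sketch).
// On iOS, Unity links native code statically and C# reaches it via
// [DllImport("__Internal")], so the exported functions need C linkage.

#include <mutex>

namespace {
    std::mutex g_mutex;
    float g_x = 0.0f, g_y = 0.0f;  // last known face position (normalized)
    bool  g_hasFace = false;
}

// Called from the camera/OpenCV side whenever a new detection arrives.
// In a real plugin this would be fed by the OpenCV face-detection results.
extern "C" void FaceBridge_Update(float x, float y, bool hasFace) {
    std::lock_guard<std::mutex> lock(g_mutex);
    g_x = x;
    g_y = y;
    g_hasFace = hasFace;
}

// Polled from Unity (e.g. in a MonoBehaviour's Update()).
// Returns 1 if a face position was available, 0 otherwise.
extern "C" int FaceBridge_GetPosition(float* outX, float* outY) {
    std::lock_guard<std::mutex> lock(g_mutex);
    if (!g_hasFace) return 0;
    *outX = g_x;
    *outY = g_y;
    return 1;
}
```

On the C# side this would pair with a declaration like `[DllImport("__Internal")] static extern int FaceBridge_GetPosition(out float x, out float y);`, and the returned position would drive the transform of the virtual object. Camera access stays with the native code, so the framework/plugin has it as long as the app's Info.plist grants the camera permission.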
I'm trying to replace my own SLAM code with the Kudan Unity Plugin (1.2.1, native 1.2).
I've already succeeded in building and running the sample app on my iPhone.
I also verified that the plugin works in my Unity project in the editor.
But when I build it for my iPhone, it does not render the video background, as shown in the picture.
I attached KudanTracker to a camera, and added MarkerlessTransformDriver, MarkerlessTracking, and BackgroundRender, as in the sample.
I would appreciate any suggestions about what I might be missing.
Thank you in advance.
UPDATE (7/13):
I found that the KudanSample scene also has the same problem when the Google Cardboard SDK plugin is added, as shown in the screenshot.
In this case, I just imported the Cardboard plugin and didn't add any of its prefabs to the scene hierarchy. It seems that some parts of the Kudan and Cardboard plugins are conflicting.
As a trial, I deleted Cardboard's static library, libvrunity.a (along with CardboardAppController.mm/.h and the entire "Cardboard" folder, since they reference the library).
As a result, Kudan could render the video background.
I also tried using the latest Google VR SDK instead of the Cardboard SDK, but the result was exactly the same as with Cardboard.
I'd appreciate it if someone could tell me how to fix this. Any suggestions are very welcome.
UPDATE (7/14):
Now that the Xcode signing problem is fixed, I'm checking the debug log.
It says "[KudanAR] Failed to create external textures".
This indicates that _textureYp or _textureCbCr is null in TrackeriOS.cs.
_textureYpID and _textureCbCrId do not change whether or not I remove libvrunity.a.
Therefore, there may be something wrong with GetTextureForPlane().
I will keep you updated.
UPDATE (7/15):
I found that _textureYp and _textureCbCr are both null in TrackeriOS.
As they are the results of GetTextureForPlane(), I hope this can be solved by changing the parameters passed to the function. Since I couldn't find any documentation for the function, I would be very happy if someone could give me information about it.
I have the same problem, and I think it is very weird.
Please adjust the "Link Binary With Libraries" order in the Build Phases of your Xcode project settings.
https://i.stack.imgur.com/jLYGh.png
Please follow this order; it works for me:
libiPhone-lib.a
libgvrunity.a
libkudanPlugin.a
The question is simple: how much space do these frameworks add to an app's size?
https://www.aviary.com/
https://creativesdk.adobe.com/
I tested this; here are the results:
My app size: 16.5mb
App size with AdobeCreativeSDK: 31.0mb
App size with AdobeCreativeSDK + CreativeSDKImageEditing (Aviary): 37.8mb
App size with AviarySDK: 23.2mb
So the sizes of the frameworks themselves are:
AviarySDK: 6.7mb
CreativeSDKImageEditing: 6.8mb
AdobeCreativeSDK: 14.5mb
So the core image-editing framework parts are roughly the same size. However, there is this little catch:
The Image component is part of the larger Creative SDK and depends on
the Foundation SDK. Please see the Creative SDK Getting Started guide
to learn about setting your project up for the Creative SDK.
https://creativesdk.adobe.com/docs/ios/#/articles/imageediting/index.html
The only way to know for sure is to try it. Measure the size of the .ipa files (these are already compressed) before and after you add the frameworks.
Depending on which parts of the frameworks you actually use, the linker may strip away a lot of unused code.
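As a sanity check on the measurements above: the per-framework footprint is just the delta between two measured builds. A small sketch making that arithmetic explicit (the figures are the ones posted in this answer; nothing else is assumed):

```cpp
// Per-framework size = (app size with framework) - (app size without it).
#include <cmath>

// Measured .ipa sizes in MB, as reported in the test above.
constexpr double kBaseApp       = 16.5;  // app alone
constexpr double kWithCreative  = 31.0;  // + AdobeCreativeSDK
constexpr double kWithImageEdit = 37.8;  // + CreativeSDKImageEditing (Aviary)
constexpr double kWithAviary    = 23.2;  // + standalone AviarySDK

// Each framework's cost is the delta between the two relevant builds.
inline double adobeCreativeSdkSize() { return kWithCreative - kBaseApp; }       // ~14.5
inline double imageEditingSize()     { return kWithImageEdit - kWithCreative; } // ~6.8
inline double aviarySdkSize()        { return kWithAviary - kBaseApp; }         // ~6.7
```

Note that `imageEditingSize()` is measured on top of AdobeCreativeSDK, since the Image component depends on it, while the standalone AviarySDK delta is measured against the bare app.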
I have a .map file that I used to render maps on Android using mapsforge. I am trying to do the same thing on iOS.
I have tried using the route-me library and followed this tutorial, but the problem is that the map I get is a raster (picture-based) map, not a vector map, which makes the file size very large. The .map file that I have renders perfectly on Android, and its size is relatively small, about 13 MB, for the same region I used with route-me on iOS.
Can anyone please point me to a tool or library, even a paid one, for drawing vector maps on iOS for offline usage?
Yes; I tried running this application, but I keep getting errors like "ProtocolBuffer.h is not defined."
Thanks in advance
I'm sorry, I was unable to create an offline vector map, but I was able to create an offline tiled map using OpenStreetMap as the data source and Mapbox as the rendering tool on iOS.
I'm working on a BlackBerry app that will be used in cafes and restaurants, and one of its features is QR code scanning.
Is there any way to make the camera autofocus the QR code?
Searching, I found FocusControl, which looks like what I'm looking for. Unfortunately, it's only available since OS 5.0.
I wonder how to achieve the same thing on OS 4.5, 4.6, and 4.7.
Any suggestions?
You can't.
In OS versions prior to 5.0, you can launch the native camera app and listen for newly created image files (FileSystemJournalListener). This way, if the BB has autofocus, the image has more definition.
However, there's no way to do this on every BB, unless you apply some image-processing algorithm of your own after taking the picture.