Augmented reality - move object across overlay - iOS

I am developing an augmented reality application for iOS. I need to add an object, say a teapot, to the screen, drag it across the camera overlay, and then fix it in place. I am using the Vuforia engine to add the object. I came across this thread about dragging a 3D object to the target, but it uses C#. Is there any possibility of achieving this natively, or some other way?
Kindly share your ideas.

I can't help much with Vuforia, but Metaio has this ready in its samples:
http://bit.ly/1I0wWzR
If you download their SDK, the sample code comes with it.
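For the native route, one common approach is to attach a pan gesture recognizer to the camera overlay view and hand the touch point to whatever is rendering the teapot, which can then unproject that screen point onto the target plane. Here is a minimal Objective-C sketch of that idea; overlayView, renderer, and its two methods are hypothetical names, not part of Vuforia's API:

- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handlePan:)];
    [self.overlayView addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // Current finger position in the overlay's coordinate space.
    CGPoint location = [gesture locationInView:self.overlayView];

    if (gesture.state == UIGestureRecognizerStateChanged) {
        // Let the renderer cast a ray through this 2D point and move the
        // model to where the ray hits the target plane.
        [self.renderer setModelScreenPosition:location];
    } else if (gesture.state == UIGestureRecognizerStateEnded) {
        // "Fix" the object: stop following the finger and keep the last pose.
        [self.renderer lockModelAtCurrentPosition];
    }
}

The ray-casting/unprojection step still has to happen in your render code (Vuforia exposes the projection and pose matrices for that), so treat this as the UIKit half of the solution only.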

Related

AR quick look Web vertical (wall) tracking

Are there any possible solutions to make USDZ files track on walls in AR Quick Look?
It looks like model-viewer doesn't support iOS vertical tracking.
Yes, you can use the model-viewer library for wall-placement augmented reality.
https://modelviewer.dev/examples/augmentedreality/#wall
You can use a GLB file for Android and a USDZ file for iOS (https://modelviewer.dev/docs/#entrydocs-augmentedreality-attributes-iosSrc).

How to rotate an image in dart?

I'm trying to develop a simple 2D browser game with Dart.
The player is drawn from a PNG image represented by an ImageElement in Dart.
I want the player image to turn towards the mouse pointer, but I can't find out how to rotate an image in Dart.
Any suggestions as to how this might be done?
I would highly recommend using the StageXL library for this (https://pub.dartlang.org/packages/stagexl). It's basically a recreation of the Flash APIs for Dart. It makes doing that sort of thing very easy, and it's often used to create Dart games.
I ended up using multiple images for the eight general directions the player can move in and setting them accordingly.

Unity3D on iOS, inspecting the device camera image in Obj-C

I have a Unity/iOS app that captures the user's photo and displays it in the 3D environment. Now I'd like to leverage CIFaceFeature to find eye positions, which requires accessing the native (Objective-C) layer. My flow looks like:
Unity -> WebCamTexture (encode and send image to native -- this is SLOW)
Obj-C -> CIFaceFeature (find eye coords)
Unity -> Display eye positions
I've got a working prototype, but it's slow because I'm capturing the image in Unity (WebCamTexture) and then sending it to Obj-C to do the FaceFeature detection. It seems like there should be a way to simply ask my Obj-C class to "inspect the active camera". This would have to be much, much faster than encoding and passing an image.
So my question, in a nutshell:
Can I query in Obj-C 'is there a camera currently capturing?'
If so, how do I 'snapshot' the image from that currently running session?
Thanks!
You can access the camera's preview capture stream by modifying CameraCapture.mm in Unity.
I suggest you have a look at an existing plugin called Camera Capture for an example of how additional camera I/O functionality can be added to the capture session / "capture pipeline".
To set you off in the right direction, have a look at the function initCapture in CameraCapture.mm:
- (bool)initCapture:(AVCaptureDevice*)device width:(int)w height:(int)h fps:(float)fps
Here you will be able to add your own outputs to the capture session.
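For example, something along these lines could be added there to get each preview frame delivered to your own delegate. How the session object is exposed (self.captureSession here) depends on your copy of CameraCapture.mm, so take this as a sketch rather than a drop-in patch:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoOutput.alwaysDiscardsLateVideoFrames = YES;

// Deliver frames on a background queue so the render thread is not blocked.
dispatch_queue_t queue = dispatch_queue_create("facedetect.queue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:queue];

if ([self.captureSession canAddOutput:videoOutput]) {
    [self.captureSession addOutput:videoOutput];
}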
Then have a look at Apple's code sample on facial recognition:
https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
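For the eye coordinates specifically, the sample buffer delegate fed by the output above can run a CIDetector over each frame. This rough sketch mirrors the idea in Apple's sample rather than its actual code (and in real use you would create the detector once, not per frame):

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CIDetector *detector =
        [CIDetector detectorOfType:CIDetectorTypeFace
                           context:nil
                           options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];

    for (CIFaceFeature *face in [detector featuresInImage:frame]) {
        if (face.hasLeftEyePosition && face.hasRightEyePosition) {
            // Pass these coordinates back to Unity, e.g. via UnitySendMessage.
            NSLog(@"left eye %@  right eye %@",
                  NSStringFromCGPoint(face.leftEyePosition),
                  NSStringFromCGPoint(face.rightEyePosition));
        }
    }
}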
Cheers
Unity 3D allows execution of native code. In the scripting reference, look for native plugins. This way you can display a native iOS view (with the camera view, possibly hidden depending on your requirements) and run Objective-C code, then return the results of the eye detection to Unity if you need them in the 3D view.
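A minimal sketch of that hand-off from the native side, assuming a .mm plugin file in Assets/Plugins/iOS; the receiving GameObject name "EyeReceiver" and its method "OnEyesDetected" are made up and must match a script in your scene:

// UnitySendMessage is provided by the Unity iOS runtime for plugin callbacks.
extern "C" void UnitySendMessage(const char *obj, const char *method, const char *msg);

extern "C" void _ReportEyePositions(float lx, float ly, float rx, float ry)
{
    // Pack the two eye positions into a string Unity can parse on the C# side.
    NSString *payload = [NSString stringWithFormat:@"%f,%f,%f,%f", lx, ly, rx, ry];
    UnitySendMessage("EyeReceiver", "OnEyesDetected", [payload UTF8String]);
}

On the Unity side, a script with a matching OnEyesDetected(string) method on a GameObject called EyeReceiver would then split and parse the four values.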

Corona SDK 3d engine

I have been using the Corona SDK for almost a year and have developed a couple of simple games. What I am looking for now is some way to create 3D illusions in Corona SDK. If anyone has any experience with 3D in Corona, I would appreciate any advice. I've tried several game engines, but they either don't work with Corona or cost far too much.
You can create a 3D model in Sketchup, export an image, and add it to your Corona app.
If you want animations, you can also export a bunch of sprites from Sketchup (sort of) and use the movieclip library to play the animation.
Check out the LIME library for the Corona SDK; you can simulate a 3D effect with parallax or an orthogonal view. There is also some code in the code-sharing section of Corona, but I don't know whether it will be useful:
http://developer.anscamobile.com/code/texturemapped-raycasting-engine
http://developer.anscamobile.com/code/raycasting-engine

How to make a screenshot of OpenGl ES on top of the live preview camera in iOS (Augmented Reality app)?

I am a complete beginner in Objective-C and iOS programming. I spent a month finding out how to show a 3D model using OpenGL ES (version 1.1) on top of the live camera preview using AVFoundation. I am doing a kind of augmented reality application on the iPad: I process the input frames and show a 3D object overlaid on the camera preview in real time. That part was fine, because there are so many sites and tutorials about these things (thanks to this website as well).
Now I want to capture the whole screen (the model with the camera preview as the background) as an image and show it on the next screen. I found a really good demonstration here: http://cocoacoderblog.com/2011/03/30/screenshots-a-legal-way-to-get-screenshots/. It does everything I want to do, but, as I said before, I am such a beginner that I don't understand the whole project without a detailed explanation. So I've been stuck for a while because I don't know how to implement this.
Does anybody know of a good tutorial or other source on this topic, or have any suggestion for what I should learn in order to do this screen capture? It would help me a lot in moving on.
Thank you in advance.
I'm currently attempting to solve this same problem to allow a user to take a screenshot of an Augmented Reality app. (We use Qualcomm's AR SDK plugged into Unity 3D to make our AR apps, which saved me from ever having to learn how to programmatically render OpenGL models)
For my solution I am first looking at implementing the second answer found here: How to take a screenshot programmatically
Barring that, I will have to re-engineer the "Combined Screenshots" method found in CocoaCoder's Screenshots app.
I'll check back in when I figure out which one works better.
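In case it helps, this is the general shape of the "combined screenshot" approach when you render with OpenGL ES yourself: read the GL framebuffer back with glReadPixels, turn it into a UIImage, and draw it over the latest camera frame. The sketch below assumes you already hold the camera frame as a UIImage and know your backbuffer size; it is not taken from the CocoaCoder project:

- (UIImage *)glSnapshotWithWidth:(int)width height:(int)height
{
    NSInteger length = width * height * 4;
    GLubyte *pixels = (GLubyte *)malloc(length);

    // Read back the current framebuffer (RGBA, rows come out bottom-up).
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, length, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                        kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                        provider, NULL, NO, kCGRenderingIntentDefault);

    // Flip vertically while drawing, because OpenGL's origin is bottom-left.
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, 0, height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
    UIImage *glImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(pixels);
    return glImage;
}

- (UIImage *)combineCameraImage:(UIImage *)cameraImage withGLImage:(UIImage *)glImage
{
    // Camera frame first, then the (transparent-background) GL render on top.
    UIGraphicsBeginImageContext(cameraImage.size);
    [cameraImage drawInRect:CGRectMake(0, 0, cameraImage.size.width, cameraImage.size.height)];
    [glImage drawInRect:CGRectMake(0, 0, cameraImage.size.width, cameraImage.size.height)];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}

Call glSnapshotWithWidth:height: right after your draw calls, before the buffers are presented, otherwise the framebuffer contents may already be gone.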
Here are three very helpful links for capturing screenshots:
OpenGL ES View Snapshot
How to capture video frames from the camera as images using AV Foundation
How do I take a screenshot of my app that contains both UIKit and Camera elements
Enjoy
