I'm working on an app which combines the camera with some OpenGL. The usual "AR" thing. I know how to record the camera, and I know how to record OpenGL, but I've failed with a thousand different options :( over the last 4 days. Now I'm wondering if this is at all doable / supported / feasible.
Just use glReadPixels on your finished AR-ed rendering.
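One wrinkle with that approach: glReadPixels returns rows bottom-up, while video writers such as AVAssetWriter expect top-down frames, so you typically need to flip the buffer before encoding each frame. A minimal sketch of the flip step in C++ (the GL call itself is shown only as a comment, since it needs a live context; the function name and tightly packed RGBA layout are my assumptions):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// After rendering, you would fill `pixels` with something like:
//   glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// glReadPixels hands back rows bottom-up; flip them in place so the
// frame is top-down, as most video encoders expect.
void flipRowsInPlace(std::vector<uint8_t>& pixels, int width, int height) {
    const size_t rowBytes = static_cast<size_t>(width) * 4;  // tightly packed RGBA
    std::vector<uint8_t> tmp(rowBytes);
    for (int y = 0; y < height / 2; ++y) {
        uint8_t* top = pixels.data() + static_cast<size_t>(y) * rowBytes;
        uint8_t* bot = pixels.data() + static_cast<size_t>(height - 1 - y) * rowBytes;
        std::memcpy(tmp.data(), top, rowBytes);
        std::memcpy(top, bot, rowBytes);
        std::memcpy(bot, tmp.data(), rowBytes);
    }
}
```

Note this adds a full-frame swap per frame on top of the glReadPixels copy itself, which is part of why glReadPixels-based recording is slow on older devices.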
I am working on an augmented reality app. I have augmented a 3D model using OpenGL ES 2.0. Now, my problem is that when I move the device, the 3D model should move according to the device's movement speed, just like this app does: https://itunes.apple.com/us/app/augment/id506463171?l=en&ls=1&mt=8. I have tried to use UIAccelerometer to achieve this, but I am not able to get it working.
Should I use UIAccelerometer for this, or some other framework?
This needs a complicated algorithm, not just the accelerometer. You'd be better off using a third-party framework such as Vuforia or Metaio; that would save a lot of time.
Download and check a few sample apps. That is exactly what you want.
https://developer.vuforia.com/resources/sample-apps
You could use Unity3D to load your 3D model and export an Xcode project. Or you could use OpenGL ES.
From your comment, am I to understand that you want to have the model anchored at a real-world location? If so, then the easiest way to do it is by giving your model a GPS location and reading the device's GPS location. There is actually a lot of research going into the subject of positional tracking, but for now GPS is your best (and likely only) option without going into advanced positional tracking solutions.
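Anchoring a model to a GPS coordinate boils down to computing the distance and initial bearing from the device's fix to the anchor, then positioning the model at that offset. A hedged sketch of that geometry (the function name and struct are my own; the spherical-Earth haversine approximation is the usual one):

```cpp
#include <cmath>

struct GeoOffset {
    double distanceMeters;  // great-circle distance to the anchor
    double bearingDeg;      // initial bearing, 0-360, 0 = north
};

// Distance (haversine) and initial bearing from the device's GPS fix
// (lat1, lon1) to the anchor coordinate (lat2, lon2), all in degrees.
GeoOffset offsetToAnchor(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;  // mean Earth radius in meters
    const double PI = 3.14159265358979323846;
    const double d2r = PI / 180.0;
    double phi1 = lat1 * d2r, phi2 = lat2 * d2r;
    double dPhi = (lat2 - lat1) * d2r, dLam = (lon2 - lon1) * d2r;

    double a = std::sin(dPhi / 2) * std::sin(dPhi / 2) +
               std::cos(phi1) * std::cos(phi2) *
               std::sin(dLam / 2) * std::sin(dLam / 2);
    double dist = 2 * R * std::atan2(std::sqrt(a), std::sqrt(1 - a));

    double y = std::sin(dLam) * std::cos(phi2);
    double x = std::cos(phi1) * std::sin(phi2) -
               std::sin(phi1) * std::cos(phi2) * std::cos(dLam);
    double bearing = std::atan2(y, x) / d2r;  // -180..180
    if (bearing < 0) bearing += 360.0;
    return {dist, bearing};
}
```

Bear in mind that consumer GPS is only accurate to several meters at best, so the anchored model will visibly wander.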
Since I can't add comments (my account is too new), I'll add my answer here, along with a warning: do not try to position the device using the accelerometer data alone. You'll get far too much error due to the double integration of acceleration to position (see "Indoor Positioning System based on Gyroscope and Accelerometer").
I would definitely use Vuforia for this task.
Regarding your comment:
I am using Vuforia framework to augment 3d model in native iOS. It's okay. But, I want to
move 3d model when I move device. It is not provided in any sample code.
Well, it's not provided in any sample code, but that doesn't necessarily mean it's impossible or too difficult.
I would do it like this (I work on Android in C++, but it should be very similar on iOS):
locate your renderFrame function
do your translation just before the actual glDrawElements call:
QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewProjectionScaled.data[0]);
where the movement data would be prepared by a function that reads time and acceleration values from the accelerometer...
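For reference, the translate step itself is just a post-multiplication of the column-major pose matrix by a translation. A stand-in for what that helper does (assuming OpenGL-style column-major storage; I haven't verified this against the SDK source, so treat it as a sketch):

```cpp
// m = m * T(x, y, z) for a column-major 4x4 matrix, which is the
// usual effect of translatePoseMatrix in the Vuforia samples.
void translatePoseMatrix(float x, float y, float z, float* m) {
    // New last column = x*col0 + y*col1 + z*col2 + col3.
    for (int i = 0; i < 4; ++i) {
        m[12 + i] += m[i] * x + m[4 + i] * y + m[8 + i] * z;
    }
}
```

Because the translation is post-multiplied, the offsets are expressed in the target's (model) coordinate frame, not the camera's, which matters when you map accelerometer axes to xMOV/yMOV/zMOV.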
What I actually find challenging is finding the right calibration to properly adjust the output from the sensor API, which is a completely different question, unrelated to AR/Vuforia. Here I guess you've got a huge advantage over Android devs, given the variety of Android devices out there...
I'm trying to build a game for iOS using Adobe AIR and Flash Builder 4.7. I need to read the gyroscope data to find my alpha rotation value (0-360). I've been searching around for libs and native extensions to use with Adobe AIR, but I'm a bit lost.
Is there any easy way I can get this value on my app?
Something like this guy does here:
Understanding How the Accelerometer and Gyroscope Work in the Browser
What I would need is an event that would give me an alpha rotation value, or a way to calculate this value using x/y/z and/or pitch/yaw/roll values.
Thanks
You can find example code and links to tutorials in the official documentation:
flash.sensors.Accelerometer
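Whichever API ends up handing you the raw axis values, turning a pair of them into a 0-360 angle is just an atan2 plus normalization. The math is language-agnostic, so here is a C++ sketch (note that a true compass-style alpha needs magnetometer/gyro fusion; the accelerometer alone only gives you tilt-derived angles):

```cpp
#include <cmath>

// Map two perpendicular axis readings (e.g. accelerometer x/y while
// the device is held roughly flat) to a rotation angle in [0, 360).
double rotationDegrees(double x, double y) {
    double deg = std::atan2(y, x) * 180.0 / 3.14159265358979323846;
    return deg < 0.0 ? deg + 360.0 : deg;  // fold -180..0 into 180..360
}
```

The same normalization applies if you are given pitch/yaw/roll in the -180..180 convention and need the 0-360 convention instead.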
I need to move large amounts of pixels on the screen on an iOS device. What is the most efficient way of doing this?
So far I'm using glTexSubImage2D(), but I wonder if this can be done any faster. I noticed that OpenGL ES 2.0 does not support pixel buffers, but there seems to be a pixel buffer used by Core Video. Can I use that? Or maybe there's an Apple extension for OpenGL that could help me achieve something similar (I think I saw a very vague mention of a client storage extension in one of the WWDC 2012 videos, but I can't find any documentation about it)? Any other way that I can speed this up?
My main concern is that glTexSubImage2D() copies all the pixels that I send. Ideally, I'd like to skip this step of copying the data, since I already have it prepared...
The client storage extension you're probably thinking of is CVOpenGLESTextureCacheCreateTextureFromImage; a full tutorial is here. That's definitely going to be the fastest way to get data to the GPU.
Frustratingly the only mention I can find of it in Apple's documentation is the iOS 4.3 to 5.0 API Differences document — do a quick search for CVOpenGLESTextureCache.h.
I have been researching this and read different opinions, but I wanted to ask some more specific questions.
In my application I want to take 3 or 4 frames from the camera stream and process them without making the user press a button multiple times (and as fast as possible). I already do this in the Android version, because Android provides a callback method that delivers each frame of the camera feed.
I have seen some people using AVFoundation (the AVCaptureDevice and AVCaptureInput classes) to perform this task, but as far as I know, it is only supported from iOS 4.0 onwards.
Is there another way to do this that supports older iOS versions, like 3.x?
How fast can the different pictures be taken?
Are there still problems using this Framework to get Apps/updates accepted on the App Store?
Thanks a lot,
Alex.
You should use the new way (AVCaptureInput), as only a few percent of users still run iOS 3. iOS adoption is much faster than Android adoption: early last winter about 90% had already upgraded to iOS 4, and at this point even 4.0 itself is likely in a small minority as well.
One pre-iOS-4 way to do it was to open a UIImagePickerController and take screenshots. Depending on the exact version targeted, there are sometimes ways to disable the camera overlays.
I see this question: iPhone: Get camera preview
I want to create a 3D model from a set of 2D images on Windows, which can then be sent through a web service to an iPhone and displayed there.
I know it can be done with OpenGL, but I don't know how to start. Also, if I succeed in creating it, will it be compatible with the iPhone, given that the iPhone uses OpenGL ES?
Thanks in advance.
What kind of transformation do you have in mind to create the 3D models? I once worked on an application using such a concept to create a model from three images of an object. It didn't really work well. The models that could be created were very limited.
OpenGL does not have built-in functionality for such stuff. Are there any reasons why you do not want to use a real 3D model? It sounds as if you are looking for a quick solution to your problem, but I'm afraid that if you do not have any OpenGL experience, you should prepare for a lot to learn.
If you want to create 3D models automatically from 2D photos, you're going to have a fair bit of work to do. AFAIK, this is not something where you can get a cheap pre-packaged solution. Autodesk charge a small fortune for ImageModeler.
MeshLab may be a good starting point, but even that can't automatically convert photos to a 3D model AFAIK.
Take a look at David Lowe's site. I found the "Distinctive image features from scale-invariant keypoints" paper quite interesting, though I haven't re-read it in a while. If nothing else, it should give you some idea why this is far from a trivial problem.