I'm trying to build a game for iOS using Adobe AIR and Flash Builder 4.7. I need to read the gyroscope data to find out my alpha rotation value (0-360). I've been searching around for libraries and native extensions to use with Adobe AIR, but I'm a bit lost.
Is there any easy way I can get this value on my app?
Something like this guy does here:
Understanding How the Accelerometer and Gyroscope Work in the Browser
What I would need is an event that would give me an alpha rotation value, or a way to calculate this value using x/y/z and/or pitch/yaw/roll values.
Thanks
You can find example code and links to tutorials in the official documentation:
flash.sensors.Accelerometer
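Note, though, that the accelerometer only reports the x/y/z components of gravity (plus device motion), so it can give you a tilt angle (pitch/roll style) but not a true alpha rotation around the vertical axis; for that you need the gyroscope or compass, which AIR only exposes through a native extension (Adobe published a sample Gyroscope ANE). A minimal sketch of the accelerometer route (the axis choice and the 0-360 normalization are assumptions to adapt to your game):

    import flash.events.AccelerometerEvent;
    import flash.sensors.Accelerometer;

    if (Accelerometer.isSupported) {
        var acc:Accelerometer = new Accelerometer();
        acc.setRequestedUpdateInterval(33); // roughly 30 updates per second
        acc.addEventListener(AccelerometerEvent.UPDATE, onAccUpdate);
    }

    function onAccUpdate(e:AccelerometerEvent):void {
        // Angle of the gravity vector in the device's x/y plane, in degrees
        var angle:Number = Math.atan2(e.accelerationY, e.accelerationX) * 180 / Math.PI;
        if (angle < 0) angle += 360; // normalize to 0-360
        trace(angle);
    }

This only produces a meaningful angle while the device is tilted relative to gravity; when the device lies flat, the x/y components vanish and you really do need a gyroscope ANE.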
A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library. Example: this demo from Zapworks using their A-Frame SDK. https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit-test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution for this that uses only free/open-source components, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky, and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, but even using the readings with gravity filtered out I wasn't able to get a high enough level of accuracy. (As it happened, this particular project did not strictly need it.)
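For anyone attempting the same: separating gravity from motion with only raw accelerometer data is usually done with an exponential low-pass filter, roughly along these lines (alpha is a smoothing factor you would tune, e.g. around 0.9):

    $g_t = \alpha \, g_{t-1} + (1 - \alpha) \, a_t, \qquad a_t^{\mathrm{linear}} = a_t - g_t$

Even with that, sensor noise and bias make the doubly-integrated position drift quickly, which matches what I saw.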
I am working on an augmented reality app. I have augmented a 3D model using OpenGL ES 2.0. Now, my problem is that when I move the device, the 3D model should move according to the device's movement, just like this app does: https://itunes.apple.com/us/app/augment/id506463171?l=en&ls=1&mt=8. I have tried to use UIAccelerometer to achieve this, but I have not been able to make it work.
Should I use UIAccelerometer for this, or some other framework?
This is a complicated algorithm, rather than just a matter of reading the accelerometer. You are better off using a third-party framework such as Vuforia or Metaio; that would save a lot of time.
Download and check a few sample apps. That is exactly what you want.
https://developer.vuforia.com/resources/sample-apps
You could use Unity3D to load your 3D model and export an Xcode project, or you could use OpenGL ES.
From your comment, am I to understand that you want to have the model anchored at a real-world location? If so, the easiest way to do it is by giving your model a GPS location and reading the device's GPS location. There is actually a lot of research going into the subject of positional tracking, but for now GPS is your best (and likely only) option without going into advanced positional tracking solutions.
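For small distances, one standard way to turn the GPS difference between the device and the model into a local offset in meters is the equirectangular approximation, with Earth's radius $R \approx 6{,}371{,}000$ m and the latitude/longitude deltas in radians:

    $x = R \, \Delta\lambda \, \cos\varphi, \qquad y = R \, \Delta\varphi$

Here $\varphi$ is the local latitude, and x/y are the east/north offsets you would use to place the model relative to the camera.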
Seeing as I can't add comments due to my account being too new, I'll also add a warning not to try to position the device using the accelerometer data. You'll get far too much error due to the double integration of acceleration to get position (see "Indoor Positioning System based on Gyroscope and Accelerometer").
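To put a number on that: a constant bias $b$ in the acceleration reading, integrated twice, produces a position error that grows quadratically with time:

    $\Delta p(t) = \tfrac{1}{2} \, b \, t^2$

So even a small bias of 0.05 m/s² leaves you 2.5 m off after only 10 seconds.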
I would definitely use Vuforia for this task.
Regarding your comment:
I am using Vuforia framework to augment 3d model in native iOS. It's okay. But, I want to
move 3d model when I move device. It is not provided in any sample code.
Well, it's not provided in any sample code, but that doesn't necessarily mean it's impossible or too difficult.
I would do it like this (I work on Android in C++, but it must be very similar on iOS anyway):
locate your renderFrame function
simply do your translation before the actual glDrawElements call:
QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewProjectionScaled.data[0]); // shift the model by (xMOV, yMOV, zMOV) before drawing
where the movement data (xMOV, yMOV, zMOV) would be prepared by a function that reads time and acceleration from the accelerometer...
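In discrete form, that preparation is a double integration of the accelerometer samples, with $\Delta t$ the time between sensor callbacks (keeping in mind the drift warning from the answer above):

    $v_t = v_{t-1} + a_t \, \Delta t, \qquad p_t = p_{t-1} + v_t \, \Delta t$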
What I actually find challenging is finding just the right calibration to properly adjust the output from the sensor API, which is a completely different, AR/Vuforia-unrelated question. Here I guess you've got a huge advantage over Android devs, given how few different devices you have to calibrate for...
I might be missing something here, but it seems that zxing does not support auto-focus. Having searched here and on Google, I haven't found anything that gives any insight.
For example, on my iPhone 4 using the sample ScanTest app, many of the QR codes are blurry and tricky for the app to recognise.
So, to be a bit more specific:
Does zxing support auto-focus on the iPhone and, if so, how do you implement it?
Hmm ... the camera autofocuses on its own on hardware that is not fixed-focus, unless told to do otherwise, I believe. I think it's possible to do a tap-to-focus thing but zxing doesn't do that.
I am creating a very basic app for the iPhone & iPad, and it's being built in Flash Professional CS5.5. All the ActionScript is done on the frames (on the timeline), as I'm not familiar with using classes and external .as files, etc. I've been meaning to learn that method but just haven't gotten around to it.
So my question is: what is the simplest way to use StageVideo to play a video packaged with the IPA? I have looked at various sites such as http://www.adobe.com/devnet/flashplayer/articles/stage_video.html and spent hours on Google looking for examples, but all the examples and source files use either external classes, Flex, or Flash Builder. I can't find a simple .fla example where all the code is placed internally.
You need to set the render mode to direct in the Name-app.xml descriptor; for AIR that is <renderMode>direct</renderMode> inside the <initialWindow> element (wmode=direct is the equivalent setting for the browser player). If you find a way to edit it from inside Flash Professional, please tell me, because I don't know how to do it there =/
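Once the render mode is direct, the timeline code itself is short. A minimal frame-script sketch (the file name intro.mp4 and the full-stage viewport are placeholder assumptions; the video file would be packaged with the IPA):

    import flash.events.StageVideoAvailabilityEvent;
    import flash.geom.Rectangle;
    import flash.media.StageVideo;
    import flash.media.StageVideoAvailability;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    var nc:NetConnection = new NetConnection();
    nc.connect(null); // no server: we play a local file
    var ns:NetStream = new NetStream(nc);
    ns.client = { onMetaData: function(info:Object):void {} }; // swallow metadata callbacks

    stage.addEventListener(StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY, onAvailability);

    function onAvailability(e:StageVideoAvailabilityEvent):void {
        if (e.availability == StageVideoAvailability.AVAILABLE) {
            // Hardware path: StageVideo renders behind the display list
            var sv:StageVideo = stage.stageVideos[0];
            sv.viewPort = new Rectangle(0, 0, stage.stageWidth, stage.stageHeight);
            sv.attachNetStream(ns);
        } else {
            // Fallback: plain Video object on the display list
            var video:Video = new Video(stage.stageWidth, stage.stageHeight);
            addChild(video);
            video.attachNetStream(ns);
        }
        ns.play("intro.mp4"); // hypothetical file name packaged with the app
    }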
I have been researching this and have read different opinions, but I wanted to ask some more specific questions.
In my application I want to take 3 or 4 frames from the camera stream and process them, without making the user press a button multiple times (and as fast as possible). I already do this in the Android version, because Android provides a callback method that delivers each frame of the camera feed.
I have seen some people using iOS AVFoundation (the AVCaptureDevice and AVCaptureInput classes) to perform this task, but as far as I know, this is only supported from iOS 4.0.
Is there another way to do this that supports older iOS versions, like 3.x?
How fast can the different pictures be taken?
Are there still problems getting apps/updates that use this framework accepted on the App Store?
Thanks a lot,
Alex.
You should use the new way (AVCaptureInput), as only a few percent of users still use iOS 3. iOS adoption is much faster than Android adoption: early last winter about 90% had already upgraded to iOS 4, and at this point even 4.0 is likely in a small minority as well.
One pre-iOS-4 way to do it was by opening a UIImagePickerController and taking screenshots. Depending on the exact version targeted, there are sometimes ways to disable the camera overlays.
I see this question: iPhone: Get camera preview