I'm seeing strange behavior with CMMotionManager. I'm trying to calibrate my device's orientation so that my app can support multiple device orientations.
When I debug my app on a real device (not in the Simulator), everything works fine.
When I run the same app without the debugger attached, the calibration does not work.
Here's my code:
static CMMotionManager* _motionManager;
static CMAttitude* _referenceAttitude;

// Returns a vector with the current orientation values
// At the first call a reference orientation is saved to ensure the motion detection works for multiple device positions
+(GLKVector3)getMotionVectorWithLowPass{
    // Motion
    CMAttitude *attitude = self.getMotionManager.deviceMotion.attitude;
    if (_referenceAttitude==nil) {
        // Cache Start Orientation
        _referenceAttitude = [_motionManager.deviceMotion.attitude copy];
    } else {
        // Use start orientation to calibrate
        [attitude multiplyByInverseOfAttitude:_referenceAttitude];
        NSLog(@"roll: %f", attitude.roll);
    }
    return [self lowPassWithVector: GLKVector3Make(attitude.pitch,attitude.roll,attitude.yaw)];
}

+(CMMotionManager*)getMotionManager {
    if (_motionManager==nil) {
        _motionManager=[[CMMotionManager alloc]init];
        _motionManager.deviceMotionUpdateInterval=0.25;
        [_motionManager startDeviceMotionUpdates];
    }
    return _motionManager;
}
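The lowPassWithVector: helper isn't shown in the question; for reference, a simple exponential low-pass filter along these lines would fit the call site (the smoothing factor and the static _lastVector are assumptions, not the original code):

// Hypothetical helper, not from the question: exponential low-pass to smooth out jitter.
static GLKVector3 _lastVector;

+(GLKVector3)lowPassWithVector:(GLKVector3)vector {
    const float kFilterFactor = 0.1f; // assumed smoothing factor
    _lastVector = GLKVector3Add(GLKVector3MultiplyScalar(vector, kFilterFactor),
                                GLKVector3MultiplyScalar(_lastVector, 1.0f - kFilterFactor));
    return _lastVector;
}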
I've found a solution. The issue was caused by the different timing behavior between debug and non-debug builds. CMMotionManager needs a little time to initialize before it returns correct values. The solution was to postpone the calibration by 0.25 seconds.
This code works:
+(GLKVector3)getMotionVectorWithLowPass{
    // Motion
    CMAttitude *attitude = self.getMotionManager.deviceMotion.attitude;
    if (_referenceAttitude==nil) {
        // Cache Start Orientation
        // NEW: defer the calibration until the motion manager has had time to start up
        [self performSelector:@selector(calibrate) withObject:nil afterDelay:0.25];
    } else {
        // Use start orientation to calibrate
        [attitude multiplyByInverseOfAttitude:_referenceAttitude];
        NSLog(@"roll: %f", attitude.roll);
    }
    return [self lowPassWithVector: GLKVector3Make(attitude.pitch,attitude.roll,attitude.yaw)];
}

// NEW:
+(void)calibrate {
    _referenceAttitude = [self.getMotionManager.deviceMotion.attitude copy];
}
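As an alternative to a fixed delay, one could also defer caching the reference until deviceMotion actually delivers data. A minimal sketch of that idea (not the solution used above):

// Alternative sketch: wait for the first non-nil device-motion sample instead of a timed delay.
+(GLKVector3)getMotionVectorWithLowPass {
    CMDeviceMotion *motion = self.getMotionManager.deviceMotion;
    if (motion == nil) {
        // The motion manager is still starting up; return a neutral vector for now.
        return GLKVector3Make(0, 0, 0);
    }
    if (_referenceAttitude == nil) {
        // First valid sample: cache it as the reference orientation.
        _referenceAttitude = [motion.attitude copy];
    }
    CMAttitude *attitude = motion.attitude;
    [attitude multiplyByInverseOfAttitude:_referenceAttitude];
    return [self lowPassWithVector:GLKVector3Make(attitude.pitch, attitude.roll, attitude.yaw)];
}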
I am creating a video from an ARKit session. When I press the record button, the camera freezes. The code I've written in the didUpdateFrame delegate causes the problem: there I save scene.snapshot into an array. Then, when I create a video from these images, the app crashes with the following message in the debugger:
Message from debugger: Terminated due to memory issue
-(void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
{
    if (_recordButton.state == UIControlStateSelected)
    {
        currentState = Recording;
        [self saveImage];
    }
    else if (previousState == Recording)
    {
        NSLog(@"Stop recording");
        currentState = NotRecording;
        recordTime = NULL;
        self.nextButton.enabled=YES;
    }

    //update recording state per frame update
    previousState = currentState;
}

-(void)saveImage
{
    UIImage *image = self.sceneView.snapshot;
    [self.bufferArray addObject:image];
    image = nil;
}
Do not call ARSCNView.snapshot from within ARSessionDelegate's didUpdateFrame. I had the same issue, and the solution was to not implement didUpdateFrame at all. Instead I used a CADisplayLink to drive ARSCNView.snapshot, and it works well.
I also tried ARFrame.capturedImage, but it does not contain the AR objects at all; ARSCNView.snapshot does.
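A minimal sketch of that CADisplayLink approach; the 30 fps capture rate, the displayLink property, and the start/stop methods are assumptions, not the original code:

// Sketch: drive snapshots from a CADisplayLink instead of the ARSession delegate.
- (void)startRecording {
    self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(captureFrame:)];
    self.displayLink.preferredFramesPerSecond = 30; // assumed capture rate
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)captureFrame:(CADisplayLink *)link {
    // The snapshot includes the rendered AR content, unlike ARFrame.capturedImage.
    UIImage *image = [self.sceneView snapshot];
    [self.bufferArray addObject:image];
}

- (void)stopRecording {
    [self.displayLink invalidate];
    self.displayLink = nil;
}

Note that holding many full-size UIImages in an array will still grow memory quickly, so frames should be drained into the video writer as they arrive rather than kept until recording stops.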
I'm using AVCaptureMetadataOutput to detect faces on iOS, and I'm trying to set the orientation of the video after the user rotates their device. However, it appears that I can't do this: every time I call the getter isVideoOrientationSupported on the only AVCaptureConnection that my AVCaptureMetadataOutput has, it returns NO. I've tried the code below in every place imaginable, yet it always returns NO. Is there any way to set the orientation for my metadata?
AVCaptureConnection *conn = [self.metadataOutput connectionWithMediaType:AVMediaTypeMetadataObject];
NSLog(@"%@", self.metadataOutput.connections);
if (!conn) {
    NSLog(@"NULL CONNECTION OBJ");
}
if ([conn isVideoOrientationSupported]) {
    NSLog(@"Supported!");
}
else {
    NSLog(@"Not supported");
}
An Apple Engineer solved this for me over on the Apple Developer Forums. Here's a link. This was their response:
If you want to translate your metadata objects' coordinate space to that of another AVCaptureOutput (such as the AVCaptureVideoDataOutput), use
- (AVMetadataObject *)transformedMetadataObjectForMetadataObject:(AVMetadataObject *)metadataObject connection:(AVCaptureConnection *)connection NS_AVAILABLE_IOS(6_0);
It's in AVCaptureOutput.h. If you want to translate the coordinates to the coordinate space of your video preview layer, use AVCaptureVideoPreviewLayer.h's
- (AVMetadataObject *)transformedMetadataObjectForMetadataObject:(AVMetadataObject *)metadataObject NS_AVAILABLE_IOS(6_0);
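For example, converting a detected face into the preview layer's coordinate space could look roughly like this; the previewLayer property is an assumption, and the delegate method is AVCaptureMetadataOutputObjectsDelegate's callback:

// Sketch: map metadata coordinates into the preview layer instead of rotating the connection.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *object in metadataObjects) {
        if ([object.type isEqualToString:AVMetadataObjectTypeFace]) {
            // previewLayer is assumed to be the AVCaptureVideoPreviewLayer showing the session.
            AVMetadataObject *converted =
                [self.previewLayer transformedMetadataObjectForMetadataObject:object];
            NSLog(@"face in layer coordinates: %@", NSStringFromCGRect(converted.bounds));
        }
    }
}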
There are two types of focus detection since the introduction of the iPhone 6:
1. Contrast detection
2. Phase detection
From the iPhone 6/6 Plus onwards, phase detection is used.
I am trying to get the current focus system:
self.format = [[AVCaptureDeviceFormat alloc] init];
[self.currentDevice setActiveFormat:self.format];
AVCaptureAutoFocusSystem currentSystem = [self.format autoFocusSystem];
if (currentSystem == AVCaptureAutoFocusSystemPhaseDetection)
{
    [self.currentDevice addObserver:self forKeyPath:@"lensPosition" options:NSKeyValueObservingOptionNew context:nil];
}
else if (currentSystem == AVCaptureAutoFocusSystemContrastDetection)
{
    [self.currentDevice addObserver:self forKeyPath:@"adjustingFocus" options:NSKeyValueObservingOptionNew context:nil];
}
else {
    NSLog(@"No observers added");
}
but now it's crashing on the following line:
AVCaptureAutoFocusSystem currentSystem = [self.format autoFocusSystem];
I am unable to find a proper description of the crash.
You're creating an AVCaptureDeviceFormat with alloc/init, but nothing is actually set up in it by default. It's just unusable garbage, and that's why you are getting a crash.
Each capture device exposes the set of formats it can work with.
You can see them by doing something like:
for (AVCaptureDeviceFormat *format in [self.currentDevice formats]) {
    CFStringRef formatName = CMFormatDescriptionGetExtension([format formatDescription], kCMFormatDescriptionExtension_FormatName);
    NSLog(@"format name is %@", (__bridge NSString *)formatName);
}
What you should be doing is deciding which AVCaptureDevice you want to use (e.g. the front camera or the back camera), picking one of the formats that device already offers, and then setting it with [self.currentDevice setActiveFormat:format].
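A sketch of that flow for the default video device; the phase-detection criterion is just an example, and note that lockForConfiguration: must be called before changing activeFormat:

// Sketch: pick one of the device's existing formats instead of alloc/init-ing a new one.
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *chosenFormat = nil;

for (AVCaptureDeviceFormat *format in device.formats) {
    if (format.autoFocusSystem == AVCaptureAutoFocusSystemPhaseDetection) {
        chosenFormat = format; // e.g. prefer a format with phase-detection AF
        break;
    }
}

NSError *error = nil;
if (chosenFormat != nil && [device lockForConfiguration:&error]) {
    device.activeFormat = chosenFormat;
    [device unlockForConfiguration];
}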
The WWDC 2013 session "What's New in Camera Capture" video has more information on how to do this.
I am trying to establish a geofence for CLBeacons that works like this: any beacon whose accuracy is <= 2.5 metres should be detected.
However, when I place the beacons about 7 m apart, both get detected. What is more surprising is that the reported accuracy sometimes jumps to something like 15.70 m for a beacon (checked by running the AirLocate app); this happens randomly and makes the geofencing impossible to construct reliably.
I tried to apply a custom formula to calculate the beacon distance, double accuracy = (0.89976) * pow(ratio, 7.7095) + 0.111; where double ratio = rssi * 1.0 / txPower; but since the txPower for CLBeacons is not provided, the function depends on me supplying a static value as txPower.
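Wrapped in a helper, that formula looks roughly like this; the txPower argument has to be an assumed calibration constant, since CLBeacon does not expose it:

#include <math.h>

// The question's distance formula; txPower must be supplied by the caller (assumed constant).
static double EstimatedDistance(double rssi, double txPower) {
    if (rssi == 0) {
        return -1.0; // no valid reading
    }
    double ratio = rssi / txPower;
    return 0.89976 * pow(ratio, 7.7095) + 0.111;
}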
Can anyone offer guidance on how the geofencing for these CLBeacons should be constructed?
You are correct in that the accuracy value of a beacon can fluctuate drastically over short periods. The way I handle this (we have a similar need to determine when devices have been returned to a base location) is a combination of two approaches. First, we are tweaking the power on our iBeacons, lowering it so that the didDetermineState: delegate call does not get invoked too many times for entering and leaving the beacon's range. Second, my iBeacon model keeps track of the accuracy of any beacons in range and averages the values out. That way someone walking between the device and the beacon, or the user turning the device a particular way, won't cause huge fluctuations in the accuracy value and mess up your logic.
I don't believe Apple intended developers to use iBeacons for indoor geolocation. The geofencing aspect of it is simply to adjust the transmit power so that you can get notified of when your device can or cannot detect the signal, as sketched below. The accuracy can be used, but it is so inaccurate that it should be used with caution.
There is a developer who claims to have developed an algorithm for using iBeacons for indoor positioning, but I have no experience with it. Also, if it were possible with any level of accuracy, I feel that Apple would be using it for its indoor location capabilities, which they are not.
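A minimal sketch of that notification-only approach, assuming a locationManager property; the UUID and region identifier are placeholders:

// Sketch: get notified on enter/exit via region monitoring instead of relying on accuracy.
- (void)startMonitoringBaseRegion
{
    NSUUID *uuid = [[NSUUID alloc] initWithUUIDString:@"00000000-0000-0000-0000-000000000000"];
    CLBeaconRegion *region = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                                identifier:@"com.example.base"];
    [self.locationManager startMonitoringForRegion:region];
}

- (void)locationManager:(CLLocationManager *)manager
      didDetermineState:(CLRegionState)state
              forRegion:(CLRegion *)region
{
    if (state == CLRegionStateInside) {
        NSLog(@"Device can see the beacon");
    } else if (state == CLRegionStateOutside) {
        NSLog(@"Device has left the beacon's range");
    }
}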
Here's some of the code I use:
Here's my custom MyBeacon class:
@interface MyBeacon()
@property NSMutableDictionary *accuracyHistory;
@end

@implementation MyBeacon

- (id)init
{
    self = [super init];
    if (self != nil) {
        self.accuracyHistory = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (void)addAccuracyValue:(CGFloat)rangeValue forDate:(NSDate *)rangeDateTime
{
    [self removeOldRangeHistoryItems];
    [self.accuracyHistory setObject:[NSNumber numberWithFloat:rangeValue] forKey:rangeDateTime];
}

- (double)getBeaconAverageAccuracy
{
    [self removeOldRangeHistoryItems];
    if (self.accuracyHistory.count == 0)
    {
        return -1;
    }
    CGFloat sumRangeVals = 0.0;
    int numRangeVals = 0;
    for (NSDate *accuracyDateTime in self.accuracyHistory) {
        NSNumber *curValue = [self.accuracyHistory objectForKey:accuracyDateTime];
        if ([curValue floatValue] >= 0.0)
        {
            sumRangeVals += [curValue floatValue];
            numRangeVals++;
        }
        else // let's toy with giving unknown readings a value of 30.
        {
            sumRangeVals += 30;
            numRangeVals++;
        }
    }
    CGFloat averageRangeVal = sumRangeVals / numRangeVals;
    return averageRangeVal;
}

- (void)removeOldRangeHistoryItems
{
    NSMutableArray *keysToDelete = [[NSMutableArray alloc] init];
    for (NSDate *accuracyDateTime in self.accuracyHistory) {
        // remove anything older than 10 seconds.
        if ([accuracyDateTime timeIntervalSinceNow] < -10.0)
        {
            [keysToDelete addObject:accuracyDateTime];
        }
    }
    for (NSDate *key in keysToDelete)
    {
        [self.accuracyHistory removeObjectForKey:key];
    }
}

@end
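For context, a ranging callback could feed this class roughly like this; the beaconsByIdentifier lookup and the 2.5 m threshold check are illustrative, not part of the class above:

// Sketch: feed MyBeacon instances from CoreLocation's ranging callback.
- (void)locationManager:(CLLocationManager *)manager
        didRangeBeacons:(NSArray *)beacons
               inRegion:(CLBeaconRegion *)region
{
    for (CLBeacon *beacon in beacons) {
        NSString *key = [NSString stringWithFormat:@"%@-%@", beacon.major, beacon.minor];
        MyBeacon *myBeacon = self.beaconsByIdentifier[key]; // hypothetical lookup table
        [myBeacon addAccuracyValue:beacon.accuracy forDate:[NSDate date]];

        double averaged = [myBeacon getBeaconAverageAccuracy];
        if (averaged >= 0 && averaged <= 2.5) {
            NSLog(@"Beacon %@ is inside the 2.5 m fence (averaged)", key);
        }
    }
}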
I have an iOS app that reads text aloud using the OpenEars API. I am using the latest version (1.2.5). I cannot figure out how to change the pitch while the words are being read (on the fly). I created a slider to control the pitch, and a delegate is fired as the slider is changed. In the delegate method, the FliteController's target_mean is changed. The intent was to have the pitch change as soon as the target_mean value was changed. My code is as follows:
-(void)sayTheMessage:(NSString *)message {
    // if there is nothing there, don't try to say anything
    if (message == nil)
        return;

    [self.oeeo setDelegate:self];

    // we are going to say what is in the label...
    @try {
        // set the pitch, etc...
        self.flite.target_mean = pitchValue;        // Change the pitch
        self.flite.target_stddev = varienceValue;   // Change the variance
        self.flite.duration_stretch = speedValue;   // Change the speed

        // finally say it!
        [self.flite say:message withVoice:self.slt];
    }
    @catch (NSException *exception) {
        if ([delegate respondsToSelector:@selector(messageError)])
            [delegate messageError];
    }
    @finally {
    }
}

-(void)changePitch:(float)pitch {
    if ((pitch >= 0) && (pitch <= 2)) {
        // save the new pitch internally
        pitchValue = pitch;
        // change the pitch of the current speaking....
        self.flite.target_mean = pitchValue;
    }
}
Any ideas?
OpenEars developer here. You can't change the pitch on the fly with FliteController since the pitch is set before speech is processed.