Strange behaviour after modifying exposure duration and going back to AVCaptureExposureModeContinuousAutoExposure - ios

I am working on an app that exposes manual controls for the camera with the new APIs introduced in iOS 8, and I am using this sample app from WWDC 2014 as a reference.
However I noticed a strange behaviour (on my 5s and on a 6): after setting the exposure mode to "custom" and then back to "auto", the image continues to lag as if the exposure duration were not affected by the change.
Here is the code involved in each step (from the sample app, without any modification):
- (IBAction)changeExposureMode:(id)sender
{
    UISegmentedControl *control = sender;
    NSError *error = nil;
    AVCaptureExposureMode mode = (AVCaptureExposureMode)[self.exposureModes[control.selectedSegmentIndex] intValue];
    if ([self.videoDevice lockForConfiguration:&error])
    {
        if ([self.videoDevice isExposureModeSupported:mode])
        {
            [self.videoDevice setExposureMode:mode];
        }
        else
        {
            NSLog(@"Exposure mode %@ is not supported. Exposure mode is %@.", [self stringFromExposureMode:mode], [self stringFromExposureMode:self.videoDevice.exposureMode]);
        }
    }
    else
    {
        NSLog(@"%@", error);
    }
}
- (IBAction)changeExposureDuration:(id)sender
{
    UISlider *control = sender;
    NSError *error = nil;
    double p = pow( control.value, EXPOSURE_DURATION_POWER ); // Apply power function to expand slider's low-end range
    double minDurationSeconds = MAX(CMTimeGetSeconds(self.videoDevice.activeFormat.minExposureDuration), EXPOSURE_MINIMUM_DURATION);
    double maxDurationSeconds = CMTimeGetSeconds(self.videoDevice.activeFormat.maxExposureDuration);
    double newDurationSeconds = p * ( maxDurationSeconds - minDurationSeconds ) + minDurationSeconds; // Scale from 0-1 slider range to actual duration
    if (self.videoDevice.exposureMode == AVCaptureExposureModeCustom)
    {
        if ( newDurationSeconds < 1 )
        {
            int digits = MAX( 0, 2 + floor( log10( newDurationSeconds ) ) );
            self.exposureDurationValueLabel.text = [NSString stringWithFormat:@"1/%.*f", digits, 1/newDurationSeconds];
        }
        else
        {
            self.exposureDurationValueLabel.text = [NSString stringWithFormat:@"%.2f", newDurationSeconds];
        }
    }
    if ([self.videoDevice lockForConfiguration:&error])
    {
        [self.videoDevice setExposureModeCustomWithDuration:CMTimeMakeWithSeconds(newDurationSeconds, 1000*1000*1000) ISO:AVCaptureISOCurrent completionHandler:nil];
    }
    else
    {
        NSLog(@"%@", error);
    }
}

I noticed this too. It seems to be related to slow shutter speeds. Try this: Go to custom. Set a fast shutter speed. Then go back to Auto. Boom, you're right there. Now, go to custom, set a slow shutter speed (slider to the right). Go back to auto and you can watch the shutter speed gradually move back to a reasonable setting.
This is the case in the sample code and in the app that I wrote based on the sample code. It is also the same for my 4s and 5s.
I believe this is because the sensor needs to capture a certain number of frames in order to pick the right auto setting. With a very slow shutter speed (up to 1 second maximum), that means it could take several seconds to find the right setting. It sort of makes sense, even if it's not what we'd like. Fortunately for me, my app never needs a shutter speed of more than a quarter of a second, if that.
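If the slow convergence is the problem, one workaround sketch (my assumption, not something from the sample app): set a fast custom duration first, then switch back to auto once it has taken effect, so AE starts converging from a short exposure. This assumes the device lock is still held when the handler runs (the sample app holds it for the session's lifetime):

NSError *error = nil;
if ([self.videoDevice lockForConfiguration:&error]) {
    // Jump to a fast shutter speed first (1/60 s is an arbitrary choice) ...
    CMTime fastDuration = CMTimeMakeWithSeconds(1.0 / 60.0, 1000 * 1000 * 1000);
    [self.videoDevice setExposureModeCustomWithDuration:fastDuration
                                                    ISO:AVCaptureISOCurrent
                                      completionHandler:^(CMTime syncTime) {
        // ... then hand control back to continuous auto exposure.
        if ([self.videoDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
            self.videoDevice.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
        }
    }];
}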

I have found in my own code that the setExposureModeCustomWithDuration method has an issue. Although it takes a completion handler that is supposed to be called AFTER the duration and ISO have been set on the device, it doesn't always work.
There are times, for instance when switching from auto exposure to manual exposure, that if you grab a still from within setExposureModeCustomWithDuration's completion handler, the still is taken with the auto-exposure settings. If you take another still immediately after that, it has the correct manual exposure applied.
I found that a 1-second delay at the beginning of the completion handler works around this issue, but that can't be a proper solution.
I have also tried placing a wait/sleep loop at the beginning of the completion handler that waits until the device is no longer adjusting exposure; that does not help.
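For what it's worth, a sketch of the delay workaround described above (duration and captureStillImage are placeholders for the surrounding app's own value and helper; the 1-second figure is empirical, not documented):

[self.videoDevice setExposureModeCustomWithDuration:duration
                                                ISO:AVCaptureISOCurrent
                                  completionHandler:^(CMTime syncTime) {
    // Empirically, the manual settings have propagated about 1 s after the
    // handler fires; dispatch_after avoids blocking the handler's queue.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self captureStillImage]; // hypothetical helper
    });
}];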

I tried the same sample app and tried to reproduce the issue, but I was not able to. It looks like they have fixed it now.

Related

Score system not working Obj-C

So I am developing a game and have coded the score system. I want it so that when my object that is scrolling down reaches the same y-coordinate as the "Box" object, the score system adds one to the score.
However, upon trying this the score does not update and stays at 0. I'm not quite sure why; could you please tell me where I may be going wrong and how I could fix it?
This is the code I am using for the score system. Any help would be appreciated, thanks.
-(void)Score{
    ScoreNumber = ScoreNumber + 1;
    ScoreDisplay.text = [NSString stringWithFormat:@"%i", ScoreNumber];
}
[There is also some code in viewDidLoad, but the problem is not with that part of the code, as I have used it before and it works fine.]
This is where it is meant to be implemented but is not working...
if (ROne.center.y == IView.center.y) {
    [self Score];
}
if (RTwo.center.y == IView.center.y) {
    [self Score];
}
if (RThree.center.y == IView.center.y) {
    [self Score];
}
if (RFour.center.y == IView.center.y) {
    [self Score];
}
if (RFive.center.y == IView.center.y) {
    [self Score];
}
if (RSix.center.y == IView.center.y) {
    [self Score];
}
Your problem is likely that if the y-increment doesn't land exactly on the y-coordinate of the target object, the two will never be strictly equal; you'll just sort of pass by it. This is why >= worked, except that once you passed it, it kept adding to the score.
So, what you need to test for is "passing by". That is to say that prior to moving, ROne.center.y < IView.center.y, and after moving, ROne.center.y >= IView.center.y. That is the only condition that effectively checks for "ROne just arrived at the desired height".
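A minimal sketch of that "just passed" test (previousY is an assumed instance variable holding last frame's position):

// Fires exactly once, on the frame where ROne crosses the target height.
if (previousY < IView.center.y && ROne.center.y >= IView.center.y) {
    [self Score];
}
previousY = ROne.center.y; // update for the next frame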
Alternatively, you can use >= together with an arrived flag on ROne that tells it to quit counting the score: when (ROne.center.y >= IView.center.y && !ROne.arrived) is true, add to the score and set arrived to YES. Then reset arrived to NO when you decide to reset the object to the top of the screen.
You need to think through the logic of your game code:
Some object is moving along the Y-axis each frame and that movement is likely to be more than 1 pixel (or point) per frame.
Your tick method is being called each frame (or some other set interval) and it's coded to only work when the Y-coordinate exactly matches the boundary you have set.
What's happening is the object is moving past the boundary between ticks and the method is therefore unable to detect it (i.e. think about a boundary where y == 3 and the object goes from y == 1 to y == 5 in a single tick).
What you need to do is hold a flag showing whether the object has gone past the boundary, and only check the boundary condition if this flag is false. The best place to put this flag is in the object itself.
Something like:
if (!ROne.hasScored && ROne.center.y >= IView.center.y) {
    [self Score];
    ROne.hasScored = YES;
}
You should also consider using an array of the six objects rather than six individual variables, for example:
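A sketch of that refactor (Raindrop is a hypothetical UIImageView subclass; the property name is illustrative):

@interface Raindrop : UIImageView
@property (nonatomic) BOOL hasScored;
@end

@implementation Raindrop
@end

// In the tick method, one loop replaces the six if-blocks:
for (Raindrop *drop in @[ROne, RTwo, RThree, RFour, RFive, RSix]) {
    if (!drop.hasScored && drop.center.y >= IView.center.y) {
        [self Score];
        drop.hasScored = YES;
    }
}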

AS 3 | Fade-In and Fade-Out loop | ENTER_FRAME

I'm building a basic kids' app for iOS, and I want to fade my background in and out in sync with my sun and moon animation.
The problem is that my fade-in/fade-out code already uses a small step of 0.01 per frame and is still too fast for my app. I want a slower fade, with a step like 0.001, but it doesn't work with such small values.
bgLight.addEventListener(Event.ENTER_FRAME, fadeout);

function fadeout(e:Event){
    if(bgLight.alpha <= 0){
        bgLight.removeEventListener(Event.ENTER_FRAME, fadeout);
        bgLight.addEventListener(Event.ENTER_FRAME, fadein);
    } else {
        bgLight.alpha -= .01; // That's the small value
    }
}

function fadein(e:Event){
    if(bgLight.alpha >= 1){
        bgLight.removeEventListener(Event.ENTER_FRAME, fadein);
        bgLight.addEventListener(Event.ENTER_FRAME, fadeout);
    } else {
        bgLight.alpha += .01; // That's the small value
    }
}
Is it possible to reach a value as small as 0.001 using ENTER_FRAME?
My project runs at 60 FPS.
Yeah, actually I'm now using the GreenSock engine for this basic tween. It's very easy to use, and I think it uses less CPU. A time-based tween also likely side-steps the original problem, since it computes alpha from the elapsed time instead of accumulating tiny per-frame increments that can be lost to rounding.
import com.greensock.*;
import com.greensock.easing.*;
TweenMax.to(bgLight, 35.5, {alpha:0, repeatDelay:1, repeat:-1, yoyo:true});
Thanks for your time guys.

Detecting when someone begins walking using Core Motion and CMAccelerometer Data

I'm trying to detect three actions: when a user begins walking, jogging, or running. I then want to know when they stop. I've been successful in detecting when someone is walking, jogging, or running with the following code:
- (void)update:(CMAccelerometerData *)accelData {
    [(id) self setAcceleration:accelData.acceleration];
    NSTimeInterval secondsSinceLastUpdate = -([self.lastUpdateTime timeIntervalSinceNow]);

    // fabs, not labs: the acceleration components are doubles
    if (fabs(_acceleration.x) >= 0.10000) {
        NSLog(@"walking: %f", _acceleration.x);
    }
    else if (fabs(_acceleration.x) > 2.0) {
        NSLog(@"jogging: %f", _acceleration.x);
    }
    else if (fabs(_acceleration.x) > 4.0) {
        NSLog(@"sprinting: %f", _acceleration.x);
    }
}
The problem I run into is two-fold:
1) update is called multiple times every time there's a motion, probably because it checks so frequently that when the user begins walking (i.e. _acceleration.x >= 0.1), it is still >= 0.1 when update is called again.
Example Log:
2014-02-22 12:14:20.728 myApp[5039:60b] walking: 1.029846
2014-02-22 12:14:20.748 myApp[5039:60b] walking: 1.071777
2014-02-22 12:14:20.768 myApp[5039:60b] walking: 1.067749
2) I'm having difficulty figuring out how to detect when the user has stopped. Does anybody have advice on how to implement "stop detection"?
According to your logs, accelerometerUpdateInterval is about 0.02 s. Updates will come less frequently if you increase that property on CMMotionManager.
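For example (motionManager is assumed to be your existing CMMotionManager instance):

// Ask for accelerometer updates every 0.1 s instead of the ~0.02 s in the logs.
motionManager.accelerometerUpdateInterval = 0.1;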
Checking only the x-acceleration isn't very accurate. I can put a device on a table in such a way (say, on its left edge) that the x-acceleration equals 1, or tilt it a bit. That will put the program in walking mode (x > 0.1) instead of idle.
Here's a link to the ADVANCED PEDOMETER FOR SMARTPHONE-BASED ACTIVITY TRACKING publication. They track changes in the direction of the acceleration vector: d is the cosine of the angle between two consecutive acceleration vector readings.
Obviously, without any motion the angle between the two vectors is close to zero, so d = cos(0) = 1. During other activities d < 1. To filter out noise, they use a weighted moving average of the last 10 values of d.
After implementing this, your values will look like this (red - walking, blue - running):
Now you can set a threshold for each activity to separate them. Note that the average step frequency is 2-4 Hz, so you should expect the current value to exceed the threshold at least a few times per second in order to identify the action.
Other helpful publications:
ERSP: An Energy-efficient Real-time Smartphone Pedometer (analyzes peaks and troughs)
A Gyroscopic Data based Pedometer Algorithm (threshold detection of gyro readings)
UPDATE
_acceleration.x, _acceleration.y, and _acceleration.z are the coordinates of a single acceleration vector, and you use all three in the d formula. To calculate d you also need to store the acceleration vector from the previous update (the one with index i-1 in the formula).
The WMA simply takes the last 10 d values into account with different weights: the most recent values have more weight and therefore more impact on the result. You need to store the 9 previous d values in order to calculate the current one, and you compare the WMA value to the corresponding threshold.
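A minimal sketch of both computations (the linear weighting is my assumption; the publication may weight differently):

#import <CoreMotion/CoreMotion.h>
#include <math.h>

// d = cosine of the angle between two consecutive acceleration vectors.
static double CosineBetween(CMAcceleration prev, CMAcceleration curr) {
    double dot  = prev.x * curr.x + prev.y * curr.y + prev.z * curr.z;
    double magP = sqrt(prev.x * prev.x + prev.y * prev.y + prev.z * prev.z);
    double magC = sqrt(curr.x * curr.x + curr.y * curr.y + curr.z * curr.z);
    if (magP == 0.0 || magC == 0.0) return 1.0; // degenerate reading: treat as "no change"
    return dot / (magP * magC);
}

// Weighted moving average of the last 10 d values; dValues[9] is the newest.
static double WeightedMovingAverage(const double dValues[10]) {
    double weightedSum = 0.0, weightTotal = 0.0;
    for (int i = 0; i < 10; i++) {
        double weight = i + 1; // newest sample gets the largest weight (an assumption)
        weightedSum += weight * dValues[i];
        weightTotal += weight;
    }
    return weightedSum / weightTotal; // compare this to your per-activity threshold
}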
If you are using iOS 7 and an iPhone 5S, I suggest you look into CMMotionActivityManager, which is available on the iPhone 5S because of the M7 chip. It is also available in a couple of other devices:
M7 chip
Here is a code snippet I put together to test when I was learning about it.
#import <CoreMotion/CoreMotion.h>

@property (nonatomic, strong) CMMotionActivityManager *motionActivityManager;

-(void) inSomeMethod
{
    self.motionActivityManager = [[CMMotionActivityManager alloc] init];

    // register for CoreMotion notifications
    [self.motionActivityManager startActivityUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMMotionActivity *activity)
    {
        NSLog(@"Got a core motion update");
        NSLog(@"Current activity date is %f", activity.timestamp);
        NSLog(@"Current activity confidence from a scale of 0 to 2 - 2 being best - is: %ld", (long)activity.confidence);
        NSLog(@"Current activity type is unknown: %i", activity.unknown);
        NSLog(@"Current activity type is stationary: %i", activity.stationary);
        NSLog(@"Current activity type is walking: %i", activity.walking);
        NSLog(@"Current activity type is running: %i", activity.running);
        NSLog(@"Current activity type is automotive: %i", activity.automotive);
    }];
}
I tested it and it seems to be pretty accurate. The only drawback is that it will not give you a confirmation as soon as you start an action (walking for example). Some black box algorithm waits to ensure that you are really walking or running. But then you know you have a confirmed action.
This beats messing around with the accelerometer. Apple took care of that detail!
You can use this simple library to detect whether the user is walking, running, in a vehicle, or not moving. It works on all iOS devices and does not need the M7 chip.
https://github.com/SocialObjects-Software/SOMotionDetector
In the repo you can find a demo project.
I'm following this paper (PDF via RG) in my indoor-navigation project to determine user dynamics (static, slow walking, fast walking) from accelerometer data alone, in order to assist location determination.
Here is my implementation of the algorithm proposed in the paper, in Swift 2.0:
import CoreMotion

let motionManager = CMMotionManager()

motionManager.accelerometerUpdateInterval = 0.1
motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) { (accelerometerData: CMAccelerometerData?, error: NSError?) -> Void in
    if((error) != nil) {
        print(error)
    } else {
        self.estimatePedestrianStatus((accelerometerData?.acceleration)!)
    }
}
After all of the classic Swifty iOS code to initiate CoreMotion, here is the method crunching the numbers and determining the state:
func estimatePedestrianStatus(acceleration: CMAcceleration) {
    // Obtain the Euclidian Norm of the accelerometer data
    accelerometerDataInEuclidianNorm = sqrt((acceleration.x.roundTo(roundingPrecision) * acceleration.x.roundTo(roundingPrecision)) + (acceleration.y.roundTo(roundingPrecision) * acceleration.y.roundTo(roundingPrecision)) + (acceleration.z.roundTo(roundingPrecision) * acceleration.z.roundTo(roundingPrecision)))

    // Significant figure setting
    accelerometerDataInEuclidianNorm = accelerometerDataInEuclidianNorm.roundTo(roundingPrecision)

    // record 10 values
    // meaning values in a second
    // accUpdateInterval(0.1s) * 10 = 1s
    while accelerometerDataCount < 1 {
        accelerometerDataCount += 0.1
        accelerometerDataInASecond.append(accelerometerDataInEuclidianNorm)
        totalAcceleration += accelerometerDataInEuclidianNorm
        break // required since we want to obtain data every acc cycle
    }

    // when acc values recorded
    // interpret them
    if accelerometerDataCount >= 1 {
        accelerometerDataCount = 0 // reset for the next round

        // Calculating the variance of the Euclidian Norm of the accelerometer data
        let accelerationMean = (totalAcceleration / 10).roundTo(roundingPrecision)
        var total: Double = 0.0
        for data in accelerometerDataInASecond {
            total += ((data - accelerationMean) * (data - accelerationMean)).roundTo(roundingPrecision)
        }
        total = total.roundTo(roundingPrecision)

        let result = (total / 10).roundTo(roundingPrecision)
        print("Result: \(result)")

        if (result < staticThreshold) {
            pedestrianStatus = "Static"
        } else if ((staticThreshold < result) && (result <= slowWalkingThreshold)) {
            pedestrianStatus = "Slow Walking"
        } else if (slowWalkingThreshold < result) {
            pedestrianStatus = "Fast Walking"
        }
        print("Pedestrian Status: \(pedestrianStatus)\n---\n\n")

        // reset for the next round
        accelerometerDataInASecond = []
        totalAcceleration = 0.0
    }
}
Also I've used the following extension to simplify significant figure setting:
extension Double {
    func roundTo(precision: Int) -> Double {
        let divisor = pow(10.0, Double(precision))
        return round(self * divisor) / divisor
    }
}
With raw values from CoreMotion, the algorithm went haywire.
Hope this helps someone.
EDIT (4/3/16)
I forgot to provide my roundingPrecision value. I defined it as 3; that many significant figures is decent enough for this purpose, but you can use more if you like.
Also, one more thing to mention: at the moment, this algorithm requires the iPhone to be in your hand while walking.
My GitHub Repo hosting Pedestrian Status
You can use Apple's machine learning framework Core ML to classify user activity. First you need to collect labeled data and train a classifier; then you can use the model in your app to classify user activity. You may follow this series if you are interested in Core ML activity classification.
https://medium.com/@tyler.hutcherson/activity-classification-with-create-ml-coreml3-and-skafos-part-1-8f130b5701f6

Stop in front of obstacles in Cocos3d

I already know how to check for collisions with the doesIntersectNode method in Cocos3d, but in my case I want to avoid obstacles before I come into contact with them. For example, I want to stop in front of a wall before I crash into it.
For this reason I wrote the method getNodeAtLocation in my subclass of CC3Scene, and -(BOOL)shouldMoveDirectionallywithDistance:(float)distance in the class of my person, which should move around.
Unfortunately, I have some problems with the algorithm of the last method. Here the code:
-(BOOL)shouldMoveDirectionallywithDistance:(float)distance
{
    BOOL shouldMove = NO;
    float x = self.person.globalLocation.x;
    float z = self.person.globalLocation.z;
    int times = 5;
    for (int i = 0; i < times; i++) {
        CC3Vector newPos = cc3v(x, 0.5, z);
        CC3PODResourceNode *obstacle = (CC3PODResourceNode *)[myScene getNodeAtLocation:newPos];
        if (obstacle) {
            return NO;
        } else {
            shouldMove = YES;
        }
        x += self.person.globalForwardDirection.x * distance / times;
        z += self.person.globalForwardDirection.z * distance / times;
    }
    return shouldMove;
}
In this method, I take the relevant parts of the coordinates (for my purposes just the x- and z-values) and increase them, in five steps, by a fifth of the forward direction scaled by the distance. I decided this makes sense when the obstacle is, for example, a thin wall. But for reasons I don't know, this method doesn't work, and the person is able to walk through that wall. So where is the problem in my code?
I strongly believe that the getNodeAtLocation method works correctly, as I have tested it multiple times, but maybe my mistakes are in it:
-(CC3Node *)getNodeAtLocation:(CC3Vector)position
{
    CC3Node *node = nil;
    for (CC3PODResourceNode *aNode in self.children) {
        if ([aNode isKindOfClass:[CC3PODResourceNode class]]) {
            for (CC3PODResourceNode *child in aNode.children) {
                if (CC3BoundingBoxContainsLocation(child.globalBoundingBox, position)) {
                    node = aNode;
                }
            }
        }
    }
    return node;
}
To conclude, in my view the mistake is in the -(BOOL)shouldMoveDirectionallywithDistance:(float)distance method. I suppose something is wrong with the increase of the x- and z-values, but I couldn't figure out what exactly is incorrect.
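One possible culprit, offered as an assumption from reading the code above rather than a verified fix: the loop tests the person's current location first and never tests the final destination, so an obstacle lying within the last fifth of the path is missed. A sketch that advances the test point before checking:

// Sketch: sample the path at distance/times increments, starting one step
// ahead of the person and ending exactly at the destination point.
-(BOOL)shouldMoveDirectionallywithDistance:(float)distance
{
    float x = self.person.globalLocation.x;
    float z = self.person.globalLocation.z;
    int times = 5;
    for (int i = 1; i <= times; i++) {
        x += self.person.globalForwardDirection.x * distance / times;
        z += self.person.globalForwardDirection.z * distance / times;
        CC3Vector newPos = cc3v(x, 0.5, z);
        if ([myScene getNodeAtLocation:newPos]) {
            return NO; // an obstacle lies somewhere along the path
        }
    }
    return YES;
}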
If you are still interested in finding an answer to this problem, I may be able to provide you with an alternative solution. I am about to release free source for a 3D collision engine I ported to Cocos3d, and it will give you more flexibility than simply stopping an object in front of another.
I am currently polishing the code a little for ease of use, but if you are interested you can email me at waywardson07@aol.com.
You can also get a little preview of the engine in action here: http://www.youtube.com/watch?v=QpYZlF7EktU
Note: the video is a little dated.
After several attempts, it appears to be easier than I thought: just using the doesIntersectNode method has the right effect for me.
But please note that this is not a real solution to the problem of stopping in front of obstacles.

iOS cocos2d low frame rate on device

I'm developing a cocos2d game (using the iOS 6 SDK and cocos2d 2.0rc2) and am having issues with a lower frame rate on the device. This causes problems with collision detection, because most of it deals with the user drawing a line: the lower frame rate causes the points to be recorded farther apart, and an object can pass through the line because it never hits the points. The frame-rate drop seems to happen most when I get a notification, and instead of returning to normal when the notification disappears, the frame rate stays low and never returns to 60 fps. Any ideas what might be causing this, or a solution to handle the lines better at a lower fps?
Here is the drawing code, let me know if you want to see anything else.
-(void) draw {
    glLineWidth(lineScale);
    for (int i = 0; i < touchesArray.count; i += 2) {
        CGPoint start = CGPointFromString([touchesArray objectAtIndex:i]);
        CGPoint end = CGPointFromString([touchesArray objectAtIndex:i + 1]);
        ccDrawLine(start, end);
    }
}
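One way to mitigate the gaps, offered as a sketch rather than a fix from the original thread: subdivide long segments when recording touch points, so the stored start/end pairs stay dense even when frames arrive far apart. ccpDistance and ccpLerp are cocos2d's point helpers; MAX_SEGMENT_LENGTH is an assumed tuning value:

#define MAX_SEGMENT_LENGTH 8.0f

// Call this when a new touch location arrives (e.g. from ccTouchesMoved:).
// It appends start/end pairs, matching the i += 2 convention in draw.
- (void)addSegmentFrom:(CGPoint)lastPoint to:(CGPoint)newPoint
{
    float dist = ccpDistance(lastPoint, newPoint);
    int steps = MAX(1, (int)ceilf(dist / MAX_SEGMENT_LENGTH));
    for (int i = 1; i <= steps; i++) {
        CGPoint a = ccpLerp(lastPoint, newPoint, (float)(i - 1) / steps);
        CGPoint b = ccpLerp(lastPoint, newPoint, (float)i / steps);
        [touchesArray addObject:NSStringFromCGPoint(a)];
        [touchesArray addObject:NSStringFromCGPoint(b)];
    }
}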
