How to detect if the iPhone is still? - iOS

I have to make an app in which the user can take a photo only when the iPhone is still. Can you please tell me how to proceed with that? Any help will be appreciated.
Below is the code I have tried. Please suggest improvements to it; this code is giving jerky output.
_previousMotionValue = 0.0f;
memset(xQueue, 0, sizeof(xQueue));
memset(yQueue, 0, sizeof(yQueue));
queueIndex = 0;
[_motionManager startAccelerometerUpdatesToQueue:_motionManagerUpdatesQueue
                                     withHandler:^(CMAccelerometerData *accelerometerData, NSError *error) {
    // Skip this sample if updates are arriving faster than they are processed.
    if ([_motionManagerUpdatesQueue operationCount] > 1) {
        return;
    }
    // Store the newest sample in a circular buffer.
    xQueue[queueIndex] = -accelerometerData.acceleration.x;
    yQueue[queueIndex] = accelerometerData.acceleration.y;
    queueIndex++;
    if (queueIndex >= QueueCapacity) {
        queueIndex = 0;
    }
    // Average the last QueueCapacity samples.
    float xSum = 0;
    float ySum = 0;
    for (int i = 0; i < QueueCapacity; i++) {
        xSum += xQueue[i];
        ySum += yQueue[i];
    }
    xSum /= QueueCapacity;
    ySum /= QueueCapacity;
    double motionValue = sqrt(xSum * xSum + ySum * ySum);
    CGFloat difference = 50000.0 * ABS(motionValue - _previousMotionValue);
    if (difference < 100)
    {
        // fire event for capture
    }
    [view setVibrationLevel:difference];
    _previousMotionValue = motionValue;
}];
Based on the vibration level, I set different images (green, yellow, red).
I have chosen a threshold of 100.

To answer "…user can take photo only when iPhone is stabilized…":
You can use CoreMotion.framework and its CMMotionManager to obtain information about device movement. (I guess you are interested in the accelerometer data.) These data arrive at a high rate (you can choose the frequency; the default is 1/60 s). Then you store, say, the 10 latest values and compute some statistics about the average and the differences. By choosing an optimal threshold you can tell when the device is in a stable position.
But you mentioned image stabilization, which is not the same as taking photos in a stable position. To stabilize an image, I guess you will have to shift the captured image by a small offset calculated from the device motion.

Related

(iOS) Accelerometer Graph (convert g-force to +/- 128) granularity

I am using the Accelerometer graph sample from Apple and trying to convert their g-force code to calculate +/- 128.
The following image shows that the x, y, z values in the labels do not match the output on the graph. (Note that the addX:y:z values are what is shown in the labels above the graph.)
ViewController
The x, y, z values are received from a Bluetooth peripheral, then converted using:
// Updates LABELS
- (void)didReceiveRawAcceleromaterDataWithX:(NSInteger)x Y:(NSInteger)y Z:(NSInteger)z
{
    dispatch_async(dispatch_get_main_queue(), ^{
        _labelAccel.text = [NSString stringWithFormat:@"x:%li y:%li z:%li", (long)x, (long)y, (long)z];
    });
}
// Updates GRAPHS
- (void)didReceiveAcceleromaterDataWithX:(NSInteger)x Y:(NSInteger)y Z:(NSInteger)z
{
    dispatch_async(dispatch_get_main_queue(), ^{
        float xx = ((float)x) / 8192;
        float yy = ((float)y) / 8192;
        float zz = ((float)z) / 8192;
        [_xGraph addX:xx y:0 z:0];
        [_yGraph addX:0 y:yy z:0];
        [_zGraph addX:0 y:0 z:zz];
    });
}
GraphView
- (BOOL)addX:(UIAccelerationValue)x y:(UIAccelerationValue)y z:(UIAccelerationValue)z
{
    // If this segment is not full, then we add a new acceleration value to the history.
    if (index > 0)
    {
        // First decrement, both to get to a zero-based index and to flag one fewer position left
        --index;
        xhistory[index] = x;
        yhistory[index] = y;
        zhistory[index] = z;
        // And inform Core Animation to redraw the layer.
        [layer setNeedsDisplay];
    }
    // And return whether we are now full (really just avoids needing to call isFull after adding a value).
    return index == 0;
}
- (void)drawLayer:(CALayer *)l inContext:(CGContextRef)context
{
    // Fill in the background
    CGContextSetFillColorWithColor(context, kUIColorLightGray(1.f).CGColor);
    CGContextFillRect(context, layer.bounds);
    // Draw the grid lines
    DrawGridlines(context, 0.0, 32.0);
    // Draw the graph
    CGPoint lines[64];
    int i;
    float _granularity = 16.f;    // 16
    NSInteger _granualCount = 32; // 32
    // X
    for (i = 0; i < _granualCount; ++i)
    {
        lines[i * 2].x = i;
        lines[i * 2 + 1].x = i + 1;
        lines[i * 2].y = xhistory[i] * _granularity;
        lines[i * 2 + 1].y = xhistory[i + 1] * _granularity;
    }
    CGContextSetStrokeColorWithColor(context, _xColor.CGColor);
    CGContextStrokeLineSegments(context, lines, 64);
    // Y
    for (i = 0; i < _granualCount; ++i)
    {
        lines[i * 2].y = yhistory[i] * _granularity;
        lines[i * 2 + 1].y = yhistory[i + 1] * _granularity;
    }
    CGContextSetStrokeColorWithColor(context, _yColor.CGColor);
    CGContextStrokeLineSegments(context, lines, 64);
    // Z
    for (i = 0; i < _granualCount; ++i)
    {
        lines[i * 2].y = zhistory[i] * _granularity;
        lines[i * 2 + 1].y = zhistory[i + 1] * _granularity;
    }
    CGContextSetStrokeColorWithColor(context, _zColor.CGColor);
    CGContextStrokeLineSegments(context, lines, 64);
}
How can I fix the above code to show the correct accelerometer values on the graph with precision?
I post this as an answer, not a comment, because I don't have enough reputation, but what I'll write might be enough to send you in the right direction, so it may even count as an answer...
Your question still doesn't include what is really important. I assume the calculation of xx/yy/zz is no problem, although I have no idea what the 8192 is supposed to mean.
I guess the problem is in the part where you map your values to pixel coordinates.
lines[] contains your values in a range of 1/8192th of the values in the label, so your x value of -2 should be at a pixel position of -0.0000something, i.e. slightly (far less than 1 pixel) above the view. Because you see the line a lot further down, there must be some translation in place (not shown in your code).
The second part that is important but not shown is DrawGridlines. Probably in there is a different approach to mapping the values to pixel coordinates.
Use the debugger to check what pixel coordinates you get when you draw your +127 line, and what you get if you insert the value of +127 into your history array.
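To make the scale mismatch concrete, here is a tiny C helper (hypothetical names, not part of the Apple sample) that maps a raw reading to a pixel row the way the posted code effectively does: divide by 8192, then multiply by 16 pixels per unit.

```c
/* Map a raw sensor value to a pixel y-coordinate.
 * raw         : value as received (e.g. -128..+127)
 * raw_scale   : divisor applied before plotting (8192 in the question)
 * px_per_unit : vertical pixels per unit after scaling (16 in drawLayer)
 * baseline_px : pixel row that represents zero
 */
double value_to_pixel_y(double raw, double raw_scale,
                        double px_per_unit, double baseline_px)
{
    /* Screen y grows downward, so positive values move up from the baseline. */
    return baseline_px - (raw / raw_scale) * px_per_unit;
}
```

With these numbers, a full-scale raw value of 127 moves the line by 127 / 8192 * 16 ≈ 0.25 px, i.e. far less than one pixel from the baseline, which is exactly the symptom described above.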
And some ideas for improvements from reading your code:
1.) Put the graph in its own class that draws one graph (and has only one history). Somehow you seem to have that partially already (otherwise I cannot figure out your _xGraph/_yGraph/_zGraph), but on the other hand you draw all 3 values in one drawLayer??? Currently you seem to have 3*3 history buffers, of which 3*2 are filled with zeros...
2.) Use one place for the calculation of Y that you use both for drawing the grid and drawing the lines...
3.) Use CGContextMoveToPoint() + CGContextAddLineToPoint() instead of copying into lines[] with these ugly 2*i+1 indices...

NSMutableArray Acting Weird

I have an NSMutableArray in my game that stores "cloud" objects. When spawning a cloud, I iterate through the array and check whether there is a cloud nearby; if there is, I do not spawn the cloud. Here is the code:
BOOL isCloudInRange = NO;
float distance;
do {
    //Horizontal Position
    isCloudInRange = NO;
    if (self.sprite.physicsBody.velocity.dx > 0) {
        cloud.position = CGPointMake(self.sprite.position.x + HW*16/5, 0);
    }
    else if (self.sprite.physicsBody.velocity.dx < 0) {
        cloud.position = CGPointMake(self.sprite.position.x - HW*16/5, 0);
    }
    else {
        cloud.position = CGPointMake(self.sprite.position.x, 0);
    }
    //Vertical Position
    int offset = arc4random() % (int) 2*self.frame.size.height;
    offset -= (int) (self.frame.size.height);
    if (self.sprite.physicsBody.velocity.dy > 0) {
        cloud.position = CGPointMake(cloud.position.x, self.sprite.position.y + offset + self.sprite.physicsBody.velocity.dy);
    }
    else if (self.sprite.physicsBody.velocity.dy < 0) {
        cloud.position = CGPointMake(cloud.position.x, self.sprite.position.y - offset - self.sprite.physicsBody.velocity.dy);
    }
    else {
        cloud.position = CGPointMake(cloud.position.x, self.sprite.position.y + 16*HW/5);
    }
    if (cloud.position.y <= 300) {
        cloud.position = CGPointMake(cloud.position.x, 100 + arc4random() % 200);
    }
    // THIS IS WHERE THE ERROR HAPPENS
    for (SKNode *myNode in arrayOfClouds) {
        float xPos = myNode.position.x;
        float yPos = myNode.position.y;
        distance = sqrt((cloud.position.x - xPos) * (cloud.position.x - xPos) + (cloud.position.y - yPos) * (cloud.position.y - yPos));
        if (distance < 300.0f) {
            NSLog(@"%f", distance);
            isCloudInRange = YES;
        }
    }
} while (isCloudInRange);
If the bottom piece of code is changed to if (distance < 150.0f), everything works fine. If the distance is kept at 300.0f, however, within a couple of seconds of runtime the game starts iterating forever. Here is an example of a typical log with this code:
(Screenshot of the log: http://i.stack.imgur.com/qX8h7.png)
The logged floats are the distances between the cloud and whatever cloud is nearby. None of these distances seem to match (I don't have a million clouds spawning every second; they're set to spawn every second or so), and since it freezes with these logs as soon as the game starts, I know there cannot be that many clouds. What is happening? Please help... Thanks!
The main issue I can see here is the following:
You have a do..while loop checking your cloud distance. Once a cloud is in range, you mark isCloudInRange as YES and re-run the loop. The cloud's X position is never changed inside the loop, which means it can never move out of range again (infinite loop).
Ideally this is a check that should happen once per game loop (remove the do..while).
It will also be a little more efficient if you put a break; in your for loop. Once a cloud has been found in range there is no need to check the others, so you may as well end your loop there:
for (SKNode *myNode in arrayOfClouds) {
    float xPos = myNode.position.x;
    float yPos = myNode.position.y;
    distance = sqrt((cloud.position.x - xPos) * (cloud.position.x - xPos) + (cloud.position.y - yPos) * (cloud.position.y - yPos));
    if (distance < 300.0f) {
        NSLog(@"%f", distance);
        isCloudInRange = YES;
        break; // <-- drop out of the for-each loop now
    }
}
You say "when spawning the cloud, I iterate through the array and check whether there is a cloud nearby; if there is, then I do not spawn the cloud", but to me it looks like you're doing the exact opposite. If a cloud is in range (< 300) you set isCloudInRange to YES and repeat. Once there are enough clouds, the loop always finds a cloud in range and so iterates indefinitely. The more clouds you spawn, the harder it gets to ever leave the loop (noting that you set it to NO at the top).
If you are moving clouds and checking whether to create them on the same thread (the same run loop of code, or synchronous function calls), you can try moving this code to a background thread using dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ /* your code block here */ }); and see if that helps.
Info on how to set up concurrency with dispatch_async is here:
https://developer.apple.com/library/mac/documentation/General/Conceptual/ConcurrencyProgrammingGuide/ConcurrencyProgrammingGuide.pdf
and blocks are explained:
https://developer.apple.com/library/mac/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/WorkingwithBlocks/WorkingwithBlocks.html#//apple_ref/doc/uid/TP40011210-CH8-SW1

EZAudio: How do you separate the buffer size from the FFT window size? (desire higher frequency bin resolution)

https://github.com/syedhali/EZAudio
I've been having success using this audio library, but now I'd like to increase the resolution of the microphone data that's read in, so that the FFT resolution, or frequency bin size, goes down to 10 Hz. To do that, I need a buffer size of 8820 instead of 512. Are the microphone buffer size and the FFT window size separable? I can't see a way to separate them.
How do I set up the audio stream description so that it can calculate the FFT with a larger window?
Any help would be much appreciated.
The FFT size and the audio buffer size should be completely independent. You can just save multiple audio input buffers (perhaps in a circular FIFO or queue), without processing them until you have enough samples for your desired length FFT.
Saving audio buffers this way also allows you to FFT overlapped frames for more time resolution.
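A minimal C sketch of that accumulate-then-process idea (`FFT_LEN`, the struct, and the callback are placeholders, not EZAudio API):

```c
#include <string.h>

#define FFT_LEN 4096   /* desired FFT window, independent of the mic buffer size */

typedef struct {
    float window[FFT_LEN];
    int   filled;
} FFTAccumulator;

/* Append one incoming audio buffer of arbitrary size; invokes process()
 * every time FFT_LEN samples have been collected. Returns the number of
 * complete windows emitted during this call. */
int fft_accumulate(FFTAccumulator *acc, const float *samples, int count,
                   void (*process)(const float *window))
{
    int emitted = 0;
    while (count > 0) {
        /* Copy as much as fits into the window, not more than remains. */
        int to_copy = FFT_LEN - acc->filled;
        if (to_copy > count) to_copy = count;
        memcpy(acc->window + acc->filled, samples, to_copy * sizeof(float));
        acc->filled += to_copy;
        samples    += to_copy;
        count      -= to_copy;
        if (acc->filled == FFT_LEN) {
            if (process) process(acc->window);
            acc->filled = 0;
            emitted++;
        }
    }
    return emitted;
}
```

Each microphone callback just forwards its buffer to `fft_accumulate`; the FFT only runs once a full window of `FFT_LEN` samples has been collected, regardless of the device's preferred buffer size.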
Having browsed the source of the linked project, it appears that the audio callback passes a buffer size that is the preferred buffer size of the microphone device. I would recommend you buffer up the desired number of samples before calling the FFT. The following code is modified from FFTViewController.m in EZAudioFFTExample:
#pragma mark - EZMicrophoneDelegate
- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update time domain plot
        [self.audioPlotTime updateBuffer:buffer[0]
                          withBufferSize:bufferSize];
        // Setup the FFT if it's not already setup
        if (!_isFFTSetup) {
            [self createFFTWithBufferSize:bufferSize withAudioData:buffer[0]];
            _isFFTSetup = YES;
        }
        int samplesRemaining = bufferSize;
        int sourceOffset = 0;
        while (samplesRemaining > 0)
        {
            // Copy as much as fits into the FFT buffer, not more than remains.
            int samplesToCopy = MIN(samplesRemaining, FFTLEN - _fftBufIndex);
            memcpy(_fftBuf + _fftBufIndex, buffer[0] + sourceOffset, samplesToCopy * sizeof(float));
            _fftBufIndex += samplesToCopy;
            sourceOffset += samplesToCopy;
            samplesRemaining -= samplesToCopy;
            if (_fftBufIndex == FFTLEN)
            {
                _fftBufIndex = 0;
                [self updateFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
            }
        }
    });
}
In the modified program, FFTLEN is a value that you define, _fftBuf is an array of floats that you allocate (it needs to hold FFTLEN elements), and _fftBufIndex is an integer tracking the write position into the array.
On a separate note, I would recommend making a copy of the buffer parameter before calling the async delegate. The reason I say this is that, looking at the source for EZMicrophone, it appears to recycle the buffer, so you'll have a race condition.
Thanks Jaket for the suggestion. Buffering is the way to go, and here is my working implementation of that same function, now with an adjustable FFT window:
- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
        // Decibel Calculation.
        float one = 1.0;
        float meanVal = 0.0;
        float tiny = 0.1;
        vDSP_vsq(buffer[0], 1, buffer[0], 1, bufferSize);
        vDSP_meanv(buffer[0], 1, &meanVal, bufferSize);
        vDSP_vdbcon(&meanVal, 1, &one, &meanVal, 1, 1, 0);
        // Exponential moving average of the dB level to only pick up continuous sounds.
        float currentdb = 1.0 - (fabs(meanVal) / 100);
        if (lastdbValue == INFINITY || lastdbValue == -INFINITY || isnan(lastdbValue)) {
            lastdbValue = 0.0;
        }
        dbValue = ((1.0 - tiny) * lastdbValue) + tiny * currentdb;
        lastdbValue = dbValue;
        // NSLog(@"dbval: %f", dbValue);
        // Buffer up samples until a full FFT window is available.
        int samplesToCopy = fmin(bufferSize, FFTLEN - _fftBufIndex);
        for (size_t i = 0; i < samplesToCopy; i++) {
            _fftBuf[_fftBufIndex + i] = buffer[0][i];
        }
        _fftBufIndex += samplesToCopy;
        _samplesRemaining -= samplesToCopy;
        if (_fftBufIndex == FFTLEN) {
            // Setup the FFT if it's not already setup
            if (!_isFFTSetup) {
                [self createFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
                _isFFTSetup = YES;
            }
            [self updateFFTWithBufferSize:FFTLEN withAudioData:_fftBuf];
            _fftBufIndex = 0;
            _samplesRemaining = FFTLEN;
        }
    });
}

Keep a rotating sprite from going off screen, but bounce back

I'm using a technique to control a sprite by rotating left/right and then accelerating forward. I have two questions regarding it. (The code is pasted together from different classes due to polymorphism; if it doesn't make sense, let me know. The movement works well, and so does the off-screen detection.)
When the player moves off screen I call the Bounce method. I want the player not to be able to move off screen, but to change direction and go back. This works on the top and bottom edges, but only very seldom on the left and right edges. Mostly it does a weird bounce and leaves the screen.
I would like to modify the acceleration algorithm so that I can set a max speed AND an acceleration speed. At the moment TangentalVelocity does both.
float TangentalVelocity = 8f;

// Called when up arrow is down
private void Accelerate()
{
    Velocity.X = (float)Math.Cos(Rotation) * TangentalVelocity;
    Velocity.Y = (float)Math.Sin(Rotation) * TangentalVelocity;
}

// Called once per update
private void Deccelerate()
{
    Velocity.X -= Friction * Velocity.X;
    Velocity.Y -= Friction * Velocity.Y;
}

// Called when player hits screen edge
private void Bounce()
{
    Rotation = Rotation * -1;
    Velocity = Velocity * -1;
    SoundManager.Vulture.Play();
}
// Screen edge detection
public void CheckForOutOfScreen()
{
    // Check if ABOVE screen
    if (Position.Y - Origin.Y / 2 < GameEngine.Viewport.Y) { OnExitScreen(); }
    // Check if BELOW screen
    else if (Position.Y + Origin.Y / 2 > GameEngine.Viewport.Height) { OnExitScreen(); }
    // Check if RIGHT of screen
    else if (this.Position.X + Origin.X / 2 > GameEngine.Viewport.Width) { OnExitScreen(); }
    // Check if LEFT of screen
    else if (this.Position.X - Origin.X / 2 < GameEngine.Viewport.X) { OnExitScreen(); }
    else
    {
        if (OnScreen == false)
            OnScreen = true;
    }
}

virtual public void OnExitScreen()
{
    OnScreen = false;
    Bounce();
}
Let's see if I understood correctly. First, you rotate your sprite. After that, you accelerate it forward. In that case:
// Called when player hits screen edge
private void Bounce()
{
    Rotation = Rotation * -1;
    Velocity = Velocity * -1; // I THINK THIS IS THE PROBLEM
    SoundManager.Vulture.Play();
}
Let's suppose your sprite has no rotation when it looks up. In that case, if it's looking right it has rotated 90º, and its speed is v = (x, 0), with x > 0. When it goes out of the screen, its rotation becomes -90º and the speed v = (-x, 0). BUT you're pressing the up key, and the Accelerate method is called, so the speed immediately becomes v = (x, 0) again. The sprite goes out of the screen again, changes its velocity to v = (-x, 0), and so on. That produces the weird bounce.
I would try doing this:
private void Bounce()
{
    Rotation = Rotation * -1;
    SoundManager.Vulture.Play();
}
and check whether it still works for top and bottom as well. I think it will. If not, use two different Bounce methods, one for top/bottom and another one for left/right.
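If separate bounce methods turn out to be needed, the edge-dependent reflection can be sketched in C (the types here are illustrative): a vertical wall flips only the x-component of the velocity, a horizontal wall only the y-component, and the component parallel to the wall is left alone.

```c
typedef struct { float x, y; } Vec2;
typedef enum { EDGE_VERTICAL, EDGE_HORIZONTAL } Edge;

/* Reflect a velocity off a screen edge, leaving the parallel component alone. */
Vec2 reflect_velocity(Vec2 v, Edge edge)
{
    if (edge == EDGE_VERTICAL)
        v.x = -v.x;   /* left or right wall */
    else
        v.y = -v.y;   /* top or bottom wall */
    return v;
}
```

Flipping the whole vector (as the original Bounce does) only behaves like a proper reflection when the motion is perpendicular to the edge, which is why the bug shows up at the sides.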
Your second question... It's a bit difficult. In physics, things reach a maximum speed because the air friction force (or another force) is speed-dependent. So if you increase your speed, the force also increases... in the end, that force balances the other one and the speed becomes constant. I think the best way to simulate a terminal speed is to use this concept. If you want to read more about terminal velocity, take a look at Wikipedia: http://en.wikipedia.org/wiki/Terminal_velocity
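The terminal-velocity argument can be checked numerically with a few lines of C: Euler-integrating v' = (F - k·v) / m drives the speed toward the terminal value F / k (the constants below are made up purely for illustration).

```c
/* Integrate v' = (force - k * v) / mass from v = 0 with fixed step dt.
 * The speed approaches the terminal value force / k, where the drag
 * term exactly cancels the motor force. */
double settle_speed(double force, double k, double mass,
                    double dt, int steps)
{
    double v = 0.0;
    for (int i = 0; i < steps; i++)
        v += dt * (force - k * v) / mass;
    return v;
}
```

With force = 10 and k = 2 the speed settles at 5 = F / k; the C# snippet that follows approximates the same balance by clamping acceleration at zero once friction matches the motor force.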
private void Accelerate()
{
    // Acceleration = motor force minus the speed-dependent air friction.
    Acceleration.X = MotorForce - airFriction.X;
    Acceleration.Y = MotorForce - airFriction.Y;
    // Friction can never push the sprite backwards: clamp at zero.
    if (Acceleration.X < 0)
    {
        Acceleration.X = 0;
    }
    if (Acceleration.Y < 0)
    {
        Acceleration.Y = 0;
    }
    Velocity.X += (float)Math.Cos(Rotation) * Acceleration.X;
    Velocity.Y += (float)Math.Sin(Rotation) * Acceleration.Y;
    // The friction magnitude grows with speed.
    airFriction.X = Math.Abs(airFrictionConstant * Velocity.X);
    airFriction.Y = Math.Abs(airFrictionConstant * Velocity.Y);
}
First, we calculate the acceleration using a "MotorForce" and the air friction. The MotorForce is the force we apply to move our sprite. The air friction always opposes the movement, so its magnitude is always positive; the rotation gives the vector its direction. If the acceleration is lower than 0, that means the air friction is greater than our MotorForce. A friction force can't do that, so if acceleration < 0, we make it 0: the air force has reached our motor force and the speed becomes constant.
After that, the velocity increases according to the acceleration. Finally, we update the air friction value.
One thing more: you may also want to update the value of airFriction in the Deccelerate method, even if you don't use it there.
If you have any problem with this, or you don't understand something (sometimes my English is not very good ^^"), say so =)

CMMotionManager and the Angular Path

I'm trying to code a very basic panorama app.
By using CMMotionManager I get motion updates to determine the appropriate moment to take the next picture. Sometimes this code works perfectly well, but in most cases it takes a photo too early or too late. Please help me understand what exactly I'm doing wrong.
Here is an example of code for an iPhone in its portrait mode.
#define CC_RADIANS_TO_DEGREES(__ANGLE__) ((__ANGLE__) * 57.29577951f) // 180 / PI
#define FOV_IN_PORTRAIT_MODE 41.5
double prevTime;
double currAngle;

- (void)motionUpdate:(CMDeviceMotion *)motion
{
    if (!prevTime) {
        prevTime = motion.timestamp;
        return;
    }
    // Calculate the delta time between the previous motionUpdate call and _now_
    double deltaTime = motion.timestamp - prevTime;
    prevTime = motion.timestamp;
    // Y axis rotation
    CMRotationRate rotationRate = motion.rotationRate;
    double rotation = rotationRate.y;
    if (fabs(rotation) < 0.05) // ignore bias
        return;
    // Calculate the angular distance
    double anglePathRad = rotation * deltaTime;
    // Calculate the total panorama angle
    currAngle += CC_RADIANS_TO_DEGREES(anglePathRad);
    if (fabs(currAngle) >= FOV_IN_PORTRAIT_MODE) {
        currAngle = 0;
        [self takePicture];
    }
}
