How do I programmatically configure the accelerometer for the iPhone? - ios

I'm not sure how to set the null point of the iPhone's accelerometer in Xcode. I have an app with a UIImage that moves around the screen, and it works well.
What I want to do is fix the null point in code so that when the iPhone is held the way you would hold it to read a text (i.e., at roughly a 45-degree angle), it is treated as level (e.g., the image doesn't move). I don't need calibration options for the user to set; I just need to set it in the code.
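A common approach is to read the raw accelerometer values with Core Motion and rotate (or offset) them by the desired resting angle before driving the image. Below is a minimal sketch, assuming a CMMotionManager property named motionManager and a hypothetical moveImageWithX:y: helper; the 45-degree offset is hard-coded rather than user-calibrated, as requested.

#import <CoreMotion/CoreMotion.h>

- (void)startTiltUpdates {
    self.motionManager = [[CMMotionManager alloc] init];
    self.motionManager.accelerometerUpdateInterval = 1.0 / 60.0;
    [self.motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                             withHandler:^(CMAccelerometerData *data, NSError *error) {
        if (!data) { return; }
        // Rotate the measured gravity vector 45 degrees around the x-axis so that
        // a phone held at a typical reading angle reports roughly (0, 0, -1),
        // the same as a phone lying flat.
        double angle = M_PI_4;
        double x = data.acceleration.x;
        double y = data.acceleration.y * cos(angle) - data.acceleration.z * sin(angle);
        // x and y are now ~0 at the chosen null orientation; use them to move the image.
        [self moveImageWithX:x y:y]; // hypothetical helper that repositions the image view
    }];
}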

Related

iOS: Make sure phone is portrait, and perpendicular to floor

Forgive me if this is a newbie question, but I am new to this part of iOS programming and I'm a bit confused.
I am working on a demo that requires the iPhone to be held in portrait mode and (as close as possible to) perpendicular to the floor.
Simply making sure the phone is locked in portrait mode is not enough, as I am doing image processing with the rear camera and I need to make sure the image is captured with the phone held upright.
(i.e., the screen may be portrait, but the user could still be holding the phone sideways)
What's the best way to address this?
I understand that the on-board IMU will allow me to determine where the ground is (i.e. gravity), but how do I address rotation around a specific axis?
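One way to check this is to look at the gravity vector from CMDeviceMotion: when the phone is upright in portrait, almost all of gravity falls along the device's negative y axis and almost none on x or z. A rough sketch, assuming a CMMotionManager property named motionManager and illustrative thresholds:

[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (!motion) { return; }
    CMAcceleration g = motion.gravity;
    // Portrait and perpendicular to the floor: gravity points down the long axis
    // of the screen (y close to -1) with little on the screen normal (z) or the
    // short axis (x). Loosen or tighten the tolerances as needed.
    BOOL uprightPortrait = fabs(g.y + 1.0) < 0.1 && fabs(g.z) < 0.15 && fabs(g.x) < 0.15;
    if (uprightPortrait) {
        // Safe to capture the image here.
    }
}];

Note that gravity alone cannot tell you how the device is rotated around the vertical axis; for that you would need the yaw from the fused attitude or a compass heading.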

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written a long time back, so I needed to make some modifications to use it the way I want to in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes as I wanted to support different screen sizes. I also changed the dimensions of the preview layer to fill the screen.
The session preset is set to AVCaptureSessionPresetPhoto for both iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, the faces get detected, but there seems to be an error in the coordinates at which the rectangles are drawn.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem then is that the camera preview is not the desired size, which is the full screen.
So now I am wondering why this happens. I notice a function in SquareCam, videoPreviewBoxForGravity:, which processes the gravity type and seems to make adjustments:
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here: the frame size stays the same regardless of the gravity type.
Finally, I read somewhere else that when the gravity is set to AspectFill, part of the feed gets cropped, which is understandable, similar to UIImageView's scaleAspectFill.
My question is: how can I make the right adjustments so that this app works for any video gravity type and any size of preview layer?
I have had a look at some related questions; for example, CIDetector give wrong position on facial features seems to have a similar issue, but it does not help.
Thanks in advance.
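One way to sidestep the manual gravity math in videoPreviewBoxForGravity: is to have AVFoundation report faces as metadata objects and let the preview layer itself convert the coordinates; transformedMetadataObjectForMetadataObject: accounts for whatever videoGravity and bounds the layer has. A hedged sketch only (property names such as session and previewLayer are placeholders, and smile detection would still need the CIDetector path on the video frames):

AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([self.session canAddOutput:metadataOutput]) {
    [self.session addOutput:metadataOutput];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
}

- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *object in metadataObjects) {
        // The returned object's bounds are already in the preview layer's
        // coordinate space, regardless of the layer's video gravity.
        AVMetadataObject *face =
            [self.previewLayer transformedMetadataObjectForMetadataObject:object];
        CGRect faceRectInLayer = face.bounds;
        // ... position the highlight rectangle using faceRectInLayer ...
    }
}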

iOS: how to detect that an iPad is put in a special frame

I have programmed an iPad application whose behaviour I would like to change when it is put in a wooden frame (other materials could be added). To simplify things, the background should change whenever the iPad is inside this frame, and there must be no tap or touch interaction, just putting the iPad inside the frame.
Of course, we could program a specific gesture on the screen, like double tapping or swiping, but that is not the solution we are looking for.
Another thought has been to detect the lack of movement for a certain amount of time, but that would not ensure that the iPad is inside the frame.
I have thought about interacting with magnets (thinking of Smart Covers) and the sleep sensor on the right side of the iPad, but I don't know how to do it.
I cannot see any other useful sensor.
Any suggestion?
A combination of the accelerometer and the camera seems like an idea worth trying out:
Scan the accelerometer data to detect a spike followed by a flat line (= putting the iPad into the frame, then resting); see the sketch after this answer.
After detecting the motion event, use the back camera (maybe combined with the flash) to detect a pattern image fixed inside the frame for this purpose. It might be necessary to put the pattern into a little hole so that at least a blurry image can be captured.
The second step is there to distinguish the frame from any other surface the iPad might be placed on.
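A rough sketch of the first step, assuming a CMMotionManager property named motionManager; the 0.5 g threshold, the 2-second rest window, and the verifyFramePatternWithCamera helper are placeholders that would need tuning.

self.motionManager.accelerometerUpdateInterval = 1.0 / 50.0;
__block NSDate *lastSpike = nil;
[self.motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                         withHandler:^(CMAccelerometerData *data, NSError *error) {
    if (!data) { return; }
    CMAcceleration a = data.acceleration;
    // At rest the magnitude is about 1 g; a bump while sliding the iPad into
    // the frame shows up as a clear deviation from that.
    double magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    if (fabs(magnitude - 1.0) > 0.5) {
        lastSpike = [NSDate date];
    } else if (lastSpike && [[NSDate date] timeIntervalSinceDate:lastSpike] > 2.0) {
        // Spike followed by ~2 seconds of stillness: run the camera check.
        lastSpike = nil;
        [self verifyFramePatternWithCamera]; // hypothetical second step
    }
}];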

DeepEnd(cydia tweak) like 3D background for iOS

I am trying to emulate a 3D background in one of the applications we are developing.
Check this video for what I am trying to do: http://www.youtube.com/watch?v=429kM-yXGz8
Here is what I am doing to emulate this 3D illusion in our application for iPad.
I have a RootView with 3 rounded buttons centered on the screen which animate in a circular motion.
At the bottom of the screen I have some banners of size 600x200 which keep rotating with a flip animation.
I also have some graphical text that is part of the background and contains the welcome message.
All elements are individual graphics, so when the user moves the iPad we only move the background, based on the position of the iPad using the x, y, z values of the accelerometer.
The background moves accordingly; however, this is not enough for a 3D illusion, so we decided to add shadows to the graphical elements (buttons, banners, text) and move the shadows according to the iPad's position.
However, the result is not convincing, and the accelerometer does not update its values when the user, standing and facing the iPad straight on, moves it left and right.
I was wondering if anyone has tried to achieve something similar with success, or knows of any resource that could help. I am just unsure whether using only the accelerometer will work, or whether I should go with the gyroscope.
Using face detection to simulate a 3D effect has already been done (by me). You can download a complete sample from http://evict.nl/code/face-tracking and see the video on that page for a quick demo.
You should definitely use both: the accelerometer (movement) and the gyroscope (device angle). But for a true 3D effect you probably need to use the camera plus face detection.
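For the accelerometer/gyroscope part, CMDeviceMotion already fuses both sensors, so you can drive the parallax from the attitude instead of the raw accelerometer. A minimal sketch, where motionManager, backgroundView, shadowView and the 40-point multiplier are illustrative only:

self.motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (!motion) { return; }
    // pitch = tilt around the device's short axis, roll = tilt around its long axis.
    CGFloat dx = (CGFloat)(motion.attitude.roll  * 40.0);
    CGFloat dy = (CGFloat)(motion.attitude.pitch * 40.0);
    // Shift the background opposite to the tilt; move the shadows by a smaller
    // amount in the other direction to strengthen the depth cue.
    self.backgroundView.transform = CGAffineTransformMakeTranslation(-dx, -dy);
    self.shadowView.transform = CGAffineTransformMakeTranslation(dx * 0.5, dy * 0.5);
}];

Capturing a reference attitude at launch and using multiplyByInverseOfAttitude: to work with relative angles usually gives a steadier result than using absolute pitch and roll.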

Setting white balance and exposure mode for the iPhone camera + enum default

I am using the back camera of an iPhone 4 and going through the standard and lengthy process of creating an AVCaptureSession and adding an AVCaptureDevice to it.
Before attaching the AVCaptureDeviceInput of that camera to the session, I am testing my understanding of white balance and exposure, so I am trying this:
NSError *error = nil;
if ([self.theCaptureDevice lockForConfiguration:&error]) {
    [self.theCaptureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeLocked];
    [self.theCaptureDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
    [self.theCaptureDevice unlockForConfiguration];
}
1- Given that the various options for white balance mode are in an enum, I would have thought that the default is always zero, since the enum-typed variable is never assigned a value. If I breakpoint and po the values in the debugger, I find that the default white balance mode is actually set to 2. Unfortunately, the AVCaptureDevice header does not say what the defaults are for the different camera settings.
2- This might sound silly, but can I assume that once I stop the app, all settings for white balance and exposure mode go back to their defaults, so that if I start another app right after, the camera device is not somehow stuck on those "hardware settings"?
After a bit more research and help, I found the answers:
1- All camera settings (white balance, focus and exposure) default to their "continuous" mode, so that the camera is continuously adjusting all of them. Check AVCaptureDevice.h for the enum values.
2- All apps function in isolation. The camera stops when the app that uses it moves to the background; when a new app starts the camera, the defaults above are applied again.
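To double-check point 1 in code, you can log the properties before touching them and compare against the enum constants; the value 2 seen in the debugger matches AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance. A small sketch, with the isWhiteBalanceModeSupported: guard that is worth keeping before any lockForConfiguration: changes:

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Defaults before any configuration; all three report their "continuous" mode.
NSLog(@"white balance: %ld (continuous = %ld)",
      (long)camera.whiteBalanceMode,
      (long)AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance);
NSLog(@"exposure: %ld (continuous = %ld)",
      (long)camera.exposureMode,
      (long)AVCaptureExposureModeContinuousAutoExposure);
NSLog(@"focus: %ld (continuous = %ld)",
      (long)camera.focusMode,
      (long)AVCaptureFocusModeContinuousAutoFocus);
// Not every device supports every mode, so guard before setting:
if ([camera isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked]) {
    // ... lockForConfiguration:, set the mode, unlockForConfiguration ...
}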
