How to subview a camera view? - iOS

I am making an app that will let the user see themselves in a 'mirror' (the front-facing camera on the device). I know of multiple ways of presenting a UIImagePickerController with a view overlay, but I want my app to work the opposite way: the camera view should be a subview of the main view, without the shutter animation, without the ability to capture photos or videos, and without being full screen. Any ideas?

The best way to accomplish this is to skip the built-in UIImagePickerController and use the AVFoundation classes instead.
You want to create an AVCaptureSession and set the appropriate inputs and outputs. Once it's configured, you can get an AVCaptureVideoPreviewLayer, which can be added to a view that you have configured in your view controller. The preview layer has a number of properties that let you control how the preview is displayed.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:output];
// Set up the camera input
NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
// You could check for the front or back camera here; for simplicity just grab the first device
AVCaptureDevice *device = [possibleDevices objectAtIndex:0];
NSError *error = nil;
// Create an input and add it to the session
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error]; // Handle errors
// Set the session preset
session.sessionPreset = AVCaptureSessionPresetMedium; // Or another preset supported by the input device
[session addInput:input];
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
// Set the preview layer frame
previewLayer.frame = self.cameraView.bounds;
// Now you can add this layer to a view of your view controller
[self.cameraView.layer addSublayer:previewLayer];
[session startRunning];
You can then use captureStillImageAsynchronouslyFromConnection:completionHandler: on the still image output to capture an image.
For more information on how AVFoundation is structured, and for more detailed examples, check out the Apple docs.
Apple's AVCamDemo sample lays all of this out as well.
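For example, capturing a still might look something like the sketch below (assuming the `output` still image output configured above and a running session; the connection lookup and JPEG conversion are standard AVFoundation calls):

```objectivec
// Grab the video connection from the still image output configured earlier
AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
[output captureStillImageAsynchronouslyFromConnection:connection
                                    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    if (imageSampleBuffer) {
        // Convert the sample buffer to JPEG data, then to a UIImage
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [UIImage imageWithData:imageData];
        // Use the image here (display it, save it, etc.)
    }
}];
```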

Related

How to change AVCaptureMovieFileOutput video orientation during running session?

I have written code that captures the device's video input, and so far it is working fine. Here is what I have set up:
// add preview layer
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.videoView.layer addSublayer:_previewLayer];
// add movie output
_movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
[_session addOutput:_movieFileOutput];
AVCaptureConnection *movieFileOutputConnection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
movieFileOutputConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
// start session
[_session startRunning];
where:
- (AVCaptureVideoOrientation)videoOrientationFromCurrentDeviceOrientation {
    switch ([[UIApplication sharedApplication] statusBarOrientation]) {
        case UIInterfaceOrientationPortrait:
            return AVCaptureVideoOrientationPortrait;
        case UIInterfaceOrientationLandscapeLeft:
            return AVCaptureVideoOrientationLandscapeLeft;
        case UIInterfaceOrientationLandscapeRight:
            return AVCaptureVideoOrientationLandscapeRight;
        case UIInterfaceOrientationPortraitUpsideDown:
            return AVCaptureVideoOrientationPortraitUpsideDown;
        case UIInterfaceOrientationUnknown:
            return AVCaptureVideoOrientationPortrait; // fall back to portrait; 0 is not a valid orientation
    }
}
Now when interface orientation changes I want my output also to change, so I have this:
- (void)updatePreviewLayer {
    _previewLayer.frame = CGRectMake(0, 0, self.videoView.frame.size.width, self.videoView.frame.size.height);
    _previewLayer.connection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
    [_session beginConfiguration];
    AVCaptureConnection *movieFileOutputConnection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
    movieFileOutputConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
    [_session commitConfiguration];
}
But alas, it is not working. It seems that once I first set the video orientation on the movie output, it stays like that; it cannot be changed later. So if I start filming in landscape mode and then change to portrait, the video will be fine for the landscape part, but the portrait part will be rotated. It is the same if I start in portrait mode: then the landscape part will be rotated.
Is there any way to do this right?
Try adding this before you start your session:
[_movieFileOutput setRecordsVideoOrientationAndMirroringChanges:YES asMetadataTrackForConnection:movieFileOutputConnection];
The header-file documentation for this method makes it sound very much like what you're looking for:
Controls whether or not the movie file output will create a timed metadata track that records samples which reflect changes made to the given connection's videoOrientation and videoMirrored properties during recording.
There's more interesting information there; I'd read it all.
However, this method doesn't actually rotate your frames, it uses timed metadata to instruct players to do it at playback time, so it's possible that not all players will support this feature. If that's a deal breaker, then you can abandon AVCaptureMovieFileOutput in favour of the lower level AVCaptureVideoDataOutput + AVAssetWriter combination, where your videoOrientation changes actually rotate the frames, resulting in files that will playback correctly in any player:
If an AVCaptureVideoDataOutput instance's connection's videoOrientation or videoMirrored properties are set to non-default values, the output applies the desired mirroring and orientation by physically rotating and/or flipping sample buffers as they pass through it.
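The lower-level route might be sketched as follows (a sketch only: the delegate queue name is a placeholder, and the AVAssetWriter plumbing that actually writes the rotated buffers to a file is omitted):

```objectivec
// With AVCaptureVideoDataOutput, changing the connection's videoOrientation
// physically rotates the sample buffers delivered to the delegate.
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoDataOutput setSampleBufferDelegate:self
                                   queue:dispatch_queue_create("video.output.queue", DISPATCH_QUEUE_SERIAL)];
[_session addOutput:videoDataOutput];

AVCaptureConnection *connection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
connection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
// In -captureOutput:didOutputSampleBuffer:fromConnection:, append the
// (already rotated) buffers to an AVAssetWriterInput to produce the movie file.
```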
P.S. I don't think you need the beginConfiguration/commitConfiguration pair if you're only changing one property; that pairing is for batching multiple modifications into one atomic update.
Have you tried pausing the session before changing configuration?
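If you want to try that, a minimal sketch might look like this (assuming the same `_session` and `_movieFileOutput` ivars from the question; whether pausing actually helps here is untested):

```objectivec
// Stop the session, change the output connection's orientation, then restart.
[_session stopRunning];
AVCaptureConnection *connection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
connection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
[_session startRunning];
```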

Lag when setting AVCaptureConnection video orientation

The setup in question uses AVFoundation to configure a camera whose output is displayed in an AVCaptureVideoPreviewLayer and is also processed as a pixel buffer. For the pixel buffer to be processed by the -processSampleBuffer: method, it must be provided in the correct orientation, which depends on the device orientation.
As far as I know, this can be done either by rotating the pixel buffer as it's given to the sample buffer delegate method (by accessing the raw pixel values in -captureOutput:didOutputSampleBuffer:fromConnection:), or by setting the videoOrientation property on the appropriate AVCaptureConnection, which ensures the pixel buffer is provided in the desired orientation. An outline of the setup is as follows:
- (void)setupCamera
{
    AVCaptureSession *session = [AVCaptureSession new];
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:nil];
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [session addInput:deviceInput];

    dispatch_queue_t videoOutputQueue = dispatch_queue_create("com.MyApp.videoQueue", DISPATCH_QUEUE_SERIAL);
    dispatch_set_target_queue(videoOutputQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));

    AVCaptureVideoDataOutput *videoOutput = [AVCaptureVideoDataOutput new];
    videoOutput.alwaysDiscardsLateVideoFrames = YES;
    [videoOutput setSampleBufferDelegate:self queue:videoOutputQueue];
    [session addOutput:videoOutput];
    // more setup
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    connection.videoOrientation = [self getCurrentOrientation]; // setting this to a new value causes the preview layer to freeze momentarily
    [self processSampleBuffer:sampleBuffer]; // some arbitrary image-processing method
}
This works as intended as far as the orientation of the pixel buffer is concerned. However, whenever the device is rotated to a new orientation, giving connection.videoOrientation a new value, the preview layer freezes for a fraction of a second. Blocking the delegate method's thread (e.g. by adding a sleep) doesn't freeze the preview layer, so that's not the problem. Any help towards a solution is hugely appreciated!

Camera in iOS app Tutorial

I was wondering if anyone was willing to share how to put a camera feature into an iOS app, or if anyone knew of a simple tutorial: NOT one with any buttons, just one that shows on the screen what the camera sees. I tried Apple's documentation, but it was too complex for my needs.
Thank you so much!
EDIT: Any simple tutorial will do fine. Like I said, I don't need anything else besides displaying what the camera sees.
I don't know of a simple tutorial, but adding a view that shows what the camera sees is super easy.
First:
Add a UIView to your interface builder that will be where the camera will be shown.
Second:
Add the AVFoundation framework to your project, and import it in your view controller's .m file:
#import <AVFoundation/AVFoundation.h>
Third:
Add these 2 variables to your interface variable declarations
AVCaptureVideoPreviewLayer *_previewLayer;
AVCaptureSession *_captureSession;
Fourth:
Add this code to your viewDidLoad. (The explanation of what it does is commented)
//-- Set up the capture session.
_captureSession = [[AVCaptureSession alloc] init];

//-- Create a video device and an input from that device. Add the input to the capture session.
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice == nil)
    assert(0);

//-- Add the device to the session.
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice
                                                                    error:&error];
if (error)
    assert(0);
[_captureSession addInput:input];

//-- Configure the preview layer
_previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_previewLayer setFrame:CGRectMake(0, 0,
                                   self.cameraPreviewView.frame.size.width,
                                   self.cameraPreviewView.frame.size.height)];

//-- Add the layer to the view that should display the camera input
[self.cameraPreviewView.layer addSublayer:_previewLayer];

//-- Start the camera
[_captureSession startRunning];
Notes:
The asserts will make the program exit in places where a camera is not available.
This only shows a preview of what the camera sees. If you want to manipulate the input, take a picture, or record video, you need to configure additional things, such as the session preset, and add the corresponding capture delegates. In that case you should follow a proper tutorial or read the documentation.

Setting a background image/view to live camera view?

Is it possible to set a background for a particular view controller to show a live camera view? If so, could one lead me in the right direction to make this possible?
Yes, definitely possible. You can embed a live camera feed in a UIView, which you can place anywhere you like.
Start by reading here: AVFoundation Reference. This is your framework.
The particular class that you are looking for is AVCaptureVideoPreviewLayer,
which works in unison with AVCaptureSession.
And this is an example project that covers everything you need: AVCam
Import:
#import <AVFoundation/AVFoundation.h>
To add a camera view to a controller's view, add this code to viewDidLoad:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input) {
    [session addInput:input];
}
AVCaptureVideoPreviewLayer *newCaptureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
newCaptureVideoPreviewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:newCaptureVideoPreviewLayer];
[session startRunning];
I think your best bet is to grab and understand this Apple sample code, called AVCam. You'll see in the code how to create an AVCaptureVideoPreviewLayer. You'll insert this as a sublayer of a UIView that you'll use as your "background".
Once you've got that working, that UIView will be just like any other part of your view hierarchy. You can treat it like a background UIImageView (albeit one that consumes a lot more battery power).

How to remove subview from memory, when returning to main view?

Suppose I have a main view. When I tap a button, it loads a subview that turns on the flashlight (LED); when I return to the main view, it should release the subview and turn the flashlight (LED) off.
- (void)loadView
{
    [self setView:[[[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]] autorelease]];
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // If a torch is supported, turn the flashlight on
    if ([device hasTorch])
    {
        // Create an AV session
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        // Create a device input and add it to the current session
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        [session addInput:input];
        // Create a video output and add it to the current session
        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        [session addOutput:output];
        // Start session configuration
        [session beginConfiguration];
        [device lockForConfiguration:nil];
        // Set the torch to on
        [device setTorchMode:AVCaptureTorchModeOn];
        [device unlockForConfiguration];
        [session commitConfiguration];
        // Start the session
        [session startRunning];
        // Keep the session around
        [self setAVSession:session];
        [session release]; // the property retains it; release the local reference
        [output release];
    }
}
Now, when I close the subview it should release it from memory and stops the flashlight.
- (void)viewDidUnload
{
    [self.view removeFromSuperview];
    [self autorelease];
    [AVSession stopRunning];
    [AVSession release], AVSession = nil;
    [super viewDidUnload];
}
But this viewDidUnload thing is not working, please tell me what I am doing wrong.
The problem is that you misunderstand the purpose of -viewDidUnload.
-viewDidUnload is only called in low memory situations when the system is trying to free up memory. In that case, inactive view controllers will release their views and call -viewDidUnload so that you can release any view-related resources.
Do not count on -viewDidUnload being called when the view controller becomes inactive or when the view controller is released.
Your view controller presumably knows when the AVSession should be ended, and it can end the session whenever it figures that out.
By the way, it's a good thing that your -viewDidUnload never ran; the call to [self autorelease] would probably have led to a crash. Why are you doing that? The code you've shown is part of a view controller. I don't know what the name of that class is, so I'll imagine that it's FlashlightViewController. Presumably, some other view controller is instantiating FlashlightViewController and somehow making it active, like this:
FlashlightViewController *fvc = [[FlashlightViewController alloc] initWithNibName:nil bundle:nil];
[self.navigationController pushViewController:fvc animated:YES];
[fvc release];
Here, the navigation controller will have retained fvc when it was pushed, and the controller that created fvc releases it because it no longer needs its reference to the new controller. When the flashlight controller has finished its work and is popped off the navigation stack, the navigation controller will release it. Since no other object has retained the flashlight controller, it will automatically be deallocated. When that happens, it'll release its view, and since no other object has retained the view, the view will also be deallocated.
The details of your situation might be different, but the idea is always the same: both view controller and view will be removed from memory when the view controller has been released.
Chances are you are retaining the view controller, and it's deciding to keep the view in memory just in case your app decides to display it again (this is the caching that Caleb mentions in the comment). Just because a view is not visible doesn't mean it will get unloaded.
If you have a memory leak (and here I mean the VC), obviously it makes sense to fix it.
However, I think your best option for enabling and disabling the flashlight is to do so in viewDidAppear and viewDidDisappear (or their ...will... variants).
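A sketch of that approach (the method overrides are standard UIViewController lifecycle callbacks; the setTorchEnabled: helper is hypothetical):

```objectivec
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [self setTorchEnabled:YES];
}

- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    [self setTorchEnabled:NO];
}

// Hypothetical helper: locks the device for configuration and sets the torch mode
- (void)setTorchEnabled:(BOOL)enabled {
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([device hasTorch] && [device lockForConfiguration:nil]) {
        device.torchMode = enabled ? AVCaptureTorchModeOn : AVCaptureTorchModeOff;
        [device unlockForConfiguration];
    }
}
```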
