Is there a way to take a picture in code on the iPhone without going through the Apple controls? I have seen a bunch of apps that do this, but I'm not sure what API call to use.
EDIT: As suggested in the comments below, I have now explicitly shown how the AVCaptureSession needs to be declared and initialized. It seems that a few people were doing the initialization wrong or declaring AVCaptureSession as a local variable in a method, which will not work.
The following code takes a picture using AVCaptureSession without user input:
// Get all cameras in the application and find the front camera.
AVCaptureDevice *frontalCamera;
NSArray *allCameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *camera in allCameras) {
    if (camera.position == AVCaptureDevicePositionFront) {
        frontalCamera = camera;
    }
}

// If we did not find the front camera, do not take a picture.
if (frontalCamera != nil) {
    // Start the process of getting a picture.
    session = [[AVCaptureSession alloc] init];

    // Set up an input with the front camera and add it to the session.
    NSError *error;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:frontalCamera error:&error];
    if (!error && [session canAddInput:input]) {
        [session addInput:input];

        // We need to capture a still image.
        AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];

        // Captured image settings (JPEG).
        [output setOutputSettings:
            [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil]];

        if ([session canAddOutput:output]) {
            [session addOutput:output];

            // Find the video connection on the still image output.
            AVCaptureConnection *videoConnection = nil;
            for (AVCaptureConnection *connection in output.connections) {
                for (AVCaptureInputPort *port in [connection inputPorts]) {
                    if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                        videoConnection = connection;
                        break;
                    }
                }
                if (videoConnection) { break; }
            }

            // Finally, take the picture.
            if (videoConnection) {
                [session startRunning];
                [output captureStillImageAsynchronouslyFromConnection:videoConnection
                                                    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                    if (imageDataSampleBuffer != NULL) {
                        NSData *imageData = [AVCaptureStillImageOutput
                            jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                        UIImage *photo = [[UIImage alloc] initWithData:imageData];
                    }
                }];
            }
        }
    }
}
The session variable is of type AVCaptureSession and is declared in the class's .h file (either as a property or as a private member of the class):
AVCaptureSession *session;
It will then need to be initialized somewhere, for instance in the class's init method:
session = [[AVCaptureSession alloc] init];
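Putting those two pieces together, a minimal sketch might look like this (the class name is illustrative):

// MyCameraController.h
@interface MyCameraController : UIViewController {
    AVCaptureSession *session; // must outlive the method that starts the capture
}
@end

// MyCameraController.m
- (id)init {
    self = [super init];
    if (self) {
        session = [[AVCaptureSession alloc] init];
    }
    return self;
}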
Yes, there are two ways to do this. One, available in iOS 3.0+, is to use the UIImagePickerController class, setting the showsCameraControls property to NO and setting the cameraOverlayView property to your own custom controls. Two, available in iOS 4.0+, is to configure an AVCaptureSession, providing it with an AVCaptureDeviceInput using the appropriate camera device, and an AVCaptureStillImageOutput. The first approach is much simpler and works on more iOS versions, but the second approach gives you much greater control over photo resolution and file options.
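As a rough sketch of the first approach (the overlay view here is assumed to be your own, and error handling is omitted):

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;                 // hide the default Apple controls
picker.cameraOverlayView = myCustomOverlayView;  // your own buttons/UI
[self presentModalViewController:picker animated:YES];
// Later, triggered from one of your overlay controls:
[picker takePicture];

The second approach is essentially what the AVCaptureSession code earlier on this page demonstrates.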
Related
I am developing a PhoneGap iOS application that uses the zxing barcode scanner library, but I have a problem: how do I implement camera auto focus?
Thank you.
My Code:
- (NSString *)setUpCaptureSession {
    NSError *error = nil;
    AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession = captureSession;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (!device) return @"unable to obtain video capture device";

    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) return @"unable to obtain video capture device input";

    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    if (!output) return @"unable to obtain video capture output";

    NSDictionary *videoOutputSettings = [NSDictionary
        dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                      forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = videoOutputSettings;
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    if (![captureSession canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        return @"unable to preset medium quality video capture";
    }
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    if ([captureSession canAddInput:input]) {
        [captureSession addInput:input];
    }
    else {
        return @"unable to add video capture device input to session";
    }

    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }
    else {
        return @"unable to add video capture output to session";
    }

    // Set up the capture preview layer.
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];

    // Run [captureSession startRunning] on the next event-loop pass.
    [captureSession performSelector:@selector(startRunning) withObject:nil afterDelay:0];

    return nil;
}
Unfortunately, it appears that the plugin you are using doesn't expose the capture device directly. It does, however, expose the AVCaptureSession via the captureSession property. From this property you should be able to work backwards to get the AVCaptureDeviceInput and, from it, the AVCaptureDevice:
AVCaptureSession *session = [zxing captureSession]; // assuming zxing is the variable holding a reference to your zxing instance
NSArray *inputs = [session inputs];
AVCaptureDeviceInput *input = (AVCaptureDeviceInput *)inputs[0]; // obtain the first input device
AVCaptureDevice *device = input.device;
NSError *error;
if ([device lockForConfiguration:&error])
{
    device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    [device unlockForConfiguration];
}
else
{
    // TODO: handle the device lock error
}
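One caveat worth adding (my assumption, not part of the original answer): not every device supports continuous autofocus, so it is safer to guard the assignment:

// Only set the focus mode if this device actually supports it.
if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
}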
I am new to Objective-C and iOS. I want to record video and, at run time, get each frame as raw data for some processing. How can I achieve this? Please, can anyone help me? Thanks in advance. Here is my code so far:
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCaptureSession];
}
The viewDidAppear method:
- (void)viewDidAppear:(BOOL)animated
{
    if (!_bpickeropen)
    {
        _bpickeropen = true;
        _picker = [[UIImagePickerController alloc] init];
        _picker.delegate = self;

        NSArray *sourceTypes = [UIImagePickerController availableMediaTypesForSourceType:_picker.sourceType];
        if (![sourceTypes containsObject:(NSString *)kUTTypeMovie])
        {
            NSLog(@"device not supported");
            return;
        }

        _picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        _picker.mediaTypes = [NSArray arrayWithObjects:(NSString *)kUTTypeMovie, nil]; // ,(NSString *)kUTTypeImage
        _picker.videoQuality = UIImagePickerControllerQualityTypeHigh;
        [self presentModalViewController:_picker animated:YES];
    }
}
// Delegate routine that is called when a sample buffer is written.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes
                                             length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

    // Do whatever with your bytes.
    NSLog(@"bytes per row %zd", bytesPerRow);
    [dataForRawBytes writeToFile:[self datafilepath] atomically:YES];
    NSLog(@"Sample Buffer Data is %@\n", dataForRawBytes);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}

Problems:
1. Here I am getting the raw bytes only once.
2. After that, I want to store these raw bytes as a binary file in the app path.
Here I am setting the delegate of the output:
// Create and configure a capture session and start it running.
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice.
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input)
    {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format.
    output.videoSettings =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data.
    [session startRunning];

    // Assign session to an ivar.
    // [self setSession:session];
}
I appreciate any help. Thanks in advance.
You could look into the AVFoundation framework. It allows you access to the raw data generated from the camera.
This link is a good intro-level project on AVFoundation video camera usage.
In order to get individual frames from the video output, you could use the AVCaptureVideoDataOutput class from the AVFoundation framework.
Hope this helps.
EDIT: You could look at the delegate functions of AVCaptureVideoDataOutputSampleBufferDelegate, in particular the captureOutput:didOutputSampleBuffer:fromConnection: method. This will be called every time a new frame is captured.
If you do not know how delegates work, this link is a good example of delegates.
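For the second problem in the question (appending each frame's raw bytes to a binary file rather than overwriting it on every frame), one option is an NSFileHandle. A minimal sketch, assuming the datafilepath helper from the question:

// Called for every captured frame; appends the raw BGRA bytes to a binary file.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    size_t length = CVPixelBufferGetBytesPerRow(cameraFrame) * CVPixelBufferGetHeight(cameraFrame);
    NSData *frameData = [NSData dataWithBytes:CVPixelBufferGetBaseAddress(cameraFrame) length:length];

    // Create the file once, then keep appending to it.
    NSString *path = [self datafilepath]; // assumed helper from the question
    NSFileManager *fm = [NSFileManager defaultManager];
    if (![fm fileExistsAtPath:path]) {
        [fm createFileAtPath:path contents:nil attributes:nil];
    }
    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    [handle seekToEndOfFile];
    [handle writeData:frameData];
    [handle closeFile];

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}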
I have a View Controller that is using AV Foundation. As soon as the View controller loads, the user is able to see exactly what the input device is seeing. This is because I have started the AVCaptureSession in the viewDidLoad method implementation.
Here is the code that I have in viewDidLoad:
[super viewDidLoad];

AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetHigh];

AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = [[NSError alloc] init];
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput])
    [session addInput:deviceInput];

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
[rootLayer insertSublayer:previewLayer atIndex:0];

[session startRunning];
And then I have an IBAction method implementation that has been connected to a UIButton for this view controller. Here is the IBAction implementation's code:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
AVCaptureConnection *connection = [[AVCaptureConnection alloc] init];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    NSLog(@"Image Data Captured: %@", imageDataSampleBuffer);
    NSLog(@"Any errors? %@", error);
    if (imageDataSampleBuffer) {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        NSLog(@"%@", image);
    }
}];
When I run the app on my iPhone and press the button connected to this implementation, I get this error in the console:
*** -[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] - inactive/invalid connection passed.
I looked in the Xcode docs, which say, "You can only add an AVCaptureConnection instance to a session using addConnection: if canAddConnection: returns YES", but when I try to call addConnection: or canAddConnection: on my AVCaptureSession object, they don't even show up as available options.
I also read somewhere else that for iOS you don't have to manually create a connection, but this doesn't make sense to me, because my IBAction code calls captureStillImageAsynchronouslyFromConnection:, which requires a connection as input.
So if the connection is created automatically for you, how do I get a reference to it so I can use it for the input?
This is my first time working with AV Foundation and I just can't seem to figure out this connection error.
Any help is greatly appreciated.
When you set up the camera, add an AVCaptureStillImageOutput to your AVCaptureSession.
self.stillImageOutput = AVCaptureStillImageOutput()
let stillSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
self.stillImageOutput.outputSettings = stillSettings
if (self.session.canAddOutput(self.stillImageOutput)) {
    self.session.addOutput(self.stillImageOutput)
}
Then, when taking the photo, get the AVCaptureConnection from the stillImageOutput.
func takePhoto(sender: UIButton!) {
    let connection = self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo)
    if (connection.enabled) {
        self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(connection, completionHandler: { (buffer: CMSampleBuffer!, error: NSError!) -> Void in
            println("picture taken") // this never gets executed
        })
    }
}
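For reference, since the question's code is Objective-C, the same fix there might look like the sketch below, assuming you keep the session and the output in properties or ivars rather than locals:

// During setup, add the still image output to the session once.
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput setOutputSettings:[NSDictionary dictionaryWithObject:AVVideoCodecJPEG forKey:AVVideoCodecKey]];
if ([session canAddOutput:stillImageOutput]) {
    [session addOutput:stillImageOutput];
}

// When taking the photo, ask the output for the connection the session created,
// instead of alloc/init-ing an AVCaptureConnection yourself.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.enabled) {
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                  completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
        NSLog(@"picture taken");
    }];
}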
Did you have a property to retain the AVCaptureSession, like
@property (nonatomic, strong) AVCaptureSession *captureSession;
//...
self.captureSession = session;
I hope it helps you.
Right now I'm trying to allow users to take pictures in my app without using UIImagePickerController. I'm using AVCaptureSession and all the related classes to load a camera feed as a sublayer on a full-screen view I have on one of my view controllers. The code works but unfortunately the camera is very slow to load. Usually takes 2-3 seconds. Here is my code:
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

if ([session canSetSessionPreset:AVCaptureSessionPresetHigh])
    // Check size-based configs are supported before setting them.
    [session setSessionPreset:AVCaptureSessionPresetHigh];
[session setSessionPreset:AVCaptureSessionPreset1280x720];

CALayer *viewLayer = self.liveCameraFeed.layer;
//NSLog(@"viewLayer = %@", viewLayer);

AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = viewLayer.bounds;
[viewLayer addSublayer:captureVideoPreviewLayer];

AVCaptureDevice *device;
if (isFront)
{
    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
else
{
    device = [self frontCamera];
}

AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];

NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
    //NSLog(@"ERROR: trying to open camera: %@", error);
}
[session addInput:input];
[session startRunning];

stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
Is there any way to speed it up? I've already tried loading it on another thread using Grand Central Dispatch and NSThread, and though that stopped the app from freezing, it made the camera take even longer to load. Any help is appreciated.
In my case, I needed to wait for the session to start running:
dispatch_async(queue) {
    self.session.startRunning()
    dispatch_async(dispatch_get_main_queue()) {
        self.delegate?.cameraManDidStart(self)
        let layer = AVCaptureVideoPreviewLayer(session: self.session)
    }
}
Waiting for AVCaptureSession's startRunning function was my solution too. You can run startRunning in a global async block and then add your AVCaptureVideoPreviewLayer on the main thread.
Swift 4 sample:
DispatchQueue.global().async {
    self.captureSession.startRunning()
    DispatchQueue.main.async {
        let videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    }
}
You can load the AVCaptureSession in viewWillAppear. That works for me: when I switch from another view to the view with the AVCaptureSession, I see the camera running immediately.
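A minimal sketch of that idea, assuming your session setup lives in a helper such as setupCaptureSession (a hypothetical name):

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    // Build and start the session before the view is on screen,
    // so the preview is already live when the transition finishes.
    if (self.session == nil) {
        [self setupCaptureSession]; // assumed to create and start the AVCaptureSession
    }
}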
For anyone interested, the solution I came up with was preloading the camera on a different thread and keeping it open.
I tried all the above methods, but none were as fast as Instagram or Facebook, so I loaded the AVCaptureDevice, AVCaptureVideoPreviewLayer, and AVCaptureSession in the parent screen and passed them as parameters to the child screen. They loaded very rapidly.
In my application I am displaying an AVCaptureVideoPreviewLayer and then capturing a still image when the user clicks a button, using the captureStillImageAsynchronouslyFromConnection: function of AVCaptureStillImageOutput. This has worked well for me up until the iPhone 5, on which it never completes.
My setup code is:
...
self.imageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[self.imageOutput setOutputSettings:outputSettings];

self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
[self.captureSession addInput:self.rearFacingDeviceInput];
[self.captureSession addOutput:self.imageOutput];
[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
self.previewLayer.frame = CGRectMake(0, 0, 320, 427);
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;

[self.captureSession startRunning];
[outputSettings release];
My capture method is:
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in self.imageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}

// code to abort if it does not return 'soon'
...

[self.imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    // use image here
}];
captureStillImageAsynchronouslyFromConnection: never completes for me on an iPhone 5.
I have tested:
It isn't iOS 6, as this code works on both an iPhone 4S and an iPod touch (4th generation) that have been updated.
The captureSession is running
videoConnection is not nil
imageOutput is not nil
Also:
I'm using this method and not UIImagePickerController because I need to put the preview as a subview.
Calling stopRunning on the capture session takes several seconds on the iPhone 5 as well.
Well, this code works fine. Tested on both iPhone 4 and 5 (base SDK 7.1, under ARC).
A few things you have to consider:
1) Be sure you set rearFacingDeviceInput properly:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
[self setRearFacingDeviceInput:[AVCaptureDeviceInput deviceInputWithDevice:device error:nil]];
2) As Vincent mentioned, there can be an error; try to log both the error and imageSampleBuffer.
3) A session's -startRunning and -stopRunning operations take a long time to complete (seconds, even 5-6 s), and they don't return until all their work is done. To avoid blocking the UI, you shouldn't call these methods on the main thread; one way is to use GCD:
dispatch_queue_t serialQueue = dispatch_queue_create("queue", NULL);
dispatch_async(serialQueue, ^{
    [self.captureSession startRunning];
});
If captureStillImageAsynchronously still doesn't complete (to verify, add a breakpoint in the block and log everything), you should check your device's camera. I believe your code works on all iPhone 5 devices. Hope this helps, good luck.
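Since -stopRunning blocks in the same way, the slow stopRunning call mentioned in the question can be moved off the main thread too, along the same lines:

dispatch_async(serialQueue, ^{
    [self.captureSession stopRunning];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the UI here once the session has fully stopped.
    });
});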