I am using Apple's AVCam sample for my custom camera view. Honestly, it is not that simple to understand what is going on in the AVCamViewController class the first time you look at it.
Right now I am interested in how the frame of the captured image is set. I looked for frame setters or anything similar, but could not find any.
I searched Google and found an answer here: AVCam not in fullscreen
But when I implemented that solution, I realized it only makes the live camera preview layer the same size as my view; when the app saves the image in - (IBAction)snapStillImage:(id)sender, the photo in the gallery still has two stripes on the left and right.
My question is: how can I remove these stripes, and where in the source code does Apple set this up?
As an additional sub-question: how can I configure the session for photos only? The app asks for microphone access, and I don't need it; I only want to take a photo.
This code from Apple's sample saves the image to the photo library.
- (IBAction)snapStillImage:(id)sender
{
    dispatch_async([self sessionQueue], ^{
        // Update the orientation on the still image output video connection before capturing.
        [[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:[[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] connection] videoOrientation]];

        // Flash set to Auto for Still Capture
        [AVCamViewController setFlashMode:AVCaptureFlashModeAuto forDevice:[[self videoDeviceInput] device]];

        // Capture a still image.
        [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer)
            {
                NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:imageData];
                UIImage *proccessingImage = [SAPAdjustImageHelper adjustImage:image];

                NSNumber *email_id = [SAPCoreDataEmailHelper currentEmailId];
                [SAPFileManagerHelper addImage:proccessingImage toFolderWithEmailId:email_id];
            }
        }];
    });
}
Are you setting the session preset?
You can configure your session with the session preset AVCaptureSessionPresetPhoto.
For the other sub-question: you only need to add the AVCaptureStillImageOutput output, so the session never touches the microphone.
How to set the session preset:
[session setSessionPreset:AVCaptureSessionPresetPhoto];
How to configure the session to use only a still image output for taking photos:
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];

    // Setup the preview view
    [[self previewView] setSession:session];

    // Setup the session preset
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    // Check for device authorization
    [self checkDeviceAuthorizationStatus];

    dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
    [self setSessionQueue:sessionQueue];

    dispatch_async(sessionQueue, ^{
        //[self setBackgroundRecordingID:UIBackgroundTaskInvalid];
        NSError *error = nil;

        AVCaptureDevice *videoDevice = [AVCamViewController deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
        AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

        if (error)
        {
            NSLog(@"%@", error);
        }

        if ([session canAddInput:videoDeviceInput])
        {
            [session addInput:videoDeviceInput];
            [self setVideoDeviceInput:videoDeviceInput];

            dispatch_async(dispatch_get_main_queue(), ^{
                // Why are we dispatching this to the main queue?
                // Because AVCaptureVideoPreviewLayer is the backing layer for AVCamPreviewView and UIView can only be manipulated on the main thread.
                // Note: As an exception to the above rule, it is not necessary to serialize video orientation changes on the AVCaptureVideoPreviewLayer’s connection with other session manipulation.
                [[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] connection] setVideoOrientation:(AVCaptureVideoOrientation)[self interfaceOrientation]];
            });
        }

        AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        if ([session canAddOutput:stillImageOutput])
        {
            [stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
            [session addOutput:stillImageOutput];
            [self setStillImageOutput:stillImageOutput];
        }
    });
}
The black bands are there to preserve the aspect ratio. If you want to avoid them, change the videoGravity of the AVCaptureVideoPreviewLayer, or the contentMode of the UIImageView that displays the captured image.
This is expected behavior, so I don't quite understand the concern.
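For illustration, a minimal sketch of both options, assuming the AVCam-style previewView whose backing layer is an AVCaptureVideoPreviewLayer, plus a hypothetical UIImageView named capturedImageView for the saved photo:

// Option 1: make the live preview fill its bounds (crops the edges instead of letterboxing).
AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)[[self previewView] layer];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

// Option 2: make the view that displays the captured photo fill its bounds the same way.
// capturedImageView is a hypothetical UIImageView used only for illustration.
[capturedImageView setContentMode:UIViewContentModeScaleAspectFill];
[capturedImageView setClipsToBounds:YES];

Note that neither option changes the saved file itself; they only change how the image is displayed. To change the stored photo you would have to crop the captured UIImage before saving it.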
Related
My iPad app can only be used in landscape orientations. After looking at what feels like dozens of Stack Overflow answers, I am still unable to take a photo the right way up (the metadata corrects it, but that only helps on Macs). When the device is physically in LandscapeLeft the photo comes back upside down, whereas in LandscapeRight it is correct.
As far as I can tell, setting videoOrientation on the preview layer gives me the correct orientation when previewing the photo.
_capturePreview = [[ISCapturePreview alloc] init];
[self addSubview:_capturePreview];

// Session
self.session = [AVCaptureSession new];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];

// Capture device
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;

// Device input
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([self.session canAddInput:self.deviceInput])
    [self.session addInput:self.deviceInput];

UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];

// Preview
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
[previewLayer.connection setVideoOrientation:(AVCaptureVideoOrientation)orientation];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self capturePreview] layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:CGRectMake(0, 0, 1024, 600)];
[rootLayer insertSublayer:previewLayer atIndex:0];

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
if ([self.session canAddOutput:stillImageOutput])
{
    [stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
    [self.session addOutput:stillImageOutput];
    [self setStillImageOutput:stillImageOutput];
}

[self.session startRunning];
And here is the capture code. In my tests, setting videoOrientation here has no effect.
- (void)takePhotoButtonWasPressed:(id)sender
{
    [[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:[[(AVCaptureVideoPreviewLayer *)[[self capturePreview] layer] connection] videoOrientation]];

    // Capture a still image.
    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer)
        {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:imageData];
        }
    }];
}
I currently cannot test the code myself, but if you set the available orientations (in the project settings) to both landscapeLeft and landscapeRight, iOS might be handling the orientation itself, so try not setting the orientation at all. Furthermore, if the device orientation changes after the view has loaded, you will be working with a stale orientation, so consider adding an observer to react to changes. In addition, Apple recommends using UITraitCollection and UITraitEnvironment instead of UIInterfaceOrientation starting with iOS 8.
I hope this helps you :)
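As a rough sketch of the observer idea (untested; it assumes a previewLayer property holding the AVCaptureVideoPreviewLayer and the stillImageOutput property from the code above):

// Register once, e.g. in viewDidLoad, to be told when the interface rotates.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(interfaceOrientationDidChange:)
                                             name:UIApplicationDidChangeStatusBarOrientationNotification
                                           object:nil];

- (void)interfaceOrientationDidChange:(NSNotification *)notification
{
    // Re-apply the current interface orientation to both connections so preview and capture stay in sync.
    AVCaptureVideoOrientation videoOrientation =
        (AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation];
    [self.previewLayer.connection setVideoOrientation:videoOrientation];
    [[self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:videoOrientation];
}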
Using Xcode 6.4 and iOS 8.4.1:
With AVCaptureSession, I would like to zoom out as far as the iPhone camera can possibly manage.
I already use setVideoZoomFactor: with a value of 1 (its smallest allowed value). This works quite well (see the code example at the very bottom). But then I made the following observation (the camera in photo mode apparently manages to zoom further out than in video mode):
The iPhone camera in photo mode shows a completely different zoom than the camera in video mode (at least on my iPhone 5S). You can test this yourself with the native Camera app: switch between PHOTO and VIDEO and you will see that photo mode can zoom further out than a video zoom factor of 1. How is that possible?
Moreover, is there any way to achieve, in video mode, the same minimal zoom factor that photo mode achieves, using AVCam under iOS?
Here is an illustration of the zoom difference between photo mode and video mode on my iPhone 5S (see picture):
Here is the code of the AVCamViewController:
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];

    // Setup the preview view
    [[self previewView] setSession:session];

    // Check for device authorization
    [self checkDeviceAuthorizationStatus];

    dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
    [self setSessionQueue:sessionQueue];

    // http://stackoverflow.com/questions/25110055/ios-captureoutputdidoutputsamplebufferfromconnection-is-not-called
    dispatch_async(sessionQueue, ^{
        self.session = [AVCaptureSession new];
        self.session.sessionPreset = AVCaptureSessionPresetMedium;

        NSArray *devices = [AVCaptureDevice devices];
        AVCaptureDevice *backCamera;
        for (AVCaptureDevice *device in devices) {
            if ([device hasMediaType:AVMediaTypeVideo]) {
                if ([device position] == AVCaptureDevicePositionBack) {
                    backCamera = device;
                }
            }
        }

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
        if (error) {
            NSLog(@"%@", error);
        }
        if ([self.session canAddInput:input]) {
            [self.session addInput:input];
        }

        AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
        [output setSampleBufferDelegate:self queue:sessionQueue];
        output.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
        if ([self.session canAddOutput:output]) {
            [self.session addOutput:output];
        }

        // Apply initial VideoZoomFactor to the device
        NSNumber *DefaultZoomFactor = [NSNumber numberWithFloat:1.0];
        if ([backCamera lockForConfiguration:&error])
        {
            // HERE IS THE ZOOMING DONE !!!!!!
            [backCamera setVideoZoomFactor:[DefaultZoomFactor floatValue]];
            [backCamera unlockForConfiguration];
        }
        else
        {
            NSLog(@"%@", error);
        }

        [self.session startRunning];
    });
}
If your problem is that your app zooms the photo compared to the native iOS Camera app, then this setup will probably help you.
I had this issue and the following solution fixed it.
The solution:
Configure your session:
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];

    --> self.session.sessionPreset = AVCaptureSessionPresetPhoto; <---

    ...
}
Be aware that this setup only works for photos, not for videos. If you try it with videos, your app will crash.
You can configure your session based on your needs (photo or video).
For video you can use this value: AVCaptureSessionPresetHigh
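For example, a minimal sketch of picking the preset up front (the wantsPhoto flag is only illustrative):

// Choose the preset before adding outputs; Photo gives the full sensor frame, High suits movie recording.
BOOL wantsPhoto = YES; // illustrative flag
if (wantsPhoto && [session canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    [session setSessionPreset:AVCaptureSessionPresetPhoto];
} else if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
    [session setSessionPreset:AVCaptureSessionPresetHigh];
}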
BR.
My app has a button that should take a screenshot of whatever is on the screen by calling takeSnapShot (see the code below).
I have two views; one of them is a picker view that I use for the camera, so on screen I see the image coming from the camera and, next to it, another view with images.
The problem is that I do capture the screen, but the image coming from the camera is not captured.
Also, although I think I am rendering the view's layer, the debugger keeps saying:
"Snapshotting a view that has not been rendered results in an empty snapshot. Ensure your view has been rendered at least once before snapshotting or snapshot after screen updates."
Any ideas? Thanks!
This is the code:
#import "ViewController.h"
#interface ViewController ()
#end
#implementation ViewController
UIImage *capturaPantalla;
-(void)takeSnapShot:(id)sender{
UIGraphicsBeginImageContext(picker.view.window.bounds.size);
[picker.view.layer renderInContext:UIGraphicsGetCurrentContext()];
capturaPantalla = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(capturaPantalla,nil,nil,nil);
}
I implemented a similar function before and used AVCaptureSession to do it.
// 1. Create the AVCaptureSession
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[self setSession:session];
// 2. Setup the preview view
[[self previewView] setSession:session];
// 3. Check for device authorization
[self checkDeviceAuthorizationStatus];
dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
[self setSessionQueue:sessionQueue];
============================================
My countdown snapshot function:
- (void)takePhoto
{
    [timer1 invalidate];

    dispatch_async([self sessionQueue], ^{
        // 4. Update the orientation on the still image output video connection before capturing.
        [[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:[[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] connection] videoOrientation]];

        // 5. Flash set to Auto for Still Capture
        [iSKITACamViewController setFlashMode:flashMode forDevice:[[self videoDeviceInput] device]];

        AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];
        CGFloat maxScale = stillImageConnection.videoMaxScaleAndCropFactor;
        if (effectiveScale > 1.0f && effectiveScale < maxScale)
        {
            stillImageConnection.videoScaleAndCropFactor = effectiveScale;
        }

        [self playSound];

        // 6. Capture a still image.
        [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer)
            {
                NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:imageData];

                if (iPhone5 && [self.modeLabel.text isEqualToString:@"PhotoMode"]) {
                    UIImage *image1 = [image crop:CGRectMake(0, 0, image.size.width, image.size.height)];
                    [[[ALAssetsLibrary alloc] init] writeImageToSavedPhotosAlbum:[image1 CGImage] orientation:(ALAssetOrientation)[image1 imageOrientation] completionBlock:nil];
                } else {
                    [[[ALAssetsLibrary alloc] init] writeImageToSavedPhotosAlbum:[image CGImage] orientation:(ALAssetOrientation)[image imageOrientation] completionBlock:nil];
                }

                self.priveImageView.image = image;
            }
        }];
    });
}
What is the AVFoundation equivalent of this UIImagePicker-enabled method? My app uses AVFoundation and I want to drop UIImagePicker entirely. This method runs after a UIImagePickerController has been set up in the program: it takes the image the user captures (using the iPhone camera interface) and assigns it to an IBOutlet named photo declared earlier in the app. Excerpt from http://www.raywenderlich.com/13541/how-to-create-an-app-like-instagram-with-a-web-service-backend-part-22 (download at the bottom of that page).
#pragma mark - Image picker delegate methods

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Resize the image from the camera
    UIImage *scaledImage = [image resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:CGSizeMake(photo.frame.size.width, photo.frame.size.height) interpolationQuality:kCGInterpolationHigh];
    // Crop the image to a square (yikes, fancy!)
    UIImage *croppedImage = [scaledImage croppedImage:CGRectMake((scaledImage.size.width - photo.frame.size.width) / 2, (scaledImage.size.height - photo.frame.size.height) / 2, photo.frame.size.width, photo.frame.size.height)];
    // Show the photo on the screen
    photo.image = croppedImage;
    [picker dismissModalViewControllerAnimated:NO];
}
// The successful AVCam snap I want to use. Uses AVFoundation.
- (IBAction)snapStillImage:(id)sender
{
    dispatch_async([self sessionQueue], ^{
        // Update the orientation on the still image output video connection before capturing.
        [[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:[[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] connection] videoOrientation]];

        // Flash set to Auto for Still Capture
        [PhotoScreen setFlashMode:AVCaptureFlashModeAuto forDevice:[[self videoDeviceInput] device]];

        // Capture a still image.
        [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer)
            {
                NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:imageData];
                [[[ALAssetsLibrary alloc] init] writeImageToSavedPhotosAlbum:[image CGImage] orientation:(ALAssetOrientation)[image imageOrientation] completionBlock:nil];
            }
        }];
    });
}
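One way to bridge the two, assuming the UIImage+Resize category from the tutorial above is imported (so resizedImageWithContentMode:bounds:interpolationQuality: and croppedImage: are available) and that photo is the same IBOutlet: inside the completion handler, after image has been created, the same scale-and-crop steps could run before the result is displayed. A sketch, not Apple's code:

UIImage *scaledImage = [image resizedImageWithContentMode:UIViewContentModeScaleAspectFill
                                                   bounds:CGSizeMake(photo.frame.size.width, photo.frame.size.height)
                                     interpolationQuality:kCGInterpolationHigh];
UIImage *croppedImage = [scaledImage croppedImage:CGRectMake((scaledImage.size.width - photo.frame.size.width) / 2,
                                                             (scaledImage.size.height - photo.frame.size.height) / 2,
                                                             photo.frame.size.width,
                                                             photo.frame.size.height)];
dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit work (setting the image view) belongs on the main queue.
    photo.image = croppedImage;
});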
Note: the AVCam method is part of an AVCaptureSession, which Apple describes as the class that coordinates all data flow for media capture with AVFoundation. The full official AVCam sample from Apple is here: https://developer.apple.com/library/ios/samplecode/AVCam/Introduction/Intro.html
I happened to stumble upon this excerpt in AVCamViewController. I noticed it looks a lot like the method in question. Can someone help verify its suitability?
#pragma mark File Output Delegate

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    if (error)
        NSLog(@"%@", error);

    [self setLockInterfaceRotation:NO];

    // Note the backgroundRecordingID for use in the ALAssetsLibrary completion handler to end the background task associated with this recording. This allows a new recording to be started, associated with a new UIBackgroundTaskIdentifier, once the movie file output's -isRecording is back to NO — which happens sometime after this method returns.
    UIBackgroundTaskIdentifier backgroundRecordingID = [self backgroundRecordingID];
    [self setBackgroundRecordingID:UIBackgroundTaskInvalid];

    [[[ALAssetsLibrary alloc] init] writeVideoAtPathToSavedPhotosAlbum:outputFileURL completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error)
            NSLog(@"%@", error);

        [[NSFileManager defaultManager] removeItemAtURL:outputFileURL error:nil];

        if (backgroundRecordingID != UIBackgroundTaskInvalid)
            [[UIApplication sharedApplication] endBackgroundTask:backgroundRecordingID];
    }];
}
I've built my own custom camera class using AVCaptureSession and a lot of code from Apple's AVCam demo app. The function I use to capture the image is basically verbatim from Apple's app, yet when I snap a picture I get a "received memory warning" in my console. It doesn't happen every time, but almost always on the first picture. This is the code that is my problem:
- (void)capImage { // method to capture an image from the AVCaptureSession video feed
    dispatch_async([self sessionQueue], ^{
        // Update the orientation on the still image output video connection before capturing.
        UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
        [[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:(AVCaptureVideoOrientation)orientation];

        // Flash set to Auto for Still Capture
        //[AVCamViewController setFlashMode:AVCaptureFlashModeAuto forDevice:[[self videoDeviceInput] device]];

        // Capture a still image.
        [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer)
            {
                NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:imageData];
                [[[ALAssetsLibrary alloc] init] writeImageToSavedPhotosAlbum:[image CGImage] orientation:(ALAssetOrientation)[image imageOrientation] completionBlock:nil];
                [self processImage:[UIImage imageWithData:imageData]];
            }
        }];
    });
}
Any ideas why this would be happening?