I have been working on a project for a while now, and there is one thing I haven't been able to figure out.
In the application, when taking a front-facing picture, I would like the front "flash" to actually make the picture brighter.
I am using a custom full-screen AVCaptureSession camera. Here is the code that makes a flash happen, but the picture isn't any brighter.
//Here is the code for a front flash on the picture button press. It does flash, just doesn't help.
UIWindow* wnd = [UIApplication sharedApplication].keyWindow;
UIView *v = [[UIView alloc] initWithFrame: CGRectMake(0, 0, wnd.frame.size.width, wnd.frame.size.height)];
[wnd addSubview: v];
v.backgroundColor = [UIColor whiteColor];
[UIView beginAnimations: nil context: nil];
[UIView setAnimationDuration: 1.0];
v.alpha = 0.0f;
[UIView commitAnimations];
//imageView is just the view that the camera's image fills.
imageView.hidden = NO;
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
for(AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer != NULL) {
imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *thePicture = [UIImage imageWithData:imageData];
self.imageView.image = thePicture;
//After the picture is on the screen, I just make sure some buttons are where they are supposed to be.
saveButtonOutlet.hidden = NO;
saveButtonOutlet.enabled = YES;
diaryEntryOutlet.hidden = YES;
diaryEntryOutlet.enabled = NO;
}
}];
}
You need to set the screen to white before the image is captured, wait for the capture to complete, and then remove the white screen in the completion block.
You should also dispatch the capture after a short delay to ensure the screen has actually turned white:
UIWindow* wnd = [UIApplication sharedApplication].keyWindow;
UIView *v = [[UIView alloc] initWithFrame: CGRectMake(0, 0, wnd.frame.size.width, wnd.frame.size.height)];
[wnd addSubview: v];
v.backgroundColor = [UIColor whiteColor];
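// A short delay (0.1 s here) gives the run loop a chance to actually draw the white view before the capture fires.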
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer != NULL) {
imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *thePicture = [UIImage imageWithData:imageData];
dispatch_async(dispatch_get_main_queue(), ^{
self.imageView.image = thePicture;
[v removeFromSuperview];
});
}
//After the picture is on the screen, I just make sure some buttons are where they are supposed to be.
saveButtonOutlet.hidden = NO;
saveButtonOutlet.enabled = YES;
diaryEntryOutlet.hidden = YES;
diaryEntryOutlet.enabled = NO;
}];
}];
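One extra trick worth trying (not part of the original answer, just a common addition): temporarily push the screen brightness to maximum while the white view is showing, then restore it afterwards. A minimal sketch, reusing the v overlay from above:
CGFloat previousBrightness = [UIScreen mainScreen].brightness;
[UIScreen mainScreen].brightness = 1.0; // max out the backlight for the simulated flash
// ... show the white view and run the delayed capture exactly as above ...
// then, inside the completion block, once the photo has been taken:
[UIScreen mainScreen].brightness = previousBrightness;
[v removeFromSuperview];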
I'm new to programming and Objective-C (~6 weeks) and now I'm working with AVFoundation for the first time. My goal is a stretch for my level, but shouldn't be too difficult for someone familiar with the framework.
My goal is to create a 'Snapchat' style custom camera interface that captures a still image when you tap on the button, and records video when you hold it down.
I've been able to piece together and crush through most of the code (video preview, capturing still images, programmatic buttons, etc.), but I'm not able to successfully save the video locally (will add it to a project built on top of Parse later this week).
ViewController.h
(reference)
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface ViewController : UIViewController
@property UIButton *button;
@property UIButton *saveButton;
@property UIImageView *previewView;
#define VIDEO_FILE @"test.mov"
@end
ViewController.m
The way I've constructed my code is that I initialize the session in the first set of methods and then break image and video capture into their own separate sections. The input device is AVMediaTypeVideo, and it outputs to AVCaptureStillImageOutput and AVCaptureMovieFileOutput respectively.
#import "ViewController.h"
@interface ViewController () <AVCaptureFileOutputRecordingDelegate>
@end
@implementation ViewController
AVCaptureSession *session;
AVCaptureStillImageOutput *imageOutput;
AVCaptureMovieFileOutput *movieOutput;
AVCaptureConnection *videoConnection;
- (void)viewDidLoad {
[super viewDidLoad];
[self testDevices];
self.view.backgroundColor = [UIColor blackColor];
//Image preview
self.previewView = [[UIImageView alloc]initWithFrame:self.view.frame];
self.previewView.backgroundColor = [UIColor whiteColor];
self.previewView.contentMode = UIViewContentModeScaleAspectFill;
self.previewView.hidden = YES;
[self.view addSubview:self.previewView];
//Buttons
self.button = [self createButtonWithTitle:@"REC" chooseColor:[UIColor redColor]];
UILongPressGestureRecognizer *longPressRecognizer = [[UILongPressGestureRecognizer alloc]initWithTarget:self action:@selector(handleLongPressGesture:)];
[self.button addGestureRecognizer:longPressRecognizer];
[self.button addTarget:self action:@selector(captureImage) forControlEvents:UIControlEventTouchUpInside];
self.saveButton = [self createSaveButton];
[self.saveButton addTarget:self action:@selector(saveActions) forControlEvents:UIControlEventTouchUpInside];
}
- (void)viewWillAppear:(BOOL)animated {
//Tests
[self initializeAVItems];
NSLog(#"%#", videoConnection);
NSLog(#"%#", imageOutput.connections);
NSLog(#"%#", imageOutput.description.debugDescription);
}
#pragma mark - AV initialization
- (void)initializeAVItems {
//Start session, input
session = [[AVCaptureSession alloc]init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput]) {
[session addInput:deviceInput];
} else {
NSLog(#"%#", error);
}
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//Layer preview
CALayer *viewLayer = [[self view] layer];
[viewLayer setMasksToBounds:YES];
CGRect frame = self.view.frame;
[previewLayer setFrame:frame];
[viewLayer insertSublayer:previewLayer atIndex:0];
//Image Output
imageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *imageOutputSettings = [[NSDictionary alloc]initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
imageOutput.outputSettings = imageOutputSettings;
//Video Output
movieOutput = [[AVCaptureMovieFileOutput alloc] init];
[session addOutput:movieOutput];
[session addOutput:imageOutput];
[session startRunning];
}
- (void)testDevices {
NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
NSLog(#"Device name: %#", [device localizedName]);
if ([device hasMediaType:AVMediaTypeVideo]) {
if ([device position] == AVCaptureDevicePositionBack) {
NSLog(#"Device position : back");
}
else {
NSLog(#"Device position : front");
}
}
}
}
#pragma mark - Image capture
- (void)captureImage {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in imageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"Requesting capture from: %#", imageOutput);
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer != NULL) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
self.previewView.image = image;
self.previewView.hidden = NO;
}
}];
[self saveButtonFlyIn:self.saveButton];
}
#pragma mark - Video capture
- (void)captureVideo {
NSLog(#"%#", movieOutput.connections);
[[NSFileManager defaultManager] removeItemAtURL:[self outputURL] error:nil];
videoConnection = [self connectionWithMediaType:AVMediaTypeVideo fromConnections:movieOutput.connections];
/* This is where the code is breaking */
[movieOutput startRecordingToOutputFileURL:[self outputURL] recordingDelegate:self];
}
- (AVCaptureConnection *)connectionWithMediaType:(NSString *)mediaType fromConnections:(NSArray *)connections {
for (AVCaptureConnection *connection in connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:mediaType]) {
return connection;
}
}
}
return nil;
}
#pragma mark - AVCaptureFileOutputRecordingDelegate
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
if (!error) {
//Do something
} else {
NSLog(#"Error: %#", [error localizedDescription]);
}
}
#pragma mark - Recoding Destination URL
- (NSURL *)outputURL {
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:VIDEO_FILE];
return [NSURL fileURLWithPath:filePath];
}
#pragma mark - Buttons
- (void)handleLongPressGesture:(UILongPressGestureRecognizer *)recognizer {
if (recognizer.state == UIGestureRecognizerStateBegan) {
NSLog(#"Press");
self.button.backgroundColor = [UIColor greenColor];
[self captureVideo];
}
if (recognizer.state == UIGestureRecognizerStateEnded) {
NSLog(#"Unpress");
self.button.backgroundColor = [UIColor redColor];
}
}
- (UIButton *)createButtonWithTitle:(NSString *)title chooseColor:(UIColor *)color {
UIButton *button = [[UIButton alloc] initWithFrame:CGRectMake(self.view.frame.size.width - 100, self.view.frame.size.height - 100, 85, 85)];
button.layer.cornerRadius = button.bounds.size.width / 2;
button.backgroundColor = color;
button.tintColor = [UIColor whiteColor];
[self.view addSubview:button];
return button;
}
- (UIButton *)createSaveButton {
UIButton *button = [[UIButton alloc]initWithFrame:CGRectMake(self.view.frame.size.width, 15, 85, 85)];
button.layer.cornerRadius = button.bounds.size.width / 2;
button.backgroundColor = [UIColor greenColor];
button.tintColor = [UIColor whiteColor];
button.userInteractionEnabled = YES;
[button setTitle:#"save" forState:UIControlStateNormal];
[self.view addSubview:button];
return button;
}
- (void)saveButtonFlyIn:(UIButton *)button {
CGRect movement = button.frame;
movement.origin.x = self.view.frame.size.width - 100;
[UIView animateWithDuration:0.2 animations:^{
button.frame = movement;
}];
}
- (void)saveButtonFlyOut:(UIButton *)button {
CGRect movement = button.frame;
movement.origin.x = self.view.frame.size.width;
[UIView animateWithDuration:0.2 animations:^{
button.frame = movement;
}];
}
#pragma mark - Save actions
- (void)saveActions {
[self saveButtonFlyOut:self.saveButton];
self.previewView.image = nil;
self.previewView.hidden = YES;
}
@end
The code breaks on this line:
[movieOutput startRecordingToOutputFileURL:[self outputURL] recordingDelegate:self];
Off the top of my head, I'm thinking that it could be a couple of things:
Is the data even there (logged it, but can't verify)?
Am I initializing the destination url properly?
Is the data compatible with the destination? Is that a thing?
Would love your perspectives / fresh sets of eyes / thoughts on how to check, test, or debug this.
Cheers,
J
The problem lies in your implementation of -initializeAVItems:
- (void)initializeAVItems {
//Start session, input
session = [[AVCaptureSession alloc]init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
...
}
If you want to use AVCaptureMovieFileOutput to record videos, you cannot set the AVCaptureSession's sessionPreset to AVCaptureSessionPresetPhoto; that preset is for still images only. For high-quality video output I would recommend AVCaptureSessionPresetHigh.
And it's better to call canSetSessionPreset: before actually setting it:
session = [AVCaptureSession new];
if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
session.sessionPreset = AVCaptureSessionPresetHigh;
}
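Applied to -initializeAVItems, the relevant part might look roughly like this (a sketch rather than your exact code; the canAddOutput: checks are an extra precaution, not something the fix strictly requires):
- (void)initializeAVItems {
    session = [AVCaptureSession new];
    // AVCaptureSessionPresetPhoto only supports still images; High also supports movie recording.
    if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        session.sessionPreset = AVCaptureSessionPresetHigh;
    }
    // ... device input, preview layer and output allocation unchanged ...
    if ([session canAddOutput:movieOutput]) {
        [session addOutput:movieOutput];
    }
    if ([session canAddOutput:imageOutput]) {
        [session addOutput:imageOutput];
    }
    [session startRunning];
}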
Recently I have been working on a camera app and everything is going well except for one thing: the shutter response is unbelievably slow, which makes for a very poor user experience. Here is my code for capturing:
- (void)captureThe7Head {
AVCaptureConnection *myVideoConnection = nil;
for (AVCaptureConnection *connection in myStillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
myVideoConnection = connection;
break;
}
}
}
[myStillImageOutput captureStillImageAsynchronouslyFromConnection:myVideoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIView* exportView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 1500, 1500)];
if (!isBackCam) {
UIImage* img = [self squareImageFromImage:[[UIImage alloc] initWithData:imageData] scaledToSize:1500.0];
img = [self flippingBack:img];
[exportView addSubview:[[UIImageView alloc] initWithImage:img]];
}
else{
[exportView addSubview:[[UIImageView alloc] initWithImage:[self squareImageFromImage:[[UIImage alloc] initWithData:imageData] scaledToSize:1500.0]]];
}
UIImageView* overlayFrameView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 1500, 1500)];
overlayFrameView.image = [frameDict objectForKey:selectedFrame];
[exportView addSubview:overlayFrameView];
[self passingImgToShareWith:[self captureImageFromView:exportView] AndSelectedFrame:[frameNameArray indexOfObject:selectedFrame]];
}
}];
}
Can anyone help me solve this issue? I really can't find any example on the web for this. Please help.
Attached is a screen recording of the response: http://youtu.be/_yH8za8loE8
When the shutter is triggered, it calls the method above.
I want to develop a small jigsaw puzzle game but I'm having a problem when combining the image pieces. I can split the image, but I cannot combine the pieces as required. Here is what I am doing.
For cropping:
[customImageView setImage:[self cropImage:self.mainImage withRect:mCropFrame]];
- (UIImage *) cropImage:(UIImage*)originalImage withRect:(CGRect)rect
{
return [UIImage imageWithCGImage:CGImageCreateWithImageInRect([originalImage CGImage], rect)];
}
For Clipping:
[self setClippingPath:[pieceBezierPathsMutArray_ objectAtIndex:i]:view];
- (UIImageView *) setClippingPath:(UIBezierPath *)clippingPath : (UIImageView *)imgView;
{
if (![[imgView layer] mask])
{
[[imgView layer] setMask:[CAShapeLayer layer]];
}
[(CAShapeLayer*) [[imgView layer] mask] setPath:[clippingPath CGPath]];
return imgView;
}
For Combining:
-(id)initByCombining:(id)oneView andOther:(id)twoView withRegularSize:(CGSize)pieceSize;
{
CustomImageView *one = oneView;//[oneView copy];
CustomImageView *two = twoView;
CGPoint onepoint, twopoint;
if (one.frame.origin.x < two.frame.origin.x)
{
onepoint.x = 0;
twopoint.x = onepoint.x + one.frame.size.width;
}
else
{
onepoint.x = onepoint.x + one.frame.size.width;
twopoint.x = 0;
}
if (one.frame.origin.y < two.frame.origin.y)
{
onepoint.y = 0;
twopoint.y = 0;
}
else
{
onepoint.y = 0;
twopoint.y = 0;
}
CGRect frame;
frame.origin = CGPointZero;
frame.size.width = onepoint.x + one.frame.size.width + two.frame.size.width;
frame.size.height = MAX(one.frame.size.height , two.frame.size.height);
if (self = [self initWithFrame:frame])
{
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
UIGraphicsBeginImageContext(frame.size);
[one.image drawAtPoint:onepoint];
[two.image drawAtPoint:twopoint];
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
self.image = UIGraphicsGetImageFromCurrentImageContext();
self.backgroundColor = [UIColor redColor];
UIGraphicsEndImageContext();
UIGraphicsPopContext();
self.center = one.center;
self.transform = CGAffineTransformScale(incomingTransform, 0.5, 0.5);
self.previousRotation = self.transform;
}
return self;
}
My initial image is this:
After cropping and clipping it becomes like this:
It should look like this after combining.
But it is becoming like this.
When you want to combine the images, I would suggest placing the clipped pieces inside another UIView, so that this UIView becomes the superview of all the placed clippings. After the images have been placed you can do something like this:
UIGraphicsBeginImageContextWithOptions(superView.bounds.size, superView.opaque, 0.0);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * CombinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return CombinedImage;
and then just save it as follows:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[CombinedImage CGImage] orientation:(ALAssetOrientation)[CombinedImage imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error){
if (error) {
// TODO: error handling
UIAlertView *al=[[UIAlertView alloc]initWithTitle:@"" message:@"Error saving image, Please try again" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil, nil];
[al show];
} else {
NSData *imageData = UIImagePNGRepresentation(CombinedImage);
UIImage *finalImage=[UIImage imageWithData:imageData];
}
}];
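Putting the two snippets together, the whole combine step could be sketched like this (the container view and the piece variables are illustrative names, not from the original code):
// Assemble the already-clipped piece views inside one container view, then snapshot it.
UIView *superView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 320)]; // size is arbitrary here
[superView addSubview:pieceOne]; // a CustomImageView already masked via setClippingPath
[superView addSubview:pieceTwo];
UIGraphicsBeginImageContextWithOptions(superView.bounds.size, superView.opaque, 0.0);
[superView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *CombinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();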
Hope this helps.
I am working on a photo taking app. The app's preview layer is set to take up exactly half of the screen using this code:
[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
This looks perfect and there is no distortion at all while the user is viewing the camera's "preview" / what they are seeing while taking the picture.
However, once they actually take the photo, I create a sublayer, set its frame property to my preview layer's frame, and set the photo as the contents of the sublayer.
This does technically work. Once the user takes the photo, the photo shows up on the top half of the screen like it should.
The only problem is that the photo is distorted.
It looks stretched out, almost as if I'm taking a landscape photo.
Any help is greatly appreciated; I am totally desperate on this and have not been able to fix it after working on it all day.
Here is all of my view controller's code:
#import "MediaCaptureVC.h"
@interface MediaCaptureVC ()
@end
@implementation MediaCaptureVC
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
if (self) {
// Custom initialization
}
return self;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
AVCaptureSession *session =[[AVCaptureSession alloc]init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = [[NSError alloc]init];
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if([session canAddInput:deviceInput])
[session addInput:deviceInput];
_previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:session];
[_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view]layer];
[rootLayer setMasksToBounds:YES];
[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
[rootLayer insertSublayer:_previewLayer atIndex:0];
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:_stillImageOutput];
[session startRunning];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
-(UIImage*) rotate:(UIImage*) src andOrientation:(UIImageOrientation)orientation
{
UIGraphicsBeginImageContext(src.size);
CGContextRef context=(UIGraphicsGetCurrentContext());
if (orientation == UIImageOrientationRight) {
CGContextRotateCTM (context, 90/180*M_PI) ;
} else if (orientation == UIImageOrientationLeft) {
CGContextRotateCTM (context, -90/180*M_PI);
} else if (orientation == UIImageOrientationDown) {
// NOTHING
} else if (orientation == UIImageOrientationUp) {
CGContextRotateCTM (context, 90/180*M_PI);
}
[src drawAtPoint:CGPointMake(0, 0)];
UIImage *img=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
-(IBAction)stillImageCapture {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in _stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", _stillImageOutput);
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if(imageDataSampleBuffer) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc]initWithData:imageData];
image = [self rotate:image andOrientation:image.imageOrientation];
CALayer *subLayer = [CALayer layer];
CGImageRef imageRef = image.CGImage;
subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
subLayer.frame = _previewLayer.frame;
CALayer *rootLayer = [[self view]layer];
[rootLayer setMasksToBounds:YES];
[subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
[_previewLayer addSublayer:subLayer];
NSLog(#"%#", subLayer.contents);
NSLog(#"Orientation: %d", image.imageOrientation);
}
}];
}
@end
Hi, I hope this helps you.
The code seems more complex than it needs to be, because most of the work is done at the CALayer level instead of the image view / view level. However, I think the issue is that the proportions of the frame from the original capture and of your mini viewport are different, and this is distorting the UIImage in this statement:
[subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
What needs to be done is to work out the proportions of subLayer.frame and find the best size that will fit into the rootLayer or the image view associated with it.
I have some code that does this: a subroutine that handles the proportions (note that you will need to adjust the origin of the frame to get exactly what you want):
...
CGRect newbounds = [self figure_proportion:image to_fit_rect:rootLayer.frame];
if (newbounds.size.height < rootLayer.frame.size.height) {
// ... adjust the origin of the image view frame here ...
}
-(CGRect) figure_proportion:(UIImage *) image2 to_fit_rect:(CGRect) rect {
CGSize image_size = image2.size;
CGRect newrect = rect;
float wfactor = image_size.width/ image_size.height;
float hfactor = image_size.height/ image_size.width;
if (image2.size.width > image2.size.height) {
newrect.size.width = rect.size.width;
newrect.size.height = (rect.size.width * hfactor);
}
else if (image2.size.height > image2.size.width) {
newrect.size.height = rect.size.height;
newrect.size.width = (rect.size.height * wfactor);
}
else {
newrect.size.width = rect.size.width;
newrect.size.height = newrect.size.width;
}
if (newrect.size.height > rect.size.height) {
newrect.size.height = rect.size.height;
newrect.size.width = (newrect.size.height* wfactor);
}
if (newrect.size.width > rect.size.width) {
newrect.size.width = rect.size.width;
newrect.size.height = (newrect.size.width* hfactor);
}
return(newrect);
}
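Applied to the completion handler from the question, that could look roughly like this (a sketch; exactly how you position the result is up to you):
CGRect target = CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2);
CGRect fitted = [self figure_proportion:image to_fit_rect:target];
// Centre the fitted rect inside the top half of the screen so it isn't pinned to a corner.
fitted.origin.x = (target.size.width - fitted.size.width) / 2;
fitted.origin.y = (target.size.height - fitted.size.height) / 2;
[subLayer setFrame:fitted];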
I am able to capture images from the iOS rear facing camera. Everything is working flawlessly except I want it to take the picture as per the bounds in my UIView.
My code is below:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = vImagePreview.bounds;
[vImagePreview.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [self backFacingCameraIfAvailable];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[session startRunning];
[session addOutput:stillImageOutput];
}
-(AVCaptureDevice *)backFacingCameraIfAvailable{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices){
if (device.position == AVCaptureDevicePositionBack){
captureDevice = device;
break;
}
}
// couldn't find one on the back, so just get the default video device.
if (!captureDevice){
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
return captureDevice;
}
And below is the code to capture the image:
- (IBAction)captureTask {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
// Do something with the attachments.
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
stillImage = image;
}];
}
The issue I'm facing is that it takes the picture and saves it to stillImage; however, from what I can tell, the image is of the whole iPhone screen. It isn't clipped to the bounds of the UIView *vImagePreview I created. Is there a way to clip the captured image to those bounds?
[EDIT]
After reading the docs, I realized the image is the proper resolution, as per session.sessionPreset = AVCaptureSessionPresetMedium;. Is there a way to make the image square, like how Instagram makes their images? According to the docs, none of the session presets are square :(
I tried with the below:
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResize;
However, that only resizes the preview to fit the current view; it doesn't produce a square image.
I understand your frustration; presets should be customizable or offer more options! What I do with my images is crop them about the center, for which I wrote the following code:
- (UIImage *)crop:(UIImage *)image from:(CGSize)src to:(CGSize)dst
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
Here src represents the original dimensions, dst represents the cropped dimensions, and image is of course the image you want cropped.
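For the square, Instagram-style result asked about above, a call along these lines should do it (hypothetical usage; it assumes stillImage is the UIImage produced in the capture completion handler):
// Crop the captured photo to a centered square whose side is the shorter dimension.
CGFloat side = MIN(stillImage.size.width, stillImage.size.height);
UIImage *squareImage = [self crop:stillImage from:stillImage.size to:CGSizeMake(side, side)];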
If the device has a Retina display, then the crop works as shown below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] == YES && [[UIScreen mainScreen] scale] == 2.00)
{
CGPoint cropCenter = CGPointMake((src.width), (src.height));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width)), (cropCenter.y - (dst.height)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width*2, dst.height*2);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
else
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}