AVFoundation photo size and rotation - iOS

I'm having a nightmare of a time trying to crop and orient a photo taken with AVFoundation's captureStillImageAsynchronouslyFromConnection: so that it matches exactly what is shown on the screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
if ([[previewLayer connection] isVideoOrientationSupported])
{
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}
[self.view.layer insertSublayer:previewLayer atIndex:0];
Now when I have a returned image it needs cropping as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
The preview is using AVLayerVideoGravityResizeAspectFill, and my UIImageView is also set to aspect fill.
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera captures at a resolution of 720x1280, but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera preview was centred in the view, but the cropped image is taken from the top section of the photo, which isn't visible in the preview at all.

I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
"the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location."
Let's say your image is 720x1280 and you want it cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but it also has a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same aspect ratio as self.view.bounds
Has the same smaller dimension size as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.view.bounds
CGFloat widthToHeightRatio = self.view.bounds.size.width / self.view.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to your 768x1024 display rectangle: you end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
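To actually apply that rect, a minimal cropping helper might look like the following sketch. One caveat: CGImageCreateWithImageInRect works on the underlying CGImage in pixel coordinates, so if the UIImage has a scale other than 1 or carries an EXIF orientation, cropRect has to be converted into the CGImage's pixel space first.
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)cropRect
{
    // Crop the backing CGImage; cropRect is in the CGImage's pixel space.
    CGImageRef croppedCGImage = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    // Re-wrap with the source's scale and orientation so it displays the same way.
    UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                                scale:image.scale
                                          orientation:image.imageOrientation];
    CGImageRelease(croppedCGImage);
    return croppedImage;
}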

Related

Get the pixel position of a pan on UIImage in a UIImageView

I need the actual pixel position on the UIImage, not the position with respect to the UIImageView frame.
The UIPanGestureRecognizer gives the location in the UIImageView, so it is of no use here.
I can multiply the x and y by the scale, but the UIImage scale is always 0.
I need to crop a circular area from the UIImage, blur it, and place it back at exactly the same position.
Flow:
Crop a circular area from the UIImage using CGImageCreateWithImageInRect
Then round-rect the image using [UIBezierPath bezierPathWithRoundedRect:
Blur the round-rect image using CIGaussianBlur
Place the round-rect blurred image at the x,y position
In the first step I need the actual pixel position where the user tapped.
It depends on the image view's content mode.
For scale-to-fill mode, simply multiply the coordinates by the image-to-view ratio:
CGPoint pointOnImage = CGPointMake(pointOfTouch.x*(imageSize.width/frameSize.width), pointOfTouch.y*(imageSize.height/frameSize.height));
For all other modes you need to compute the actual image frame inside the view, and each mode has a different procedure.
Adding aspect fit mode from comments:
For aspect fit you need to compute the actual image frame, which can be smaller than the image view frame in one of the dimensions and is centered:
CGSize imageSize; // the original image size
CGSize imageViewSize; // the image view size
CGFloat imageRatio = imageSize.width/imageSize.height;
CGFloat viewRatio = imageViewSize.width/imageViewSize.height;
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if (imageRatio > viewRatio) {
    // image has room on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio < viewRatio) {
    // image has room on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// transform the coordinate
CGPoint locationInImageView; // received from touch
CGPoint locationOnImage = CGPointMake(locationInImageView.x, locationInImageView.y); // copy the original point
locationOnImage = CGPointMake(locationOnImage.x - imageFrame.origin.x, locationOnImage.y - imageFrame.origin.y); // translate to fix the origin
locationOnImage = CGPointMake(locationOnImage.x/imageFrame.size.width, locationOnImage.y/imageFrame.size.height); // transform to relative coordinates
locationOnImage = CGPointMake(locationOnImage.x*imageSize.width, locationOnImage.y*imageSize.height); // scale to original image coordinates
Just a note: if you want to adapt this to aspect fill, all you need to do is swap < and > in both of the if statements.
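For reference, a sketch of that aspect-fill variant (same variables as above; note the computed frame can now extend outside the view bounds):
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if (imageRatio < viewRatio) {
    // image overflows on top and bottom but fits exactly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio > viewRatio) {
    // image overflows on left and right but fits exactly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// The same coordinate transform as above then applies unchanged.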

CALayer frame gives strange position

I am currently trying to use a CALayer to show a mask, and then use this mask to crop the picture, but I can't find a way to get the correct position and size of my mask within my image.
When I draw my mask, I use kCAGravityResizeAspectFill to keep the ratio of my image. In this case the layer uses the height to fill my screen height and computes the proper width and (x, y) to keep the ratio.
CGRect screenViewRect = [self.viewForBaselineLayout bounds];
CGFloat screenViewWidth = screenViewRect.size.width;
CGFloat screenViewHeight = screenViewRect.size.height;
masqueLayer.frame = CGRectMake(screenViewWidth *0.45, screenViewHeight*0.05, screenViewWidth *0.10, screenViewHeight*0.92);
masqueLayer.contents = (__bridge id)([UIImage imageNamed:masqueJustif].CGImage);
masqueLayer.contentsGravity = kCAGravityResizeAspectFill;
[self.layer insertSublayer:masqueLayer atIndex:2];
As I can see the mask on my screen, it's obvious that the screenViewWidth * 0.10 is not respected as I intended, but my trouble comes from the fact that when I read the frame of my layer, the width isn't updated, so I can't get the real position of my layer on the screen.
Is there a method to get the real position of my layer on my screen?
I am actually trying to get the crop rectangle with this (considering my ratio is 21/29.7, as it is an A4 mask). This code actually works on iPad but not on iPhone, as the ratio is different:
CGRect outputRect = [masqueLayer convertRect:masqueLayer.bounds toLayer:self.layer];
outputRect.origin.y *= 2;
outputRect.size.height *= 2;
outputRect.size.width = outputRect.size.height * (21/29.7);
I also tried using my mask percentages:
CGRect outputRect = masqueLayer.frame;
outputRect.origin.y = its.size.height * 0.05;
outputRect.origin.x = its.size.width * 0.45 * masqueLayer.anchorPoint.x;
outputRect.size.height = its.size.height * 0.92;
outputRect.size.width = its.size.height * 0.92 * (21/29.7);
Here is a screenshot of my mask on another layer. I want to extract the image bounded by the blue corners (which are the border of my layer).
Thanks.
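For what it's worth, CALayer never updates its frame to reflect contentsGravity; the rect that kCAGravityResizeAspectFill actually displays has to be derived from the layer bounds and the image size. A sketch, reusing masqueLayer and masqueJustif from the code above:
// Sketch: the on-screen rect of aspect-filled layer contents.
CGSize masqueSize = [UIImage imageNamed:masqueJustif].size;
CGRect layerBounds = masqueLayer.bounds;
// Aspect fill scales by the larger of the two ratios.
CGFloat fillScale = MAX(layerBounds.size.width / masqueSize.width,
                        layerBounds.size.height / masqueSize.height);
CGSize displayedSize = CGSizeMake(masqueSize.width * fillScale,
                                  masqueSize.height * fillScale);
// Centered in the layer's own coordinate space (may overflow the bounds).
CGRect contentsRect = CGRectMake((layerBounds.size.width - displayedSize.width) * .5f,
                                 (layerBounds.size.height - displayedSize.height) * .5f,
                                 displayedSize.width, displayedSize.height);
// Convert to the superlayer's (screen) coordinate space:
CGRect onScreenRect = [masqueLayer convertRect:contentsRect toLayer:self.layer];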

Crop Image With Desired Rect

Many questions are available on SO, but unfortunately I couldn't solve my problem using them.
I've added an overlay view on my camera, and now I want to get the image within the blue border (only the water bottle).
I tried code chunks like the following:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[UIImageView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
but I'm having two issues:
Either cropped image is getting too big
The orientation changes by -90 degrees.
For point 1, I think I'm providing a cropRect that is too small, which is why it shows a very small, overly zoomed part of the image. On my other view controller I have a UIImageView (where the cropped image needs to be displayed) of the same size as the camera rect within the blue border.
So the question is: how do I crop the image, and what values should I provide for cropRect?
Assuming the image size is 1280x1080 and your crop view size is 320x480, you need to do the following.
Convert your crop view's frame to the image-sized rect (0, 0, 1280, 1080) by finding the scale factor:
float xScale = 1280.0f / 320.0f;
float yScale = 1080.0f / 480.0f;
float scaleFactor = (xScale < yScale) ? xScale : yScale;
Multiply the cropView frame by the scale factor; this maps the screen coordinates to image coordinates. Then use the new cropRect with:
CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
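Putting the two steps together, the mapping might look like this sketch (cropView is assumed to be the blue-border overlay view):
// Map the overlay's frame into image coordinates using the scale factor,
// then crop with Core Graphics.
CGRect viewRect = cropView.frame;
CGRect cropRect = CGRectMake(viewRect.origin.x * scaleFactor,
                             viewRect.origin.y * scaleFactor,
                             viewRect.size.width * scaleFactor,
                             viewRect.size.height * scaleFactor);
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);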
The problem with the different orientation is that Core Graphics uses a different coordinate system than the view's coordinate system (see Quartz 2D Coordinate Systems). Also note that imageOrientation is read-only on UIImage, so carry the original's orientation across when creating the new image:
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:largeImage.scale orientation:largeImage.imageOrientation];

Flip video around arbitrary axis

I have a video recorded by the user. I want the user to be able to define an arbitrary axis of rotation and flip the video along that axis. I also want the final flipped video to be cropped to the original size.
I have used the CGAffineTransformMakeScale(-1, 1) to flip the video along the horizontal axis, but that's around the center point.
I'm already using an AVMutableComposition to do some compositing. Are there any AVMutableVideoCompositionLayerInstruction that would help?
You should calculate where the user set the axis, i.e. how far it is offset from the center; flip the image; and then re-apply this offset in the opposite direction. Then you just need to get a subimage of the image, which you can do like this:
CGRect fromRect = CGRectMake(0, 0, 320, 480); // or whatever rectangle
CGImageRef drawImage = CGImageCreateWithImageInRect(image.CGImage, fromRect);
UIImage *newImage = [UIImage imageWithCGImage:drawImage];
CGImageRelease(drawImage);
Code reference: Subimage of Image
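As for the arbitrary-axis flip itself: a point (x, y) mirrored about the vertical line x = a maps to (2a - x, y), so the usual trick is to compose the mirror with a translation that puts the axis back in place. A sketch applied through a layer instruction (videoTrack and axisX are assumptions: the track from your existing AVMutableComposition, and the user-chosen axis in render-size coordinates):
// Mirror around the vertical line x = axisX: scale(-1, 1), then translate
// by 2 * axisX so the axis ends up back where it was.
CGFloat axisX = 360.0; // example value; take this from the user's gesture
CGAffineTransform flip = CGAffineTransformConcat(
    CGAffineTransformMakeScale(-1.0, 1.0),
    CGAffineTransformMakeTranslation(2.0 * axisX, 0.0));

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setTransform:flip atTime:kCMTimeZero];
Since the composition still renders at the original renderSize, anything the mirror pushes outside the frame is clipped, which gives the crop-to-original-size behavior for free.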

UIImageView image crop based on UIView mask

So I have a canvas (UIView) and a UIImageView; the canvas acts as a mask over the image view.
I am using UIGestureRecognizers to zoom and rotate the UIImageView, which is under the canvas.
I want to convert the final image (shown in the canvas) to a UIImage. One solution is to convert the canvas to an image, like below:
UIGraphicsBeginImageContext(self.canvas.bounds.size);
[self.canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newCombinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Now this works fine, but the problem with this solution is that the image is cropped to the dimensions of the canvas, so the resolution is very low.
Another option I explored was to use some custom UIImage categories to rotate and scale:
[[[self.photoImage image] imageRotatedByDegrees:rotaton_angle]
imageAtRect:CGRectMake(x,y width,height)]
I need to provide the rotation angle (the rotation value provided by the UIGestureRecognizer delegate is not in degrees or radians), and then there are x, y, width, and height, which I imagine need to be calculated based on some scale (I do get scale values from the gesture delegate, but they do not seem to be correct for this function).
There are a number of solutions here that guide you through cropping an image given a rect, but in my case the rect is not at the same scale as the image, and there is rotation involved.
Any help will be appreciated.
I have managed to solve this. Here is my solution; it most definitely isn't the cleanest, but it works.
I needed to handle three things: pan, zoom, and rotate.
First, I used the UIGestureRecognizer delegate to get cumulative values, incremented at UIGestureRecognizerStateEnded, for all three.
Then for rotation I just used the UIImage category discussed here:
self.imagetoEdit = [self.imagetoEdit imageRotatedByRadians:total_rotation];
For zooming (scale) I used GPUImage (I am using this throughout the app):
GPUImageTransformFilter *scaleFilter = [[GPUImageTransformFilter alloc] init];
[scaleFilter setAffineTransform:CGAffineTransformMakeScale(total_scale, total_scale)];
[scaleFilter prepareForImageCapture];
self.imagetoEdit = [scaleFilter imageByFilteringImage:self.imagetoEdit];
For panning, I'm doing this (not the cleanest code), also using the above-mentioned UIImage categories:
CGFloat x_ = (translation_point.x/canvas.frame.size.width)*self.imagetoEdit.size.width;
CGFloat y_ = (translation_point.y/canvas.frame.size.height)*self.imagetoEdit.size.height;
CGFloat xx = 0;
CGFloat yy = 0;
CGFloat ww = self.imagetoEdit.size.width-x_;
CGFloat hh = self.imagetoEdit.size.height-y_;
if (translation_point.x < 0) {
    xx = x_ * -1;
    ww = self.imagetoEdit.size.width + xx;
}
if (translation_point.y < 0) {
    yy = y_ * -1;
    hh = self.imagetoEdit.size.height + yy;
}
CGRect cgrect = CGRectMake(xx,yy, ww, hh);
self.imagetoEdit = [self.imagetoEdit imageAtRect:cgrect];
Everything seems to work.
This might be helpful:
Resizing a UIImage the right way
It might need some updating for ARC, etc., though I think there are people who have already done that and posted it on GitHub.
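The core of that approach boils down to redrawing into a bitmap context; an ARC-friendly minimal sketch (sourceImage and newSize are placeholders):
// Redraw at the target size. Passing 0 for scale uses the device's screen
// scale, so the result stays sharp on Retina displays.
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
[sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();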
