I am using the library
linked here to resize a camera-captured image with the following code:
UIImage *image = [[UIImage alloc] initWithData:imageData];
CGFloat squareLength = SC_APP_SIZE.width;
CGFloat headHeight = _previewLayer.bounds.size.height - squareLength; // _previewLayer's frame is (0, 44, 320, 320 + 44)
CGSize size = CGSizeMake(squareLength * 2, squareLength * 2);
UIImage *scaledImage = [image resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:size interpolationQuality:kCGInterpolationHigh];
CGRect cropFrame = CGRectMake((scaledImage.size.width - size.width) / 2, (scaledImage.size.height - size.height) / 2 + headHeight, size.width, size.height);
UIImage *croppedImage = [scaledImage croppedImage:cropFrame];
UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
if (orientation != UIDeviceOrientationPortrait) {
CGFloat degree = 0;
if (orientation == UIDeviceOrientationPortraitUpsideDown) {
degree = 180;// M_PI;
} else if (orientation == UIDeviceOrientationLandscapeLeft) {
degree = -90;// -M_PI_2;
} else if (orientation == UIDeviceOrientationLandscapeRight) {
degree = 90;// M_PI_2;
}
croppedImage = [croppedImage rotatedByDegrees:degree];
}
The code works regardless of orientation, back camera or front camera, when the image is captured directly through the camera.
The problem happens when I use the same code (except reading the image's EXIF orientation instead of the device orientation) for an image that was originally camera-captured but is accessed through the iOS camera roll.
Case: the image saved in the camera roll was captured in landscape orientation by the BACK camera. Effectively, the image gets an EXIF orientation indicating it was captured in portrait mode...but it's still a widescreen image.
The code crops the image disproportionately, leaving a black bar of empty space on the edge. I can't figure out what the problem is. Can someone point me in the right direction?
Related
I am building an iOS app that is to have a full screen ImagePickerController, with the resulting image that is captured to be the same that is shown in the ImagePickerController View. This is my current relevant code:
To create and transform the ImagePickerController:
self.imagePicker = [[UIImagePickerController alloc] init];
self.imagePicker.delegate = self;
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
// set the aspect ratio of the camera
float heightRatio = 4.0f / 3.0f;
// calculate the height of the camera based on the screen width
float cameraHeight = floorf(screenSize.width * heightRatio);
// calculate the ratio that the camera height needs to be scaled by
float scale = ceilf((screenSize.height / cameraHeight) * 10.0) / 10.0;
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
self.imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
self.imagePicker.showsCameraControls = NO;
[self.imagePicker setCameraOverlayView:cameraView];
// move the controller to the center of the screen
self.imagePicker.cameraViewTransform = CGAffineTransformMakeTranslation(0, (screenSize.height - cameraHeight) / 2.0);
// concatenate the scale transform
self.imagePicker.cameraViewTransform = CGAffineTransformScale(self.imagePicker.cameraViewTransform, scale, scale);
}
Once the image is captured, here is the code I am using to redraw the captured image to match the preview:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
self.image = [info objectForKey:UIImagePickerControllerOriginalImage];
self.image = [self croppedImage:self.image];
}
- (UIImage *) croppedImage: (UIImage *)image{
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
// set the aspect ratio of the camera
float heightRatio = 4.0f / 3.0f;
// calculate the height of the camera based on the screen width
float cameraHeight = floorf(screenSize.width * heightRatio);
// calculate the ratio that the camera height needs to be scaled by
float scale = ceilf((screenSize.height / cameraHeight) * 10.0) / 10.0;
CGSize originalImageSize = [image size];
CGSize newImageSize = CGSizeMake(floorf(originalImageSize.width / scale)* 3/4, floorf(originalImageSize.height / scale)* 3/4);
CGRect newImageRect = CGRectMake((originalImageSize.width - newImageSize.width)/2.0, (originalImageSize.height - newImageSize.height)/2.0, newImageSize.width, newImageSize.height);
return [image croppedImage:newImageRect];
}
So my problem is that my croppedImage: method is not calculating correctly, as the resulting image seems more "zoomed in" than it should be. I'm not really sure what is wrong in the calculation.
Note: this app is intended to scale properly on all iPhones, portrait mode only. I am currently testing on an iPhone 6.
In case this helps anybody: on my device I was able to fix it by switching
CGSize newImageSize = CGSizeMake(floorf(originalImageSize.width / scale)* 3/4, floorf(originalImageSize.height / scale)* 3/4);
to
CGSize newImageSize = CGSizeMake(floorf(originalImageSize.width / scale), floorf(originalImageSize.height / scale)* 4/3);
Only the height/scale needed to be multiplied, and by 4/3, not 3/4. I have not tested this on any other devices yet; I just figured this might help anybody running into the same thing.
I have a MKMapView in which I can have multiple ARDVenueAnnotationView (subclass of MKAnnotationView) with a custom image at same coordinates. Thus, these annotations overlap. What I have done for this, is to change the anchorPoint of the annotation view's layer. This is working as image below (4 centered annotations have the same coordinates) :
Besides, I would like the annotations to change their image orientation so the little image tail points to the coordinate (don't mind the annotations order) :
Here comes my issue, when I setImage: on my annotation view, constructing this image with + (UIImage *)imageWithCGImage:scale:orientation:, the orientation does not change. Here is my code that update the image :
- (void)updateImage
{
UIImage *selectedImage = [UIImage imageNamed:@"redPin"];
if (!self.isCluster && self.selected) {
selectedImage = [UIImage imageNamed:@"whitePin"];
}
UIImageOrientation orientation;
switch (self.anchorCorner) {
case ARDVenueAnnotationAnchorCornerBottomLeft:
orientation = UIImageOrientationUpMirrored;
break;
case ARDVenueAnnotationAnchorCornerTopLeft:
orientation = UIImageOrientationDown;
break;
case ARDVenueAnnotationAnchorCornerTopRight:
orientation = UIImageOrientationDownMirrored;
break;
default:
orientation = UIImageOrientationUp;
break;
}
UIImage *image = [[UIImage alloc] initWithCGImage:selectedImage.CGImage scale:selectedImage.scale orientation:orientation];
[self setImage:image];
}
Where anchorCorner is the property I set when I want the annotation view to shift so that the image's little tail points at the coordinates.
This method never changes the image orientation (the default image has its tail at the bottom right), and it keeps rendering as in the first picture above.
When I add a UIImageView as a subview of my annotation view, it shows the correct image orientation (as in the second picture).
My questions:
Why does setImage: not take the image orientation into account? Or maybe I am doing something wrong...
How can I achieve this without adding a UIImageView as a subview? After all, the image property is there for a reason.
Try creating a new image by redrawing the original with the orientation applied:
static inline double radians (double degrees) {return degrees * M_PI/180;}
- (UIImage *)image:(UIImage *)src withOrientation:(UIImageOrientation)orientation
{
UIGraphicsBeginImageContext(src.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Rotate about the canvas center so the rotated image stays inside the bitmap
CGContextTranslateCTM(context, src.size.width / 2, src.size.height / 2);
if (orientation == UIImageOrientationRight) {
CGContextRotateCTM(context, radians(90));
} else if (orientation == UIImageOrientationLeft) {
CGContextRotateCTM(context, radians(-90));
} else if (orientation == UIImageOrientationDown) {
CGContextRotateCTM(context, radians(180));
} // UIImageOrientationUp needs no rotation
[src drawAtPoint:CGPointMake(-src.size.width / 2, -src.size.height / 2)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
You may also need to flip the image; if so, use this code:
UIImage* flippedImage = [UIImage imageWithCGImage:image.CGImage
scale:image.scale
orientation:UIImageOrientationUpMirrored];
I am interested in using a custom overlay for the UIImagePickerController for camera controls that take the entire screen, very similar to what the default Camera app does when switched to the video function.
I have referenced several questions here on SO on this topic, particularly this one, as well as the Apple example on how to use a custom overlay for the UIImagePickerController, but none of them have been able to produce a successful, full-screen result with iOS 7.
So far, this is what the camera UI looks like with the custom overlay (without any buttons):
Note that there is a black bar in the bottom of the screen. I noticed that I could get the black bar removed if I change the cameraAspectRatio to 1, but this distorts the camera zoom as it then has a very zoomed in view. Is there a way to have full screen without either distorting the zoom too much (I understand that a small bit is necessary), and also remove the black bar?
Here is my code that configures the custom overlay:
{
self.imagePickerController.showsCameraControls = NO;
[[NSBundle mainBundle] loadNibNamed:@"overlayView" owner:self options:nil];
self.overlayView.frame = self.imagePickerController.cameraOverlayView.frame;
self.imagePickerController.cameraOverlayView = self.overlayView;
self.overlayView = nil;
// Device's screen size (ignoring rotation intentionally):
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
float cameraAspectRatio = 4.0 / 3.0; //! Note: 4.0 and 4.0 works
float imageWidth = floorf(screenSize.width * cameraAspectRatio);
float scale = ceilf((screenSize.height / imageWidth) * 10.0) / 10.0;
self.imagePickerController.cameraViewTransform = CGAffineTransformMakeScale(scale, scale);
}
Update:
An additional issue I noticed: the picture shown in the camera preview is always smaller and has less detail than the picture that is saved when [self.imagePickerController takePicture] is called. To illustrate:
This is what the keyboard looks like in the camera preview with the black bar below (sorry, it's a bit shaky):
However, note that the actual captured image in the preview panel has a lot more detail, as shown here, especially towards the top, as well as on both the left and right.
My question is, how would I be able to set up my camera preview so that what the user sees in the preview is exactly the image that will be captured and could be shown to them afterwards? Thanks!
Update 2
Here is the entire code in viewDidLoad that sets up the camera controls.
- (void)viewDidLoad
{
[super viewDidLoad];
//Appearance configuration
self.navigationItem.hidesBackButton = YES;
//UIImagePickerController initialization and setup
self.imagePickerController = [[UIImagePickerController alloc] init];
self.imagePickerController.delegate = self;
self.imagePickerController.allowsEditing = NO;
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]){
self.imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera; //if there is a camera avaliable
} else {
self.imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;//otherwise go to the folder
}
self.imagePickerController.mediaTypes = [[NSArray alloc] initWithObjects: (NSString *) kUTTypeImage, nil];
if (self.imagePickerController.sourceType == UIImagePickerControllerSourceTypeCamera)
{
self.imagePickerController.showsCameraControls = NO;
[[NSBundle mainBundle] loadNibNamed:@"overlayView" owner:self options:nil];
self.overlayView.frame = self.imagePickerController.cameraOverlayView.frame;
self.imagePickerController.cameraOverlayView = self.overlayView;
self.overlayView = nil;
// Device's screen size (ignoring rotation intentionally):
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
int heightOffset = 0;
if(SYSTEM_VERSION_GREATER_THAN_OR_EQUAL_TO(@"7.0"))
{
heightOffset = 120; //whole screen :)
}
float cameraAspectRatio = 4.0 / 3.0; //! Note: 4.0 and 4.0 works
float imageWidth = floorf(screenSize.width * cameraAspectRatio);
float scale = ceilf(((screenSize.height + heightOffset) / imageWidth) * 10.0) / 10.0;
self.imagePickerController.cameraViewTransform = CGAffineTransformMakeScale(scale, scale);
}
}
And in viewWillAppear, I call the image picker view:
BOOL modalPresent = (BOOL)(self.presentedViewController);
//Present the Camera UIImagePicker if no image is taken
if (!appDelegate.imageStorageDictionary[@"picture1"]){
if (modalPresent == NO){ //checks if the UIImagePickerController is modally active
[self presentViewController:self.imagePickerController animated:NO completion:nil];
}
}
After many attempts, this is what worked for me, with many thanks to other people's suggestions. The following facts were very helpful to know and keep in mind:
The camera preview's resolution in points is 426 x 320. For the preview's height to be stretched to the phone's screen height of 568 points, it needs to be multiplied by a factor of 1.3333 using CGAffineTransformScale.
Note that the values below are hard-coded for the iPhone 5's screen resolution in points. They could be generalized by using screen.height, screen.width and similar variables so the code also works on iPhone 4/4s dimensions.
self.imagePickerController.showsCameraControls = NO;
[[NSBundle mainBundle] loadNibNamed:@"overlayView" owner:self options:nil];
self.overlayView.frame = self.imagePickerController.cameraOverlayView.frame;
self.imagePickerController.cameraOverlayView = self.overlayView;
self.overlayView = nil;
//For iphone 5+
//Camera is 426 * 320. Screen height is 568. Multiply by 1.333 in 5 inch to fill vertical
CGAffineTransform translate = CGAffineTransformMakeTranslation(0.0, 71.0); //This slots the preview exactly in the middle of the screen by moving it down 71 points
self.imagePickerController.cameraViewTransform = translate;
CGAffineTransform scale = CGAffineTransformScale(translate, 1.333333, 1.333333);
self.imagePickerController.cameraViewTransform = scale;
In Swift
var picker: UIImagePickerController = UIImagePickerController();
picker.sourceType = UIImagePickerControllerSourceType.Camera;
picker.showsCameraControls = false;
var screenBounds: CGSize = UIScreen.mainScreen().bounds.size;
var scale = screenBounds.height / screenBounds.width;
picker.cameraViewTransform = CGAffineTransformScale(picker.cameraViewTransform, scale, scale);
Make sure that you account for the 20pt status bar change in iOS 7. If you are seeing a 20px black strip at the bottom of the screen, this will be your issue. You can check whether the app is running on iOS 7 with one of these preprocessor macros:
#define SYSTEM_VERSION_EQUAL_TO(v) ([[[UIDevice currentDevice] systemVersion] compare:v options:NSNumericSearch] == NSOrderedSame)
#define SYSTEM_VERSION_GREATER_THAN(v) ([[[UIDevice currentDevice] systemVersion] compare:v options:NSNumericSearch] == NSOrderedDescending)
#define SYSTEM_VERSION_GREATER_THAN_OR_EQUAL_TO(v) ([[[UIDevice currentDevice] systemVersion] compare:v options:NSNumericSearch] != NSOrderedAscending)
#define SYSTEM_VERSION_LESS_THAN(v) ([[[UIDevice currentDevice] systemVersion] compare:v options:NSNumericSearch] == NSOrderedAscending)
#define SYSTEM_VERSION_LESS_THAN_OR_EQUAL_TO(v) ([[[UIDevice currentDevice] systemVersion] compare:v options:NSNumericSearch] != NSOrderedDescending)
And you can make following changes to your code and see if it works
{
self.imagePickerController.showsCameraControls = NO;
[[NSBundle mainBundle] loadNibNamed:@"overlayView" owner:self options:nil];
self.overlayView.frame = self.imagePickerController.cameraOverlayView.frame;
self.imagePickerController.cameraOverlayView = self.overlayView;
self.overlayView = nil;
// Device's screen size (ignoring rotation intentionally):
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
int heightOffset = 0;
if(SYSTEM_VERSION_GREATER_THAN_OR_EQUAL_TO(@"7.0"))
{
heightOffset = 20;
}
float cameraAspectRatio = 4.0 / 3.0; //! Note: 4.0 and 4.0 works
float imageWidth = floorf(screenSize.width * cameraAspectRatio);
float scale = ceilf(((screenSize.height + heightOffset) / imageWidth) * 10.0) / 10.0;
self.imagePickerController.cameraViewTransform = CGAffineTransformMakeScale(scale, scale);
}
Please let me know if it works. I haven't coded it to test.
Because the screen aspect ratio is different than the 4/3 ratio of the camera itself, you need (as you mentioned) to slightly crop the edges in order to have the image extend to the bottom.
In order to do that, the width needs to be wide enough that the height can span the full screen. So your image width must be:
float cameraAspectRatio = 4.0 / 3.0; //! Note: 4.0 and 4.0 works
float imageWidth = screenSize.height / cameraAspectRatio;
You'll also probably want the view presenting the camera image to be that width that extends off both sides of the screen.
There is a much easier way to get your overlay fullscreen (tested at least on iOS 9).
let view = ... your overlayView
let bounds = UIScreen.mainScreen().bounds
view.center = CGPoint(x: bounds.midX, y: bounds.midY)
view.bounds = bounds
self.imagePicker.cameraOverlayView = view
I don't know why you use screen size. Just try this simple code:
if ([UIImagePickerController isSourceTypeAvailable: UIImagePickerControllerSourceTypeCamera])
{
UIImagePickerController *controller = [[UIImagePickerController alloc] init];
controller.sourceType = UIImagePickerControllerSourceTypeCamera;
controller.allowsEditing = NO;
controller.mediaTypes = [UIImagePickerController availableMediaTypesForSourceType:
UIImagePickerControllerSourceTypeCamera];
controller.delegate = self;
[self presentViewController:controller animated:NO completion:^{}];
}
Try this code to keep your resulting image at a custom size. I got the result by using a custom method that resizes the captured image.
First, create an IBOutlet for a UIImageView.
-(void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
NSString *mediaType = info[UIImagePickerControllerMediaType];
[self dismissViewControllerAnimated:YES completion:nil];
if ([mediaType isEqualToString:(NSString *)kUTTypeImage]) {
OriginalImage=info[UIImagePickerControllerOriginalImage];
image = info[UIImagePickerControllerOriginalImage];
//----------------------------------------
imageview.image = image; //------------- additional method for custom image size
[self resizeImage];
//-----------------------------------------
if (_newMedia)
UIImageWriteToSavedPhotosAlbum(image,self,@selector(image:finishedSavingWithError:contextInfo:),nil);
}
else if ([mediaType isEqualToString:(NSString *)kUTTypeMovie])
{
// Code here to support video if enabled
}
}
//---- Resize the original image by using custom method----------------------
-(void)resizeImage
{
UIImage *resizeImage = imageview.image;
float width = 320;
float height = 320;
CGSize newSize = CGSizeMake(320,320);
UIGraphicsBeginImageContextWithOptions(newSize,NO,0.0);
CGRect rect = CGRectMake(0, 0, width, height);
float widthRatio = resizeImage.size.width / width;
float heightRatio = resizeImage.size.height / height;
float divisor = widthRatio > heightRatio ? widthRatio : heightRatio;
width = resizeImage.size.width / divisor;
height = resizeImage.size.height / divisor;
rect.size.width = width;
rect.size.height = height;
//indent in case of width or height difference
float offset = (width - height) / 2;
if (offset > 0) {
rect.origin.y = offset;
}
else {
rect.origin.x = -offset;
}
[resizeImage drawInRect: rect];
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageview.image = smallImage;
imageview.contentMode = UIViewContentModeScaleAspectFit;
}
To solve a similar problem, I instantiated a view controller from the storyboard and used that controller's view as the camera overlay.
I don't know if it's what you're looking for, but I used the following code:
UIImagePickerController *globalPicker = [[UIImagePickerController alloc] init];
globalPicker.delegate = self;
globalPicker.sourceType = UIImagePickerControllerSourceTypeCamera;
controller = [self.storyboard instantiateViewControllerWithIdentifier:@"myVC"];
overlayView = controller.view;
UIView *cameraView=[[UIView alloc] initWithFrame:self.view.bounds];
[cameraView addSubview:overlayView];
[globalPicker setCameraOverlayView:cameraView];
Try this code; it works fine for me:
UIImagePickerController *cameraUI = [[UIImagePickerController alloc] init];
cameraUI.sourceType = UIImagePickerControllerSourceTypeCamera;
cameraUI.showsCameraControls = NO;
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
float cameraAspectRatio = 4.0 / 3.0;
float imageHeight = floorf(screenSize.width * cameraAspectRatio);
float scale = screenSize.height / imageHeight;
float trans = (screenSize.height - imageHeight)/2;
CGAffineTransform translate = CGAffineTransformMakeTranslation(0.0, trans);
CGAffineTransform final = CGAffineTransformScale(translate, scale, scale);
cameraUI.cameraViewTransform = final;
cameraUI.delegate = self;
[self presentViewController:cameraUI animated:YES completion:nil];
This code assumes that the aspect of the original camera preview area is 4:3.
When an image appears in my modal view, I want the image to scale correctly, like so:
If the view is landscape:
If it is a landscape image it should fit to the screen bounds
If it is a portrait image it should center
If the view is portrait
If it is a portrait image fit vertical center
Same for landscape image
The portrait view is OK because I use aspect-fit scaling, which works great in portrait.
However, no matter what I do, I cannot seem to get the images to scale/resize/position correctly when I change to landscape.
Thank you in advance.
- (void)viewDidLoad
{
[super viewDidLoad];
self.scollView.delegate = self;
UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleDoubleTap:)];
[doubleTap setNumberOfTapsRequired:2];
[self.viewImage addGestureRecognizer:doubleTap];
}
- (void)resetContentMode:(UIImageView *)imageView
{
if (imageView.image == nil)
return;
CGSize imageSize = imageView.image.size;
CGSize imageViewSize = imageView.bounds.size;
if (imageSize.height == 0 || imageViewSize.height == 0)
return;
CGFloat imageRatio = imageSize.width / imageSize.height;
CGFloat imageViewRatio = imageViewSize.width / imageViewSize.height;
CGFloat percentDifference = fabs(imageRatio - imageViewRatio) / imageViewRatio;
if (percentDifference < 0.25)
imageView.contentMode = UIViewContentModeScaleAspectFill;
else
imageView.contentMode = UIViewContentModeScaleAspectFit;
}
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
return YES;
}
- (BOOL)shouldAutorotate {
return YES;
}
- (NSUInteger)supportedInterfaceOrientations {
return UIInterfaceOrientationMaskAll;
}
-(void)willAnimateRotationToInterfaceOrientation: (UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
if(toInterfaceOrientation == UIInterfaceOrientationPortrait){
[self resetContentMode:im];
}
else if (toInterfaceOrientation == UIInterfaceOrientationLandscapeRight || toInterfaceOrientation == UIInterfaceOrientationLandscapeLeft) {
[self resetContentMode:im];
}
}
- (void) adjustViewsForOrientation:(UIInterfaceOrientation) orientation {
[self.scollView setZoomScale:self.scollView.minimumZoomScale animated:YES];
if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown)
{
[self resetContentMode:im];
}
else if (orientation == UIInterfaceOrientationLandscapeLeft || orientation == UIInterfaceOrientationLandscapeRight)
{
[self resetContentMode:im];
}
}
- (void)orientationChanged:(NSNotification *)notification {
[self adjustViewsForOrientation:[[UIApplication sharedApplication] statusBarOrientation]];
}
If you don't need pinch-zoom, but rather just need to tweak the appearance of the image view based upon whether the orientation of the image matches that of the image view, you can retire the scrollview in this process.
You can accomplish the desired effect with just an image view, by changing the contentMode based upon how close the ratio of the image's dimensions is to the ratio of the image view's dimensions. If they're close, I default to UIViewContentModeScaleAspectFill (because the ratios are close to each other, I don't have to worry about too much getting clipped). If they're not close, I default to UIViewContentModeScaleAspectFit (because the ratios are not close, a lot would get clipped if I used "fill", so instead I default to "fit").
- (void)resetContentMode:(UIImageView *)imageView
{
if (imageView.image == nil)
return;
CGSize imageSize = imageView.image.size;
CGSize imageViewSize = imageView.bounds.size;
if (imageSize.height == 0 || imageViewSize.height == 0)
return;
CGFloat imageRatio = imageSize.width / imageSize.height;
CGFloat imageViewRatio = imageViewSize.width / imageViewSize.height;
CGFloat percentDifference = fabs(imageRatio - imageViewRatio) / imageViewRatio;
if (percentDifference < 0.25)
imageView.contentMode = UIViewContentModeScaleAspectFill;
else
imageView.contentMode = UIViewContentModeScaleAspectFit;
}
I then, whenever I rotate, call this method again. For example:
- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation
{
[self resetContentMode:self.imageView];
}
I also turn on userInteractionEnabled on the image, and have a double tap gesture that toggles between the two content modes:
- (void)handleDoubleTap:(UITapGestureRecognizer *)gesture
{
UIImageView *imageView = (id)gesture.view;
if (imageView.contentMode == UIViewContentModeScaleAspectFit)
imageView.contentMode = UIViewContentModeScaleAspectFill;
else if (imageView.contentMode == UIViewContentModeScaleAspectFill)
imageView.contentMode = UIViewContentModeScaleAspectFit;
else
[self resetContentMode:imageView];
}
I'll also make sure that the UIImageView has its clipsToBounds set to YES, such that when it's using UIViewContentModeScaleAspectFill, I don't have to worry about the image bleeding over the bounds of the image view.
This is a quick-and-dirty solution, but it works pretty well for simple scenarios, where you only want to worry about aspect fit vs aspect fill.
I'm using someone else's pinch-gesture code for scaling, which works perfectly, but it's scaling my image in my photo editing app, and once the user presses Done, I need the changes to be reflected and saved. In other words, I need the image to actually be zoomed in and cropped if the user pinched to scale. I figured I could use the amount they scaled times the frame size for UIGraphicsBeginImageContext, but that strategy is not working: when the user scales the image and hits the Done button, the saved image comes out smaller, because this now very large size is getting squeezed into the view, when what I really want is to crop off any leftovers and not do any fitting.
- (IBAction)pinchGest:(UIPinchGestureRecognizer *)sender{
if (sender.state == UIGestureRecognizerStateEnded
|| sender.state == UIGestureRecognizerStateChanged) {
NSLog(@"sender.scale = %f", sender.scale);
CGFloat currentScale = self.activeImageView.frame.size.width / self.activeImageView.bounds.size.width;
CGFloat newScale = currentScale * sender.scale;
if (newScale < .5) {
newScale = .5;
}
if (newScale > 4) {
newScale = 4;
}
CGAffineTransform transform = CGAffineTransformMakeScale(newScale, newScale);
self.activeImageView.transform = transform;
scalersOfficialChange = newScale;
sender.scale = 1;
}
}
- (IBAction)doneMoverViewButtonPressed:(UIButton *)sender {
// turn off ability to move & scale
moverViewActive = NO;
NSLog(@"%f %f", dragOfficialChange.x, dragOfficialChange.y);
NSLog(@"%f", rotationOfficialChange);
NSLog(@"%f", scalersOfficialChange);
//problem area below...
CGSize newSize = CGSizeMake(self.activeImageView.bounds.size.width * scalersOfficialChange, self.activeImageView.bounds.size.height * scalersOfficialChange );
UIGraphicsBeginImageContext(newSize);
[self.activeImageView.image drawInRect:CGRectMake(dragOfficialChange.x, dragOfficialChange.y, self.layerContainerView.bounds.size.width, self.layerContainerView.bounds.size.height)];
self.activeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self hideMoveViewerAnimation];
//resets activeimageview coords
CGRect myFrame = self.layerContainerView.bounds;
myFrame.origin.x = 0;
myFrame.origin.y = 0;
self.activeImageView.frame = myFrame;
//reset changes values
dragOfficialChange.x = 0;
dragOfficialChange.y = 0;
rotationOfficialChange = 0;
scalersOfficialChange = 0;
}
First of all, can you make your question clearer? I take it you want to draw your image in a rect without squeezing it, am I right?
Then let's try this method:
// The method drawInRect:(CGRect) will scale your pic and squeeze it to fit the rect area.
// So if you don't want to scale your pic, you can use the method below:
[image drawAsPatternInRect:_rect];
// This method will not scale your image and will cut off the needless part.