Similar to Instagram, I have a square crop view (a UIScrollView) with a UIImageView inside it. The user can drag a portrait or landscape image around inside the square rect (equal to the width of the screen), and the image should then be cropped at the scroll offset. The UIImageView is set to aspect fit, and the UIScrollView's content size is set with a scale factor for either landscape or portrait so that it renders correctly at the aspect-fit ratio.
When the user is done dragging, I want to scale the image up to a given size, say a 1000x1000px square, and then crop it at the scroll offset (using -[UIImage drawAtPoint:]).
The problem is that I can't get the math right for the offset point. If I get it close on a 6 Plus, it will be way off on a 4S.
Here's my code for the scale and crop:
- (UIImage *)squareImageFromImage:(UIImage *)image scaledToSize:(CGFloat)newSize {
    CGAffineTransform scaleTransform;
    CGPoint origin;

    if (image.size.width > image.size.height) {
        //landscape
        CGFloat scaleRatio = newSize / image.size.height;
        scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
        origin = CGPointMake((int)(-self.scrollView.contentOffset.x * scaleRatio), 0);
    } else if (image.size.width < image.size.height) {
        //portrait
        CGFloat scaleRatio = newSize / image.size.width;
        scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
        origin = CGPointMake(0, (int)(-self.scrollView.contentOffset.y * scaleRatio));
    } else {
        //square
        CGFloat scaleRatio = newSize / image.size.width;
        scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
        origin = CGPointMake(0, 0);
    }

    // square output context (newSize x newSize)
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(newSize, newSize), YES, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextConcatCTM(context, scaleTransform);
    [image drawAtPoint:origin];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
For example, with a landscape image, if I drag the scroll view left so that the image is cropped all the way to the right, my offset will be close on a 6 Plus, but on a 4S it will be off by about 150-200 points.
Here is my code for setting up the scroll view and image view:
CGRect cropRect = CGRectMake(0.0f, 0.0, SCREEN_WIDTH, SCREEN_WIDTH);
CGFloat ratio = (int)self.image.size.height / self.image.size.width;
CGRect r = CGRectMake(0.0, 0.0, SCREEN_WIDTH, SCREEN_WIDTH);

if (ratio > 1.00) {
    //portrait
    r = CGRectMake(0.0, 0.0, SCREEN_WIDTH, (int)(SCREEN_WIDTH * ratio));
} else if (ratio < 1.00) {
    //landscape
    CGFloat size = (int)self.image.size.width / self.image.size.height;
    cropOffset = (SCREEN_WIDTH * size) - SCREEN_WIDTH;
    r = CGRectMake(0.0, 0.0, (int)(SCREEN_WIDTH * size), SCREEN_WIDTH);
}

NSLog(@"r.size.height == %.4f", r.size.height);

self.scrollView.frame = cropRect;
self.scrollView.contentSize = r.size;
self.imageView = [[UIImageView alloc] initWithFrame:r];
self.imageView.backgroundColor = [UIColor clearColor];
self.imageView.contentMode = UIViewContentModeScaleAspectFit;
self.imageView.image = self.image;
[self.scrollView addSubview:self.imageView];
Cropping math can be tricky. It's been a while since I've had to deal with this, so hopefully I'm pointing you in the right direction. Here is a chunk of code from Pixology that grabs a scaled visible rect from a UIScrollView. I think the missing ingredient here might be zoomScale.
CGRect visibleRect;
visibleRect.origin = _scrollView.contentOffset;
visibleRect.size = _scrollView.bounds.size;
// figure in the scale
float theScale = 1.0 / _scrollView.zoomScale;
visibleRect.origin.x *= theScale;
visibleRect.origin.y *= theScale;
visibleRect.size.width *= theScale;
visibleRect.size.height *= theScale;
You may also need to figure in device screen scale:
CGFloat screenScale = [[UIScreen mainScreen] scale];
See how far you can get with this info, and let me know.
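Building on that, here is a minimal sketch of how the pieces might fit the setup from the question (an illustration under assumptions, not tested code: SCREEN_WIDTH is the question's macro, the image view is laid out as in the question, and zoomScale is 1.0 unless pinch zooming is enabled). The key step is converting the content offset from screen points into image coordinates before scaling to the output size, which removes the device's screen width from the math:
- (UIImage *)squareImageFromImage:(UIImage *)image outputSize:(CGFloat)outputSize {
    // Screen points per image point: the image's short side fills SCREEN_WIDTH points,
    // scaled further by any pinch zoom.
    CGFloat shortSide = MIN(image.size.width, image.size.height);
    CGFloat pointsPerImagePoint = (SCREEN_WIDTH / shortSide) * self.scrollView.zoomScale;

    // Convert the scroll offset from screen points into image coordinates.
    // This is the device-independent step: SCREEN_WIDTH (what differs between
    // a 4S and a 6 Plus) drops out of the math here.
    CGFloat cropX = self.scrollView.contentOffset.x / pointsPerImagePoint;
    CGFloat cropY = self.scrollView.contentOffset.y / pointsPerImagePoint;

    // The visible square covers this many image points per side.
    CGFloat visibleSide = SCREEN_WIDTH / pointsPerImagePoint;
    CGFloat outputRatio = outputSize / visibleSide;

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(outputSize, outputSize), YES, 0);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), outputRatio, outputRatio);
    [image drawAtPoint:CGPointMake(-cropX, -cropY)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}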
Related
I am using the following crop method to crop a UIImage that's sitting in a UIImageView, which is in turn sitting in a UIScrollView.
- (UIImage *)cropImage:(UIImage *)image
{
    float scale = 1.0f / _scrollView.zoomScale;
    NSLog(@"Oh and here's that zoomScale: %f", _scrollView.zoomScale);

    CGRect visibleRect;
    visibleRect.origin.x = _scrollView.contentOffset.x * scale;
    visibleRect.origin.y = _scrollView.contentOffset.y * scale;
    visibleRect.size.width = _scrollView.bounds.size.width * scale;
    visibleRect.size.height = _scrollView.bounds.size.height * scale;

    NSLog(@"Oh and here's that CGRect: %f", visibleRect.origin.x);
    NSLog(@"Oh and here's that CGRect: %f", visibleRect.origin.y);
    NSLog(@"Oh and here's that CGRect: %f", visibleRect.size.width);
    NSLog(@"Oh and here's that CGRect: %f", visibleRect.size.height);

    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], visibleRect);
    UIImage *croppedImage = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
I need the image to be cropped to a CGSize of (321,115). After cropping the image and checking the printed results, I can see that visibleRect is (0,0,321,115), which is what it is supposed to be, and the resulting croppedImage has width 321 and height 115. For some reason, however, the image appears to be zoomed in far too much (the method cropped a smaller portion of the original image to a size of 321x115).
Why is this method not cropping my image correctly?
As a side note: when I call this method, I call it like so: _croppedImage = [self cropImage:_imageView.image]; which sets a UIImage property of a custom UIView class to the cropped image.
Please try this function. It may help you.
Parameters:
UIImage
CGSize (321,115) or any size
// crop image - image will be cropped from the full image
- (UIImage *)cropImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    double ratio;
    double delta;
    CGPoint offset;

    //make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    //figure out if the picture is landscape or portrait, then
    //calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    }
    else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    //make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x,
                                 -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);

    //start a new context, with scale factor 0.0 so retina displays get
    //high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
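A call site might look like the following (a hypothetical usage with the asker's property names, not part of the original answer). Note that as written the method builds a square context of newSize.width x newSize.width, so the height component of the passed size is effectively ignored:
// Hypothetical usage with the (321,115) size from the question.
UIImage *cropped = [self cropImageWithImage:_imageView.image
                               scaledToSize:CGSizeMake(321, 115)];
_croppedImage = cropped;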
To crop only a selected portion of the image, please check this link.
I am using the code below first to create an image thumbnail (using a category) and then tailoring the thumbnail to the VC in question, for example making it round.
Somehow the aspect ratio of the images is not being preserved: some get squashed vertically, so a face looks like a sideways oval, while others get squashed horizontally, so a round ball looks like an upright football. In the code for individual VCs I am using UIViewContentModeScaleAspectFill and setting clipsToBounds to YES, but to no avail. I also tried setting these in the Storyboard, but still no luck.
Can anyone see what might be wrong with the code below?
//code in viewDidLoad
UIImage *thumbnail = [selectedImage createThumbnailToFillSize:CGSizeMake(side, side)];
//see createThumbnailToFillSize: method below
self.contactImage.image = thumbnail;

//image has been selected and trimmed to thumb. Now format it
CGSize itemSize = CGSizeMake(64, 64);
UIGraphicsBeginImageContextWithOptions(itemSize, NO, UIScreen.mainScreen.scale);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
self.contactImage.contentMode = UIViewContentModeScaleAspectFill;
self.contactImage.clipsToBounds = YES;
[self.contactImage.image drawInRect:imageRect];
self.contactImage.image = UIGraphicsGetImageFromCurrentImageContext();
self.contactImage.layer.cornerRadius = 60.0;
UIGraphicsEndImageContext();
//Generic category to create thumb
- (UIImage *)createThumbnailToFillSize:(CGSize)size
{
    CGSize mainImageSize = size;
    UIImage *thumb;
    CGFloat widthScaler = size.width / mainImageSize.width;
    CGFloat heightScaler = size.height / mainImageSize.height;
    CGSize repositionedMainImageSize = mainImageSize;
    CGFloat scaleFactor;

    // Determine if we should shrink based on width or height
    if (widthScaler > heightScaler)
    {
        // calculate based on width scaler
        scaleFactor = widthScaler;
        repositionedMainImageSize.height = ceil(size.height / scaleFactor);
    }
    else {
        // calculate based on height scaler
        scaleFactor = heightScaler;
        repositionedMainImageSize.width = ceil(size.width / heightScaler);
    }

    UIGraphicsBeginImageContext(size);
    CGFloat xInc = ((repositionedMainImageSize.width - mainImageSize.width) / 2.f) * scaleFactor;
    CGFloat yInc = ((repositionedMainImageSize.height - mainImageSize.height) / 2.f) * scaleFactor;
    [self drawInRect:CGRectMake(xInc, yInc, mainImageSize.width * scaleFactor, mainImageSize.height * scaleFactor)];
    thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumb;
}
I have a UIImageView (red squares) that will display a UIImage that must be scaled (I can receive images greater or smaller than the UIImageView). After scaling, the part of the UIImage that is shown is its center. What I need is to show the part of the image inside the blue squares; how can I achieve that?
I'm only able to get the image size (height and width), but it reports the original size rather than the scaled one.
self.viewIm = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 120, 80)];
self.viewIm.backgroundColor = [UIColor greenColor];
self.viewIm.layer.borderColor = [UIColor redColor].CGColor;
self.viewIm.layer.borderWidth = 5.0;
UIImage *im = [UIImage imageNamed:@"benjen"];
self.viewIm.image = im;
self.viewIm.contentMode = UIViewContentModeScaleAspectFill;
// self.viewIm.clipsToBounds = YES;
[self.view addSubview:self.viewIm];
To do what you're trying to do, I'd recommend looking into CALayer's contentsRect property.
Since seeing your answer, I've been trying to work out the proper solution for a while, but the mathematics escapes me because contentsRect's x and y parameters seem somewhat mysterious... But here's some code that may point you in the right direction...
float imageAspect = self.imageView.image.size.width / self.imageView.image.size.height;
float imageViewAspect = self.imageView.frame.size.width / self.imageView.frame.size.height;
if (imageAspect > imageViewAspect) {
    float scaledImageWidth = self.imageView.frame.size.height * imageAspect;
    float offsetWidth = -((scaledImageWidth - self.imageView.frame.size.width) / 2);
    self.imageView.layer.contentsRect = CGRectMake(offsetWidth / self.imageView.frame.size.width, 0.0, 1.0, 1.0);
} else if (imageAspect < imageViewAspect) {
    float scaledImageHeight = self.imageView.frame.size.width * imageAspect;
    float offsetHeight = ((scaledImageHeight - self.imageView.frame.size.height) / 2);
    self.imageView.layer.contentsRect = CGRectMake(0.0, offsetHeight / self.imageView.frame.size.height, 1.0, 1.0);
}
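One detail that demystifies this a bit: contentsRect is expressed in unit coordinates, i.e. fractions from 0.0 to 1.0 of the layer's contents, not in points, which is why the offsets above are divided by the view's width and height. As a standalone illustration (the values here are mine, not from the answer above):
// contentsRect takes unit coordinates: each component is a fraction of the image.
// This shows the horizontal middle 50% of the image at full height.
self.imageView.layer.contentsRect = CGRectMake(0.25, 0.0, 0.5, 1.0);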
Try something like this:
CGRect cropRect = CGRectMake(0,0,200,200);
CGImageRef imageRef = CGImageCreateWithImageInRect([ImageToCrop CGImage],cropRect);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
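One caveat to add here (my note, not part of the original answer): CGImageCreateWithImageInRect works in the pixel coordinates of the underlying CGImage, while UIImage sizes and view geometry are in points. On a Retina screen the two differ by the image's scale factor, so a point-based rect may need converting first, along these lines:
// CGImageCreateWithImageInRect expects pixels; convert a point-based rect first.
CGFloat imgScale = ImageToCrop.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * imgScale,
                              cropRect.origin.y * imgScale,
                              cropRect.size.width * imgScale,
                              cropRect.size.height * imgScale);
CGImageRef imageRef = CGImageCreateWithImageInRect([ImageToCrop CGImage], pixelRect);
UIImage *image = [UIImage imageWithCGImage:imageRef scale:imgScale orientation:ImageToCrop.imageOrientation];
CGImageRelease(imageRef);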
I found a very good approximation in this answer. There, the category resizes the image and then crops around the center point. I adapted it to crop using (0,0) as the origin point, and since I don't really need a category, I use it as a single method.
- (UIImage *)imageByScalingAndCropping:(UIImage *)image forSize:(CGSize)targetSize {
    UIImage *sourceImage = image;
    UIImage *newImage = nil;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetSize.width;
    CGFloat scaledHeight = targetSize.height;

    if (CGSizeEqualToSize(image.size, targetSize) == NO) {
        if ((targetSize.width / image.size.width) > (targetSize.height / image.size.height)) {
            scaleFactor = targetSize.width / image.size.width; // scale to fit height
        } else {
            scaleFactor = targetSize.height / image.size.height; // scale to fit width
        }
        scaledWidth = image.size.width * scaleFactor;
        scaledHeight = image.size.height * scaleFactor;
    }

    UIGraphicsBeginImageContext(targetSize); // this will crop
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = CGPointZero;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil) {
        NSLog(@"could not scale image");
    }

    //pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
And my call is something like this:
self.viewIm = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 120, 80)];
self.viewIm.image = [self imageByScalingAndCropping:[UIImage imageNamed:@"benjen"] forSize:CGSizeMake(120, 80)];
[self.view addSubview:self.viewIm];
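One small caveat (my addition, not part of the original answer): UIGraphicsBeginImageContext always uses a scale factor of 1.0, so the output won't be Retina quality. If that matters, the options variant with a scale of 0.0 picks up the device's screen scale:
// Scale 0.0 means "use the device's main screen scale" (Retina-aware output).
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);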
I've spent some time on this and finally created a Swift 3.2 solution (based on one of my answers on another thread, as well as one of the answers above). This code only allows for Y translation of the image, but with some tweaks anyone should be able to add horizontal translation as well ;)
let yOffset: CGFloat = 20
myImageView.contentMode = .scaleAspectFill
//scale image to fit the imageView's width (maintaining aspect ratio), but allow control over the image's Y position
UIGraphicsBeginImageContextWithOptions(myImageView.frame.size, myImageView.isOpaque, 0.0)
let ratio = myImage.size.width / myImage.size.height
let newHeight = myImageView.frame.width / ratio
let rect = CGRect(x: 0, y: -yOffset, width: myImageView.frame.width, height: newHeight)
myImage.draw(in: rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext() ?? myImage
UIGraphicsEndImageContext()
//set the new image
myImageView.image = newImage
Now you can adjust how far down or up you need the image to be by changing the yOffset.
I need to find the biggest centered square in a portrait or landscape image and scale it down to a given size.
E.g. if I get an image of size 1200x800, I need to crop out the centered square and scale it down to 300x300.
I found an answer to this question on Stack Overflow which has been widely copied. However, that answer is incorrect, so I want to post the correct answer, which is as follows:
+ (UIImage *)cropBiggestCenteredSquareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // Get size of current image
    CGSize size = [image size];
    if (size.width == size.height && size.width == side) {
        return image;
    }

    CGSize newSize = CGSizeMake(side, side);
    double ratio;
    double delta;
    CGPoint offset;

    //make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    //figure out if the picture is landscape or portrait, then
    //calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.height / image.size.height;
        delta = ratio * (image.size.width - image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.width;
        delta = ratio * (image.size.height - image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    //make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width),
                                 (ratio * image.size.height));

    //start a new context, with scale factor 0.0 so retina displays get
    //high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
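For the sizes mentioned above (a 1200x800 source cropped to a 300x300 square), a call might look like this, assuming the class method is declared in a UIImage category (the variable name is hypothetical):
// 1200x800 source -> biggest centered square (800x800) -> drawn into a 300x300 context.
UIImage *squared = [UIImage cropBiggestCenteredSquareImageFromImage:sourceImage withSide:300];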
The incorrect answer which I found earlier is as follows:
+ (UIImage *)cropBiggestCenteredSquareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // Get size of current image
    CGSize size = [image size];
    if (size.width == size.height && size.width == side) {
        return image;
    }

    CGSize newSize = CGSizeMake(side, side);
    double ratio;
    double delta;
    CGPoint offset;

    //make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    //figure out if the picture is landscape or portrait, then
    //calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    //make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);

    //start a new context, with scale factor 0.0 so retina displays get
    //high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The problem with this code is that it does not crop correctly.
Both versions can be tried on the following image:
https://s3.amazonaws.com/anandprakash/ImageWithPixelGrid.jpg
The correct algorithm generates the following image from the base image above:
https://s3.amazonaws.com/anandprakash/ScreenshotCorrectAlgo.png
The wrong algorithm generates the following image from the base image above - notice the extra 50px of width on each side.
https://s3.amazonaws.com/anandprakash/ScreenshotWrongAlgo.png
The same answer as above, written as a Swift extension on UIImage:
private extension UIImage {
    func cropBiggestCenteredSquareImage(withSide side: CGFloat) -> UIImage {
        if self.size.height == side && self.size.width == side {
            return self
        }
        let newSize = CGSizeMake(side, side)
        let ratio: CGFloat
        let delta: CGFloat
        let offset: CGPoint
        if self.size.width > self.size.height {
            ratio = newSize.height / self.size.height
            delta = ratio * (self.size.width - self.size.height)
            offset = CGPointMake(delta / 2, 0)
        }
        else {
            ratio = newSize.width / self.size.width
            delta = ratio * (self.size.height - self.size.width)
            offset = CGPointMake(0, delta / 2)
        }
        let clipRect = CGRectMake(-offset.x, -offset.y, ratio * self.size.width, ratio * self.size.height)
        if UIScreen.mainScreen().respondsToSelector(#selector(NSDecimalNumberBehaviors.scale)) {
            UIGraphicsBeginImageContextWithOptions(newSize, true, 0.0)
        } else {
            UIGraphicsBeginImageContext(newSize)
        }
        UIRectClip(clipRect)
        self.drawInRect(clipRect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
I'm making an app that does something like this: you load a photo and you put images over it, like balloons, etc.
When I try to merge one of these overlay images with only a resize, it works fine. It ends up like 10px bigger than it should be, but that's no problem.
The problem comes when you rotate the image [UIImageView]: the merged image appears much bigger than the image is. I've tried a lot of things and nothing worked. I'll leave the code here; I hope someone can help.
Note: the image size is the one inside the UIImageView, then multiplied by the scale of the main image.
- (UIImage *)mergeImage:(UIImageView *)mainImage withImageView:(UIImageView *)imageView {
    UIImage *temp = imageView.image;
    UIImage *tempMain = mainImage.image;

    CGFloat mainScale = [self imageViewScaleFactor:mainImage];
    CGFloat tempScale = 1 / mainScale;
    NSLog(@"%f", tempScale);

    //Rotate UIImage
    UIGraphicsBeginImageContext(temp.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, temp.size.width / 2, temp.size.height / 2);
    CGFloat angle = atan2(imageView.transform.b, imageView.transform.a);
    transform = CGAffineTransformRotate(transform, angle);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGContextConcatCTM(ctx, transform);

    // Draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(-temp.size.width / 2, -temp.size.height / 2, temp.size.width, temp.size.height), temp.CGImage);

    // Get an image from the context
    temp = [UIImage imageWithCGImage:CGBitmapContextCreateImage(ctx)];
    NSLog(@"%f %f %f", mainScale, mainImage.frame.size.width, mainImage.frame.size.height);

    UIGraphicsBeginImageContextWithOptions(tempMain.size, NO, 1.0f);

    //Get imageView size & position
    NSLog(@"%f %f %f %f", imageView.frame.origin.x, imageView.frame.origin.y, imageView.frame.size.width, imageView.frame.size.height);
    CGFloat offsetX = 0;
    CGFloat offsetY = -44;
    if (tempMain.size.height > tempMain.size.width) {
        offsetX = ((tempMain.size.width * mainScale) - 320) / 2;
    } else {
        offsetY = ((tempMain.size.height * mainScale) - 416) / 2;
        offsetY -= 44;
    }

    CGFloat imageViewX = (imageView.frame.origin.x + offsetX) * tempScale;
    CGFloat imageViewY = (imageView.frame.origin.y + offsetY) * tempScale;
    CGFloat imageViewW = imageView.frame.size.width * tempScale;
    CGFloat imageViewH = imageView.frame.size.height * tempScale;

    CGRect tempRect = CGRectMake(imageViewX, imageViewY, imageViewW, imageViewH);
    [tempMain drawAtPoint:CGPointZero];
    [temp drawInRect:tempRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Thanks
This is the solution that works for me:
Merging a previously rotated-by-gesture UIImageView with another one. WYS is not WYG
I just take a snapshot of the main screen and then crop it to the size of the photo; it's faster and cleaner. The resolution is OK if the app runs on Retina; on a non-Retina device it isn't as good. And you need to prepare that code to work on both Retina and non-Retina screens.
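For reference, here's a minimal sketch of that snapshot-and-crop idea (my illustration, not the exact code from the linked answer; container and photoFrame are assumed names, with photoFrame being the photo's frame in the container's coordinates):
// Snapshot the container view (photo + rotated overlays), then crop to the photo's frame.
- (UIImage *)snapshotAndCropView:(UIView *)container toRect:(CGRect)photoFrame {
    CGFloat screenScale = [UIScreen mainScreen].scale; // works on retina & non-retina

    // Render the whole view hierarchy as it appears on screen.
    UIGraphicsBeginImageContextWithOptions(container.bounds.size, NO, screenScale);
    [container.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // CGImageCreateWithImageInRect works in pixels, so convert the point rect.
    CGRect pixelRect = CGRectMake(photoFrame.origin.x * screenScale,
                                  photoFrame.origin.y * screenScale,
                                  photoFrame.size.width * screenScale,
                                  photoFrame.size.height * screenScale);
    CGImageRef cgCrop = CGImageCreateWithImageInRect(snapshot.CGImage, pixelRect);
    UIImage *cropped = [UIImage imageWithCGImage:cgCrop scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(cgCrop);
    return cropped;
}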