I am creating an iPhone app with an image cropping feature. I get the photo from a UIImagePickerController and pass it on for cropping. The cropping screen has a scroll view, and the selected image is added as a subview of the scroll view. I use a UIButton for selecting the area to crop: the user can move the button over the image view and place it anywhere, and when the CROP button is tapped, the area matching the button's frame should be cropped from the image view.
I used the following code, but it is not returning the actual image.
CGRect clippedRect = CGRectMake(self.scrollView.frame.origin.x+90, self.scrollView.frame.origin.y, self.scrollView.frame.size.width-180, self.scrollView.frame.size.height-220);
CGImageRef imageRef = CGImageCreateWithImageInRect([self.myPhoto CGImage], clippedRect);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
self.imageView.image = newImage;
I also tried this:
- (UIImage *)cropImage:(UIImage *)oldImage {
CGSize imageSize = self.cropFrame.frame.size;
UIGraphicsBeginImageContextWithOptions( CGSizeMake( imageSize.width, imageSize.height), NO, 0.);
[oldImage drawAtPoint:CGPointMake( xPosition, yPosition)
blendMode:kCGBlendModeCopy
alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return croppedImage;
}
but the resulting image is not the area under the button's frame; I get an image from a different area.
Updated code
- (void)loadPhoto{
CGFloat w = self.myPhoto.size.width;
CGFloat h = self.myPhoto.size.height;
CGRect imageViewFrame = CGRectMake(0.0f, 0.0f, roundf(w / 2.0f), roundf(h / 2.0f));
self.scrollView.contentSize = imageViewFrame.size;
UIImageView *iv = [[UIImageView alloc] initWithFrame:imageViewFrame];
iv.contentMode = UIViewContentModeScaleAspectFit;
iv.image = self.myPhoto;
[self.view addSubview:iv];
self.imageView = iv;
[iv release];
}
CGRect crop;//= CGRectMake(10, 10, 360, 360);
crop.origin.x = self.cropFrame.frame.origin.x;
crop.origin.y = self.cropFrame.frame.origin.y;
crop.size.width = roundf(self.cropFrame.frame.size.width * 2.0f); //self.cropFrame.frame.size.width * 2;
crop.size.height = roundf(self.cropFrame.frame.size.height * 2.0f); //self.cropFrame.frame.size.height * 2;
NSLog(#"Rect: %#", NSStringFromCGRect(crop));
self.imageView.image = [self croppedImage:crop];
- (UIImage *)croppedImage:(CGRect)bounds {
CGImageRef imageRef = CGImageCreateWithImageInRect([self.imageView.image CGImage], bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:self.myPhoto.imageOrientation];
CGImageRelease(imageRef);
return croppedImage;
}
Please help to find a solution.
iOS has a built-in feature for cropping images. Try this code:
picker.allowsEditing = YES;
Also check this controller for cropping; I think it is exactly the one you are looking for: https://github.com/barrettj/BJImageCropper. Hope this helps you.
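If you go with allowsEditing, the cropped result can then be read back in the picker delegate. A minimal sketch (self.myPhoto is the property from the question):

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // With allowsEditing = YES, the square the user selected comes back under this key.
    UIImage *edited = info[UIImagePickerControllerEditedImage];
    self.myPhoto = edited ?: info[UIImagePickerControllerOriginalImage];
    [picker dismissViewControllerAnimated:YES completion:nil];
}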
Since you are using a scrollView that allows the image to be scrolled, you need to adjust your crop rect to the scrollView's position:
float zoomScale = self.scrollView.zoomScale;
int cropX = (self.scrollView.contentOffset.x-imageView.frame.origin.x)/zoomScale;
int cropY = (self.scrollView.contentOffset.y-imageView.frame.origin.y)/zoomScale;
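To turn that into an actual crop of the visible area, the rest might look like this (a sketch, not from the original answer; imageView is the image view inside the scroll view):

float zoomScale = self.scrollView.zoomScale;

// Visible area, converted from scroll-view coordinates into image (point) coordinates.
CGRect cropRect;
cropRect.origin.x = (self.scrollView.contentOffset.x - imageView.frame.origin.x) / zoomScale;
cropRect.origin.y = (self.scrollView.contentOffset.y - imageView.frame.origin.y) / zoomScale;
cropRect.size.width = self.scrollView.bounds.size.width / zoomScale;
cropRect.size.height = self.scrollView.bounds.size.height / zoomScale;

// CGImageCreateWithImageInRect works in pixels, so scale the rect for Retina images.
CGFloat imageScale = imageView.image.scale;
cropRect = CGRectMake(cropRect.origin.x * imageScale, cropRect.origin.y * imageScale,
                      cropRect.size.width * imageScale, cropRect.size.height * imageScale);

CGImageRef croppedRef = CGImageCreateWithImageInRect(imageView.image.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:imageScale
                                 orientation:imageView.image.imageOrientation];
CGImageRelease(croppedRef);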
You could use this crop tool that I made. It essentially gives you an interface to allow the user to select the crop area. I think it is in line with what you are looking for.
https://github.com/nicholjs/BFCropInterface
I believe you have solved this by now; I ran into the same thing when implementing cropping.
Set the imageView.size and the scrollView.contentSize to the image.size. The code below gives the rect to crop:
cropRect.origin = scrollView.contentOffset;
cropRect.size = scrollView.bounds.size;
cropRect.origin.x /= scrollView.zoomScale;
cropRect.origin.y /= scrollView.zoomScale;
cropRect.size.width /= scrollView.zoomScale;
cropRect.size.height /= scrollView.zoomScale;
If you plan to show the full image first in the visible rect, setting the imageView.size and scrollView.contentSize to the visible view's size will give a cropped image of some other area. Instead, find the zoom scale with
CGFloat dxWidth = viewCrop.frame.size.width / imageView.image.size.width;
CGFloat dxHeight = viewCrop.frame.size.height / imageView.image.size.height;
CGFloat zoomScale = fmaxf(dxWidth, dxHeight);
and apply it (if you add the image view as a subview, do this after addSubview:):
scrollView.minimumZoomScale = zoomScale; // to disable further zoom-out
[scrollView setZoomScale: zoomScale];
I have a UIImageView (red squares) that will display a UIImage that must be scaled (I can receive images larger or smaller than the UIImageView). After scaling, the part of the UIImage that is shown is its center.
What I need is to show the part of the image in the blue squares. How can I achieve that?
I'm only able to get the image size (height and width), but it reports the original size when it should be the scaled one.
self.viewIm = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 120, 80)];
self.viewIm.backgroundColor = [UIColor greenColor];
self.viewIm.layer.borderColor = [UIColor redColor].CGColor;
self.viewIm.layer.borderWidth = 5.0;
UIImage *im = [UIImage imageNamed:@"benjen"];
self.viewIm.image = im;
self.viewIm.contentMode = UIViewContentModeScaleAspectFill;
// self.viewim.clipsToBounds = YES;
[self.view addSubview:self.viewIm];
To do what you're trying to do, I'd recommend looking into CALayer's contentsRect property.
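For reference (my note, not part of the original answer): contentsRect is expressed in the unit coordinate space of the layer's contents and defaults to {0, 0, 1, 1}, so showing, say, only the left half of the image is just:

// x, y, width and height are fractions of the image, not points.
self.imageView.layer.contentsRect = CGRectMake(0.0, 0.0, 0.5, 1.0);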
Since seeing your answer, I've been trying to work out the proper solution for a while, but the mathematics escapes me because contentsRect:'s x and y parameters seem sort of mysterious... But here's some code that may point you in the right direction...
float imageAspect = self.imageView.image.size.width/self.imageView.image.size.height;
float imageViewAspect = self.imageView.frame.size.width/self.imageView.frame.size.height;
if (imageAspect > imageViewAspect) {
float scaledImageWidth = self.imageView.frame.size.height * imageAspect;
float offsetWidth = -((scaledImageWidth-self.imageView.frame.size.width)/2);
self.imageView.layer.contentsRect = CGRectMake(offsetWidth/self.imageView.frame.size.width, 0.0, 1.0, 1.0);
} else if (imageAspect < imageViewAspect) {
float scaledImageHeight = self.imageView.frame.size.width / imageAspect;
float offsetHeight = ((scaledImageHeight-self.imageView.frame.size.height)/2);
self.imageView.layer.contentsRect = CGRectMake(0.0, offsetHeight/self.imageView.frame.size.height, 1.0, 1.0);
}
Try something like this:
CGRect cropRect = CGRectMake(0,0,200,200);
CGImageRef imageRef = CGImageCreateWithImageInRect([ImageToCrop CGImage],cropRect);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
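One caveat (my addition, not part of the original answer): CGImageCreateWithImageInRect works in pixel coordinates, so for a Retina image a rect given in points usually has to be multiplied by the image's scale first, the same trick the cropImage:andFrame: method further down uses:

CGFloat s = ImageToCrop.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * s, cropRect.origin.y * s,
                              cropRect.size.width * s, cropRect.size.height * s);
CGImageRef scaledRef = CGImageCreateWithImageInRect(ImageToCrop.CGImage, pixelRect);
UIImage *retinaCrop = [UIImage imageWithCGImage:scaledRef scale:s orientation:ImageToCrop.imageOrientation];
CGImageRelease(scaledRef);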
I found a very good approximation in this answer. There, a category resizes the image and then crops around the center point. I adapted it to crop using (0,0) as the origin point, and since I don't really need a category, I use it as a single method.
- (UIImage *)imageByScalingAndCropping:(UIImage *)image forSize:(CGSize)targetSize {
UIImage *sourceImage = image;
UIImage *newImage = nil;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetSize.width;
CGFloat scaledHeight = targetSize.height;
if (CGSizeEqualToSize(image.size, targetSize) == NO) {
if ((targetSize.width / image.size.width) > (targetSize.height / image.size.height)) {
scaleFactor = targetSize.width / image.size.width; // scale to fit height
} else {
scaleFactor = targetSize.height / image.size.height; // scale to fit width
}
scaledWidth = image.size.width * scaleFactor;
scaledHeight = image.size.height * scaleFactor;
}
UIGraphicsBeginImageContext(targetSize); // this will crop
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = CGPointZero;
thumbnailRect.size.width = scaledWidth;
thumbnailRect.size.height = scaledHeight;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
if(newImage == nil) {
NSLog(#"could not scale image");
}
//pop the context to get back to the default
UIGraphicsEndImageContext();
return newImage;
}
And my call is something like this:
self.viewIm = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 120, 80)];
self.viewIm.image = [self imageByScalingAndCropping:[UIImage imageNamed:@"benjen"] forSize:CGSizeMake(120, 80)];
[self.view addSubview:self.viewIm];
I've spent some time on this and finally created a Swift 3.2 solution (based on one of my answers on another thread, as well as one of the answers above). This code only allows for Y translation of the image, but with some tweaks anyone should be able to add horizontal translation as well ;)
let yOffset: CGFloat = 20
myImageView.contentMode = .scaleAspectFill
//scale image to fit the imageView's width (maintaining aspect ratio), but allow control over the image's Y position
UIGraphicsBeginImageContextWithOptions(myImageView.frame.size, myImageView.isOpaque, 0.0)
let ratio = myImage.size.width / myImage.size.height
let newHeight = myImageView.frame.width / ratio
let rect = CGRect(x: 0, y: -yOffset, width: myImageView.frame.width, height: newHeight)
myImage.draw(in: rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext() ?? myImage
UIGraphicsEndImageContext()
//set the new image
myImageView.image = newImage
Now you can adjust how far down or up you need the image to be by changing the yOffset.
When an image is cropped from the center, the cropped image keeps the aspect ratio of the source image. According to my requirement, though, the aspect ratio should change with the new crop size.
I want to get the exact center part of the image with the new aspect ratio. For example, from a large image of size 320*480 I want to crop the center part at size (100, 100), so the result is exactly 100*100; no outer white or black area should appear, and the image quality should stay high.
Cropping function :
- (UIImage *)cropImage:(UIImage*)image andFrame:(CGRect)rect {
//Note: rect is the exact frame of the area you want to crop.
rect = CGRectMake(rect.origin.x*image.scale,
rect.origin.y*image.scale,
rect.size.width*image.scale,
rect.size.height*image.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *result = [UIImage imageWithCGImage:imageRef
scale:image.scale
orientation:image.imageOrientation];
CGImageRelease(imageRef);
return result;
}
Please help me.
This might work:
- (UIImage *)imageByCroppingImage:(UIImage *)image toSize:(CGSize)size
{
// not equivalent to image.size (which depends on the imageOrientation)!
double refWidth = CGImageGetWidth(image.CGImage);
double refHeight = CGImageGetHeight(image.CGImage);
double x = (refWidth - size.width) / 2.0;
double y = (refHeight - size.height) / 2.0;
CGRect cropRect = CGRectMake(x, y, size.width, size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(imageRef);
return cropped;
}
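A possible call site (my own sketch; largeImage stands in for your 320*480 source image):

UIImage *largeImage = [UIImage imageNamed:@"photo"]; // hypothetical source image
UIImage *centerCrop = [self imageByCroppingImage:largeImage toSize:CGSizeMake(100.0, 100.0)];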
var imageView = UIImageView()
// height and width values corresponds to rectangle height and width
imageView = UIImageView(frame: CGRectMake(0, 0, width, height ))
imageView.image = UIImage(named: "Your Image Name")
// Setting the content mode to .ScaleAspectFill makes the image fill the imageView from its center; the image might extend beyond the image view's frame.
imageView.contentMode = .ScaleAspectFill
// Setting clipsToBounds to true clips the image to the image view's frame.
imageView.clipsToBounds = true
view.addSubview(imageView)
// This crops the image to the region enclosed by self.bezierPath:
-(UIImage *)croppedImage
{
    [self.bezierPath closePath];

    _b_image = self.bg_imageview.image;
    CGSize imageSize = _b_image.size;
    CGRect imageRect = CGRectMake(0, 0, imageSize.width, imageSize.height);

    UIGraphicsBeginImageContextWithOptions(imageSize, NO, [[UIScreen mainScreen] scale]);
    // Clip to the path, then draw the image; everything outside the path is discarded.
    [self.bezierPath addClip];
    [_b_image drawInRect:imageRect];

    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Calculate crop rect from image
float imgHeight = 100.0f; //Any according to requirement
float imgWidth = 100.0f; //Any according to requirement
CGRect cropRect = CGRectMake((largeImage.size.width/2)-(imgWidth/2), (largeImage.size.height/2)-(imgHeight/2), imgWidth, imgHeight);
Now crop it
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
// or use the UIImage wherever you like
UIImage *croppedImg = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
How may I retrieve the image from imageView sized as it is displayed (given the content mode), and not as it is according to native properties?
Code:
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, WID, WID)];
imageView.center = CGPointMake(point.x, point.y + Y_OFFSET);
imageView.image = [UIImage imageNamed:@"img"];
imageView.contentMode = UIViewContentModeScaleAspectFit;
You have to draw the image again then save it.
// Image frame size
CGSize size = imageView.bounds.size;
// Grab a new CGContext
UIGraphicsBeginImageContextWithOptions(size, false, 0.0);
// Draw the image
[image drawInRect:CGRectMake(0, 0, size.width, size.height)];
// Grab the new image
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The above code draws the image stretched to fill the bounds. If you want any other content mode, you have to calculate the drawing rect yourself and use it in the "Draw the image" line of code.
For example, for aspect fit, check out this algorithm:
- (CGRect) aspectFittedRect:(CGSize)inSize max:(CGRect)maxRect {
float originalAspectRatio = inSize.width / inSize.height;
float maxAspectRatio = maxRect.size.width / maxRect.size.height;
CGRect newRect = maxRect;
if (originalAspectRatio > maxAspectRatio) { // scale by width
newRect.size.height = maxRect.size.width * inSize.height / inSize.width;
newRect.origin.y += (maxRect.size.height - newRect.size.height)/2.0;
} else {
newRect.size.width = maxRect.size.height * inSize.width / inSize.height;
newRect.origin.x += (maxRect.size.width - newRect.size.width)/2.0;
}
return CGRectIntegral(newRect);
}
Just pass in imageView.image.size as inSize and imageView.bounds as maxRect.
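Putting it together (a sketch, not from the original answer; imageView is the same image view as in the question):

CGSize size = imageView.bounds.size;
CGRect fittedRect = [self aspectFittedRect:imageView.image.size max:imageView.bounds];

UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Draw the image aspect-fitted (centered and letterboxed) instead of stretched.
[imageView.image drawInRect:fittedRect];
UIImage *aspectFitImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();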
Source:
http://iphonedevsdk.com/forum/iphone-sdk-development-advanced-discussion/15001-aspect-fit-algorithm.html
After going through this link, the issue with my code is that the output image does not respect the computed x and y values; the cropped image seems to come from (0, 0) regardless of where I zoom (or where the scroll offset is calculated). Here's what I tried.
- (IBAction)crop:(id)sender
{
float zoomScale = 1.0f / [self.scroll zoomScale];
CGRect rect;
NSLog(#"contentOffset is :%f,%f",[self.scroll contentOffset].x,[self.scroll contentOffset].y);
rect.origin.x = self.scroll.contentOffset.x * zoomScale;
rect.origin.y = self.scroll.contentOffset.y * zoomScale;
rect.size.width = self.scroll.bounds.size.width * zoomScale;
rect.size.height = self.scroll.bounds.size.height * zoomScale;
UIGraphicsBeginImageContextWithOptions( CGSizeMake(rect.size.width, rect.size.height),
NO,
0.);
NSLog(#"rect offset is :%f,%f",rect.origin.x,rect.origin.y);
CGPoint point = CGPointMake(-rect.origin.x, -rect.origin.y); **//even though above NSLog have some values, but output image is unable to set proper x and y values as cropped image seems to have 0 and 0 in the resultant image.**
[[self.imagV image] drawAtPoint:point
blendMode:kCGBlendModeCopy
alpha:1];
self.croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
DTImageViewController *imageViewController = [[DTImageViewController alloc] initWithNibName:@"DTImageViewController" bundle:nil];
imageViewController.image = self.croppedImage;
[self.navigationController pushViewController:imageViewController animated:YES];
}
Similar code to what has already been posted, but taking the whole UIScrollView bounds without passing a CGRect:
-(void)takeScreenShotOfScrollView
{
UIGraphicsBeginImageContextWithOptions(scrollView.bounds.size, YES, [UIScreen mainScreen].scale);
CGPoint offset = scrollView.contentOffset;
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y);
[scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
img = [SDImageHelper imageWithImage:img scaledToSize:CGSizeMake(769, 495)];
}
BONUS:
The cropping method
+(UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
//UIGraphicsBeginImageContext(newSize);
// In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
// Pass 1.0 to force exact pixel size.
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
I'm using the following code for this and it does exactly what is expected:
- (UIImage *) croppedImageOfView:(UIView *) view withFrame:(CGRect) rect
{
UIGraphicsBeginImageContextWithOptions(rect.size,NO,0.0);
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -rect.origin.x, -rect.origin.y);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *visibleScrollViewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return visibleScrollViewImage;
}
- (void) crop
{
CGRect neededRect = CGRectMake(50,50,100,100);
UIImage *image = [self croppedImageOfView:_scrollView withFrame:neededRect];
}
This works well even if the content is zoomed; the only thing that must be calculated carefully is the needed-area CGRect.
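If the needed area is simply whatever is currently visible, the rect can be built from the scroll view's state (my own sketch, consistent with the contentOffset translation used in the screenshot snippet above):

CGRect neededRect;
neededRect.origin = _scrollView.contentOffset;  // top-left corner of the visible area
neededRect.size = _scrollView.bounds.size;      // size of the visible area
UIImage *visibleImage = [self croppedImageOfView:_scrollView withFrame:neededRect];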
I'm trying to add a video player icon on top of a thumbnail of a video.
I get the image from the YouTube API, then crop it to be square, then resize it to be the proper size. I then add my player icon image on top of it.
The problem is that the player icon ends up much smaller than it should be on the thumbnail (it's 28x28pt, yet on screen it appears much smaller). See the image below, where I added it to the cell to show the size it should be, versus the size on the thumbnail:
I crop it to a square with this method:
/**
* Given a UIImage, return it with a square aspect ratio (via cropping, not smushing).
*/
- (UIImage *)createSquareVersionOfImage:(UIImage *)image {
CGFloat originalWidth = image.size.width;
CGFloat originalHeight = image.size.height;
float smallestDimension = fminf(originalWidth, originalHeight);
// Determine the offset needed to crop the center of the image out.
CGFloat xOffsetToBeCentered = (originalWidth - smallestDimension) / 2;
CGFloat yOffsetToBeCentered = (originalHeight - smallestDimension) / 2;
// Create the square, making sure the position and dimensions are set appropriately for retina displays.
CGRect square = CGRectMake(xOffsetToBeCentered * image.scale, yOffsetToBeCentered * image.scale, smallestDimension * image.scale, smallestDimension *image.scale);
CGImageRef squareImageRef = CGImageCreateWithImageInRect([image CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:squareImageRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(squareImageRef);
return squareImage;
}
Resize it with this method:
/**
* Resize the given UIImage to a new size and return the newly resized image.
*/
- (UIImage *)resizeImage:(UIImage *)image toSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
And add it on top of the other image with this method:
/**
* Adds a UIImage on top of another UIImage and returns the result. The top image is centered.
*/
- (UIImage *)addImage:(UIImage *)additionalImage toImage:(UIImage *)backgroundImage {
UIGraphicsBeginImageContext(backgroundImage.size);
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
[additionalImage drawInRect:CGRectMake((backgroundImage.size.width - additionalImage.size.width) / 2, (backgroundImage.size.height - additionalImage.size.height) / 2, additionalImage.size.width, additionalImage.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
And this is how it is implemented:
UIImage *squareThumbnail = [self resizeImage:[self createSquareVersionOfImage:responseObject] toSize:CGSizeMake(110.0, 110.0)];
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
UIImage *squareThumbnailWithPlayerIcon = [self addImage:playerIcon toImage:squareThumbnail];
But in the end, the icon is always too small. Sizing confuses me when working with images, as I'm used to Retina-related scaling being handled automatically. For example, in the code above I'm not sure why I set the size to 110.0 x 110.0 when it's a 55x55 UIImageView that I thought would scale automatically (but if I pass 55 the image is stretched terribly).
The reason you have to pass 110 to your resizeImage call is that you are creating a Core Graphics context with a scale of 1.0 in addImage:toImage:. The graphics contexts for views in a view hierarchy on Retina displays have a scale of 2.0 (provided you did nothing else to change the scaling).
I believe the new UIImage you create there is a "normal" 1x image, not an @2x image, so the size it reports will not be scaled for @2x.
Note this answer:
UIGraphicsGetImageFromCurrentImageContext retina resolutions?
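For comparison, a minimal tweak (my own sketch, not from the linked answer) is to keep your compositing method but give its context the screen scale, and to resize the thumbnail to its on-screen point size (55x55) instead of 110x110:

- (UIImage *)addImage:(UIImage *)additionalImage toImage:(UIImage *)backgroundImage {
    // 0.0 = use the device's screen scale, so the composite keeps its @2x pixel density
    // while backgroundImage.size stays in points (e.g. 55x55).
    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0.0);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [additionalImage drawInRect:CGRectMake((backgroundImage.size.width - additionalImage.size.width) / 2,
                                           (backgroundImage.size.height - additionalImage.size.height) / 2,
                                           additionalImage.size.width,
                                           additionalImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}

With that change, the 28pt icon keeps its intended size on the 55pt thumbnail.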
I haven't tested the fuller rewrite below, but it should work. If it doesn't, it should at least be more straightforward to debug.
//images should be passed in with their original scales
-(UIImage*)compositedImageWithSize:(CGSize)newSize bg:(UIImage*)backgroundImage fgImage:(UIImage*)foregroundImage{
//match the scale of screen.
CGFloat scale = [[UIScreen mainScreen] scale];
UIGraphicsBeginImageContextWithOptions(newSize, NO, scale);
//instead of resizing the image ahead of time, we just draw it into the context at the appropriate size. The context will clip the image.
CGRect aspectFillRect = CGRectZero;
if(newSize.width/newSize.height > backgroundImage.size.width/backgroundImage.size.height){
// target is wider than the image: fill the width and let the height overflow (it gets clipped)
aspectFillRect.x = 0;
aspectFillRect.width = newSize.width;
CGFloat scaledHeight = (newSize.width / backgroundImage.size.width) * backgroundImage.size.height;
aspectFillRect.y = (newSize.height - scaledHeight)/2.0;
aspectFillRect.height = scaledHeight;
}else{
// target is taller than the image: fill the height and let the width overflow (it gets clipped)
aspectFillRect.y = 0;
aspectFillRect.height = newSize.height;
CGFloat scaledWidth = (newSize.height / backgroundImage.size.height) * backgroundImage.size.width;
aspectFillRect.x = (newSize.width - scaledWidth)/2.0;
aspectFillRect.width = scaledWidth;
}
[backgroundImage drawInRect:aspectFillRect];
//pass in the 2x image for the fg image so it provides a better resolution
[foregroundImage drawInRect:CGRectMake((newSize.width - foregroundImage.size.width) / 2, (newSize.height - foregroundImage.size.height) / 2, foregroundImage.size.width, foregroundImage.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
You would skip all those methods you were calling before and do:
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
//pass in the point (non-retina) size of the composite
UIImage *result = [self compositedImageWithSize:CGSizeMake(55.0, 55.0)
bg:responseObject
fgImage:playerIcon];
Hope this helps!