Select card rectangle from captured image in iOS

I am working on an iOS application in which I want to select a card-shaped rectangle from an image captured with the camera. If anybody knows a solution, please let me know. Thank you.

My code is below:
/**
 * Cut out a new image from the given image using the given rect.
 *
 * @param image UIImage the original image
 * @param rect  CGRect  the area you want to keep
 *
 * @return UIImage the cropped image
 */
+ (UIImage *)ct_imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    // Convert the rect from points to pixels.
    CGFloat scale = [UIScreen mainScreen].scale;
    CGFloat x = rect.origin.x * scale;
    CGFloat y = rect.origin.y * scale;
    CGFloat w = rect.size.width * scale;
    CGFloat h = rect.size.height * scale;
    CGRect pixelRect = CGRectMake(x, y, w, h);

    // Crop the image to the pixel rect.
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, pixelRect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef scale:scale orientation:UIImageOrientationUp];
    CGImageRelease(newImageRef); // CGImageCreateWithImageInRect returns a +1 reference
    return newImage;
}
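The snippet above only crops; it does not find the card. For the detection step, one option (my sketch, not part of the original post) is Core Image's rectangle detector: let it find the most prominent rectangle, then crop to its bounds. capturedImage stands for whatever UIImage the camera delivers, and the photo's orientation is assumed to already be "up":

#import <CoreImage/CoreImage.h>

- (UIImage *)cardImageFromCapturedImage:(UIImage *)capturedImage {
    CIImage *ciImage = [CIImage imageWithCGImage:capturedImage.CGImage];

    // CIDetectorTypeRectangle (iOS 8+) looks for the most prominent rectangle in the image.
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeRectangle
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIRectangleFeature *card = (CIRectangleFeature *)[[detector featuresInImage:ciImage] firstObject];
    if (card == nil) {
        return nil; // no card-like rectangle found
    }

    // Core Image uses a bottom-left origin, CGImage cropping a top-left one, so flip the Y axis.
    CGRect bounds = card.bounds;
    CGRect cropRect = CGRectMake(bounds.origin.x,
                                 ciImage.extent.size.height - CGRectGetMaxY(bounds),
                                 bounds.size.width,
                                 bounds.size.height);

    CGImageRef croppedRef = CGImageCreateWithImageInRect(capturedImage.CGImage, cropRect);
    UIImage *cardImage = [UIImage imageWithCGImage:croppedRef
                                             scale:capturedImage.scale
                                       orientation:capturedImage.imageOrientation];
    CGImageRelease(croppedRef);
    return cardImage;
}

This crops to the rectangle's bounding box; if you also need to correct perspective, the feature's topLeft/topRight/bottomLeft/bottomRight corners can be fed into the CIPerspectiveCorrection filter. On iOS 11 and later, Vision's VNDetectRectanglesRequest is the more modern alternative.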

Related

Convert UIScrollViewPoint to UIImage + ZOOM + ROTATE

I need to crop a UIImage that is loaded in a UIScrollView, using the rect of another UIView which is also in the scroll view.
The view hierarchy is:
--> View
--> UIScrollView
--> viewBase (UIView)
--> UIImageView (zoomed and rotated)
--> UIView (target view, movable; the user can drag it anywhere in the scroll view to choose the crop rect)
My image is rotated and zoomed, and I need to get exactly the part of the image that lies under the target view.
I am drawing the UIImage into a context with the rotation applied; the code is below:
CGFloat angleCroppedImageRetreacted = atan2f(self.imgVPhoto.transform.b, self.imgVPhoto.transform.a);
angleCroppedImageRetreacted = angleCroppedImageRetreacted * (180 / M_PI);

UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, self.imgVPhoto.image.size.width, self.imgVPhoto.image.size.height)];
rotatedViewBox.transform = CGAffineTransformMakeRotation(-angleCroppedImageRetreacted);
CGSize rotatedSize = rotatedViewBox.frame.size;

UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, rotatedSize.width / 2.0f, rotatedSize.height / 2.0f);
CGContextRotateCTM(bitmap, -angleCroppedImageRetreacted);
CGContextScaleCTM(bitmap, 1.0f, -1.0f);

CGContextDrawImage(bitmap,
                   CGRectMake(-self.imgVPhoto.image.size.width / 2.0f,
                              -self.imgVPhoto.image.size.height / 2.0f,
                              self.imgVPhoto.image.size.width,
                              self.imgVPhoto.image.size.height),
                   self.imgVPhoto.image.CGImage);

UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
And it works fine. I get the rotated UIImage exactly as I see it in the simulator.
To convert the target view's point into the UIImage I use the following code, which is NOT working:
CGPoint imageViewPoint = [self.viewBase convertPoint:self.targetImageview.center toView:self.imgVPhoto];
float percentX = imageViewPoint.x / self.imgVPhoto.frame.size.width;
float percentY = imageViewPoint.y / self.imgVPhoto.frame.size.height;
CGPoint imagePoint = CGPointMake(resultImage.size.width * percentX, resultImage.size.height * percentY);
rect.origin = imagePoint;
//rect.origin.x *= (self.imgVPhoto.image.size.width / self.imgVPhoto.frame.size.width);
//rect.origin.y *= (self.imgVPhoto.image.size.height / self.imgVPhoto.frame.size.height);
imageRef = CGImageCreateWithImageInRect([resultImage CGImage], rect);
img = [UIImage imageWithCGImage:imageRef scale:viewImage.scale orientation:viewImage.imageOrientation];
I think the issue is that we can't use the rect after the transform has been applied.
Please help me crop a UIImage that is zoomed and rotated, using a rect from the same hierarchy.
If you need more info, please ask.
I am answering my own question.
Thanks to Matic for giving me the idea. I changed the logic and achieved the functionality I was looking for:
// Convert the touch point (the target view's center) into the image view's coordinate space.
CGPoint locationInImageView = [self.viewBase convertPoint:self.targetImageview.center toView:self.view];
locationInImageView = [self.view convertPoint:locationInImageView toView:self.imgVPhoto];

// Render the zoomed and rotated image view's layer into a UIImage.
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0);
[self.imgVPhoto.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Build the crop rect around the touch point; 2 is the screen scale factor.
CGFloat width = self.targetImageview.frame.size.width * self.zoomScale;
CGFloat height = self.targetImageview.frame.size.height * self.zoomScale;
CGFloat xPos = (locationInImageView.x * 2) - width / 2;
CGFloat yPos = (locationInImageView.y * 2) - height / 2;
CGRect rect1 = CGRectMake(xPos, yPos, width, height);

// Crop: this is exactly the image under the target view.
CGImageRef imageRef = CGImageCreateWithImageInRect([img1 CGImage], rect1);
UIImage *img = [UIImage imageWithCGImage:imageRef scale:img1.scale orientation:img1.imageOrientation];
CGImageRelease(imageRef);
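One caveat from me rather than from the original answer: the literal 2 only matches devices whose screen scale is 2.0. Since the image context above was created with scale 0 (the main screen's scale), a slightly more portable version of the two offset lines would read the scale from the screen:

CGFloat screenScale = [UIScreen mainScreen].scale;
CGFloat xPos = (locationInImageView.x * screenScale) - width / 2;
CGFloat yPos = (locationInImageView.y * screenScale) - height / 2;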

How to crop center part of large image in iOS?

When an image is cropped from the center, the cropped image normally keeps the aspect ratio of the source image. According to my requirement, though, the aspect ratio should change to the new crop size.
I want to get the exact center part of the image with the new aspect ratio. For example, if a large image is 320x480 and I crop its center with size (100, 100), the result should really be 100x100: no white or black border, and the image quality should stay high.
Cropping function:
- (UIImage *)cropImage:(UIImage *)image andFrame:(CGRect)rect {
    // Note: rect is the frame (in points) of the area you want to crop.
    rect = CGRectMake(rect.origin.x * image.scale,
                      rect.origin.y * image.scale,
                      rect.size.width * image.scale,
                      rect.size.height * image.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
Please help me.
This should work:
- (UIImage *)imageByCroppingImage:(UIImage *)image toSize:(CGSize)size
{
    // Use the CGImage dimensions; image.size depends on the imageOrientation.
    double refWidth = CGImageGetWidth(image.CGImage);
    double refHeight = CGImageGetHeight(image.CGImage);

    double x = (refWidth - size.width) / 2.0;
    double y = (refHeight - size.height) / 2.0;

    CGRect cropRect = CGRectMake(x, y, size.width, size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);

    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}
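A quick usage sketch for the 320x480 example from the question (the variable names are mine, not from the answer); note that the size is measured in pixels of the underlying CGImage, so for an @2x source you would pass the pixel size you want:

UIImage *largeImage = [UIImage imageNamed:@"largePhoto"]; // hypothetical 320x480 source
UIImage *centerCrop = [self imageByCroppingImage:largeImage toSize:CGSizeMake(100.0, 100.0)];
// centerCrop is now exactly the 100x100 center of the source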
var imageView = UIImageView()
// width and height correspond to the rectangle's width and height
imageView = UIImageView(frame: CGRectMake(0, 0, width, height))
imageView.image = UIImage(named: "Your Image Name")
// with contentMode set to .ScaleAspectFill the image fills the imageView from its center;
// the image may extend beyond the view's frame.
imageView.contentMode = .ScaleAspectFill
// setting clipsToBounds to true clips the image to the view's frame.
imageView.clipsToBounds = true
view.addSubview(imageView)
// This code works: crop the image to a bezier path.
- (UIImage *)croppedImage
{
    [self.bezierPath closePath];

    _b_image = self.bg_imageview.image;
    CGSize imageSize = _b_image.size;
    CGRect imageRect = CGRectMake(0, 0, imageSize.width, imageSize.height);

    UIGraphicsBeginImageContextWithOptions(imageSize, NO, [[UIScreen mainScreen] scale]);
    // Clip the context to the path, then draw; everything outside the path stays transparent.
    [self.bezierPath addClip];
    [_b_image drawInRect:imageRect];

    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Calculate the crop rect from the image:
float imgHeight = 100.0f; // any value, according to your requirement
float imgWidth = 100.0f;  // any value, according to your requirement
CGRect cropRect = CGRectMake((largeImage.size.width / 2) - (imgWidth / 2),
                             (largeImage.size.height / 2) - (imgHeight / 2),
                             imgWidth,
                             imgHeight);
Now crop it:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
// use the UIImage wherever you like
UIImage *croppedImg = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
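One caveat from me, not from the original answer: largeImage.size is in points, while CGImageCreateWithImageInRect works in pixels of the underlying CGImage. If largeImage has scale 2.0, multiply the rect by the image scale first, as the cropImage:andFrame: method in the question already does:

cropRect = CGRectMake(cropRect.origin.x * largeImage.scale,
                      cropRect.origin.y * largeImage.scale,
                      cropRect.size.width * largeImage.scale,
                      cropRect.size.height * largeImage.scale);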

Cropping UIImage from pinch/Zoom state

I have been looking around for a way to crop an image (imageView) inside a scroll view. What I need to achieve is this: the user pinches to zoom an image (which is working great) and can then crop ONLY the part of the image that is shown on screen after the zoom. In other words, if I have a series of numbers 1 to 10 and the user zooms in so that only 4 and 5 are visible, then when the user taps crop I only need the image of 4 and 5 that was visible at the time of the crop. I'm using this code...
- (UIImage *)croppedImageWithImage:(UIImage *)image zoom:(CGFloat)zoom
{
    CGFloat zoomReciprocal = 1.0f / zoom;
    CGRect croppedRect = CGRectMake(self.scrollView.contentOffset.x * zoom,
                                    self.scrollView.contentOffset.y * zoom,
                                    image.size.width * zoomReciprocal,
                                    image.size.height * zoomReciprocal);
    CGImageRef croppedImageRef = CGImageCreateWithImageInRect([image CGImage], croppedRect);
    UIImage *croppedImage = [[UIImage alloc] initWithCGImage:croppedImageRef scale:[image scale] orientation:[image imageOrientation]];
    CGImageRelease(croppedImageRef);
    return croppedImage;
}
This method crops the image; however, the x and y values are incorrect. What do I have to do to get the correct x and y of my crop area?
FYI, the image I'm trying to crop is 2200 x 3200; not sure if that makes any difference.
I solved it! Here is the updated method for others who might come across this issue:
- (UIImage *)croppedImageWithImage:(UIImage *)image zoom:(CGFloat)zoom {
    CGFloat zoomReciprocal = 1.0f / zoom;
    CGFloat xOffset = image.size.width / self.scrollViewBackground.contentSize.width;
    CGFloat yOffset = image.size.height / self.scrollViewBackground.contentSize.height;
    CGRect croppedRect = CGRectMake(self.scrollViewBackground.contentOffset.x * xOffset,
                                    self.scrollViewBackground.contentOffset.y * yOffset,
                                    image.size.width * zoomReciprocal,
                                    image.size.height * zoomReciprocal);
    CGImageRef croppedImageRef = CGImageCreateWithImageInRect([image CGImage], croppedRect);
    UIImage *croppedImage = [[UIImage alloc] initWithCGImage:croppedImageRef scale:[image scale] orientation:[image imageOrientation]];
    CGImageRelease(croppedImageRef);
    return croppedImage;
}
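For completeness, a call site might look roughly like this (my sketch; photoImageView is a hypothetical image view sitting inside scrollViewBackground):

UIImage *visiblePart = [self croppedImageWithImage:photoImageView.image
                                              zoom:self.scrollViewBackground.zoomScale];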

Why, when I add a UIImage on top of another UIImage to make a new image, does the added image shrink?

I'm trying to add a video player icon on top of a thumbnail of a video.
I get the image from the YouTube API, then crop it to be square, then resize it to be the proper size. I then add my player icon image on top of it.
The problem is that the player icon ends up much smaller than it should be on the thumbnail (it's 28x28 pt, but on screen it appears much smaller). In a screenshot (not reproduced here) I added the icon to the cell to show the size it should be, versus the size it gets on the thumbnail.
I crop it to a square with this method:
/**
 * Given a UIImage, return it with a square aspect ratio (via cropping, not smushing).
 */
- (UIImage *)createSquareVersionOfImage:(UIImage *)image {
    CGFloat originalWidth = image.size.width;
    CGFloat originalHeight = image.size.height;
    float smallestDimension = fminf(originalWidth, originalHeight);

    // Determine the offset needed to crop the center of the image out.
    CGFloat xOffsetToBeCentered = (originalWidth - smallestDimension) / 2;
    CGFloat yOffsetToBeCentered = (originalHeight - smallestDimension) / 2;

    // Create the square, making sure the position and dimensions are set appropriately for retina displays.
    CGRect square = CGRectMake(xOffsetToBeCentered * image.scale,
                               yOffsetToBeCentered * image.scale,
                               smallestDimension * image.scale,
                               smallestDimension * image.scale);

    CGImageRef squareImageRef = CGImageCreateWithImageInRect([image CGImage], square);
    UIImage *squareImage = [UIImage imageWithCGImage:squareImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(squareImageRef);
    return squareImage;
}
Resize it with this method:
/**
 * Resize the given UIImage to a new size and return the newly resized image.
 */
- (UIImage *)resizeImage:(UIImage *)image toSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
And add it on top of the other image with this method:
/**
 * Adds a UIImage on top of another UIImage and returns the result. The top image is centered.
 */
- (UIImage *)addImage:(UIImage *)additionalImage toImage:(UIImage *)backgroundImage {
    UIGraphicsBeginImageContext(backgroundImage.size);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [additionalImage drawInRect:CGRectMake((backgroundImage.size.width - additionalImage.size.width) / 2,
                                           (backgroundImage.size.height - additionalImage.size.height) / 2,
                                           additionalImage.size.width,
                                           additionalImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
And this is how it is implemented:
UIImage *squareThumbnail = [self resizeImage:[self createSquareVersionOfImage:responseObject] toSize:CGSizeMake(110.0, 110.0)];
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
UIImage *squareThumbnailWithPlayerIcon = [self addImage:playerIcon toImage:squareThumbnail];
But in the end, the icon is always too small. Sizing confuses me when working with images, as I'm used to the framework figuring out retina-related scaling automatically. For example, in the code above I'm not sure why I have to pass 110.0 x 110.0 when the UIImageView is 55x55; I thought it would scale automatically (but if I pass 55, the result is stretched terribly).
The reason you have to pass 110 in your resizeImage call is that you are creating a Core Graphics context with a scale of 1.0, whereas the graphics contexts for views in a view hierarchy on retina displays have a scale of 2.0 (provided you did nothing else to change the scale).
I believe the new UIImage you create is then a "normal" 1x image, not an @2x image, so the size it reports is not scaled for @2x.
Note this answer:
UIGraphicsGetImageFromCurrentImageContext retina resolutions?
I haven't tested the code below, but it should work. If it doesn't, it should at least be more straightforward to debug.
//images should be passed in with their original scales
- (UIImage *)compositedImageWithSize:(CGSize)newSize bg:(UIImage *)backgroundImage fgImage:(UIImage *)foregroundImage {
    //match the scale of the screen
    CGFloat scale = [[UIScreen mainScreen] scale];
    UIGraphicsBeginImageContextWithOptions(newSize, NO, scale);

    //instead of resizing the background ahead of time, draw it into the context at its
    //aspect-fill size; the context clips whatever falls outside newSize
    CGRect aspectFillRect = CGRectZero;
    CGFloat targetAspect = newSize.width / newSize.height;
    CGFloat imageAspect = backgroundImage.size.width / backgroundImage.size.height;
    if (imageAspect > targetAspect) {
        //image is proportionally wider than the target: match heights, overflow horizontally
        aspectFillRect.size.height = newSize.height;
        aspectFillRect.size.width = newSize.height * imageAspect;
        aspectFillRect.origin.x = (newSize.width - aspectFillRect.size.width) / 2.0;
    } else {
        //image is proportionally taller than the target: match widths, overflow vertically
        aspectFillRect.size.width = newSize.width;
        aspectFillRect.size.height = newSize.width / imageAspect;
        aspectFillRect.origin.y = (newSize.height - aspectFillRect.size.height) / 2.0;
    }
    [backgroundImage drawInRect:aspectFillRect];

    //the context already has the screen scale, so drawing the icon at its point size keeps its resolution
    [foregroundImage drawInRect:CGRectMake((newSize.width - foregroundImage.size.width) / 2,
                                           (newSize.height - foregroundImage.size.height) / 2,
                                           foregroundImage.size.width,
                                           foregroundImage.size.height)];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
You would skip all those methods you were calling before and just do:
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
//pass the size in points; the context's scale takes care of retina
UIImage *result = [self compositedImageWithSize:CGSizeMake(55.0, 55.0)
                                             bg:responseObject
                                        fgImage:playerIcon];
Hope this helps!

Zooming UIImage/CGImage

I am implementing a zooming feature in a camera app using AVFoundation. I am scaling my preview view like this:
[videoPreviewView setTransform:CGAffineTransformMakeScale(cameraZoom, cameraZoom)];
Now, after I take a picture, I would like to zoom/crop the picture with the cameraZoom value before I save it to the camera roll. How best should I do this?
Edit: Using Justin's answer:
CGRect imageRect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], imageRect);
CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                   CGImageGetWidth(imageRef),
                                                   CGImageGetHeight(imageRef),
                                                   CGImageGetBitsPerComponent(imageRef),
                                                   CGImageGetBytesPerRow(imageRef),
                                                   CGImageGetColorSpace(imageRef),
                                                   CGImageGetBitmapInfo(imageRef));
CGContextScaleCTM(bitmapContext, scale, scale);
CGContextDrawImage(bitmapContext, imageRect, imageRef);
CGImageRef zoomedCGImage = CGBitmapContextCreateImage(bitmapContext);
UIImage *zoomedImage = [[UIImage alloc] initWithCGImage:zoomedCGImage];
It zooms the image, but it is not taking the center of it; it seems to be taking the top-right area instead (I'm not positive).
The other problem (I should have been clearer in the OP) is that the image keeps the same resolution; I would rather just crop it down.
+ (UIImage *)croppedImageWithImage:(UIImage *)image zoom:(CGFloat)zoom
{
    CGFloat zoomReciprocal = 1.0f / zoom;
    CGPoint offset = CGPointMake(image.size.width * ((1.0f - zoomReciprocal) / 2.0f),
                                 image.size.height * ((1.0f - zoomReciprocal) / 2.0f));
    CGRect croppedRect = CGRectMake(offset.x, offset.y,
                                    image.size.width * zoomReciprocal,
                                    image.size.height * zoomReciprocal);
    CGImageRef croppedImageRef = CGImageCreateWithImageInRect([image CGImage], croppedRect);
    UIImage *croppedImage = [[UIImage alloc] initWithCGImage:croppedImageRef scale:[image scale] orientation:[image imageOrientation]];
    CGImageRelease(croppedImageRef);
    return croppedImage;
}
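Since the original question mentions saving to the camera roll afterwards, the call site might look roughly like this (a sketch; MyCameraController and capturedImage are hypothetical names, cameraZoom is the value from the question):

UIImage *zoomedPhoto = [MyCameraController croppedImageWithImage:capturedImage zoom:cameraZoom];
UIImageWriteToSavedPhotosAlbum(zoomedPhoto, nil, nil, NULL);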
to scale/zoom (a sketch of these steps follows after the lists):
create a CGBitmapContext
alter the context's transform (CGContextScaleCTM)
draw the image (CGContextDrawImage); the rect you pass can be used to offset the origin and/or dimensions
generate a new CGImage from the context (CGBitmapContextCreateImage)
to crop:
create a CGBitmapContext, passing NULL so the context creates its own buffer for the bitmap
draw the image as-is (CGContextDrawImage)
create a second CGBitmapContext for the crop (with the new dimensions), using an offset into the first context's pixel data as its buffer
generate a new CGImage from the second context (CGBitmapContextCreateImage)
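A minimal sketch of the scale/zoom steps above, under my own assumptions: image is the captured UIImage and zoom is the same cameraZoom value used for the preview. It scales about the image's center, which also addresses the "not taking the center" symptom mentioned in the edit:

CGImageRef sourceRef = image.CGImage;
size_t width = CGImageGetWidth(sourceRef);
size_t height = CGImageGetHeight(sourceRef);

// Step 1: create a bitmap context the same size as the source image.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Step 2: scale the CTM about the center so the zoom keeps the middle of the picture.
CGContextTranslateCTM(context, width / 2.0, height / 2.0);
CGContextScaleCTM(context, zoom, zoom);
CGContextTranslateCTM(context, -(double)width / 2.0, -(double)height / 2.0);

// Step 3: draw the image; anything that falls outside the context is clipped away.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), sourceRef);

// Step 4: pull a new CGImage out of the context and wrap it in a UIImage.
CGImageRef zoomedRef = CGBitmapContextCreateImage(context);
UIImage *zoomedImage = [UIImage imageWithCGImage:zoomedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
CGImageRelease(zoomedRef);
CGContextRelease(context);

The output keeps the original pixel dimensions; to actually shrink the result, as the edit asks, create the context at width/zoom by height/zoom and offset the draw rect so the center region lands inside it, as described in the crop steps.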
