I need to rotate my UIImageView, or the UIImage inside it, while moving its frame with an NSTimer. This is the NSTimer movement:
timer = [NSTimer scheduledTimerWithTimeInterval:time target:self selector:@selector(animate) userInfo:nil repeats:YES];
This is the relevant part of the animate method:
CGRect viewLocation = [[[self layer] presentationLayer] frame];
self.frame = CGRectMake(viewLocation.origin.x, viewLocation.origin.y + 0.2, viewLocation.size.width, viewLocation.size.height);
And this is the method I'm trying to use to rotate the UIImage:
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,self.size.width, self.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
[rotatedViewBox release];
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context
CGContextRotateCTM(bitmap, DegreesToRadians(degrees));
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
This code is inside a subclass of UIImageView, so self refers to the UIImageView.
I also tried to rotate the UIImageView with CGAffineTransform, but the UIImageView gets stretched and rotates in a strange manner.
This is the method:
- (void)rotateWithDegrees:(CGFloat)degrees {
CGAffineTransform transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(degrees));
self.transform = transform;
}
And this is the macro:
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
The important constraint is that the timer must still be used to move the object, so that part cannot change.
I found the solution! First of all, we rule out the approach that resizes and redraws the UIImage itself. Instead, we rotate the UIImageView; but when CGAffineTransform is used, the frame must NOT be modified. So, to move the UIImageView with the timer, we have to modify the center of the UIImageView instead. This is the code to delete:
CGRect viewLocation = [[[self layer] presentationLayer] frame];
self.frame = CGRectMake(viewLocation.origin.x, viewLocation.origin.y + 0.2, viewLocation.size.width, viewLocation.size.height);
And this is the code to add:
self.center = CGPointMake(self.frame.origin.x + (self.frame.size.width / 2), self.frame.origin.y + 0.2 + (self.frame.size.height / 2));
Then, we can use the rotateWithDegrees: method to rotate the UIImageView:
- (void)rotateWithDegrees:(CGFloat)degrees {
CGAffineTransform transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(degrees));
self.transform = transform;
_degrees+=0.5;
if (_degrees == 360) {
_degrees = 1;
}
}
So, we can add the line:
[self rotateWithDegrees:_degrees];
directly below:
self.center = CGPointMake(self.frame.origin.x + (self.frame.size.width / 2), self.frame.origin.y + 0.2 + (self.frame.size.height / 2));
The UIImageView now moves because its center changes rather than its frame, and this allows CGAffineTransform to be used for the rotation.
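For reference, here is a minimal Swift sketch of the whole idea in one place (the class and property names are mine, purely illustrative): the timer callback moves the view by changing its center and applies the rotation with a transform.

import UIKit

class FallingImageView: UIImageView {
    private var degrees: CGFloat = 0
    private var moveTimer: Timer?

    func startFalling(interval: TimeInterval) {
        moveTimer = Timer.scheduledTimer(timeInterval: interval, target: self,
                                         selector: #selector(step), userInfo: nil, repeats: true)
    }

    @objc private func step() {
        // Move via center, not frame: with a non-identity transform the frame is unreliable.
        center = CGPoint(x: center.x, y: center.y + 0.2)
        // Rotate a little further on each tick.
        degrees += 0.5
        if degrees >= 360 { degrees -= 360 }
        transform = CGAffineTransform(rotationAngle: degrees * .pi / 180)
    }
}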
Related
I am developing an application in which I process an image using its pixels, but that processing takes a lot of time. Therefore I want to crop the UIImage (keep only the middle part of the image, i.e. remove/crop the bordered part of the image). The code I have so far is:
- (NSInteger) processImage1: (UIImage*) image
{
CGFloat width = image.size.width;
CGFloat height = image.size.height;
struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
if (pixels != nil)
{
// Create a new bitmap
CGContextRef context = CGBitmapContextCreate(
(void*) pixels,
image.size.width,
image.size.height,
8,
image.size.width * 4,
CGImageGetColorSpace(image.CGImage),
kCGImageAlphaPremultipliedLast
);
if (context != NULL)
{
// Draw the image in the bitmap
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
NSUInteger numberOfPixels = image.size.width * image.size.height;
NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixels] autorelease];
}
How can I crop away the border and take only the middle part of the UIImage?
Try something like this:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Note: cropRect is a smaller rectangle containing the middle part of the image...
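If "the middle part" simply means trimming an equal border on all sides, the crop rect can be derived from the image size. Here is a hedged Swift sketch (the 25% inset is only an example; note that CGImage works in pixels, so the rect is multiplied by the image's scale):

import UIKit

// Illustrative only: crop away a border of 25% per side, keeping the central 50%.
func middlePart(of image: UIImage) -> UIImage? {
    let w = image.size.width * image.scale
    let h = image.size.height * image.scale
    let cropRect = CGRect(x: w * 0.25, y: h * 0.25, width: w * 0.5, height: h * 0.5)
    guard let cropped = image.cgImage?.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}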
I was looking for a way to get an arbitrary rectangular crop (i.e., a sub-image) of a UIImage.
Most of the solutions I tried do not work if the orientation of the image is anything but UIImageOrientationUp.
For example:
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Typically, if you use your iPhone camera, you will have other orientations like UIImageOrientationLeft, and you will not get a correct crop with the above. This is because CGImageRef/CGContextDrawImage use a different coordinate system than UIImage.
The code below uses UI* methods (no CGImageRef), and I have tested this with up/down/left/right oriented images, and it seems to work great.
// get sub image
- (UIImage*) getSubImageFrom: (UIImage*) img WithRect: (CGRect) rect {
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// translated rectangle for drawing sub image
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);
// clip to the bounds of the image context
// not strictly necessary as it will get clipped anyway?
CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
// draw image
[img drawInRect:drawRect];
// grab image
UIImage* subImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return subImage;
}
Because I needed it just now, here is M-V's code in Swift 4:
func imageWithImage(image: UIImage, croppedTo rect: CGRect) -> UIImage {
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
let drawRect = CGRect(x: -rect.origin.x, y: -rect.origin.y,
width: image.size.width, height: image.size.height)
context?.clip(to: CGRect(x: 0, y: 0,
width: rect.size.width, height: rect.size.height))
image.draw(in: drawRect)
let subImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return subImage!
}
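Usage could then look like this (photo and the rect values are only illustrative):

let avatar = imageWithImage(image: photo, croppedTo: CGRect(x: 0, y: 0, width: 100, height: 100))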
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible. It would certainly eliminate a lot of effort!
Meanwhile, I created these useful functions in a utility class that I use in my apps. It creates a UIImage from part of another UIImage, with options to rotate, scale, and flip, specified using the standard UIImageOrientation values. The pixel scaling is preserved from the original image.
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I create them on a separate thread spawned at startup, then just wait for it to finish when that tab is selected.
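As a rough Swift sketch of that deferred-loading idea (the class and names below are hypothetical, not the app's actual code): build the expensive images on a background queue at startup and publish them on the main queue when the tab needs them.

import UIKit

final class ImagePreloader {
    private(set) var tabImages: [UIImage] = []
    private let queue = DispatchQueue(label: "image-prep", qos: .utility)

    func preload(makeImages: @escaping () -> [UIImage], whenReady: @escaping () -> Void) {
        queue.async {
            let images = makeImages()      // expensive UIImage creation off the main thread
            DispatchQueue.main.async {
                self.tabImages = images    // publish on the main thread
                whenReady()
            }
        }
    }
}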
This code is also posted at Most efficient way to draw part of an image in iOS
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}
// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {
// convert y coordinate to origin bottom-left
CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
orgX = -aperture.origin.x,
scaleX = 1.0,
scaleY = 1.0,
rot = 0.0;
CGSize size;
switch (orientation) {
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
size = CGSizeMake(aperture.size.height, aperture.size.width);
break;
case UIImageOrientationDown:
case UIImageOrientationDownMirrored:
case UIImageOrientationUp:
case UIImageOrientationUpMirrored:
size = aperture.size;
break;
default:
assert(NO);
return nil;
}
switch (orientation) {
case UIImageOrientationRight:
rot = 1.0 * M_PI / 2.0;
orgY -= aperture.size.height;
break;
case UIImageOrientationRightMirrored:
rot = 1.0 * M_PI / 2.0;
scaleY = -1.0;
break;
case UIImageOrientationDown:
scaleX = scaleY = -1.0;
orgX -= aperture.size.width;
orgY -= aperture.size.height;
break;
case UIImageOrientationDownMirrored:
orgY -= aperture.size.height;
scaleY = -1.0;
break;
case UIImageOrientationLeft:
rot = 3.0 * M_PI / 2.0;
orgX -= aperture.size.height;
break;
case UIImageOrientationLeftMirrored:
rot = 3.0 * M_PI / 2.0;
orgY -= aperture.size.height;
orgX -= aperture.size.width;
scaleY = -1.0;
break;
case UIImageOrientationUp:
break;
case UIImageOrientationUpMirrored:
orgX -= aperture.size.width;
scaleX = -1.0;
break;
}
// set the draw rect to pan the image to the right spot
CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);
// create a context for the new image
UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
CGContextRef gc = UIGraphicsGetCurrentContext();
// apply rotation and scaling
CGContextRotateCTM(gc, rot);
CGContextScaleCTM(gc, scaleX, scaleY);
// draw the image to our clipped context using the offset rect
CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);
// pull the image from our cropped context
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
// pop the context to get back to the default
UIGraphicsEndImageContext();
// Note: this is autoreleased
return cropped;
}
Very small/simple Swift 5 version:
You shouldn't mix UI and CG objects; they sometimes have very different coordinate spaces. This can make you sad.
Note 👉: self.draw(at:)
@inlinable private prefix func - (right: CGPoint) -> CGPoint
{
return CGPoint(x: -right.x, y: -right.y)
}
extension UIImage
{
public func cropped(to cropRect: CGRect) -> UIImage?
{
let renderer = UIGraphicsImageRenderer(size: cropRect.size)
return renderer.image
{
_ in
self.draw(at: -cropRect.origin)
}
}
}
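Usage, with illustrative values (the rect is in the image's point coordinate space):

let thumbnail = photo.cropped(to: CGRect(x: 40, y: 40, width: 200, height: 200))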
Using the function:
CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
Here's some example code, written for a different purpose, but it clips fine.
- (UIImage *)aspectFillToSize:(CGSize)size
{
CGFloat imgAspect = self.size.width / self.size.height;
CGFloat sizeAspect = size.width/size.height;
CGSize scaledSize;
if (sizeAspect > imgAspect) { // increase width, crop height
scaledSize = CGSizeMake(size.width, size.width / imgAspect);
} else { // increase height, crop width
scaledSize = CGSizeMake(size.height * imgAspect, size.height);
}
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
[self drawInRect:CGRectMake(0.0f, 0.0f, scaledSize.width, scaledSize.height)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
If you want a portrait crop down the center of every photo, use @M-V's solution and replace cropRect:
CGFloat height = imageTaken.size.height;
CGFloat width = imageTaken.size.width;
CGFloat newWidth = height * 9 / 16;
CGFloat newX = fabs(width - newWidth) / 2;
CGRect cropRect = CGRectMake(newX,0, newWidth ,height);
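Or, wrapped up in Swift using the cropped(to:) extension from the Swift 5 answer above (a sketch with the 9:16 ratio hard-coded):

// Illustrative helper: centered 9:16 portrait crop of any photo.
func portraitCrop(of image: UIImage) -> UIImage? {
    let height = image.size.height
    let newWidth = height * 9 / 16
    let newX = abs(image.size.width - newWidth) / 2
    let cropRect = CGRect(x: newX, y: 0, width: newWidth, height: height)
    return image.cropped(to: cropRect)
}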
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO
class Image {
class func crop(image:UIImage, source:CGRect, aspect:CGSize, outputExtent:CGSize) -> UIImage {
let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))
let opaque = true, deviceScale:CGFloat = 0.0 // use scale of device's main screen
UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)
let scale = max(
targetRect.size.width / sourceRect.size.width,
targetRect.size.height / sourceRect.size.height)
let drawRect = CGRect(origin: -sourceRect.origin * scale, size: image.size * scale)
image.drawInRect(drawRect)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return scaledImage
}
}
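Note that this snippet assumes * and prefix - operators for CGPoint and CGSize, which are not in the standard library; something like the following sketch would need to be defined alongside it:

import CoreGraphics

// Assumed helper operators for the crop(...) code above.
prefix func - (point: CGPoint) -> CGPoint {
    return CGPoint(x: -point.x, y: -point.y)
}
func * (point: CGPoint, scale: CGFloat) -> CGPoint {
    return CGPoint(x: point.x * scale, y: point.y * scale)
}
func * (size: CGSize, scale: CGFloat) -> CGSize {
    return CGSize(width: size.width * scale, height: size.height * scale)
}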
There are a couple of things that I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you could also use this code to handle cropping/zooming by explicitly setting the context's scale parameter to the aforementioned scale factor. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than CoreGraphics / Quartz and Core Image approaches, and seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second to ImageIO, according to this post here: http://nshipster.com/image-resizing/
I want to create a new UIImage from another one that is rotated by 45° (around its bottom-left corner, clockwise). The space around the old image should be filled with white or similar. In the image I uploaded, the old image would be the blue one and the new image would be the full linked image, including the white parts.
Played a little bit in playground with Swift and here is my solution:
func rotateImage(image: UIImage!, var rotationDegree: CGFloat) -> UIImage {
// Rotating by 540 degrees is the same as rotating by 180 degrees, which is why we calculate the modulo
rotationDegree = rotationDegree % 360
// If degree is negative, then calculate positive
if rotationDegree < 0.0 {
rotationDegree = 360 + rotationDegree
}
// Get image size
let size = image.size
let width = size.width
let height = size.height
// Get degree which we will use for calculation
var calcDegree = rotationDegree
if calcDegree > 90 {
calcDegree = 90 - calcDegree % 90
}
// Calculate new size
let newWidth = width * CGFloat(cosf(Float(calcDegree.degreesToRadians))) + height * CGFloat(sinf(Float(calcDegree.degreesToRadians)))
let newHeight = width * CGFloat(sinf(Float(calcDegree.degreesToRadians))) + height * CGFloat(cosf(Float(calcDegree.degreesToRadians)))
let newSize = CGSize(width: newWidth, height: newHeight)
// Create context using new size, make it opaque, use screen scale
UIGraphicsBeginImageContextWithOptions(newSize, true, UIScreen.mainScreen().scale)
// Get context variable
let context = UIGraphicsGetCurrentContext()
// Set fill color to white (or any other)
// If no color needed, then set opaque to false when initialize context
CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
CGContextFillRect(context, CGRect(origin: CGPointZero, size: newSize))
// Rotate context and draw image
CGContextTranslateCTM(context, newSize.width * 0.5, newSize.height * 0.5)
CGContextRotateCTM(context, rotationDegree.degreesToRadians);
CGContextTranslateCTM(context, newSize.width * -0.5, newSize.height * -0.5)
image.drawAtPoint(CGPoint(x: (newSize.width - size.width) / 2.0, y: (newSize.height - size.height) / 2.0))
// Get image from context
let returnImage = UIGraphicsGetImageFromCurrentImageContext()
// End graphics context
UIGraphicsEndImageContext()
return returnImage
}
Do not forget to include this extension:
extension CGFloat {
var degreesToRadians : CGFloat {
return self * CGFloat(M_PI) / 180.0
}
}
I would recommend going through this answer to better understand how I calculated newSize after the image is rotated.
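To get a concrete feel for that newSize calculation: rotating a 100×50 image by 30° needs a bounding box of about 100·cos 30° + 50·sin 30° ≈ 111.6 points by 100·sin 30° + 50·cos 30° ≈ 93.3 points. A quick numeric check of the formula (plain Doubles, just to verify the math):

import Foundation

let w = 100.0, h = 50.0
let theta = 30.0 * .pi / 180
let newWidth = w * cos(theta) + h * sin(theta)    // ≈ 111.6
let newHeight = w * sin(theta) + h * cos(theta)   // ≈ 93.3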
If you just want to change the way an image is displayed, transform the image view that displays it.
If you really want a new rotated image, redraw the image in a transformed graphics context.
If you just want to rotate the UIImageView used to display the image, you could do this:
#define DegreesToRadians(x) ((x) * M_PI / 180.0) //put this at the top of your file
imageView.transform = CGAffineTransformMakeRotation(DegreesToRadians(45));
But if you want to rotate the actual image, do something like this:
- (UIImage *)image:(UIImage *)image rotatedByDegrees:(CGFloat)degrees
{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
// Rotate the image context
CGContextRotateCTM(bitmap, DegreesToRadians(degrees));
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), [image CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
The code above is adapted from this answer by The Lion: https://stackoverflow.com/a/11667808/1757960
Similar to Instagram, I have a square crop view (a UIScrollView) with a UIImageView inside it. The user can drag a portrait or landscape image inside the square rect (equal to the width of the screen), and then the image should be cropped at the scroll offset. The UIImageView is set to aspect fit. The UIScrollView's content size is set with a scale factor for either landscape or portrait, so that it renders correctly with the aspect-fit ratio.
When the user is done dragging, I want to scale the image up to a given size, let's say a 1000x1000 px square, and then crop it at the scroll offset (using [UIImage drawAtPoint:]).
The problem is I can't get the math right to get the right offset point. If I get it close on a 6+ it will be way off on a 4S.
Here's my code for the scale and crop:
- (UIImage *)squareImageFromImage:(UIImage *)image scaledToSize:(CGFloat)newSize {
CGAffineTransform scaleTransform;
CGPoint origin;
if (image.size.width > image.size.height) {
//landscape
CGFloat scaleRatio = newSize / image.size.height;
scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
origin = CGPointMake((int)(-self.scrollView.contentOffset.x*scaleRatio),0);
} else if (image.size.width < image.size.height) {
//portrait
CGFloat scaleRatio = newSize / image.size.width;
scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
origin = CGPointMake(0, (int)(-self.scrollView.contentOffset.y*scaleRatio));
} else {
//square
CGFloat scaleRatio = newSize / image.size.width;
scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
origin = CGPointMake(0, 0);
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(newSize, newSize), YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, scaleTransform);
[image drawAtPoint:origin];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
So, for example with landscape: if I drag the scroll view left so that the image is cropped all the way to the right, my offset will be close on a 6 Plus, but on a 4S it will be off by about 150-200 points.
Here is my code for setting up the scroll view and image view:
CGRect cropRect = CGRectMake(0.0f,0.0,SCREEN_WIDTH,SCREEN_WIDTH);
CGFloat ratio = (int)self.image.size.height/self.image.size.width;
CGRect r = CGRectMake(0.0,0.0,SCREEN_WIDTH,SCREEN_WIDTH);
if (ratio>1.00) {
//portrait
r = CGRectMake(0.0,0.0,SCREEN_WIDTH,(int)(SCREEN_WIDTH*ratio));
} else if (ratio<1.00) {
//landscape
CGFloat size = (int)self.image.size.width/self.image.size.height;
cropOffset = (SCREEN_WIDTH*size)-SCREEN_WIDTH;
r = CGRectMake(0.0,0.0,(int)(SCREEN_WIDTH*size),SCREEN_WIDTH);
}
NSLog(#"r.size.height == %.4f",r.size.height);
self.scrollView.frame = cropRect;
self.scrollView.contentSize = r.size;
self.imageView = [[UIImageView alloc] initWithFrame:r];
self.imageView.backgroundColor = [UIColor clearColor];
self.imageView.contentMode = UIViewContentModeScaleAspectFit;
self.imageView.image = self.image;
[self.scrollView addSubview:self.imageView];
Cropping math can be tricky. It's been a while since I've had to deal with this, so hopefully I'm pointing you in the right direction. Here is a chunk of code from Pixology that grabs a scaled visible rect from a UIScrollView. I think the missing ingredient here might be zoomScale.
CGRect visibleRect;
visibleRect.origin = _scrollView.contentOffset;
visibleRect.size = _scrollView.bounds.size;
// figure in the scale
float theScale = 1.0 / _scrollView.zoomScale;
visibleRect.origin.x *= theScale;
visibleRect.origin.y *= theScale;
visibleRect.size.width *= theScale;
visibleRect.size.height *= theScale;
You may also need to figure in device screen scale:
CGFloat screenScale = [[UIScreen mainScreen] scale];
See how far you can get with this info, and let me know.
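Once the visible rect is known, the crop itself could reuse the CGImage approach from the earlier answers. A sketch in Swift (names are mine; it assumes the image is laid out 1:1 at its point size inside the scroll view's content):

import UIKit

func visibleCrop(of image: UIImage, in scrollView: UIScrollView) -> UIImage? {
    let zoom = scrollView.zoomScale
    var rect = CGRect(origin: scrollView.contentOffset, size: scrollView.bounds.size)
    rect.origin.x /= zoom
    rect.origin.y /= zoom
    rect.size.width /= zoom
    rect.size.height /= zoom
    // CGImage coordinates are in pixels, so account for the image's scale factor.
    let pixelRect = rect.applying(CGAffineTransform(scaleX: image.scale, y: image.scale))
    guard let cropped = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}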
Help, I'm new to iOS programming. I want to rotate my UIImage, but I don't want the edges to be cut off or to lose any part of the image.
This is my code:
double angle = M_PI * 10 / 180;
CGSize s = {image.size.width, image.size.height};
UIGraphicsBeginImageContext(s);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, image.size.width/2, image.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx,CGRectMake(-[image size].width/2,-[image size].height/2,image.size.width, image.size.height),image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
The image rotates, but the size of the frame does not change, and because of that some of the image gets cut off.
OUTPUT:https://fbcdn-sphotos-e-a.akamaihd.net/hphotos-ak-xpf1/t1.0-9/1555286_761679470521535_1800180000235265553_n.jpg
EXPECTED OUTPUT: https://fbcdn-photos-g-a.akamaihd.net/hphotos-ak-xfa1/t1.0-0/10314770_761675840521898_6536715783383115855_a.jpg
Please help me, thank you.
This might be what you are searching for.
Just copy the following code at the end of the .m file (after the @end) in which you want to rotate an image.
@interface UIImage (RotationMethods)
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees;
@end
@implementation UIImage (RotationMethods)
static CGFloat DegreesToRadians(CGFloat degrees) {return degrees * M_PI / 180;};
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,self.size.width, self.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context
CGContextRotateCTM(bitmap, DegreesToRadians(degrees));
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Then rotate your image like in the example below:
CGFloat degrees = 90;
yourImage = [yourImage imageRotatedByDegrees:degrees];
I'm making an app that works like this: you load a photo and you put images over it, like balloons, etc.
When I try to merge one of these overlay images with only a resize, it works fine (about 10 px bigger than it should be, but no problem).
The problem comes when you rotate the image [UIImageView]: it appears much bigger than the image is. I've tried a lot of things and nothing works. I'll leave the code below; I hope someone can help.
Note: the image size is taken from inside the UIImageView, then multiplied by the scale of the main image.
- (UIImage *)mergeImage:(UIImageView *)mainImage withImageView:(UIImageView *)imageView {
UIImage *temp = imageView.image;
UIImage *tempMain = mainImage.image;
CGFloat mainScale = [self imageViewScaleFactor:mainImage];
CGFloat tempScale = 1/mainScale;
NSLog(#"%f", tempScale);
//Rotate UIIMAGE
UIGraphicsBeginImageContext(temp.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, temp.size.width/2, temp.size.height/2);
CGFloat angle = atan2(imageView.transform.b, imageView.transform.a);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(ctx, transform);
// Draw the image into the context
CGContextDrawImage(ctx, CGRectMake(-temp.size.width/2, -temp.size.height/2, temp.size.width, temp.size.height), temp.CGImage);
// Get an image from the context
temp = [UIImage imageWithCGImage: CGBitmapContextCreateImage(ctx)];
NSLog(#"%f %f %f", mainScale, mainImage.frame.size.width, mainImage.frame.size.height);
UIGraphicsBeginImageContextWithOptions(tempMain.size, NO, 1.0f);
//Get imageView size & position
NSLog(#"%f %f %f %f", imageView.frame.origin.x, imageView.frame.origin.y, imageView.frame.size.width, imageView.frame.size.height);
CGFloat offsetX = 0;
CGFloat offsetY = -44;
if (tempMain.size.height > tempMain.size.width) {
offsetX = ((tempMain.size.width * mainScale) - 320)/2;
}else{
offsetY = ((tempMain.size.height * mainScale) - 416)/2;
offsetY -= 44;
}
CGFloat imageViewX = (imageView.frame.origin.x + offsetX) * tempScale;
CGFloat imageViewY = (imageView.frame.origin.y + offsetY) * tempScale;
CGFloat imageViewW = imageView.frame.size.width * tempScale;
CGFloat imageViewH = imageView.frame.size.height * tempScale;
CGRect tempRect = CGRectMake(imageViewX, imageViewY, imageViewW, imageViewH);
[tempMain drawAtPoint:CGPointZero];
[temp drawInRect:tempRect];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Thanks
This is the solution that works for me:
Merging a UIImageView previously rotated by a gesture with another one. WYS is not WYG.
I just take a snapshot of the main screen and then crop it to the size of the photo; it's faster and cleaner. The resolution is OK if the app runs at Retina scale, but on a non-Retina device it isn't too good, and you need to prepare the code to work on both Retina and non-Retina screens.
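A rough Swift sketch of that snapshot-and-crop idea (names are illustrative; containerView is whatever view holds the composed photo, and the resolution is limited to the screen scale, as noted above):

import UIKit

func snapshotCrop(containerView: UIView, photoFrame: CGRect) -> UIImage? {
    // Render the composed view hierarchy to an image, then crop to the photo's frame.
    let renderer = UIGraphicsImageRenderer(bounds: containerView.bounds)
    let snapshot = renderer.image { _ in
        _ = containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
    }
    let scale = snapshot.scale
    let pixelRect = photoFrame.applying(CGAffineTransform(scaleX: scale, y: scale))
    guard let cropped = snapshot.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cropped, scale: scale, orientation: .up)
}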