I'm trying to implement image rotation in my code and am running a couple of tests. Since I don't know much about the math behind rotation, I just followed some instructions I found on the internet and implemented my code.
I found out that a whole 1-pixel row or column around the edge is lost every time I rotate the image (when rotating by more than M_PI at a time).
This is not obvious when I display the image as a UIImage object, but you can see it when you save the UIImage to a file.
Here's my test code link:
https://github.com/asldkfjwoierjlk/UIImageRotationTest/tree/master
I don't understand why this loss happens. Did I miss something? Or is it mathematically inevitable?
Here's the rotation method that I implemented if you are interested.
- (UIImage *)rotateImg:(UIImage *)srcImg
{
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, srcImg.size.width, srcImg.size.height)];
    CGFloat rotationInDegree = ROTATION_DEGREE;
    CGAffineTransform t = CGAffineTransformMakeRotation(rotationInDegree);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;

    // Set the opaque option to NO to leave the alpha channel intact.
    // Otherwise, there's no point of saving to either png or jpg.
    UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, rotatedSize.width / 2.0f, rotatedSize.height / 2.0f);
    CGContextRotateCTM(context, rotationInDegree);
    [srcImg drawInRect:CGRectMake(-srcImg.size.width / 2.0f, -srcImg.size.height / 2.0f, srcImg.size.width, srcImg.size.height)];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
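For reference, here is roughly how I exercise the method in the test project (the file name and log line are illustrative, not the exact test code); the missing row/column only becomes apparent once the result is written to disk:

    UIImage *src = [UIImage imageNamed:@"test.png"];   // illustrative file name
    UIImage *rotated = [self rotateImg:src];

    // Comparing the sizes before and after each pass makes the shrinking edge easy to spot.
    NSLog(@"before: %@  after: %@", NSStringFromCGSize(src.size), NSStringFromCGSize(rotated.size));

    // Saving as PNG is where the lost 1-pixel border shows up.
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"rotated.png"];
    [UIImagePNGRepresentation(rotated) writeToFile:path atomically:YES];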
You can handle the UIImageView as a view. This code works perfectly for me:
    CABasicAnimation *rotationAnimation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
    NSNumber *currentAngle = [rotatedViewBox.layer.presentationLayer valueForKeyPath:@"transform.rotation"];
    rotationAnimation.fromValue = currentAngle;
    rotationAnimation.toValue = @(50 * M_PI);
    rotationAnimation.duration = 50.0f;          // this might be too fast
    rotationAnimation.repeatCount = HUGE_VALF;   // HUGE_VALF is defined in math.h, so import it
    [rotatedViewBox.layer addAnimation:rotationAnimation forKey:@"rotationAnimationleft"];
Happy coding!
I have a problem rotating a UIImage without quality loss. I am currently using the ready-made method imageRotatedByDegrees: provided by BFKit, but my image becomes diffused (blurred).
Sample code:
    UIImage *image = [[UIImage imageNamed:@"car"] imageRotatedByDegrees:(((float)arc4random() / ARC4RANDOM_MAX) * 360)];
Code of imageRotatedByDegrees: :
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
    // calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we will rotate and scale around the center.
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);

    // Rotate the image context
    CGContextRotateCTM(bitmap, DegreesToRadians(degrees));

    // Now, draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Is there another way to rotate a UIImage without quality loss?
I'm the author of BFKit, and when I read your question I investigated the issue.
In the latest version (1.6.4) I've fixed it!
Now the image will no longer be diffused (blurred).
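For anyone who cannot update yet: this kind of blur usually comes from rendering into a bitmap context that ignores the screen scale. Below is a minimal sketch of the general idea only, not the verbatim BFKit source; it assumes the same DegreesToRadians macro as above, lives in a UIImage category, and uses a made-up method name:

    // Sketch only: render at the device's scale so the rotated bitmap keeps full resolution.
    - (UIImage *)sketch_imageRotatedByDegrees:(CGFloat)degrees
    {
        CGRect bounds = CGRectMake(0, 0, self.size.width, self.size.height);
        CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
        CGSize rotatedSize = CGRectApplyAffineTransform(bounds, t).size;

        // Passing 0.0 as the scale tells UIKit to use the main screen's scale (2.0 on retina).
        UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, 0.0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, rotatedSize.width / 2.0, rotatedSize.height / 2.0);
        CGContextRotateCTM(context, DegreesToRadians(degrees));

        // drawInRect: already handles the coordinate flip, so no manual scale by -1 is needed.
        [self drawInRect:CGRectMake(-self.size.width / 2.0, -self.size.height / 2.0,
                                    self.size.width, self.size.height)];

        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return newImage;
    }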
My aim was to rotate an image that was in my MKAnnotationView, so I solved the problem by rotating not the image but the whole MKAnnotationView.
An answer which helped me is this.
My code in MKMapViewDelegate method mapView:viewForAnnotation::
annotationView.transform = CGAffineTransformMakeRotation((((float)arc4random() / ARC4RANDOM_MAX) * (360)));
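For context, the surrounding delegate method looks roughly like this in my case (the reuse identifier and image name are illustrative):

    - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation
    {
        static NSString *reuseId = @"carAnnotation"; // illustrative identifier
        MKAnnotationView *annotationView = [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
        if (!annotationView) {
            annotationView = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:reuseId];
            annotationView.image = [UIImage imageNamed:@"car"];
        } else {
            annotationView.annotation = annotation;
        }

        // Rotating the whole view means the image's pixels are never resampled.
        annotationView.transform = CGAffineTransformMakeRotation((((float)arc4random() / ARC4RANDOM_MAX) * (360)));
        return annotationView;
    }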
I'm re-introducing myself to iOS programming and am running into a seemingly simple problem. I have been following this example of how to rotate an image according to the user's heading. The crucial part is:
float heading = -1.0f * M_PI * newHeading.magneticHeading / 180.0f;
arrowImage.transform = CGAffineTransformMakeRotation(heading);
The image is indeed rotated, but the center doesn't seem to be the anchor point for the rotation. I've been googling but can't figure it out. Could someone point me in the right direction?
Here's a screenshot in case it isn't clear: in the initial state, the label is centered in the circle and the white square is centered in the image:
This code works for me:
    UIImage *img = [UIImage imageNamed:@"Image.png"];
    UIImageView *imageToMove = [[UIImageView alloc] initWithImage:img];

    CATransform3D rotationTransform = CATransform3DMakeRotation(1.0f * M_PI, 0, 0, 1.0);
    CABasicAnimation *rotationAnimation = [CABasicAnimation animationWithKeyPath:@"transform"];
    rotationAnimation.toValue = [NSValue valueWithCATransform3D:rotationTransform];
    rotationAnimation.duration = 0.6f;
    rotationAnimation.cumulative = YES;
    rotationAnimation.repeatCount = FLT_MAX;

    [imageToMove.layer addAnimation:rotationAnimation forKey:@"rotationAnimation"];
    [self.view addSubview:imageToMove];
Swift port of Greg's code
    let rotationTransform: CATransform3D = CATransform3DMakeRotation(.pi, 0, 0, 1.0)
    let rotationAnimation = CABasicAnimation(keyPath: "transform")
    rotationAnimation.toValue = NSValue(caTransform3D: rotationTransform)
    rotationAnimation.duration = 0.6
    rotationAnimation.isCumulative = true
    rotationAnimation.repeatCount = Float.infinity
    progressArc.layer.add(rotationAnimation, forKey: "rotationAnimation")
Try this code for your case:

    static CGFloat angle = 0;   // static (or an ivar) so the angle persists between calls

    self.btnImage.transform = CGAffineTransformMakeRotation(angle);
    angle += 0.15;
Put the above into a method and call it from a timer, or whenever you want to change the angle of the image.
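For example, you can drive it with an NSTimer like this (the property names and the interval are just for illustration):

    // Assumes `angle`, `spinTimer` and `btnImage` are properties on the view controller.
    - (void)startSpinning
    {
        self.angle = 0;
        self.spinTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                          target:self
                                                        selector:@selector(rotateStep)
                                                        userInfo:nil
                                                         repeats:YES];
    }

    - (void)rotateStep
    {
        self.angle += 0.15;
        self.btnImage.transform = CGAffineTransformMakeRotation(self.angle);
    }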
I want to blend an image, but I have a problem: the image seems to lose half its resolution after I blend it. My code is:
    UIImageView *baseIgv2 = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 76, 76)];
    [self.view addSubview:baseIgv2];
    [baseIgv2 setImage:[UIImage imageNamed:@"btn_award_open.png"]];
    baseIgv2.center = CGPointMake(300, 300);

    UIGraphicsBeginImageContext(baseIgv2.bounds.size);
    [baseIgv2.image drawInRect:baseIgv2.bounds];
    [baseIgv2.image drawInRect:baseIgv2.bounds blendMode:kCGBlendModeScreen alpha:.8];
    [baseIgv2.image drawInRect:baseIgv2.bounds blendMode:kCGBlendModeDestinationIn alpha:.8];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [baseIgv2 setImage:newImage];

    CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"opacity"];
    anim.beginTime = CACurrentMediaTime();
    anim.fromValue = @.5;
    anim.toValue = @1;
    anim.autoreverses = YES;
    anim.duration = .5;
    anim.repeatDuration = 1000;
    anim.repeatCount = 1000;
    [baseIgv2.layer addAnimation:anim forKey:nil];
The project contains btn_award_open@2x.png, btn_award_open@2x~ipad.png, and btn_award_open~ipad.png.
Before I blend, the image looks fine, but after blending it's no longer retina. Can anyone help?
Although what you are doing is correct, you are using an old UIKit function to create your bitmap context.
To scale your bitmap context for retina screens you should use this function instead:
    void UIGraphicsBeginImageContextWithOptions(
        CGSize size,
        BOOL opaque,
        CGFloat scale
    );
So you need to replace this line of code:
UIGraphicsBeginImageContext(baseIgv2.bounds.size);
With this:
UIGraphicsBeginImageContextWithOptions(baseIgv2.bounds.size, YES, 0.0);
More info about the function and its parameters:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIKitFunctionReference/Reference/reference.html
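Put together, the drawing part of your snippet then reads (only the context call changes; passing 0.0 as the scale makes UIKit pick the device's screen scale):

    UIGraphicsBeginImageContextWithOptions(baseIgv2.bounds.size, YES, 0.0); // scale 0.0 = screen scale
    [baseIgv2.image drawInRect:baseIgv2.bounds];
    [baseIgv2.image drawInRect:baseIgv2.bounds blendMode:kCGBlendModeScreen alpha:.8];
    [baseIgv2.image drawInRect:baseIgv2.bounds blendMode:kCGBlendModeDestinationIn alpha:.8];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [baseIgv2 setImage:newImage];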
I have implemented a highlight function in my app. The highlight is drawn into a UIImage so that it can be saved as a PNG representation. Everything was working perfectly, but recently I ran into a very confusing issue. Sometimes when I am highlighting, the drawings get distorted. Here is what it looks like:
Whenever I move my finger to highlight the characters, the drawn highlights stretch to the left. Another one:
In this one, every time I move my finger to highlight, the drawn highlights move upward!
This is all very confusing to me. It happens from time to time, and sometimes only on certain pages. Sometimes it works fine, just like this:
I am very confused about why this is happening. Can anyone tell me, or at least give me an idea of, why this is happening? Please help me.
THE CODE:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    currPoint = [[touches anyObject] locationInView:self];
    for (int r = 0; r < [rectangles count]; r++)
    {
        CGRect rect = [[rectangles objectAtIndex:r] CGRectValue];

        // Get the edges of the rectangles
        CGFloat xEdge = rect.origin.x + rect.size.width;
        CGFloat yEdge = rect.origin.y + rect.size.height;

        if ((currPoint.x > rect.origin.x && currPoint.x < xEdge) && (currPoint.y < rect.origin.y && currPoint.y > yEdge))
        {
            imgView.image = [self drawRectsToImage:imgView.image withRectOriginX:rect.origin.x originY:rect.origin.y rectWidth:rect.size.width rectHeight:rect.size.height];
            break;
        }
    }
}
// The function that draws the highlight
- (UIImage *)drawRectsToImage:(UIImage *)image withRectOriginX:(CGFloat)x originY:(CGFloat)y rectWidth:(CGFloat)width rectHeight:(CGFloat)ht
{
    UIGraphicsBeginImageContext(self.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    [image drawInRect:self.bounds];

    CGContextSetStrokeColorWithColor(context, [UIColor clearColor].CGColor);
    CGRect rect = CGRectMake(x, y, width, ht);
    CGContextAddRect(context, rect);
    CGContextSetCMYKFillColor(context, 0, 0, 1, 0, 0.5);
    CGContextFillRect(context, rect);

    UIImage *ret = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return ret;
}
I can't tell you exactly why these artifacts occur, but...
I don't think it's a good idea to render an image in a touch handler. Touch handling should do as little as possible.
You might want to try using CoreAnimation's CALayer:
Set your image as the background image like this (assuming your class is a subclass of UIView), or use an actual UIImageView:
self.layer.contents = (id)image.CGImage;
When you detect that another rectangle rect has been touched, add the highlight as a sublayer above your image background:
    CALayer *highlightLayer = [CALayer layer];
    highlightLayer.frame = rect;
    highlightLayer.backgroundColor = [UIColor yellowColor].CGColor;
    highlightLayer.opacity = 0.25f;
    [self.layer addSublayer:highlightLayer];
The shouldRasterize layer property may help to improve performance if necessary:
self.layer.shouldRasterize = YES;
In order to use all of this, link the QuartzCore framework and import <QuartzCore/QuartzCore.h> in your implementation file.
You can create a PNG representation of your layer hierarchy by rendering self.layer into an image context, then getting the image and its PNG representation.
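A minimal sketch of that last step, assuming self is the view that owns the layer hierarchy:

    // Render the layer tree into an image context, then grab the image and its PNG data.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *pngData = UIImagePNGRepresentation(snapshot);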
I'm building an app that works like this: you load a photo and put images over it, like balloons, etc.
When I merge one of these overlay images with only a resize, it works fine (it comes out about 10px larger than it should be, but that's no problem).
The problem comes when you rotate the overlay [UIImageView]: in the merged result it appears much bigger than the image is. I've tried a lot of things and nothing works. I'll leave the code below; I hope someone can help.
Note: the image size is the one inside the UIImageView, then multiplied by the scale of the main image.
- (UIImage *)mergeImage:(UIImageView *)mainImage withImageView:(UIImageView *)imageView
{
    UIImage *temp = imageView.image;
    UIImage *tempMain = mainImage.image;

    CGFloat mainScale = [self imageViewScaleFactor:mainImage];
    CGFloat tempScale = 1 / mainScale;
    NSLog(@"%f", tempScale);

    // Rotate the UIImage
    UIGraphicsBeginImageContext(temp.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, temp.size.width / 2, temp.size.height / 2);
    CGFloat angle = atan2(imageView.transform.b, imageView.transform.a);
    transform = CGAffineTransformRotate(transform, angle);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGContextConcatCTM(ctx, transform);

    // Draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(-temp.size.width / 2, -temp.size.height / 2, temp.size.width, temp.size.height), temp.CGImage);

    // Get an image from the context
    temp = [UIImage imageWithCGImage:CGBitmapContextCreateImage(ctx)];

    NSLog(@"%f %f %f", mainScale, mainImage.frame.size.width, mainImage.frame.size.height);

    UIGraphicsBeginImageContextWithOptions(tempMain.size, NO, 1.0f);

    // Get the imageView size & position
    NSLog(@"%f %f %f %f", imageView.frame.origin.x, imageView.frame.origin.y, imageView.frame.size.width, imageView.frame.size.height);

    CGFloat offsetX = 0;
    CGFloat offsetY = -44;
    if (tempMain.size.height > tempMain.size.width) {
        offsetX = ((tempMain.size.width * mainScale) - 320) / 2;
    } else {
        offsetY = ((tempMain.size.height * mainScale) - 416) / 2;
        offsetY -= 44;
    }

    CGFloat imageViewX = (imageView.frame.origin.x + offsetX) * tempScale;
    CGFloat imageViewY = (imageView.frame.origin.y + offsetY) * tempScale;
    CGFloat imageViewW = imageView.frame.size.width * tempScale;
    CGFloat imageViewH = imageView.frame.size.height * tempScale;
    CGRect tempRect = CGRectMake(imageViewX, imageViewY, imageViewW, imageViewH);

    [tempMain drawAtPoint:CGPointZero];
    [temp drawInRect:tempRect];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
Thanks
This is the solution that works for me:
Merging a previously rotated by gesture UIImageView with another one. WYS is not WYG
I just take a snapshot of the main screen and then crop it to the size of the photo; it's faster and cleaner. The resolution is OK if the app runs on a retina device, but on a non-retina device it isn't that good. And you need to prepare the code to work on both retina and non-retina.
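Roughly, that approach looks like this (view names are illustrative; note that CGImageCreateWithImageInRect works in pixels, so the crop rect is multiplied by the screen scale):

    // Snapshot the on-screen composition at the device's scale.
    CGFloat screenScale = [UIScreen mainScreen].scale;
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, screenScale);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // mainImage is the UIImageView showing the photo (illustrative name).
    CGRect cropRect = CGRectMake(mainImage.frame.origin.x * screenScale,
                                 mainImage.frame.origin.y * screenScale,
                                 mainImage.frame.size.width * screenScale,
                                 mainImage.frame.size.height * screenScale);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(screenshot.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:screenScale
                                     orientation:UIImageOrientationUp];
    CGImageRelease(croppedRef);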