UIImageView gets blurry if I draw on it after zooming in - iOS

I have a UIImageView displaying a picture.
I registered a pinch gesture to zoom in/out and a pan gesture to draw on it (I have a good reason to use the pan gesture for that instead of touchesMoved).
Here is the pinch target:
- (void)pinchGestureActionOnSubject:(UIPinchGestureRecognizer *)pinch
{
    if ([pinch state] == UIGestureRecognizerStateBegan)
    {
        _lastScale = 1.0;
    }
    CGFloat scale = 1.0 - (_lastScale - [pinch scale]);
    CGAffineTransform currentTransform = self.imageViewSubject.transform;
    CGAffineTransform newTransform = CGAffineTransformScale(currentTransform, scale, scale);
    [self.imageViewSubject setTransform:newTransform];
    _lastScale = [pinch scale];
}
and here is the pan target:
- (void)panOnSubjectForDrawing:(id)sender
{
    if ([sender numberOfTouches] > 0)
    {
        CGPoint currentPoint = [sender locationOfTouch:0 inView:self.imageViewSubject];
        if (lastPoint.x == 0 || lastPoint.y == 0)
        {
            lastPoint.x = currentPoint.x;
            lastPoint.y = currentPoint.y;
        }
        UIGraphicsBeginImageContext(self.imageViewSubject.bounds.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.imageViewSubject.image drawInRect:CGRectMake(0, 0, self.imageViewSubject.bounds.size.width, self.imageViewSubject.bounds.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, 10); // Stroke thickness
        CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear); // Transparent
        CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
        CGContextStrokePath(context);
        self.imageViewSubject.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        lastPoint = currentPoint;
    }
}
The problem is that if I draw after zooming in, the picture gets blurry, but if I draw without zooming, the image stays intact.
I heard this can happen because the frame has to be integral, so I added:
self.imageViewSubject.frame = CGRectIntegral(self.imageViewSubject.frame);
before drawing, but it made things worse: much, much blurrier.
Any idea?
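One likely cause (my assumption; this question has no confirmed fix here): UIGraphicsBeginImageContext creates a bitmap at scale 1.0, so on a Retina screen every redraw resamples the image at non-Retina resolution, and the zoom transform then magnifies the softness. A minimal sketch of the usual workaround is to open the context at the screen's scale with UIGraphicsBeginImageContextWithOptions:
// Sketch: passing 0.0 as the scale makes the bitmap match the main screen's
// scale (2.0 on Retina) instead of the fixed 1.0 that
// UIGraphicsBeginImageContext uses.
UIGraphicsBeginImageContextWithOptions(self.imageViewSubject.bounds.size, NO, 0.0);
// ... same drawing code as in panOnSubjectForDrawing: above ...
self.imageViewSubject.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that this still caps the stored image at the view's point size, so zooming far past 1x will always soften it somewhat; keeping full resolution would require rendering into a context sized to the original image.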

Related

How can I draw a line with a gradually changing edge using Core Graphics?

I want to implement a brush-like function: the area the finger swipes over becomes transparent, with a gradually fading border.
Right now I can only make the area fully transparent, with the following code:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.eraser) return;
    CGFloat scale = self.transform.a;
    if (scale < 1) scale = 1;
    CGPoint p = [[touches anyObject] locationInView:self];
    CGPoint q = [[touches anyObject] previousLocationInView:self];
    UIImage *image = self.image;
    CGSize size = self.frame.size;
    UIGraphicsBeginImageContext(size);
    CGRect rect;
    rect.origin = CGPointZero;
    rect.size = size;
    [image drawInRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextBeginPath(context);
    CGContextSaveGState(context);
    CGContextSetLineWidth(context, (10.0 / scale) + 1);
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextMoveToPoint(context, q.x, q.y);
    CGContextAddLineToPoint(context, p.x, p.y);
    CGContextStrokePath(context);
    CGContextRestoreGState(context);
    UIImage *editedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setBounds:rect];
    [self setImage:editedImage];
}
How can I get an edge that changes gradually? Thanks in advance.
You can achieve this effect by drawing a radial gradient with a variable alpha, in kCGBlendModeDestinationIn mode, at each spot the user's finger passes over.
This blend mode applies only the new layer's alpha to the layers below, so the gradient's varying alpha produces the soft edge.
const CGFloat kBrushSize = 10.f;
CGContextSaveGState(context);
// Make a radial gradient that goes from fully transparent at the center
// to fully opaque at the edge (with kCGBlendModeDestinationIn only the
// alpha channel matters).
size_t num_locations = 2;
CGFloat locations[2] = { 0.0, 1.0 };
CGFloat components[8] = { 1.0, 1.0, 1.0, 0.0,
                          1.0, 1.0, 1.0, 1.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGGradientRef myGradient = CGGradientCreateWithColorComponents(myColorspace, components,
                                                               locations, num_locations);
CGColorSpaceRelease(myColorspace);
// Draw the gradient at the point using kCGBlendModeDestinationIn.
// This mode only applies the new layer's alpha to the lower layer.
CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
CGContextDrawRadialGradient(context, myGradient, p, 0.f, p, (kBrushSize / scale) + 1, kCGGradientDrawsAfterEndLocation);
CGGradientRelease(myGradient);
CGContextRestoreGState(context);
Note: with this technique, if the user moves his or her finger very fast, you may see a spacing effect where discrete brush dots are visible. This is a feature in some graphics software, but if it is undesirable you can interpolate between the last and current points and stamp additional brush dots for a more continuous stroke, as sketched below.
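A minimal interpolation sketch (p, q, scale, context, and myGradient are the names from the code above; the stamp spacing of half the brush radius is an arbitrary choice):
// Stamp the radial-gradient brush at evenly spaced points between the
// previous point q and the current point p so fast strokes stay continuous.
CGFloat radius = (kBrushSize / scale) + 1;
CGFloat distance = hypotf(p.x - q.x, p.y - q.y);
CGFloat spacing = radius / 2; // assumed value; smaller = smoother but slower
int steps = MAX(1, (int)ceilf(distance / spacing));
for (int i = 0; i <= steps; i++) {
    CGFloat t = (CGFloat)i / steps;
    CGPoint stamp = CGPointMake(q.x + t * (p.x - q.x), q.y + t * (p.y - q.y));
    CGContextDrawRadialGradient(context, myGradient, stamp, 0.f, stamp, radius,
                                kCGGradientDrawsAfterEndLocation);
}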
Also, you should be able to adjust the gradient color stops to achieve any kind of brush softness you like.
Source: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_shadings/dq_shadings.html

Touch points after CGAffineTransformTranslate and CGAffineTransformScale are incorrect

I am developing a simple drawing app with UIKit, based on the idea shared in Ray Wenderlich's tutorial. The difference is that I need a feature to zoom/scale into my drawing and draw finer lines. I am able to zoom in using CGAffineTransformScale (with, of course, a UIPinchGestureRecognizer) and move the UIImage around using CGAffineTransform. The problem is that once zoomed in, the UITouch points detected and the actual touch points have a huge offset, and this offset gets bigger as I keep scaling the image.
In the code:
drawingImage - the image view the user interacts with
savingImage - where drawn lines are saved
transform_translate - a CGAffineTransform
lastScale - a CGFloat storing the last zoom scale value
lastPoint - a CGPoint storing the last touch point
lastPointForPinch - a CGPoint storing the last pinch point
The pinch gesture is initialized in viewDidLoad as:
pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchGestureDetected:)];
[self.drawingImage addGestureRecognizer:pinchGestureRecognizer];
The method for pinch gesture detection is:
- (void)pinchGestureDetected:(UIPinchGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = [recognizer scale];
        lastPointForPinch = [recognizer locationInView:self.drawingImage];
    }
    if ([recognizer state] == UIGestureRecognizerStateBegan ||
        [recognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[recognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 2.0;
        const CGFloat kMinScale = 1.0;
        CGFloat newScale = 1 - (lastScale - [recognizer scale]);
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[recognizer view] transform], newScale, newScale);
        self.savingImage.transform = transform;
        self.drawingImage.transform = transform;
        lastScale = [recognizer scale]; // Store the previous scale factor for the next pinch gesture call
        CGPoint point = [recognizer locationInView:self.drawingImage];
        transform_translate = CGAffineTransformTranslate([[recognizer view] transform], point.x - lastPointForPinch.x, point.y - lastPointForPinch.y);
        self.savingImage.transform = transform_translate;
        self.drawingImage.transform = transform_translate;
        lastPointForPinch = [recognizer locationInView:self.drawingImage];
    }
}
The method for drawing lines (FYI, this is a fairly standard procedure taken from the above-mentioned tutorial; I am including it here so that if I made a mistake in it, it can be caught):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    mouseSwiped = NO;
    UITouch *touch = [touches anyObject];
    lastPoint = [touch locationInView:self.drawingImage];
    UIGraphicsBeginImageContext(self.savingImage.frame.size);
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.drawingImage];
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(ctxt, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(ctxt, currentPoint.x, currentPoint.y);
    CGContextSetLineCap(ctxt, kCGLineCapRound);
    CGContextSetLineWidth(ctxt, brush);
    CGContextSetRGBStrokeColor(ctxt, red, green, blue, opacity);
    CGContextSetBlendMode(ctxt, kCGBlendModeNormal);
    CGContextSetShouldAntialias(ctxt, YES);
    CGContextSetAllowsAntialiasing(ctxt, YES);
    CGContextStrokePath(ctxt);
    self.drawingImage.image = UIGraphicsGetImageFromCurrentImageContext();
    lastPoint = currentPoint;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (!mouseSwiped) {
        UIGraphicsEndImageContext();
        UITouch *touch = [touches anyObject];
        CGPoint currentPoint = [touch locationInView:self.drawingImage];
        UIGraphicsBeginImageContext(self.drawingImage.frame.size);
        CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
        CGContextSetLineWidth(UIGraphicsGetCurrentContext(), brush);
        CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, opacity);
        CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
        CGContextStrokePath(UIGraphicsGetCurrentContext());
        self.drawingImage.image = UIGraphicsGetImageFromCurrentImageContext();
        [self.drawingImage.image drawInRect:CGRectMake(0, 0, self.drawingImage.frame.size.width, self.drawingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    }
    UIGraphicsEndImageContext();
    UIGraphicsBeginImageContext(self.savingImage.frame.size);
    [self.savingImage.image drawInRect:CGRectMake(0, 0, self.savingImage.frame.size.width, self.savingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    [self.drawingImage.image drawInRect:CGRectMake(0, 0, self.drawingImage.frame.size.width, self.drawingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    self.savingImage.image = UIGraphicsGetImageFromCurrentImageContext();
    self.drawingImage.image = nil;
    UIGraphicsEndImageContext();
}
I have tried doing CGPointApplyAffineTransform(point, transform_translate) but the huge offset still remains.
I hope my question is explained clearly and that someone can help me. I have been struggling to make progress on this. Thanks in advance.
I finally found the solution... one silly mistake made again and again:
locationInView: needed to be relative to self.view, not to the image view.
@davidkonard, thanks for the suggestion. Actually, I did not realize that (in the context of a drawing app) the user touches the screen intending for the drawing to happen exactly at that point, so even if the UIImageView is moved, the user still wants to draw a point/line/whatever under his finger. Therefore locationInView: is supposed to take self.view (and self.view in my case was never transformed).
I hope this explains why I was making the mistake and how I came up with the solution.
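In code, the change amounts to one line (a sketch; it assumes, as described above, that self.view itself is never transformed):
// Before: points taken in the transformed image view drift as the scale changes.
// lastPoint = [touch locationInView:self.drawingImage];
// After: take the points in the untransformed root view instead.
lastPoint = [touch locationInView:self.view];
// ...and likewise for every other locationInView: call in the touch handlers.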

How to get a rotated image of an image view in a UIView?

I created RotatedView, a subclass of UIView, added a UIImageView as a subview of RotatedView, and also drew an image on RotatedView in the drawRect: method, matching the imageView's image. I applied pinch and rotation gestures to the imageView. When I pinch the imageView, the drawn image changes too, but when I rotate the imageView, the drawn image does not change. I used the following code:
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.backgroundColor = [UIColor redColor];
        imageView = [[UIImageView alloc] initWithFrame:CGRectMake(80, 150, 100, 150)];
        imageView.image = [UIImage imageNamed:@"Images_6.jpg"];
        imageView.userInteractionEnabled = YES;
        [self addSubview:imageView];
        UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationMethod:)];
        rotationGesture.delegate = self;
        [imageView addGestureRecognizer:rotationGesture];
    }
    return self;
}
- (void)drawRect:(CGRect)rect
{
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect rectFrame = CGRectMake(0, 0, imageView.frame.size.width, imageView.frame.size.height);
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(context, 10.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, 0, 0);
    CGContextAddLineToPoint(context, 90, 0);
    CGContextAddLineToPoint(context, 90, 90);
    CGContextAddLineToPoint(context, 0, 90);
    CGContextAddLineToPoint(context, 0, 0);
    CGContextClip(context);
    CGContextStrokePath(context);
    CGContextDrawImage(context, rectFrame, imageView.image.CGImage);
}
- (void)rotationMethod:(UIRotationGestureRecognizer *)gestureRecognizer {
    NSLog(@"rotationMethod");
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformRotate(gestureRecognizer.view.transform, gestureRecognizer.rotation);
        gestureRecognizer.view.transform = transform;
        [gestureRecognizer setRotation:0];
    }
    [self setNeedsDisplay];
}
How do I get the drawn image in the UIView to rotate when the imageView is rotated?
Edited:
I got a solution using the first method. Now I am using a second method, which I think is simpler, but I am not sure which one is best. In the second method, the image is rotated, but not around its center point. I am struggling to solve this problem. Please help me.
The modified methods are:
- (void)drawRect:(CGRect)rect
{
    NSLog(@"drawRect");
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(context, 10.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, imageRect.origin.x, imageRect.origin.y);
    CGContextAddLineToPoint(context, imageRect.size.width, imageRect.origin.y);
    CGContextAddLineToPoint(context, imageRect.size.width, imageRect.size.height);
    CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.size.height);
    CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.origin.y);
    CGContextClip(context);
    CGContextStrokePath(context);
    CGContextDrawImage(context, imageRect, [self rotateImage:imageView.image].CGImage);
}
- (void)rotationMethod:(UIRotationGestureRecognizer *)gestureRecognizer {
    NSLog(@"rotationMethod");
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformRotate(gestureRecognizer.view.transform, gestureRecognizer.rotation);
        gestureRecognizer.view.transform = transform;
        lastRotation = gestureRecognizer.rotation;
        [gestureRecognizer setRotation:0];
    }
    [self setNeedsDisplay];
}
- (UIImage *)rotateImage:(UIImage *)img
{
    NSLog(@"rotateImage");
    CGAffineTransform transform = imageView.transform;
    transform = CGAffineTransformTranslate(transform, img.size.width, img.size.height);
    transform = CGAffineTransformRotate(transform, lastRotation);
    transform = CGAffineTransformScale(transform, -1, -1);
    CGContextRef ctx = CGBitmapContextCreate(NULL, img.size.width, img.size.height,
                                             CGImageGetBitsPerComponent(img.CGImage), 0,
                                             CGImageGetColorSpace(img.CGImage),
                                             CGImageGetBitmapInfo(img.CGImage));
    CGContextConcatCTM(ctx, transform);
    CGContextDrawImage(ctx, CGRectMake(20, 20, img.size.width, img.size.height), img.CGImage);
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return newImg;
}
You need to use CGContextScaleCTM and CGContextRotateCTM (and possibly other CTM calls) to transform your CGContextRef appropriately so that it matches your UIImageView's transform.
Take a look at the Quartz 2D Programming Guide.
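A sketch of that suggestion applied to the question's drawRect: (imageView and the 90x90 rect are taken from the code above; this is not the answerer's exact code):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect imageRect = CGRectMake(0, 0, 90, 90);
    CGContextSaveGState(context);
    // Rotate around the center of the image rect by concatenating the same
    // transform the gesture applied to the image view.
    CGContextTranslateCTM(context, CGRectGetMidX(imageRect), CGRectGetMidY(imageRect));
    CGContextConcatCTM(context, imageView.transform);
    CGContextTranslateCTM(context, -CGRectGetMidX(imageRect), -CGRectGetMidY(imageRect));
    // CGContextDrawImage uses a flipped coordinate system relative to UIKit,
    // so flip the y-axis before drawing.
    CGContextTranslateCTM(context, 0, imageRect.size.height);
    CGContextScaleCTM(context, 1, -1);
    CGContextDrawImage(context, imageRect, imageView.image.CGImage);
    CGContextRestoreGState(context);
}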
OK, finally I got it using ANImageBitmapRep. I declared lastRotation, angle, and rotateMe in the RotatedView.h class.
- (void)rotationMethod:(UIRotationGestureRecognizer *)gestureRecognizer {
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        if (!rotateMe) {
            rotateMe = [[ANImageBitmapRep alloc] initWithImage:displayImageView.image];
        } else {
            rotateMe = [ANImageBitmapRep imageBitmapRepWithImage:displayImageView.image];
        }
    }
    CGFloat rotation = 0.0 - (lastRotation - [gestureRecognizer rotation]);
    CGAffineTransform currentTransform = [gestureRecognizer view].transform;
    CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, rotation);
    [[gestureRecognizer view] setTransform:newTransform];
    lastRotation = [gestureRecognizer rotation];
    angle = lastRotation * (180 / M_PI);
    [self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
    CGRect imageRect = CGRectMake(0, 0, 90, 90);
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(context, 10.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, imageRect.origin.x, imageRect.origin.y);
    CGContextAddLineToPoint(context, imageRect.size.width, imageRect.origin.y);
    CGContextAddLineToPoint(context, imageRect.size.width, imageRect.size.height);
    CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.size.height);
    CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.origin.y);
    CGContextClip(context);
    CGContextStrokePath(context);
    UIImage *rotatedImage1 = [UIImage imageWithCGImage:[rotateMe imageByRotating:angle]];
    CGContextDrawImage(context, imageRect, rotatedImage1.CGImage);
}
I found this solution myself. If anyone finds a better one, please give me suggestions for this problem.

How to save a UIImageView after UIGestureRecognizers have been applied?

I have an app with two UIImageViews layered over each other: backgroundImg and frontImg, where frontImg can be rotated, moved, and scaled via UIGestureRecognizers.
When I want to save my UIImages, I merge them, but they are saved as if they had never been touched.
Does anyone know how to fix this?
This is my saving method:
UIGraphicsBeginImageContext(self.backgroundImg.image.size);
CGRect rect = CGRectMake(0, 0, self.backgroundImg.image.size.width, self.backgroundImg.image.size.height);
self.frontImg.contentMode = UIViewContentModeScaleAspectFit;
[self.backgroundImg.image drawInRect:rect];
[self.frontImg.image drawInRect:rect];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[imageView3 setImage:resultingImage];
UIGraphicsBeginImageContextWithOptions(self.backgroundImg.bounds.size, NO,0.0);
[self.backgroundImg.image drawInRect:CGRectMake(0, 0, self.backgroundImg.frame.size.width, self.backgroundImg.frame.size.height)];
//UIImage *SaveImage = UIGraphicsGetImageFromCurrentImageContext();
//UIImage *resultingImage = [self mergeImage:self.backgroundImg.image withImage:self.frontImg.image];
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(resultingImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
These are my gesture functions:
-(IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:self.view];
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
                                         recognizer.view.center.y + translation.y);
    [recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}
-(IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return ![gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]];
}
-(IBAction)handleRotate:(UIRotationGestureRecognizer *)recognizer
{
    recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);
    self.frontImg.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);
    recognizer.rotation = 0;
}
Your front image isn't being drawn with the coordinates, scale, etc. that it has in the view; you are drawing it into the same rect as the background (0, 0, backgroundWidth, backgroundHeight). You need to make sure that when you draw it into the graphics context, you use the proper location, size, and scale.
If both of the images are contained within the same view, you can just screenshot that view. This post should help you out: http://ios-dev-blog.com/how-to-create-uiimage-from-uiview/
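For reference, a minimal sketch of the screenshot approach, assuming both image views share a common superview (the name containerView is hypothetical):
#import <QuartzCore/QuartzCore.h>

// Render the container layer (and therefore both image views, with their
// current transforms) into a bitmap at screen scale.
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(snapshot, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);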

Custom alert view causes crash when iPad is rotated

I am showing a custom alert view, derived from UIAlertView, in my view controller. When I rotate the device two or three times, both the view controller and the alert view get rotated, but then the app crashes with no clear clues. I have a breakpoint on All Exceptions, but it doesn't catch it.
This crash does not happen if I use a standard UIAlertView. I got the custom alert view's code from someone else. Is something misimplemented here? Or how can I get more clues about what's happening?
@implementation CustomAlertView
- (void)setBackgroundColor:(UIColor *)background withStrokeColor:(UIColor *)stroke
{
    if (fillColor != nil)
    {
        [fillColor release];
        [borderColor release];
    }
    fillColor = [background retain];
    borderColor = [stroke retain];
}
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
    {
        if (fillColor == nil)
        {
        }
    }
    return self;
}
- (void)layoutSubviews
{
    for (UIView *sub in [self subviews])
    {
        if ([sub class] == [UIImageView class] && sub.tag == 0)
        {
            // The alert background UIImageView tag is 0,
            // if you are adding your own UIImageView's
            // make sure your tags != 0 or this fix
            // will remove your UIImageView's as well!
            [sub removeFromSuperview];
            break;
        }
    }
}
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, rect);
    CGContextSetAllowsAntialiasing(context, true);
    CGContextSetLineWidth(context, 0.0);
    CGContextSetAlpha(context, 0.8);
    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, [borderColor CGColor]);
    CGContextSetFillColorWithColor(context, [fillColor CGColor]);
    // Draw background
    CGFloat backOffset = 2;
    CGRect backRect = CGRectMake(rect.origin.x + backOffset,
                                 rect.origin.y + backOffset,
                                 rect.size.width - backOffset*2,
                                 rect.size.height - backOffset*2);
    [self drawRoundedRect:backRect inContext:context withRadius:8];
    CGContextDrawPath(context, kCGPathFillStroke);
    // Clip context
    CGRect clipRect = CGRectMake(backRect.origin.x + backOffset-1,
                                 backRect.origin.y + backOffset-1,
                                 backRect.size.width - (backOffset-1)*2,
                                 backRect.size.height - (backOffset-1)*2);
    [self drawRoundedRect:clipRect inContext:context withRadius:8];
    CGContextClip(context);
    // Draw highlight
    CGGradientRef glossGradient;
    CGColorSpaceRef rgbColorspace;
    size_t num_locations = 2;
    CGFloat locations[2] = { 0.0, 1.0 };
    CGFloat components[8] = { 1.0, 1.0, 1.0, 0.35, 1.0, 1.0, 1.0, 0.06 };
    rgbColorspace = CGColorSpaceCreateDeviceRGB();
    glossGradient = CGGradientCreateWithColorComponents(rgbColorspace,
                                                        components, locations, num_locations);
    CGRect ovalRect = CGRectMake(-130, -115, (rect.size.width*2),
                                 rect.size.width/2);
    CGPoint start = CGPointMake(rect.origin.x, rect.origin.y);
    CGPoint end = CGPointMake(rect.origin.x, rect.size.height/5);
    CGContextSetAlpha(context, 1.0);
    CGContextAddEllipseInRect(context, ovalRect);
    CGContextClip(context);
    CGContextDrawLinearGradient(context, glossGradient, start, end, 0);
    CGGradientRelease(glossGradient);
    CGColorSpaceRelease(rgbColorspace);
}
- (void)drawRoundedRect:(CGRect)rrect inContext:(CGContextRef)context
             withRadius:(CGFloat)radius
{
    CGContextBeginPath(context);
    CGFloat minx = CGRectGetMinX(rrect), midx = CGRectGetMidX(rrect),
            maxx = CGRectGetMaxX(rrect);
    CGFloat miny = CGRectGetMinY(rrect), midy = CGRectGetMidY(rrect),
            maxy = CGRectGetMaxY(rrect);
    CGContextMoveToPoint(context, minx, midy);
    CGContextAddArcToPoint(context, minx, miny, midx, miny, radius);
    CGContextAddArcToPoint(context, maxx, miny, maxx, midy, radius);
    CGContextAddArcToPoint(context, maxx, maxy, midx, maxy, radius);
    CGContextAddArcToPoint(context, minx, maxy, minx, midy, radius);
    CGContextClosePath(context);
}
- (void)disableDismissForIndex:(int)index_ {
    canIndex = index_;
    disableDismiss = TRUE;
}
- (void)dismissAlert {
    [self dismissWithClickedButtonIndex:[self cancelButtonIndex] animated:YES];
}
- (void)vibrateAlert:(float)seconds {
    canVirate = TRUE;
    [self moveLeft];
    [self performSelector:@selector(stopVibration) withObject:nil afterDelay:seconds];
}
- (void)dismissWithClickedButtonIndex:(NSInteger)buttonIndex animated:(BOOL)animated {
    if (disableDismiss == TRUE && canIndex == buttonIndex) {
    } else {
        [super dismissWithClickedButtonIndex:buttonIndex animated:animated];
    }
}
- (void)hideAfter:(float)seconds {
    [self performSelector:@selector(dismissAlert) withObject:nil afterDelay:seconds];
}
- (void)moveRight {
    if (canVirate) {
        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.05];
        self.transform = CGAffineTransformMakeTranslation(-10.0, 0.0);
        [UIView commitAnimations];
        [self performSelector:@selector(moveLeft) withObject:nil afterDelay:0.05];
    }
}
- (void)moveLeft {
    if (canVirate) {
        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.05];
        self.transform = CGAffineTransformMakeTranslation(10.0, 0.0);
        [UIView commitAnimations];
        [self performSelector:@selector(moveRight) withObject:nil afterDelay:0.05];
    }
}
- (void)stopVibration {
    canVirate = FALSE;
    self.transform = CGAffineTransformMakeTranslation(0.0, 0.0);
}
@end
OK, now I found it. In the layoutSubviews method, replacing the [sub removeFromSuperview]; line with ((UIImageView*)sub).image = nil; fixed the problem for me. I'm still not sure about the exact reason, though.
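For clarity, here is that change in context (the loop is otherwise identical to the layoutSubviews shown above):
- (void)layoutSubviews
{
    for (UIView *sub in [self subviews])
    {
        if ([sub class] == [UIImageView class] && sub.tag == 0)
        {
            // Clearing the image instead of removing the view keeps the
            // alert's view hierarchy intact, which avoids the crash on rotation.
            ((UIImageView *)sub).image = nil;
            break;
        }
    }
}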
