Objective-C: draw an image - iOS

I am trying to draw an image on a page on touch.
I have this method here; the commented-out code will draw a rectangle and that works, but when I try to draw an image, nothing is drawn at all:
- (void)draw
{
//CGContextRef context = UIGraphicsGetCurrentContext();
// set the properties
//CGContextSetAlpha(context, self.lineAlpha);
// draw the rectangle
//CGRect rectToFill = CGRectMake(self.firstPoint.x, self.firstPoint.y, self.lastPoint.x - self.firstPoint.x, self.lastPoint.y - self.firstPoint.y);
//CGContextSetFillColorWithColor(context, self.lineColor.CGColor);
//CGContextFillRect(UIGraphicsGetCurrentContext(), rectToFill);
UIImage *_originalImage = [UIImage imageNamed:@"approved"]; // your image
UIGraphicsBeginImageContext(_originalImage.size);
CGContextRef _context = UIGraphicsGetCurrentContext(); // not needed for the image itself, but grab it if you later want to draw anything else into this context
[_originalImage drawInRect:CGRectMake(0.f, 0.f, _originalImage.size.width, _originalImage.size.height)];
UIImage *_newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Here is my entire code:
#pragma mark - LazyPDFDrawingApproved
@interface LazyPDFDrawingApproved ()
@property (nonatomic, assign) CGPoint firstPoint;
@property (nonatomic, assign) CGPoint lastPoint;
@end
#pragma mark -
@implementation LazyPDFDrawingApproved
@synthesize lineColor = _lineColor;
@synthesize lineAlpha = _lineAlpha;
@synthesize lineWidth = _lineWidth;
- (void)setInitialPoint:(CGPoint)firstPoint
{
self.firstPoint = firstPoint;
}
- (void)moveFromPoint:(CGPoint)startPoint toPoint:(CGPoint)endPoint
{
self.lastPoint = endPoint;
}
- (void)draw
{
//CGContextRef context = UIGraphicsGetCurrentContext();
// set the properties
//CGContextSetAlpha(context, self.lineAlpha);
// draw the rectangle
//CGRect rectToFill = CGRectMake(self.firstPoint.x, self.firstPoint.y, self.lastPoint.x - self.firstPoint.x, self.lastPoint.y - self.firstPoint.y);
//CGContextSetFillColorWithColor(context, self.lineColor.CGColor);
//CGContextFillRect(UIGraphicsGetCurrentContext(), rectToFill);
UIImage *_originalImage = [UIImage imageNamed:@"approved"]; // your image
UIGraphicsBeginImageContext(_originalImage.size);
CGContextRef _context = UIGraphicsGetCurrentContext(); // not needed for the image itself, but grab it if you later want to draw anything else into this context
[_originalImage drawInRect:CGRectMake(0.f, 0.f, _originalImage.size.width, _originalImage.size.height)];
UIImage *_newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
- (void)dealloc
{
self.lineColor = nil;
#if !LazyPDF_HAS_ARC
[super dealloc];
#endif
}
@end
What am I doing wrong?
UPDATE
More code; these methods call draw:
#pragma mark - Drawing
- (void)drawRect:(CGRect)rect
{
#if PARTIAL_REDRAW
// TODO: draw only the updated part of the image
[self drawPath];
#else
[self.image drawInRect:self.bounds];
[self.currentTool draw];
#endif
}
- (void)updateCacheImage:(BOOL)redraw
{
// init a context
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
if (redraw) {
// erase the previous image
self.image = nil;
// load previous image (if returning to screen)
[[self.prev_image copy] drawInRect:self.bounds];
// I need to redraw all the lines
for (id<LazyPDFDrawingTool> tool in self.pathArray) {
[tool draw];
}
} else {
// set the draw point
[self.image drawAtPoint:CGPointZero];
[self.currentTool draw];
}
// store the image
self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}

In your draw method, you don't need to begin a new image context.
You already begin an image context before invoking the method, so you can draw directly into that.
For example:
-(void) draw {
CGContextRef context = UIGraphicsGetCurrentContext(); // not needed for the image itself, but you'll want it for any other drawing
UIImage *_originalImage = [UIImage imageNamed:@"approved"]; // your image
[_originalImage drawInRect:CGRectMake(0.f, 0.f, _originalImage.size.width, _originalImage.size.height)];
}
The only reason a new image context would make sense here is if you were applying transforms to the image itself, and therefore required a context that was the same size as the image.
The reason your current code doesn't work is that you're creating a new image context, drawing into it, creating a UIImage from the result, but then you're not doing anything with that image. Therefore it'll never reach the screen.
If you wanted to do it that way, then you would need to call drawInRect: again on your outputted image (_newImage in your case) in order to draw it into the previous context.
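If you did want to keep the separate image context, a sketch of that variant might look like this (I'm assuming the stamp should land at firstPoint, as in your rectangle code, so treat the target rect as a placeholder):

```objectivec
- (void)draw
{
    // Render the image into its own off-screen context first...
    UIImage *originalImage = [UIImage imageNamed:@"approved"];
    UIGraphicsBeginImageContext(originalImage.size);
    [originalImage drawInRect:CGRectMake(0.f, 0.f, originalImage.size.width, originalImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // ...then draw the result back into the context that was current before
    // this method was called; without this it never reaches the screen.
    [newImage drawInRect:CGRectMake(self.firstPoint.x, self.firstPoint.y,
                                    newImage.size.width, newImage.size.height)];
}
```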

Related

How to programmatically take a screenshot in iOS?

How can I programmatically take a screenshot of the entire screen (iPhone) and save it to the photo library in iOS? There is nothing on the screen other than some labels and a button.
The button is named "ScreenshotButton" that needs to trigger this.
Do I still need to import QuartzCore?
Where exactly would I need to place the Screenshot Function in the ViewController?
You need CoreGraphics. Here's the code I use within the IBAction of a button:
CGRect rect = [[UIScreen mainScreen] bounds];
CGSize imageSize = rect.size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextConcatCTM(ctx, [self.view.layer affineTransform]);
if ([self.view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) { // iOS 7+
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
} else { // iOS 6
[self.view.layer renderInContext:ctx];
}
screengrab = UIGraphicsGetImageFromCurrentImageContext();
CGContextRestoreGState(ctx);
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screengrab, nil, nil, nil);
I define static UIImage *screengrab; at the top of my code, after @implementation.
You should use Analyze to check for leaks -- I don't have any myself in this code, but CG code always seems to create some.
Below is the class used to take a snapshot of a view:
#import "ScreenSnapshot.h"
#import <QuartzCore/QuartzCore.h>
@implementation ScreenSnapshot
static NSUInteger imgNumber;
+ (void)initialize { imgNumber = 0; }
+ (void)saveScreenSnapshot:(UIView *)view {
UIImage * img = [self screenSnapshotOf:view];
NSData * data = UIImagePNGRepresentation(img);
imgNumber += 1;
NSString * fullPath = [NSString stringWithFormat:@"%@/Documents/img_%lu.png", NSHomeDirectory(), (unsigned long)imgNumber];
[data writeToFile:fullPath
atomically:YES];
}
+ (UIImage *)screenSnapshotOf:(UIView *)view {
UIGraphicsBeginImageContext(view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
@end
Copy this code as-is into a new class called 'ScreenSnapshot'.
In your "ScreenshotButton" action, write:
[ScreenSnapshot saveScreenSnapshot:self.view];
...after having imported "ScreenSnapshot.h".
You can replace the location my code uses to store the images, and you'll be able to programmatically take a screenshot in iOS.

How to cut a UIImage into pieces (like pizza) and save each one in an array?

Saving them in an array is no problem, but I have no idea how to cut the image. I've found how to cut it into rectangles, but I couldn't find how to cut it like a pizza.
@implementation UIImage (Crop)
- (UIImage *)crop:(CGRect)rect {
rect = CGRectMake(rect.origin.x*self.scale,
rect.origin.y*self.scale,
rect.size.width*self.scale,
rect.size.height*self.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
UIImage *result = [UIImage imageWithCGImage:imageRef
scale:self.scale
orientation:self.imageOrientation];
CGImageRelease(imageRef);
return result;
}
@end
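For comparison, a quick usage sketch of that rectangular crop category (the rect is in points, relative to the image; the asset name is hypothetical):

```objectivec
UIImage *photo = [UIImage imageNamed:@"photo"]; // hypothetical asset name
UIImage *topLeftQuarter = [photo crop:CGRectMake(0, 0,
                                                 photo.size.width / 2.f,
                                                 photo.size.height / 2.f)];
```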
In the pictures I tried to show what I would like to do.
Before: http://tinypic.com/r/287g1vq/8
After: http://i58.tinypic.com/1zwnn93.jpg
So the core of it is that you'd like to cut an arc from your image. I won't explain the code here, as I wrote comments directly in it.
- (NSArray *)pizzaSlicesFromImage:(UIImage *)image withNumberOfSlices:(NSUInteger)numberOfSlices {
if (!image) {
NSLog(@"You need to pass an image to slice it! A nil image argument occurred."); // consider a logging framework like CocoaLumberjack instead of NSLog
return nil;
} else if (numberOfSlices == 0) { // 0 slices? then you don't want the image at all
return nil;
} else if (numberOfSlices == 1) { // 1 slice? then it's whole image, just wrapped in array
return @[image];
}
CGFloat fullCircle = 2 * M_PI;
CGFloat singleSliceAngle = fullCircle / numberOfSlices;
CGFloat previousSliceStartAngle = 0;
NSMutableArray *mSlicesOfImage = [NSMutableArray new];
for (NSUInteger i = 0; i < numberOfSlices; i++) {
UIImage *sliceImage = [self imageFromAngle:previousSliceStartAngle toAngle:(previousSliceStartAngle + singleSliceAngle) fromImage:image];
if (sliceImage) {
[mSlicesOfImage addObject:sliceImage];
}
previousSliceStartAngle += singleSliceAngle;
}
// return non-mutable array
return mSlicesOfImage.copy;
}
- (UIImage *)imageFromAngle:(CGFloat)fromAngle toAngle:(CGFloat)toAngle fromImage:(UIImage *)image {
// firstly let's get proper size for the canvas
CGRect imageRect = CGRectMake(0.f, 0.f, image.size.width, image.size.height);
CGPoint centerPoint = CGPointMake(CGRectGetMidX(imageRect), CGRectGetMidY(imageRect));
// start the drawing
UIGraphicsBeginImageContext(imageRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// we need to perform this to fix upside-down rotation
CGContextTranslateCTM(context, 0, imageRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// now slice an arc from the circle
CGContextBeginPath(context);
CGContextMoveToPoint(context, centerPoint.x, centerPoint.y); // move to the center of drawing
CGContextAddArc(context, centerPoint.x, centerPoint.y, MAX(CGRectGetWidth(imageRect), CGRectGetHeight(imageRect)) / 2.f, fromAngle, toAngle, 0); // draw the arc
// ! if you want NOT to cut outer area to form the circle, just increase 4th value (radius) to cover "corners" of the rect drawn on the circle. I did it this way, because it looks like a pizza, like you wanted.
CGContextClosePath(context);
CGContextClip(context);
// get the image, purge memory
CGContextDrawImage(context, imageRect, image.CGImage);
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// this is one slice
return resultImage;
}
This isn't a full solution: the only thing left is to crop each slice down so the resulting images are smaller, but I lack the time to finish that part right now. I hope it helps :) Hopefully I'll edit this answer later to add the missing cropping step.
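For reference, a minimal usage sketch of the slicing method above (the asset name is hypothetical):

```objectivec
UIImage *pizza = [UIImage imageNamed:@"pizza"]; // hypothetical asset name
NSArray *slices = [self pizzaSlicesFromImage:pizza withNumberOfSlices:8];
NSLog(@"Got %lu slices", (unsigned long)slices.count);
```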
BTW: I've created a module based on above implementation. You're welcome to check it out: https://github.com/natalia-osa/NORosettaView.

Taking a screenshot of a UIView in iOS

I want to take a screenshot of a UIView (the view would contain a signature) and save it to a local file in the application files, so that the image can be called up at a later point to be displayed in something like a UIImageView. Below is the code behind the signature UIView.
#import "NISignatureViewQuartz.h"
#import <QuartzCore/QuartzCore.h>
@implementation NISignatureViewQuartz
UIBezierPath *path;
- (void)commonInit
{
path = [UIBezierPath bezierPath];
// Capture touches
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan:)];
pan.maximumNumberOfTouches = pan.minimumNumberOfTouches = 1;
[self addGestureRecognizer:pan];
// Erase with long press
[self addGestureRecognizer:[[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(erase)]];
}
- (id)initWithCoder:(NSCoder *)aDecoder
{
if (self = [super initWithCoder:aDecoder]) [self commonInit];
return self;
}
- (id)initWithFrame:(CGRect)frame
{
if (self = [super initWithFrame:frame]) [self commonInit];
return self;
}
- (void)erase
{
path = [UIBezierPath bezierPath];
[self setNeedsDisplay];
}
- (void)pan:(UIPanGestureRecognizer *)pan {
CGPoint currentPoint = [pan locationInView:self];
if (pan.state == UIGestureRecognizerStateBegan) {
[path moveToPoint:currentPoint];
} else if (pan.state == UIGestureRecognizerStateChanged)
[path addLineToPoint:currentPoint];
[self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
[[UIColor blackColor] setStroke];
[path stroke];
}
@end
How would I go about doing this?
You want to render the view's layer into a graphics context. It's very straightforward. In your NISignatureViewQuartz class you can add this method:
- (UIImage *)snapshot {
UIGraphicsBeginImageContext(self.frame.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
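Then, wherever you need to persist the snapshot to a local file (the file name here is just an example):

```objectivec
UIImage *signatureImage = [signatureView snapshot];
NSData *pngData = UIImagePNGRepresentation(signatureImage);
NSString *documentsDir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject;
NSString *path = [documentsDir stringByAppendingPathComponent:@"signature.png"]; // example file name
[pngData writeToFile:path atomically:YES];
// later: [UIImage imageWithContentsOfFile:path] to show it in a UIImageView
```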
I wrote a useful helper class to take and manage screenshots:
@implementation MGImageHelper
/* Get the screenshot of a UIView (captures just UIKit elements, not OpenGL or AVFoundation content). */
+ (UIImage *)getScreenshotFromView:(UIView *)captureView
{
CGRect rect = [captureView bounds];
UIGraphicsBeginImageContextWithOptions(rect.size,YES,0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[captureView.layer renderInContext:context];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return capturedImage;
}
/* Get the screenshot of a determinate rect of an UIView, and scale it to the size that you want. */
+ (UIImage *)getScreenshotFromView:(UIView *)captureView withRect:(CGRect)captureRect andScaleToSize:(CGSize)newSize
{
UIImage *image = [[self class] getScreenshotFromView:captureView];
image = [[self class] cropImage:image withRect:captureRect];
image = [[self class] scaleImage:image toSize:newSize];
return image;
}
/* Get the screenshot of the screen (useful when you have UIKit elements plus OpenGL or AVFoundation content). */
+ (UIImage *)screenshotFromScreen
{
CGImageRef UIGetScreenImage(void); // note: UIGetScreenImage is a private API and is not App Store safe
CGImageRef screen = UIGetScreenImage();
UIImage* screenImage = [UIImage imageWithCGImage:screen];
CGImageRelease(screen);
return screenImage;
}
/* Get the screenshot of a determinate rect of the screen, and scale it to the size that you want. */
+ (UIImage *)getScreenshotFromScreenWithRect:(CGRect)captureRect andScaleToSize:(CGSize)newSize
{
UIImage *image = [[self class] screenshotFromScreen];
image = [[self class] cropImage:image withRect:captureRect];
image = [[self class] scaleImage:image toSize:newSize];
return image;
}
/* Methods used from methods above but also usable in singular */
+ (UIImage *)cropImage:(UIImage *)image withRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageCreateWithImageInRect returns a +1 reference
return croppedImage;
}
+ (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize
{
UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
@end
You can use the UIView method available starting from iOS 7, designed specifically for this:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates;
e.g.
UIGraphicsBeginImageContext(self.bounds.size);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Render UIView to image without subviews

I want to convert a UIView to a UIImage
- (UIImage *)renderToImage:(UIView *)view {
if(UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(view.frame.size);
}
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Two questions:
I have subviews on my view. Is there a way to create an image of the view without any of the subviews? Ideally id like to not have to remove them just to add them back later.
Also, it isn't rendering the images properly on retina devices. I followed the advice here to use context with options but it did not help. How to capture UIView to UIImage without loss of quality on retina display
You have to hide the subviews that you don't want in the image of the view. Below is a method that renders an image of the view, handling retina devices too.
- (UIImage *)imageOfView:(UIView *)view
{
// This if-else clause checks whether the device supports retina display, so that
// we can render the image correctly on both retina and non-retina devices.
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
} else {
UIGraphicsBeginImageContext(view.bounds.size);
}
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
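To avoid hiding and restoring each subview by hand at the call site, a small wrapper can do it around the render call. A sketch (it assumes the imageOfView: method above; the wrapper name is my own):

```objectivec
- (UIImage *)imageOfView:(UIView *)view withoutSubviews:(BOOL)excludeSubviews
{
    NSMutableArray *temporarilyHidden = [NSMutableArray array];
    if (excludeSubviews) {
        for (UIView *subview in view.subviews) {
            if (!subview.hidden) {
                subview.hidden = YES;
                [temporarilyHidden addObject:subview];
            }
        }
    }
    UIImage *image = [self imageOfView:view];
    // restore visibility so the on-screen view is unchanged
    for (UIView *subview in temporarilyHidden) {
        subview.hidden = NO;
    }
    return image;
}
```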
- (CGImageRef)toImageRef
{
int width = self.frame.size.width;
int height = self.frame.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ref = CGBitmapContextCreate(NULL, width, height, 8, width*4, colorSpace, kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace); // the bitmap context retains the color space
[self drawRect:CGRectMake(0.0, 0.0, width, height) withContext:ref];
CGImageRef result = CGBitmapContextCreateImage(ref);
CGContextRelease(ref);
return result;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
// move your drawing commands from here...
[self drawRect:rect withContext:context];
}
- (void)drawRect:(CGRect)rect withContext:(CGContextRef)context
{
// ...to here
}

Printing subview image to PDF in iOS

I am attempting to take a fingerpainted signature from the iPad's touch screen, print it to PDF, and then email the resulting PDF to a preset address. I have a UIView subclass written that adds lines between the current drag event's location to the last drag event's location, as shown below. I have had no troubles implementing the emailing portion. The subclass's declaration:
#import <UIKit/UIKit.h>
@interface SignatureView : UIView {
UIImageView *drawImage;
@public
UIImage *cachedSignature;
}
@property (nonatomic, retain) UIImage* cachedSignature;
-(void) clearView;
@end
And implementation:
#import "SignatureView.h"
@implementation SignatureView
Boolean drawSignature=FALSE;
float oldTouchX=-1;
float oldTouchY=-1;
float nowTouchX=-1;
float nowTouchY=-1;
@synthesize cachedSignature;
//UIImage *cachedSignature=nil;
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
if (self) {
// Initialization code.
}
return self;
if (cachedSignature==nil){
if(UIGraphicsBeginImageContextWithOptions!=NULL){
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
}else{
UIGraphicsBeginImageContext(self.frame.size);
}
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
}
- (void)drawRect:(CGRect)rect {
//get image of current state of signature field
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
//draw cached signature onto signature field, no matter what.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, CGRectMake(0,0,302,90),cachedSignature.CGImage);
if(oldTouchX>0 && oldTouchY>0 && nowTouchX>0 && nowTouchY>0){
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetLineWidth(context, 2);
//make change to signature field
CGContextMoveToPoint(context, oldTouchX, oldTouchY);
CGContextAddLineToPoint(context, nowTouchX, nowTouchY);
CGContextStrokePath(context);
}
}
-(void) clearView{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetGrayFillColor(context, 0.75, 1.0);
CGContextFillRect(context, CGRectMake(0,0,800,600));
CGContextFlush(context);
cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
[self setNeedsDisplay];
}
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
CGPoint location=[[touches anyObject] locationInView:self];
oldTouchX=nowTouchX;
oldTouchY=nowTouchY;
nowTouchX=location.x;
nowTouchY=location.y;
[self setNeedsDisplay];
}
-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
CGPoint location=[[touches anyObject] locationInView:self];
oldTouchX=nowTouchX;
oldTouchY=nowTouchY;
nowTouchX=location.x;
nowTouchY=location.y;
[self setNeedsDisplay];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
oldTouchX=-1;
oldTouchY=-1;
nowTouchX=-1;
nowTouchY=-1;
}
- (void)dealloc {
[super dealloc];
}
@end
I am, however, having trouble printing the signature to a PDF. As I understand the above code, it should keep a copy of the signature before the last move was made as a UIImage named cachedSignature, accessible to all. However, when I try to write it to a PDF context with the following:
UIGraphicsBeginPDFContextToFile(fileName, CGRectZero, nil);
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, 612, 792), nil);
UIImage* background=[[UIImage alloc] initWithContentsOfFile:backgroundPath];
// Create the PDF context using the default page size of 612 x 792.
// Mark the beginning of a new page.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0, 792);
CGContextScaleCTM(context, 1.0, -1.0);
//CGContextDrawImage(context, CGRectMake(0,0,612,790),background.CGImage);
//draw signature images
UIImage *y=clientSignatureView.cachedSignature;
CGContextDrawImage(context, CGRectMake(450, 653, 600, 75), y.CGImage);
//CGContextDrawImage(context, CGRectMake(75, 75, 300, 300), techSignatureView.cachedSignature.CGImage);
CGContextTranslateCTM(context, 0.0, 792);
CGContextScaleCTM(context, 1.0, -1.0);
It fails. In this instance 'techSignatureView' and 'clientSignatureView' are instances of the custom UIView as defined above, connected to outlets in the same parent UIView as is running this code.
I don't know what's going wrong. I've taken out the call to print the background image, in case they were being printed 'behind' it, with no results. Using the debugger to inspect 'y' in the above code reveals that it has nil protocol entries and nil method entries; so I suspect it's not being accessed properly - beyond that, I'm clueless and don't know how to proceed.
First of all, your cachedSignature probably doesn't exist anymore when you try to draw it, because it's autoreleased. Use the property setter when you assign it, so instead of cachedSignature = foo write self.cachedSignature = foo.
Also, UIGraphicsGetImageFromCurrentImageContext only works for image contexts, you cannot use this in drawRect: with the current context. The drawRect: method takes care of drawing your view to the screen, you cannot create an image from the same context, you have to use UIGraphicsBeginImageContext to create a new context that you draw the image into. The drawRect: method is not the best place for that anyway, because performance will likely be bad if you render the image every time a touch moves. Instead, I'd suggest to render the signature to the image in your touchesEnded:withEvent: implementation.
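A sketch of that suggestion (an untested rework of touchesEnded:withEvent: that renders the layer into a fresh image context and keeps the result via the property setter):

```objectivec
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    oldTouchX = -1; oldTouchY = -1; nowTouchX = -1; nowTouchY = -1;
    // Render the finished stroke state into an off-screen image context...
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    // ...and assign through the setter so the image is retained.
    self.cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
```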
