I'm writing an application in which the user can draw a line with their finger.
This is the code:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
NSLog(#"BEGAN");//TEST OK
UITouch* tap=[touches anyObject];
start_point=[tap locationInView:self];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
NSLog(#"MOVED");//TEST OK
UITouch* tap=[touches anyObject];
current_point=[tap locationInView:self];
[self DrawLine:start_point end:current_point];
start_point=current_point;
}
-(void)DrawLine: (CGPoint)start end:(CGPoint)end
{
context= UIGraphicsGetCurrentContext();
CGColorSpaceRef space_color= CGColorSpaceCreateDeviceRGB();
CGFloat component[]={1.0,0.0,0.0,1};
CGColorRef color = CGColorCreate(space_color, component);
//draw line
CGContextSetLineWidth(context, 1);
CGContextSetStrokeColorWithColor(context, color);
CGContextMoveToPoint(context, start.x, start.y);
CGContextAddLineToPoint(context,end.x, end.y);
CGContextStrokePath(context);
}
My problem is that when I draw a line on the screen, the line is not visible.
P.S. I draw on the main view of the application.
context= UIGraphicsGetCurrentContext();
You're calling UIGraphicsGetCurrentContext() outside of the drawRect: method, so it returns nil. The subsequent drawing functions therefore try to draw into a nil context, which cannot work.
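A minimal sketch of one way to restructure this (assuming start_point and current_point are instance variables of your view, as in the question, and keeping touchesBegan: as it is): store the point in touchesMoved:, ask for a redraw, and do the stroking inside drawRect:, where the context is valid.

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *tap = [touches anyObject];
    current_point = [tap locationInView:self];
    [self setNeedsDisplay]; // schedules a call to drawRect:
}

-(void)drawRect:(CGRect)rect
{
    // Valid here, because the system sets up the context before calling drawRect:
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 1);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextMoveToPoint(context, start_point.x, start_point.y);
    CGContextAddLineToPoint(context, current_point.x, current_point.y);
    CGContextStrokePath(context);
}

Note that this only strokes a single line from where the touch began to the current finger position; to keep the whole freehand stroke you have to accumulate it somewhere, for example with the CGLayer approach described below.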
As @Jorg mentioned, there is no current context outside of the drawRect: method, so UIGraphicsGetCurrentContext() will most likely return nil.
You can use a CGLayerRef to draw offscreen, and then, in your drawRect: method, draw the contents of the layer into your view.
First, you'll need to declare the layer as a member of your class, so in your @interface declare CGLayerRef _offscreenLayer;. You could also create a property for it; however, I'll use the ivar directly in this example.
Somewhere in your init method:
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, self.frame.size.width, self.frame.size.height, 8, 4 * self.frame.size.width, colorspace, (uint32_t)kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorspace);
_offscreenLayer = CGLayerCreateWithContext(context, self.frame.size, NULL);
CGContextRelease(context); // the bitmap context is only needed as a template for the layer
Now, let's handle the drawing:
-(void)DrawLine: (CGPoint)start end:(CGPoint)end
{
CGContextRef context = CGLayerGetContext(_offscreenLayer);
// ** draw your line using context defined above
[self setNeedsDisplay]; // or even better, use setNeedsDisplayInRect:, and compute the dirty rect using start and end points
}
-(void)drawRect:(CGRect)rect {
CGContextRef currentContext = UIGraphicsGetCurrentContext(); // this will work now, since we're in drawRect:
CGRect drawRect = CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height);
CGContextDrawLayerInRect(currentContext, drawRect, _offscreenLayer);
}
Note that you may need to make small changes for the code to work, but it should give you an idea of how to implement it.
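For completeness, one way the DrawLine: body could look against the offscreen layer (a sketch, reusing the red 1-point stroke from the question):

-(void)DrawLine:(CGPoint)start end:(CGPoint)end
{
    CGContextRef layerContext = CGLayerGetContext(_offscreenLayer);
    CGContextSetLineWidth(layerContext, 1);
    CGContextSetStrokeColorWithColor(layerContext, [UIColor redColor].CGColor);
    CGContextMoveToPoint(layerContext, start.x, start.y);
    CGContextAddLineToPoint(layerContext, end.x, end.y);
    CGContextStrokePath(layerContext);
    [self setNeedsDisplay]; // drawRect: then blits the layer onto the view
}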
Related
I want to change the width of the line with 3D Touch while drawing. I'm using the library ACEDrawingView, which uses this code to draw:
- (void)draw
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, path);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineWidth(context, self.lineWidth);
CGContextSetStrokeColorWithColor(context, self.lineColor.CGColor);
CGContextSetBlendMode(context, kCGBlendModeNormal);
CGContextSetAlpha(context, self.lineAlpha);
CGContextStrokePath(context);
}
I use this code to change the lineWidth based on the pressure in touchesMoved:
// save all the touches in the path
UITouch *touch = [touches anyObject];
//3d touch
CGFloat maximumPossibleForce = touch.maximumPossibleForce;
CGFloat force = touch.force;
CGFloat normalizedForce = force/maximumPossibleForce;
CGFloat lineWidth = normalizedForce * 10;
self.currentTool.lineWidth = lineWidth;
It all works well until I draw another line that crosses the previous one. Then it looks like the image above.
Any help would be greatly appreciated! Please ask for more code if needed, or take a quick look at ACEDrawingView. Thank you!
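As a side note on the normalization above (my own observation, not something from ACEDrawingView or the question): if maximumPossibleForce is 0, for example on hardware without pressure sensitivity, force / maximumPossibleForce is NaN, and a force of 0 gives a zero-width stroke. A guarded sketch of the same idea:

UITouch *touch = [touches anyObject];
CGFloat lineWidth = self.currentTool.lineWidth; // keep the current width as a fallback
if (touch.maximumPossibleForce > 0) {
    CGFloat normalizedForce = touch.force / touch.maximumPossibleForce; // 0.0 ... 1.0
    lineWidth = MAX(normalizedForce * 10, 1.0); // clamp so light touches still draw
}
self.currentTool.lineWidth = lineWidth;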
In my app, I draw a circle on the user's tap and drag. Where they tapped is the center of the circle, and as they drag, the radius extends out to their finger. Once you draw one circle, if you tap around a couple of times (without dragging), nothing happens, but the next time you tap and drag, circles get created (the same size as the last circle made) at each spot you tapped previously. This doesn't make sense to me, because the only things I call in touchesBegan:withEvent: are
UITouch *touch = [touches anyObject];
center = [touch locationInView:self];
It should be replacing center each time, but it draws an individual circle at each tap when you finally move your finger.
touchesMoved:
UITouch *touch = [touches anyObject];
endPoint = [touch locationInView:self];
distance = (uses center's attributes);
[self setNeedsDisplay];
touchesEnded:
[self drawBitmap];
drawBitmap:
if (!CGPointEqualToPoint(center, CGPointZero) && !CGPointEqualToPoint(endPoint, CGPointZero)) {
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
if (!incrementalImage) { // first draw;
UIBezierPath *rectpath = [UIBezierPath bezierPathWithRect:self.bounds]; // enclosing bitmap by a rectangle defined by another UIBezierPath object
[[UIColor clearColor] setFill];
[rectpath fill]; // fill it
}
[incrementalImage drawAtPoint:CGPointZero];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, sW);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), r, g, b, o);
CGContextSetLineCap(context, kCGLineCapRound);
CGRect rectangle = CGRectMake(center.x - distance, center.y - distance, distance * 2, distance * 2);
CGContextAddEllipseInRect(context, rectangle);
CGContextStrokePath(context);
incrementalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
drawRect:
[incrementalImage drawInRect:rect];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, sW);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), r, g, b, o);
CGContextSetLineCap(context, kCGLineCapRound);
CGRect rectangle = CGRectMake(center.x - distance, center.y - distance, distance * 2, distance * 2);
CGContextAddEllipseInRect(context, rectangle);
CGContextStrokePath(context);
Why do circles get made after the taps, but only appear when you drag? Circles should only be drawn when you drag... right? I'm almost 99% sure the problem is drawBitmap, but I'm not sure what I should do to check whether it was only a tap or a drag. Also, if the problem is drawBitmap, why don't I see circles in touchesEnded:? It's only on drag events that I see them. EDIT: To be clear, I don't want circles to be drawn at all on single taps, only on drags.
You don't call drawing routines on your own. They have to be called when the system is ready for you to do your drawing; it signals that it's ready by calling your drawRect:. You can run whatever drawing routines you like from there. You indicate to the system that you have something that needs to be drawn as soon as it's ready with setNeedsDisplay. That's why touchesMoved: is causing circles to be drawn. Add a call to setNeedsDisplay in touchesEnded: and you should see the results you're looking for.
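For example, something along these lines (a sketch based on the code in the question):

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self drawBitmap];      // cache the finished circle into incrementalImage, as before
    [self setNeedsDisplay]; // now ask the system to call drawRect: with the final state
}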
Well, you only have [self setNeedsDisplay]; in touchesMoved:, which means that if you only tap, the view will not redraw. You do not redraw on touchesEnded:. So that explains part of the behavior.
As for the rest of it, I would put breakpoints or NSLogs to debug what's really happening.
I'm having some issues trying to draw lines smoothly in touchesMoved: when my UIView is zoomed in. The problem is that lines start to look very pixelated beyond a certain zoom level.
My view hierarchy is really simple as of now, since this is more of a proof of concept: I have a UIScrollView with my UIView as a child, and the UIView is also set as the zoom view.
My drawRect: implementation looks like this:
CGContextRef context = UIGraphicsGetCurrentContext();
[self.layer renderInContext:context];
CGPoint mid1 = midPoint(_previousPoint1, _previousPoint2);
CGPoint mid2 = midPoint(_currentPoint, _previousPoint1);
CGContextSetBlendMode(context, kCGBlendModeNormal);
CGContextMoveToPoint(context, mid1.x, mid1.y);
CGContextAddQuadCurveToPoint(context, _previousPoint1.x, _previousPoint1.y, mid2.x, mid2.y);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineJoin(context, kCGLineJoinRound);
CGContextSetLineWidth(context, self.lineWidth);
CGContextSetStrokeColorWithColor(context, self.lineColor.CGColor);
CGContextStrokePath(context);
Dumping the contents of the layer into the context is an important part since I'm only refreshing the bounding box enclosing the path formed by the last three touched points (_previousPoint1, _previousPoint2 and _currentPoint).
That operation is actually done in the touchesMoved: method, and the function that handles it is the following:
- (void)calculateMinImageArea:(CGPoint)pp1 :(CGPoint)pp2 :(CGPoint)cp
{
// calculate mid point
CGPoint mid1 = midPoint(pp1, pp2);
CGPoint mid2 = midPoint(cp, pp1);
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
CGPathAddQuadCurveToPoint(path, NULL, pp1.x, pp1.y, mid2.x, mid2.y);
CGRect bounds = CGPathGetBoundingBox(path);
CGPathRelease(path);
CGRect drawBox = [self extendedRect:bounds];
UIGraphicsBeginImageContext(drawBox.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndImageContext();
[self setNeedsDisplayInRect:drawBox];
}
I was able to find a workaround using CAShapeLayers to draw the paths. However, the low performance wasn't acceptable.
I would really appreciate it if someone could point me to the right approach.
Thanks.
[self setContentScaleFactor:scale];
Use this in the scrollViewDidEndZooming:withView:atScale: delegate method.
Hope this helps you.
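A sketch of what that might look like in the scroll view's delegate (the view parameter is the zoomed drawing view):

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(CGFloat)scale
{
    // Raise the view's backing-store resolution to match the zoom so strokes stay sharp.
    // (On Retina displays you may want scale * [UIScreen mainScreen].scale instead.)
    [view setContentScaleFactor:scale];
    [view setNeedsDisplay];
}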
Friends, I am working with UIBezierPaths for freehand drawing, and everything works fine. I store my paths in an array, and while rendering I loop over the whole array and render the paths. But as soon as the array count increases, I see a lag while drawing. Below is my drawRect: code; please help me find out where I am going wrong.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *mytouch=[[touches allObjects] objectAtIndex:0];
m_previousPoint2 = m_previousPoint1;
m_previousPoint1 = [mytouch previousLocationInView:self];
m_currentPoint = [mytouch locationInView:self];
CGPoint mid1 = midPoint(m_previousPoint1, m_previousPoint2);
CGPoint mid2 = midPoint(m_currentPoint, m_previousPoint1);
testpath = CGPathCreateMutable();
CGPathMoveToPoint(testpath, NULL, mid1.x, mid1.y);
CGPathAddQuadCurveToPoint(testpath, NULL, m_previousPoint1.x, m_previousPoint1.y, mid2.x, mid2.y);
CGRect bounds = CGPathGetBoundingBox(testpath);
CGPathRelease(testpath);
CGRect drawBox = bounds;
//Pad our values so the bounding box respects our line width
drawBox.origin.x -= self.lineWidth * 2;
drawBox.origin.y -= self.lineWidth * 2;
drawBox.size.width += self.lineWidth * 4;
drawBox.size.height += self.lineWidth * 4;
CGContextRef context = UIGraphicsGetCurrentContext();
context = CGLayerGetContext(myLayerRef);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
[self setNeedsDisplayInRect:drawBox];
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
if(myLayerRef == nil)
{
myLayerRef = CGLayerCreateWithContext(context, self.bounds.size, NULL);
}
[self.layer renderInContext:context];
CGPoint mid1 = midPoint(m_previousPoint1, m_previousPoint2);
CGPoint mid2 = midPoint(m_currentPoint, m_previousPoint1);
CGContextMoveToPoint(context, mid1.x, mid1.y);
CGContextAddQuadCurveToPoint(context, m_previousPoint1.x, m_previousPoint1.y, mid2.x, mid2.y);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineWidth(context, self.lineWidth);
CGContextSetStrokeColorWithColor(context, self.lineColor.CGColor);
CGContextSetFlatness(context, 0.1);
CGContextSetAllowsAntialiasing(context, true);
CGContextStrokePath(context);
[super drawRect:rect];
}
Code updated according to the discussion below with @borrrden.
Regards
Ranjit
There is an excellent WWDC 2012 session about this exact problem. I think it is called "iOS Performance: Graphics" or something similar. You should definitely watch it.
The summary is: DON'T store it in a path. There is no need to keep this path information around. Store a separate context (a CGLayer is best) and draw into that (just like you add points to the path, add points to the context instead). Then, in your drawRect:, simply draw that layer into the current graphics context. Also be sure to invalidate only the rectangle that changed (setNeedsDisplayInRect:), or you will most likely run into problems on a high-resolution device.
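A rough sketch of that approach (assuming a CGLayerRef ivar named myLayerRef, the same midPoint helper as in the question, and that the layer is created lazily in drawRect: as you already do):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (myLayerRef == NULL) return; // the layer is created on the first drawRect:

    UITouch *touch = [touches anyObject];
    m_previousPoint2 = m_previousPoint1;
    m_previousPoint1 = [touch previousLocationInView:self];
    m_currentPoint = [touch locationInView:self];

    // Stroke the new segment into the offscreen layer instead of storing a path.
    CGContextRef layerContext = CGLayerGetContext(myLayerRef);
    CGPoint mid1 = midPoint(m_previousPoint1, m_previousPoint2);
    CGPoint mid2 = midPoint(m_currentPoint, m_previousPoint1);
    CGContextMoveToPoint(layerContext, mid1.x, mid1.y);
    CGContextAddQuadCurveToPoint(layerContext, m_previousPoint1.x, m_previousPoint1.y, mid2.x, mid2.y);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextSetLineWidth(layerContext, self.lineWidth);
    CGContextSetStrokeColorWithColor(layerContext, self.lineColor.CGColor);
    CGContextStrokePath(layerContext);

    // Invalidate only the area around the new segment, padded by the line width.
    CGRect dirty = CGRectUnion(CGRectMake(mid1.x, mid1.y, 0, 0), CGRectMake(mid2.x, mid2.y, 0, 0));
    dirty = CGRectUnion(dirty, CGRectMake(m_previousPoint1.x, m_previousPoint1.y, 0, 0));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -self.lineWidth * 2, -self.lineWidth * 2)];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (myLayerRef == NULL) {
        myLayerRef = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    }
    // Just blit the accumulated strokes; no array of paths to replay.
    CGContextDrawLayerInRect(context, self.bounds, myLayerRef);
}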
I am attempting to take a finger-painted signature from the iPad's touch screen, print it to a PDF, and then email the resulting PDF to a preset address. I have written a UIView subclass that adds lines between the current drag event's location and the last drag event's location, as shown below. I have had no trouble implementing the emailing portion. The subclass's declaration:
#import <UIKit/UIKit.h>
@interface SignatureView : UIView {
UIImageView *drawImage;
@public
UIImage *cachedSignature;
}
@property (nonatomic, retain) UIImage* cachedSignature;
-(void) clearView;
@end
And implementation:
#import "SignatureView.h"
@implementation SignatureView
Boolean drawSignature=FALSE;
float oldTouchX=-1;
float oldTouchY=-1;
float nowTouchX=-1;
float nowTouchY=-1;
@synthesize cachedSignature;
//UIImage *cachedSignature=nil;
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
if (self) {
// Initialization code.
}
return self;
if (cachedSignature==nil){
if(UIGraphicsBeginImageContextWithOptions!=NULL){
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
}else{
UIGraphicsBeginImageContext(self.frame.size);
}
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
}
- (void)drawRect:(CGRect)rect {
//get image of current state of signature field
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
//draw cached signature onto signature field, no matter what.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, CGRectMake(0,0,302,90),cachedSignature.CGImage);
if(oldTouchX>0 && oldTouchY>0 && nowTouchX>0 && nowTouchY>0){
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetLineWidth(context, 2);
//make change to signature field
CGContextMoveToPoint(context, oldTouchX, oldTouchY);
CGContextAddLineToPoint(context, nowTouchX, nowTouchY);
CGContextStrokePath(context);
}
}
-(void) clearView{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetGrayFillColor(context, 0.75, 1.0);
CGContextFillRect(context, CGRectMake(0,0,800,600));
CGContextFlush(context);
cachedSignature = UIGraphicsGetImageFromCurrentImageContext();
[self setNeedsDisplay];
}
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
CGPoint location=[[touches anyObject] locationInView:self];
oldTouchX=nowTouchX;
oldTouchY=nowTouchY;
nowTouchX=location.x;
nowTouchY=location.y;
[self setNeedsDisplay];
}
-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
CGPoint location=[[touches anyObject] locationInView:self];
oldTouchX=nowTouchX;
oldTouchY=nowTouchY;
nowTouchX=location.x;
nowTouchY=location.y;
[self setNeedsDisplay];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
oldTouchX=-1;
oldTouchY=-1;
nowTouchX=-1;
nowTouchY=-1;
}
- (void)dealloc {
[super dealloc];
}
@end
I am, however, having trouble printing the signature to a PDF. As I understand the above code, it should keep a copy of the signature as it was before the last move, as a UIImage named cachedSignature, accessible to all. However, when I try to write it to a PDF context with the following:
UIGraphicsBeginPDFContextToFile(fileName, CGRectZero, nil);
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, 612, 792), nil);
UIImage* background=[[UIImage alloc] initWithContentsOfFile:backgroundPath];
// Create the PDF context using the default page size of 612 x 792.
// Mark the beginning of a new page.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0, 792);
CGContextScaleCTM(context, 1.0, -1.0);
//CGContextDrawImage(context, CGRectMake(0,0,612,790),background.CGImage);
//draw signature images
UIImage *y=clientSignatureView.cachedSignature;
CGContextDrawImage(context, CGRectMake(450, 653, 600, 75), y.CGImage);
//CGContextDrawImage(context, CGRectMake(75, 75, 300, 300), techSignatureView.cachedSignature.CGImage);
CGContextTranslateCTM(context, 0.0, 792);
CGContextScaleCTM(context, 1.0, -1.0);
It fails. In this instance, 'techSignatureView' and 'clientSignatureView' are instances of the custom UIView defined above, connected to outlets in the same parent UIView that is running this code.
I don't know what's going wrong. I've taken out the call to print the background image, in case the signatures were being printed 'behind' it, with no results. Using the debugger to inspect 'y' in the above code reveals that it has nil protocol entries and nil method entries, so I suspect it's not being accessed properly; beyond that, I'm clueless and don't know how to proceed.
First of all, your cachedSignature probably doesn't exist anymore when you try to draw it, because it's autoreleased. Use the property setter when you assign it, so instead of cachedSignature = foo write self.cachedSignature = foo.
Also, UIGraphicsGetImageFromCurrentImageContext only works for image contexts; you cannot use it in drawRect: with the current context. The drawRect: method takes care of drawing your view to the screen; you cannot create an image from that same context. You have to use UIGraphicsBeginImageContext to create a new context that you draw the image into. The drawRect: method is not the best place for that anyway, because performance will likely be bad if you render the image every time a touch moves. Instead, I'd suggest rendering the signature to the image in your touchesEnded:withEvent: implementation.
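A sketch of what that could look like with the ivars from the question (rendering the whole view into a fresh image context once the stroke ends):

-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
    // Cache the current appearance of the signature view as an image.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    self.cachedSignature = UIGraphicsGetImageFromCurrentImageContext(); // property setter, so it is retained
    UIGraphicsEndImageContext();

    oldTouchX = -1;
    oldTouchY = -1;
    nowTouchX = -1;
    nowTouchY = -1;
}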